\section{Derivation of the heat equation}\label{ap2:derivation} To find the heat equation Eq.~(\ref{eq:heatequation}), we follow Ref.~\cite{groot1962non} and define the local specific internal energy $u\equiv U/\varrho V_{\rm el}$ as the energy associated with thermal agitation and all short-ranged (nonelectrostatic) particle interactions. We can now write the internal energy balance as \begin{eqnarray}\label{eq:energy_balance} \varrho \partial_{t} u&=&-\partial_{z} J_{q}+IE, \end{eqnarray} or, equivalently, the internal enthalpy balance, \begin{equation}\label{eq:enthalpy_balance2} \varrho\partial_{t} h=-\partial_{z} J_{q}+\partial_{t} p+IE, \end{equation} where the local specific enthalpy $h$ is defined via $\varrho h=\varrho u+p$. In these equations, the heat flow $J_{q}=-\kappa \partial_{z} T+\sum_{i}\bar{H}_{i} J_{i}$ contains both Fourier heat conduction and the partial molecular enthalpies $\bar{H}_{i}$ carried by the particle currents. Note that we did not include a Dufour term $\sim\nabla \mu$ in $J_{q}$, which is consistent with our disregarding Soret terms $\sim\nabla T$, the reciprocal phenomenon, in the Nernst-Planck equation \cite{curtiss1999multicomponent}. \subsection{Partial molecular enthalpy} The partial molecular enthalpy $\bar{H}_{i}$ of component $i\in\{+,-,s\}$ is defined for homogeneous systems in terms of the enthalpy $H(S,p,N_{\pm},N_{s})$ as \begin{align}\label{eq:partialmolecularenthalpydefinition} \bar{H}_{i}&\equiv \left(\frac{\partial H}{\partial N_{i}}\right)_{T,p,N_{i'\neq i}}, \end{align} which is related to the partial {\it molar} enthalpy (common in the chemistry literature) by division by Avogadro's number. Above, we defined the internal energy as the kinetic energy and microscopic interaction energy of the constituent particles {\it without} electric contributions, which is in line with Refs.~\cite{groot1962non,haase1968thermodynamics,kontturi2008ionic}. With this choice, Refs.~\cite{de1953thermodynamics, groot1962non, kontturi2008ionic} argue that it is the chemical potential, and not the electrochemical potential, that enters the Gibbs relation. Therefore, the total differential of the enthalpy reads \begin{align}\label{eq:differentialenthalpy} dH&=TdS+Vdp+\sum_{i}\mu_{i}dN_{i}. \end{align} Using the total differential of the entropy (employing a Maxwell identity and identifying the heat capacity $C_{p}$), \begin{align}\label{eq:differentialentropy} dS&=\frac{C_{p}}{T}dT-\left(\frac{\partial V}{\partial T}\right)_{p,N_{i}}dp-\sum_{i}\left(\frac{\partial \mu_{i}}{\partial T}\right)_{p,N_{i}}dN_{i}, \end{align} we eliminate $dS$ in Eq.~(\ref{eq:differentialenthalpy}) in favor of $dT$ to find \begin{align}\label{eq:differentialenthalpyT} dH=&C_{p} dT+\left[V-T\left(\frac{\partial V}{\partial T}\right)_{p,N_{i}} \right]dp\nonumber\\ &+\sum_{i}\left[\mu_{i}-T\left(\frac{\partial \mu_{i}}{\partial T}\right)_{p,N_{i}}\right]dN_{i}. \end{align} The partial molecular enthalpy Eq.~(\ref{eq:partialmolecularenthalpydefinition}) then reduces to the so-called partial Gibbs-Helmholtz equation \begin{align}\label{eq:derivationgibsshelmholtz} \bar{H}_{i}&=\mu_{i}-T\left(\frac{\partial \mu_{i}}{\partial T}\right)_{p,N_{i'}}=-T^{2}\left(\frac{\partial \mu_{i}/T}{\partial T}\right)_{p,N_{i'}}, \end{align} with $i'\in\{+,-,s\}$.
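As a quick consistency check of Eq.~(\ref{eq:derivationgibsshelmholtz}), consider the textbook special case (introduced here purely for illustration) of a single-component ideal gas with $\mu=k_{ B}T\ln(\rho \Lambda^{3})$ and $\rho=p/k_{ B}T$. Since $\Lambda\sim T^{-1/2}$,
\begin{align*}
\left(\frac{\partial \mu/T}{\partial T}\right)_{p}=k_{ B}\frac{\partial}{\partial T}\ln\left(\frac{p\Lambda^{3}}{k_{ B}T}\right)=-\frac{5k_{ B}}{2T},
\end{align*}
so that Eq.~(\ref{eq:derivationgibsshelmholtz}) gives $\bar{H}=-T^{2}\left(\partial(\mu/T)/\partial T\right)_{p}=5k_{ B}T/2$, the ideal-gas enthalpy per particle that reappears in the discussion below.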
Since our system of interest is {\it not} homogeneous, one should instead consider a subspace region of volume $\mathcal{V}$, carrying entropy $\mathcal{S}$ and occupied by $\mathcal{N}_{i}$ particles of each of the $i$ species. This region should be small enough so that $\rho_{i}=\mathcal{N}_{i}/\mathcal{V}$ is the locally homogeneous particle density. One could then repeat the above exercise for the partial molecular enthalpy $\bar{H}_{i}\equiv \left(\partial \mathcal{H}/\partial \mathcal{N}_{i}\right)_{T,p,\mathcal{N}_{i'\neq i}}$, defined in terms of the enthalpy $\mathcal{H}(\mathcal{S},p, \mathcal{N}_{\pm}, \mathcal{N}_{s})$ within this space region, to find the same expression Eq.~(\ref{eq:derivationgibsshelmholtz}) in terms of the $z$-dependent chemical potential. Inserting the ionic chemical potential (the first part of Eq.~(\ref{eq:electrochemicalpotential})) into Eq.~(\ref{eq:derivationgibsshelmholtz}) then gives \begin{align}\label{eq:partialmolarenthalpycontinued} \bar{H}_{\pm}=&\frac{3}{2}k_{ B}T+\frac{k_{ B}T}{1-v(\rho_{+}+\rho_{-})}\left(\frac{\partial \ln \mathcal{V}}{\partial \ln T} \right)_{p}, \end{align} where we used that $\Lambda_{i}\sim 1/\sqrt{T}$. Interestingly, the first term in Eq.~(\ref{eq:partialmolarenthalpycontinued}) is the ideal-gas energy, and {\it not} the ideal-gas enthalpy. If all species $i$ had been treated at the ideal-gas level (with $v=0$), then one should have substituted $\rho_{i}=p/k_{ B}T$ in Eq.~(\ref{eq:electrochemicalpotential}), which would have led to the ideal-gas enthalpy $\bar{H}_{i}=5k_{ B}T/2$. However, upon adding ions (see Eq.~(\ref{eq:partialmolecularenthalpydefinition})), the volume they explore does not grow at fixed pressure as that of an ideal gas, $\mathcal{V}\sim \mathcal{N}_{\pm}$; instead, $\mathcal{V}$ stays roughly unaffected because it is primarily determined by the solvent molecules, which are markedly nonideal. In the second term of Eq.~(\ref{eq:partialmolarenthalpycontinued}) we recognize $\left(\partial \ln \mathcal{V}/\partial \ln T \right)_{p}=\alpha T$, with $\alpha\equiv -\left(\partial \ln \varrho/\partial T \right)_{p}$ the volumetric expansivity of the fluid at mass density $\varrho$. In the bulk electrolyte, the volume $\mathcal{V}$ is predominantly occupied by water molecules, so that we can identify the volumetric expansivity of the fluid with that of pure water. At $\SI{20}{\degreeCelsius}$ this amounts to $\alpha T=0.06$ \cite{kell1975density}. The second term in Eq.~(\ref{eq:partialmolarenthalpycontinued}) therefore only gives a small correction to the first term, and is often disregarded \cite{davidson1962}. In contrast to the dilute bulk, Eq.~(\ref{eq:partialmolarenthalpycontinued}) is problematic in the EDL at high electric potentials. Identifying the thermal expansivity $\alpha$ with that of pure water does not hold here because of the high local ion densities. Moreover, as an artifact of the lattice-gas free energy functional Eq.~(\ref{eq:grandpotential}), the steric interactions are athermal (they do not depend on temperature). This is also reflected by the {\it fixed} lattice spacing $v^{1/3}$, which erroneously always gives $(\partial \ln \mathcal{V}/\partial \ln T)_{p}=0$. Meanwhile, the vanishing of the term $(\partial \ln \mathcal{V}/\partial \ln T)_{p}$ at high electric potentials is accompanied by a divergence of its prefactor $k_{ B}T/[1-v(\rho_{+}+\rho_{-})]$. More explicit solvent modeling might be necessary to get a better grip on the thermal expansivity term of $\bar{H}_{i}$ in the EDL region. \subsection{Internal enthalpy balance} With the partial molecular enthalpy at hand, we now set out to derive the heat equation, which we do by finding an alternative expression for the l.h.s.
of the internal enthalpy balance Eq.~(\ref{eq:enthalpy_balance2}). Similar treatments can be found in Refs.~\cite{haase1968thermodynamics,groot1962non,bird2007transport}. We write the total mass of component $i$ as $m_{i}=N_{i}M_{i}$, with $M_{i}$ the molecular weight, such that the total mass is $m=m_{+}+m_{-}+m_{s}$ and the electrolyte mass density is $\varrho=m/V_{\rm el}$. The enthalpy $H(S,p,N_{\pm},N_{s})$ can then be written in terms of the total masses of the individual components as $H(m_{+},m_{-},m_{s})$. Euler's theorem then allows us to write the enthalpy in terms of the mass fractions $c_{\pm}=m_{\pm}/m$ as $H(m_{+},m_{-},m_{s})=m h(c_{+},c_{-})$, with $h$ the specific enthalpy, the total differential of which reads \begin{align}\label{eq:specificenthalpydifferential} \varrho \frac{dh}{dt}=&\varrho\left(\frac{\partial h}{\partial T}\right)_{p,c_{k}}\frac{dT}{dt}+\varrho\left(\frac{\partial h}{\partial p}\right)_{T,c_{k}}\frac{dp}{dt}\nonumber\\ &+\sum_{k}\varrho\left(\frac{\partial h}{\partial c_{k}}\right)_{T,p,c_{k'\neq k}}\frac{dc_{k}}{dt}, \end{align} with $k\in\{+,-\}$. For the last term, Ref.~\cite{groot1962non} (p.~458) and Ref.~\cite{bird2007transport} (p.~609) provide alternative derivations, both yielding \begin{align} \left(\frac{\partial h}{\partial c_{k}}\right)_{T,p}=\frac{ \bar{H}_{k}}{M_{k}}-\frac{\bar{H}_{s}}{M_{s}}. \end{align} The continuity equation for mass fluxes reads $\varrho(dc_{i}/dt)=-\partial_{z} j_{i}$, in terms of the mass flux $j_{i}$, which is related to the particle flux by $j_{i}=M_{i}J_{i}$. The absence of barycentric motion implies that we can replace the material derivatives with partial derivatives, and also that the mass fluxes obey $\sum_{i}j_{i}=0$. The above considerations yield \begin{align} \sum_{k}\varrho\left(\frac{\partial h}{\partial c_{k}}\right)_{T,p,c_{k'\neq k}}\frac{dc_{k}}{dt}=-\sum_{i}\bar{H}_{i}\partial_{z} J_{i}. \end{align} The first two partial derivatives in Eq.~(\ref{eq:specificenthalpydifferential}) are easily identified in Eq.~(\ref{eq:differentialenthalpyT}), and we find \begin{align}\label{eq:enthalpy_balance1} \varrho\partial_{t}h=&\varrho c_{p}\partial_{t}T+\left[1-\left(\frac{\partial \ln V}{\partial \ln T}\right)_{p} \right]\partial_{t}p-\sum_{i} \bar{H}_{i}\partial_{z} J_{i}, \end{align} which is the alternative expression for the l.h.s.\ of the internal enthalpy balance Eq.~(\ref{eq:enthalpy_balance2}) that we referred to at the start of this subsection. \subsection{Heat equation} Combining Eqs.~(\ref{eq:enthalpy_balance2}) and (\ref{eq:enthalpy_balance1}), and inserting the heat flow $J_{q}=-\kappa \partial_{z} T+\sum_{i}\bar{H}_{i} J_{i}$, gives \begin{align}\label{eq:fullheatequation} \varrho c_{p}\partial_{t}T=\kappa\partial_{z}^{2} T+IE+\alpha T \partial_{t}p-\sum_{i} J_{i}\partial_{z}\bar{H}_{i}. \end{align} Here, the electric field is found via Eq.~(\ref{eq:ioniccurrent}) as \begin{align}\label{eq:electricfield} E=&I\mathbbl{r}+\mathbbl{r}De\left(\partial_{z}q+q\partial_{z}\ln \left[1-v(\rho_{+}+\rho_{-}) \right]\right), \end{align} with $\mathbbl{r}=k_{ B}T/(De^{2}(\rho_{+}+\rho_{-}))$. The term $IE$ then gives the (ir)reversible heating rates $\mathbbl{\dot{q}}_{\rm irr}\equiv I^2 \mathbbl{r}$ and $\mathbbl{\dot{q}}_{\rm rev}\equiv I\mathbbl{r} De\left\{\partial_{z}q+q\partial_{z}\ln \left[1-v(\rho_{+}+\rho_{-}) \right]\right\}$.
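To make the decomposition of $IE$ concrete, the following minimal numerical sketch evaluates $\mathbbl{\dot{q}}_{\rm irr}$ and $\mathbbl{\dot{q}}_{\rm rev}$ on a spatial grid; the exponential density profiles and all parameter values below are hypothetical placeholders, not the parameters of the Letter.
\begin{verbatim}
import numpy as np

# Illustrative split of I*E (Eq. electricfield) into irreversible and
# reversible heating rates; profiles and numbers are placeholders.
kB, e = 1.381e-23, 1.602e-19           # J/K, C
T, D, v = 293.0, 1.0e-9, 1.0e-28       # K, m^2/s, m^3 (ion volume)
I = 10.0                               # imposed current density, A/m^2
z = np.linspace(0.0, 1.0e-7, 2001)     # position near one electrode, m
rho0, lam = 6.0e25, 1.0e-9             # bulk ion density 1/m^3, decay length m
rho_p = rho0 * (1.0 + 0.8 * np.exp(-z / lam))   # cation profile (hypothetical)
rho_m = rho0 * (1.0 - 0.8 * np.exp(-z / lam))   # anion profile (hypothetical)

q = e * (rho_p - rho_m)                          # local ionic charge density
r = kB * T / (D * e**2 * (rho_p + rho_m))        # local resistivity
log_free = np.log(1.0 - v * (rho_p + rho_m))     # steric (lattice-gas) factor

q_irr = I**2 * r                                 # Joule heating, I^2 * r
q_rev = I * r * D * e * (np.gradient(q, z)
                         + q * np.gradient(log_free, z))
print(q_irr.mean(), q_rev[0])  # bulk Joule rate; reversible rate at the EDL edge
\end{verbatim}
Note that $\mathbbl{\dot{q}}_{\rm rev}\propto I$ changes sign with the direction of the current, whereas $\mathbbl{\dot{q}}_{\rm irr}\propto I^{2}$ does not, which is what makes the former reversible.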
As discussed above, the coefficient $\alpha\equiv -\left(\partial \ln \varrho/\partial T \right)_{p}$ in Eq.~(\ref{eq:fullheatequation}) is the (usually small) volumetric thermal expansivity of the electrolyte. Also, $\partial_{t}p$ is small since our (incompressible) liquid is isobaric throughout the cell, and from here on we therefore drop the term $\alpha T \partial_{t} p$. For the ionic contribution to the term $J_{i}\partial_{z} \bar{H}_{i}$, consider the gradient of Eq.~(\ref{eq:partialmolarenthalpycontinued}). Firstly, the term $\sim \partial_{z}T$ vanishes in equilibrium. Meanwhile, Eq.~(\ref{eq:derivationgibsshelmholtz}) also contains a term proportional to the thermal expansivity of the solvent, $\sim \partial_{T} \ln \mathcal{V}$, the gradient of which vanishes in the bulk. In equilibrium, $\partial_{z}\bar{H}_{s}=0$ because the solvent chemical potential is uniform throughout the cell. All these terms vanishing in equilibrium implies that $J_{i}\partial_{z} \bar{H}_{i}$ is of order $J_{i}^{2}$ (or higher powers of $J_{i}$) and therefore vanishes faster than the reversible contributions, which go as $\mathbbl{\dot{q}}_{\rm rev}\sim I$. We conclude that $J_{i}\partial_{z} \bar{H}_{i}$ does {\it not} contribute to reversible heating. Moreover, the ratio $(J_{+}+J_{-})\partial_{z}(k_{ B}T)/\mathbbl{\dot{q}}_{\rm irr}$ also goes to zero, since for the system of interest both the neutral salt current $(J_{+}+J_{-})$ and the temperature variations $\partial_{z}T$ are very small. We therefore omit $J_{i}\partial_{z} \bar{H}_{i}$ from Eq.~(\ref{eq:fullheatequation}), which now simplifies to the heat equation Eq.~(\ref{eq:heatequation}), \begin{align} \varrho c_{p}\partial_{t}T=\kappa\partial_{z}^{2} T+\mathbbl{\dot{q}}_{\rm irr}+\mathbbl{\dot{q}}_{\rm rev}. \end{align} Note that our derivation of this equation differs from the one presented in Ref.~\cite{d2014first}. These authors started from an internal energy balance (similar to our Eq.~(\ref{eq:energy_balance}), but) lacking the term $IE$, with this term instead appearing in their heat equation via the partial molecular enthalpy, which they claim to be $\bar{H}_{i}=\tilde{\mu}_{i}-T\left(\partial \mu_{i}/\partial T\right)$.\\ \section{Analytical approximation to the adiabatic temperature rise}\label{ap4:largeandsmallcharge} For the case of vanishing ionic volume ($v=0$) and no double-layer overlap, Gouy and Chapman famously found an analytic solution to the Poisson-Boltzmann equation. The electrodes of our model EDLC are sufficiently separated that the ionic density profiles can be considered to be nonoverlapping. Consequently, we can approximate the adiabatic temperature rise predicted within Poisson-Boltzmann theory (the green line in Fig.~\ref{fig2}(a)) by inserting the Gouy-Chapman potential $\Psi= (2k_{ B}T/e) \sinh^{-1}(\sigma/\bar{\sigma})$ into Eq.~(\ref{eq:2adiabat}), which gives \begin{align}\label{eq:c1} \frac{dT}{T} &= \frac{4k_{\rm B}}{\varrho c_{Q}L}\left[\sinh^{-1} \frac{\sigma}{\bar{\sigma}}-\frac{\sigma}{\sqrt{\bar{\sigma}^{2}+\sigma^{2}}}\frac{\partial \ln\bar{\sigma}}{\partial \ln T}\right]d\sigma. \end{align} Separation of variables in this equation is only possible for the special case where $\partial_{T}\bar{\sigma}(T)=0$, which occurs if $T\epsilon(T)={\rm const}$ (i.e., not the case considered in this Letter).
In that case we find \begin{align}\label{eq4} \ln\frac{T_H}{T_L}&=\frac{4k_{\rm B}}{\varrho c_{Q}L}\bigg[\sigma\sinh^{-1} \frac{\sigma}{\bar{\sigma}}-\sqrt{\bar{\sigma}^{2}+\sigma^{2}}+\bar{\sigma}\bigg], \end{align} with $T_L$ the low initial temperature and $T_H$ the higher temperature after adiabatic charging. For small temperature changes, $\Delta T=T_H-T_L$, this equation simplifies to \begin{align}\label{eq5} \Delta T&\approx\frac{4k_{\rm B}T_L}{\varrho c_{Q}L}\times\begin{cases} \frac{\sigma^2}{2\bar{\sigma}}\quad\quad&\textrm{if~} \, \sigma\ll\bar{\sigma}, \\ \sigma\ln \frac{2\sigma}{\bar{\sigma}}-\sigma \quad\quad&\textrm{if~} \, \sigma\gg\bar{\sigma}, \end{cases} \end{align} where we used $\sinh^{-1}x\approx x$ for small $x$ and $\sinh^{-1}x\approx \ln 2x$ for large $x$. While this special case of a $T$-independent $\bar{\sigma}$ reveals the small- and large-$\sigma$ scaling behavior, neglecting the second term in Eq.~(\ref{eq:c1}) causes Eq.~(\ref{eq4}) to overestimate the adiabatic temperature rise by about 40\% w.r.t.\ the prediction of Eq.~(\ref{eq:c1}) for the parameters chosen. \section{Excess correlations}\label{ap3:correlations} The electrolytic partial molar enthalpy Eq.~(\ref{eq:derivationgibsshelmholtz}) is well documented in the chemistry literature \cite{davidson1962, devoe2001thermodynamics} for the electroneutral bulk. Instead of the term proportional to the thermal expansivity of the solvent, discussed at length above, usually an excess chemical potential due to ionic correlations is considered, via Debye-H\"{u}ckel (DH) theory, where $\beta\mu^{exc}_{\rm DH}(T,\rho_{0})=-\lambda_{B}/2\lambda_{D}$, or extensions thereof that include, for instance, finite ion size. In the bulk, adding $\mu^{exc}_{\rm DH}$ will not contribute to the reversible heating since both $\partial_{z}T$ and $\partial_{z} \rho_{0}$ vanish in equilibrium. Meanwhile, the assumptions underlying DH theory are strongly violated in the EDL, where same-sign counterions are at high density. We therefore expect the DH heating rates of Ref.~\cite{d2014first}, with a significant nonzero contribution in the EDL (see Fig.~6 of Ref.~\cite{d2014first}), to be unreliable. This is substantiated by our Fig.~\ref{fig2}(a), which shows that the solutions to the PNPh model as formulated in this Letter coincide, for slow charging, with the adiabatic temperature rise as predicted by the thermodynamic identity Eq.~(\ref{eq:2adiabat}). Had we added ionic correlations via the formulation of Ref.~\cite{d2014first}, that is, only in the partial molecular enthalpy, thereby affecting only the PNPh equations, we would have found only the black dotted lines of Fig.~\ref{fig2}(a) shifted, but not the red line of the thermodynamic results of Eq.~(\ref{eq:2adiabat}). If first-principles modeling is desired, excess ion correlations should be incorporated at the level of the grand potential functional Eq.~(\ref{eq:grandpotential}), so as to impact (via the electrochemical potential) both the EDL and the bulk, in as well as out of equilibrium. \end{appendix}
\section{Introduction} \label{intro} The time-dependent, cyclic evolution of the internal parameters of a quantum system can lead to the accumulation of a geometric phase that depends solely on the path traversed in parameter space and not on the duration of the cycle itself \cite{Berry1984a}. This geometric phase manifests itself in different phenomena in all areas of physics \cite{cohen2019geometric}. Geometric contributions to quantum evolution become particularly interesting for many-body systems in a topological phase of matter, hosting non-Abelian excitations at their edges \cite{wilczek1990fractional}. In this case, the phase is generalized to a unitary operation protected against details of the system and the evolution, which makes anyons appealing excitations for potential use in fault-tolerant quantum computation \cite{Nayak2008,Aasen2016}. A particularly interesting case of non-Abelian zero-energy excitations are Majorana zero modes, which exist on the surface of topological superconductors \cite{read2000paired,kitaev2001unpaired}. The proposal to engineer topological superconductivity in semiconductor nanostructures \cite{oreg2010helical,sau2010generic,Alicea2012,lutchyn2018majorana} has received compelling experimental indications \cite{mourik2012signatures,deng2016majorana,albrecht2016exponential,zhang2018quantized,lutchyn2018majorana} and has led to proposals for quantum information processing \cite{Aasen2016,Sarma2015,Plugge2016,Karzig2017}. The experimental indications of Majorana zero modes are based on charge transport measurements, reflecting the identification of topological indices in scattering processes for topological superconductors \cite{akhmerov2011quantized,fulga2012scattering,meidan2014scattering}. While these experiments are sensitive to the existence of protected zero modes, the charge neutrality of these modes makes a direct manifestation of the topologically protected unitary evolution more elusive to detection via charge transport. Heat transport does not suffer from this limitation. It appears to be non-trivially affected by Majorana zero modes \cite{smirnov2018universal,smirnov2019dynamic,ricco2018tuning,molignini2017sensing}, and geometric effects have been predicted for the heat pumped through Majorana braiding \cite{Meidan2019}. Geometric contributions have generically been found in transport processes such as pumped charge and heat currents in cyclically manipulated quantum systems with few degrees of freedom \cite{Ren2010,Placke2018,Yuge2012,KumarYadalam2016,Makhlin2001,Bhandari2020,Takahashi2020,Andreev2000,Brouwer1998,Shutenko2000,Avron2001,Moskalets2002,bhandari2020geometric}. For these systems with few degrees of freedom, recent studies \cite{Ren2010,Agarwalla2012,Goswami2016a} have suggested that geometric contributions to the full statistics of transfer processes result in an apparent violation of fluctuation theorems, which quantify the likelihood of anomalous heat transfer against a thermal gradient \cite{Gallavotti1995}. Motivated by these findings, it becomes of interest to investigate how the topological protection of the geometric phases in Majorana-based manipulations is reflected in the corrections to the aforementioned fluctuation theorems. In this spirit, we further explore the influence of geometric contributions to the full counting statistics of pumped heat transport in the case of the exchange of two Majorana fermions performed within a Y-junction of topologically superconducting nanowires.
We focus specifically on the effect upon fluctuation theorems. Using a scattering matrix approach, we will show that there is a non-zero geometric contribution to the probability generating function and that this contribution does indeed lead to a correction to the Gallavotti-Cohen-type fluctuation theorem. Such a correction generically exists for arbitrary adiabatic cycles in parameter space, and it extends to the case of Majorana braiding, in which it becomes insensitive to slow time-fluctuations of the driving parameters. The paper is organized as follows. In Sec. \ref{sec:FCS} we develop the general formalism to compute the full counting statistics of energy transfer via scattering matrices, including both the particle and hole degrees of freedom required for superconducting systems. We then address the protocols of interest in Sec. \ref{Braiding Sec}, where we analyze the scattering matrix for a driven Y-junction of one-dimensional p-wave superconductors. We employ our formalism to compute the transport properties beyond the average current in Sec. \ref{sec:transport} and the corrections to the Gallavotti-Cohen fluctuation theorem in Sec. \ref{sec:theorem}. In the latter we first address pumping cycles of small amplitude and finally extend our results to topologically protected braiding, where we characterize the topological features in the corrections to the mentioned fluctuation theorem. \section{Full Counting Statistics for Pumped Heat Transport} \label{sec:FCS} In order to study the behaviour of thermal fluctuations throughout any pumped process, it is necessary to extract information beyond the average pumped quantities and hence uncover the full statistical distribution of the transport process. Such information is provided by the probability distribution, $P(Q,\mathcal{T})$, for some quantity of interest $Q$, e.g.\ charge or energy, transported across a system throughout some time period $\mathcal{T}$. This distribution can be accessed via its Fourier transform, the characteristic function (CF) $\chi(\lambda)$, where $\lambda$ denotes the counting field. The full counting statistics of charge transfer, originally introduced for DC transport \cite{levitov1996electron}, has previously been evaluated for pumped electronic charge \cite{Makhlin2001,Andreev2000} and for specific non-adiabatic periodic driving of superconducting devices \cite{romito2004full}. For the case of a Majorana braiding, for which topological features are apparent in the scattering properties, we construct the FCS based on the scattering matrix formalism. We consider a superconducting system under the cyclic modulation of some internal parameters, which in this case correspond to the couplings between the external and central Majorana states present in a superconducting Y-junction (cf.\ Fig.\ \ref{Setup}). This time-dependent manipulation facilitates inelastic scattering events and, as a result, it is important to carefully consider both the energy and time dependence of the scattering events when defining the CF. We define the CF for the heat, $Q$, pumped during the total cycle period $\mathcal{T}$ as \begin{equation} \label{Total Char Func 1} \chi_Q(\lambda) = \int dQ e^{i\lambda Q} P(Q,\mathcal{T}).
\end{equation} The probability distribution for the total cycle, $P(Q,\mathcal{T})$, can be obtained by considering the heat transported during the time steps, $t_i$, of a discretised cycle: \begin{equation} \label{Total Char Func} \chi_Q(\lambda) = \int dQ e^{i\lambda Q} \sum_{\{q_{t_i}\}} \delta \Big (Q=\sum_{t_i}q_{t_i} \Big ) P(q_{t_1},q_{t_2},...), \end{equation} where $\{q_{t_i}\}$ denotes all possible combinations of the heat quantities $q_{t_i}$ transported in each discrete time step $t_i$. Considering the particle baths in the external leads to be large, so that the ingoing distribution function at any time $t$ is given by the equilibrium Fermi distribution function, and assuming sufficiently fast relaxation times, we can take the probability distributions at different times to be independent. The CF can then be written as a product of the contributions from each time step: \begin{equation} \label{Total Char Func Int} \begin{aligned} \chi_Q(\lambda) =& \sum_Q e^{i\lambda Q} \sum_{\{q_{t_i}\}} \delta \Big (Q=\sum_{t_i}q_{t_i} \Big ) \prod_{t_i} P(q_{t_i}) \\ =& \sum_{\{q_{t_i}\}} e^{i\lambda \sum_{t_i} q_{t_i}} \prod_{t_i} P(q_{t_i}) \\ =& \sum_{q_{t_1}} e^{i\lambda q_{t_1}} \sum_{q_{t_2}}e^{i\lambda q_{t_2}} ... \prod_{t_i} P(q_{t_i}) \\ =& \prod_{t_i} \sum_{q_{t_i}} e^{i\lambda q_{t_i}}P(q_{t_i}) = \prod_{t_i} \chi_{t_i}(\lambda). \end{aligned} \end{equation} Taking the continuous limit, $t_i \to 0$, we can write the cumulant generating function (CGF), defined as $G_Q(\lambda) = \ln(\chi_Q(\lambda))$, as an integral over the driving time period: \begin{equation} \label{Generating Function} G_Q(\lambda) = \int_0^\mathcal{T} dt \ln(\chi_t(\lambda)). \end{equation} We have therefore reduced the calculation of the total FCS of driven heat transport to that of a CF at a frozen time $t$, which we denote as $\chi_t(\lambda)$. The latter is computed analogously to the case of charge FCS \cite{Muzykantskii1994b}, as outlined in Appendix \ref{CF derivation}: \begin{equation} \label{Char Func t} \chi_t(\lambda) = \Big < e^{i\lambda \hat{Q}_\rightarrow(t)}e^{-i\lambda \hat{Q}_\leftarrow(t)} \Big >. \end{equation} Here the operators $\hat{Q}_{\rightarrow(\leftarrow)}(t)$ describe the heat carried by particles in the left lead, entering (leaving) the junction with the internal system of interest, at some time $t$. The CF in Eq. \ref{Char Func t} has previously been evaluated for the case of charge transfer between superconducting leads in static systems by B.A. Muzykantskii and D.E. Khmelnitskii \cite{Muzykantskii1994b}. We extend this formalism hereafter to the case of heat transport and adiabatically driven systems. The heat operators can be written in terms of fermionic creation and annihilation operators in the left external lead \cite{Moskalets2014}: \begin{equation} \begin{aligned} \label{Ingoing Heat} \hat{Q}_{\rightarrow}(t) = \iint_{-\infty}^\infty d\epsilon d\epsilon' & \Big(\frac{\epsilon+\epsilon'}{2} \Big) e^{\frac{i(\epsilon-\epsilon')t}{\hbar}} \\ & \times \Big(\hat{a}_{L^e}^\dagger(\epsilon) \hat{a}_{L^e}(\epsilon') +\hat{a}_{L^h}^\dagger(\epsilon) \hat{a}_{L^h}(\epsilon')\Big) \\ \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \label{Outgoing Heat} \hat{Q}_{\leftarrow}(t) = \iint_{-\infty}^\infty d\epsilon d\epsilon' & \Big(\frac{\epsilon+\epsilon'}{2} \Big) e^{\frac{i(\epsilon-\epsilon')t}{\hbar}} \\ & \times \Big(\hat{\phi}_{L^e}^\dagger(\epsilon) \hat{\phi}_{L^e}(\epsilon') +\hat{\phi}_{L^h}^\dagger(\epsilon) \hat{\phi}_{L^h}(\epsilon')\Big).
\end{aligned} \end{equation} The ingoing, $\hat{a}_{i}$, and outgoing, $\hat{\phi}_{i}$, electron (e) and hole (h) scattering states in the left (L) and right (R) leads are related by the scattering matrix: \begin{eqnarray} \label{scattering relation} \begin{pmatrix} \hat{\phi}_{L^e}(\tilde{\epsilon}) \\ \hat{\phi}_{L^h}(\tilde{\epsilon}) \\ \hat{\phi}_{R^e}(\tilde{\epsilon}) \\ \hat{\phi}_{R^h}(\tilde{\epsilon}) \end{pmatrix} = S_F(\tilde{\epsilon},\epsilon) \begin{pmatrix} \hat{a}_{L^e}(\epsilon) \\ \hat{a}_{L^h}(\epsilon) \\ \hat{a}_{R^e}(\epsilon) \\ \hat{a}_{R^h}(\epsilon) \end{pmatrix}, \end{eqnarray} where the scattering matrix depends explicitly on two energies because the scattered particles can absorb or emit energy owing to the external driving. For a periodically driven system, as we are considering here, energy can be absorbed or emitted only in multiples of the driving frequency, so that $\tilde{\epsilon}=\epsilon-n\omega$. With this relationship between ingoing and outgoing states, Eqs.\ \ref{Char Func t}, \ref{Ingoing Heat} and \ref{Outgoing Heat} allow the FCS for heat transport to be accessed entirely via the full Floquet scattering matrix describing transport across the internal system. \subsection{Adiabatic and Small Driving Amplitude Limit} \label{sec:Adiabatic} In general, for a time-dependent driven system it is difficult to determine the elements of the full Floquet scattering matrix, which accommodates all possible inelastic scattering events induced by the driving. In order to make analytical progress, we choose to study a model subjected to two important approximations. Firstly, we assume that the periodic driving of the system is adiabatic, in the sense that the driving period, $\mathcal{T}$, is large compared to the scattering time. In this situation, scattering can be considered instantaneous and described by a frozen scattering matrix, $S(\epsilon,t)$, the properties of which are modulated periodically by the driving. More precisely, if the driving is switched off, the Floquet scattering matrix in Eq.\ \ref{scattering relation}, $S_F(\epsilon, \epsilon)$, describes an energy dependent, time-translation-invariant scattering process. If the matrix depends on time via a parameter, one can consider such a frozen scattering matrix parametrically depending on time, $S_{F,t}(\epsilon, \epsilon) \equiv S(\epsilon,t)$. Secondly, we assume that the amplitude of the driving, in the relevant parameter space of the system, is small. In Sec. \ref{Large Amp Drive} we will show that the results yielded from this approach can be extended to the case of the Majorana braiding, for which the amplitude of the driving can no longer be considered small with respect to the values of the parameters at the center of the cycle. The weak, adiabatic, periodic driving of parameters, such as the lead temperatures or lead coupling strength, with frequency $\omega$ can be modeled as \begin{equation} \label{parameter modulation} X_j(t) = X_{j,0} + X_{j,\omega}e^{i(\omega t-\eta_j)} + X_{j,\omega}e^{-i(\omega t-\eta_j)}.
\end{equation} With the assumption that the amplitude of this modulation, $X_{j,\omega}$, is small enough to expand to first order, the corresponding time dependence of the scattering matrix can be expressed as \cite{Moskalets2002} \begin{eqnarray} \label{S expand} \begin{aligned} S(\epsilon,t) \approx& \ S(\epsilon,X_{j,0}) + S_\omega(\epsilon) e^{-i\omega t} + S_{-\omega}(\epsilon) e^{i\omega t}, \\ \mathrm{where} \ S_{\pm \omega} =& \sum_j X_{j,\omega}e^{\mp i\eta_j} \frac{\partial S}{\partial X_j}. \end{aligned} \end{eqnarray} The scattering matrix in this form corresponds to a zeroth order expansion in frequency of the full Floquet scattering matrix, whilst allowing only scattering processes between nearest energy sidebands in addition to elastic events. The corresponding operators for scattered states then take the form \begin{equation} \begin{aligned} {\hat{\phi}_i(\epsilon)} = \sum_{\alpha}\Big ( S^{i\alpha }(\epsilon){\hat{a}_\alpha(\epsilon)}& + {S_{-\omega}^{i\alpha }(\epsilon)}{\hat{a}_\alpha(\epsilon + \omega)} \\ &+ {S_{+\omega}^{i\alpha }(\epsilon)}{\hat{a}_\alpha(\epsilon - \omega)} \Big ), \end{aligned} \end{equation} where the creation and annihilation operators for the four ingoing channels at energy $\epsilon_i$ are defined in Eq. \ref{scattering relation}. Before using this approximation of the scattering matrix in the expressions for the ingoing and outgoing heat operators (Eqs. \ref{Ingoing Heat}, \ref{Outgoing Heat}), we notice that upon calculation of the CGF in Eq. \ref{Generating Function}, and hence integration over the driving time period, only terms where $\epsilon=\epsilon'$ will contribute to the heat operators. Consequently, the evaluation of the CGF only requires the calculation of the ingoing and outgoing number operators at a single energy, defined as \begin{equation} \begin{aligned} & \hat{N}^{e(h)}_\rightarrow (\epsilon) = \hat{a}^\dagger_{L^{e(h)}} (\epsilon) \hat{a}_{L^{e(h)}} (\epsilon) \\ \mathrm{and} & \ \hat{N}^{e(h)}_\leftarrow (\epsilon) = \hat{\phi}^\dagger_{L^{e(h)}} (\epsilon) \hat{\phi}_{L^{e(h)}} (\epsilon). \end{aligned} \end{equation} The number operators for both the ingoing and outgoing particle states can then be expressed in terms of matrices, $P$, acting on the ingoing scattering states: \begin{equation} \hat{N}^{e(h)}_{ \rightarrow(\leftarrow)}(\epsilon_l) = \sum_{\substack{\alpha,\beta \\ \epsilon_i,\epsilon_j}} \Big [ P^{e(h)}_{\rightarrow(\leftarrow)}(\epsilon_l) \Big ]^{\alpha \beta}_{\epsilon_i\epsilon_j} {\hat{a}^\alpha(\epsilon_i)}^\dagger \hat{a}^\beta(\epsilon_j), \end{equation} with $\alpha, \beta \in \{L^e,L^h,R^e,R^h\}$. Here the matrices for the ingoing states are diagonal in both the discretised energy and the electron-hole bases: \begin{equation} \label{ingoing P} \Big [ P^{e(h)}_{\epsilon_l \rightarrow} \Big ]^{\alpha \beta}_{\epsilon_i\epsilon_j} = \delta_{\alpha L^{e(h)}}\delta_{\alpha \beta}\delta_{\epsilon_i \epsilon_l}\delta_{\epsilon_i \epsilon_j}.
\end{equation} However, the inelastic scattering events induced by the driving result in non-diagonal matrices defining the outgoing number operators: \begin{widetext} \begin{equation} \label{outgoing P} \begin{aligned} \Big [ P^{e(h)}_{\epsilon_l \leftarrow} \Big ]^{\alpha \beta}_{\epsilon_i\epsilon_j} =& \delta_{\epsilon_i \epsilon_l}{S^{\alpha L^{e(h)}}(\epsilon_l)}^* \Big ( S^{L^{e(h)} \beta}(\epsilon_l) \delta_{\epsilon_i \epsilon_j} + S^{L^{e(h)} \beta}_{-\omega}(\epsilon_l) \delta_{(\epsilon_i + \omega) \epsilon_j} + S^{L^{e(h)} \beta}_{\omega}(\epsilon_l) \delta_{(\epsilon_i - \omega) \epsilon_j} \Big ) \\ +& \delta_{\epsilon_i (\epsilon_l+\omega)}{S^{\alpha L^{e(h)}}_{-\omega}(\epsilon_l)}^* \Big ( S^{L^{e(h)} \beta}(\epsilon_l) \delta_{(\epsilon_i-\omega)\epsilon_j} + S^{L^{e(h)} \beta}_{-\omega}(\epsilon_l) \delta_{\epsilon_i \epsilon_j} + S^{L^{e(h)} \beta}_{\omega}(\epsilon_l) \delta_{(\epsilon_i - 2\omega) \epsilon_j} \Big ) \\ +& \delta_{\epsilon_i (\epsilon_l-\omega)}{S^{\alpha L^{e(h)}}_{\omega}(\epsilon_l)}^* \Big ( S^{L^{e(h)}\beta}(\epsilon_l) \delta_{(\epsilon_i+\omega)\epsilon_j} + S^{L^{e(h)} \beta}_{-\omega}(\epsilon_l) \delta_{(\epsilon_i + 2\omega) \epsilon_j} + S^{L^{e(h)} \beta}_{\omega}(\epsilon_l) \delta_{\epsilon_i \epsilon_j} \Big ). \end{aligned} \end{equation} \end{widetext} Using these matrices $P$ in Eq. \ref{Char Func t}, the characteristic function can be expressed as the average of a product of exponentials: \begin{equation} \label{CF expectation} \chi_t(\lambda) = \Big < \mathrm{exp} \Big (i \lambda \sum_{\alpha,\beta} C_{\alpha \beta} \hat{a}_\alpha^\dagger \hat{a}_\beta \Big ) \mathrm{exp} \Big (-i \lambda \sum_{\alpha,\beta} D_{\alpha \beta} \hat{a}_\alpha^\dagger \hat{a}_\beta \Big ) \Big >, \end{equation} with $C = \sum_i \epsilon_i P_{\epsilon_i \rightarrow}$, $D = \sum_i \epsilon_i P_{\epsilon_i \leftarrow}$, and the sum of the electron and hole number operator matrices defined as $P_{\epsilon_i \rightarrow}=P^e_{\epsilon_i \rightarrow} + P^h_{\epsilon_i \rightarrow}$. The relevant density matrix, $\rho$, is block diagonal in the energy basis, with the block at each energy $\epsilon_l$ given by $\rho^{\epsilon_{l}}_{ij} = \langle \hat{a}^{i \dagger}_{\epsilon_{l}} \hat{a}^j_{\epsilon_{l}} \rangle = f_i(\epsilon_{l})\delta_{ij}$. Importantly, the $P$ are projective matrices, $P^2=P$, as shown in Appendix \ref{projector derivation}. Under this condition, it has been proven \cite{Muzykantskii1994b} that the expectation value in Eq. \ref{CF expectation} can be expressed as a determinant via \begin{equation} \begin{aligned} \label{det form 1} &\chi_{t}(\lambda) = \det (1-\rho+\rho e^{i\lambda C}e^{-i\lambda D}) \\ =& \det(1-\rho+\rho e^{i \lambda \sum_i \epsilon_i P_{\epsilon_{i}\rightarrow}} \Big(1 + \sum_i P_{\epsilon_{i}\leftarrow} (e^{-i \lambda \epsilon_{i}}-1) \Big)) \\ &= \det(M_t(\lambda)). \end{aligned} \end{equation} In general the matrix $M_t(\lambda)$ will be of block-pentadiagonal form in an infinite energy basis. In order to make analytical progress we can split the matrix $P_{\epsilon_{i}\leftarrow}$ into two contributions as $P_{\epsilon_i \leftarrow}^0 + \tilde{P}_{\epsilon_i \leftarrow}$, where $P_{\epsilon_i \leftarrow}^0$ describes the part of the matrix which survives in the static limit and $\tilde{P}_{\epsilon_i \leftarrow}$ includes all contributions that arise from the periodic driving and hence all terms involving the sideband scattering matrix coefficients $S_{\pm \omega}$.
Subsequently we can split the matrix $M_t$ as $M_{t,0}+\tilde{M}_t$, where \begin{widetext} \begin{equation} \label{GF Full Form} \begin{aligned} & M_{t,0} = 1 - \rho + \rho \exp(i \lambda \sum_i \epsilon_i P_{\epsilon_i \rightarrow})\Bigg(1 + \sum_i P_{\epsilon_i \leftarrow}^0 (e^{-i \lambda \epsilon_i}-1)\Bigg), \\ &\tilde{M}_t = \rho \exp(i \lambda \sum_i \epsilon_i P_{\epsilon_i \rightarrow}) \Bigg ( \sum_i \tilde{ P}_{\epsilon_i \leftarrow} (e^{-i \lambda \epsilon_i}-1) \Bigg ), \end{aligned} \end{equation} \begin{equation} \nonumber \begin{aligned} & P_{\epsilon_i \leftarrow}^{e,0} = \begin{pmatrix} \ddots \\ & \bigzero \\ & & 0 & 0 &0 & & \\ & & 0 & {S^{\alpha L^{e}}(\epsilon_i)}^*S^{L^{e} \beta}(\epsilon_i) & 0 & & \\ & & 0 & 0 & 0 & & \\ & & & & & \bigzero \\ & & & & & & \ddots \end{pmatrix} \\ & \mathrm{and} \\ & \tilde{P}^e_{\epsilon_i \leftarrow} = \begin{pmatrix} \ddots \\ & \bigzero \\ & & {S_\omega^{\alpha L^e}(\epsilon_i)}^*S_\omega^{L^e \beta}(\epsilon_i) & {S_\omega^{\alpha L^e}(\epsilon_i)}^*S^{L^e \beta}(\epsilon_i) & {S_\omega^{\alpha L^e}(\epsilon_i)}^*S_{-\omega}^{L^e \beta}(\epsilon_i) & & \\ & & {S^{\alpha L^e}(\epsilon_i)}^*S_\omega^{L^e \beta}(\epsilon_i) & 0 & {S^{\alpha L^e}(\epsilon_i)}^*S_{-\omega}^{L^e \beta}(\epsilon_i) & & \\ & & {S_{-\omega}^{\alpha L^e}(\epsilon_i)}^*S_\omega^{L^e \beta}(\epsilon_i) & {S_{-\omega}^{\alpha L^e}(\epsilon_i)}^*S^{L^e \beta}(\epsilon_i) & {S_{-\omega}^{\alpha L^e}(\epsilon_i)}^*S_{-\omega}^{L^e \beta}(\epsilon_i) & & \\ & & & & & \bigzero \\ & & & & & & \ddots \end{pmatrix}. \end{aligned} \end{equation} \end{widetext} With these definitions, the CGF, $G(\lambda) = \ln \chi(\lambda)$, can be expressed as a sum of two contributions: \begin{equation} \begin{aligned} \label{CGF} G(\lambda) =& \int_0^\mathcal{T} dt\ln(\det(M_{t,0} + \tilde{M}_t)) \\ =& \ G^{\mathrm{elas}}(\lambda) + G^{\mathrm{pump}}(\lambda), \end{aligned} \end{equation} where \begin{equation} \begin{aligned} \nonumber G^{\mathrm{elas}}(\lambda)&=\int_0^\mathcal{T} dt \ln(\det(M_{t,0})), \\ G^{\mathrm{pump}}(\lambda) &= \int_0^\mathcal{T} dt \Tr \Big( \ln(\mathcal{I}+M_{t,0}^{-1}\tilde{M}_t)\Big), \end{aligned} \end{equation} with $\mathcal{I}$ the identity matrix. Here we have labeled the contribution which would survive in the static limit and arises from only elastic scattering events as $G^{\mathrm{elas}}(\lambda)$, and the contribution arising from the adiabatic driving as $G^{\mathrm{pump}}(\lambda)$. Since we are working in the limit of small amplitude parameter modulation, in which these dynamic contributions to the scattering matrix are small, keeping only terms quadratic in $X_{j,\omega}$ is a justifiable approximation. Terms of this nature appear in both the linear and quadratic contributions to the expansion of the matrix $\ln(\mathcal{I}+M_{t,0}^{-1}\tilde{M}_t)$. Consequently, the contribution to the CGF from the pump can be Taylor expanded and truncated to quadratic order: \begin{equation} G^{\mathrm{pump}}(\lambda) \approx \int_0^\mathcal{T} dt \Tr \Big (M_{t,0}^{-1}\tilde{M}_t - \frac{1}{2}\big (M_{t,0}^{-1}\tilde{M}_t \big )^2 \Big ). \end{equation} This approximation significantly simplifies the calculation. In particular, the matrix $M_{t,0}$ is block diagonal in the discretised energy basis. Its determinant can then be written as a product of the determinants of each of the individual blocks $M_{t,0}(\epsilon)$.
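To illustrate Eq.~\ref{det form 1} concretely, the following minimal numerical sketch evaluates $\chi_{t}(\lambda)=\det(1-\rho+\rho e^{i\lambda C}e^{-i\lambda D})$ for a single energy slice, with a toy $4\times 4$ unitary playing the role of the frozen scattering matrix (an illustrative assumption, not the Y-junction result derived below); it exploits the projector property $P^{2}=P$, whereby $e^{i\lambda \epsilon P}=1+(e^{i\lambda\epsilon}-1)P$.
\begin{verbatim}
import numpy as np

# Toy single-energy evaluation of chi_t = det(1 - rho + rho e^{ilC} e^{-ilD}).
# The 4x4 unitary S mixing (L^e, L^h, R^e, R^h) and all numbers are illustrative.
eps, T = 0.3, 0.5                      # energy slice and lead temperature (k_B = 1)
f = 1.0 / (np.exp(eps / T) + 1.0)      # identical Fermi occupations in both leads
rho = f * np.eye(4)

th = 0.7                               # toy mixing angle
S = np.kron(np.array([[np.cos(th), np.sin(th)],
                      [-np.sin(th), np.cos(th)]]), np.eye(2))

P_in = np.diag([1.0, 1.0, 0.0, 0.0])   # projector on ingoing left-lead channels
P_out = S.conj().T @ P_in @ S          # outgoing projector; P_out @ P_out = P_out

def expP(lam, P):                      # exp(i*lam*eps*P), valid since P^2 = P
    return np.eye(4) + (np.exp(1j * lam * eps) - 1.0) * P

def lnchi(lam):                        # ln det(M_t(lambda)) at this energy
    M = np.eye(4) - rho + rho @ expP(lam, P_in) @ expP(-lam, P_out)
    return np.log(np.linalg.det(M))

h = 1e-4                               # cumulants by finite differences at lam = 0
mean = ((lnchi(h) - lnchi(-h)) / (2 * h) / 1j).real
var = -((lnchi(h) - 2.0 * lnchi(0.0) + lnchi(-h)) / h**2).real
print(mean, var)   # zero mean heat for equal temperatures; positive thermal noise
\end{verbatim}
In the full calculation the same determinant is evaluated block by block in energy and integrated over $t$ and $\epsilon$, as in Eqs.~\ref{static gen 1} and \ref{dyn gen 1} below.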
In the continuous limit the static contribution to the CGF is then given by \begin{equation} \label{static gen 1} G^{\mathrm{elas}}(\lambda) = \int_0^\mathcal{T} dt \int_{-\infty}^{\infty} d\epsilon \ln\big(\det(M_{t,0}(\epsilon))\big). \end{equation} Similarly, since the dynamic contribution can be expressed as a trace, we can again consider the diagonal blocks at each energy separately, and in the continuous limit we have that \begin{equation} \begin{aligned} \label{dyn gen 1} G^{\mathrm{pump}}(\lambda) = \int_0^\mathcal{T} dt \int_{-\infty}^{\infty} d\epsilon \Tr \Bigg[ M_{t,0}(\epsilon)^{-1}\tilde{M}_t(\epsilon) \\ - \frac{1}{2}\bigg(M_{t,0}^{-1}(\epsilon)\tilde{M}_t(\epsilon) \bigg )^2 \Bigg]. \end{aligned} \end{equation} Eqs. \ref{static gen 1} and \ref{dyn gen 1} constitute the first main results of this work and can be used to determine the heat transport statistics and fluctuation theorems for any weakly and adiabatically driven cyclic system in terms of the scattering matrix. In the case of the simultaneous variation of just two parameters of the Hamiltonian, the dynamic contribution to the generating function exhibits two distinct parts. The first consists of terms dependent only on the variation of a single Hamiltonian parameter and is hence proportional to $X_{j,\omega}^2$. This contribution is independent of the direction of the driving in parameter space and survives in the case where only a single parameter is varied. The second contribution is, in contrast, geometric in nature and hence only dependent upon the path traversed in parameter space during the driving cycle. We find that this contribution, $G^{\mathrm{geom}}(\lambda)$, is independent of the driving frequency and is identified by its proportionality to $X_{1,\omega}X_{2,\omega}$. The sign of this contribution is sensitive to the direction of traversal of the contour in parameter space associated with the driving. Geometric contributions to the full counting statistics of heat transport have previously been demonstrated to produce corrections to fluctuation theorems \cite{Ren2010}. This contribution takes on further interest within systems where the accumulated geometric phase is topologically protected against fluctuations in the driving cycle, such as a Majorana braiding protocol. Furthermore, although the derivation of the full counting statistics here used the approximation that the amplitude of the driven cycle is small in parameter space, we show in Sec.\ \ref{Large Amp Drive} that our analysis can be extended to large amplitude pumps for the case of such geometric contributions. \subsection{Full Counting Statistics for Pumped Charge Transport} The calculation in the previous section can be reproduced for the case of electronic transport in an adiabatically driven system. In this case, the characteristic function is given by \begin{equation} \chi_{e,t}(\lambda) = \Big < e^{i\lambda \hat{Q}_{e,\rightarrow}}e^{-i\lambda \hat{Q}_{e,\leftarrow}} \Big >, \end{equation} with ingoing and outgoing charge operators defined as $\hat{Q}_{e,\rightarrow(\leftarrow)} = \sum_{\epsilon_i} e\Big (\hat{N}^e_{\epsilon_i \rightarrow(\leftarrow)}-\hat{N}^h_{\epsilon_i \rightarrow(\leftarrow)} \Big )$, where $e$ is the unit of electronic charge. This expression reflects the fact that electrons and holes, traveling in the same direction with respect to the scattering centre, carry charge in opposite directions.
From this new starting point, one can show that the corresponding elastic and dynamic contributions to the CGF can be expressed analogously to those for the case of heat transport: \begin{equation} \begin{aligned} \label{elec GF} G_{e}^{\mathrm{elas}}(\lambda) =& \int_0^\mathcal{T} dt \int_{-\infty}^{\infty} d\epsilon \ln\big(\det(M_{t,0}^e(\epsilon))\big), \\ G_{e}^{\mathrm{pump}}(\lambda) =& \int_0^\mathcal{T} dt \int_{-\infty}^{\infty} d\epsilon \Tr(M_{t,0}^e(\epsilon)^{-1}\tilde{M}^e_t(\epsilon)), \end{aligned} \end{equation} where now \begin{equation} \begin{aligned} & M_{t,0}^e = 1 - \rho \\ &+\rho \exp(i \lambda \sum_i P_{\epsilon_i \rightarrow})\Bigg(1 + \sum_i P_{\epsilon_i \leftarrow}^0 (e^{-i \lambda }-1)\Bigg), \\ &\tilde{M}_t ^e= \rho \exp(i \lambda \sum_i P_{\epsilon_i \rightarrow}) \Bigg ( \sum_i \tilde{ P}_{\epsilon_i \leftarrow} (e^{-i \lambda }-1) \Bigg ). \end{aligned} \end{equation} In this case, $ P_{\epsilon_i \rightarrow (\leftarrow)} = P^e_{\epsilon_i \rightarrow (\leftarrow)} - P^h_{\epsilon_i \rightarrow (\leftarrow)}$ and the matrices $ P^{e(h)}_{\epsilon_i \rightarrow (\leftarrow)}$ are defined as in Eqs. \ref{ingoing P} and \ref{outgoing P}. As for the analogous expressions for heat transport, these results are valid for any cyclically driven system, provided the driving can be considered adiabatic and with an amplitude that is small in the relevant parameter space. \section{Driven 1D topological superconductors: Braiding Cycle and Pumped Heat} \label{Braiding Sec} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{"Presentation2".pdf} \caption{(a) Y-junction of p-wave superconducting nanowires (blue) with Majorana zero modes at positions indicated by the green dots. Each of the external Majorana modes, $\gamma_{x,y,z}$, is coupled to the central mode with corresponding coupling strengths $\Delta_{x,y,z}$, and the modes $\gamma_x$ and $\gamma_y$ are further coupled to conducting normal metal leads with strengths $\Gamma_L$ and $\Gamma_R$. (b) Schematic depicting the Majorana braiding cycle. The diagram on the right illustrates the required sequence of Majorana couplings, where the solid blue lines illustrate the couplings which are turned on and dashed lines indicate those that are turned off. The corresponding evolution, $\mathcal{C}_1+\mathcal{C}_2+\mathcal{C}_3$, is shown as a path in spherical parameter space on the left. Also illustrated is an example of a small amplitude driving contour $\mathcal{C}_s$.} \label{Setup} \end{figure} The driven process for which we would like to study transport statistics, and which hence provides the central focus of this paper, is that of a Majorana braiding. The setup under consideration consists of three p-wave superconducting nanowires, in a topologically nontrivial state, arranged in the form of a Y-junction as illustrated in Fig. \ref{Setup}. Such a system has been demonstrated by D. Meidan \textit{et al.} to produce a net heat current under Majorana braiding \cite{Meidan2019}. Each of the three wires hosts two zero energy Majorana modes, one at each end of the wire \cite{Kitaev2001}. However, at energies well below the superconducting gap, $\Delta_{sc}$, the low energy Hilbert space is spanned by the three outer Majorana zero modes, $\gamma_{x}$, $\gamma_{y}$, $\gamma_{z}$, and a fourth zero mode, $\gamma_{0}$, formed by a linear combination of the internal Majoranas from each wire.
This Y-junction is coupled to two external normal metal leads, $L$ and $R$, which lie on either side of the junction in order to facilitate a particle current and allow the exploration of the transport properties of the braiding protocol. The two Majorana states $\gamma_{x}$ and $\gamma_{y}$ can be exchanged by systematically modulating the couplings between the external Majorana states and the central state $\gamma_{0}$ \cite{VanHeck2012,Karzig2016a}. The couplings between these four states are quantified by the effective Hamiltonian for the Y-junction, \begin{equation} H_{Y} = i\gamma_{0} \vec{\Delta} \cdot \vec{\gamma}, \end{equation} where $\vec{\Delta} = \Delta(\sin\theta \cos\phi,\sin\theta \sin\phi,\cos\theta)$ and $\vec{\gamma} = (\gamma_{x},\gamma_{y},\gamma_{z})$. The complete Hamiltonian for the system is then given by $H=H_{Y}+H_{\mathrm{coup}}+H_{\mathrm{leads}}$, where the contributions from the coupling to the external leads and the leads themselves can be written as \begin{equation} \label{coup H} \begin{gathered} H_{\mathrm{coup}} = \sum_{k}\Big[\sqrt{\Gamma_{L}}(c_{Lk}-c_{Lk}^{\dagger})\gamma_{x} + \sqrt{\Gamma_{R}}(c_{Rk}-c_{Rk}^{\dagger})\gamma_{y}\Big], \\ H_{\mathrm{leads}} = \sum_{k}\sum_{\alpha=L,R} \xi_{\alpha k}c_{\alpha k}^{\dagger} c_{\alpha k}, \end{gathered} \end{equation} respectively. Here, $\Gamma_{L/R}$ denote the coefficients associated with particle tunneling from the leads onto the superconducting Y-junction and $\xi_{\alpha k}$ are the energy dispersion relations in the leads. The process of braiding now corresponds to adiabatically changing the parameters $\theta$ and $\phi$ in such a way as to generate the evolution of the Majorana couplings, as illustrated in Fig.\ \ref{Setup}(b). This evolution can be better understood by writing the Hamiltonian, $H_{Y}$, in terms of the basis vectors of a spherical coordinate system: \begin{equation} \begin{gathered} \label{spherical Majoranas} H_{Y} = i\Delta \gamma_{0} \gamma_{r}, \\ \mathrm{where} \ \gamma_{r} = \vec{\gamma} \cdot \hat{e}_{r}, \ \gamma_{\theta} = \vec{\gamma} \cdot \hat{e}_{\theta}, \ \gamma_{\phi} = \vec{\gamma} \cdot \hat{e}_{\phi}. \end{gathered} \end{equation} Since they do not enter the Hamiltonian, the basis vectors $\hat{e}_\theta$ and $\hat{e}_{\phi}$ span the degenerate ground space, and the adiabatic evolution of the system can now be interpreted as changing the projection of the physical Majorana states onto this degenerate ground space. At energies well below the superconducting gap ($\epsilon \ll \Delta_{sc}$), it is only this subspace that will facilitate particle transport between the left and right leads, via the occupation of the zero-energy, non-local fermion level defined via the annihilation operator, \begin{equation} \hat{a} = \frac{1}{2} \big( \gamma_\theta + i \gamma_\phi \big). \end{equation} It has been demonstrated \cite{Karzig2016a,VanHeck2012} that the sequence of couplings sketched in Fig.\ \ref{Setup}(b) corresponds to this annihilation operator accumulating a phase factor $e^{i \Omega_\mathcal{C}}$, where $\Omega_\mathcal{C}$ corresponds to the solid angle enclosed by the curve $\mathcal{C}=\mathcal{C}_1+\mathcal{C}_2+\mathcal{C}_3$ traversed in parameter space.
For the specific process outlined in this section, with $\Omega_\mathcal{C} = \pi/2$, the resulting unitary evolution operator, $U = e^{-\frac{\pi}{4}\gamma_\phi \gamma_\theta}$, corresponds to the exchange of the Majoranas $\gamma_x$ and $\gamma_y$: \begin{eqnarray} \nonumber U^\dagger \gamma_x U = \gamma_y, \\ \nonumber U^\dagger \gamma_y U = -\gamma_x. \end{eqnarray}
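These exchange relations can be checked in any explicit representation of the Majorana (Clifford) algebra; the following minimal sketch uses Pauli matrices as a stand-in two-dimensional representation (our illustrative choice, not the basis used in the text), and exploits the fact that $(\gamma_2\gamma_1)^2=-1$ so that the exponential truncates to a cosine and a sine.
\begin{verbatim}
import numpy as np

# Check that U = exp(-(pi/4) g2 g1) maps g1 -> g2 and g2 -> -g1 under U^dag . U.
g1 = np.array([[0, 1], [1, 0]], dtype=complex)   # stand-in for gamma_x
g2 = np.array([[0, -1j], [1j, 0]])               # stand-in for gamma_y

A = g2 @ g1                                      # (g2 g1)^2 = -identity
U = np.cos(np.pi / 4) * np.eye(2) - np.sin(np.pi / 4) * A   # exp(-(pi/4) g2 g1)

print(np.allclose(U.conj().T @ g1 @ U, g2))      # True: gamma_x -> gamma_y
print(np.allclose(U.conj().T @ g2 @ U, -g1))     # True: gamma_y -> -gamma_x
\end{verbatim}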
This braiding protocol also leads to a non-zero heat current being pumped between the two external leads and, in the low temperature limit, the total heat pumped tends to a universal value independent of the coupling strength to the external leads and of fluctuations of the driving \cite{Meidan2019}. Despite this, the particle-hole symmetry of the Majorana-lead coupling results in no transfer of charge between the leads during the process. In order to find the CGF for the Majorana braiding protocol, we first need to determine its instantaneous scattering matrix $S(\epsilon,t)$. For the superconducting Y-junction, the scattering matrix can be calculated as \cite{Mahaux1969}: \begin{equation} \label{scatter def} S(\epsilon,t) = 1 + 2\pi i W^{\dagger} (H_0-\epsilon-i\pi WW^\dagger )^{-1}W, \end{equation} where $H_{0}$ denotes the Hamiltonian of the Y-junction in the absence of the external leads and $W$ describes the coupling between the incoming electron and hole (e/h) scattering states in the leads and the states of the system. This coupling matrix can be obtained from the Hamiltonian in Eq. \ref{coup H} and, in the basis of the Majorana zero modes, it takes the form: \begin{equation} \begin{aligned} W = \sqrt{\Gamma_{L}} \Big (\ket{\gamma_x}\bra{e^L}-&\ket{\gamma_x}\bra{h^L}\Big ) \\ &+ \sqrt{\Gamma_{R}} \Big (\ket{\gamma_y}\bra{e^R}-\ket{\gamma_y}\bra{h^R}\Big ). \label{eq:coupling matrix} \end{aligned} \end{equation} Eqs. \ref{scatter def} and \ref{eq:coupling matrix} then give the specific form of the scattering matrix for the braiding protocol \cite{Meidan2019}: \begin{equation} \label{specific matrix 1} \begin{gathered} S(\epsilon) = \begin{pmatrix} S^{L^eL^e} & 1-S^{L^eL^e} & S^{L^eR^e} & -S^{L^eR^e} \\ 1-S^{L^eL^e} & S^{L^eL^e} & -S^{L^eR^e} & S^{L^eR^e} \\ S^{L^eR^e} & -S^{L^eR^e} & S^{R^eR^e} & 1-S^{R^eR^e} \\ -S^{L^eR^e} & S^{L^eR^e} & 1-S^{R^eR^e} & S^{R^eR^e} \\ \end{pmatrix}, \\ \mathrm{where} \ \ S^{L^eL^e} = 1 - 4\pi i \Gamma \Bigg (\frac{\sin^2\phi}{\epsilon + 2\pi i\Gamma} + \frac{\cos^2\theta \cos^2\phi}{\epsilon + 2\pi i\cos^2\theta \Gamma} \Bigg ), \\ S^{R^eR^e} = 1 - 4\pi i\Gamma \Bigg (\frac{\cos^2\phi}{\epsilon + 2\pi i\Gamma} + \frac{\cos^2\theta \sin^2\phi}{\epsilon + 2\pi i\cos^2\theta \Gamma} \Bigg ), \\ \mathrm{and} \ \ S^{L^eR^e} = \frac{4\pi i\epsilon\cos\phi\sin^2\theta\sin\phi \Gamma^2}{(\epsilon+2\pi i\Gamma)(\epsilon+2\pi i\cos^2\theta \Gamma)}, \end{gathered} \end{equation} where we display the case of equal coupling to the left and right leads, $\Gamma_L=\Gamma_R=\Gamma$, for brevity. The components of this scattering matrix can be used in Eqs. \ref{static gen 1} and \ref{dyn gen 1} in order to determine the heat and charge transfer statistics of a driven Majorana Y-junction. From the form of the scattering matrix components it is evident that there exist two distinct energy scales that will influence the transport. These scales are illustrated in Fig.\ \ref{fig:Scat} for the case of the Andreev reflection component of the scattering matrix, $S^{L^eL^h}(\epsilon)=1-S^{L^eL^e}(\epsilon)$. In Fig.\ \ref{fig:Scat}(a) we see that the energy dependence of the Andreev reflection via the Majorana zero modes consists of the sum of two peaks centred at $\epsilon=0$. The width of the narrower peak, $\Gamma_R \cos^2\theta$, is set by the position in the parameter space $(\theta,\phi)$, with the width decreasing as we approach the line $\theta=\pi/2$, corresponding to the equator of the spherical parameter space shown in Fig.\ \ref{Setup}(b). This energy scale is no longer visible once we move a sufficient distance away from this line, and the larger energy scale dominates. The size of this larger energy scale is set by the strength of the coupling to the external leads, $\Gamma_{L/R}$. It is also worth noting the difference in behaviour between the real and imaginary components of the scattering matrix in the limit $\epsilon \to 0$. Whereas the real part can be approximated as constant in this limit, the imaginary part varies linearly with energy, and hence quantities that include this contribution will show sensitivity to the energy dependence of the scattering matrix even in the limit $T \to 0$. \begin{figure} \includegraphics[width=0.45\textwidth]{"Scat".pdf} \caption{(a) Real and (b) imaginary parts of the Andreev reflection component of the scattering matrix for the topological superconducting Y-junction. Results are plotted for several positions in the parameter space, $(\theta_0,\phi_0)$, and for equal coupling to the left and right external leads, $\Gamma_L=\Gamma_R=1$.} \label{fig:Scat} \end{figure} \section{Heat and charge transport cumulants in small cycles} \label{sec:transport} \begin{figure*} \includegraphics[width=0.9\textwidth]{"Static_Noise".pdf} \caption{Period-averaged static contribution to the second cumulant of (a) the pumped heat and (b) the pumped charge throughout the driving of a Majorana Y-junction centred at $(\theta_0,\phi_0) = (\pi/2-0.1,\pi/4)$, with amplitude $\theta_\omega = \phi_\omega = 0.01$. The noise is plotted as a function of the external lead temperature $T/\Gamma_R$, for a driving frequency $\omega/\Gamma_R=0.001$. The insets show the temperature dependence of this quantity scaled by $T^5$ and $T$ for heat and charge respectively, highlighting the behaviour as $T \to 0$. The different colours correspond to various values of the coupling between the Y-junction and the external leads $\Gamma_L/\Gamma_R$ (cf. legend).} \label{fig:elas heat noise} \end{figure*} With the motivation of studying the FCS for a Majorana braiding protocol, we begin by considering a situation where the superconducting Y-junction, outlined in Sec. \ref{Braiding Sec}, is driven through a small amplitude cycle on the surface of the spherical phase space defined by the parameters $\theta$ and $\phi$, illustrated by the contour $\mathcal{C}_s$ in Fig. \ref{Setup}(b). The CGF for this process is found by substituting the scattering matrix defined in Eq.\ \ref{specific matrix 1} into the elastic and dynamic contributions (Eqs. \ref{static gen 1} and \ref{dyn gen 1}) at some time $t$. For simplicity, we will consider the case in which there is no chemical potential bias, $\mu_L=\mu_R=0$, between the external leads and both leads are held at the same temperature, $T_L=T_R=T$, so that the distribution functions for holes and electrons in each lead are identical: $f_\mathrm{in}^{e,L}(\epsilon) = f_\mathrm{in}^{h,L}(\epsilon) = f_\mathrm{in}^{e,R}(\epsilon) = f_\mathrm{in}^{h,R}(\epsilon)$.
Our approach is valid in the adiabatic limit, which in this case corresponds to restricting the driving frequency to be much smaller than the coupling between the system and the external leads, $\omega \ll \Gamma_{L,R}$. The expressions for the generating function in Eqs. \ref{static gen 1} and \ref{dyn gen 1} can be used to determine all cumulants of both the heat, $\mathcal{M}^{(k)}$, and charge, $\mathcal{M}^{(k)}_{e}$, transport between the external leads: \begin{equation} \mathcal{M}^{(k)}_{(e)} = \frac{\partial^k G_{Q_{(e)}}(\lambda)}{\partial (i \lambda)^k}\Bigg \rvert _{\lambda=0}. \end{equation} For example, the first cumulants $\mathcal{M}^{(1)}$ and $\mathcal{M}^{(1)}_{e}$ correspond to the total heat, $\big<\hat{Q}(t)\big>$, and charge, $\big<\hat{Q}_{e}(t)\big>$, pumped during the cycle, respectively, and the second cumulants $\mathcal{M}^{(2)}_{(e)}$ give the variances, or noise, $ \big<\hat{Q}^2_{(e)}(t)\big> - \big<\hat{Q}_{(e)}(t)\big>^2$ of these distributions. \subsection{Elastic contribution to energy and charge transfer statistics} We first discuss the contribution to the CGF arising from elastic scattering events only, which would survive in the limit that the system is not driven and is hence relevant for cycles of arbitrary amplitude. For the case of heat transport we find that the static contribution can be expressed as \begin{equation} \label{static G 1} \begin{aligned} G^{\mathrm{elas}}(\lambda) = \mathcal{T} \int_{-\infty}^{\infty} d\epsilon \ln(1 + \sum_{n=-1}^{1} B_n(\epsilon) (e^{i\lambda \epsilon n}-1)), \end{aligned} \end{equation} where $\mathcal{T}$ denotes the period of the driving and the coefficients $B_n(\epsilon)$ take the form \begin{equation} \begin{gathered} B_{1}(\epsilon)=B_{-1}(\epsilon) = 4|S^{L^e,R^e}(\epsilon,\theta_0,\phi_0)|^2 f(\epsilon) \big ( 1-f(\epsilon) \big ). \end{gathered} \end{equation} Here, $(\theta_0,\phi_0)$ corresponds to the location of the driving path centre in parameter space. In this form the physical meaning of the CGF becomes clear, as heat is only transferred across the junction by the normal and Andreev transmission of electrons and holes in both directions. For example, the transmission of an electron from the left to the right lead will occur with a probability of $|S^{L^eR^e}(\epsilon)|^2 f(\epsilon)(1-f(\epsilon))$, as expected. The corresponding expression for charge transport is found to be \begin{equation} \label{static G Heat} G_{e}^{\mathrm{elas}}(\lambda) = \mathcal{T}\int_{-\infty}^{\infty} d\epsilon \ln(1 + \sum_{n=-2}^{2} B_n(\epsilon) (e^{i\lambda n}-1)), \end{equation} where \begin{equation} \nonumber \begin{gathered} \ \ B_{-1}(\epsilon) = B_1(\epsilon) = 4|S^{L^eR^e}(\epsilon,\theta_0,\phi_0)|^2 f(\epsilon) \big ( 1-f(\epsilon) \big ), \\ \ \ B_{-2}(\epsilon) = B_{2}(\epsilon) = |S^{L^eL^h}(\epsilon,\theta_0,\phi_0)|^2f(\epsilon) \big ( 1-f(\epsilon) \big ). \end{gathered} \end{equation} Here we see the additional contribution of Andreev reflection processes, which result in the creation and annihilation of Cooper pairs within the superconducting nanowire system. These processes result in the propagation of an electronic charge of $\pm 2e$ but no energy transport in the form of heat. In the absence of both a temperature and a chemical potential bias between the external leads, the heat and charge current contributions arising from the elastic CGF, $ G_{(e)}^{\mathrm{elas}}(\lambda)$, are identically zero, $\big<\hat{Q}_{\mathrm{elas}} \big> = \big<\hat{Q}_{e,\mathrm{elas}}\big>=0$.
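As an illustration of how cumulants follow from the CGF, the sketch below evaluates the elastic heat CGF of Eq.~\ref{static G 1} on an energy grid and extracts the first two cumulants by finite differences in the counting field, using the closed form of $S^{L^eR^e}$ from Eq.~\ref{specific matrix 1}. The parameter values are illustrative only, and the driving period is set to unity.
\begin{verbatim}
import numpy as np

# Cumulants from the elastic heat CGF, Eq. (static G 1), per driving period.
# Units: hbar = k_B = 1; Gamma, T and (theta0, phi0) are example values only.
Gamma, T = 1.0, 0.5
th, ph = np.pi/2 - 0.1, np.pi/4

def S_LR(eps):  # closed form of S^{LeRe}, Eq. (specific matrix 1), equal couplings
    num = 4j*np.pi*eps*np.cos(ph)*np.sin(th)**2*np.sin(ph)*Gamma**2
    return num/((eps + 2j*np.pi*Gamma)*(eps + 2j*np.pi*np.cos(th)**2*Gamma))

eps = np.linspace(-40, 40, 20001)
deps = eps[1] - eps[0]
f = 1.0/(np.exp(eps/T) + 1.0)
B1 = 4*np.abs(S_LR(eps))**2 * f*(1 - f)   # B_{+1} = B_{-1}

def G(s):  # CGF evaluated at real s = i*lambda, so cumulants are plain derivatives
    return np.sum(np.log(1 + B1*(np.exp(s*eps) - 1) + B1*(np.exp(-s*eps) - 1)))*deps

h = 1e-3
mean = (G(h) - G(-h))/(2*h)          # first cumulant: vanishes in the absence of bias
var = (G(h) - 2*G(0) + G(-h))/h**2   # second cumulant: the elastic (thermal) heat noise
print(mean, var)
\end{verbatim}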
Despite this, the elastic scattering processes still allow for fluctuations. This contribution to the noise is thermal in nature, arising due to thermal fluctuations in the occupation of the ingoing scattering states. It vanishes in the limit $T=0$ and $\mu=0$ when the occupation of all ingoing energy states is fixed and no charge or energy transfer processes take place. This thermal noise is present in both energy and charge transport and, for our setup, it is natural to identify two distinct temperature regimes relative to the strength of coupling to the external metal leads. \subsubsection{Thermal noise at low temperature} For low values of the external lead temperature, $T\ll \min \Gamma_{L/R}$, the energy dependence of the absolute value of the scattering matrix elements can be considered weak (cf.\ Fig.\ \ref{fig:Scat}). Consequently, one would expect that the behaviour of the thermal noise as a function of temperature in this regime should be dictated by the Fermi distribution functions, $f(\epsilon)$, appearing in the elastic contribution to the CGF (Eqs.\ \ref{static G 1}, \ref{static G Heat}). Taking the scattering matrix elements to be energy independent, the form of the static contribution to the CGF implies that the elastic charge noise should depend linearly on temperature, whereas the elastic heat noise should vary as $T^3$. This behaviour is well understood and in agreement with previous studies of transport statistics \cite{Moskalets2002,Moskalets2002a}. The period averaged second cumulants of the static contribution to the CGF, $\frac{1}{\mathcal{T}}\mathcal{M}_{(e),\mathrm{elas}}^{(2)}$, which quantify the thermal noise, are plotted for both heat and charge in Fig. \ref{fig:elas heat noise}(a) and \ref{fig:elas heat noise}(b) respectively. At low temperatures $T\ll \min \Gamma_{L/R}$, we see that, as expected, the electronic charge thermal noise scales linearly with temperature. Additionally, we see that the charge noise becomes independent of the coupling to the external leads, $\Gamma_{L/R}$ (cf.\ Fig.\ \ref{fig:elas heat noise}(b) inset). This is a further consequence of the weak energy dependence of the frozen-in-time scattering matrix, $\hat{S}(\epsilon,X_{j,0})$, at energies close to zero. We find that the thermal heat noise, however, is sensitive to the energy dependence of the scattering matrix even in the low temperature limit. In fact, if the energy dependence were neglected entirely, and the scattering matrix evaluated at the chemical potential $\mu=0$, the elastic contribution to the heat noise would vanish. The inset in Fig.\ \ref{fig:elas heat noise}(a) illustrates that as $T \to 0$ the elastic heat noise scales as $T^5$ as opposed to the $T^3$ scaling originating from the distribution functions of the normal metal leads. The influence of the scattering matrix is also evident in the fact that the thermal noise is dependent upon the external lead coupling at all temperatures, in contrast to the case of charge transport. However, the energy dependence of the scattering matrix cannot be entirely neglected even in the case of charge transport, as its influence is evident in the contributions to the transport cumulants arising from the pumping.
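The $T^5$ law can be checked directly from the elastic CGF: expanding Eq.~\ref{static G 1} gives $\mathcal{M}^{(2)}_{\mathrm{elas}}/\mathcal{T}=\int d\epsilon\, 2\epsilon^2 B_1(\epsilon)$, and since $|S^{L^eR^e}(\epsilon)|^2\propto\epsilon^2$ at small energies the integrand carries four powers of $\epsilon$. A rough numerical estimate of the low-temperature exponent, with the same illustrative parameters as above, might read:
\begin{verbatim}
import numpy as np

# Low-temperature exponent of the elastic heat noise, M2/period = int 2 eps^2 B1 deps.
Gamma = 1.0
th, ph = np.pi/2 - 0.1, np.pi/4

def T2_LR(eps):  # |S^{LeRe}|^2 from Eq. (specific matrix 1); ~ eps^2 near eps = 0
    num = 4*np.pi*eps*np.cos(ph)*np.sin(th)**2*np.sin(ph)*Gamma**2
    den = np.abs((eps + 2j*np.pi*Gamma)*(eps + 2j*np.pi*np.cos(th)**2*Gamma))
    return (num/den)**2

def M2(T):
    eps = np.linspace(-60*T, 60*T, 40001)
    f = 1.0/(np.exp(eps/T) + 1.0)
    return np.sum(2*eps**2 * 4*T2_LR(eps)*f*(1 - f))*(eps[1] - eps[0])

Ts = np.array([1e-3, 2e-3, 4e-3, 8e-3])
slopes = np.diff(np.log([M2(T) for T in Ts]))/np.diff(np.log(Ts))
print(slopes)  # tends to 5 as T -> 0; it would be 3 for an energy-independent |S|^2
\end{verbatim}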
\subsubsection{Thermal noise at high temperature} \begin{figure*} \includegraphics[width=1\textwidth]{"Heat_Noise_Plots".pdf} \caption{The pumped contribution to the second cumulant of the heat transport throughout the driving of a Majorana Y-junction centred at $(\theta_0,\phi_0)=(\pi/2-0.1,\pi/4)$ with amplitude $\theta_\omega=\phi_\omega=0.01$. Plots $(d,e,f)$ show the geometric contribution whereas $(a,b,c)$ illustrate the remaining non-geometric part. Plots $(a,d)$ show the second cumulants as a function of temperature, with the inset highlighting the region $T \ll \omega$. Panels $(b,c,e,f)$ show the same quantities plotted against frequency. $(b,e)$ illustrate the behaviour as a function of low frequencies $\omega < T$ and $(c,f)$ at high frequencies $\omega > T$.} \label{fig: pumped heat noise} \end{figure*} As the temperature becomes comparable with $\min \Gamma_{L/R}$, the energy dependence of the scattering matrix becomes increasingly significant in both the cases of charge and heat noise. For scattering processes via Majorana zero modes, transport is dominated by low energy scattering states; hence high energy occupation fluctuations that occur as $T$ is increased do not contribute to the current noise. As a result, the rate at which the elastic noise increases slows down at high temperatures and will eventually plateau for the case of charge transport and scale $\propto T$ for that of heat transport. The temperature at which this occurs is proportional to the external lead coupling $\min \Gamma_{L/R}$, as can be clearly seen in the main panels of Figs.\ \ref{fig:elas heat noise}(a) and \ref{fig:elas heat noise}(b) for heat and charge noise respectively. It is also worth noting that, at all temperatures, both the period-averaged charge and heat thermal noises are independent of the driving frequency. This is to be expected, as the components of the scattering matrix responsible for elastic scattering events are not influenced by the driving. \subsection{Averaged pumped heat and charge} Next, we analyse the more interesting contributions to the transport statistics arising from the pumped contribution to the CGF given in Eq. \ref{dyn gen 1}. Again considering the case for which the external leads are held at the same temperature, $T$, and at zero chemical potential, $\mu=0$, one finds that the charge pumped during any modulation of the Majorana Y-junction is identically zero. This is a direct result of the electron-hole symmetry of the coupling between the Majorana zero modes and the leads \cite{Meidan2019}. This result is in contrast to previous works on adiabatic pumps in which the scattering matrix does not possess such a symmetry and the pumped charge is found to vary linearly with the pumping frequency \cite{Moskalets2002}. Despite this, there is a finite heat current pumped across the junction which, in the case of zero temperature bias, arises solely from the geometric part of the CGF, $G_Q^{\mathrm{geom}}(\lambda)$: \begin{equation} \label{pumped heat} \begin{aligned} &\big<Q^{\mathrm{pump}}\big> = \big<Q^{\mathrm{geom}}\big> = \frac{\partial}{\partial (i \lambda)} G^{\mathrm{geom}}(\lambda)\Bigg \rvert _{\lambda=0} \\ =&\sum_{\beta=eL,eR} 4 \int_{-\infty}^{\infty} d\epsilon \ \epsilon \iint \frac{\partial f(\epsilon)}{\partial \epsilon} \Im \Bigg[\frac{\partial S^{eL\beta}(\epsilon)}{\partial \theta}\frac{\partial S^{eL\beta}(\epsilon)}{\partial \phi}\Bigg] d\theta d\phi.
\end{aligned} \end{equation} In this form the geometric nature of the pumped heat becomes clear, since its value depends only upon the contour traversed in parameter space throughout the driving and is independent of the driving frequency itself. The fact that the pumped heat arises solely from the geometric contribution to the CGF means that this expression is valid for arbitrary amplitude cycles in parameter space (see Sec.~\ref{Large Amp Drive}), and in particular for that of the Majorana braiding protocol illustrated in Fig. \ref{Setup}(b). The energy pumped throughout such a process is found to be in agreement with Ref. \cite{Meidan2019}. The Majorana braiding is of particular interest since the path traversed in parameter space during the process is topologically protected against fluctuations in the driving. As a consequence, any transport properties that can be shown to be geometric in nature will also be protected. Furthermore, at low temperatures, $T \rightarrow 0$, the derivative of the Fermi function ensures that only particles with energies close to the chemical potential take part in the transport, which in this case corresponds to taking the limit $\epsilon \rightarrow 0$ of the area integral in Eq. \ref{pumped heat}. In this limit the contour traversed in parameter space maps onto a fixed path in scattering matrix space and hence the pumped heat tends to a universal value, independent of the coupling to the external leads as well as the nature of the driving \cite{Meidan2019}: \begin{equation} \frac{Q}{2T \log 2} = \frac{1}{4}. \end{equation} \subsection{Heat and charge noise from pumping} We now extend our analysis beyond the known results for the average current by analysing the behaviour of higher order cumulants of the contribution to the CGF arising from the time dependent pumping, for transport via Majorana modes. When dynamic processes are included, noise can originate not only from thermal fluctuations, but also from the action of the pump itself. This noise arises due to the non-equilibrium nature of the outgoing scattering states, as a consequence of the possibility of scattering events between nearest energy sidebands, and is present in both the cases of heat and electronic transport. This sideband scattering results in correlations between outgoing particle distributions at energies within the range $\epsilon \pm \omega$, which manifest themselves as a source of noise in the pumped heat and charge. This source of noise vanishes in the case that the pump is switched off and inelastic scattering events between sidebands hence cease to occur. From the total pumped noise we can isolate the contribution that arises from the geometric part of the CGF, which we denote $\mathcal{M}^{(2)}_{(e),\mathrm{geom}}$. This is the additional noise that is observed in the case that two parameters are driven simultaneously in a closed cycle. The remaining pumped noise, denoted $\mathcal{M}^{(2)}_{(e),\mathrm{pump}}$, would be present even in the case that just a single parameter is driven. These pumped contributions to the second cumulant of the heat and charge noise are shown in Figs.\ \ref{fig: pumped heat noise} and \ref{fig: pumped charge noise}, respectively. They illustrate the existence of three distinct temperature regimes that dictate the behaviour of the noise as a function of driving frequency $\omega$, each of which is discussed in the following sections.
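Before turning to the individual temperature regimes, we note that the geometric formula Eq.~\ref{pumped heat} is straightforward to evaluate numerically for a small cycle: the area integral is approximated by the integrand at the cycle centre times the enclosed area, with the parameter derivatives of the closed-form matrix elements of Eq.~\ref{specific matrix 1} taken by central differences. The sketch below is schematic, with illustrative parameter values, and the conjugation conventions inside $\Im[\cdot]$ follow the displayed formula literally.
\begin{verbatim}
import numpy as np

# Schematic evaluation of Eq. (pumped heat) for a small circular cycle of radius
# 0.01 centred at (theta0, phi0), with equal couplings; illustrative values only.
Gamma, T = 1.0, 0.5
th0, ph0, amp = np.pi/2 - 0.1, np.pi/4, 0.01

def S_LL(eps, th, ph):  # S^{LeLe} of Eq. (specific matrix 1)
    return 1 - 4j*np.pi*Gamma*(np.sin(ph)**2/(eps + 2j*np.pi*Gamma)
        + np.cos(th)**2*np.cos(ph)**2/(eps + 2j*np.pi*np.cos(th)**2*Gamma))

def S_LR(eps, th, ph):  # S^{LeRe} of Eq. (specific matrix 1)
    return (4j*np.pi*eps*np.cos(ph)*np.sin(th)**2*np.sin(ph)*Gamma**2
        / ((eps + 2j*np.pi*Gamma)*(eps + 2j*np.pi*np.cos(th)**2*Gamma)))

def kernel(eps, S, d=1e-5):  # Im[dS/dtheta * dS/dphi] by central differences
    dth = (S(eps, th0 + d, ph0) - S(eps, th0 - d, ph0))/(2*d)
    dph = (S(eps, th0, ph0 + d) - S(eps, th0, ph0 - d))/(2*d)
    return np.imag(dth*dph)

eps = np.linspace(-30, 30, 20001)
dfde = -np.exp(eps/T)/(T*(np.exp(eps/T) + 1)**2)  # derivative of the Fermi function
integrand = 4*eps*dfde*(kernel(eps, S_LL) + kernel(eps, S_LR))  # beta = eL, eR
Q_pump = np.sum(integrand)*(eps[1] - eps[0])*np.pi*amp**2  # times enclosed area
print(Q_pump)
\end{verbatim}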
\subsubsection{Low temperature regime: $T \ll \omega$} \begin{figure*} \includegraphics[width=1\textwidth]{"Charge_Noise_Plots".pdf} \caption{The pumped contribution to the second cumulant of the charge transport throughout the driving of a Majorana Y-junction centred at $(\theta_0,\phi_0)=(\pi/2-0.1,\pi/4)$ with amplitude $\theta_\omega=\phi_\omega=0.01$. Plots $(d,e,f)$ show the geometric contribution whereas $(a,b,c)$ illustrate the remaining non-geometric part. Plots $(a,d)$ show the second cumulants as a function of temperature, with the inset highlighting the region $T \ll \omega$. Panels $(b,c,e,f)$ show the same quantities plotted against frequency. $(b,e)$ illustrate the behaviour as a function of low frequencies $\omega < T$ and $(c,f)$ at high frequencies $\omega > T$.} \label{fig: pumped charge noise} \end{figure*} In the case that the temperature is lowered below the energy associated with the driving frequency $\omega$, the thermal noise becomes negligible and the noise associated with the pumping itself dominates. In the case of driven noise, the key quantity dictating the characteristic behaviour is the difference in Fermi occupation functions between neighbouring energy sidebands, between which the pump can stimulate transitions. In this regime, the quantity $f_0(\epsilon)-f_0(\epsilon \pm \omega)$ is only non-zero over an energy window close to $\epsilon=0$, the width of which scales linearly with $\omega$, but is insensitive to the temperature of the external leads. The absence of temperature dependence is reflected in Figs.\ \ref{fig: pumped heat noise}(a,d) and \ref{fig: pumped charge noise}(a,d) for heat and charge transport respectively. The inset panels within each of these figures show that the pumped contributions to the noise tend to some non-zero value in the limit $T \to 0$, in contrast to the static contributions to the noise (cf. Fig. \ref{fig:elas heat noise}). Although the energy dependence of the real contribution to the scattering matrix is negligible in this low temperature limit (cf.\ Fig.\ \ref{fig:Scat}), the linear energy dependence of the imaginary contribution will influence the transport properties, as will the behaviour of the scattering matrix derivatives appearing in the inelastic terms $S_{\pm \omega}(\epsilon)$. This energy dependence manifests itself in the form of a difference in the frequency dependence between the geometric and non-geometric contributions to the pumped noise, since they depend differently on the scattering matrices, as is implicit in Eq.\ \ref{GF Full Form}. Specifically, $\mathcal{M}^{(2)}_{\mathrm{pump}}\propto\omega^4$ and $\mathcal{M}^{(2)}_{\mathrm{geom}}\propto\omega^5$ (cf. Fig.\ \ref{fig: pumped heat noise}(c,f)) whereas for charge the non-geometric and geometric contributions are found to vary as $\omega^2$ and $\omega^3$ respectively (cf.\ Fig.\ \ref{fig: pumped charge noise}(c,f)). This difference between heat and charge noise can be justified by the fact that the dominant process is the scattering from states of energy $\epsilon$ to $\epsilon\pm \omega$. Consequently, the heat noise is underpinned by the same fluctuations as those in the case of charge, and differs only in that the scattering events are weighted by the energy absorbed/emitted $\sim \omega$. \subsubsection{Mid-temperature regime: $\omega<T\ll\Gamma_{L,R}$} As the temperature is increased beyond the driving frequency, the temperature becomes the quantity which determines the energy window within which scattering events can occur.
This transition can be seen in panels (a) and (d) of Figs.\ \ref{fig: pumped heat noise} and \ref{fig: pumped charge noise} by the deviation of the pumped noises away from their corresponding constant low temperature values at approximately $T=\omega$. Beyond this point the temperature dependence is governed by a combination of the distribution functions and the energy dependence of the scattering matrix. Initially, the heat noise varies as $T^5$ and we find that the charge noise is proportional to $T^3$, with the ratio $\mathcal{M}^{(2)}/\mathcal{M}^{(2)}_e \propto T^2$, where the temperature now plays the same role as the frequency in the previous case, in agreement with previous works \cite{Moskalets2004}. However, as $T$ increases further we see that, for both heat and charge, the second cumulant is not monotonic and exhibits a turning point corresponding to the temperature exceeding the width of the scattering matrix resonance. The width of this resonance is set by the location of the centre of the driving in the parameter space, $(\theta_0,\phi_0)$ (cf.\ Fig.\ \ref{fig:Scat}). We also see that the pumped contributions to both the heat and charge noise undergo a sign change which originates from the energy dependence of the derivatives of the scattering matrix with respect to the driving parameters, $S_{\pm \omega}(\epsilon)$, which dictate transitions between nearest energy sidebands. The sum of the static and pumped contributions to the noise, however, remains positive at all temperatures. The driving frequency dependence of the pumped noise in this regime is similar for both the cases of heat and charge. The difference between the non-geometric and geometric contributions persists, however, as illustrated in panels (b) and (e), respectively. We see that the non-geometric part is now inversely proportional to $\omega$ whereas the geometric contribution is independent of the frequency of the driving and determined purely by the path traversed in parameter space. \subsubsection{High temperature regime: $T>\Gamma_{L,R}$} As the temperature is increased beyond the broadening of the scattering matrix resonance, set by the strength of the coupling between the system and the external leads, the scattering matrix dependence on energy is dominated by the generic $1/\epsilon^2$ behaviour. This leads to saturation of the charge noise and to heat noise that is linear in $T$ (cf.\ panels (a,d) of Figs.\ \ref{fig: pumped heat noise} and \ref{fig: pumped charge noise}). \section{Impact upon Fluctuation Theorems} \label{sec:theorem} \begin{figure} \includegraphics[width=0.45\textwidth]{"Prob_Small_Cycle".pdf} \caption{Probability distribution, $P(Q)$, for the heat pumped via the small amplitude ($\theta_\omega=\phi_\omega=0.01$) driving of a Majorana Y-junction centred at $(\theta_0,\phi_0)=(\frac{\pi}{2}-0.01,\frac{\pi}{4})$. Results are shown for several values of the coupling to the external leads, $\Gamma_L=\Gamma_R=\Gamma$, with an external lead temperature of $T/\omega = 10$. The inset shows the corresponding behaviour of the fluctuation theorem violation quantifier $A(\lambda)=|\chi(\lambda)-\chi(-\lambda)|$ which is identically zero when the Gallavotti-Cohen fluctuation theorem holds true.} \label{fig:Prob Small Cycle} \end{figure} When studying systems which involve heat transfer to thermal reservoirs, fluctuation theorems (FT) dictate the likelihood of anomalous transfer events which appear in violation of the second law of thermodynamics.
Therefore, they provide useful information on the nature of the heat flow. From their general formulation in terms of entropy production \cite{seifert2005entropy,seifert2012stochastic}, fluctuation theorems can be recast in more specific forms for different settings. One such example is the Gallavotti-Cohen (GC) FT, determining the statistics of heat transfer between reservoirs at different temperatures. It states that, over a sufficiently long time interval $\tau$ \cite{Ren2010,Gallavotti1995}: \begin{equation} \label{fluc thm} \lim_{\tau \rightarrow \infty} \frac{1}{\tau}\ln \Bigg [\frac{P_\tau(Q)}{P_\tau(-Q)} \Bigg ] = \frac{Q(\beta_R-\beta_L)}{\tau}, \end{equation} where $P_\tau(Q)$ denotes the probability distribution of the heat $Q$ transferred from the left to the right bath and $\beta_{L,R}=\frac{1}{k_BT_{L,R}}$. This statement quantifies the probability of heat transfer occurring against the thermal gradient. However, in the case of cyclic time-dependent manipulations of the system, it has been shown \cite{Ren2010} that geometric contributions to the heat transfer statistics lead to the addition of correction terms to this theorem. The formalism presented in Sec.~\ref{sec:FCS} allows us to compute the corrections to the fluctuation theorem for systems in which this geometric term is topologically protected against fluctuations in the driving. In order to compute these corrections we start by noticing that the GC fluctuation theorem holds if and only if the characteristic function obeys the Gallavotti-Cohen symmetry $\chi(\lambda) = \chi(-\lambda + i \beta^*)$, where $\beta^* \equiv (\beta_R - \beta_L) = 0$ for our system of interest, since the temperatures of the external leads are assumed to be equal and remain constant throughout the braiding process. Using this expected symmetry of the CF, we can define the following function quantifying the corrections to the fluctuation theorem: \begin{eqnarray} A(\lambda) = |\chi(\lambda) - \chi(-\lambda)|. \end{eqnarray} A non-zero value of $A(\lambda)$ at any value of the counting field, $\lambda$, indicates a correction to the GCFT. For the case of small amplitude oscillations in the parameter space of the Majorana Y-junction, we can access the probability distribution of the pumped heat via the inverse Fourier transform of the exponentiated total CGF $G(\lambda)$ defined in Eq.\ \ref{CGF}. The probability distributions for one such cycle are illustrated in Fig. \ref{fig:Prob Small Cycle} for several values of the coupling to the external leads $\Gamma$. Here the asymmetry of the probability distribution with respect to $Q=0$ is clearly visible and corresponds to the fact that heat is driven between the external leads, despite the absence of any temperature or chemical potential bias. The inset of Fig. \ref{fig:Prob Small Cycle} illustrates the behaviour of our violation quantifier $A(\lambda)$, which is non-zero and hence indicative of a correction to the FT. The magnitude of the correction also appears to be increasing with $\Gamma$, indicated by the larger area under the graph of $A(\lambda)$. This is a consequence of the increasing translation of $P(Q)$ as an increasing heat current is pumped between the external leads. However, this trend is not general, as increasing noise will act to obscure any translation of $P(Q)$, hence reducing the magnitude of the correction function $A(\lambda)$.
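The route from the CGF to Fig.~\ref{fig:Prob Small Cycle} can be illustrated with a toy cumulant generating function carrying only a mean and a variance (placeholder numbers, standing in for the full $G(\lambda)$ of the driven junction): exponentiate it, invert the Fourier transform to obtain $P(Q)$, and evaluate $A(\lambda)$.
\begin{verbatim}
import numpy as np

# Toy version of Fig. (Prob Small Cycle): chi(lam) = exp(G(lam)) with a model CGF
# carrying a mean Qbar and variance var (placeholders for the full braiding CGF).
Qbar, var = 0.25, 0.4

lam = np.linspace(-20, 20, 4001)
chi = np.exp(1j*Qbar*lam - 0.5*var*lam**2)
A = np.abs(chi - chi[::-1])               # A(lam) = |chi(lam) - chi(-lam)|

Q = np.linspace(-3, 3, 601)
dlam = lam[1] - lam[0]
P = np.array([np.sum(chi*np.exp(-1j*lam*q)).real for q in Q])*dlam/(2*np.pi)

print(Q[np.argmax(P)])   # P(Q) is centred at Qbar despite the absence of any bias
print(A.max())           # non-zero: the symmetry chi(lam) = chi(-lam) is broken
\end{verbatim}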
At low temperature, relative to the energy scales associated with the scattering matrix, the variance of $P(Q)$ is found to be decreasing with increasing coupling strength (cf.\ Fig.\ \ref{fig:elas heat noise}(a) inset). However, in the high temperature limit, the static noise becomes linearly dependent on $\Gamma$ and the resulting violation becomes less prominent with increasing coupling to the external leads. \subsection{Impact upon fluctuation theorems for arbitrary amplitude pumps} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{"Contour".pdf} \caption{An illustration of how the difference between contour integrals in opposite directions for arbitrary amplitude cycles can be broken down into the sum of similar differences on smaller cycles. This result is due to the cancellation of the integrals along the interior sides of the smaller cycles and is valid upon division of the contour $C$ into an arbitrary number of smaller cycles $\{C_i\}$.} \label{Contours} \end{figure} \label{Large Amp Drive} For the case of the Majorana braiding [cf.\ Fig.\ \ref{Setup}(b)], the amplitude of the parameter modulation is not small enough for the linear expansion of the scattering matrix in Eq. \ref{S expand} to apply. Yet, in this section, we show that our results can be extended to demonstrate the correction to the Gallavotti-Cohen fluctuation theorem for large amplitude cycles such as this. Firstly, we make use of the fact that we can write the total generating function for a small amplitude cycle, $G(\lambda)$, as the sum of geometric and dynamic contributions: $G(\lambda) = G^{\mathrm{geom}}(\lambda) + G^{\mathrm{dyn}}(\lambda)$. From our numerical results, we see that for the dynamic contribution, the GC symmetry holds for all $\lambda$ and consequently any corrections to the FT arise solely from the geometric contribution. Hence, an equivalent indicator of FT correction is given by $A^{\mathrm{geom}}(\lambda) = |\chi^{\mathrm{geom}}(\lambda) - \chi^{\mathrm{geom}}(-\lambda)|$, where $\chi^{\mathrm{geom}}(\lambda) = e^{G^{\mathrm{geom}}(\lambda)}$. This quantity, unlike $A(\lambda)$, can be calculated for arbitrary amplitude cycles. In order to demonstrate this, we define the generating function to be dependent upon the direction we travel around the driving contour in parameter space as $G^\circlearrowleft(\lambda)$ and $G^\circlearrowright(\lambda)$. The difference between these two generating functions $D_Q(\lambda) =G_Q^\circlearrowleft(\lambda)-G_Q^\circlearrowright(\lambda)$ will clearly change sign upon reversal of the pumping direction. For this reason, we can calculate this quantity for a large amplitude pump by dividing the area enclosed by the contour, traversed throughout the braiding process, into smaller areas, within which the weak amplitude approximation is valid. An example of this reasoning is illustrated in Fig. \ref{Contours}. If we write the directional generating functions for each small cycle as closed contour integrals in parameter space, $\ointclockwise _C ds \frac{dt}{ds} G_{\theta,\phi}(\lambda)$, we see that the subtraction of these integrals in opposite directions leads to the cancellation of the interior contributions.
This leaves only the desired line integral around the boundary of the larger cycle: \begin{equation} \begin{aligned} D_Q(\lambda) =& \int_{0}^{\mathcal{T}} dt \big( G^\circlearrowleft(\lambda,t)-G^\circlearrowright(\lambda,t) \big) \\ =& \ointclockwise _C ds \frac{dt}{ds} G_{\theta,\phi}(\lambda) - \ointctrclockwise _C ds \frac{dt}{ds} G_{\theta,\phi}(\lambda) \\ =& \sum_i \Bigg[ \ointclockwise_{C_i} ds_i\frac{dt}{ds_i} G_{\theta,\phi}(\lambda) - \ointctrclockwise_{C_i} ds_i\frac{dt}{ds_i} G_{\theta,\phi}(\lambda)\Bigg]. \end{aligned} \end{equation} We can hence obtain the quantity $D_Q(\lambda)$ for large amplitude cycles by summing the contributions of cycles in which the small amplitude approximation is valid. In the limit that the interior cycles are made infinitesimally small, the summation of the contributions from each smaller cycle can be used to approximate the form of the generating function resulting from the traversal of a contour of an arbitrary shape and size. The quantity $D_Q(\lambda)$ isolates the contribution to the generating function which is sensitive to the direction in which the pumping contour is traversed. One can show that, for a small amplitude, two-parameter pump, this term is proportional to $X_{\omega,1}X_{\omega,2}$ and corresponds to $2G^{\mathrm{geom}}(\lambda)$. We can hence access the violation function $A^{\mathrm{geom}}(\lambda)$ for arbitrary amplitude cycles. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{"GF_Plots".pdf} \caption{Absolute value (a) and argument (b) of the geometric contribution to the heat transport characteristic function $\chi^{\mathrm{geom}}(\lambda)$ for the case of a Majorana braiding protocol. Results are plotted for several values of the external lead temperature. Asymmetry of this function in $\lambda$ indicates an apparent violation of the Gallavotti-Cohen type fluctuation theorem.} \label{GF} \end{figure} The absolute value and argument of $\chi^{\mathrm{geom}}(\lambda)$ are plotted in Fig.\ \ref{GF}(a,b) for a Majorana braiding process. It can be seen that, although the GC symmetry is present in the real part of this quantity, the imaginary part is non-zero and antisymmetric with respect to $\lambda$. As a result, the violation function takes the form \begin{equation} A^{\mathrm{geom}}(\lambda)=2|\chi^{\mathrm{geom}}| |\sin\Big(\arg(\chi^{\mathrm{geom}})\Big)| \end{equation} and corrections are clearly required to the GCFT when considering a Majorana braiding process. Furthermore, the presence of this correction does not require the modulation of the external temperature gradient as in topologically trivial systems \cite{Ren2010} and hence stems solely from the cyclic variation of the system's internal parameters. In order to consider the temperature dependence of this apparent violation one must take into account two competing factors. Although the pumped heat increases as a function of $T$, illustrated by the increasing gradient of $\arg(\chi^{\mathrm{geom}})(\lambda)$ in Fig.\ \ref{GF}(b), we also know that the second cumulant of the pumped heat, $\mathcal{M}^{(2)}$, varies as $T^5$. This increased variance, indicated by the rate of decay of $|\chi^{\mathrm{geom}}|$ plotted in Fig.\ \ref{GF}(a), leads to the overlap of the probability distributions $P(Q)$ and $P(-Q)$ and hence a reduction in the correction to the GCFT.
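The cancellation of interior contributions used above is a discrete Stokes-type identity, which can be checked independently of the physics. In the toy verification below an arbitrary smooth pair of functions stands in for the integrand $G_{\theta,\phi}$; the sum of directed circulations around unit plaquettes reproduces the circulation around the outer boundary exactly.
\begin{verbatim}
import numpy as np

# Discrete check that interior edges cancel: the sum of directed circulations of
# unit plaquettes equals the circulation around the outer boundary of the region.
def F1(x, y): return np.sin(3*y) + x*y   # arbitrary smooth stand-in components
def F2(x, y): return np.cos(2*x) - y**2

n, h = 40, 0.025
x = np.arange(n + 1)*h
y = np.arange(n + 1)*h

def circ(i, j, di, dj):  # circulation around one rectangle, edges at midpoints
    bottom = sum(F1(0.5*(x[k] + x[k+1]), y[j]) * h for k in range(i, i + di))
    top = sum(F1(0.5*(x[k] + x[k+1]), y[j+dj]) * h for k in range(i, i + di))
    left = sum(F2(x[i], 0.5*(y[k] + y[k+1])) * h for k in range(j, j + dj))
    right = sum(F2(x[i+di], 0.5*(y[k] + y[k+1])) * h for k in range(j, j + dj))
    return bottom + right - top - left

big = circ(0, 0, n, n)
small = sum(circ(i, j, 1, 1) for i in range(n) for j in range(n))
print(big, small, abs(big - small) < 1e-12)  # identical up to rounding
\end{verbatim}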
Furthermore, the fact that the correction is purely geometric in nature means that, for $T\gg\omega$, the correction is dependent only on the contour traversed in parameter space and is independent of the driving frequency. Given that, in the case of a Majorana braiding, this contour is topologically protected against fluctuations in the driving, the behaviour of the violation function $A^{\mathrm{geom}}(\lambda)$ will also exhibit this protection. \section{Conclusions} A system driven in an adiabatic cycle shows corrections to thermodynamic fluctuation theorems which depend on geometric properties of the cycle, as opposed to its dynamical features. Here we have studied the statistics of heat transfer for adiabatic cycles associated with the topologically protected evolution of a quantum system, specifically, a 1-dimensional topological superconductor undergoing braiding of its Majorana zero modes. We have first obtained general expressions for the statistics of heat transfer, which extend known results for the charge transport full counting statistics. We have singled out the peculiarities of Majorana zero modes in the heat and charge current noise, including a correction to the Gallavotti-Cohen type fluctuation theorem, and have extended this result to finite amplitude cycles, showing that the heat transfer associated with Majorana braiding induces such a correction. As opposed to analogous corrections in non-topological systems, which require cyclical variation of the external temperatures \cite{Ren2010}, our contribution stems solely from a cycle in the system's parameter space at constant temperature gradient and is a result of the coherent dynamics of the driving. Moreover, the correction term is geometric in nature and topologically protected against small, slow fluctuations of the driving. The identification of corrections to transport fluctuation theorems, in terms of quantum coherent contributions to scattering processes, allows for further investigation to incorporate such contributions in properly modified fluctuation theorems. \section{Acknowledgments} A.R. acknowledges the UK Engineering and Physical Sciences Research Council (EPSRC) via Grant No. EP/P010180/1. D.M. acknowledges support from the Israel Science Foundation (Grant No. 1884/18).
\section{Introduction} \label{intro} Strong coupling studies of lattice 2-d principal chiral models, with the standard nearest-neighbour interaction \begin{equation} S_L = -2 N \beta \sum_{x,\mu} {\mathop{\rm Re}}{\mathop{\rm Tr}}\left[ U(x)\,U^\dagger(x{+}\mu)\right]\;, \;\;\;\;\;\;\;\;\;\;\;\;\;\beta\;=\;{1\over NT}\;, \label{action} \end{equation} have shown evidence of a large-$N$ phase transition at a finite $\beta_c$, separating the strong coupling and the weak coupling regions~\cite{Green,chiralPRL}. An analysis of the $18^{\rm th}$ order $N=\infty$ strong coupling series of the specific heat showed a second order critical behavior \begin{equation} C \sim |\beta - \beta_c |^{-\alpha} \,, \label{Ccrit} \end{equation} with the following estimates of $\beta_c$ and $\alpha$: $\beta_c = 0.3058(3)$ and $\alpha = 0.23(3)$~\cite{chiralPRL,chiralSC}. This critical phenomenon is somehow effectively decoupled from the continuum limit ($\beta\rightarrow \infty$); indeed, dimensionless ratios of physical quantities are reproduced with great accuracy even for $\beta < \beta_c$~\cite{chiral,chiralPRL}. A critical behavior at $N=\infty$ is also present in 1-d lattice chiral models, where at $N=\infty$ the free energy is piecewise analytical with a third order transition between the strong coupling and weak coupling domains~\cite{Gross}. In these models the parameter $N$ plays a role analogous to the volume in ordinary systems with a finite number of degrees of freedom per site, and the double scaling limit describing the simultaneous approach $N\rightarrow \infty$ and $\beta\rightarrow\beta_c$ is shown to be equivalent to finite size scaling of 2-d spin systems close to criticality~\cite{Brezin,Heller}. In this paper we investigate the above large-$N$ critical phenomenon by Monte Carlo simulations, that is by extrapolating, possibly in a controlled manner, numerical results at sufficiently large $N$, in the same spirit as the double scaling limit technique developed in the studies of 1-d matrix models. We performed Monte Carlo simulations of $SU(N)$ and $U(N)$ models for several large values of $N$, studying the approach to $N=\infty$. Some $SU(N)$ Monte Carlo results at large $N$ were already presented in Ref.~\cite{chiral}. Since $SU(N)$ and $U(N)$ models are expected to have the same large-$N$ limit, $U(N)$ Monte Carlo results provide further information and a check of the $N\rightarrow\infty$ behavior of lattice principal chiral models. In the continuum limit $SU(N)$ and $U(N)$ 2-d lattice actions should describe the same theory even at finite $N$, in that the additional $U(1)$ degrees of freedom of the $U(N)$ models decouple. The $U(N)$ lattice action, when restricted to its $SU(N)$ degrees of freedom, represents a different regularization of the $SU(N)\times SU(N)$ chiral field theory. One-loop calculations in perturbation theory give the following $\Lambda$-parameter ratios \begin{equation} {\Lambda_{\overline {MS}}\over \Lambda_L^{^U}}\;=\; \sqrt{32}\,\exp\left( {\pi\over 2}\right)\;, \end{equation} \begin{equation} {\Lambda_L^{^{SU}}\over \Lambda_L^{^U}}\;=\; \exp\left( {\pi\over N^2}\right)\;, \end{equation} where $\Lambda_L^{^U}$ and $\Lambda_L^{^{SU}}$ are respectively the $\Lambda$-parameters of the $U(N)$ and $SU(N)$ lattice actions~(\ref{action}). The fundamental group-invariant correlation function of $SU(N)$ models is \begin{equation} G(x) \;=\; {1\over N} \langle\,{\rm Tr} \,[ U^\dagger (x) U(0) ]\, \rangle \;.
\label{GSUN} \end{equation} Introducing its lattice momentum transform $\widetilde{G}(p)$, we define the magnetic susceptibility $\chi=\widetilde{G}(0)$, and the second moment correlation length \begin{equation} \xi_G^2 = {1\over4\sin^2\pi/L} \, \left[{\widetilde G(0,0)\over\widetilde G(0,1)} - 1\right]\;\;\;. \label{xiG} \end{equation} In the $U(N)$ case we consider two Green's functions. One describes the propagation of $SU(N)$ degrees of freedom: \begin{eqnarray} G(x) &=& {1\over N} \langle\,{\rm Tr} \,[ {\hat{U}}^\dagger (x) {\hat{U}}(0) ]\, \rangle\;,\nonumber \\ {\hat{U}}(x)&\equiv& {U(x)\over ({\rm det} \,U(x))^{1/N}}\;. \label{GUN} \end{eqnarray} The other describes the propagation of the $U(1)$ degrees of freedom associated with the determinant of $U(x)$: \begin{equation} G_{\rm d}(x) \;=\; \langle\,\left( {\rm det} \,[ U^\dagger (x) U(0) ]\right)^{1/N}\, \rangle\;. \label{GUND} \end{equation} From the Green's functions $G(x)$ and $G_{\rm d}(x)$ we can define the corresponding magnetic susceptibilities $\chi$, $\chi_{\rm d}$ and second moment correlation lengths $\xi_G$, $\xi_{{\rm d}}$. At finite $N$, while $SU(N)$ lattice models do not have any singularity at finite $\beta$, $U(N)$ lattice models should undergo a phase transition, driven by the $U(1)$ degrees of freedom corresponding to the determinant of $U(x)$, and following a pattern similar to the 2-d XY model~\cite{Green2}. The mass propagating in the determinant channel $M_{\rm d}$ should vanish at a finite value $\beta_{\rm d}$ and stay zero for larger $\beta$. Then for $\beta > \beta_{\rm d}$ this sector of the theory decouples from the other ($SU(N)$) degrees of freedom, which are those determining the continuum limit of principal chiral models for $\beta\rightarrow\infty$. We recall that the 2-d XY model critical behavior is characterized by a sharp approach to the critical point $\beta_{XY}$ (the correlation length grows exponentially), a line of fixed points for $\beta > \beta_{XY}$, and a finite specific heat with a peak at a $\beta < \beta_{XY}$ (see e.g. Ref.~\cite{Gupta}). \section{Numerical results.} \label{NR} \subsection{The Monte Carlo algorithm.} In our simulations we used local algorithms containing overrelaxation procedures. In the $SU(N)$ case, we employed the Cabibbo-Marinari algorithm~\cite{Cabibbo} to update $SU(N)$ matrices by updating their $SU(2)$ subgroups, chosen randomly among the ${N(N-1)\over 2}$ subgroups acting on each $2\times 2$ submatrix. At each site the $SU(2)$ subgroup identified by the indices $i,j$ ($1\le i < j \le N$) was updated with a probability $P={2 \over N-1}p$, so that the average number of $SU(2)$ updates per $SU(N)$ site variable was $\bar{n}=p N$. In our simulations we always chose $p \lesssim 1$, decreasing $p$ when increasing $N$. We used $p\simeq 1$ at $N=9$, $p\simeq 2/3$ at $N=15$ and $p\simeq 1/2$ at $N=21,30$. The extension to the $U(N)$ case is easily achieved by updating, besides $SU(2)$ subgroups, $U(1)$ subgroups. In our simulations we updated the $U(1)$ subgroups identified by the diagonal elements of the $U(N)$ matrix. The $SU(2)$ and $U(1)$ updates were performed by a mixture of the over-heat-bath algorithm~\cite{PV} (90\%) and standard heat-bath (10\%). At fixed parameter $p$, the number of operations per site increases as $N^2$ at large $N$. The above algorithm experiences a critical slowing down in $N$, that is, at fixed correlation length the autocorrelation time grows with increasing $N$.
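A schematic version of this subgroup updating is sketched below. For brevity it replaces the over-heat-bath/heat-bath mixture used in our runs by a plain Metropolis accept/reject step for each randomly chosen $SU(2)$ subgroup, and the staple matrix $F$ (the sum of neighbouring link variables entering the local action $-2N\beta\,{\rm Re}\,{\rm Tr}[UF]$) is taken as a given placeholder.
\begin{verbatim}
import numpy as np

# Schematic Cabibbo-Marinari sweep for one SU(N) site variable U: random SU(2)
# rotations embedded in random (i,j) planes, accepted by Metropolis on the local
# action -2*N*beta*Re Tr[U F] (the production runs used over-heat-bath instead).
rng = np.random.default_rng(0)

def random_su2(eps=0.3):  # exp(i a.sigma) for a small random vector a
    a = eps*rng.normal(size=3)
    nrm = np.linalg.norm(a) + 1e-30
    c, s = np.cos(nrm), np.sin(nrm)/nrm
    return np.array([[c + 1j*s*a[2], s*(a[1] + 1j*a[0])],
                     [s*(-a[1] + 1j*a[0]), c - 1j*s*a[2]]])

def cabibbo_marinari_site(U, F, beta, N, p=0.5):
    for _ in range(int(p*N)):                # on average p*N SU(2) updates per site
        i, j = sorted(rng.choice(N, size=2, replace=False))
        R = np.eye(N, dtype=complex)
        R[np.ix_([i, j], [i, j])] = random_su2()  # SU(2) embedded in the (i,j) plane
        Unew = R @ U
        dS = -2*N*beta*np.real(np.trace((Unew - U) @ F))
        if dS <= 0 or rng.random() < np.exp(-dS):
            U = Unew
    return U

N = 9
U = np.linalg.qr(rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N)))[0]
F = U.conj().T*4  # placeholder staple (in a real sweep: sum of neighbouring links)
U = cabibbo_marinari_site(U, F, beta=0.30, N=N)
print(np.allclose(U @ U.conj().T, np.eye(N)))  # unitarity is preserved
\end{verbatim}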
This effect is partially compensated by a reduction of the fluctuations of group-invariant quantities when $N$ grows. In the $U(N)$ simulations the quantities related to the determinant channel are subject to large fluctuations, causing large errors in the measurements. In Tables~\ref{UNtable} and \ref{SUNtable} we present Monte Carlo data respectively for the $U(N)$ and $SU(N)$ simulations. Finite size systematic errors in evaluating infinite volume quantities should be smaller than the statistical errors of all numerical results presented in this paper. \subsection{Numerical evidence of a large-N phase transition.} Lattice chiral models have a peak in the specific heat \begin{equation} C\;=\; {1\over N} { {\rm d} E\over {\rm d} T} \end{equation} which becomes sharper and sharper with increasing $N$. In Figs.~\ref{CUN} and ~\ref{CSUN} we plot the specific heat respectively for the $U(N)$ and $SU(N)$ models. Such a behavior of the specific heat should be an indication of a phase transition for $N=\infty$ at a finite $\beta_c$. The positions of the peaks $\beta_{peak}$ in $SU(N)$ and $U(N)$ converge from opposite directions, restricting the possible values of $\beta_c$ to $0.304 \lesssim \beta_c \lesssim 0.309$. Notice that Monte Carlo data for $\beta\lesssim \beta_c\simeq 0.306$ approach, for growing $N$, the resummed $18^{\rm th}$ order large-$N$ strong coupling series of the specific heat~\cite{chiralSC}; in this region, as expected from strong coupling considerations, the convergence of $U(N)$ models is faster. A more accurate estimate of the critical coupling $\beta_c$ can be obtained by using a finite $N$ scaling Ansatz \begin{equation} \beta_{peak}(N) \;\simeq\; \beta_c\,+\, cN^{-\epsilon}\;, \label{FNS} \end{equation} in order to extrapolate $\beta_{peak}(N)$ to $N\rightarrow \infty$. The above Ansatz is suggested by the idea that the parameter $N$ may play a role quite analogous to the volume in ordinary systems close to criticality. This idea was already exploited in the study of 1-d matrix models~\cite{Carlson,Brezin,Heller}, where the double scaling limit turned out to be very similar to finite size scaling in a two-dimensional critical phenomenon. Substituting $L\rightarrow N$ and $1/\nu \rightarrow \epsilon$, Eq.~(\ref{FNS}) becomes the well-known finite size scaling relationship derived in the context of the renormalization group theory. Furthermore the exponent $\epsilon$ should be the same in the $U(N)$ and $SU(N)$ models, in that it should be a critical exponent associated with the $N=\infty$ phase transition. Notice that the function $\beta_{peak}(N)$ in Eq.~(\ref{FNS}) is considered at infinite space volume. In the study of ordinary critical phenomena the reweighting technique~\cite{Ferrenberg} turns out to be very efficient to determine quantities like the position of the specific heat peak. In our work we could use this technique only for $N=9$, since for larger $N$ the reweighting range around the point where the simulation is performed turned out to be much smaller than the typical $\beta$ interval of our simulations. For $N\geq 15$, the $\beta_{peak}(N)$ data and their errors were estimated from the specific heat data reported in the Tables~\ref{UNtable} and \ref{SUNtable}, supported by the direct measurements of the specific heat derivatives at each $\beta$. Our estimates of $\beta_{peak}$ at $N=9,15,21$ for $U(N)$ and $N=9,15,21,30$ for $SU(N)$ fit very well formula (\ref{FNS}).
By a fit with four free parameters, $\beta_c$, $\epsilon$, $c_{_{U(N)}}$ and $c_{_{SU(N)}}$, we found \begin{eqnarray} &&\beta_c \;=\; 0.3057(3)\;,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ &&\epsilon \;=\; 1.45(8)\;. \label{resfit} \end{eqnarray} In Fig.~\ref{bpeak} the fit result is compared with the $\beta_{peak}(N)$ data. A fit with two independent $\epsilon$ exponents, $\epsilon_{_{U(N)}}$ and $\epsilon_{_{SU(N)}}$, gave compatible results, but larger errors. Notice that this Monte Carlo estimate of $\beta_c$ is in agreement with the determination (\ref{Ccrit}) coming from strong coupling computations. We checked the finite $N$ scaling Ansatz (\ref{FNS}) in the similar context of the large-$N$ phase transition of 1-d lattice $U(N)$ chiral models with free boundary conditions, where the critical point $\beta_c$ and the critical exponents $\nu$ and $\alpha$ are known: $\beta_c=1/2$, $\nu=3/2$ and $\alpha=-1$. We computed the position of the specific heat peak at finite $N$, finding the asymptotic behavior (\ref{FNS}) with $\epsilon=2/3$. Details on these calculations are given in the Appendix. As already mentioned, from standard finite size scaling arguments the critical exponent $\epsilon$ should be related to the critical exponent $\nu$: $\epsilon=1/\nu$. Notice that the critical exponents $\nu$ and $\alpha$ satisfy a two-dimensional hyperscaling relation: $2\nu=2-\alpha$. In 1-d lattice chiral models the number $d_e=2$ of effective dimensions of the large-$N$ critical phenomenon is related to the fact that the double limit $N\rightarrow\infty$ and $\beta\rightarrow\beta_c$ is equivalent to the continuum limit of a two-dimensional gravity model with central charge $c=-2$. Since the large-$N$ phase transition of the 2-d lattice chiral models is of the second order type, its behavior cannot be found in the classification of double scaling limits of Refs.~\cite{Kazakov,Gross}, which are parametrized by a central charge $c<1$ implying $\alpha < 0$. Moreover, unlike 1-d lattice chiral models, the interpretation of the large-$N$ phase transition of 2-d lattice chiral models as an effective $d_e=2$ ordinary critical phenomenon does not seem to be valid: in fact, if $\epsilon=1/\nu$, by substituting our estimates of $\alpha$ and $\epsilon$ in the hyperscaling relation $d_e=(2-\alpha)\epsilon$ we would obtain $d_e=2.6(2)$. A more general thermodynamic inequality would give $d_e \geq (2-\alpha)\epsilon$~\cite{Stanley}. Monte Carlo data of $\chi$ and $\xi_G$ for $\beta\lesssim \beta_c$ compare very well with the large-$N$ strong coupling series of $\chi$ (up to $15^{\rm th}$ order) and $\xi_G$ (up to $14^{\rm th}$ order)~\cite{chiralPRL}. Fig.~\ref{xi}, where $\xi_G$ is plotted versus $\beta$, shows that data approach, with growing $N$, the curve obtained by resumming the strong coupling series of $\xi_G$~\cite{chiralSC}, and in particular the $U(N)$ data, whose convergence is faster, are in quantitative agreement. Large-$N$ numerical results seem to indicate that all physical quantities, such as $\chi$ and $\xi_G$, are well behaved functions of the internal energy $E$ even at $N=\infty$~\cite{chiral}. Therefore as a consequence of the specific heat divergence at $\beta_c$, the $N=\infty$ $\beta$-function $\beta_L(T)\equiv -a {\rm d}T/{\rm d}a$ should have a non-analytical zero at $\beta_c$, that is $\beta_L(\beta)\sim |\beta-\beta_c|^\alpha$ in the neighbourhood of $\beta_c$.
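The extrapolation (\ref{FNS}) is a standard nonlinear fit; the sketch below illustrates the four-parameter procedure on synthetic $\beta_{peak}(N)$ values generated from the quoted best-fit parameters plus noise (the actual input are the estimates extracted from Tables~\ref{UNtable} and \ref{SUNtable}, not reproduced here, and the amplitudes $c$ are hypothetical).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Illustration of the fit beta_peak(N) = beta_c + c*N**(-eps) with shared beta_c
# and eps and separate amplitudes for SU(N) and U(N). Synthetic data generated
# from the quoted fit (beta_c = 0.3057, eps = 1.45); amplitudes are hypothetical.
rng = np.random.default_rng(1)
N_su, N_u = np.array([9., 15., 21., 30.]), np.array([9., 15., 21.])

def peak(N, beta_c, eps, c):
    return beta_c + c*N**(-eps)

y_su = peak(N_su, 0.3057, 1.45, -0.35) + rng.normal(0, 2e-4, N_su.size)
y_u = peak(N_u, 0.3057, 1.45, +0.35) + rng.normal(0, 2e-4, N_u.size)

def model(_, beta_c, eps, c_su, c_u):  # four free parameters, joint data vector
    return np.concatenate([peak(N_su, beta_c, eps, c_su),
                           peak(N_u, beta_c, eps, c_u)])

popt, pcov = curve_fit(model, np.zeros(7), np.concatenate([y_su, y_u]),
                       p0=[0.30, 1.5, -0.3, 0.3])
print(popt[:2], np.sqrt(np.diag(pcov))[:2])  # recovers beta_c and eps with errors
\end{verbatim}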
By defining a new temperature proportional to the energy~\cite{Parisi}, this singularity disappears, and one can find good agreement between the measured mass scale and the asymptotic scaling predictions in the ``energy'' scheme even for $\beta < \beta_c$, where the strong coupling expansion is expected to converge~\cite{chiral}. In fact strong coupling computations show asymptotic scaling with a surprising accuracy of a few per cent~\cite{chiralPRL}. In the $U(N)$ case, a Kosterlitz-Thouless phase transition driven by the determinant is expected at $\beta_{\rm d} > \beta_{peak}$ for each finite $N$. Our data seem to support this picture; indeed, after the peak of $C$, the magnetic susceptibility $\chi_{\rm d}$ and the second moment correlation length $\xi_{{\rm d}}$ defined from the determinant correlation function~(\ref{GUND}) begin to grow very fast. In Fig.~\ref{chid} we plot $\chi_{\rm d}$ versus $\beta$. Green and Samuel argued (using strong coupling and weak coupling arguments) that the large-$N$ phase transition is nothing but the large-$N$ limit of the determinant phase transition present in the $U(N)$ lattice models~\cite{Green2,Green3}. According to this conjecture, in the large-$N$ limit $\beta_{\rm d}$ and $\beta_{peak}$ should both converge to $\beta_c$, and the order of the determinant phase transition would change from an infinite order of the Kosterlitz-Thouless mechanism to a second order with divergent specific heat. The large-$N$ phase transition of the $SU(N)$ models could then be explained by the fact that the large-$N$ limit of the $SU(N)$ theory should be the same as the large-$N$ limit of the $U(N)$ theory. Our numerical results give only a partial confirmation of this scenario: from the behavior of $\chi_{\rm d}$ and $\xi_{{\rm d}}$ with growing $N$ we can only infer that the expected phase transition is moving toward $\beta_c$. The large-$N$ strong coupling series of the mass $M_{\rm d}$ propagating in the determinant channel has been calculated up to $6^{\rm th}$ order, indicating a critical point, determined by the zero of the $M_{\rm d}$ series, slightly larger than our determination of $\beta_c$: $\beta_{\rm d}(N=\infty)\simeq 0.324$~\cite{Green2}. This discrepancy could be explained either by the shortness of the strong coupling series of $M_{\rm d}$, or by the fact that such a determination of $\beta_c$ relies on the absence of non-analyticity points before the strong coupling series of $M_{\rm d}$ vanishes, and therefore a non-analyticity at $\beta_c\simeq 0.306$ would invalidate all strong coupling predictions for $\beta > \beta_c$. \subsection{Phase distribution of the link operator.} In 1-d principal chiral models the large-$N$ third order phase transition is a consequence of a compactification of the eigenvalues of the link operator \begin{equation} L \;=\; U(x)U^\dagger (x+\mu)\;, \end{equation} which are of the form $\lambda=e^{i\theta}$. In the weak coupling region ($\beta > \beta_c$) the phase distribution of the eigenvalues of the link operator $L$, $\rho(\beta,\theta)$ with $\theta \in (-\pi,\pi]$, is nonvanishing only in the region $|\theta|\leq \theta_c(\beta) < \pi$. The third order critical point $\beta_c$ is determined by the limit condition $\theta_c(\beta)=\pi$, separating the weak coupling from the strong coupling region where $\rho (\beta,\pi) > 0$~\cite{Gross}.
In order to see if a similar phenomenon characterizes the large-$N$ phase transition also in 2-d, we have extracted from our simulations the phase distribution $\rho(\beta,\theta)$ of the eigenvalues of $L$. Notice that $\rho(\beta,\theta)=\rho(\beta,-\theta)$ by symmetry, therefore in the following we will show $\rho(\beta,\theta)$ only in the range $0\le \theta \le \pi$. Large-$N$ numerical results seem to support the compactification of the phase distribution at $\beta_c$; indeed, we found $\rho(\beta,\pi)\simeq 0$ for $\beta \gtrsim \beta_{peak}$ ($\rho(\beta,\pi)$ can be strictly zero only for $N=\infty$). This fact is illustrated in Fig.~\ref{rho21}, where we compare the distributions $\rho(\beta,\theta)$ at $\beta=0.300$ and $\beta=0.305$ for $N=21$, whose $\beta_{peak}\simeq 0.3025$: the distribution values at $\theta=\pi$ ($\rho(0.300,\pi)\simeq 0.010$ and $\rho(0.305,\pi)\simeq 0.0007$) decrease by about a factor of 15, becoming very small. Similar behaviors are observed at the other values of $N$. In the $SU(N)$ models $\rho(\beta,\theta)$ presents $N$ maxima, as Fig.~\ref{rho21} shows. This structure is absent in the $U(N)$ models and should disappear in the large-$N$ limit, in that the height of the peaks with respect to the background curve should vanish. For example, the $U(N)$ and $SU(N)$ phase distributions at $\beta=0$ are respectively \begin{equation} \rho(0,\theta)\;=\; {1\over 2\pi}\;, \end{equation} and \begin{equation} \rho(0,\theta) \;=\; {1\over 2\pi} \left[ 1 + (-1)^{N+1}{2\over N} \cos \left(N\theta\right)\right]\;. \label{rhobeta0} \end{equation} In our $SU(N)$ simulations we found the peak heights to decrease approximately as $1/N$. It is also interesting to see how the distributions $\rho(n,\beta,\theta)$ of the generalized link operators \begin{equation} L(n) \;=\; U(x)U^\dagger (x+n\mu)\;, \end{equation} ($\rho(1,\beta,\theta)\equiv \rho(\beta,\theta)$) evolve as a function of the distance $n$. In Fig.~\ref{rho15} we plot $\rho(n,\beta,\theta)$ for $N=15$, at $\beta=0.305$ ($\xi_G\simeq 3.79$) and for various values of $n$. When $d\equiv n/\xi\rightarrow \infty$, $\rho(n,\beta,\theta)$ appears to tend to the $\beta=0$ distribution (\ref{rhobeta0}). \subsection{Critical slowing down around the large-$N$ singularity.} The large-$N$ critical behavior causes a phenomenon of critical slowing down in the Monte Carlo simulations. At sufficiently large $N$ ($N\gtrsim 15$) and for both $U(N)$ and $SU(N)$ models, the autocorrelation times of the internal energy $\tau_E$ and the magnetic susceptibility $\tau_{\chi}$ (estimated by a blocking procedure) showed a maximum around the peak of the specific heat, and a sharper and sharper behavior with growing $N$. The increase of the autocorrelation times, with growing $N$, was much larger around the specific heat peak than elsewhere. In the $SU(N)$ simulations, $\tau_E$ ($\tau_\chi$) went from $\sim$ 600 (400) at $\beta=0.3025$ and $N=21$ to $\sim$ 3000 (2500) at $\beta=0.304$ and $N=30$ (the uncertainty on these numbers is large; they are just indicative). After the peak of $C$, $\tau_E$ and $\tau_\chi$ decreased; for example, at $N=30$ and $\beta=0.305$, $\tau_E\simeq 700$ and $\tau_\chi\simeq 300$. Similar behavior was observed in the $U(N)$ simulations. The above critical slowing down phenomenon represents the most serious difficulty in getting numerical results around $\beta_c$ at larger $N$ by the Monte Carlo algorithm used in this work.
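The autocorrelation times quoted above were estimated by a blocking procedure; a minimal version of such an estimator, applied here to a synthetic autocorrelated series rather than to actual simulation histories, is sketched below.
\begin{verbatim}
import numpy as np

# Minimal blocking estimate of an integrated autocorrelation time tau: block the
# series at increasing block sizes b until 0.5*b*var(block means)/var(data)
# plateaus; here a synthetic AR(1) series with tau ~ 50 stands in for, e.g., E.
rng = np.random.default_rng(2)
tau_true, n = 50.0, 2**17
a = np.exp(-1.0/tau_true)
x = np.empty(n); x[0] = 0.0
for t in range(1, n):
    x[t] = a*x[t-1] + rng.normal()

var_naive = x.var()
for k in range(12):
    b = 2**k
    blocks = x[:n - n % b].reshape(-1, b).mean(axis=1)
    print(b, round(0.5*b*blocks.var()/var_naive, 1))  # plateaus near tau, b >> tau
\end{verbatim}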
At large correlation length $\tau_\chi$ increases again due to the critical slowing down associated with the continuum limit, while $\tau_E$ tends to be stable. We also mention an attempt at a better algorithm in the $U(N)$ case, based on a microcanonical updating involving the $U(N)$ matrix globally instead of using its subgroups. A microcanonical updating of $U$ according to the action \begin{equation} A(U)\;=\; {\rm Re} \,{\rm Tr} \left[U F\right] \end{equation} can be achieved by performing the reflection with respect to the $U(N)$ matrix $U_{max}$ which maximizes $A(U)$: \begin{eqnarray} U_{new}&=& U_{max}\, U_{old}^\dagger \,U_{max}\;,\nonumber \\ U_{max}&=& {1\over \sqrt{F^\dagger F} } F^\dagger\;. \label{mcup} \end{eqnarray} Notice that the determination of $U_{max}$ requires the diagonalization of the complex matrix $F$. The update~(\ref{mcup}) does not change the action and it must be combined with ergodic algorithms (e.g. heat bath). We found that, at large $N$ and in the region of $\beta$ values we considered, the algorithm based on the $SU(2)$ and $U(1)$ subgroups performs better than the one based on the updating (\ref{mcup}). The latter may become convenient at relatively small $N$ and/or for larger correlation lengths. On the other hand, at large space correlation lengths multigrid algorithms should eventually become more efficient, in that they should have smaller dynamical exponents (see Refs.~\cite{Sokal,Meyer} for some implementations of multigrid algorithms in the context of lattice chiral models).
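A direct implementation of the reflection (\ref{mcup}) is short, since $U_{max}$ is the unitary factor in the polar decomposition of $F^\dagger$; one can verify numerically that the move preserves both unitarity and the local action. In the sketch below a random complex matrix stands in for the staple $F$.
\begin{verbatim}
import numpy as np
from scipy.linalg import polar

# Microcanonical reflection of Eq. (mcup): U_new = U_max U_old^dag U_max, where
# U_max = (F^dag F)^{-1/2} F^dag is the unitary polar factor of F^dag.
rng = np.random.default_rng(3)
N = 9

U_old = np.linalg.qr(rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N)))[0]
F = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))  # stand-in staple matrix

U_max, _ = polar(F.conj().T)            # F^dag = U_max P with P Hermitian positive
U_new = U_max @ U_old.conj().T @ U_max

A = lambda U: np.real(np.trace(U @ F))  # local action A(U) = Re Tr[U F]
print(np.isclose(A(U_new), A(U_old)))   # the action is unchanged
print(np.allclose(U_new @ U_new.conj().T, np.eye(N)))  # U_new remains unitary
\end{verbatim}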
\section{Introduction} Black holes, regions with very strong gravity from which not even light can escape, are one of the most fascinating theoretical predictions of Einstein's general relativity~\cite{Einstein1916}. The first exact solution to this theory was almost immediately found by Schwarzschild \cite{Schwarzschild1916}, describing a static spherically symmetric spacetime. However, it took several decades to fully understand its black-hole nature. This initiated the `golden age' of black hole studies, epitomized by the discovery of the astrophysically more relevant rotating Kerr solution \cite{Kerr}. Studies of various aspects of these `collapsed objects', such as influence on matter and fields, the no-hair conjecture, thermodynamic properties, or quantum evaporation, soon followed. Moreover, a great observational effort brought direct evidence of their existence in our universe when the Cygnus X-1 source was identified as a black hole. It now also seems that supermassive black holes reside in the nuclei of almost all galaxies. Mergers of black-hole binaries have recently been detected as the first gravitational wave signals. Another remarkable interplay between Einstein's theory and observed astronomical phenomena is the concept of the cosmological constant. The famous $\Lambda$-term was introduced by Einstein into his field equations to allow a static cosmological model \cite{Einstein1917}. However, it was soon demonstrated by de~Sitter \cite{deSitter1917} that the cosmological constant causes even an empty space to expand exponentially fast \cite{Schroedinger1956}. Nowadays, this is employed for a phenomenological description of the observed accelerated expansion of our universe caused by `dark energy'. The de~Sitter solution also captures the main features of the inflationary epoch in the very early universe. Despite all the great successes of Einstein's gravity theory, it also has its limits, in particular the impossibility of quantizing it in the same way as the other fundamental interactions, and perhaps some open cosmological issues. Various extensions of general relativity have thus been considered, see \cite{Sotiriou:2010, DeFelice:2010, Capozziello:2011, Clifton:2012} for reviews. In these modified theories, the black hole solutions play a prominent role, providing natural test beds for their comparison \cite{Tangherlini:1986, MP:1986, BoulwareDeser:1985, LuPerkinsPopeStelle:2015}. Assuming a constant scalar curvature, we derive a \emph{new class} of static spherically symmetric black hole solutions with a cosmological constant $\Lambda$ in quadratic gravity~\cite{Stelle:1978}, which includes the Einstein-Weyl theory~\cite{Weyl1919,Bach1921}. It generalizes \cite{Kottler:1918} to include higher-order gravity corrections, and \cite{LuPerkinsPopeStelle:2015, PodolskySvarcPravdaPravdova:2018a} to admit any $\Lambda$. In contrast with the black holes of \cite{LuPerkinsPopeStelle:2015, PodolskySvarcPravdaPravdova:2018a}, the second (cosmological) horizon may appear due to ${\Lambda>0}$. On large scales, the higher-order corrections considerably affect the asymptotic behaviour of the geometry, which, even in the case of ${\Lambda=0}$, is not asymptotically flat (except for finely tuned parameters). This additional freedom thus opens completely new and more involved possibilities. Moreover, both the cosmological constant and higher-order corrections are of key importance in quantum gravity models, e.g.,~\cite{CoPe:2006}.
Within this setting, the vacuum action of quadratic gravity contains $\Lambda$, the Ricci scalar $R$, and a contraction of the Weyl tensor $C_{abcd}$, namely \be S = \int \dd^4 x\, \sqrt{-g}\, \Big(\gamma \, (R-2\Lambda) +\beta\,R^2 - \alpha\, C_{abcd}\, C^{abcd}\Big) \,, \label{actionQG} \ee where $\alpha$, $\beta$, ${\gamma=G^{-1}}$ are constants. The corresponding field equations read \begin{align} &\gamma \left(R_{ab} - {\pul} R\, g_{ab}+\Lambda\,g_{ab}\right)-4 \alpha\,B_{ab} \nonumber \\ &\quad +2\beta\left(R_{ab}-\tfrac{1}{4}R\, g_{ab}+ g_{ab}\, \Box - \nabla_b \nabla_a\right) R = 0 \,, \label{GenQGFieldEq} \end{align} where ${B_{ab} \equiv \big( \nabla^c \nabla^d + {\pul} R^{cd} \big)C_{acbd}}$ is the traceless, symmetric and conserved~\emph{Bach~tensor}. Assuming ${R=\hbox{const.}}$, the last term in (\ref{GenQGFieldEq}) simplifies and the trace of the field equations implies ${R=4\Lambda}$, so that they become \be R_{ab}-\Lambda\,g_{ab}=4k\, B_{ab}\,, \qquad \hbox{with}\qquad k \equiv \frac{\alpha}{\gamma+8\beta\Lambda} \,, \label{fieldeqsEWmod} \ee see \cite{PravdaPravdovaPodolskySvarc:2017}. For ${k=0}$, vacuum Einstein equations with a cosmological constant are obtained. For ${\beta=0}$ we get Einstein-Weyl gravity. For ${\gamma+8\beta\Lambda= 0}$ the conformal Weyl theory is restored, in which the rotation curves of galaxies were studied \cite{MaKa:1989} within the spherically symmetric setting. Our solution, as a unifying model, may enable the analysis of relations between these theories in such astrophysical situations. \section{The geometry}\label{BH metric} A spherically symmetric metric is usually written~as \begin{equation} \dd s^2 = -h(\bar r)\,\dd t^2+\frac{\dd \bar r^2}{f(\bar r)}+\bar r^2(\dd \theta^2+\sin^2\theta\,\dd \phi^2) \,. \label{Einstein-WeylBH} \end{equation} However, in \cite{PodolskySvarcPravdaPravdova:2018a, PodolskySvarcPravdaPravdova:2018b} it was shown that for the investigation of such geometries in quadratic gravity an \emph{alternative form is more convenient}, \be \dd s^2 = \Omega^2(r)\big[\,\dd \theta^2+\sin^2\theta\,\dd \phi^2 -2\,\dd u\,\dd r+{\cal H}(r)\,\dd u^2 \,\big]\,. \label{BHmetric} \ee This is related to (\ref{Einstein-WeylBH}) via \begin{equation} \bar{r} = \Omega(r)\,, \qquad t = u - {\textstyle\int}\, {\H(r)}^{-1}\dd r \,, \label{to static} \end{equation} and the metric functions $\Omega$, $\H$ give $f$, $h$ using \be h({\bar r}) = -\Omega^2\, \H \,, \qquad f({\bar r}) = -\left(\frac{\Omega'}{\Omega}\right)^2 \H \,, \label{rcehf} \ee (prime denotes the derivative with respect to $r$). The new metric \eqref{BHmetric} is \emph{conformal} to a simple direct-product Kundt `seed', ${\dd s^2 =\Omega^2\,\dd s^2_{\hbox{\tiny Kundt}}}$, which is of the algebraic type D, see \cite{Stephanietal:2003, GriffithsPodolsky:2009, PravdaPravdovaPodolskySvarc:2017}. In the metric \eqref{BHmetric}, the \emph{Killing horizons} corresponding to ${\partial_u=\partial_t}$ are located at specific radii $r_h$ satisfying \begin{equation} \H \big|_{r=r_h}=0\,. \label{horizon} \end{equation} Of course, via \eqref{rcehf} this gives ${h({\bar r_h})=0=f({\bar r_h})}$. There is a time-scaling freedom ${t\to\sigma^{-1}\, t}$ of the metric \eqref{Einstein-WeylBH} implying ${h\to \sigma^2\, h}$, which can be used, e.g., to adjust the value of ${h}$ at a chosen radius.
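The relations \eqref{to static} and \eqref{rcehf} can be verified directly; the following sympy sketch (our own consistency check, not part of the paper) substitutes ${\dd t = \dd u-\H^{-1}\dd r}$ and ${\dd\bar r=\Omega'\,\dd r}$ into the $(t,\bar r)$ block of \eqref{Einstein-WeylBH} and recovers the $(u,r)$ block of \eqref{BHmetric}:
\begin{verbatim}
import sympy as sp

r, du, dr = sp.symbols('r du dr')
Om = sp.Function('Omega')(r)
H = sp.Function('H')(r)

h = -Om**2*H                       # eq. (rcehf)
f = -(Om.diff(r)/Om)**2*H

dt = du - dr/H                     # from t = u - int H^{-1} dr
drbar = Om.diff(r)*dr              # from rbar = Omega(r)

lhs = -h*dt**2 + drbar**2/f        # (t,rbar) block of the static form
rhs = Om**2*(H*du**2 - 2*du*dr)    # (u,r) block of the new form
print(sp.simplify(lhs - rhs))      # prints 0
\end{verbatim}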
To uniquely characterize the geometries \eqref{BHmetric}, we need the Weyl and Bach \emph{scalar curvature invariants}, \begin{align} C_{abcd}\, C^{abcd} &= \tfrac{1}{3}\,\Omega^{-4}\,({\cal H}'' +2)^2 \,, \label{invC} \\ B_{ab}\, B^{ab} &= \tfrac{1}{72}\,\Omega^{-8}\big[(\B_1)^2+2(\B_1+\B_2 )^2\big] \,,\label{invB} \end{align} where \emph{two independent Bach components} are \be \B_1 \equiv \H \H''''\,, \qquad \B_2 \equiv \H'\H'''-\tfrac{1}{2}{\H''}^2+2\,. \label{B2} \ee Interestingly, ${ B_{ab}=0 \ \Leftrightarrow \ B_{ab}\, B^{ab}=0}$. Thus we distinguish two geometrically different types of solutions in quadratic gravity defined by ${B_{ab}=0}$ and ${B_{ab}\ne0}$, respectively. \section{The field equations}\label{derivingFE} Under conformal transformations, the Bach tensor simply scales as ${B_{ab} = \Omega^{-2}\,B_{ab}^{\hbox{\tiny Kundt}}}$ and since higher-order corrections in \eqref{fieldeqsEWmod} are represented by the Bach tensor, using the metric \eqref{BHmetric} leads to a remarkable simplification of the field equations. Explicit evaluation of the field equations \eqref{fieldeqsEWmod} for \eqref{BHmetric}, using the Bianchi identities, yields two simple ODEs for the metric functions $\Omega(r)$ and ${\cal H}(r)$, namely \begin{align} \Omega\Omega''-2{\Omega'}^2 = &\ \tfrac{1}{3}k\, \B_1 \H^{-1} \,, \label{Eq1}\\ \Omega\Omega'{\cal H}'+3\Omega'^2{\cal H}+\Omega^2 -\Lambda\Omega^4 = &\ \tfrac{1}{3}k \,\B_2 \,, \label{Eq2} \end{align} see \cite{PPPS:2018d} for more details. It is also convenient to express the trace of \eqref{fieldeqsEWmod}, namely ${R=4\Lambda}$, \begin{equation} {\cal H}\Omega''+{\cal H}'\Omega'+{\textstyle \frac{1}{6}} ({\cal H}''+2)\Omega = \tfrac{2}{3}\Lambda\,\Omega^3 \,. \label{trace} \end{equation} In fact, it is the derivative of \eqref{Eq2} minus ${\H'}$ times~\eqref{Eq1}. The crucial point for further investigations is that Eqs.~\eqref{Eq1}, \eqref{Eq2} do not explicitly depend on~$r$. Solutions to such an \emph{autonomous system} can thus be found as \emph{power series} in $r$ expanded around \emph{any} point~${r_0}$ \be \Omega(r) = \Delta^n \, \sum_{i=0}^\infty a_i \,\Delta^{i}\,, \qquad \H(r) = \Delta^p \, \sum_{i=0}^\infty c_i \,\Delta^{i}\,, \label{rozvoj} \ee where ${\Delta\equiv r-r_0}$, ${n,p\in \mathbb{R}}$, and ${a_0,\,c_0\neq0}$. \subsection{Vanishing Bach tensor} \label{integration:Schw} For ${\B_1=0=\B_2}$, we deal with Einstein's theory, and Eqs.~\eqref{Eq1}, \eqref{Eq2} can be directly integrated. Using the gauge freedom ${r \to \lambda\,r+\nu}$, ${u \to\lambda^{-1}u}$ of the metric \eqref{BHmetric}, this immediately implies \begin{equation} \Omega(r)= \bar{r} = -\frac{1}{r}\,, \qquad \H(r) = \frac{\Lambda}{3}-r^2-2m\, r^3 \,, \label{SchwAdS} \end{equation} where the mass parameter $m$ is fixed by (\ref{horizon}), see (\ref{SchwAdSH}). These functions represent the \emph{Schwarzschild-(anti--)de Sitter} spacetime \cite{Kottler:1918,Stephanietal:2003, GriffithsPodolsky:2009} which, expressed in the form \eqref{Einstein-WeylBH} using \eqref{rcehf}, reads ${f=h=1-2m\,{\bar{r}}^{-1}-\frac{1}{3}\Lambda\,\bar{r}^2}$. It is well known \cite{GriffithsPodolsky:2009} that for ${0<9\Lambda m^2<1}$ there are \emph{two horizons} determined by \eqref{horizon}, namely the \emph{black-hole event horizon} at $r_h$ and the \emph{cosmological horizon} at ${r_c>r_h}$ (they degenerate to ${{\bar r}_h={\bar r}_c=3m=1/\sqrt\Lambda}$ when ${9\Lambda m^2=1}$; ${\Lambda<0}$ admits only the black hole horizon). 
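These horizons are easy to locate numerically. A short numpy sketch (ours, with illustrative values of $\Lambda$ and $m$ satisfying ${0<9\Lambda m^2<1}$) finds the two positive roots of ${f(\bar r)=0}$:
\begin{verbatim}
import numpy as np

Lam, m = 0.03, 1.0     # illustrative: 9*Lam*m**2 = 0.27 lies in (0,1)
# f(rbar) = 1 - 2m/rbar - (Lam/3)*rbar^2 = 0 is a cubic in rbar:
roots = np.roots([-Lam/3, 0.0, 1.0, -2*m])
real = roots[np.abs(roots.imag) < 1e-12].real
print(np.sort(real[real > 0]))    # [rbar_h, rbar_c] ~ [2.09, 8.79]
# for 9*Lam*m**2 = 1 the roots merge at rbar = 3m = 1/sqrt(Lam)
\end{verbatim}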
\subsection{Non-vanishing Bach tensor} \label{integration:nonSchw} In the generic case (${\B_1, \B_2 \ne0}$), the system \eqref{Eq1}, \eqref{Eq2} becomes non-trivially coupled, but its solutions can be found in the form \eqref{rozvoj}. Substituting these series into the field equations, we obtain polynomial expressions where the dominant (lowest) powers of $\Delta$ immediately put specific restrictions on the parameters ${[n,\,p]}$ and the possible value of $\Lambda$, see Tab.~\ref{table:npLambda} and \cite{PPPS:2018d}. In the next section, we will discuss the most interesting case ${[0,\, 1]}$ corresponding to a \emph{single} root ${r_0}$ of (\ref{horizon}). \begin{table}[htb] \begin{center} \caption{\label{table:npLambda} The only admitted parameters ${[n,\, p]}$ in \eqref{rozvoj}, and the cosmological constant ${\Lambda}$, restricted by dominant powers of $\Delta$ in the field equations \eqref{Eq1}, \eqref{Eq2}, and the trace \eqref{trace}. Note that in the last column, $n\not=-1,-1/2$.} \vspace{2.0mm} \begin{tabular}{c|ccccccccc} $n$\hs &\hs 0 &\hs 0 &\hs 1 &\hs $-1$ &\hs $-1$ &\hs 0 &\hs 0 &\hs ${<0}$ \\ \hline $p$\hs &\hs 1 &\hs 0 &\hs 0 &\hs 2 &\hs 0 &\hs 2 &\hs ${\ge 2}$ &\hs ${2n+2}$ \\ \hline $\Lambda$\hs&\hs any&\hs any&\hs any&\hs 0 &\hs ${\neq0}$&\hs ${\neq0}$&\hs ${\frac{3}{8k}}$&\hs $\frac{11n^2+6n+1}{1-4n^2}{\frac{3}{8k}}$\\ \end{tabular} \end{center} \end{table} \section{Explicit black holes\label{ExplicitBH}} In the case ${n=0}$, ${p=1}$, the root of~$\H$ representing the non-degenerate Killing horizon \eqref{horizon} is explicitly given by ${r_0\equiv r_h}$. The field equations (\ref{Eq1}), (\ref{Eq2}), with (\ref{trace}), then restrict the coefficients in the expansions \eqref{rozvoj} as \begin{align} &a_1 =\frac{1}{3c_0}\left[2\Lambda a_0^3-a_0(1+c_1)\right] , \nonumber \\ &c_2 =\frac{1}{6kc_0}\left[a_0^2(2-c_1-\Lambda a_0^2)+2k(c_1^2-1)\right] , \nonumber \\ &a_{l}=\frac{1}{l^2c_0}\Big[\,\tfrac{2}{3}\Lambda \sum^{l-1}_{j=0}\sum^{j}_{i=0}a_i\,a_{j-i}\,a_{l-1-j}-\tfrac{1}{3}\,a_{l-1} \nonumber\\ & \hspace{10.0mm} -\sum^{l}_{i=1}c_i\,a_{l-i}\left(l(l-i)+\tfrac{1}{6}i(i+1)\right)\Big] \,, \label{01coeff} \\ &c_{l+1}=\frac{3}{k(l+2)(l+1)l(l-1)} \nonumber \\ &\hspace{10.0mm} \times \sum^{l-1}_{i=0}a_i\, a_{l-i}(l-i)(l-1-3i) \,,\quad \hbox{for}\quad l\geq2\,,\nonumber \end{align} with three free parameters ${a_0,\ c_0,\ c_1}$. To identify the Schwarzschild-(anti--)de Sitter spacetime \eqref{SchwAdS} in the form (\ref{rozvoj}) with (\ref{01coeff}), we first evaluate the Bach tensor \eqref{B2} on the horizon, yielding ${\B_1(r_h) = 0}$, ${\B_2(r_h) = -\frac{3}{k}a_0^2\,b}$, where ${b\equiv\frac{1}{3}(c_1-2+\Lambda a_0^2)}$. Interestingly, by setting ${b=0}$ (i.e. for ${c_1=2-\Lambda a_0^2}$), the Bach tensor vanishes \emph{everywhere}. Employing the gauge freedom of \eqref{BHmetric}, we may also set \begin{equation} a_0 =-\frac{1}{r_h} \,, \qquad c_0 =r_h-\frac{\Lambda}{r_h} \,. \label{01background} \end{equation} The explicit solution \eqref{rozvoj}, \eqref{01coeff} for ${b=0}$ then becomes \begin{equation} \Omega(r)=-\frac{1}{r}\,, \qquad \H(r) = \frac{\Lambda}{3}-r^2-\Big(\,\frac{\Lambda}{3}-r_h^2\Big)\,\frac{r^3}{r_h^3} \,, \label{SchwAdSH} \end{equation} where the expansions (\ref{rozvoj}) were summed up as geometric series. This is exactly the \emph{Schwarzschild-(anti--)de Sitter black hole} \eqref{SchwAdS} since ${\frac{\Lambda}{3}-r_h^2=2m\,r_h^3}$.
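The recurrence \eqref{01coeff} is straightforward to evaluate numerically. The following Python sketch (ours; all parameter values are illustrative) generates the coefficients in the gauge \eqref{01background}. For ${b=0}$ it reproduces the structure behind \eqref{SchwAdSH}: the $a_i$ form the geometric series of ${\Omega=-1/r}$ (constant ratio ${-1/r_h}$) and ${c_i=0}$ for ${i\geq3}$, so that $\H$ is the cubic of \eqref{SchwAdSH}. Since $b$ enters only through ${c_1=2-\Lambda a_0^2+3b}$, the same routine also produces the generic coefficients, whose tail ratios estimate the radius of convergence (cf. the ratio test discussed below).
\begin{verbatim}
import numpy as np

def coefficients(a0, c0, c1, Lam, k, L=20):
    """Coefficients a_0..a_{L-1}, c_0..c_L of the recursion (01coeff)."""
    a = [a0, (2*Lam*a0**3 - a0*(1 + c1))/(3*c0)]
    c = [c0, c1,
         (a0**2*(2 - c1 - Lam*a0**2) + 2*k*(c1**2 - 1))/(6*k*c0)]
    for l in range(2, L):
        triple = sum(a[i]*a[j-i]*a[l-1-j]
                     for j in range(l) for i in range(j+1))
        mixed = sum(c[i]*a[l-i]*(l*(l-i) + i*(i+1)/6)
                    for i in range(1, l+1))
        a.append(((2*Lam/3)*triple - a[l-1]/3 - mixed)/(l**2*c0))
        c.append(3*sum(a[i]*a[l-i]*(l-i)*(l-1-3*i) for i in range(l))
                 /(k*(l+2)*(l+1)*l*(l-1)))
    return np.array(a), np.array(c)

rh, Lam, k = -1.0, 0.2, 0.5       # illustrative, gauge (01background)
a0, c0 = -1/rh, rh - Lam/rh

a, c = coefficients(a0, c0, 2 - Lam*a0**2, Lam, k)      # b = 0
print(a[1]/a[0], -1/rh)           # geometric series of Omega = -1/r
print(np.round(c[3:], 12))        # zeros: H is the cubic (SchwAdSH)

a, c = coefficients(a0, c0, 2 - Lam*a0**2 + 3*0.3, Lam, k, L=100)
print(1/np.abs(a[-1]/a[-2]))      # b = 0.3: estimated convergence radius
\end{verbatim}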
In the case ${b\neq0}$, we may now separate the `Bach contribution' in the coefficients \eqref{01coeff} proportional to $b$ by introducing $\alpha_i,\,\gamma_i$. With the same gauge choice \eqref{01background}, we obtain a one-parameter \emph{extension} of the Schwarzschild-(A)dS spacetime in quadratic gravity, \begin{align} \Omega(r) & = -\frac{1}{r}-\frac{b}{r_h}\sum_{i=1}^\infty\alpha_i\Big(\,\frac{r_h-r}{\X\,r_h}\Big)^i \,, \label{Omega_[0,1]}\\ \mathcal{H}(r) & = (r-r_h)\bigg[\,\frac{r^2}{r_h}-\frac{\Lambda}{3r_h^3}\left(r^2+rr_h+r_h^2\right) \nonumber \\ &\hspace{18.0mm} +3b\,\X\,r_h\sum_{i=1}^\infty\gamma_i\Big(\,\frac{r-r_h}{\X\,r_h}\Big)^i\,\bigg] \,, \label{H_[0,1]} \end{align} where \begin{align} \X &\equiv 1-\frac{\Lambda}{r_h^2}\,, \label{alphasgammainitial_[0,1]} \\ \alpha_1 &\equiv 1\,,\quad \gamma_1=1\,,\quad \gamma_2 = \frac{1}{3}\Big[4-\frac{1}{r_h^2}\Big(2\Lambda+\frac{1}{2k}\Big)+3b\Big] \,, \nonumber \end{align} and $\alpha_l,\,\gamma_{l+1}$ are (with ${\alpha_0\equiv 0}$) \emph{recursively} given by \begin{widetext} \vspace{-6.0mm} \begin{align} &\alpha_{l}= \, \frac{1}{l^2}\Big[-\frac{2\Lambda}{3r_h^2}\,\sum_{j=0}^{l-1}\sum_{i=0}^{j}\big[\alpha_{l-1-j}\,\X^j+\big(\X^{l-1-j}+b\,\alpha_{l-1-j}\big)\big(\alpha_i\, \X^{j-i}+\alpha_{j-i}(\X^i+b\,\alpha_{i})\big)\big]-\tfrac{1}{3}\alpha_{l-2}(2+\X)\X(l-1)^2 \nonumber\\ & \hspace{14.0mm} +\alpha_{l-1}\big[\tfrac{1}{3}+(1+\X)\big(l(l-1)+\tfrac{1}{3}\big)\big]-3\sum_{i=1}^{l}(-1)^i\,\gamma_i\,(\X^{l-i}+b\,\alpha_{l-i})\big(l(l-i)+\tfrac{1}{6}i(i+1)\big)\Big]\,, \nonumber\\ &\gamma_{l+1}= \, \frac{(-1)^{l}}{kr_h^2\,(l+2)(l+1)l(l-1)}\sum_{i=0}^{l-1}\big[\alpha_i \,\X^{l-i}+\alpha_{l-i}\big(\X^i+b\,\alpha_i\big) \big](l-i)(l-1-3i) \,, \quad \hbox{for}\quad l\geq2\,. \label{alphasgammasgeneral_[0,1]} \end{align} \vspace{-6.0mm} \end{widetext} All these solutions form a \emph{three-parameter family of spherically symmetric black holes} (with static regions). In particular: \begin{itemize} \item The radius ${r=r_h}$ determines the \emph{Killing horizon} since ${\H(r_h)=0}$, see \eqref{H_[0,1]}, \eqref{horizon}. \item The parameter ${\Lambda=R/4}$ is the \emph{cosmological constant}. It can be zero, recovering the results of \cite{PodolskySvarcPravdaPravdova:2018a}. \item The \emph{Bach parameter} $b$ determines the Bach tensor contribution. For ${b=0}$, this Schwa-Bach-(A)dS black hole \eqref{Omega_[0,1]}, \eqref{H_[0,1]} reduces to \eqref{SchwAdSH}. \end{itemize} In terms of these three physical parameters, the scalar invariants \eqref{invC}, \eqref{invB} on the horizon are \begin{align} C_{abcd}\, C^{abcd}(r_h) &= 12\,\big((1+b)r_h^2-\tfrac{1}{3}\Lambda\big)^2 \,, \label{CinvBinv1}\\ B_{ab}\,B^{ab}(r_h) &= \frac{r_h^4}{4 k^2}\,b^2 \,. \label{CinvBinv2} \end{align} In Fig.~\ref{fig:1}, convergence of the series in \eqref{Omega_[0,1]}, \eqref{H_[0,1]} is examined using the d'Alembert ratio test for two different sets of parameters. It clearly indicates that, with $n$ growing, the ratio between two subsequent terms approaches a specific constant. The series thus \emph{asymptotically behave as geometric series}. This enables us to estimate the radius of convergence. Typical behaviour of the metric function ${\cal H}(r)$ outside the black-hole horizon is plotted in Fig.~\ref{fig:2}. There is a significant qualitative difference between ${\Lambda<0}$ and ${\Lambda>0}$. In both cases, the black-hole horizon separates static (${r>r_h}$) and non-static (${r<r_h}$) regions of the spacetime. 
However, for ${\Lambda>0}$ an outer boundary of this static region appears, which corresponds to the cosmological horizon given by the second root of $\H$ (as in the classic Schwarzschild-de Sitter black hole). This is also demonstrated in Fig.~\ref{fig:3} by plotting the function ${f(\bar{r})}$ of the common metric \eqref{Einstein-WeylBH}. \begin{figure}[h!] \includegraphics[scale=0.44]{fig1} \caption{\label{fig:1} The convergence radius can be estimated from the ratio convergence test for solutions \eqref{Omega_[0,1]}, \eqref{H_[0,1]}, here given by ${r_h=-1,\, k=0.5}$ with ${b=0.3,\, \Lambda=0.2}$ (bottom) and ${b=0.2,\, \Lambda=-2}$ (top).} \vspace{8mm} \includegraphics[scale=0.44]{fig2} \caption{\label{fig:2} The function ${\cal H}(r)$ given by \eqref{H_[0,1]} for two values of the cosmological constant ${\Lambda}$ (with the same parameters as in Fig.~\ref{fig:1}). Both plots start on the black-hole horizon ${r_h=-1}$ and are reliable up to the vertical dashed lines indicating the radii of convergence. For ${\Lambda>0}$ the function ${\cal H}(r)$ seems to have another root corresponding to the cosmological horizon, while for ${\Lambda<0}$ it remains non-vanishing. First 50 (red), 100 (orange), 200 (green), 300 (blue) terms in the expansions are used. The results fully agree with the numerical solutions up to the dashed lines, where such simulations also fail.} \vspace{8mm} \includegraphics[scale=0.44]{fig3} \caption{\label{fig:3} The function ${f(\bar{r})}$ of the standard line element \eqref{Einstein-WeylBH} corresponding to the solution \eqref{Omega_[0,1]}, \eqref{H_[0,1]} via \eqref{rcehf} (with the same parameters as in Figs.~\ref{fig:1}, \ref{fig:2}). The ${\Lambda>0}$ case (left) indicates the presence of the cosmological horizon at the boundary of the convergence interval (the dashed line). For ${\Lambda<0}$ (right), the series converge in the whole plotted range, indicating a static region everywhere above the black-hole horizon.} \end{figure} \section{Specific tidal effects\label{GeodDev}} The two independent parts \eqref{B2} of the Bach tensor $\B_1, \B_2$ can be observed via a \emph{specific relative motion of free test particles} described by the equation of geodesic deviation \cite{PodolskySvarc:2012}. For an invariant description, we employ an orthonormal frame associated with an \emph{initially static observer} (${\dot{r}=\dot{\theta}=\dot{\phi}=0}$) with velocity ${\boldu=\dot{u}\,\partial_u\equiv\bolde_{(0)}}$, namely ${\bolde_{(1)}=-\dot{u}\,(\partial_u+\H\,\partial_r)}$, ${\bolde_{(2)}=\Omega^{-1}\partial_\theta}$, and ${\bolde_{(3)}=(\Omega\sin\theta)^{-1}\partial_\phi}$. Projection of the equation of geodesic deviation onto this frame gives \begin{align} \ddot{Z}^{(1)} = &\, \frac{\Lambda}{3}\,Z^{(1)}+\frac{1}{6} \frac{{\cal H}''+2}{\Omega^2}\,Z^{(1)}-\frac{k}{3}\,\frac{\B_1+\B_2}{\Omega^4}\,Z^{(1)} , \label{InvGeoDevBH1r0}\\ \ddot{Z}^{(i)} = &\, \frac{\Lambda}{3}\,Z^{(i)}-\frac{1}{12}\frac{{\cal H}''+2}{\Omega^2}\,Z^{(i)}-\frac{k}{6}\,\frac{\B_1}{\Omega^4}\,\,Z^{(i)} , \label{InvGeoDevBHir0} \end{align} where ${i=2,3}$, ${Z^{(a)} \equiv {e^{(a)}}_{\!\!\mu}\,Z^\mu}$ denotes the \emph{relative position} of two particles, and ${\ddot Z^{(a)} \equiv {e^{(a)}}_{\!\!\mu}\,\frac{\Dif^2 Z^\mu}{\dd\, \tau^2}}$ their \emph{mutual acceleration}.
In \eqref{InvGeoDevBH1r0}, \eqref{InvGeoDevBHir0}, we easily identify the classic parts corresponding to the \emph{isotropic influence} of the cosmological constant $\Lambda$ and the \emph{Newtonian tidal effect} caused by the \emph{Weyl tensor}, proportional to the square root of \eqref{invC}. Moreover, the theory satisfying (\ref{fieldeqsEWmod}) admits \emph{two additional effects} encoded in the non-trivial \emph{Bach tensor} components $\B_1, \B_2$. The first of them affects particles in the \emph{transverse} directions~$\partial_\theta, \partial_\phi$, see \eqref{InvGeoDevBHir0}, while the second one induces their \emph{radial} acceleration along $\partial_{\bar r}$ via \eqref{InvGeoDevBH1r0}. Since ${\B_1(r_h)=0}$, \emph{on any horizon} there is \emph{only the radial} effect caused by ${\B_2(r_h)}$. \section{Thermodynamic quantities: horizon area, temperature, entropy\label{Thermodynamic}} Let us also determine the main thermodynamic properties of this explicit family of spherically symmetric Schwa-Bach-(A)dS black holes. The horizon is generated by the (rescaled) null Killing vector ${\ell\equiv\sigma\partial_u=\sigma\partial_t}$ and thus is located at ${r=r_h}$ where ${\H=0}$, cf.~\eqref{horizon}, \eqref{H_[0,1]}. Its \emph{area} is, using \eqref{BHmetric}, \eqref{Omega_[0,1]}, \be {\cal A} =4\pi\,r_h^{\,-2}= 4\pi\,{\bar r}_h^2\,, \label{horizon_area} \ee while its \emph{surface gravity} (${\kappa^2\equiv-\frac{1}{2}\,\ell_{\mu;\nu}\,\ell^{\,\mu;\nu}}$) reads \be \kappa/\sigma = -\tfrac{1}{2}\,\H'(r_h) = -\tfrac{1}{2}\X\, r_h =\tfrac{1}{2}\, {\bar r}_h^{\,-1}(1-\Lambda {\bar r}_h^2) \,, \label{surface_gravity} \ee where $\X$ is the constant introduced in \eqref{alphasgammainitial_[0,1]}. It is \emph{the same expression as in the Schwarzschild-(A)dS case}, independent of the Bach parameter $b$. The black-hole horizon \emph{temperature} ${T = \tfrac{1}{2\pi}\,\kappa}$ is thus \be T/\sigma = -\tfrac{1}{4\pi}\X\,r_h = \tfrac{1}{4\pi}\,{\bar r}_h^{\,-1}(1-\Lambda {\bar r}_h^2) \,. \label{temperature} \ee This is zero for ${{\bar r}_h=1/\sqrt\Lambda}$, corresponding to the case of the extreme Schwarzschild-de Sitter black hole for which the black-hole and cosmological horizons coincide at ${{\bar r}_h={\bar r}_c}$. However, in higher-derivative theories, we must apply the generalized definition of \emph{entropy} ${S=(2\pi/\kappa)\oint \mathbf{Q}\,}$, see \cite{Wald:1993}, where the Noether charge 2-form on the horizon~is \begin{align} &\mathbf{Q} = -\frac{\Omega^2\, \H'}{16\pi}\left[\gamma+\frac{4}{3}\Lambda(\alpha+6\beta)+\frac{4}{3}k\alpha\,\frac{\B_1+\B_2}{\Omega^4}\right]\!\!\bigg|_{r=r_h} \nonumber \\ & \hspace{50.0mm}\times\sin\theta\,\dd\theta\wedge\dd\phi \,. \label{Noether} \end{align} Evaluating the integral, using \eqref{horizon_area}, \eqref{surface_gravity}, \eqref{CinvBinv2} and ${r_h=-1/{\bar r}_h}$, we get {\be S = \frac{1}{4}{\cal A}\left[\gamma+\frac{4}{3}\Lambda\,(\alpha+6\beta) -4\alpha\,\frac{b}{{\bar r}_h^{\,2}}\right] \,. \label{entropy} \ee } For the Schwarzschild black hole (${b=0,\, \Lambda=0}$) or in the Einstein theory {(${\alpha=0,\ \beta=0}$)}, this reduces to the standard expression ${S = \tfrac{1}{4G}\,{\cal A}}$. For ${\Lambda=0}$, the results of \cite{LuPerkinsPopeStelle:2015,PodolskySvarcPravdaPravdova:2018a} are recovered. For the Schwarzschild-(A)dS black hole (${b=0}$) in Einstein-Weyl gravity {(${\beta=0}$)}, we obtain ${S = \tfrac{1}{4G}\,{\cal A}\,\big(1+\tfrac{4}{3}k\Lambda \big)}$, which agrees with the results of \cite{LuPope:2011}.
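These limits are simple to check numerically; a short sketch (ours, with $\sigma=1$ and illustrative couplings) evaluates \eqref{temperature} and \eqref{entropy} and confirms that ${b=\Lambda=\alpha=\beta=0}$ with ${\gamma=1/G}$ recovers ${S={\cal A}/(4G)}$:
\begin{verbatim}
import numpy as np

def temperature(rbar_h, Lam, sigma=1.0):
    # eq. (temperature): T/sigma = (1 - Lam*rbar_h^2)/(4 pi rbar_h)
    return sigma*(1 - Lam*rbar_h**2)/(4*np.pi*rbar_h)

def entropy(rbar_h, Lam, b, alpha, beta, gamma):
    # eq. (entropy), with horizon area A = 4 pi rbar_h^2
    A = 4*np.pi*rbar_h**2
    return 0.25*A*(gamma + (4/3)*Lam*(alpha + 6*beta)
                   - 4*alpha*b/rbar_h**2)

G = 1.0
print(entropy(2.0, 0.0, 0.0, 0.0, 0.0, 1/G))   # = pi*rbar_h^2/G
print(temperature(2.0, 0.0))                   # = 1/(8 pi)
\end{verbatim}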
In critical gravity, defined by {${\beta=0}$}, ${\alpha=k\gamma}$, ${\Lambda=-\frac{3}{4k}<0}$, the entropy is zero. Our formula \eqref{entropy} for entropy generalizes all these expressions to the case of Schwarzschild-Bach-(anti--)de~Sitter black holes when the Bach tensor is non-vanishing, parameterized by ${b\ne0}$. In this case, the \emph{entropy is non-zero even in critical gravity}. For smaller black holes, the deviations from {${S = \tfrac{1}{4}{\cal A}\,\big[\gamma+\tfrac{4}{3}\Lambda (\alpha+6\beta)\big]}$} are larger. By replacing the root $r_h$ by $r_c$ in \eqref{Omega_[0,1]}, \eqref{H_[0,1]}, the solution is expanded around the \emph{cosmological horizon}. Its temperature and entropy are thus given by \eqref{temperature} and \eqref{entropy}, respectively, in which ${\bar r}_h$ is simply replaced by ${\bar r}_c$. \vspace{-4.0mm} \section{Acknowledgements} This work has been supported by the Czech Science Foundation Grant No. GA\v{C}R 17-01625S and the Czech-Austrian MOBILITY grant 8J18AT02 (JP, R\v{S}), and the Research Plan RVO: 67985840 (VP, AP). We thank H.~Maeda for reading the manuscript. We are also grateful to anonymous referees for their very useful comments, observations and suggestions.
\section{Introduction} \label{sec1} ImageNet pretraining is a dominant paradigm in computer vision. As many vision tasks are related, a deep learning model pretrained on one dataset is expected to help other downstream tasks. It is now common practice to pretrain the backbones of object detection~\cite{he2019rethinking} and segmentation~\cite{long2015fully} on the ImageNet~\cite{deng2009imagenet} dataset. In the field of person Re-ID, most works~\cite{fu2019self,bai2021unsupervised,zeng2020hierarchical} leverage models pretrained on ImageNet to mitigate the shortage of person Re-ID data, achieving remarkable performance. However, this practice has been recently challenged by Fu et al.~\cite{fu2021unsupervised}, who show a surprising result that such ImageNet pretraining may not be the best choice for the re-identification task due to the intrinsic domain gap between ImageNet and person Re-ID data. Additionally, some research works~\cite{desai2021virtex,radford2021learning} also indicate that learning visual representations from textual annotations can be competitive with methods based on ImageNet pretraining, which has attracted considerable attention from both academia and industry. For these reasons, there is increasing interest in exploring a novel vision-and-language pretraining strategy that can replace supervised ImageNet pretraining on Re-ID tasks. Unfortunately, existing datasets~\cite{zheng2015scalable,ristani2016performance,zheng2017unlabeled,li2014deepreid,xiang2020unsupervised,xiang2021taking} in the Re-ID community are all of limited scale due to the costly efforts required for data collection and annotation; in particular, none of them has attributes diverse enough to obtain dense captions, so they cannot satisfy the needs of semantic-based pretraining. \begin{figure}[t] \centering \includegraphics[width=1.00\columnwidth]{IMG2.pdf} \caption{The overview of our framework. First, we jointly train ResNet and Transformer using image-caption pairs for the task of image captioning. Then, we transfer the learned ResNet as the backbone of the downstream Re-ID task.} \label{fig1} \end{figure} \begin{figure*}[!t] \centering{\includegraphics[width=1.00\linewidth]{picture.pdf}} \caption{\textbf{(a)} Illustration of attributes at the identity level; \textbf{(b)} Distribution of attributes in terms of IDs for the \textit{FineGPR} dataset.} \label{fig2} \end{figure*} To address the above-mentioned limitations, we start from two aspects, namely data and methodology. From the data perspective, we construct a \textit{\textbf{FineGPR-C}} caption dataset for the first time for person Re-ID, which describes persons in a fine-grained manner. From the methodology perspective, we propose a pure \textbf{V}ir\textbf{T}ex \textbf{B}ased \textbf{R}e-ID pretraining approach named \textbf{VTBR}, which uses transformers to learn visual representations from textual annotations. The overview of our framework is illustrated in Fig.~\ref{fig1}. Particularly, we jointly train a ResNet and transformers from scratch using image-caption pairs for the task of image captioning. Then, we transfer the learned residual network to downstream Re-ID tasks. In general, our method seeks a common vision-language feature space with discriminative learning constraints for better practical deployment.
The initial motivation of this research comes from a comprehensive study of Re-ID pretraining: we notice that semantic captions provide a denser learning signal than traditional unsupervised or supervised learning~\cite{desai2021virtex}, so using language supervision on the Re-ID task is appealing, as it can supervise the learning of transferable visual representations with better data-efficiency than other approaches. Another benefit of textual annotation is simplified data collection. The traditional labelling procedure for real pedestrian data costs intensive human labor and sometimes even involves privacy concerns and data-security problems. In contrast, natural-language descriptions built from fine-grained attributes of synthetic data do not require an explicit category and can easily be produced by non-expert workers, leading to a simplified data-labelling procedure without ethical issues regarding privacy. To the best of our knowledge, ours is among the first attempts to use textual features to perform pretraining for downstream Re-ID tasks. We hope this study and the \textit{FineGPR-C} caption dataset will serve as a solid baseline and advance the research of pretraining in the Re-ID community. As a consequence, the major contributions of our work are three-fold: 1) We construct a \textit{FineGPR-C} caption dataset for the first time to enable semantic pretraining for Re-ID tasks. 2) Based on it, we propose a pure VirTex-based Re-ID pretraining approach named VTBR to learn visual representations from textual annotations. 3) Comprehensive experiments show that our VTBR matches or exceeds the performance of existing methods for supervised or unsupervised pretraining on ImageNet with fewer images. \section{Proposed Method} \subsection{\textit{FineGPR-C} Caption Dataset} Data is the life-blood of training deep neural network models and ensuring their success. For the person Re-ID task, sufficient and high-quality data are necessary for increasing the model's generalization capability. In this work, we ask the question: \textit{can we construct a person dataset with captions which can be used for semantic-based pretraining on the Re-ID task?} To answer this question, we revisit the previously developed \textit{FineGPR}~\cite{xiang2021less} dataset, which contains fine-grained attributes such as viewpoint, weather, illumination and background, as well as 13 accurate annotations at the identity level (shown in Fig.~\ref{fig2}). \begin{figure}[!t] \centering{\includegraphics[width=0.9\linewidth]{13.pdf}} \caption{ Some exemplars of semantic captions in the \textit{FineGPR-C} dataset, which are generated using our dynamic strategy. } \label{fig3} \end{figure} \begin{figure*}[!t] \centering{\includegraphics[width=0.9\linewidth]{IMG1.pdf}} \caption{The framework of our pretraining approach VTBR, which consists of a visual backbone ResNet-50 and a Transformer. The visual backbone extracts visual features, and the transformers extract semantic features to predict captions via bidirectional language modeling. After pretraining, the visual backbone is transferred to downstream Re-ID tasks.} \label{fig4} \end{figure*} On the basis of \textit{FineGPR}, we introduce a dynamic strategy to generate high-quality captions with fine-grained attribute annotations for semantic-based pretraining.
To be more specific, we rearrange the different attributes as words in caption expressions at different positions, and then generate semantically dense captions containing high-quality descriptions; this gives rise to our newly constructed \textit{FineGPR-C} caption dataset. Some exemplars of the \textit{FineGPR-C} dataset are depicted in Fig.~\ref{fig3}. It is worth mentioning that different pedestrian images receive different captions through the different regular expressions. During the caption generation based on the fine-grained \textit{FineGPR} dataset, we found that there exists serious redundancy among the different attributes. To maintain a high diversity of the generated captions, we propose a \textbf{R}efined \textbf{S}electing (\textbf{RS}) strategy to increase the inter-class diversity and minimize the intra-class variation of the semantic captions. Particularly, we set a threshold $\alpha$ and dynamically add only those attributes whose appearance probability is below it into the final caption sentence $c$; the rule can be expressed as: \begin{equation} c = \left\{w_{1}, a_{1}, w_{2}, a_{2}, \ldots w_{K}, a_{K}\right\},\ if\ P_{a_{1}}, P_{a_{2}},..., P_{a_{K}} \leq \alpha \label{eq1} \end{equation} where $w_{K}$, $a_{K}$ denote fixed words and labelled attributes, respectively, and $P_{a_{K}}$ represents the appearance probability of the attribute annotation in \textit{FineGPR}. In summary, our goal is to improve the discriminative ability of the captions according to the attribute distribution, so that the generated captions (token by token) are more diversified and contain more discriminative information. More details about \textit{FineGPR} and \textit{FineGPR-C} can be found at \textcolor{magenta}{https://github.com/JeremyXSC/FineGPR}. \subsection{Our VTBR Approach} In order to learn deep visual representations from textual annotations for the Re-ID task, we introduce a semantic-based pretraining method, VTBR, based on our newly-built \textit{FineGPR-C} dataset. As illustrated in Fig.~\ref{fig4}, our VTBR framework consists of a visual backbone, ResNet-50~\cite{he2016deep}, and a semantic backbone, Transformer~\cite{vaswani2017attention}, which extract visual features of images and textual features of captions, respectively. Firstly, the visual features extracted by ResNet-50 are used to predict captions of pedestrian images via the transformer networks. Following~\cite{desai2021virtex}, we use a projection layer to receive features from the visual backbone and pass them to the textual head, which predicts captions for the images and thereby provides a learning signal to the visual backbone during pretraining. Note that we use the log-likelihood loss function to train the visual and semantic backbones in an end-to-end manner, \begin{equation} \mathcal{L}=\sum_{k=1}^{K+1} \log \left(p\left(T, V ; \psi_{f}, \phi\right)\right)+\sum_{k=0}^{K} \log \left(p\left(T, V ; \psi_{b}, \phi\right)\right) \label{eq2} \end{equation} where $\psi_{f}$, $\psi_{b}$ and $\phi$ denote the forward transformer, backward transformer and ResNet-50, respectively, and $T$ and $V$ denote the textual and visual features. We train our entire VTBR model on the \textit{FineGPR-C} caption dataset from scratch without any pretrained weights, in contrast to approaches that rely on a pretrained transformer to extract textual features. After obtaining the pretrained model on our \textit{FineGPR-C} caption dataset, we then perform downstream Re-ID evaluation.
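Before turning to the downstream setup, we illustrate the Refined Selecting rule of Eq.~\eqref{eq1} with a minimal Python sketch; the template words and attribute probabilities below are hypothetical placeholders, not the actual \textit{FineGPR-C} generation pipeline:
\begin{verbatim}
def generate_caption(attrs, probs, templates, alpha=0.8):
    """Refined Selecting (RS), cf. Eq. (1): keep only attributes a_k
    whose appearance probability P_{a_k} <= alpha, then interleave
    the fixed template words w_k with the kept attributes."""
    kept = [a for a in attrs if probs[a] <= alpha]
    parts = []
    for w, a in zip(templates, kept):
        parts += [w, a]
    return " ".join(parts) + "."

# hypothetical attributes and probabilities for one identity
probs = {"red jacket": 0.12, "backpack": 0.35, "walking": 0.95}
caption = generate_caption(["red jacket", "backpack", "walking"],
                           probs,
                           ["A pedestrian wearing a", "carrying a"])
print(caption)  # A pedestrian wearing a red jacket carrying a backpack.
\end{verbatim}
Attributes that occur too frequently (here ``walking'' with ${P=0.95>\alpha}$) are dropped, which is precisely what keeps the generated captions diverse and discriminative.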
Specifically, we adopt global features extracted by visual backbone ResNet-50 to perform metric learning. Note that we only modify the output dimension of the latest fully-connected layer to the number of training identities~\cite{xiang2021taking}. During the period of testing, we extract the 2,048-dim pool-5 vector for retrieval under the Euclidean distance. \begin{table}[t] \centering \caption{Comparisons between ResNet and our VTBR pretraining on supervised tasks. ``S" denotes semantic feature.} \footnotesize \setlength{\tabcolsep}{0.15mm}{ \begin{tabular}{lcccccccc} \toprule \multicolumn{3}{c}{Supervised Fine-tuning $\rightarrow$} & \multicolumn{2}{c}{Market-1501} & \multicolumn{2}{c}{DukeMTMC} & \multicolumn{2}{c}{CUHK03} \\ \cmidrule(lr){1-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} Pretrain $\downarrow$ &S &\#Imgs & Rank-1 & mAP & Rank-1 & mAP & Rank-1 & mAP\\ \toprule ResNet (ImageNet) & $\times$ & 1.28M & 94.3 & 85.0 & 86.7 & 76.6 & 61.6 & 59.0 \\ ResNet (\textit{FineGPR}) & $\times$ & 2.00M & 85.5 & 74.2 & 63.9 & 59.8 & 43.3 & 37.8 \\ VTBR (\textit{FineGPR-C}) & \checkmark & 1.83M & 93.6 & 83.7 & 85.0 & 72.9 & 61.8 & 58.6 \\ VTBR+RS(\textit{FineGPR-C}) & \checkmark & 0.91M & \textbf{94.9} & \textbf{85.3} & \textbf{87.3} & \textbf{76.8} & \textbf{61.9} & \textbf{59.3} \\ \bottomrule \end{tabular}}% \label{tab1}% \end{table}% \begin{table*}[htbp] \centering \caption{Comparisons between ResNet and our VTBR pretraining on domain adaptation tasks. ``S" denotes semantic feature.} \footnotesize \setlength{\tabcolsep}{0.9mm}{ \begin{tabular}{lcccccccccccccc} \toprule \multicolumn{3}{c}{Domain Adaptation $\rightarrow$} & \multicolumn{3}{c}{DukeMTMC$\rightarrow$Market-1501} & \multicolumn{3}{c}{Market-1501$\rightarrow$DukeMTMC} & \multicolumn{3}{c}{DukeMTMC$\rightarrow$CUHK03} & \multicolumn{3}{c}{Market-1501$\rightarrow$CUHK03} \\ \cmidrule(lr){1-3} \cmidrule(lr){4-6} \cmidrule(lr){7-9} \cmidrule(lr){10-12} \cmidrule(lr){13-15} Pretrain $\downarrow$ &S &\#Imgs & Rank-1 & Rank-5 & mAP & Rank-1 & Rank-5 & mAP & Rank-1 & Rank-5 & mAP & Rank-1 & Rank-5 & mAP \\ \toprule ResNet (ImageNet) & $\times$ & 1.28M & 48.0 & 64.1 & 21.7 & \textbf{24.5} & \textbf{38.8} & \textbf{13.8} & 4.9 & 11.6 & 5.6 & 3.9 & 8.6 & 4.0 \\ ResNet (\textit{FineGPR}) & $\times$ & 2.00M & 44.2 & 62.8 & 20.5 & 20.8 & 33.5 & 10.2 & 4.7 & 11.3 & 5.5 & 4.5 & 11.5 & 4.3 \\ VTBR (\textit{FineGPR-C}) & \checkmark & 1.83M & 45.9 & 64.8 & 21.2 & 21.3 & 34.6 & 10.9 & 4.9 & 11.4 & 5.2 & 5.4 & 11.8 & 5.8 \\ VTBR+RS (\textit{FineGPR-C}) & \checkmark & 0.91M & \textbf{50.6} & \textbf{67.7} & \textbf{23.8} & 24.3 & 38.4 & 13.5 & \textbf{5.7} & \textbf{13.1} & \textbf{5.7} & \textbf{5.8} & \textbf{13.2} & \textbf{6.2} \\ \bottomrule \end{tabular}}% \label{tab2}% \end{table*}% \section{Experimental Results} \subsection{Datasets} In this paper, we conduct experiments on three benchmarks, including Market-1501~\cite{zheng2015scalable}, DukeMTMC-reID~\cite{ristani2016performance,zheng2017unlabeled} and CUHK03~\cite{li2014deepreid}. Market-1501 has 1,501 identities in 32,668 images. 12,936 images of 751 identities are used for training, the query has 3,368 images and gallery has 19,732 images. DukeMTMC-reID contains 16,522 images of 702 identities for training, and the remaining images of 702 identities for testing. CUHK03 consists of 14,097 images with a total 1,467 identities. We evaluate the performance by mAP and Cumulative Matching Characteristic curves at Rank-1 and Rank-5. 
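For reference, the evaluation protocol can be summarized by the following sketch (ours; it omits the usual same-camera filtering of gallery matches for brevity), which computes Rank-1, Rank-5 and mAP from a query-gallery distance matrix:
\begin{verbatim}
import numpy as np

def cmc_map(dist, q_ids, g_ids):
    """Rank-1/Rank-5 and mAP from a (num_query, num_gallery) distance
    matrix and integer identity labels (numpy arrays)."""
    order = np.argsort(dist, axis=1)            # ascending distance
    match = g_ids[order] == q_ids[:, None]      # per-rank hit matrix
    rank1 = match[:, 0].mean()
    rank5 = match[:, :5].any(axis=1).mean()
    aps = []
    for row in match:                           # average precision
        hits = np.flatnonzero(row)
        if hits.size:
            aps.append(np.mean(np.arange(1, hits.size + 1)/(hits + 1)))
    return rank1, rank5, float(np.mean(aps))
\end{verbatim}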
\subsection{Implementation Details} For the pretraining of VTBR, we apply standard random cropping and normalization as data augmentation. Following the training procedure in \cite{desai2021virtex}, we adopt SGD with momentum 0.9 and weight decay $10^{-4}$ wrapped in LookAhead~\cite{kukunuri2019lookahead} with $\alpha$=0.5 and 5 steps. We empirically set $\alpha$=0.8 in Eq.~\ref{eq1}. The max learning rate of the visual backbone is $2\times10^{-1}$; the learning rate of the textual head is set to $1\times10^{-3}$. For the downstream Re-ID task, we follow a widely used open-source project$\footnote[1]{\textcolor{black}{https://github.com/michuanhaohao/reid-strong-baseline}}$ as the standard baseline, which is built only with the commonly used softmax cross-entropy loss~\cite{zhang2018generalized} and triplet loss~\cite{hermans2017defense} on a vanilla ResNet-50. Following the practice in~\cite{luo2019bag}, the batch size of training samples is set to 64. As for triplet selection, we randomly select 16 persons and sample 4 images for each identity; the triplet margin $m$ is set to 0.5. The Adam method and a warmup learning-rate strategy are also adopted to optimize the model. All the experiments are performed with PyTorch~\cite{paszke2019pytorch} on two Nvidia GeForce RTX 3090 GPUs on a server equipped with an Intel Xeon Gold 6240 CPU. \subsection{Supervised Fine-tuning} In this paper, the Re-ID caption data is the fundamental part of the semantic-based pretraining baseline. Here, we adopt supervised fine-tuning performance on real datasets as the indicator of the quality of the \textit{FineGPR-C} caption dataset. From Table~\ref{tab1}, we can clearly observe that the results of supervised learning are significantly promoted by our method. For example, when training and testing on Market with the ImageNet pretrained model, we achieve a rank-1 accuracy of \textbf{94.3\%}, while our VTBR method on \textit{FineGPR-C} obtains a competitive performance of \textbf{93.6\%}. After employing the Refined Selecting strategy, our VTBR+RS reaches a remarkable performance of \textbf{94.9\%} with \textbf{1.4$\times$} fewer pretraining images (\textcolor[rgb]{1.00,0.39,0.09}{\textbf{0.91M}} vs. \textcolor[rgb]{0.20,0.40,0.80}{\textbf{1.28M}}), leading to a record mAP performance of \textbf{85.3\%}. Not surprisingly, the same performance gain can also be achieved on Duke. The success of VTBR can be largely attributed to the discriminative features learned from semantic captions in a data-efficient manner. \subsection{Unsupervised Domain Adaptation} Our semantic-based pretraining method also benefits domain-adaptive Re-ID tasks, where labelled data in the target domain are hard to obtain. In this section, we present four domain-adaptive Re-ID tasks on several benchmark datasets. More detailed results can be seen in Table~\ref{tab2}. When trained on the Duke dataset, our VTBR+RS achieves significant rank-1 performances of \textbf{50.6\%} and \textbf{5.7\%} on Market and CUHK03 respectively, outperforming the ImageNet pretraining by \textbf{+2.6\%} and \textbf{+0.8\%} in terms of rank-1 accuracy. When trained on the Market dataset, our method also leads to an obvious improvement of \textbf{+1.9\%} on CUHK03 in rank-1 accuracy. However, when tested on the Duke dataset, our method surprisingly obtains slightly inferior performance to ImageNet pretraining (mAP \textcolor[rgb]{1.00,0.39,0.09}{\textbf{13.5\%}} vs.
\textcolor[rgb]{0.20,0.40,0.80}{\textbf{13.8\%}}, \textcolor[rgb]{1.00,0.39,0.09}{\textbf{0.91M}} vs. \textcolor[rgb]{0.20,0.40,0.80}{\textbf{1.28M}} images). We suspect that captions generated on \textit{FineGPR} have an obvious domain gap with Duke, since there are occlusions and multiple persons in the queries, which undoubtedly degrades the performance of our method. \subsection{Visualization} In order to verify the effectiveness of our proposed method, we show Grad-CAM~\cite{muhammad2020eigen} visualizations of attention maps from VTBR in Fig.~\ref{fig5}. Compared with the ImageNet pretraining method, our model attends to relevant image regions and discriminative parts when making caption predictions, indicating that VTBR helps the model learn more meaningful visual features with better semantic understanding, which makes our semantic-based pretraining model more robust to perturbations. \begin{figure}[!t] \centering{\includegraphics[width=0.98\linewidth]{12.pdf}} \caption{Visualization of attention maps. (a) Original images, (b) ImageNet pretraining method, (c) Our VTBR method.} \label{fig5} \end{figure} \section{Conclusion and Future Work} \label{sec4} This paper constructs the first caption dataset for person Re-ID, \textit{FineGPR-C}, which describes persons in a fine-grained manner. Based on it, we present a simple yet effective semantic-based pretraining method to replace ImageNet pretraining, which helps to learn visual representations from textual annotations for downstream Re-ID tasks. Extensive experiments conducted on several benchmarks show that our method outperforms traditional ImageNet pretraining by a clear margin, in both supervised and unsupervised settings. In the future, we will focus on other downstream vision tasks with our VTBR, such as human-related segmentation and pose estimation. \bibliographystyle{IEEEbib}
\section{Introduction} One of the fundamental problems in chemistry is the computation of the ground state energy of a many-body quantum system. Although this major difficulty has been circumvented to some extent by density-functional theory \cite{Kohn1965}, the quantum Monte Carlo (QMC) method \cite{anderson1975random,reynolds1982fixed,foulkes2001quantum,von1992quantum,anderson2007quantum} remains an important approach to determine the ground state energy and electron correlations. This paper is concerned with the implementation of QMC for many-body systems. More specifically, we consider the Hamiltonian, \begin{equation}\label{eq: ham} \widehat{H}= \sum_{i=1}^{N} -\frac{\hbar^2}{2m}{\triangle_{\boldsymbol r_i}} + \sum_{i\ne j} W(\boldsymbol r_i - \boldsymbol r_j) + \sum_{i=1}^{N} V_{ext}(\boldsymbol r_i). \end{equation} Here we use $\boldsymbol r=(\boldsymbol r_1, \boldsymbol r_2, \cdots, \boldsymbol r_N)$ to denote the particle coordinates, with $N$ being the total number of particles; the first term of the Hamiltonian, involving the Laplacians $\triangle_{\boldsymbol r_i}$, represents the kinetic energy. The second term in the Hamiltonian, which is a double sum, embodies the pairwise interactions, {e.g.}, Coulomb, while the last term includes the external potential, namely, \begin{equation}\label{eq: Vext} V_{ext}(\boldsymbol r_i) = \sum_{\alpha=1}^{M} U(\boldsymbol r_i - R_\alpha), \end{equation} where $R_\alpha$, for instance, can be the position of an atom. In principle, the ground state can be obtained by computing the smallest eigenvalue and the corresponding eigenfunction. The ground state energy can be expressed in terms of a Rayleigh quotient, \begin{equation}\label{eq: E-var} E= \min_{\Phi} \frac{\displaystyle\int_{\mathbb{R}^{3N}} \Phi \widehat{H} \Phi d\boldsymbol r_1 \cdots d \boldsymbol r_N} {\displaystyle\int_{\mathbb{R}^{3N}} |\Phi|^2 d\boldsymbol r_1 \cdots d \boldsymbol r_N}, \end{equation} and the minimizer $\Phi$ corresponds to the ground state wave function. However, a direct numerical approach, {e.g.}, using finite difference or finite element methods together with numerical quadrature for the integrals, suffers from the curse of dimensionality and is typically prohibitively expensive. Within the variational Monte Carlo (VMC) framework, this issue is addressed by selecting an appropriate ansatz, denoted here by $\Phi \approx \Psi_0,$ for the many-body wave function. Then the multi-dimensional integral is interpreted as a statistical average, which can be sampled using a Monte Carlo procedure. Traditionally, $\Psi_0$ is constructed using one-body wave functions, with the effect of particle correlations described by Jastrow factors \cite{foulkes2001quantum}. Recently, artificial neural networks from machine learning have also been used to represent the many-body wave function \cite{carleo2017solving,han2020solving,han2019solving,pfau2019ab}. In fact, the recent surge of interest in applying machine-learning algorithms to scientific computing problems has been a strong motivation for the current work. The first part of this paper is concerned with the numerical implementation of VMC. Since VMC formulates the energy calculation as a sampling problem, the most natural approach is the Metropolis-Hastings (MH) algorithm which, in general, falls into the category of Markov chain Monte Carlo (MCMC) algorithms in statistics. At each step, the chain is updated by calculating the energy change.
As can be seen from \eqref{eq: ham} and \eqref{eq: E-var}, this requires visiting all particles in the system. A direct treatment would involve $\mathcal{O}\big(N(N+M)\big)$ operations in each time step. The presence of the Jastrow factor further complicates the computation. To alleviate the computational cost, we propose a random batch method (RBM), originating from emerging machine learning algorithms \cite{bottou1998online,wright2015coordinate,bubeck2014convex}, recently introduced to classical interacting particle systems in \cite{jin2020random} and extended to various applications in both classical and quantum $N$-body systems \cite{GJP, jin2020mean,JLL2,KZ2020, li2020stochastic, li2020random, LLT}. In particular, \cite{jin2020random} established that the error of RBM is $O(\sqrt{\Delta t})$, where $\Delta t$ is the time step, {\it uniformly} in $N$. For the present problem, the objective is to use such an idea to quickly relax the quantum system and sample the energy in the VMC method. To this end, we first formulate the sampling problem using an over-damped Langevin equation, where the particles are driven by a drift and a stochastic force. The idea of using a Langevin dynamics to construct a VMC algorithm has been pursued in \cite{scemama2006efficient}. Rather than computing the particle interactions directly, our proposed RBM algorithm divides the system into random batches, and only the interactions within each batch are computed. As a result, on average, updating all $N$ particles only requires $\mathcal{O}(N+M)$ operations. We justify the method by examining the transition density and show that at each step the density induced by the RBM is consistent with the exact transition kernel up to $\mathcal{O}(\Delta t^2)$, the same order as the Euler-Maruyama method. The other important approach in QMC is the diffusion Monte Carlo (DMC) method \cite{anderson1975random,reynolds1982fixed}, which starts with the time-dependent Schr\"odinger equation (TDSE) and evolves the quantum system in an imaginary time scale, leading to a parabolic equation \cite{reynolds1982fixed}, \begin{equation}\label{eq: tdse} \partial_t \Psi = (E_T-\widehat{H} ) \Psi. \end{equation} The energy shift $E_T$ is adjusted on-the-fly based on the change of the magnitude of the wave function. The key observation is that the dynamics \eqref{eq: tdse} can be associated with a stochastic process. In particular, the squared amplitude $|\Psi|^2$ can be interpreted as the empirical measure of a particle system, in which the particles are driven by a drift velocity and diffusion. The growth/decay of the wave function is treated by introducing multiple copies of the system, each of which is called a walker or a diffuser \cite{anderson1975random,reynolds1982fixed}. The change in the number of walkers, which reflects the change of the norm of the wave function, is realized by a birth/death process. The movement of the walkers is driven by the same over-damped Langevin dynamics. Therefore, the RBM is again a natural fit. On the other hand, the probability associated with the birth/death process depends on the total energy. To avoid the computation of the total energy $E$, especially before the ground state is reached, we propose to decompose the energy into one-, two-, and three-body terms. We construct an RBM where at each step a batch of three particles is selected and we only compute the energy within the batch. Speeding up QMC simulations has been an important focus in computational chemistry.
Various software packages have been developed to this end \cite{scemama2012qmc,needs2020variational,kim2018qmcpack}. For instance, Kim et al. \cite{kim2018qmcpack} demonstrated how DMC algorithms can be efficiently implemented on high-performance computer clusters. They showed that when the dynamics of walkers is distributed among the OPENMP threads or MPI units, one can achieve an almost ideal speedup. Toward this end, we implemented the RBM algorithm by moving the walkers in parallel, and we are able to perform QMC simulations of a Helium system with 5016 particles using only 60 cores. The rest of the paper is organized as follows. We first consider the RBM in the VMC setting in section 2, and justify the method in terms of the transition density. Numerical results are presented for the Helium system. In section 3, we show the RBM in the DMC setting, followed by numerical results. The paper is concluded in section 4. \section{The Random Batch Algorithm for the Variational Monte Carlo Methods} The crucial observation that motivated the VMC framework is that the ground state energy can be viewed as an average with respect to a probability density, \begin{equation}\label{eq: avgE} E = \big\langle E_\text{tot}(\cdot) \big\rangle = \int p(\boldsymbol r) E_\text{tot}(\boldsymbol r) d\boldsymbol r, \end{equation} where $ p(\boldsymbol r)$ is regarded as a probability density function (PDF), \begin{equation}\label{eq: pdf0} p(\boldsymbol r) \propto |\Phi_0(\boldsymbol r)|^2, \end{equation} and the energy $E_\text{tot}$, given by, \begin{equation}\label{eq: E0} E_\text{tot}(\boldsymbol r)= \frac{\widehat{H} \Phi_0} {\Phi_0}, \end{equation} will be regarded as a random variable. The ground state wave function is usually sought in a Slater determinant form with a Jastrow factor \cite{jastrow1955many,foulkes2001quantum}, \begin{equation}\label{eq: ansatz0} \Phi_0=e^{-J(\boldsymbol r)} \, S\big(\phi(\boldsymbol r_1), \dots, \phi(\boldsymbol r_N)\big), \quad J(\boldsymbol r)= \sum_{i< j} u(|\boldsymbol r_i - \boldsymbol r_j|). \end{equation} Here $S$ is the Slater determinant with $\phi(\boldsymbol r)$ being the single-particle wave function, and we assume a common pairwise form $u(|\boldsymbol r_i - \boldsymbol r_j|)$ for the Jastrow factor $J$. It is also possible to include three-body terms. For simplicity, we do not consider the spin orbitals. We will consider boson systems, which allow us to avoid the sign problem \cite{reynolds1982fixed} and focus exclusively on the sampling procedure. In addition, to have a class of explicit trial wave functions to work with, we follow the QMC methods for liquid Helium interacting with a graphite surface \cite{whitlock1998monte,pang2014diffusion}, where the following ansatz has been proven successful, \begin{equation}\label{eq: ansatz} \Phi_0=e^{-J(\boldsymbol r)} \Pi_{i=1}^{N} \phi(\boldsymbol r_i), \quad u(r)= \left( \frac{a}{r} \right)^5 + \frac{b^2}{r^2+c^2}. \end{equation} For homogeneous Helium systems, the ansatz with only the Jastrow factor has been widely used in QMC simulations \cite{kalos1974helium,mcmillan1965ground}. The ansatz in \eqref{eq: ansatz} includes orbitals centered around the graphite atoms. From \eqref{eq: ansatz}, we can write the density \eqref{eq: pdf0} in an exponential form, \begin{equation}\label{eq: pr} p(\boldsymbol r) \propto e^{-2V}, \quad V= - \ln \Phi_0 = - \sum_i \log \phi (\boldsymbol r_i) + \frac12 \sum_{i} \sum_{j\ne i} u(|\boldsymbol r_i - \boldsymbol r_j|).
\end{equation} The PDF is reminiscent of a Gibbs distribution with temperature $\beta^{-1}=1/2.$ The goal of VMC is to create samples according to such a probability density function, from which the ground state energy can be computed from \eqref{eq: avgE} by averaging over those samples. Most VMC methods are of Markov chain Monte Carlo (MCMC) type. Namely, one constructs a Markov chain, which equilibrates to the PDF given by (or close to) \eqref{eq: pr}. Thanks to the explicit ansatz \eqref{eq: ansatz} for the wave function, the total energy can be explicitly expressed as follows, \begin{equation}\label{eq: E-tot} E_\text{tot}(\boldsymbol r) = \sum_{i=1}^{N}\frac{1}{2} K_i^2 + \sum_{i\ne j} W(\boldsymbol r_i - \boldsymbol r_j) + \sum_{i=1}^{N} V_{ext}(\boldsymbol r_i). \end{equation} Since the computational cost is of primary concern here, let us write out all the relevant terms. The first term comes from the kinetic energy, \begin{equation}\label{eq: Ki} K_i^2 = -\frac{\hbar^2}{m} \frac{\triangle_i \Phi_0}{\Phi_0} = - \frac{\hbar^2}{m} \triangle_i \ln \Phi_0 - \frac{\hbar^2}{m} | \nabla_i \ln \Phi_0 |^2. \end{equation} The actual form of the kinetic energy depends on the choice of the ansatz for $\Phi$. For instance, with the choice \eqref{eq: pr}, the total energy is given by \begin{equation}\label{eq: E-tot'} E_\text{tot}(\boldsymbol r) = \frac{\hbar^2}{2m} \triangle V - \frac{\hbar^2}{2m} \| \nabla V\|^2 + \sum_{i\ne j} W(\boldsymbol r_i - \boldsymbol r_j) + \sum_{i=1}^{N} \sum_{\alpha=1}^{M} U(\boldsymbol r_i - R_\alpha). \end{equation} Since the one-particle wave function is non-negative, we express it as a sum of exponentials, \begin{equation}\label{eq: phii} \phi(\boldsymbol r_i) = \sum_{\alpha=1}^M e^{-\theta(\boldsymbol r_i - R_\alpha)}, \end{equation} for some function $\theta$. This form has been used in \cite{whitlock1998monte} and the parameters were obtained by solving a one-dimensional Schr\"odinger equation. In light of \eqref{eq: E-tot'}, the calculation of the total energy, which will be part of both the variational and diffusion Monte Carlo algorithms, scales {\it quadratically} in terms of the number of particles $N$. \subsection{The classical Metropolis-Hastings Algorithm} A classical algorithm in VMC is the Metropolis-Hastings algorithm. This algorithm is usually implemented by randomly displacing one particle at a time. With the observation that, \begin{equation}\label{eq: Vi} V= \sum_i V_i, \quad V_i= -\log \phi (\boldsymbol r_i) + \sum_{\overset{j=1}{j\ne i}}^N u(|\boldsymbol r_i - \boldsymbol r_j|), \end{equation} only $V_i$ needs to be computed to determine the energy change due to the change of $\boldsymbol r_i$, which subsequently determines the rejection/acceptance of the move. The MH algorithm is standard in computational chemistry for both classical and quantum systems \cite{allen2017computer}, so we keep the discussion brief and summarize the algorithm in {\bf Algorithm} \ref{alg:mh}. Notice that the only parameters in the algorithm are the sizes of the trial moves, denoted by $\Delta x$, $\Delta y$ and $\Delta z$ in each of the three spatial directions, respectively.
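To make the ${\mathcal{O}(N+M)}$ cost per trial move concrete, the following numpy sketch (ours; the Gaussian $\theta$ and the parameter values are placeholders rather than the fitted orbitals of \cite{whitlock1998monte}) evaluates $V_i$ in \eqref{eq: Vi} for the ansatz \eqref{eq: ansatz}:
\begin{verbatim}
import numpy as np

a, b, c = 2.6, 1.0, 1.0            # illustrative Jastrow parameters

def u(r):                          # pair term of eq. (ansatz)
    return (a/r)**5 + b**2/(r**2 + c**2)

def theta(x):                      # placeholder one-body exponent
    return 0.5*np.sum(x**2, axis=-1)

def V_i(i, r, R):
    """V_i = -log phi(r_i) + sum_{j != i} u(|r_i - r_j|), eq. (Vi):
    O(M) for the orbital sum plus O(N) for the pair sum.
    r: (N,3) particle positions; R: (M,3) orbital centers."""
    log_phi = np.log(np.sum(np.exp(-theta(r[i] - R))))
    d = np.linalg.norm(np.delete(r, i, axis=0) - r[i], axis=1)
    return -log_phi + np.sum(u(d))
\end{verbatim}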
\begin{algorithm} \caption{Metropolis-Hastings (MH) algorithm for variational Monte Carlo } \label{alg:mh} \begin{algorithmic} \FOR{ nt=1, num\_steps} \FOR{np=1, num\_particles} \STATE{ Randomly pick an atom $i$ } \STATE{ e\_old = $V_i$ in \eqref{eq: Vi}; } \STATE{ $\bf r$\_old = $\bf r$\_i; } \STATE{ ${\bf r}$\_i $\leftarrow$ ${\bf {r}}$\_i + ( (rand() -0.5)*$\Delta$x, (rand() -0.5)*$\Delta $y, (rand() -0.5)*$\Delta$z ); } \STATE{ Compute the energy e\_new= $V_i$ and $\Delta$E = e\_new - e\_old; } \IF{ exp[-$2\Delta$E] $<$ rand() } \STATE{$\bf r$\_i= $\bf r$\_old \ (reject the move)} \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} It is clear from \eqref{eq: phii} and \eqref{eq: Vi} that updating the position of one particle requires $\mathcal{O}(N+M)$ operations. Our goal is to reduce the cost of this computation to $\mathcal{O}(1).$ \subsection{A random batch algorithm based on the over-damped Langevin Dynamics} The idea behind the random batch algorithm can be best explained in terms of an over-damped Langevin dynamics, \begin{equation}\label{eq: lgv} d {\boldsymbol r}_i = \nabla \log \phi(\boldsymbol r_i) dt - \sum_{\overset{j=1}{j\ne i}}^N \nabla_{\boldsymbol r_i} u(|\boldsymbol r_i - \boldsymbol r_j|) dt + dW_i(t), \quad 1 \le i \le N. \end{equation} Here the $W_i(t)$'s are independent Wiener processes. The empirical measure $f(\boldsymbol r,t)$ of this dynamics satisfies the Fokker-Planck equation (FPE), \begin{equation}\label{eq: fpe} \partial_t f = - \nabla \cdot \big( \boldsymbol v f \big) + \frac12 \triangle f, \end{equation} where $\boldsymbol v= (\boldsymbol v_1, \boldsymbol v_2, \ldots, \boldsymbol v_N)$ and \begin{equation}\label{eq: vi} \boldsymbol v_i=\nabla \log \phi(\boldsymbol r_i) - \sum_{\overset{j=1}{j\ne i}}^N \nabla_{\boldsymbol r_i} u(|\boldsymbol r_i - \boldsymbol r_j|),\end{equation} is interpreted as a drift velocity. Under suitable conditions \cite{Mattingly:02}, the dynamical system with potential given by \eqref{eq: pr} is ergodic, and the PDF $p(\boldsymbol r)$ in \eqref{eq: pr} is the unique equilibrium measure of this stochastic system. Therefore the numerical integration of the SDEs \eqref{eq: lgv} offers a route to navigate to \eqref{eq: pr} and sample the energy. Using the over-damped Langevin equation to sample the Gibbs distribution is a widely used method. In the context of VMC, this approach has been adopted by Scemama et al. \cite{scemama2006efficient} to improve standard methods. In addition, they combined the Langevin dynamics with the Metropolis-Hastings algorithm to accept/reject the produced samples. A direct discretization, e.g., the Euler-Maruyama method \cite{kloeden2013numerical}, would involve the following step, \begin{equation}\label{eq: EM} {\boldsymbol r}_i(t+\Delta t) = {\boldsymbol r}_i(t) + \nabla \log \phi(\boldsymbol r_i) \Delta t - \sum_{j\ne i} \nabla_{\boldsymbol r_i} u(|\boldsymbol r_i(t) - \boldsymbol r_j(t)|) \Delta t + \Delta W_i, \quad 1 \le i \le N. \end{equation} Here we assume that the step size $\Delta t$ is uniform, and the discrete time is given by $\mathcal{T} := \{n\Delta t, n\geq 0 \}.$ The method \eqref{eq: EM} is applied at each time step $t\in \mathcal{T}$. At each step, $\Delta W_i$ is sampled from a normal distribution with zero mean and variance $\Delta t.$ Although the Euler-Maruyama method is completely different from the Metropolis-Hastings algorithm, they nevertheless have a similar computational cost for updating the position of each particle.
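This cost is made explicit by a direct implementation of \eqref{eq: EM}; in the following sketch (ours), the one-body and pair gradient callbacks are assumed to be supplied by the user, e.g., by differentiating \eqref{eq: ansatz}:
\begin{verbatim}
import numpy as np

def euler_maruyama_step(r, dt, grad_log_phi, grad_u, rng):
    """One step of eq. (EM). grad_log_phi maps (N,3) positions to a
    fresh (N,3) array of one-body drifts, costing O(N*M); grad_u(x, y)
    is the gradient of u(|x - y|) with respect to x. The double loop
    makes the O(N^2) pair cost explicit."""
    N = r.shape[0]
    drift = grad_log_phi(r)
    for i in range(N):
        for j in range(N):
            if j != i:
                drift[i] -= grad_u(r[i], r[j])
    return r + drift*dt + np.sqrt(dt)*rng.standard_normal((N, 3))

# usage: rng = np.random.default_rng(); r = euler_maruyama_step(...)
\end{verbatim}
The double loop is precisely the $\mathcal{O}(N^2)$ bottleneck that the random batch strategy below removes.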
More specifically, one has to compute the interactions with all other particles ($ u(|\boldsymbol r_i(t) - \boldsymbol r_j(t)|) $ for all $j \neq i$). In addition, one needs to compute $\log \phi(\boldsymbol r_i)$, which is given by, \begin{equation}\label{eq: log-phi} \log \phi(\boldsymbol r_i) = \log \sum_{\alpha=1}^M e^{-\theta(\boldsymbol r_i - R_\alpha)}. \end{equation} Together, they contribute $\mathcal{O}(M+N)$ operations for each particle at each time step. \medskip To reduce the cost of evaluating the two-body interactions, the RBM proceeds as follows (this corresponds to the RBM with replacement in \cite{jin2020random}): At each step, one randomly picks two particles, $i$ and $j$, computes their interaction, $\nabla_{\boldsymbol r_i} u(|\boldsymbol r_i - \boldsymbol r_j|)$, and then updates their positions as follows, \begin{equation} \label{eq-1} \left\{ \begin{aligned} {\boldsymbol r}_i(t+\Delta t) &= {\boldsymbol r}_i(t) + \nabla \log \phi(\boldsymbol r_i) \Delta t - (N-1) \nabla_{\boldsymbol r_i} u(|\boldsymbol r_i - \boldsymbol r_j|) \Delta t + \Delta W_i, \\ {\boldsymbol r}_j(t+\Delta t) &= {\boldsymbol r}_j(t) + \nabla \log \phi(\boldsymbol r_j) \Delta t - (N-1) \nabla_{\boldsymbol r_j} u(|\boldsymbol r_i - \boldsymbol r_j|) \Delta t + \Delta W_j. \end{aligned} \right. \end{equation} Notice that $\nabla_{\boldsymbol r_j} u(|\boldsymbol r_i - \boldsymbol r_j|)=-\nabla_{\boldsymbol r_i} u(|\boldsymbol r_i - \boldsymbol r_j|) $, thus only one of them needs to be computed. The factor $(N-1)$ accounts for the fact that we are using {\it one} term $u(|\boldsymbol r_i - \boldsymbol r_j|)$ to represent the interactions with all $(N-1)$ other particles. In general, it is also possible to pick larger random batches; batches of two particles are the most popular choice. In light of \eqref{eq: log-phi}, the computation of the one-body term still involves $\mathcal{O}(M)$ operations. However, since \begin{equation}\label{eq: 1-body} \nabla \log \phi(\boldsymbol r_i) = \displaystyle \sum_{\alpha=1}^M - \nabla \theta(\boldsymbol r_i - R_\alpha) q_\alpha^i, \quad q_\alpha^i = \frac{e^{-\theta(\boldsymbol r_i - R_\alpha)}}{\sum_{\beta=1}^M e^{-\theta(\boldsymbol r_i - R_\beta)}}, \end{equation} where the coefficients $q_\alpha^i$ are non-negative and $\sum_\alpha q_\alpha^i =1,$ the log-gradient term can be viewed as a statistical average with respect to the discrete probability $\left\{q_\alpha^i\right\}_{\alpha=1}^M.$ So a simple idea is to pick {\it just one} term $\alpha$ randomly, {e.g.}, by performing one step of a direct Monte Carlo method. The implementation is straightforward: Assume that one starts with $\alpha$ and computes $e_{old}=\theta(\boldsymbol r_i - R_\alpha)$; one then randomly picks $1 \le \beta \le M$ and computes $e_{new}=\theta(\boldsymbol r_i - R_\beta)$. We accept $\beta$ with probability $\min\big(1,\exp\big[e_{old}-e_{new}\big]\big).$ \medskip We summarize the random batch algorithm in {\bf Algorithm} \ref{alg:rb-vmc}. \begin{algorithm} \caption{Random batch algorithm for variational Monte Carlo } \label{alg:rb-vmc} \begin{algorithmic} \FOR{nt=1, num\_steps} \FOR{np=1, num\_particles/2} \STATE{Randomly pick two particles $i$ and $j$ with $i\neq j$.} \STATE{Perform one step of the Monte Carlo algorithm with respect to $\left\{q_\alpha^i\right\}$ and select $\alpha$. Compute $\boldsymbol b_i=- \nabla \theta(\boldsymbol r_i - R_\alpha)$.} \STATE{Perform one step of the Monte Carlo algorithm with respect to $\left\{q_\alpha^j\right\}$ and select $\beta$.
Compute $\boldsymbol b_j=- \nabla \theta(\boldsymbol r_j - R_\beta)$.} \STATE{Evaluate $\boldsymbol u_{ij}= -\boldsymbol u_{ji}= -(N-1) \nabla_{\boldsymbol r_i} u(|\boldsymbol r_i - \boldsymbol r_j|).$} \STATE{Update the particle positions, \begin{equation} \begin{aligned} {\boldsymbol r}_i \longleftarrow &{\boldsymbol r}_i + \boldsymbol b_i \Delta t + \boldsymbol u_{ij} \Delta t + \Delta W_i, \\ {\boldsymbol r}_j \longleftarrow &{\boldsymbol r}_j + \boldsymbol b_j \Delta t + \boldsymbol u_{ji} \Delta t + \Delta W_j. \end{aligned} \end{equation}} \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} As a result of the random sampling of the one- and two-body interactions, updating the position of each particle {\it only requires $\mathcal{O}(1)$ operations}. Below, we study the transition density of the random batch algorithm, which in turn serves as a validation of the method. \medskip Another practical issue emerges when the interaction $u(|\boldsymbol r|) $ has a singularity near zero. In this case, a direct implementation of the random batch algorithm would often require much smaller step sizes in the integration of the Langevin dynamics \eqref{eq: lgv} \cite{li2020random}. The issue can be mitigated by separating $u(|\boldsymbol r|)$ into a singular, but short-ranged, part and a long-ranged, but smooth, part \cite{li2020random}. The short-range interactions can be efficiently computed using Verlet's cell-list method, which, for each particle, still involves $\mathcal{O}(1)$ operations. This is a common practice in classical molecular simulations \cite{allen2017computer,frenkel2001understanding}. Meanwhile, the long-range part, which is where most of the computation lies, can be simulated by the random batch algorithm. Here we use a simple approach to separate out the singularity: we introduce a cut-off distance $r_\text{cut}$ and replace the short-range part by an extrapolation using a Taylor expansion, namely, \begin{equation}\label{eq: uLS} u_L(r) = \left\{ \begin{array}{ll} u(r), & r > r_\text{cut}, \\ u(r_\text{cut}) + u'(r_\text{cut})(r-r_\text{cut}) + \frac12 u''(r_\text{cut})(r-r_\text{cut})^2, & \text{otherwise}. \end{array}\right. \end{equation} \begin{figure}[H] \begin{center} \includegraphics[scale=0.4]{ucut} \caption{Separation of the interaction $u(r)=r^{-5}$ with singularity at $r=0$ (solid line) into a long-range interaction $u_L(r)$ (dashed) without singularity, and a short-range interaction $u_S(r)$ (dot-dashed).} \label{fig: ur} \end{center} \end{figure} The short-range part is then defined as $u_S(r) = u(r) - u_L(r).$ Figure \ref{fig: ur} shows an example of how such a decomposition can be easily constructed. \subsection{The transition kernel of the random batch algorithm} \subsubsection{The random batch algorithm for the one-body term} We first consider the Monte Carlo sampling of the one-body term \eqref{eq: 1-body}, and for clarity we place the problem in the setting of solving a $d$-dimensional SDE system, \begin{equation} d\boldsymbol r(t)= \boldsymbol a(\boldsymbol r(t)) dt + \sigma dW_t. \end{equation} Here $\sigma \geq 0$ is a constant; the case $\sigma=0$ is allowed.
In light of \eqref{eq: 1-body}, we consider a vector field $\boldsymbol a$ that can be expressed as, \begin{equation} \boldsymbol a(\boldsymbol r) = \sum_{\alpha=1}^M q_\alpha \boldsymbol a_\alpha(\boldsymbol r), \end{equation} where the coefficients $q_\alpha$ represent a discrete probability density, that is, $q_\alpha \geq 0$ and $\sum_\alpha q_\alpha =1.$ We examine the random algorithm, \begin{equation}\label{eq: rand-1} \boldsymbol r(t+\Delta t) = \boldsymbol r(t) + \boldsymbol a_\alpha\big(\boldsymbol r(t) \big) \Delta t + \sigma \Delta W, \end{equation} where the index $\alpha$ is selected at random according to the discrete density. We consider a uniform step size $\Delta t$, and the update is applied at each step $t.$ Clearly, for $\sigma>0$ the corresponding transition density is given by, \begin{equation} p\big( \boldsymbol r(t+\Delta t)=\boldsymbol y| \boldsymbol r(t) =\boldsymbol x\big) =\sum_{\alpha=1}^M q_\alpha \frac{1}{(2\pi\sigma^2\Delta t)^{d/2}} \exp \left[-\frac{\big(\boldsymbol y- \boldsymbol x - \boldsymbol a_\alpha(\boldsymbol x) \Delta t \big)^2}{2\sigma^2\Delta t} \right]. \end{equation} For any function $A(\boldsymbol r) \in C^4(\mathbb{R}^d)$ with suitable growth conditions \cite{kloeden2013numerical}, one has, \begin{equation} \begin{aligned} &\int_{\mathbb{R}^d} A(\boldsymbol y) p\big( \boldsymbol r(t+\Delta t)=\boldsymbol y| \boldsymbol r(t) =\boldsymbol x\big) d\boldsymbol y \\ &\quad =\sum_{\alpha=1}^M q_\alpha \big[ A(\boldsymbol x) + \boldsymbol a_\alpha(\boldsymbol x) \cdot \nabla A(\boldsymbol x) \Delta t + \frac{\sigma^2}{2} \triangle A(\boldsymbol x) \Delta t + \mathcal{O}(\Delta t^2) \big]\\ &\quad = A(\boldsymbol x) + \boldsymbol a(\boldsymbol x) \cdot \nabla A(\boldsymbol x) \Delta t + \frac{\sigma^2}{2} \triangle A(\boldsymbol x) \Delta t + \mathcal{O}(\Delta t^2). \end{aligned} \end{equation} Therefore, this random algorithm has first-order accuracy in the weak sense, comparable to the Euler-Maruyama method. Even though the drift term $\boldsymbol a(\boldsymbol r)$ is only sampled once at each step, the method is still convergent. To our knowledge, this surprising property was first noticed by E et al. in the context of multiscale methods for SDEs \cite{weinan2005analysis}, where the weak convergence is proved in a more general (multiscale) setting. \subsubsection{The random batch algorithm for pair-wise interactions} We now turn to the SDE system \eqref{eq: lgv} with pair-wise interactions, \begin{equation}\label{eq: sde-pair} d\boldsymbol r_i(t) = \nabla \log \phi (\boldsymbol r_i) dt - \sum_{j \neq i} \nabla u (|\boldsymbol r_i - \boldsymbol r_j|) dt + dW_i(t). \end{equation} By letting $\boldsymbol u_{ij}= -\nabla u (|\boldsymbol r_i - \boldsymbol r_j|)$, we can write the pair-wise part of the drift as, \begin{equation} \boldsymbol u_i = \sum_{j \neq i} \boldsymbol u_{ij}, \;\; \boldsymbol u_{ij}=-\boldsymbol u_{ji}. \end{equation} To study the weak convergence, one may consider the conditional expectation, \begin{equation} \mathbb{E} \Big[ A\big(\boldsymbol r(t+\Delta t)\big) \,\big|\, \boldsymbol r(t)=\boldsymbol x \Big]. \end{equation} This is represented by the transition density as follows, \begin{equation} \mathbb{E} \big[ A\big(\boldsymbol r(t+\Delta t)\big) | \boldsymbol r(t)=\boldsymbol x \big] =\int A(\boldsymbol y) p\big(\boldsymbol r(t+\Delta t)=\boldsymbol y| \boldsymbol r(t)=\boldsymbol x \big) d\boldsymbol y. \end{equation} The transition density for the SDEs \eqref{eq: sde-pair} satisfies the Fokker-Planck equation \cite{kloeden2013numerical}; its explicit form is rarely available.
But with the approximation by the Euler-Maruyama method, \begin{equation} \boldsymbol r_i(t+\Delta t)= \boldsymbol r_i(t) + \nabla \log \phi(\boldsymbol r_i) \Delta t + \boldsymbol u_i \Delta t + \Delta W_i, \end{equation} we can identify an approximate transition kernel, \begin{equation} \begin{aligned} &p^{EM}(\boldsymbol r(t+\Delta t)=\boldsymbol y| \boldsymbol r(t)=\boldsymbol x ) \\ &= \frac{1}{ (2\pi \Delta t)^{3N/2}} \exp \left[ - \big(\boldsymbol y - \boldsymbol x - \nabla \log \phi(\boldsymbol x) \Delta t - \boldsymbol u(\boldsymbol x) \Delta t \big)^2/(2 \Delta t) \right], \end{aligned} \end{equation} where $\boldsymbol u=(\boldsymbol u_1,\ldots,\boldsymbol u_N)$. By the weak It\^o-Taylor expansion \cite{kloeden2013numerical}, we have for the density induced by the Euler-Maruyama method, \begin{equation}\label{eq: pem} \mathbb{E} \big[ A\big(\boldsymbol r(t+\Delta t)\big) | \boldsymbol r(t)=\boldsymbol x \big] = A(\boldsymbol x) + \mathcal{L}A(\boldsymbol x) \Delta t + \mathcal{O}(\Delta t^2), \end{equation} where $\mathcal{L}$ is the generator, \begin{equation} \mathcal{L}A(\boldsymbol x) = \sum_i \big(\nabla \log \phi (\boldsymbol x_i) + \boldsymbol u_i \big) \cdot \nabla_{\boldsymbol x_i} A(\boldsymbol x) + \frac12 \triangle A(\boldsymbol x). \end{equation} The expansion \eqref{eq: pem} is consistent with that of the exact transition density up to $\mathcal{O}(\Delta t^2)$, making the Euler-Maruyama method first order in the weak sense \cite{kloeden2013numerical}. \bigskip We now turn to the random batch algorithm \eqref{eq-1} with replacement \cite{jin2020random}. The convergence property has recently been proved in \cite{JLL2}: \begin{thm} The random batch algorithm over $N/2$ steps has weak order 1. \end{thm} Here we illustrate the weak convergence in terms of the transition density. This also helps us to construct the RBM for the diffusion Monte Carlo method. Since we randomly pick a pair of components to update, the transition density, denoted here by $p^{RB}$, is given by, \begin{equation}\label{eq: prb} p^{RB}\big(\boldsymbol r(t+\Delta t)=\boldsymbol y| \boldsymbol r(t)=\boldsymbol x \big) = \frac{2}{(N-1)N} \sum_{i > j} q_{ij}(\boldsymbol y| \boldsymbol x), \end{equation} where, \begin{equation} \begin{aligned} q_{ij}(\boldsymbol y| \boldsymbol x)= &\frac{1}{ ( 2\pi \Delta t)^{3} } \exp \left[- \big(\boldsymbol y_i - \boldsymbol x_i - \nabla \log \phi(\boldsymbol x_i) \Delta t - (N-1) \boldsymbol u_{ij} \Delta t \big)^2/(2 \Delta t) \right] \\ &\qquad \quad \times \exp \left[- \big(\boldsymbol y_j -\boldsymbol x_j -\nabla \log \phi(\boldsymbol x_j) \Delta t - (N-1) \boldsymbol u_{ji} \Delta t \big)^2/(2 \Delta t) \right]\\ & \displaystyle \qquad \quad \times \prod_{k\ne i, j} \delta(\boldsymbol y_k-\boldsymbol x_k). \end{aligned} \end{equation} The delta functions are included to ensure that when the pair $(i,j)$ is selected, the other components are not updated.
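The kernel \eqref{eq: prb} is straightforward to sample. A minimal Python sketch of one draw is given below (ours; the helpers \texttt{theta}, \texttt{grad\_theta} and \texttt{grad\_u} act on 3-vectors, and the one-body indices \texttt{alpha} are carried along between steps, as in {\bf Algorithm} \ref{alg:rb-vmc}):
\begin{verbatim}
import numpy as np

def rbm_pair_step(r, alpha, R, theta, grad_theta, grad_u, dt, rng):
    # One draw from p^{RB} of (eq: prb): pick a single pair (i, j) and
    # move it according to (eq-1); all other components stay fixed.
    N, M = r.shape[0], R.shape[0]
    i, j = rng.choice(N, size=2, replace=False)

    def one_body_drift(i):
        # one Metropolis step on the weights q_alpha^i of (eq: 1-body)
        beta = rng.integers(M)
        e_old = theta(r[i] - R[alpha[i]])
        e_new = theta(r[i] - R[beta])
        if rng.random() < np.exp(e_old - e_new):
            alpha[i] = beta
        return -grad_theta(r[i] - R[alpha[i]])

    b_i, b_j = one_body_drift(i), one_body_drift(j)
    u_ij = -(N - 1) * grad_u(r[i] - r[j])               # u_ij = -u_ji
    r[i] = r[i] + (b_i + u_ij) * dt + rng.normal(0.0, np.sqrt(dt), 3)
    r[j] = r[j] + (b_j - u_ij) * dt + rng.normal(0.0, np.sqrt(dt), 3)
    return r, alpha
\end{verbatim}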
In the following discussions, we simply write the transition density as $p^{RB}(\boldsymbol y|\boldsymbol x).$ With direct Taylor expansions, one finds that, for any observable $A(\boldsymbol x)$, \begin{equation} \begin{aligned} \int A(\boldsymbol y) q_{ij}(\boldsymbol y|\boldsymbol x) d\boldsymbol y =& A(\boldsymbol x) + \nabla \log \phi(\boldsymbol x_i) \cdot \nabla_{\boldsymbol x_i} A(\boldsymbol x) \Delta t +\nabla \log \phi(\boldsymbol x_j) \cdot \nabla_{\boldsymbol x_j} A(\boldsymbol x) \Delta t \\ &+(N-1) \boldsymbol u_{ij} \cdot \nabla_{\boldsymbol x_i} A(\boldsymbol x) \Delta t + (N-1) \boldsymbol u_{ji} \cdot \nabla_{\boldsymbol x_j} A(\boldsymbol x) \Delta t \\ &+ \frac12 \triangle_{\boldsymbol x_i} A(\boldsymbol x) \Delta t + \frac12 \triangle_{\boldsymbol x_j} A(\boldsymbol x) \Delta t +\mathcal{O} (\Delta t^2). \end{aligned} \end{equation} Combining this with \eqref{eq: prb}, we have, \begin{equation} \begin{aligned} \int A(\boldsymbol y) p^{RB}(\boldsymbol y|\boldsymbol x) d\boldsymbol y =& A(\boldsymbol x) + \frac{2\Delta t}{N} \Big\{ \sum_i \nabla \log \phi(\boldsymbol x_i)\cdot \nabla_{\boldsymbol x_i} A(\boldsymbol x) \\ & + \sum_i \sum_{j\ne i} \boldsymbol u_{ij} \cdot \nabla_{\boldsymbol x_i} A(\boldsymbol x) + \frac12 \triangle A(\boldsymbol x)\Big\} + \mathcal{O}(\Delta t^2). \end{aligned} \end{equation} Therefore, the random batch algorithm with replacement, when applied to one batch of two particles, has the same accuracy as the Euler-Maruyama method over a time step of $2\Delta t/N.$ Note that one full time step of the Euler-Maruyama method corresponds to $N/2$ such steps of the RBM with replacement. \subsection{Numerical Results} We conduct numerical experiments with ${}^4$He atoms interacting with a two-dimensional lattice. Since their total spin is zero, ${}^4$He atoms are bosons. Owing to their superfluid properties and many observed quantum effects, ${}^4$He systems have been extensively studied by computer simulations. Acting as a substrate, the lattice has a triangular structure with lattice spacing given by $a_0= 4.2576 $ \AA. Such a lattice can be generated using rectangular unit cells, each of which contains two atoms. For example, Figure \ref{fig: latt} shows such a system with $12\times 7$ unit cells and a total of 168 atoms. The model is adapted from \cite{joly1992helium}. We choose \AA~as the length unit, and energies are measured in units of $k_B\,$Kelvin. \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{lattice.jpg} \caption{A two-dimensional lattice with Helium atoms.} \label{fig: latt} \end{figure} Particles that represent the wave function $\Phi_0$ are created randomly near the nuclei. We follow the setup in \cite{pang2014diffusion}. In particular, in the wave function ansatz \eqref{eq: ansatz}, the one-particle wave function is assumed to be, \begin{equation} \phi(\boldsymbol r_i) = \exp \big( -(z_i - z_e)^2/z_0^2 \big) \sum_{\alpha=1}^M \exp \big( -(\boldsymbol r_i - R_\alpha)^2/r_0^2\big). \end{equation} Here $z_i$ indicates the third component of the coordinate $\boldsymbol r_i.$ In addition, the two-body terms in the Jastrow factor are chosen to consist of both short- and long-range terms, \begin{equation}\label{eq: ur} u(r) = \left( \frac{a}{r}\right)^5 + \frac{b^2}{c^2+r^2}. \end{equation} Although the first term decays rather quickly, we do not use an abrupt truncation of the function. Instead, we follow the construction \eqref{eq: uLS} and split off a part that vanishes beyond a cut-off distance $r_{cut}$.
The remaining part is merged into the second term in \eqref{eq: ur} and regarded as a long-range interaction. The parameters, in units of \AA, are given in Table \ref{params}. \begin{table}[thp] \caption{ Model parameters in the QMC simulations of ${}^4$He. \label{params} } \begin{tabularx}{\textwidth}{s|s|s|s|s|s|s} \hline\hline $z_e$ & $z_0$ & $r_0$ & $a$ & $b$ & $c$ &$r_{cut}$ \\ \hline 2.85 & 0.521 &15 & 2.771 & 5.0 & 10.0 & 8.0\\ \hline\hline \end{tabularx} \end{table} We first carry out VMC simulations using the RBM-VMC ({\bf Algorithm} \ref{alg:rb-vmc}) and the Euler-Maruyama method \eqref{eq: EM}. In the simulations, we run the algorithms with 300 ensembles, and the average energy at each step is computed as an average over these ensembles. In principle, the algorithms can be implemented with just one realization, and the ground state energy would then be computed entirely from the time series. But multiple ensembles can be easily implemented in parallel. In addition, the ensembles can later be turned into walkers in the DMC simulations. Figure \ref{fig:vmc} shows the average energy computed from the RBM-VMC and the Euler-Maruyama methods in the time interval $[0,150]$. The step size is $\Delta t= 10^{-3}.$ We observe that both methods relax to equilibrium around $t=25.$ Since the time scale is fictitious, we do not assign a unit to the time variable. \begin{figure}[htbp] \centering \includegraphics[scale=0.36]{vmc_rb_tseries.jpg}\\ \includegraphics[scale=0.36]{vmc_em_tseries.jpg}\\ \includegraphics[scale=0.36]{vmc_ct.jpg} \caption{A comparison of the random batch Algorithm \ref{alg:rb-vmc} (top) to the Euler-Maruyama method (middle). The bottom panel shows the time correlation. } \label{fig:vmc} \end{figure} We also show the time correlation of the sampled energy after the system has reached equilibrium. To obtain a more quantitative comparison, we employed standard MCMC diagnostics. In this context, the relaxation is known as the burn-in period, and a thinning parameter can be used to quantify correlations. More specifically, we use the Raftery-Lewis criteria \cite{raftery1992practical} ($q=0.025, r=0.0125, s=0.95$) and find that the burn-in periods are 23.49 and 38.54, with thinning parameters 0.058 and 0.066, for the Euler-Maruyama method and the RBM, respectively. One can see that the random batch method has a slightly longer burn-in period and a longer correlation time. Since both of these methods are constructed by integrating SDEs in time, we have factored in the step size $\Delta t$ in estimating these parameters. We also show the energy sampled from the Metropolis-Hastings algorithm in Figure \ref{fig:vmc-mh}. The average energy is $2.361113 \times 10^4$ with standard statistical error $1.698.$ Note that it is not straightforward to compare the previous two algorithms to the Metropolis-Hastings algorithm, since the latter method does not have an associated time scale. \begin{figure}[thp] \centering \includegraphics[scale=0.42]{vmc_mh.jpg} \caption{The energy sampled from 200,000 steps of the Metropolis-Hastings algorithm. } \label{fig:vmc-mh} \end{figure} We now compare the CPU time that is needed to move the 300 Markov chains for 1000 steps. In this comparison, we have excluded the cost associated with the energy calculations in the random batch and Euler-Maruyama methods, since the energy is not needed in the burn-in period; even at equilibrium, it is good practice to compute it only every few steps to obtain less correlated samples.
From Table \ref{vmc}, one clearly sees that the RBM is more efficient than the Euler-Maruyama method, mainly due to the random sampling of the pairwise interactions in the Jastrow factor in the wave function \eqref{eq: ansatz}. Both are much more efficient than the Metropolis-Hastings algorithm, mainly because the latter method requires the calculation of the energy at {\it every} step. \begin{table}[thp] \caption{ Comparison of the CPU time (measured in seconds) for several VMC methods. \label{vmc} } \begin{tabularx}{\textwidth}{b|s|s|s} \hline\hline & Metropolis-Hastings & Euler-Maruyama & Random Batch \\ \hline CPU time for a 1000-step sampling period & 1503 & 469 & 54 \\ \hline\hline \end{tabularx} \end{table} Finally, we examine the effect of the time discretization. Unlike the Metropolis-Hastings algorithm, the RBM and Euler-Maruyama methods are biased, and the results depend on the step size. Figure \ref{fig: ebar} shows the averages computed from the two methods for different choices of $\Delta t.$ We use $10^5$ samples collected at equilibrium for the estimation. Compared to the values from the MH algorithm, it can be observed that the Euler-Maruyama method overestimates the ground state energy, while the random batch method underestimates it. \begin{figure}[htbp] \centering \includegraphics[scale=0.2]{vmc_ebar.jpg} \caption{The average of the energy computed from the random batch and Euler-Maruyama methods for various choices of the step size $\Delta t$. } \label{fig: ebar} \end{figure} \section{The Random Batch Algorithm in Diffusion Quantum Monte Carlo Methods} The accuracy of the VMC method is limited by the ansatz of the wave function \eqref{eq: ansatz}. The idea of the DMC is to go back to the time-dependent Schr\"{o}dinger equation and evolve the system in imaginary time, \begin{equation}\label{eq: imag-time} \partial_t \Psi = ( E_T - \widehat{H}) \Psi. \end{equation} Here a rescaled time variable $ i t/\hbar \to t$ has been introduced, and $t$ now represents a fictitious time. Since the transient is not of interest here, we will not keep track of the time scales. Depending on the choice of the reference energy $E_T$, the solution would either decay or grow exponentially, unless $E_T$ coincides with the ground state energy, at which point the wave function converges to the ground state as $t \to +\infty$. Instead of solving \eqref{eq: imag-time} directly, it is often more practical to find $f(\boldsymbol r,t)$ with \begin{equation} f(\boldsymbol r,t) = \Psi(\boldsymbol r,t) \Phi_0(\boldsymbol r). \end{equation} This ansatz has the flavor of importance sampling. In addition, if one chooses $\Psi(\boldsymbol r,0) = \Phi_0(\boldsymbol r)$, then $f(\boldsymbol r, 0)= |\Phi_0|^2 \propto p(\boldsymbol r)$ in \eqref{eq: pr}. Therefore, we can use a VMC method to initialize $f(\boldsymbol r, 0).$ Direct calculations yield the following differential equation \cite{reynolds1982fixed}, \begin{equation}\label{eq: f} \partial_t f = -\nabla\cdot \big( \frac{\hbar^2}{{m}} \boldsymbol v(\boldsymbol r) f\big) + \frac{\hbar^2}{2m} \nabla^2 f + \big(E_T - {E}_\text{tot}(\boldsymbol r) \big) f. \end{equation} The average energy ${E}(t)$ is defined as a weighted average, \begin{equation} {E}(t)= \frac{\displaystyle\int f(\boldsymbol r,t) E_\text{tot}(\boldsymbol r) d\boldsymbol r}{\displaystyle \int f(\boldsymbol r, t) d\boldsymbol r}.
\end{equation} Without the last term on the right-hand side of \eqref{eq: f}, the equation, after a time rescaling $t \to t \hbar^2/m$, reduces to the Fokker-Planck equation \eqref{eq: fpe} associated with the SDE \eqref{eq: lgv}; the last term embodies the influence of the choice of the energy shift on the change of the total mass. Within a short time step, $\Delta t$, the solution of \eqref{eq: f} can be approximated by \cite{reynolds1982fixed}, \begin{equation}\label{eq: f'} f(\boldsymbol r,t+\Delta t) = \int_{\mathbb{R}^{3N} } G(\boldsymbol r, \boldsymbol r', \Delta t) f(\boldsymbol r',t) d\boldsymbol r', \end{equation} where the function $G$, often referred to as the Green's function, is given by \cite{reynolds1982fixed}, \begin{equation} G(\boldsymbol r, \boldsymbol r', \Delta t) = \frac{1}{ \big( 2\pi \sigma^2 \big)^{3N/2}} \exp \left[ - \frac{ \big(\boldsymbol r' - \boldsymbol r - \frac{\hbar^2 \Delta t}{m} \boldsymbol v(\boldsymbol r)\big)^2 }{2\sigma^2} \right] \exp \left[\Delta t\big(E_T - {E}_\text{tot}(\boldsymbol r) \big)\right]. \end{equation} The parameter $\sigma = \hbar\sqrt{\Delta t/m}$, and the vector field $\boldsymbol v$ is given by \eqref{eq: vi}. This Green's function can be interpreted as a transition kernel in a general sense. In terms of an observable $A$, the action of the Green's function is expressed as follows, \begin{equation} \int A(\boldsymbol r') G(\boldsymbol r', \boldsymbol r,\Delta t) d\boldsymbol r' = A(\boldsymbol r) + \frac{\hbar^2}{m} \boldsymbol v(\boldsymbol r) \cdot \nabla A(\boldsymbol r) \Delta t + \frac{\hbar^2}{2m} \triangle A(\boldsymbol r) \Delta t + \Delta t\big(E_T - {E}_\text{tot}(\boldsymbol r) \big) A(\boldsymbol r)+ \mathcal{O}(\Delta t^2). \end{equation} One can write $G(\boldsymbol r', \boldsymbol r,\Delta t) =G_1(\boldsymbol r', \boldsymbol r,\Delta t) G_2(\boldsymbol r', \boldsymbol r,\Delta t),$ with \begin{equation} \begin{aligned} G_1(\boldsymbol r', \boldsymbol r,\Delta t) =& \frac{1}{ \big( 2\pi \sigma^2 \big)^{3N/2}} \exp \left[ - \frac{ \big(\boldsymbol r' - \boldsymbol r - \frac{\hbar^2 \Delta t}{m} \boldsymbol v(\boldsymbol r)\big)^2 }{2\sigma^2} \right], \\ G_2(\boldsymbol r', \boldsymbol r,\Delta t) =& \exp \left[\Delta t \big( E_T - E_\text{tot}(\boldsymbol r) \big) \right]. \end{aligned} \end{equation} Computationally, the two operations are carried out in two steps, which can be viewed as an operator-splitting method. Better results are often obtained with a symmetric splitting, which corresponds to redefining, \begin{equation}\label{eq: G2} G_2(\boldsymbol r', \boldsymbol r,\Delta t) = \exp \left[ \Delta t\big(E_T - \tfrac{1}{2}({E}_\text{tot}(\boldsymbol r) +{E}_\text{tot}(\boldsymbol r')) \big) \right]. \end{equation} A typical DMC algorithm begins with an ensemble of $L$ copies of the system, also known as walkers \cite{anderson1975random}. For each realization, one first solves the SDEs, \begin{equation}\label{eq: lgv'} d \boldsymbol r_i(t) = \frac{\hbar^2}{{m}} \nabla \log \phi(\boldsymbol r_i) dt + \frac{\hbar^2}{{m}} \sum_{j\ne i} \boldsymbol u_{ij} dt + \sigma dW_i(t), \end{equation} with $\boldsymbol u_{ij}$ as defined before \eqref{eq: sde-pair}. This step corresponds to the action of the first Green's function $G_1.$ Specifically, $\boldsymbol r$ and $\boldsymbol r'$ in $G_1$ refer to, respectively, the positions of the particles before and after these SDEs are solved for one time step.
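In code, one update of a single walker can be sketched as follows (our illustration; the hypothetical callables \texttt{drift\_v} and \texttt{total\_energy} evaluate the field $\boldsymbol v$ of \eqref{eq: vi} and the local energy \eqref{eq: E-tot'}, respectively):
\begin{verbatim}
import numpy as np

def dmc_walker_step(r, drift_v, total_energy, E_T, dt, hbar2_over_m, rng):
    # Drift-diffusion per (eq: lgv'), then the branching weight from the
    # symmetric splitting (eq: G2); returns the new positions and the
    # number of copies of this walker to keep (birth/death process).
    sigma = np.sqrt(dt * hbar2_over_m)
    e_old = total_energy(r)
    r_new = r + hbar2_over_m * drift_v(r) * dt \
              + sigma * rng.normal(size=r.shape)
    w = np.exp(dt * (E_T - 0.5 * (e_old + total_energy(r_new))))
    n_copies = int(w) + (rng.random() < w - int(w))
    return r_new, n_copies
\end{verbatim}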
As alluded to at the beginning of this section, the SDEs \eqref{eq: lgv'} coincide with the over-damped Langevin equations \eqref{eq: lgv} after a simple rescaling of the time variable. One can think of this step as approximating the function $f(\boldsymbol r, t)$ by a sum of delta functions, \begin{equation} f(\boldsymbol r, t) \approx \frac{1}{L} \sum_{\ell=1}^L \delta(\boldsymbol r - \boldsymbol r^{(\ell)}(t) ). \end{equation} The Green's function $G_1$ is precisely the corresponding transition kernel. In particular, the number of walkers is not changed by this step. After the particle positions at step $t+\Delta t$ are updated by $G_1$, the Green's function $G_2$ in \eqref{eq: G2} needs to be incorporated. This is done by using a birth/death process to determine whether a realization should be removed or duplicated. For each walker, one computes the weight factor, \begin{equation} w(t+\Delta t) = \exp \left[ \Delta t\big(E_T - \tfrac{1}{2}({E}_\text{tot}(\boldsymbol r) +{E}_\text{tot}(\boldsymbol r')) \big) \right], \end{equation} corresponding to the Green's function $G_2$ in \eqref{eq: G2}, and the walkers are duplicated or removed based on the magnitude of $w(t+\Delta t)$. The overall algorithm is summarized in {\bf Algorithm} \ref{alg:dmc}, which will later be referred to as the direct DMC method. \begin{algorithm} \caption{Diffusion Monte Carlo (Direct DMC) } \label{alg:dmc} \begin{algorithmic} \STATE{Sample the initial num\_walkers walkers using the VMC algorithm. Set $M(1)$ as the initial number of walkers. Set $E_T$ to be the average energy computed from the VMC. } \smallskip \FOR{nt=1, num\_steps} \smallskip \FOR{n=1, num\_walkers} \smallskip \STATE{Compute the energy ${E}_\text{tot}(\boldsymbol r).$} \STATE{Drift and diffuse the $n$th walker according to \eqref{eq: lgv'}. } \STATE{Compute the energy ${E}_\text{tot}(\boldsymbol r').$} \STATE{Determine the probability of the branching process: $$w_n= \exp \left[ \Delta t \big(E_T- ({E}_\text{tot}(\boldsymbol r)+ {E}_\text{tot}(\boldsymbol r'))/2 \big)\right].$$ } \ENDFOR \FOR{$n$=1, num\_walkers} \IF{ $w_n < 1$ } \STATE{The walker survives with probability $w_n.$} \ELSE \STATE{The walker is duplicated $\floor{w_n}$ times. A new walker is created with probability $w_n-\floor{w_n}$.} \ENDIF \ENDFOR \STATE{Recount the number of walkers num\_walkers, and set it to $M(\text{nt}+1)$.} \STATE{Adjust the energy shift: $E_T \leftarrow E_T + \kappa \ln \frac{M(\text{nt}+1)}{M(\text{nt})}. $} \ENDFOR \end{algorithmic} \end{algorithm} \subsection{The random batch algorithm for DMC} Since the initialization, as well as the drift-diffusion step of the DMC, involves the solution of the over-damped Langevin dynamics \eqref{eq: lgv} (or \eqref{eq: lgv'}), our random batch algorithm for VMC can be directly applied to this part of the DMC method, to mitigate the same issue encountered in the Metropolis-Hastings algorithm. It remains to treat the transition kernel $G_2(\boldsymbol r', \boldsymbol r, \Delta t)$ in \eqref{eq: G2}. The primary challenge is that computing the energy at each step requires $\mathcal{O}((N+M)N)$ operations to update the positions of all $N$ particles.
To reduce this part of the computation, we propose to write the total energy \eqref{eq: E-tot'} as follows, \begin{equation}\label{eq: e-parti} {E}_\text{tot}(\boldsymbol r)= \sum_{i=1}^N E_1(\boldsymbol r_i) + \sum_{1\le i<j\le N} E_2(\boldsymbol r_i, \boldsymbol r_j) + \sum_{1\le i <j<k\le N} E_3(\boldsymbol r_i, \boldsymbol r_j, \boldsymbol r_k). \end{equation} These three terms are the on-site, two-body, and three-body contributions. The on-site energy comes from the one-particle wave function and the external potential, \begin{equation}\label{eq: E1} E_1(\boldsymbol r_i)= - \frac{\hbar^2}{2m} \nabla^2 \ln \phi(\boldsymbol r_i) - \frac{\hbar^2}{2m} |\nabla \ln \phi(\boldsymbol r_i) |^2 + \sum_{\alpha=1}^{M} U(\boldsymbol r_i - R_\alpha). \end{equation} To ensure that this part of the energy is evaluated with $\mathcal{O}(1)$ operations, we randomly pick one atom $\alpha$ in the last term, the external potential, and compute, \begin{equation}\label{eq: E1'} E_1(\boldsymbol r_i)= - \frac{\hbar^2}{2m} \nabla^2 \ln \phi(\boldsymbol r_i) - \frac{\hbar^2}{2m} |\nabla \ln \phi(\boldsymbol r_i) |^2 + M U(\boldsymbol r_i - R_\alpha). \end{equation} Let $\boldsymbol r_{ij}=\boldsymbol r_i -\boldsymbol r_j$ be the relative position and $r_{ij}=|\boldsymbol r_{ij}|$ the corresponding distance. The two-body term consists of the following terms, \begin{equation}\label{eq: E2} E_2(\boldsymbol r_i, \boldsymbol r_j)= - \frac{\hbar^2}{m} \nabla^2 u( r_{ij}) + \frac{\hbar^2}{m} \big(\nabla \ln \phi(\boldsymbol r_i)-\nabla \ln \phi(\boldsymbol r_j)\big) \cdot \nabla u( r_{ij}) + \frac{\hbar^2}{m} |\nabla u( r_{ij})|^2 + W( r_{ij}). \end{equation} The three-body term arises from the $\|\nabla V\|^2$ term in \eqref{eq: E-tot'} (the second term in the kinetic energy \eqref{eq: Ki}), and it is given by, \begin{equation}\label{eq: E3} E_3(\boldsymbol r_i, \boldsymbol r_j, \boldsymbol r_k)= \frac{\hbar^2}{m} \Big[ \nabla u( r_{ij}) \cdot \nabla u( r_{ik}) + \nabla u(r_{ji}) \cdot \nabla u( r_{jk}) + \nabla u( r_{ki}) \cdot \nabla u( r_{kj}) \Big]. \end{equation} This partition of the energy is structured in the same manner as in molecular dynamics models \cite{allen2017computer}. In the random batch algorithm, we randomly pick a batch $C_I$ with three particles: $C_I=\{ i, j, k\}.$ We first update the positions of the three particles (drift and diffuse) by solving the over-damped Langevin dynamics \eqref{eq: lgv'} using the random batch algorithm with batch size 3. This is demonstrated in \eqref{eq: move-ijk} in {\bf Algorithm \ref{alg:dmc-rb}}. We then define a {\it local} energy, \begin{equation}\label{eq: EIijk} \begin{aligned} {E}_I(\boldsymbol r_i, \boldsymbol r_j, \boldsymbol r_k) =& E_1(\boldsymbol r_i) + E_1(\boldsymbol r_j) + E_1(\boldsymbol r_k) \\ & + \tfrac{N-1}{2} \Big[ E_2(\boldsymbol r_i,\boldsymbol r_j) + E_2(\boldsymbol r_j,\boldsymbol r_k) + E_2(\boldsymbol r_k,\boldsymbol r_i) \Big]\\ & + \tfrac{(N-1)(N-2)}{2} E_3(\boldsymbol r_i,\boldsymbol r_j,\boldsymbol r_k). \end{aligned} \end{equation} In light of \eqref{eq: E1'}, \eqref{eq: E2}, and \eqref{eq: E3}, the cost of evaluating this local energy \eqref{eq: EIijk} remains $\mathcal{O}(1).$ In the branching step of our new DMC method, we assign to each batch a weight, \begin{equation} w_I = \exp \left[ \Delta t\big( \tfrac3N E_T- {E}_I(\boldsymbol r_i, \boldsymbol r_j, \boldsymbol r_k ) \big)\right], \end{equation} which helps to determine whether a walker should be continued, duplicated, or deleted.
This amounts to an approximation of the Green's function $G_2$. To see this, note that, on average, the effect of this random procedure on $f(\boldsymbol r, t)$ is given by, \begin{equation}\label{eq: G2-rbm} \begin{aligned} \frac{6}{N(N-1)(N-2)} &\sum_{i<j<k} w_I (\boldsymbol r_i, \boldsymbol r_j, \boldsymbol r_k ) f(\boldsymbol r, t) \\ =& \frac{6}{N(N-1)(N-2)} \sum_{i<j<k} \Big[ 1 + \tfrac{3}{N} E_T \Delta t - \big( E_1(\boldsymbol r_i) + E_1(\boldsymbol r_j) + E_1(\boldsymbol r_k) \big) \Delta t \Big] f(\boldsymbol r, t)\\ & - \frac{3}{N(N-2)} \sum_{i<j<k} \big( E_2(\boldsymbol r_i,\boldsymbol r_j) + E_2(\boldsymbol r_j,\boldsymbol r_k) + E_2(\boldsymbol r_k,\boldsymbol r_i) \big) \Delta t f(\boldsymbol r, t)\\ & - \frac{3}{N} \sum_{i<j<k} E_3(\boldsymbol r_i,\boldsymbol r_j,\boldsymbol r_k) \Delta t f(\boldsymbol r, t) + \mathcal{O}(\Delta t^2)\\ = & f(\boldsymbol r, t) + (E_T - {E}_\text{tot}(\boldsymbol r) ) \frac{3\Delta t}N f + \mathcal{O}(\Delta t^2). \end{aligned} \end{equation} Therefore the random batch algorithm is consistent, over an effective time step $3\Delta t/N$, with the Green's function $G_2$ in \eqref{eq: G2} up to order $\mathcal{O}(\Delta t^2)$. Note that the evaluation of ${E}_I$ only requires $\mathcal{O}(1)$ operations. In the implementation, to avoid frequent removal and duplication of walkers, we apply the branching process after $N/3$ batches of particles have been updated. In this case, the weight function is defined by collecting the local energies from all batches (denoted by $I_m$ here), \begin{equation} w(\boldsymbol r) = \exp \left[\Delta t \big( E_T - \widetilde{E}_\text{tot} \big)\right], \quad \widetilde E_\text{tot}=\sum_{m=1}^{N/3} {E}_{I_m}. \end{equation} Similar to \eqref{eq: G2-rbm}, one can verify by direct calculations that the branching process with probability $w(\boldsymbol r)$ is also consistent with the Green's function $G_2$ in \eqref{eq: G2}. Overall, the algorithm is summarized in {\bf Algorithm} \ref{alg:dmc-rb}. \begin{algorithm} \caption{Diffusion Monte Carlo using Random Batch (RBM-DMC) } \label{alg:dmc-rb} \begin{algorithmic} \STATE{Sample the initial num\_walkers walkers using a VMC algorithm. Set $M(1)$ to be the initial number of walkers. Set $E_T$ to be the average energy computed from the VMC.} \smallskip \FOR{nt=1, num\_steps} \smallskip \FOR{n=1, num\_walkers} \smallskip \FOR{m=1, N/3} \STATE{Randomly pick a batch $I_m$ with three particles $(i, j, k)$.} \STATE{Perform one step of the Monte Carlo algorithm with respect to $\left\{q_\alpha^i\right\}$ and select $\alpha$. Compute $\boldsymbol b_i=- \nabla \theta(\boldsymbol r_i - R_\alpha)$. Similarly compute $\boldsymbol b_j$ and $\boldsymbol b_k$.} \STATE{Evaluate $\boldsymbol u_{ij}= -\boldsymbol u_{ji}= -\tfrac{N-1}{2} \nabla_{\boldsymbol r_i} u(|\boldsymbol r_i - \boldsymbol r_j|).$ Similarly evaluate $\boldsymbol u_{ik}$ and $\boldsymbol u_{jk}$.} \STATE{Update the positions of the three particles, \begin{equation}\label{eq: move-ijk} \begin{aligned} {\boldsymbol r}_i \longleftarrow &{\boldsymbol r}_i + \frac{\hbar^2}m \boldsymbol b_i \Delta t + \frac{\hbar^2}m (\boldsymbol u_{ij} + \boldsymbol u_{ik}) \Delta t + \sigma \Delta W_i, \\ {\boldsymbol r}_j \longleftarrow &{\boldsymbol r}_j + \frac{\hbar^2}m \boldsymbol b_j \Delta t + \frac{\hbar^2}m (\boldsymbol u_{ji} + \boldsymbol u_{jk}) \Delta t + \sigma \Delta W_j, \\ {\boldsymbol r}_k \longleftarrow &{\boldsymbol r}_k + \frac{\hbar^2}m \boldsymbol b_k \Delta t + \frac{\hbar^2}m (\boldsymbol u_{ki} + \boldsymbol u_{kj}) \Delta t + \sigma \Delta W_k.
\\ \end{aligned} \end{equation} \STATE{Compute the local batch energy ${E}_{I_m}\big(\boldsymbol r(t+\Delta t)\big)$ from \eqref{eq: EIijk}. } } \ENDFOR \STATE{Determine the probability of the branching process from $E_n,$ $E_n=\sum_{m=1}^{N/3} {E}_{I_m},$ $$w_n= \exp \left[\Delta t \big( E_T - {E}_n \big)\right].$$ } \ENDFOR \STATE{Branch the walkers and adjust the energy $E_T$ as in the direct DMC algorithm} \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Numerical Results} Now we test the RBM-DMC ({\bf Algorithm} \ref{alg:dmc-rb}) and compare the results with the direct DMC method ({\bf Algorithm} \ref{alg:dmc}). For the initialization, we first apply a VMC method using the ansatz \eqref{eq: ansatz} for the wave function $\Phi_0.$ The Metropolis-Hastings Monte Carlo method is used in both methods so that they start at the same states. 300 ensembles are created by sub-sampling one sample out of every 500 steps from the VMC runs to avoid correlations among the ensembles. For both methods, we use $\Delta t=10^{-4}$ and run $200,000$ steps of simulations. \begin{figure}[hptb] \centering \includegraphics[scale=0.1]{dmc_tseries.jpg}\\ \includegraphics[scale=0.1]{dmc_np.jpg}\\ \includegraphics[scale=0.1]{dmc_ct.jpg} \caption{A comparison of the RBM-DMC ({\bf Algorithm} \ref{alg:dmc-rb}) to the direct DMC method ({\bf Algorithm} \ref{alg:dmc}). Top: time series; Middle: The number of walkers; Bottom: time correlation. } \label{fig:dmc} \end{figure} Figure \ref{fig:dmc} shows the time series (top panel) generated by the two algorithms. We observe that the random batch method generates samples with slightly larger fluctuations during the burn in period. But the fluctuations eventually become comparable to those from the direct DMC simulations. The population of the walkers (middle panel) exhibits a similar behavior. We also examined the time correlation of the total energy \eqref{eq: E-tot'}. This is done by using the time series within the time interval $(10,20)$ and regard it as a stationary process. We conduct simulations with various choices of the step size $\Delta t$ to monitor the convergence. Figure \ref{fig:dmc_err} shows the energy computed from each instance. We decreased $\Delta t$ from $ 10^{-4}$ to $0.5\times10^{-4}$, and then further to $0.25\times10^{-4}.$ We observe that the results from the direct DMC and the random batch DMC methods both exhibit linear convergences. The extrapolated energy values at $\Delta t =0$ are $-2.39723\times 10^4$ and $-2.39756\times 10^4$, respectively. \begin{figure}[htbp] \centering \includegraphics[scale=0.18]{dmc_err.jpg} \caption{The computed average energy for several choices of the step size $\Delta t$ . } \label{fig:dmc_err} \end{figure} Since our primary focus is on the speedup of the computation, We examine the CPU runtime for various system sizes. More specifically, we increase the system size from the original 168 particles, to $N= 378$, $N=672$ and $N=1050$ particles, and in each case, we run the direct DMC and the RBM-DMC for 1000 steps. For the initial system $N=168$, the runtimes are 129.29 and 474.44 (seconds) for RBM-DMC and direct DMC, respectively. In this case, the random batch algorithm requires 1/4 of the CPU time, which is a moderate speedup. But as shown in Figure \ref{fig:CPU}, the CPU time for the direct DMC method increases much more rapidly as $N$ increases. \begin{figure}[htbp] \centering \includegraphics[scale=0.09]{dmc_cpu.jpg} \caption{A comparison of the CPU runtime (in seconds) for running 1000 steps of DMC. 
} \label{fig:CPU} \end{figure} With the advent of modern high-performance computer clusters, QMC methods have become a leading candidate for computing the electronic structure of relatively large systems. As demonstrated in \cite{kim2018qmcpack}, direct DMC methods can be implemented on multi-core processors by distributing the random walkers among different units. As a first step toward this goal, we study the $^4$He system on a graphite lattice with a non-homogeneous deformation. More specifically, mimicking an external load, we displace the atoms in the third direction according to a Gaussian profile: \begin{equation}\label{eq: defm} z_j = z_e + h_0 \exp \left[ -(x_j^2 + y_j^2)/1000\right], \end{equation} with $h_0$ indicating the height of the sheet at the origin. To establish such a spatial profile, a much larger system is needed. We consider a system with 5016 atoms, as shown in Figure \ref{fig:dsp}. We implemented the RBM-DMC ({\bf Algorithm} \ref{alg:dmc-rb}) on 60 CPUs by distributing the walkers among the CPUs. After each branching step, the walkers are re-distributed to maintain load balance. \begin{figure}[htbp] \centering \includegraphics[scale=0.3]{dspl_atoms.jpg} \caption{The out-of-plane displacement of the atoms on the graphite lattice. } \label{fig:dsp} \end{figure} We first perform VMC simulations with 180 ensembles on the two systems: the homogeneous lattice ($h_0=0$) and the deformed lattice (we pick $h_0=2a_0$). This is done by using the RBM-DMC ({\bf Algorithm} \ref{alg:dmc-rb}) with the branching process turned off. We choose $\Delta t=10^{-4}$ and run the algorithms for $160,000$ steps. Figure \ref{fig:vmcL} shows the energy computed from the iterations and averaged over the 180 ensembles. In both cases, the energy exhibits a sharp relaxation before reaching a steady profile. We notice that the deformation leads to a higher ground state energy. Each of the VMC simulations takes about 30 hours. \begin{figure}[htbp] \centering \includegraphics[scale=0.23]{vmcLp.jpg} \includegraphics[scale=0.23]{vmcLq.jpg} \caption{The energy from the VMC simulations. Left: undeformed lattice; Right: with deformation \eqref{eq: defm}. The insets show the energy after the system reaches equilibrium. } \label{fig:vmcL} \end{figure} At the end of the VMC run, we computed the particle density from the 180 ensembles. For visualization purposes, we use the smoothed kernel density estimator (\texttt{mvksdensity} in MATLAB) with bandwidth $1.5$~\AA~to obtain the density. In this method, the position of each particle (out of 5016) is interpreted as a data point, and the kernel density includes the contributions from all particles and all ensembles. Figure \ref{fig:den} shows the density plots for both cases. An interesting observation is that in the deformed case, a higher density is found in an annular region, where the deformation is the largest. \begin{figure}[htbp] \centering \includegraphics[scale=0.156]{densityp.jpg} \includegraphics[scale=0.156]{densityq1.jpg} \includegraphics[scale=0.156]{densityq2.jpg} \caption{The particle density. Left: undeformed lattice after the VMC sampling; Middle: System with deformation after the VMC sampling; Right: System with deformation after the DMC sampling. } \label{fig:den} \end{figure} With the walkers prepared by the VMC simulation, we perform DMC simulations with the RBM-DMC method ({\bf Algorithm} \ref{alg:dmc-rb}). Again we use $\Delta t=10^{-4}$, and we run 240,000 steps of the algorithm.
We monitor the energy, and Figure \ref{fig:dmcL} shows how the energy changes during the simulations. The system with the homogeneous lattice takes slightly longer to reach the steady state, and therefore we run the simulation for an extended period (360,000 steps). \begin{figure}[htbp] \centering \includegraphics[scale=0.34]{dmcLp.jpg} \includegraphics[scale=0.34]{dmcLq.jpg} \caption{The energy from the DMC simulations. Left: undeformed lattice; Right: with deformation. } \label{fig:dmcL} \end{figure} \section{Summary and Discussions} We have constructed random batch algorithms for quantum Monte Carlo simulations. The main objective is to alleviate the computational cost associated with the calculation of two-body interactions, including the particle interactions in the potential energy and the pairwise terms in the Jastrow factor. In the framework of variational Monte Carlo methods, the random batch algorithm is constructed based on the over-damped Langevin dynamics, so that updating the position of each particle only requires $\mathcal{O}(1)$ operations per time step. Consequently, for the $N$-particle system, the computational cost per time step is reduced from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$. For the diffusion Monte Carlo method, we proposed to decompose the total energy into on-site, two-body, and three-body terms, which can be evaluated within a random batch of three particles. This still guarantees $\mathcal{O}(N)$ operations per time step for the $N$-particle system. We have placed the main emphasis on the speedup of the computation. The speedup is more significant for larger systems, where the asymptotic scaling kicks in. In terms of accuracy, we have shown that the random batch algorithms have first-order weak accuracy, comparable to the Euler-Maruyama method. This is certainly a low-order method. For instance, in the VMC simulations, we observed that the random batch algorithm remains stable when $\Delta t=0.05,$ but the step size has to be reduced to at least $\Delta t = 0.001$ to ensure good accuracy. In this case, high-order diffusion Monte Carlo methods \cite{forbert2001fourth} would be helpful, and the construction of random batch algorithms with higher accuracy is certainly an open problem. Another common practice to correct the bias is to combine the algorithm with a Metropolis-Hastings step to accept/reject samples generated by the random batch method \cite{reynolds1982fixed,scemama2006efficient}. Maintaining detailed balance in the random batch algorithm is another interesting direction. In principle, some of these interactions in QMC can be (and have been) treated using fast summation methods, {e.g.}, the fast multipole methods for Coulomb interactions or Gaussian functions \cite{cheng1999fast,greengard1991fast}. But compared to the fast summation methods, the implementation of the RBM is much easier. This paper focuses only on the VMC and DMC methods. Another important methodology is the path-integral quantum Monte Carlo \cite{herman1982path,sarsa2000path,ceperley1995path}, which works with the density matrix at finite temperature. The formulation of the path-integral method using molecular dynamics techniques \cite{tuckerman1993efficient} seems to be an appropriate platform to implement the RBM. \section*{Acknowledgment} Jin's research is partly supported by NSFC grant No. 11871297. Li's research is supported by NSF under grants DMS-1819011 and DMS-1953120. \bibliographystyle{plain}
\section{Introduction} The improved Poincar{\'e} inequality for the gradient developed in \cite{hurri1994} has recently been proved in \cite{jiang2014} to be equivalent to the solvability of the divergence equation, that is, to the validity of the Babu\v{s}ka-Aziz inequality for the divergence, on a general class of domains. If we consider the $L_2$-norm, then there is also a simple connection between the domain-specific optimal constants in the respective inequalities, provided the problem domain supports the Hardy inequality, see \cite{duran2012}. In \cite{zsuppan2017}, an analogous improved Poincar{\'e} inequality for the rotation was derived on spatial domains with the same method as in the divergence-gradient case, in connection with the Babu\v{s}ka-Aziz inequality for the rotation and with the corresponding Friedrichs-Velte inequality. On the other hand, the equivalence between the Babu\v{s}ka-Aziz and Friedrichs-Velte inequalities has been proved in \cite{costabeldauge2015} for planar domains in the divergence-gradient case, and it has later been generalized in \cite{costabel2015} to differential forms on domains of arbitrary dimension. In this paper we derive relations between the optimal domain-specific constants in the Friedrichs-Velte inequality and in an improved Poincar{\'e} inequality for differential forms, using the framework of \cite{costabel2015}. Herewith we obtain a generalization of the improved Poincar{\'e} inequality to the case of differential forms, that is, using the exterior derivative instead of the gradient. We also develop a generalization of the Horgan-Payne type estimates of the Friedrichs-Velte constant of a star-shaped planar \cite{costabeldauge2015,horganpayne1983} or spatial \cite{payne2007} domain to star-shaped domains of arbitrary dimension. \section{Notation and preliminaries}\label{sec:prelim} In this paper we use the notation and results of \cite{costabel2015}, which we recall in the following. Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ ($n\ge 2$) and let $\rho_{\Omega}$ be the distance-to-the-boundary function on $\Omega$, i.e. $\rho_{\Omega}(x)=\text{dist}(x,\partial\Omega)$ for $x\in\Omega$. For $0\le\ell\le n$ we denote by $\Lambda^{\ell}$ the exterior algebra of $\mathbb{R}^n$ and by $|\cdot|$ the Euclidean norm on $\Lambda^{\ell}$. $L_2(\Omega,\Lambda^{\ell})$ is the space of differential forms of order $\ell$ with square-integrable coefficients. The norm of $u\in L_2(\Omega,\Lambda^{\ell})$ is defined by \begin{equation}\label{eq:L2norm} \|u\|^2=\int_{\Omega}|u|^2 \end{equation} with the corresponding inner product $\left<\cdot,\cdot\right>$. In the space $C_0^{\infty}(\Omega,\Lambda^{\ell})$ of smooth differential forms with compact support in $\Omega$ we denote by $\d$ the exterior derivative and by $\d^{\ast}$ the coderivative, which is the formal adjoint of $\d$ with respect to the $L_2$ scalar product, i.e. \begin{equation}\label{eq:exteriordiffs} \d: C_0^{\infty}(\Omega,\Lambda^{\ell})\to C_0^{\infty}(\Omega,\Lambda^{\ell+1}) \text{ and } \d^{\ast}: C_0^{\infty}(\Omega,\Lambda^{\ell})\to C_0^{\infty}(\Omega,\Lambda^{\ell-1}), \end{equation} where we also have $\d=0$ for $\ell=n$ and $\d^{\ast}=0$ for $\ell=0$. Let $H_0^1(\Omega,\Lambda^{\ell})$ be the completion of the space $C_0^{\infty}(\Omega,\Lambda^{\ell})$ under the semi-norm \begin{equation}\label{eq:H1seminorm} |u|_1=\left<\Delta u,u\right>^{\frac{1}{2}} \end{equation} defined by the Hodge-Laplacian \begin{equation}\label{eq:hodgelaplacian} \Delta=\d^{\ast}\d+\d\d^{\ast}.
\end{equation} Denoting by $H^{-1}(\Omega,\Lambda^{\ell})$ the dual space of $H_0^1(\Omega,\Lambda^{\ell})$ with the dual norm \begin{equation}\label{eqn:dualnorm} \left|u\right|_{-1}=\sup_{w\in H_0^1(\Omega,\Lambda^{\ell})}\frac{\left<u,w\right>}{\left|w\right|_1}, \end{equation} we have the following extensions \begin{eqnarray} \label{eq:downdiffs}\underline{\d}:H_0^1(\Omega,\Lambda^{\ell})\to L_2(\Omega,\Lambda^{\ell+1})& \text{ and } &\underline{\d}^{\ast}:L_2(\Omega,\Lambda^{\ell+1})\to H^{-1}(\Omega,\Lambda^{\ell}),\\ \label{eq:updiffs}\overline{\d}^{\ast}:H_0^1(\Omega,\Lambda^{\ell})\to L_2(\Omega,\Lambda^{\ell-1}) & \text{ and } & \overline{\d}:L_2(\Omega,\Lambda^{\ell-1})\to H^{-1}(\Omega,\Lambda^{\ell}), \end{eqnarray} which we again denote simply by $\d$ and $\d^{\ast}$ below. We also need the orthogonal complements of the kernels of $\d$ and $\d^{\ast}$: \begin{eqnarray} \label{eq:kerupdiffoc}M&=&\left\{u\in L_2(\Omega,\Lambda^{\ell-1})\mid \forall v\in L_2(\Omega,\Lambda^{\ell-1}): \overline{\d}v=0\Rightarrow\left<u,v\right>=0\right\}\\ \label{eq:kerdowncodiffoc}M^{\ast}&=&\left\{u\in L_2(\Omega,\Lambda^{\ell+1})\mid \forall v\in L_2(\Omega,\Lambda^{\ell+1}): \underline{\d}^{\ast}v=0\Rightarrow\left<u,v\right>=0\right\} \end{eqnarray} that is, $M=\left(\ker\overline{\d}\right)^{\bot}$ and $M^{\ast}=\left(\ker\underline{\d}^{\ast}\right)^{\bot}$. By Lemma 1.1 in \cite{costabel2015} the operator $\d^{\ast}$ in \eqref{eq:updiffs} maps $H_0^1(\Omega,\Lambda^{\ell})$ to a dense subspace of $M$ and \begin{equation}\label{eq:codiffM} u\in M\Rightarrow \d^{\ast}u=0. \end{equation} The image of $\d^{\ast}$ coincides with $M$ if and only if the Babu\v{s}ka-Aziz inequality is satisfied. \begin{definition}\label{def:BAineq} The domain $\Omega$ supports the Babu\v{s}ka-Aziz inequality of order $\ell$ if there is a finite positive constant $C_{\ell}$ depending only on $\Omega$ and $\ell$ such that every $u\in M$ admits a $w\in H_0^1(\Omega,\Lambda^{\ell})$ with \begin{equation}\label{ineq:BA} \d^{\ast}w=u\text{ and }|w|_1^2\le C_{\ell}\|u\|^2. \end{equation} The least possible constant in \eqref{ineq:BA} is called the Babu\v{s}ka-Aziz constant of order $\ell$ of $\Omega$ and it is denoted by $C_{\Omega,\ell}$. \end{definition} The Babu\v{s}ka-Aziz inequality \eqref{ineq:BA} can be equivalently formulated as \begin{equation}\label{ineq:Lions} \forall u\in M: \|u\|^2\le C_{\Omega,\ell}|\d u|_{-1}^2, \end{equation} or as the inf-sup condition \begin{equation}\label{eq:infsup} \inf_{u\in M}\sup_{w\in H_0^1(\Omega,\Lambda^{\ell})}\frac{\left<u,\d^{\ast}w\right>}{\|u\|\cdot |w|_1}=\frac{1}{C_{\Omega,\ell}}. \end{equation} This is, on the other hand, equivalent to \begin{equation}\label{eq:infsupschur} \inf_{u\in M}\frac{\left<u,\d^{\ast}\Delta^{-1}\d u\right>}{\left<u,u\right>}=\frac{1}{C_{\Omega,\ell}}, \end{equation} where the operator \begin{equation}\label{def:schurcomplement} \mathcal{S}=\d^{\ast}\Delta^{-1}\d: L_2(\Omega,\Lambda^{\ell-1})\to L_2(\Omega,\Lambda^{\ell-1}) \end{equation} is called the Schur complement operator of the Stokes equation, i.e. $\mathcal{S}u=\d^{\ast}w$, $w\in H_0^1(\Omega,\Lambda^{\ell})$ being the solution of $\Delta w=\d u$. Equation \eqref{eq:infsupschur} means that the least eigenvalue of the restriction of $\mathcal{S}$ onto the subspace $M$ is positive, that is, $\mathcal{S}:M\to M$ is invertible.
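For instance, for $\ell=1$ and a connected domain $\Omega$, the kernel of $\overline{\d}$ in $L_2(\Omega,\Lambda^{0})$ consists of the constant functions, so that $M$ is the subspace of functions with vanishing mean value. Identifying a $1$-form $w\in H_0^1(\Omega,\Lambda^{1})$ with a vector field, we have $\d^{\ast}w=-\operatorname{div}w$, and Definition \ref{def:BAineq} expresses the solvability of the divergence equation mentioned in the introduction: for every $u\in L_2(\Omega)$ with $\int_{\Omega}u=0$ there exists a vector field $w\in H_0^1(\Omega,\mathbb{R}^n)$ such that \begin{equation*} \operatorname{div}w=-u \quad\text{and}\quad |w|_1^2\le C_{\Omega,1}\|u\|^2. \end{equation*}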
According to \cite{costabel2015}, the Babu\v{s}ka-Aziz inequality is related to a generalization of the Friedrichs-Velte inequality for conjugate harmonic functions \cite{velte1998}. The differential forms $u\in L_2(\Omega,\Lambda^{\ell-1})$ and $v\in L_2(\Omega,\Lambda^{\ell+1})$ are called conjugate if they satisfy \begin{equation}\label{eq:conjugatediffforms} \d u=\d^{\ast}v. \end{equation} \begin{definition}\label{def:FVineq} The domain $\Omega$ supports the Friedrichs-Velte inequality of order $\ell$ if there is a finite positive constant $\Gamma_{\ell}$ depending only on $\Omega$ and $\ell$ such that for every $u\in M$ and $v\in L_2(\Omega,\Lambda^{\ell+1})$ conjugate in the sense of \eqref{eq:conjugatediffforms} there follows \begin{equation}\label{ineq:FV} \|u\|^2\le\Gamma_{\ell}\|v\|^2. \end{equation} The least possible constant in \eqref{ineq:FV} is called the Friedrichs-Velte constant of order $\ell$ of $\Omega$ and it is denoted by $\Gamma_{\Omega,\ell}$. \end{definition} In inequality \eqref{ineq:FV} we can assume $v\in M^{\ast}$ because for a given $u\in M$ the element $v\in L_2(\Omega,\Lambda^{\ell+1})$ with minimal $L_2$-norm that satisfies \eqref{eq:conjugatediffforms} also satisfies $v\in M^{\ast}$. Hence, similarly to \eqref{eq:codiffM}, we have for such a differential form \begin{equation}\label{eq:diffcoM} v\in M^{\ast}\Rightarrow \d v=0. \end{equation} Thus the conjugate differential forms \eqref{eq:conjugatediffforms} with $u\in M$ and $v\in M^{\ast}$ satisfy the equations \begin{equation}\label{eq:conjdiffeqs} \d^{\ast}u=0,\quad \d u=\d^{\ast}v,\quad \d v=0, \end{equation} which imply that both $u$ and $v$ belong to the kernel of the Laplacian \eqref{eq:hodgelaplacian}, that is, both are harmonic. The following relation between the Babu\v{s}ka-Aziz and Friedrichs-Velte constants has been proved in \cite{costabel2015}. \begin{theorem}[\cite{costabel2015}, Theorem 2.1]\label{thm:CoBAFV} For any bounded open set $\Omega\subset\mathbb{R}^n$ and any $1\le\ell\le n-1$, the Babu\v{s}ka-Aziz constant $C_{\Omega,\ell}$ is finite if and only if the Friedrichs-Velte constant $\Gamma_{\Omega,\ell}$ is finite, and there holds \begin{equation}\label{eq:CoBAFV} C_{\Omega,\ell}=\Gamma_{\Omega,\ell}+1. \end{equation} \end{theorem} \section{Main results}\label{sec:main} \subsection{Improved Poincar{\'e} inequality for differential forms}\label{ssec:diffforms} In order to formulate the main results let us first define the improved Poincar{\'e} inequality for differential forms. \begin{definition}\label{def:iP} The domain $\Omega\subset\mathbb{R}^n$ supports the improved Poincar{\'e} inequality of order $\ell$ if there is a finite constant $P_{\ell}$ depending only on $\Omega$ and $\ell$ such that for every $u\in M$ there holds \begin{equation}\label{ineq:iP} \|u\|^2\le P_{\ell}\|\rho_{\Omega}\d u\|^2. \end{equation} The least possible constant $P_{\Omega,\ell}$ in \eqref{ineq:iP} is called the improved Poincar{\'e} constant of order $\ell$ of $\Omega$. \end{definition} This definition is a generalization of the improved Poincar{\'e} inequalities introduced in \cite{hurri1994}, because we use the exterior derivative instead of the gradient; however, it is also only a special case, because we consider only the $L_2$-setting. Rephrasing \eqref{ineq:iP} yields \begin{equation*} \left<u,u\right>\le P_{\ell}\left<\rho_{\Omega}^2\d u,\d u\right>, \end{equation*} where $u\in M$, $\rho_{\Omega}^2\d u\in H_0^1(\Omega,\Lambda^{\ell})$ and $\d u\in H^{-1}(\Omega,\Lambda^{\ell})$.
From the previous inequality it follows that
\begin{equation*}
\frac{1}{P_{\ell}}\le\frac{\left<\d^{\ast}\rho_{\Omega}^2\d u, u\right>}{\left<u,u\right>}
\end{equation*}
for each $u\in M$, i.e.
\begin{equation}\label{eq:infsupmodifiedschur}
\frac{1}{P_{\Omega,\ell}}=\inf_{u\in M}\frac{\left<\d^{\ast}\rho_{\Omega}^2\d u, u\right>}{\left<u,u\right>}.
\end{equation}
This equation is similar to \eqref{eq:infsupschur}, but now we have the diffusion-type operator
\begin{equation}\label{eq:modifiedschurcomplement}
\d^{\ast}\rho_{\Omega}^2\d: M\to M
\end{equation}
instead of the Schur complement \eqref{def:schurcomplement}. Hence, the improved Poincar{\'e} constant in \eqref{eq:infsupmodifiedschur} is connected to the operator \eqref{eq:modifiedschurcomplement} in the same way as the Babu\v{s}ka-Aziz constant in \eqref{eq:infsupschur} is to the Schur complement \eqref{def:schurcomplement}. Moreover, the inverse of the multiplication operator with factor $\rho_{\Omega}^{-2}$ figuring in \eqref{eq:modifiedschurcomplement} is connected to the Laplacian via the Hardy inequality
\begin{equation}\label{ineq:Hardy_original}
\int_{\Omega}\frac{1}{\rho_{\Omega}^2}|v|^2\le H_{\Omega}\int_{\Omega}\left|\nabla v\right|^2 \text{ for every } v\in H_0^1(\Omega),
\end{equation}
which was utilized in \cite{duran2012} to estimate the improved Poincar{\'e} constant for the gradient from above by the Babu\v{s}ka-Aziz constant for the divergence. Regarding inequality \eqref{ineq:Hardy_original} and various estimations for the Hardy constant we refer to \cite{BEL2015} and the references therein. In this paper we use the following straightforward generalization of \eqref{ineq:Hardy_original}.
\begin{lemma}\label{lem:hardy_for_diff_forms}
Let the domain $\Omega\subset\mathbb{R}^n$ satisfy the Hardy inequality \eqref{ineq:Hardy_original}. If $v\in H_0^1(\Omega,\Lambda^{\ell})$, then
\begin{equation}\label{ineq:Hardy_norm}
\left\|\frac{1}{\rho_{\Omega}}v\right\|^2\le H_{\Omega}\left|v\right|_1^2,
\end{equation}
where $H_{\Omega}$ is the Hardy constant of $\Omega$.
\end{lemma}
\textbf{Proof.} Let $v(x)=\sum v_{i_1,\dots,i_{\ell}}(x)\d x_{i_1}\wedge\dots\wedge\d x_{i_{\ell}}$, where the components satisfy $v_{i_1,\dots,i_{\ell}}\in H_0^1(\Omega)$ and the summation extends over the set of all $\ell$-tuples $(i_1,\dots,i_{\ell})$ with $1\le i_1<\dots<i_{\ell}\le n$. Using the Hardy inequality \eqref{ineq:Hardy_original} for each component,
\begin{equation}
\int_{\Omega}\frac{1}{\rho_{\Omega}^2}|v_{i_1,\dots,i_{\ell}}|^2\le H_{\Omega}\int_{\Omega}|\nabla v_{i_1,\dots,i_{\ell}}|^2.
\end{equation}
In view of
\begin{equation}
\left\|\frac{1}{\rho_{\Omega}}v\right\|^2=\sum\int_{\Omega}\frac{1}{\rho_{\Omega}^2}|v_{i_1,\dots,i_{\ell}}|^2 \text{ and } |v|_1^2=\sum\int_{\Omega}|\nabla v_{i_1,\dots,i_{\ell}}|^2,
\end{equation}
the assertion of Lemma \ref{lem:hardy_for_diff_forms} follows by summation over all indices. \hfill$\Box$
\begin{lemma}\label{lem:BAFV->iP}
If the domain $\Omega\subset\mathbb{R}^n$ supports the Babu\v{s}ka-Aziz and Hardy inequalities \eqref{ineq:BA} and \eqref{ineq:Hardy_norm}, respectively, then $\Omega$ also supports the improved Poincar{\'e} inequality \eqref{ineq:iP}. Moreover, we have
\begin{equation}\label{ineq:P<HC}
P_{\Omega,\ell}\le H_{\Omega}C_{\Omega,\ell}
\end{equation}
between the optimal constants in the respective inequalities.
\end{lemma}
\textbf{Proof.} The proof is essentially the same as that of Theorem 5.3 in \cite{duran2012}, but we repeat it here for the convenience of the reader.
Let $u\in M$. According to \cite{costabel2015}, by the Babu\v{s}ka-Aziz inequality there exists a differential form $w\in H_0^1(\Omega,\Lambda^{\ell})$ such that $\d^{\ast}w=u$ and $|w|_1^2\le C_{\Omega,\ell}\|u\|^2$. Then
\begin{eqnarray*}
\|u\|^2&=&\left<u,\d^{\ast}w\right>=\left<\d u,w\right>\le \left\|\frac{1}{\rho_{\Omega}}w\right\|\cdot\|\rho_{\Omega}\d u\|\le H_{\Omega}^{\frac{1}{2}}|w|_1\|\rho_{\Omega}\d u\|\\
&\le &H_{\Omega}^{\frac{1}{2}}C_{\Omega,\ell}^{\frac{1}{2}}\|u\|\cdot\|\rho_{\Omega}\d u\|.
\end{eqnarray*}
It follows that
\begin{equation}
\|u\|^2\le H_{\Omega}C_{\Omega,\ell}\|\rho_{\Omega}\d u\|^2
\end{equation}
for each $u\in M$, which implies that $\Omega$ supports the improved Poincar{\'e} inequality \eqref{ineq:iP}. In addition, we obtain \eqref{ineq:P<HC}, and using \eqref{eq:CoBAFV} we also get $P_{\Omega,\ell}\le H_{\Omega}\left(1+\Gamma_{\Omega,\ell}\right)$. \hfill$\Box$
For the opposite direction we adapt the proof of Lemma 3.1 from the paper \cite{zsuppan2017} of the present author.
\begin{lemma}\label{lem:iP->BAFV}
If the domain $\Omega\subset\mathbb{R}^n$ supports the improved Poincar{\'e} inequality \eqref{ineq:iP}, then $\Omega$ also supports the Friedrichs-Velte inequality \eqref{ineq:FV}. Moreover, we have
\begin{equation}\label{ineq:Gamma<4P}
\Gamma_{\Omega,\ell}\le 4P_{\Omega,\ell}
\end{equation}
between the optimal constants in the respective inequalities.
\end{lemma}
\textbf{Proof.} Let $u\in M$ and $v\in M^{\ast}$ be conjugate in the sense of \eqref{eq:conjugatediffforms}. Then
\begin{eqnarray*}
\|\rho_{\Omega}\d u\|^2&=&\left<\rho_{\Omega}^2\d u,\d u\right>=\left<\rho_{\Omega}^2\d u,\d^{\ast} v\right>=\left<\d\left(\rho_{\Omega}^2\d u\right), v\right>\\
&=&\left<\d\left(\rho_{\Omega}^2\right)\wedge\d u+\rho_{\Omega}^2\d\d u,v\right>=2\left<\rho_{\Omega}\d\rho_{\Omega}\wedge\d u,v\right>\\
&=&2\left<\d\rho_{\Omega}\lrcorner v,\rho_{\Omega}\d u\right>\le 2\|\d\rho_{\Omega}\lrcorner v\|\cdot\|\rho_{\Omega}\d u\|,
\end{eqnarray*}
where $\d\rho_{\Omega}\lrcorner v$ is the interior product (contraction) of the differential $(\ell+1)$-form $v$ with the vector identified with the 1-form $\d\rho_{\Omega}$, i.e.
\begin{equation*}
\d\rho_{\Omega}\lrcorner v=\sum v_{i_1,\dots,i_{\ell+1}}\sum_{k=1}^{\ell+1}(-1)^{k-1}\left(\partial_{i_k}\rho_{\Omega}\right)\d x_{i_1}\wedge\dots\wedge\widehat{\d x_{i_k}}\wedge\dots\wedge\d x_{i_{\ell+1}},
\end{equation*}
for the $(\ell+1)$-form $v=\sum v_{i_1,\dots,i_{\ell+1}}\d x_{i_1}\wedge\dots\wedge\d x_{i_{\ell+1}}$, where $\widehat{\d x_{i_k}}$ denotes the omission of the corresponding factor and the first summation ranges over all possible $(\ell+1)$-tuples of indices $1\le i_1<\dots<i_{\ell+1}\le n$. We obtain
\begin{equation*}
\|\rho_{\Omega}\d u\|\le 2\|\d\rho_{\Omega}\lrcorner v\|,
\end{equation*}
the right-hand side of which we estimate by the norm of $v$. The Leibniz rule for the interior product gives
\begin{equation*}
\d\rho_{\Omega}\wedge\left(\d\rho_{\Omega}\lrcorner v\right)= |\d\rho_{\Omega}|^2v-\d\rho_{\Omega}\lrcorner\left(\d\rho_{\Omega}\wedge v\right),
\end{equation*}
which implies by identity (2.8) in \cite{costabel2009} that
\begin{equation*}
\|\d\rho_{\Omega}\lrcorner v\|^2= \left<v,v-\d\rho_{\Omega}\lrcorner\left(\d\rho_{\Omega}\wedge v\right)\right>= \|v\|^2-\|\d\rho_{\Omega}\wedge v\|^2,
\end{equation*}
where we have also used that $|\d\rho_{\Omega}|^2=|\nabla\rho_{\Omega}|^2=1$ almost everywhere in $\Omega$ by the eikonal equation for the boundary distance function.
It follows that
\begin{equation*}
\|\d\rho_{\Omega}\lrcorner v\|\le\|v\|
\end{equation*}
by omitting the nonnegative term $\|\d\rho_{\Omega}\wedge v\|^2$ on the right-hand side. Hence, if $\Omega$ supports the improved Poincar{\'e} inequality \eqref{ineq:iP}, then
\begin{equation}
\|u\|^2\le 4 P_{\Omega,\ell}\|v\|^2,
\end{equation}
that is, $\Omega$ also supports the Friedrichs-Velte inequality \eqref{ineq:FV}; moreover, we obtain \eqref{ineq:Gamma<4P} for the involved constants. By \eqref{eq:CoBAFV} we also have $C_{\Omega,\ell}\le 1+4 P_{\Omega,\ell}$. \hfill$\Box$
Lemma \ref{lem:BAFV->iP} and Lemma \ref{lem:iP->BAFV} imply the following theorem.
\begin{theorem}\label{thm:iP<->BAFV}
If the bounded domain $\Omega\subset\mathbb{R}^n$ supports the Hardy inequality \eqref{ineq:Hardy_norm}, then $\Omega$ supports the Friedrichs-Velte inequality \eqref{ineq:FV} and simultaneously the Babu\v{s}ka-Aziz inequality \eqref{ineq:BA} if and only if $\Omega$ supports the improved Poincar{\'e} inequality \eqref{ineq:iP}. Moreover, we have
\begin{equation}\label{ineq:iP<->BAFV}
\frac{1}{4}\left(C_{\Omega,\ell}-1\right)=\frac{1}{4}\Gamma_{\Omega,\ell}\le P_{\Omega,\ell}\le H_{\Omega}C_{\Omega,\ell}=H_{\Omega}\left(1+\Gamma_{\Omega,\ell}\right)
\end{equation}
for the domain-specific constants in the corresponding inequalities. \hfill$\Box$
\end{theorem}
According to Proposition 3.1 of \cite{costabel2015} the Babu\v{s}ka-Aziz and the Friedrichs-Velte constants are finite for bounded Lipschitz domains, whose Hardy constant is also finite. Hence Theorem \ref{thm:iP<->BAFV} holds for Lipschitz domains with finite constants, that is, Lipschitz domains support the improved Poincar{\'e} inequality \eqref{ineq:iP} for differential forms.
\begin{remark}\label{rem:case_l=1}
Setting $n\ge 2$ and $\ell=1$ as in Section 4.1 of \cite{costabel2015}, we have $\d=\overline{\d}=\mathop{\textrm{grad}}\nolimits$, $\d^{\ast}=\overline{\d}^{\ast}=-\div$, and the subspace $M=\left(\ker\mathop{\textrm{grad}}\nolimits\right)^{\bot}$ consists of functions with vanishing integral over each connected component of $\Omega$. In this case, according to \cite{hurri1994, jiang2014}, the improved Poincar{\'e} inequality \eqref{ineq:iP} holds for a larger class of domains including John domains. Moreover, as proved in \cite{jiang2014}, for simply connected planar domains being a John domain is equivalent to the simultaneous finiteness of the investigated constants. Theorem \ref{thm:iP<->BAFV} gives the additional information that for such domains
\begin{equation*}
\frac{1}{4}\Gamma_{\Omega,\ell}\le P_{\Omega,\ell}\le 16C_{\Omega,\ell}
\end{equation*}
because $H_{\Omega}\le 16$, see \cite{BEL2015}. \hfill$\Box$
\end{remark}
\begin{remark}\label{rem:hardyconstant}
As one sees from \eqref{ineq:iP<->BAFV}, finiteness of the Hardy constant is needed only for estimating the improved Poincar{\'e} constant by the Babu\v{s}ka-Aziz constant. Finding the exact value of the Hardy constant of a domain is not easy in general; however, for convex domains in any dimension we have $H_{\Omega}=4$. More accessible are estimations of the Hardy constant in terms of geometric characteristics of the domain, e.g. for $\Omega$ satisfying a $\theta$-cone condition we have
\begin{equation*}
H_{\Omega}\le\frac{16}{n\cdot\omega\left(\frac{\sin\theta}{2}\right)}, \text{ where } \omega\left(\alpha\right)=\frac{\int_0^{\arcsin\alpha}\sin^{n-2}t\,\mathrm{d}t}{2\int_0^{\frac{\pi}{2}}\sin^{n-2}t\,\mathrm{d}t}.
\end{equation*}
For this and other estimations cf. \cite{BEL2015} and the references given there. \hfill$\Box$
\end{remark}
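The cone-condition bound above is also easy to evaluate numerically. The following minimal Python sketch computes the right-hand side by one-dimensional quadrature; the dimension $n$ and the opening angle $\theta$ in the example call are freely chosen inputs, not values used elsewhere in this paper.
\begin{verbatim}
# Minimal sketch: evaluate the cone-condition upper bound
# 16/(n*omega(sin(theta)/2)) for the Hardy constant H_Omega.
import numpy as np
from scipy.integrate import quad

def hardy_cone_bound(n, theta):
    alpha = np.sin(theta) / 2.0
    # omega(alpha) from the displayed formula, by 1D quadrature
    num, _ = quad(lambda t: np.sin(t)**(n - 2), 0.0, np.arcsin(alpha))
    den, _ = quad(lambda t: np.sin(t)**(n - 2), 0.0, np.pi / 2.0)
    omega = num / (2.0 * den)
    return 16.0 / (n * omega)

# example: a three-dimensional domain with a pi/4-cone condition
print(hardy_cone_bound(3, np.pi / 4.0))
\end{verbatim}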
\subsection{Estimations for star-shaped domains}
In this section we give an upper estimation for the Friedrichs-Velte constant $\Gamma_{\Omega,1}$ of a domain $\Omega\subset\mathbb{R}^n$ ($n\ge 2$) star-shaped with respect to a ball $B\subset\Omega$. Note that such upper estimations already exist for planar and spatial star-shaped domains. For planar domains the differential form $v\in L_2(\Omega,\Lambda^{2})$ specializes to a scalar-valued function, as does its conjugate pair $u$, and the investigation of the associated conjugate pair $u^2-v^2$ and $2uv$ leads to the estimation of the Friedrichs-Velte constant, cf. \cite{costabeldauge2015,horganpayne1983}. On the other hand, for spatial domains $v\in L_2(\Omega,\Lambda^{2})$ can be represented by a vector function, \eqref{eq:conjdiffeqs} becomes $\mathop{\textrm{grad}}\nolimits u=\mathop{\textrm{rot}}\nolimits v$ and $\div v=0$, but the associated functions $u^2-|v|^2$ and $2uv$ are no longer conjugate harmonic. Instead, they satisfy identity (2.9) of \cite{payne2007}, which reads in the notation of the present paper as
\begin{equation}\label{eq:payneequation}
\nabla\left(u^2-|v|^2\right)=\mathop{\textrm{rot}}\nolimits\left(2uv\right)-2\div\left(v\otimes v\right).
\end{equation}
This identity constitutes the basis for the estimation of the Friedrichs-Velte constant of a spatial star-shaped domain, cf. \cite{payne2007}. In order to derive the intended estimation for $\Gamma_{\Omega,1}$ we need a generalization of the identity \eqref{eq:payneequation}, where $u\in L_2(\Omega,\Lambda^{0})$ is again a harmonic function normalized by $\int_{\Omega}u=0$, but $v\in L_2(\Omega,\Lambda^{2})$ is now a 2-form, which can be identified with a skew-symmetric matrix of order $n$ having $n(n-1)/2$ independent components. Setting
\begin{equation}\label{eq:v}
v=\sum_{i<j}v_{ij}\d x_i\wedge\d x_j
\end{equation}
we calculate
\begin{eqnarray}
\label{eq:dastv} \d^{\ast}v&=&\sum_{i=1}^n\left(\sum_{j=1}^n\partial_j v_{ij}\right)\d x_i,\\
\label{eq:dv} \d v&=&\sum_{i<j<k}\left(\partial_i v_{jk}-\partial_j v_{ik}+\partial_k v_{ij}\right)\d x_i\wedge\d x_j\wedge\d x_k,
\end{eqnarray}
and the equations \eqref{eq:conjdiffeqs} become
\begin{equation}\label{eq:conjdiffeqs_2}
\partial_i u=\sum_{j=1}^n\partial_j v_{ij}\text{ and }\partial_i v_{jk}-\partial_j v_{ik}+\partial_k v_{ij}=0.
\end{equation}
Using the first equation of \eqref{eq:conjdiffeqs_2} we calculate
\begin{equation*}
\begin{split}
\d^{\ast}\left(uv\right)&=\sum_{i=1}^n\left(\sum_{j=1}^n\partial_j \left(uv_{ij}\right)\right)\d x_i= \sum_{i=1}^n\left(\sum_{j=1}^n u\partial_j v_{ij}+v_{ij}\partial_j u \right)\d x_i\\
&= u\,\d^{\ast}v+\sum_{i=1}^n\left(\sum_{j=1}^n\sum_{k=1}^n v_{ij}\partial_kv_{jk}\right)\d x_i,
\end{split}
\end{equation*}
which leads to
\begin{equation*}
\begin{split}
\d^{\ast}\left(uv\right) &= \frac{1}{2}\d\left(u^2+|v|^2\right)+ \sum_{i=1}^n\sum_{j=1}^n\left( \sum_{k=1}^n v_{ij}\partial_k v_{jk}-\sum_{k=1}^{j-1} v_{jk}\partial_i v_{jk} \right)\d x_i \\
&= \frac{1}{2}\d\left(u^2+|v|^2\right)+ \sum_{i=1}^n\sum_{k=1}^n\left(\partial_k\left(\sum_{j=1}^n v_{ij}v_{jk}\right)-\sum_{j=1}^n v_{jk}\partial_k v_{ij}-\sum_{j=k+1}^nv_{jk}\partial_i v_{jk}\right)\d x_i.
\end{split}
\end{equation*}
The term $\sum_{k}\partial_k\left(\sum_{j} v_{ij}v_{jk}\right)$ is the $i$-th component of the divergence of the single dot product of the second order tensor $v$ with itself, while the remaining terms can be evaluated to zero using the second equation of \eqref{eq:conjdiffeqs_2} and the skew-symmetry of $v$:
\begin{multline*}
\sum_{k=1}^n\left(\sum_{j=1}^n v_{jk}\partial_k v_{ij}+\sum_{j=k+1}^nv_{jk}\partial_i v_{jk}\right) =\sum_{j=1}^n\left(\sum_{k=1}^{n}v_{kj}\partial_j v_{ik}+\sum_{k=j+1}^n v_{kj}\partial_i v_{kj}\right)\hfill\\
=\sum_{j=1}^n\sum_{k=1}^{j-1}v_{kj}\partial_j v_{ik}-\sum_{j=1}^n\sum_{k=j+1}^nv_{jk}\left(\partial_j v_{ik}-\partial_i v_{jk}\right)\hfill\\
=\sum_{j=1}^n\sum_{k=1}^{j-1}v_{kj}\partial_j v_{ik}-\sum_{j=1}^n\sum_{k=j+1}^n v_{jk}\partial_kv_{ij} =\sum_{k=1}^n\sum_{j=1}^{k-1}v_{jk}\partial_k v_{ij}-\sum_{j=1}^n\sum_{k=j+1}^n v_{jk}\partial_kv_{ij}\hfill\\
=\sum_{j=1}^n\sum_{k=j+1}^n v_{jk}\partial_k v_{ij}-\sum_{j=1}^n\sum_{k=j+1}^n v_{jk}\partial_kv_{ij}=0 \hfill
\end{multline*}
for each $i=1,\dots,n$. Thus, as a generalization of \eqref{eq:payneequation} we have obtained
\begin{equation}\label{eq:gen_payneequation}
\d^{\ast}\left(2uv\right)-\d\left(u^2\right)-\d\left(|v|^2\right)=2\div\left(v\cdot v\right).
\end{equation}
\begin{remark}\label{rem:CRMTeq}
For planar domains \eqref{eq:gen_payneequation} reduces to the Cauchy-Riemann system $\nabla^{\bot}(2uv)=\nabla(u^2-v^2)$ because \eqref{eq:v} has only one component and the right hand side of \eqref{eq:gen_payneequation} simplifies to $-2\nabla(v^2)$. For spatial domains \eqref{eq:gen_payneequation} reduces to \eqref{eq:payneequation} because $\div\left(v\cdot v\right)$ becomes $\div\left(v\otimes v\right)-\d\left(|v|^2\right)$ by identifying the 2-form $v_{12}\d x_1\wedge\d x_2+v_{13}\d x_1\wedge\d x_3+v_{23}\d x_2\wedge\d x_3$ with the vector $(v_{23},-v_{13},v_{12})^{\top}$. \hfill$\Box$
\end{remark}
Another ingredient for the estimation of $\Gamma_{\Omega,1}$ is the 1-form
\begin{equation}\label{eq:oneform_psi}
\psi=
\begin{cases}
\sum_{i=1}^n\left(\frac{1}{r^n}-\frac{1}{r_0^n}\right)x_i\d x_i&\text{ for }a< r\le r_0,\\
\sum_{i=1}^n\left(\frac{1}{a^n}-\frac{1}{r_0^n}\right)x_i\d x_i&\text{ for }0\le r\le a,
\end{cases}
\end{equation}
which generalizes the vector field defined by equation (2.6) of \cite{payne2007}. In \eqref{eq:oneform_psi} the function
\begin{equation}\label{eqn:r_0}
r=r_0(\theta)\text{ for }\theta\in\mathbb{S}
\end{equation}
parametrizes the boundary of $\Omega$, which is star-shaped with respect to a ball $B$ centered at the origin with radius $a$. Here $\mathbb{S}$ denotes the domain of the angles of the $n$-dimensional spherical coordinate system, that is, $0\le\theta_j\le\pi$ for $j=1,\dots,n-2$ and $0\le\theta_{n-1}\le 2\pi$. By partial integration we obtain
\begin{eqnarray*}
\left<\d\left(u^2\right)+\d\left(|v|^2\right),\psi\right>&=& \int_{\Omega}\frac{n}{r_0^n}\left(u^2+|v|^2\right)-\frac{n}{a^n}\int_{B}\left(u^2+|v|^2\right),\\
\left<\d^{\ast}\left(2uv\right),\psi\right>&=& -\int_{\Omega}2u\frac{n}{r_0^{n+1}}x^{\top}V\nabla r_0,\\
\left<\div\left(v\cdot v\right),\psi\right>&=& -\int_{\Omega}\frac{2}{r_0^n}|v|^2-\int_{\Omega}\frac{n}{r_0^{n+1}}x^{\top}V^2\nabla r_0 +\int_{\Omega\setminus B}\frac{2}{r^n}|v|^2\\
&&+\int_{\Omega\setminus B}\frac{n}{r^{n+2}}x^{\top}V^2x+\frac{2}{a^n}\int_B|v|^2,
\end{eqnarray*}
where $V=(v_{ij})_{n\times n}$ denotes the antisymmetric matrix corresponding to $v$.
Combining these equations with \eqref{eq:gen_payneequation} gives
\begin{equation}\label{eq:payneeq_psi}
\begin{split}
\int_{\Omega}\frac{n}{r_0^n}u^2&= \frac{n}{a^n}\int_B u^2+\frac{n-4}{a^n}\int_B |v|^2-\int_{\Omega\setminus B}\left(\frac{4}{r^n}|v|^2+\frac{2n}{r^{n+2}}x^{\top}V^2x\right)\\
& -\int_{\Omega}2u\frac{n}{r_0^{n+1}}x^{\top}V\nabla r_0+\int_{\Omega}\frac{2n}{r_0^{n+1}}x^{\top}V^2\nabla r_0+ \int_{\Omega}\frac{4-n}{r_0^n}|v|^2.
\end{split}
\end{equation}
From this point on we impose on the harmonic function $u$ the normalization $u(0)=0$ instead of $\int_{\Omega}u=0$. This can be done because
\begin{equation*}
\int_{\Omega}\left(u-u(0)\right)^2=\int_{\Omega}u^2+|\Omega|u(0)^2\ge\int_{\Omega}u^2\text{ if }\int_{\Omega}u=0.
\end{equation*}
These two normalizations are equivalent by the mean value theorem if $\Omega$ itself is a ball. Similarly to equation (2.14) in \cite{payne2007}, using \eqref{eq:conjdiffeqs_2}, $v_{ii}=0$ and the Cauchy-Schwarz inequality $\left(\sum_{j=1}^n\partial_j v_{ij}\right)^2\le(n-1)\sum_{j=1}^n\left(\partial_j v_{ij}\right)^2$, we obtain the estimation
\begin{equation*}
\begin{split}
\Delta\left(u^2-(n-1)|v|^2\right)&=2\left(|\nabla u|^2-(n-1)\sum_{i<j}|\nabla v_{ij}|^2\right)\\
&= 2\left(\sum_{i=1}^n\left(\sum_{j=1}^n\partial_j v_{ij}\right)^2-(n-1)\sum_{i<j}|\nabla v_{ij}|^2\right)\\
&\le 2(n-1)\sum_{i<j}\left(\left(\partial_i v_{ij}\right)^2+\left(\partial_j v_{ij}\right)^2-|\nabla v_{ij}|^2\right) \le 0,
\end{split}
\end{equation*}
which implies by the mean value property of superharmonic functions and the normalization $u(0)=0$ that $\Gamma_{B,1}\le n-1$ for an $n$-dimensional ball. In fact $\Gamma_{B,1}=n-1$, cf. \cite{costabeldauge2015,horganpayne1983,payne2007}. Substituting this into \eqref{eq:payneeq_psi} yields
\begin{equation*}
\begin{split}
\int_{\Omega}\frac{n}{r_0^n}u^2&\le \frac{n^2-4}{a^n}\int_B |v|^2-\int_{\Omega\setminus B}\left(\frac{4}{r^n}|v|^2+\frac{2n}{r^{n+2}}x^{\top}V^2x\right)\\
&-\int_{\Omega}2u\frac{n}{r_0^{n+1}}x^{\top}V\nabla r_0+\int_{\Omega}\frac{2n}{r_0^{n+1}}x^{\top}V^2\nabla r_0+ \int_{\Omega}\frac{4-n}{r_0^n}|v|^2.
\end{split}
\end{equation*}
Using $x^{\top}V^2x=-|Vx|^2$ and $|Vx|^2=|x\lrcorner v|^2\le r^2|v|^2$ we estimate the integral over $\Omega\setminus B$ as
\begin{equation*}
-\int_{\Omega\setminus B}\left(\frac{4}{r^n}|v|^2+\frac{2n}{r^{n+2}}x^{\top}V^2x\right) \le\int_{\Omega\setminus B}\frac{2n-4}{r^n}|v|^2
\end{equation*}
and we obtain
\begin{equation}\label{ineq:payneineq_psi}
\begin{split}
\int_{\Omega}\frac{n}{r_0^n}u^2&\le \frac{n^2-4}{a^n}\int_{\Omega} |v|^2+\int_{\Omega}\frac{4-n}{r_0^n}|v|^2\\
&-\int_{\Omega}2u\frac{n}{r_0^{n+1}}x^{\top}V\nabla r_0+\int_{\Omega}\frac{2n}{r_0^{n+1}}x^{\top}V^2\nabla r_0.
\end{split}
\end{equation}
\begin{remark}\label{rem:2D_CHP_bound}
For planar domains $\Omega$ the first term in \eqref{ineq:payneineq_psi} vanishes evidently. The last term also vanishes because $V^2$ is a multiple of the identity matrix and $\nabla r_0$ is orthogonal to $x$, since $r_0$ depends only on the angular variables. The estimation \eqref{ineq:payneineq_psi} simplifies to
\begin{equation*}\label{ineq:CHP}
\int_{\Omega}\frac{1}{r_0^2}u^2\le \int_{\Omega}\frac{1}{r_0^2}v^2-\int_{\Omega}2uv\frac{x_1\partial_2r_0-x_2\partial_1r_0}{r_0^3},
\end{equation*}
wherein the estimation of the last term leads to the Horgan-Payne estimation of the Friedrichs constants of $\Omega$, cf. (4.10) and subsequent equations in \cite{costabeldauge2015} for example.
\hfill$\Box$
\end{remark}
The latter two integrals in \eqref{ineq:payneineq_psi} are estimated as
\begin{eqnarray*}
\int_{\Omega}2u\frac{n}{r_0^{n+1}}x^{\top}V\nabla r_0&\le & \alpha\int_{\Omega}\frac{n}{r_0^n}u^2+\frac{1}{\alpha}\int_{\Omega}\frac{n}{r_0^n}|v|^2\frac{|r\nabla r_0|^2}{r_0^2},\\
\int_{\Omega}\frac{2n}{r_0^{n+1}}x^{\top}V^2\nabla r_0&=& -\int_{\Omega}\frac{2n}{r_0^{n+1}}(Vx)^{\top}(V\nabla r_0)\le \int_{\Omega}\frac{n}{r_0^n}|v|^2\frac{2|r\nabla r_0|}{r_0}
\end{eqnarray*}
for some parameter $0<\alpha<1$. Substituting these estimates into \eqref{ineq:payneineq_psi} gives
\begin{equation}\label{ineq:gen_payne}
(1-\alpha)\int_{\Omega}\frac{n}{r_0^n}u^2\le \frac{n^2-4}{a^n}\int_{\Omega} |v|^2+\int_{\Omega}\frac{4-n}{r_0^n}|v|^2+ \left(\frac{1}{\alpha}Q^2+2Q\right)\int_{\Omega}\frac{n}{r_0^n}|v|^2,
\end{equation}
where $\frac{|r\nabla r_0|}{r_0}$ depends only on the angular variables of the spherical coordinate system, and hence the quantity $Q=\max_{\partial\Omega}\frac{|r\nabla r_0|}{r_0}\ge 0$ depends only on the shape of $\Omega$.
\begin{remark}\label{rem:3D_P_bound}
In the case of three-dimensional domains \eqref{ineq:gen_payne} reduces to
\begin{equation*}
\int_{\Omega}u^2\le\frac{2+2Q+\frac{Q^2}{\alpha}}{1-\alpha}\cdot\max\left(\frac{r_0}{a}\right)^3\cdot\int_{\Omega}|v|^2,
\end{equation*}
which leads to
\begin{equation}
\int_{\Omega}u^2\le\left(Q+\sqrt{Q^2+2Q+2}\right)^2\cdot\max\left(\frac{r_0}{a}\right)^3\cdot\int_{\Omega}|v|^2
\end{equation}
by optimizing with respect to $\alpha$. This constitutes an upper estimation for the Friedrichs-Velte constant of the spatial domain comparable to that of \cite{payne2007}. \hfill$\Box$
\end{remark}
For any dimension $n\ge 2$, using $\frac{4-n}{r_0^n}\le\frac{(4-n)_+}{a^n}-\frac{(n-4)_+}{\max_{\partial\Omega}r_0^n}$ with $t_+=\max\{t,0\}$, from \eqref{ineq:gen_payne} there follows
\begin{equation*}
\frac{(1-\alpha)n}{\max_{\partial\Omega}r_0^n}\int_{\Omega}u^2\le \left(\frac{n^2-4}{a^n}+\frac{(4-n)_+}{a^n}-\frac{(n-4)_+}{\max_{\partial\Omega}r_0^n}+\frac{\frac{nQ^2}{\alpha}+2nQ}{a^n}\right) \int_{\Omega}|v|^2,
\end{equation*}
which implies
\begin{equation*}\label{ineq:gen_payne_eta}
\int_{\Omega}u^2\le \eta^n\frac{\frac{n^2-4}{n}+\frac{(4-n)_+}{n}-\frac{(n-4)_+}{n}\eta^{-n}+\frac{Q^2}{\alpha}+2Q}{1-\alpha} \int_{\Omega}|v|^2,
\end{equation*}
where $\eta=\max_{\partial\Omega}\frac{r_0}{a}$ is the eccentricity of $\Omega$ with respect to its center of star-shapedness. By optimizing with respect to $\alpha$ we obtain the upper bound
\begin{equation}\label{ineq:gen_FVBound}
\Gamma_{\Omega,1}\le \eta^n\left( Q+\sqrt{Q^2+2Q+\frac{n^2-4}{n}+\frac{(4-n)_+}{n}-\frac{(n-4)_+}{n}\eta^{-n}} \right)^2
\end{equation}
for the Friedrichs-Velte constant. For $n=3$ this reproduces the bound of Remark \ref{rem:3D_P_bound}.
\begin{remark}\label{rem:FVnDimBall}
For an $n$-dimensional ball $B$ we have $Q=0$ and $\eta=1$ w.r.t. its center. Substituting these into \eqref{ineq:gen_FVBound} and using $(4-n)_+-(n-4)_+=4-n$ gives the known bound $\Gamma_{B,1}\le n-1$. \hfill$\Box$
\end{remark}
\begin{remark}\label{rem:nDimBAiPEstim}
The upper estimation \eqref{ineq:gen_FVBound} carries over by Theorem \ref{thm:iP<->BAFV} to the corresponding Babu\v{s}ka-Aziz constant for the divergence and improved Poincar{\'e} constant for the gradient of an $n$-dimensional star-shaped domain. It improves on the estimation of \cite{galdi1994_1}. It also generalizes the estimation in \cite{chuawheeden2010} for $P_{\Omega,1}$ from convex domains to star-shaped ones, although only in the $L_2$-case, cf. \cite{chuawheeden2010}. \hfill$\Box$
\end{remark}
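As a sanity check, the bound \eqref{ineq:gen_FVBound} is simple to evaluate numerically. The following minimal Python sketch computes it for a user-chosen dimension $n$, eccentricity $\eta$ and shape parameter $Q$ (the example values are illustrative only), and verifies the ball case of Remark \ref{rem:FVnDimBall}.
\begin{verbatim}
# Minimal sketch: evaluate the upper bound (gen_FVBound) for the
# Friedrichs-Velte constant Gamma_{Omega,1}.
import math

def fv_bound(n, eta, Q):
    c = (Q*Q + 2.0*Q + (n*n - 4.0)/n
         + max(4.0 - n, 0.0)/n - max(n - 4.0, 0.0)/n * eta**(-n))
    return eta**n * (Q + math.sqrt(c))**2

# ball case (Q = 0, eta = 1): the bound reduces to n - 1
assert abs(fv_bound(3, 1.0, 0.0) - 2.0) < 1e-12
print(fv_bound(3, 1.5, 0.3))   # an illustrative star-shaped domain
\end{verbatim}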
\begin{remark}\label{rem:estim_general_ell}
The estimation \eqref{ineq:gen_FVBound} is valid only for the case $\ell=1$. In order to obtain an estimation for every $\ell\ge 1$ we would have to generalize \eqref{eq:gen_payneequation}. Instead of $uv$ on the left-hand side of \eqref{eq:gen_payneequation} we could consider $\mathop{\star}\nolimits\left(u\wedge\mathop{\star}\nolimits v\right)$ for a pair $u\in L_2(\Omega,\Lambda^{\ell-1})$ and $v\in L_2(\Omega,\Lambda^{\ell+1})$ conjugate in the sense of \eqref{eq:conjugatediffforms}. However, this is beyond the scope of this paper. \hfill$\Box$
\end{remark}
\subsection*{Concluding remarks}
In this paper, motivated by the results of \cite{costabel2015}, we have derived relations between the domain-specific improved Poincar{\'e} constant and the Friedrichs-Velte constant figuring in the corresponding inequalities for differential forms. This constitutes a generalization of the improved Poincar{\'e} inequalities formulated in \cite{hurri1994} for the gradient, although only for the $L_2$-space on a Lipschitz domain. We have also generalized the Horgan-Payne type upper estimations for the Friedrichs-Velte constant of a planar \cite{costabeldauge2015,horganpayne1983} and of a spatial \cite{payne2007} domain to star-shaped domains of arbitrary dimension, which also unifies these estimations. However, this generalization is only valid for conjugate harmonic differential 0- and 2-forms in arbitrary dimensions. For more general conjugate harmonic differential $(\ell-1)$- and $(\ell+1)$-forms we have only indicated a possible way of generalization.
\section{Introduction}
Attitude estimation has been widely studied with various filtering approaches and assumptions~\cite{CraMarJGCD07}. One of the biggest challenges is that the attitude dynamics evolve on a compact, nonlinear manifold, namely the special orthogonal group. The attitude is often parameterized by certain three-dimensional coordinates, and an estimator is developed in terms of these local coordinates. However, it is well known that minimal, three-parameter attitude representations, such as Euler angles or modified Rodrigues parameters, suffer from singularities. They are not suitable for large angle rotational maneuvers, as the type of parameters must be switched persistently in the vicinity of singularities. Quaternions are another popular choice in attitude estimation~\cite{CraMarJGCD97,PsiJGCD00}. They do not exhibit singularities but, as the configuration space of quaternions, namely the three-sphere, double covers the special orthogonal group, there exists ambiguity. More explicitly, a single attitude may be represented by two antipodal points on the three-sphere. The ambiguity should be carefully resolved in any quaternion-based attitude observer and controller; otherwise they may exhibit unwinding, for example~\cite{BhaBerSCL00}. Furthermore, quaternions are often considered as vectors in $\Re^4$, instead of incorporating the structure of the three-sphere carefully when designing attitude estimators. Instead, attitude observers have been designed directly on the special orthogonal group to avoid both the singularities of local coordinates and the ambiguity of quaternions. The developments for \textit{deterministic} attitude observers on the special orthogonal group include complementary filters~\cite{MahHamITAC08}, a robust filter~\cite{SanLeeSCL08}, and a global attitude observer~\cite{WuKauPICDC15}. The prior efforts to construct \textit{probabilistic} attitude estimators on the special orthogonal group, and the relevant research, have been relatively unpopular compared with deterministic approaches, especially in the engineering community. Probability and stochastic processes on manifolds have been studied in~\cite{Eme89,Elw82}. Directional statistics have been applied in earth sciences and material sciences~\cite{MarJup99,Chi03}. Earlier works on attitude estimation on the special orthogonal group include~\cite{LoEshSJAM79}, where a probability density function is expressed using noncommutative harmonic analysis~\cite{ChiKya01}. This idea of using Fourier analysis on manifolds has been applied for uncertainty propagation and attitude estimation~\cite{ParKimP2IICRA05,MarPMDSAS05,LeeLeoPICDC08}. The use of noncommutative harmonic analysis allows a probability density function to be expressed globally, and the Fokker-Planck equation to be transformed into ordinary differential equations, thereby providing a fundamental solution for Bayesian attitude estimation. However, in practice this may cause a computational burden, since a higher-order Fourier transform is required as the estimated distribution becomes more concentrated. Recent literature is rich with filtering techniques and measurement models developed in terms of exponential coordinates~\cite{ParLiuR08,Chi12,ChiKobPICDC14,LonWolRSV13}. This is perhaps the most natural approach to develop an estimator formally on an abstract Lie group, while taking advantage of the fact that the Lie algebra is a linear space.
The limitation is that the exponential map is a local diffeomorphism around the identity element, and as such, the issue of a singularity remains. This paper aims to construct a probabilistic attitude estimator on the special orthogonal group, while avoiding the complexities of harmonic analysis and the singularities of exponential coordinates. We use a specific form of the probability density, namely the matrix Fisher distribution~\cite{MarJup99}, to represent uncertainties in the estimates of attitudes. Therefore, the proposed approach can be considered as an example of \textit{assumed density filtering}. To project the propagated density onto the space of the matrix Fisher distributions, an unscented transform and its inverse are proposed. Assuming that the attitude measurement errors are represented by a matrix Fisher distribution, it is shown that the posterior estimate also follows a matrix Fisher distribution. These provide a Bayesian, probabilistic attitude estimator on the special orthogonal group in a global fashion. It is demonstrated that the proposed estimator exhibits excellent convergence properties even with large initial estimation errors and large uncertainties, in contrast to attitude estimators based on local coordinates and linearization that tend to diverge for such challenging cases.
\section{Matrix Fisher Distribution on $\ensuremath{\mathsf{SO(3)}}$}\label{sec:MF}
Directional statistics deals with statistics for unit vectors and rotations in $\Re^n$, where various probability distributions on nonlinear compact manifolds are defined, and statistical analyses, such as inference and regression, are studied on those manifolds~\cite{MarJup99,Chi03}. In particular, the matrix Fisher (or von Mises-Fisher matrix) distribution is a simple exponential model introduced in~\cite{DowB72,KhaMarJRSSS77}. Interestingly, most of the prior work on the matrix Fisher distribution in directional statistics is developed for the Stiefel manifold, $\mathsf{V}_k(\Re^n)=\{X\in\Re^{n\times k}\,|\, X^TX=I_{k\times k}\}$. The configuration manifold for the attitude dynamics of a rigid body is the three-dimensional special orthogonal group,
\begin{align*}
\ensuremath{\mathsf{SO(3)}} = \{R\in\Re^{3\times 3}\,|\, R^TR=I_{3\times 3},\,\mathrm{det}[R]=1\}.
\end{align*}
This section provides the definition of the matrix Fisher distribution and several properties developed for $\ensuremath{\mathsf{SO(3)}}$. Throughout this paper, the \textit{hat} map $\wedge:\Re^3\rightarrow \ensuremath{\mathfrak{so}(3)}$ is defined such that $\hat x y =x\times y$ for any $x,y\in\Re^3$; the resulting matrix is skew-symmetric, i.e., $\hat x = -(\hat x)^T$. The inverse of the hat map is denoted by the \textit{vee} map $\vee:\ensuremath{\mathfrak{so}(3)}\rightarrow\Re^3$. The set of circular shifts of $(1,2,3)$ is defined as $\mathcal{I}=\{(1,2,3),(2,3,1),(3,1,2)\}$.
\subsection{Matrix Fisher Distribution on $\ensuremath{\mathsf{SO(3)}}$}
The probability density of the matrix Fisher distribution on $\ensuremath{\mathsf{SO(3)}}$ is given by
\begin{align}
p(R)=\frac{1}{c(F)}\exp(\trs{F^T R}),\label{eqn:MF}
\end{align}
where $F\in\Re^{3\times 3}$ is a matrix parameter, and $c(F)\in\Re$ is a normalizing constant defined as
\begin{align}
c(F) = \int_{\ensuremath{\mathsf{SO(3)}}} \exp(\trs{F^T R}) dR. \label{eqn:cF}
\end{align}
For $\ensuremath{\mathsf{SO(3)}}$, there is a bi-invariant measure, referred to as the \textit{Haar} measure, that is unique up to scalar multiples~\cite{ChiKya01}.
The above expression is assumed to be defined with respect to the particular Haar measure $dR$ that is normalized such that $\int_{\ensuremath{\mathsf{SO(3)}}} dR=1$. In other words, the uniform distribution on $\ensuremath{\mathsf{SO(3)}}$ is given by $1$ with respect to $dR$. In this sense, \refeqn{MF} is often said to be defined with respect to the uniform distribution. When $R$ is distributed according to the matrix Fisher distribution with the parameter matrix $F$, it is denoted by $R\sim\mathcal{M}(F)$. The singular value decomposition of $F$ is given by
\begin{align}
F= U S V^T,
\end{align}
where $U,V\in\Re^{3\times 3}$ are orthonormal matrices, and $S=\mathrm{diag}[s_1,s_2,s_3]$ for the singular values $s_i>0$ and $i\in\{1,2,3\}$. Throughout this paper, we assume that $\mathrm{det}[F]>0$, so that $\mathrm{det}[U]\,\mathrm{det}[V]=1$. Then, $U,V\in\ensuremath{\mathsf{SO(3)}}$ holds without loss of generality (in the case $\mathrm{det}[U]=\mathrm{det}[V]=-1$, we can multiply both $U$ and $V$ by $-1$). Let $K\in\Re^{3\times 3}$ and $M\in\ensuremath{\mathsf{SO(3)}}$ be the elliptic component and the polar component of $F$, i.e.,
\begin{align}
F=KM,\quad K=K^T=USU^T,\quad M=UV^T.\label{eqn:KM}
\end{align}
Since $\trs{F^T R} = \trs{VSU^T R} = \trs{S U^T RV}$, the probability density $p(R)$ is maximized when $R=M$ for a fixed $F$. Therefore, the polar component $M$ is considered as the \textit{mean} attitude. The matrices $S$ and $U$ of the elliptic component determine the degree and the direction of dispersion about the mean attitude. More specifically, the probability density becomes more concentrated as the singular value $s_i$ increases. The role of $S,U$ in determining the shape of the distribution will be discussed more explicitly in Section \ref{sec:UAE}. While there are various approaches to evaluate the normalizing constant for the matrix Fisher distribution on the Stiefel manifold, only a few papers deal with the normalizing constant on $\ensuremath{\mathsf{SO(3)}}$. A method based on the holonomic gradient descent is introduced in~\cite{SeiShiJMA13}, which involves the numerical solution of multiple ordinary differential equations. The normalizing constant is expressed as a simple one-dimensional integral in~\cite{WooAJS93}, but the given result is erroneous, as the change of volume over a certain transformation is not considered properly. We follow the approach of~\cite{WooAJS93} to find a closed form expression of the normalizing constant.
\begin{prop}
The normalizing constant for the matrix Fisher distribution \refeqn{MF} is given by
\begin{align}
c(F) = c(S) & = \int_{-1}^1 \frac{1}{2}I_0\!\bracket{\frac{1}{2}(s_i-s_j)(1-u)} \nonumber\\
&\times I_0\!\bracket{\frac{1}{2}(s_i+s_j)(1+u)}\exp (s_ku)\,du,\label{eqn:cS}
\end{align}
where $(i,j,k)\in\mathcal{I}$, and $I_0$ denotes the modified Bessel function of the first kind of degree zero~\cite{AbrSte65}, i.e., $I_0(u) = \sum_{r=0}^\infty (\frac{1}{2}u)^{2r}/(r!)^2$.
\end{prop}
\begin{proof}
See Appendix \ref{sec:PfC}.
\end{proof}
This implies that the normalizing constant only depends on $S$, and the order of the singular values in $S$ can be shifted circularly. It is not burdensome to evaluate \refeqn{cS} numerically, as it takes less than 0.01 second with a 2.4 GHz Intel Core i5 processor in Matlab. Also, from \refeqn{cS}, it is straightforward to find a closed form of the derivatives of the normalizing constant with respect to $s_i$, which is useful for maximum likelihood estimation of the matrix parameter~\cite{DowB72,KhaMarJRSSS77}.
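For reference, the one-dimensional integral \refeqn{cS} can be evaluated with a few lines of code. The following minimal Python sketch (using standard quadrature and Bessel routines; it is an illustration, not the Matlab implementation mentioned above) is assumed in the later sketches of this paper as the function \texttt{c\_S}.
\begin{verbatim}
# Minimal sketch: evaluate the normalizing constant c(S) of (cS)
# for singular values s = (s1, s2, s3) by 1D quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

def c_S(s):
    si, sj, sk = s[0], s[1], s[2]  # any circular shift of (1,2,3) works
    f = lambda u: 0.5 * i0(0.5*(si - sj)*(1.0 - u)) \
                      * i0(0.5*(si + sj)*(1.0 + u)) * np.exp(sk*u)
    val, _ = quad(f, -1.0, 1.0)
    return val

print(c_S([25.0, 5.0, 1.0]))
# note: for very concentrated distributions (large s_i) the scaled
# Bessel function scipy.special.i0e helps avoid numerical overflow.
\end{verbatim}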
\subsection{Visualization of the matrix Fisher distribution}
A method to visualize any probability density function on $\ensuremath{\mathsf{SO(3)}}$ has been proposed in~\cite{LeeLeoPICDC08}. Let $r_i\in\ensuremath{\mathsf{S}}^2$ be the $i$-th column of a rotation matrix $R$, i.e., $R=[r_1,r_2,r_3]\in\ensuremath{\mathsf{SO(3)}}$, where the two-sphere is the space of unit vectors in $\Re^3$, i.e., $\ensuremath{\mathsf{S}}^2=\{q\in\Re^3\,|\, \|q\|=1\}$. The key idea for visualization is that $r_i$ has a certain geometric meaning for the attitude, namely the direction of the $i$-th axis of the body-fixed frame resolved in the inertial frame. Once the marginal distribution for $r_i$ is obtained from a probability density function on $\ensuremath{\mathsf{SO(3)}}$, it can be visualized on the surface of the unit sphere via color shading. If the distribution of each $r_i$ is mildly concentrated, the distributions of all three body-fixed axes can be visualized on a single unit sphere, thereby illustrating the shape of the attitude probability dispersion intuitively. Here we show that the marginal distribution for the matrix Fisher distribution can be obtained in a closed form.
\begin{prop}
Suppose $R\sim\mathcal{M}(F)$. Let $(i,j,k)\in\mathcal{I}$, and let $r_i\in\ensuremath{\mathsf{S}}^2$ be the $i$-th column of $R$. Then, the marginal probability density of $r_i$ is
\begin{align}
p(r_i) & = \frac{c_2(f_{jk},r_i)}{c(S)} \exp (f_i^Tr_i),\label{eqn:pri}
\end{align}
with respect to the uniform distribution on $\ensuremath{\mathsf{S}}^2$, where $f_i\in\Re^3$ denotes the $i$-th column of the matrix parameter $F$, and $f_{jk}=[f_j,f_k]\in\Re^{3\times 2}$. The constant $c_2(f_{jk},r_i)$ is defined as
\begin{align}
c_2(f_{jk},r_i) = I_0 \bracket{\sum_{l=1}^2 \sqrt{\lambda_l\bracket{f_{jk}^T (I_{3\times 3}-r_ir_i^T) f_{jk}}}},\label{eqn:c2}
\end{align}
where $\lambda_l[\cdot]$ denotes the $l$-th eigenvalue of a matrix.
\end{prop}
\begin{proof}
See Appendix \ref{sec:MD}.
\end{proof}
Visualizations for selected matrix Fisher distributions constructed via \refeqn{pri} are available in \reffig{vis}.
\begin{figure}
\centerline{
\subfigure[$F_a=5I_{3\times 3}$]{
\includegraphics[width=0.3\columnwidth]{ACC16_vis_1}\label{fig:vis1}}
\hfill
\subfigure[$F_b=20I_{3\times 3}$]{
\includegraphics[width=0.3\columnwidth]{ACC16_vis_2}\label{fig:vis2}}
\hfill
\subfigure[{$F_c=\mathrm{diag}[25,5,1]$}]{\hspace*{0.005\textwidth}
\includegraphics[width=0.3\columnwidth]{ACC16_vis_3}\label{fig:vis3}\hspace*{0.005\textwidth}}
}
\caption{Visualization of selected matrix Fisher distributions: the distribution in (b) is more concentrated than in (a), as the singular values of $F_b$ are greater than those of $F_a$; for both (a) and (b), the distributions of each axis are identical and circular, as the three singular values of each of $F_a$ and $F_b$ are identical; in (c), the first body-fixed axis (lower left) is more concentrated, as the first singular value of $F_c$ is the greatest, and the distributions for the other two axes are elongated. Compared with the third body-fixed axis (top), the probability density of the second body-fixed axis (lower right) is greater, as the second singular value of $F_c$ is greater than the third.}\label{fig:vis}
\end{figure}
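The marginal density \refeqn{pri} is likewise straightforward to evaluate. The following minimal Python sketch computes $p(r_i)$ for a given parameter matrix and unit vector; it reuses the function \texttt{c\_S} from the earlier sketch, and the inputs are arbitrary.
\begin{verbatim}
# Minimal sketch: evaluate the marginal density (pri) of the i-th
# body-fixed axis; reuses c_S defined above.
import numpy as np
from scipy.special import i0

def marginal_density(F, i, r):
    # 0-based column index i in {0,1,2}; r is a unit vector in R^3
    _, s, _ = np.linalg.svd(F)
    j, k = [(1, 2), (2, 0), (0, 1)][i]      # circular shift (i,j,k)
    f_i, f_jk = F[:, i], F[:, [j, k]]
    A = f_jk.T @ (np.eye(3) - np.outer(r, r)) @ f_jk   # 2x2, symmetric
    lam = np.maximum(np.linalg.eigvalsh(A), 0.0)
    c2 = i0(np.sum(np.sqrt(lam)))
    return c2 / c_S(s) * np.exp(f_i @ r)
\end{verbatim}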
\section{Unscented Attitude Estimation}\label{sec:UAE}
In this section, an attitude estimation scheme is proposed based on the matrix Fisher distribution on $\ensuremath{\mathsf{SO(3)}}$. Assuming that the initial attitude estimate and the attitude measurement errors are described by certain matrix Fisher distributions on $\ensuremath{\mathsf{SO(3)}}$, we construct an estimated attitude distribution via another matrix Fisher distribution following a Bayesian framework. Therefore, this approach is an example of a so-called \textit{assumed density filter}. One issue of any assumed density filter is that the propagated uncertainty is not guaranteed to be distributed as the selected density model. This has been commonly addressed by two distinct approaches. The first one is approximating the dynamics such that the propagated uncertainty follows the selected density model. For example, in extended Kalman filters, the dynamics is linearized to ensure that the propagated uncertainty is Gaussian. The second option is instead approximating the density model by selected parameters along the solution of the exact dynamic model, such as in unscented filters. In short, selecting one of these corresponds to the following question: what should be approximated, the dynamics or the probability distributions? In attitude estimation problems, the equations of motion are well known, but it is often challenging to obtain accurate probability distributions. In such cases, it may be reasonable to approximate probability distributions rather than corrupting the exact dynamic model by approximations. Here, we propose an unscented transform to approximate the matrix Fisher distribution on $\ensuremath{\mathsf{SO(3)}}$ by selected sigma points, and based on this, we construct a Bayesian attitude estimator.
\subsection{Unscented transform for matrix Fisher distribution}
Suppose $R\sim\mathcal{M}(F)$. We wish to define several rotation matrices that approximate $\mathcal{M}(F)$. This is achieved by identifying the roles of the elliptic component and the polar component of the matrix parameter $F$ introduced in \refeqn{KM}. Consider a set of rotation matrices parameterized by $\theta_i\in[0,2\pi)$ for $i\in\{1,2,3\}$ as
\begin{align}
R_i(\theta_i) = \exp(\theta_i\widehat {Ue_i}) UV^T=U\exp(\theta_i\hat e_i) V^T,\label{eqn:Ri}
\end{align}
where $e_i\in\Re^3$ denotes the $i$-th column of $I_{3\times 3}$. This corresponds to the rotation of the mean attitude $M=UV^T$ about the axis $Ue_i$ by the angle $\theta_i$, where $Ue_i$ is expressed with respect to the inertial frame. Using \refeqn{KM}, the probability density \refeqn{MF} along \refeqn{Ri} is given by
\begin{align}
p(R_i(\theta_i)) & =\frac{1}{c(F)}\exp(\trs{VSU^T U\exp(\theta_i\hat {e}_i) V^T})\nonumber\\
&=\frac{1}{c(S)}\exp(\trs{S\exp(\theta_i\hat{e}_i)}).\label{eqn:pRi0}
\end{align}
Substituting Rodrigues' formula~\cite{ShuJAS93}, namely $\exp(\theta_i\hat{e}_i)=I_{3\times 3}+\sin\theta_i\hat e_i +(1-\cos\theta_i)\hat e_i^2$, and rearranging,
\begin{align}
p(R_i(\theta_i)) &=\frac{1}{c(S)}\exp(s_i + (s_j+s_k)\cos\theta_i),\label{eqn:pRi}
\end{align}
where $j,k$ are determined such that $(i,j,k)\in\mathcal{I}$. This resembles the von Mises distribution on a circle, where the probability density is proportional to $\exp(\kappa\cos\theta)$ for a concentration parameter $\kappa\in\Re$~\cite{MarJup99}. The most noticeable property of \refeqn{pRi0} and \refeqn{pRi} is that the probability density depends only on the singular values $s_i$ and the rotation angle $\theta_i$, and it is independent of $U$ or $V$.
When considered as a function of $\theta_i$, the overall value of $p(R_i(\theta_i))$ increases as $s_i$ becomes larger, and the curve becomes narrower as $s_j+s_k$ increases. For example, a larger $s_1$ implies that the marginal probability density of the first body-fixed axis increases, and the marginal probability densities of the second axis and the third axis become narrower along the rotations about the third axis and the second axis, respectively, as illustrated in \reffig{vis3}. Recall that \refeqn{Ri} is obtained by rotating the mean attitude $M=UV^T$ about the $i$-th column of $U$. As such, each column of $U$ is considered as a \textit{principal axis} of rotation for $\mathcal{M}(F)$. In short, the role of $F=USV^T$ in determining the shape of the distribution of $\mathcal{M}(F)$ is as follows: (i) the rotation matrix $U$ sets the principal axes of rotation; (ii) the singular values $S$ describe the concentration of the distribution along the principal axes; (iii) the rotation matrix $V$ determines the mean attitude $M=UV^T$, together with $U$. In unscented transformations for a Gaussian distribution in $\Re^n$, the sigma points are chosen along the principal axes. Motivated by this and the above observations, the following unscented transform is proposed for the matrix Fisher distribution on $\ensuremath{\mathsf{SO(3)}}$.
\begin{figure}
\centerline{
\subfigure[$F=5I_{3\times 3}$]{
\includegraphics[width=0.45\columnwidth]{ACC16_sig_1}\label{fig:sig1}}
\hfill
\subfigure[{$F=\mathrm{diag}[25,5,1]$}]{
\includegraphics[width=0.45\columnwidth]{ACC16_sig_3}\label{fig:sig3}}
}
\caption{Visualization of sigma points: the body-fixed axes of the sigma points selected by \refeqn{SP}, \refeqn{costhetai} with $\sigma=0.9$ are illustrated by white dots.}\label{fig:sig}
\end{figure}
\begin{definition}
Consider a matrix Fisher distribution $\mathcal{M}(F)$, and let the singular value decomposition of $F$ be given by \refeqn{KM}. The set of seven sigma points is defined as
\begin{align}
\{M\}\cup\{\,R_i(\theta_i),R_i(-\theta_i)\,|\, i\in\{1,2,3\}\},\label{eqn:SP}
\end{align}
where each angle $\theta_i$ is chosen as
\begin{align}
\cos\theta_i = \frac{(1-\sigma)\log c(S) +\sigma s_T- s_i}{s_j+s_k},\label{eqn:costhetai}
\end{align}
for $(i,j,k)\in\mathcal{I}$. The parameter $\sigma< 1$ determines the spread of the sigma points, and $s_T=\sum_{i=1}^3 s_i$.
\end{definition}
In other words, for a given parameter matrix $F$, the seven sigma points are chosen as the mean attitude, and positive/negative rotations about each principal axis by the angle determined by \refeqn{costhetai}. Note that each sigma point corresponds to a rotation matrix in $\ensuremath{\mathsf{SO(3)}}$. The equation \refeqn{costhetai} used to select the rotation angle is motivated as follows. Substituting \refeqn{costhetai} into \refeqn{pRi} and taking the logarithm,
\begin{align*}
\log p(R_i(\pm \theta_i))= \sigma(s_T-\log c(S)),
\end{align*}
for any $i\in\{1,2,3\}$. As such, the last six sigma points of \refeqn{SP} have the same value of the probability density, given by $\frac{1}{c(S)^{\sigma}}\exp (\sigma s_T)$. The ratio of that probability density to the maximum density, $\frac{1}{c(S)}\exp(s_T)$, is given by $\exp\left((\sigma-1)(s_T-\log c(S))\right)$. Therefore, the last six sigma points will be closer to the mean attitude when the distribution is concentrated with larger $s_i$, or when $\sigma$ becomes larger. As $\sigma\rightarrow 1$, all of the sigma points converge to the mean attitude.
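The construction above is summarized by the following minimal Python sketch, which builds the seven sigma points for a given parameter matrix; it reuses \texttt{c\_S} from the earlier sketch, and the value of $\sigma$ is a tuning parameter.
\begin{verbatim}
# Minimal sketch: the seven sigma points of (SP) with angles (costhetai).
import numpy as np
from scipy.linalg import expm

def hat(x):
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def sigma_points(F, sigma=0.9):
    U, s, Vt = np.linalg.svd(F)
    if np.linalg.det(U) < 0:      # enforce U, V in SO(3); valid since
        U[:, 2] *= -1.0           # det[F] > 0 is assumed throughout
        Vt[2, :] *= -1.0
    sT, logc = s.sum(), np.log(c_S(s))
    pts = [U @ Vt]                # the mean attitude M
    for i, (j, k) in enumerate([(1, 2), (2, 0), (0, 1)]):
        ct = ((1.0 - sigma)*logc + sigma*sT - s[i]) / (s[j] + s[k])
        th = np.arccos(np.clip(ct, -1.0, 1.0))
        e = np.zeros(3); e[i] = 1.0
        for ang in (th, -th):
            pts.append(U @ expm(ang * hat(e)) @ Vt)
    return pts
\end{verbatim}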
The sigma points for selected distributions are illustrated in \reffig{sig}. Next, we show that the set of sigma points is statistically sufficient.
\begin{prop}\label{prop:IUT}
Suppose the seven sigma points defined in \refeqn{SP} and the parameter $\sigma$ are given for $\mathcal{M}(F)$. Let $\bar R\in\Re^{3\times 3}$ be the arithmetic mean of the sigma points, i.e.,
\begin{align}
\bar R = \frac{1}{7}\bracket{M + \sum_{i=1}^3\braces{R_i(\theta_i)+ R_i(-\theta_i)}}.\label{eqn:barR}
\end{align}
Then, the singular value decomposition of $\bar R$ is given by
\begin{align}
\bar R = U D V^T,\label{eqn:UDV}
\end{align}
where $U,V\in\ensuremath{\mathsf{SO(3)}}$ correspond to those of \refeqn{KM}, and $D=\mathrm{diag}[d_1,d_2,d_3]\in\Re^{3\times 3}$. For $(i,j,k)\in\mathcal{I}$, $d_i$ is given by
\begin{align}
d_i = \frac{1}{7}(3 + 2(\cos\theta_j+\cos\theta_k)).\label{eqn:di}
\end{align}
\end{prop}
\begin{proof}
See Appendix \ref{sec:UT}.
\end{proof}
Therefore, for given sigma points, one can find the corresponding matrix parameter $F$ as follows: (i) the matrices $U,V$ are obtained from \refeqn{UDV}; (ii) solve \refeqn{di} for $(\cos\theta_1,\cos\theta_2,\cos\theta_3)$, which can be used to determine $(s_1,s_2,s_3)$ from \refeqn{costhetai}; (iii) $F=USV^T$. Based on the proposed unscented transform and its inverse, we construct a Bayesian estimator as follows.
\subsection{Unscented Attitude Estimation}
Consider a stochastic differential equation on $\ensuremath{\mathsf{SO(3)}}$,
\begin{align}
(R^T dR)^\vee = (\Omega_z + w_\Omega)\,dt,\label{eqn:SDE}
\end{align}
where $\Omega_z,w_\Omega\in\Re^3$ are the measured angular velocity and the angular velocity measurement error, respectively. It is assumed that the value of $\Omega_z$ is provided by an angular velocity sensor. The measurement error $w_\Omega$ is random, but its distribution is known. Suppose that the attitude is also measured by a sensor, such as an inertial measurement unit, and the attitude measurement $R_z\in\ensuremath{\mathsf{SO(3)}}$ is given by
\begin{align}
R_z = R W_R,\label{eqn:Rz}
\end{align}
where $W_R\in\ensuremath{\mathsf{SO(3)}}$ is an attitude measurement error, and $W_R\sim\mathcal{M}(F_z)$ for a known matrix parameter $F_z\in\Re^{3\times 3}$. Consider a discrete time sequence $\{t_0,t_1,\ldots\}$. The attitude estimation problem considered in this paper is to find the matrix parameter $F_{k+1}$ that approximates the estimated attitude distribution at $t=t_{k+1}$ via $\mathcal{M}(F_{k+1})$ for given $F_k$, $R_{z_k}$ and $\Omega_{z_k}$, with the assumption that $R_k\sim\mathcal{M}(F_k)$. Here, the subscript $k$ denotes the value of a variable at $t=t_k$. The proposed estimator is composed of a propagation step and a measurement update step.
\paragraph{Propagation}
The propagation step is defined via the unscented transform as follows.
\begin{itemize}
\item[(i)] Given $F_k$, seven sigma points at $t=t_k$, namely $R^l_k$ for $l\in\{1,\ldots,7\}$, are computed via \refeqn{SP}.
\item[(ii)] Each sigma point is propagated to $t=t_{k+1}$ according to \refeqn{SDE}. For example, a second order Lie group method~\cite{HaiLub00} can be applied to obtain
\begin{align}
R^l_{k+1} = R^l_k \exp\parenth{\frac{1}{2}h(\Omega_{z_k}+w_{\Omega_k}+\Omega_{z_{k+1}}+w_{\Omega_{k+1}})},
\end{align}
where $h=t_{k+1}-t_k$ is the time step. The angular velocity measurements $\Omega_z$ are from the sensor, and the measurement errors $w_\Omega$ are sampled from the given distribution.
\item[(iii)] Find $F_{k+1}$ from the propagated sigma points $R^l_{k+1}$ according to the results of Proposition \ref{prop:IUT}.
\end{itemize}
These steps are repeated until an attitude measurement is available.
\paragraph{Measurement Update}
Suppose that the attitude is measured at $t_{k+1}$. We wish to find the distribution for $R_{k+1}|R_{z_{k+1}}$. From now on, in this subsection, we do not specify the subscript $k+1$ for brevity. Since $W_R\sim\mathcal{M}(F_z)$, \refeqn{Rz} implies
\begin{align}
p(R_z|R) = \frac{1}{c(F_z)}\exp(\trs {F_z^T R^T R_z}), \label{eqn:pRzR}
\end{align}
where we have used $c(RF_z)=c(F_z)$. According to Bayes' rule, the posterior distribution is
\begin{align*}
p(R|R_z) &= \frac{1}{a}p(R_z|R)p(R),
\end{align*}
where $a$ is a normalizing constant independent of $R$. Since $R\sim\mathcal{M}(F)$, from \refeqn{pRzR},
\begin{align*}
p(R|R_z) &= \frac{1}{a\, c(F_z)c(F)} \exp(\trs {F_z^T R^T R_z}+\trs{F^T R})\\
&= \frac{1}{c(F+R_zF_z^T)} \exp(\trs {(F+R_zF_z^T)^T R}).
\end{align*}
Therefore, the posterior distribution for $R_{k+1}$ also follows a matrix Fisher distribution, i.e., $R_{k+1}\sim \mathcal{M}(F_{k+1}+R_{z_{k+1}}F_z^T)$, where $F_{k+1}$ denotes the propagated matrix parameter.
\section{Numerical Example}
\begin{figure}
\centerline{
\subfigure[True angular velocity: $\Omega_{true}(t)$ ($\mathrm{rad/s}$)]{
\includegraphics[width=0.45\columnwidth]{ACC16_W_true}\label{fig:W_true}}
\hfill
\subfigure[Measured angular velocity: $\Omega_z(t)$ ($\mathrm{rad/s}$)]{
\includegraphics[width=0.45\columnwidth]{ACC16_W_mea}\label{fig:W_mea}}
}
\centerline{
\subfigure[Visualization of $\mathcal{M}(F_z)$]{\hspace*{0.025\columnwidth}
\includegraphics[width=0.35\columnwidth]{ACC16_Fz}\label{fig:Fz}\hspace*{0.025\columnwidth}}
\hfill
\subfigure[Attitude measurement error (deg)]{
\includegraphics[width=0.45\columnwidth]{ACC16_R_mea_err}\label{fig:R_mea_err}}
}
\caption{True trajectory and measurement errors}\label{fig:true}
\end{figure}
We apply the proposed approach to the complex attitude dynamics of a 3D pendulum, i.e., a rigid body pendulum acting under uniform gravity. It has been shown that a 3D pendulum may exhibit highly irregular attitude maneuvers, and we adopt a particular nontrivial maneuver presented in~\cite{LeeChaPICDC07} as the \textit{true} attitude and angular velocity for the numerical example considered in this section. The initial true attitude and angular velocity are given by
\begin{align*}
R_{true}(0)= I,\quad \Omega_{true}(0)=[4.14,\,4.14,\,4.14]^T\,(\mathrm{rad/s}),
\end{align*}
and the resulting angular velocity trajectory is illustrated in \reffig{W_true}, which exhibits irregular rotational dynamics. It is assumed that the attitude and the angular velocity are measured at the rates of $10\,\mathrm{Hz}$ and $50\,\mathrm{Hz}$, respectively. The matrix parameter for the attitude measurement error is chosen as $F_z= \mathrm{diag}[40,50,35]$, and the rotation matrix $W_R$ representing the attitude measurement error is sampled according to the rejection method described in~\cite{KenGan13}. The matrix Fisher distribution for $F_z$, and the corresponding attitude measurement error for the sample used in this numerical simulation, are illustrated in \reffig{Fz} and \reffig{R_mea_err}, respectively. The mean attitude measurement error is $10.46^\circ$. The measurement error for the angular velocity is assumed to follow a normal distribution in $\Re^3$ with zero mean and the covariance matrix of $\mathrm{diag}[0.5^2,0.8^2,1^2]\,(\mathrm{rad/s})^2$. The angular velocity measurements are given in \reffig{W_mea}.
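For concreteness, one complete cycle of the estimator used in the following simulations can be sketched in Python as below. It reuses \texttt{hat}, \texttt{c\_S} and \texttt{sigma\_points} from the earlier sketches; the inverse transform of Proposition \ref{prop:IUT} is solved numerically here with a generic root finder, which is an implementation choice rather than part of the method.
\begin{verbatim}
# Minimal sketch of one estimator cycle: propagation (i)-(iii) and the
# Bayesian measurement update F <- F + R_z F_z^T.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import fsolve

def propagate(pts, Om0, w0, Om1, w1, h):
    # second order Lie group step applied to every sigma point
    G = expm(0.5 * h * hat(Om0 + w0 + Om1 + w1))
    return [R @ G for R in pts]

def inverse_transform(pts, sigma=0.9):
    Rbar = sum(pts) / 7.0
    U, d, Vt = np.linalg.svd(Rbar)
    if np.linalg.det(U) < 0:
        U[:, 2] *= -1.0
        Vt[2, :] *= -1.0
    # invert (di): d_i = (3 + 2(cos t_j + cos t_k))/7
    csum = (7.0 * d.sum() - 9.0) / 4.0
    ct = csum - (7.0 * d - 3.0) / 2.0
    # solve (costhetai) for the singular values s = (s1, s2, s3)
    def eqs(s):
        sT, logc = s.sum(), np.log(c_S(s))
        return [(1.0 - sigma)*logc + sigma*sT - s[i]
                - ct[i]*(s[(i+1) % 3] + s[(i+2) % 3]) for i in range(3)]
    s = fsolve(eqs, np.ones(3))
    return U @ np.diag(s) @ Vt

def measurement_update(F, R_z, F_z):
    return F + R_z @ F_z.T        # posterior matrix parameter

# one cycle: pts = sigma_points(F); pts = propagate(pts, ...);
# F = inverse_transform(pts); F = measurement_update(F, R_z, F_z)
\end{verbatim}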
\subsection{Case I: Large initial estimate error}
We consider two cases depending on the estimate of the initial attitude. For Case I, the initial matrix parameter is
\begin{align*}
F(0) = 100 \exp (\pi\hat e_1),
\end{align*}
where the initial mean attitude is $M (0) = \exp (\pi\hat e_1)$, which corresponds to a $180^\circ$ rotation of $R_{true}(0)$ about the first body-fixed axis. It is highly concentrated, since $S(0)=100I$ is large. In short, this represents the case where the estimator is falsely overconfident about an incorrect attitude. The results of the proposed unscented attitude estimation are illustrated in \reffig{ES}, where the attitude estimation error is presented, and the degree of uncertainty in the estimates is measured via $\frac{1}{s_i}$. The estimation error rapidly reduces to $7.6^\circ$ from the initial error of $180^\circ$ after three attitude measurements at $t=0.3$, and the mean attitude error afterward is $5.6^\circ$. The uncertainties in the attitude increase until $t=0.3$, since the measurements strongly conflict with the initial estimate, but they decrease quickly after the attitude estimate converges. These features can also be observed from the visualizations of $\mathcal{M}(F(t))$ for selected time instances in \reffig{visES}. Since the color shading is reinitialized in each figure, the value of the maximum probability density, corresponding to the dark red color, is specified as well. Initially, the probability distribution is highly concentrated, and it becomes slightly dispersed at $t=0.08$ due to the angular velocity measurement error. However, after the initial attitude measurement is incorporated at $t=0.1$, the probability distributions for the second axis and the third axis become noticeably dispersed due to the conflict between the belief and the measurement. This continues until $t=0.3$, but later, at $t=1$ and $t=10$, the estimated attitude distribution becomes concentrated about the true attitude.
\begin{figure}
\centerline{
\subfigure[Attitude estimation error ($\mathrm{deg}$)]{\hspace*{0.02\columnwidth}
\includegraphics[height=0.36\columnwidth]{ACC16_R_est_err}\label{fig:R_est_err}\hspace*{0.02\columnwidth}}
\hfill
\subfigure[Uncertainty measured by $1/s_i$]{\hspace*{0.02\columnwidth}
\includegraphics[height=0.36\columnwidth]{ACC16_est_s}\label{fig:s}\hspace*{0.02\columnwidth}}
}
\caption{Case I: estimation results}\label{fig:ES}
\end{figure}
\begin{figure}
\centerline{
\subfigure[$t=0$, $p_{\max}=1.41\times 10^4$]{\hspace*{0.06\columnwidth}
\includegraphics[height=0.32\columnwidth]{ACC16_F_1}\label{fig:F_1}\hspace*{0.06\columnwidth}}
\hfill
\subfigure[$t=0.08$, $p_{\max}=9.92\times 10^3$]{\hspace*{0.06\columnwidth}
\includegraphics[height=0.32\columnwidth]{ACC16_F_5}\label{fig:F_5}\hspace*{0.06\columnwidth}}
}
\centerline{
\subfigure[$t=0.1$, $p_{\max}=6.27\times 10^3$]{\hspace*{0.06\columnwidth}
\includegraphics[height=0.32\columnwidth]{ACC16_F_6}\label{fig:F_6}\hspace*{0.06\columnwidth}}
\hfill
\subfigure[$t=0.3$, $p_{\max}=1.18\times 10^4$]{\hspace*{0.06\columnwidth}
\includegraphics[height=0.32\columnwidth]{ACC16_F_16}\label{fig:F_16}\hspace*{0.06\columnwidth}}
}
\centerline{
\subfigure[$t=1$, $p_{\max}=2.00\times 10^4$]{\hspace*{0.06\columnwidth}
\includegraphics[height=0.32\columnwidth]{ACC16_F_51}\label{fig:F_51}\hspace*{0.06\columnwidth}}
\hfill
\subfigure[$t=10$, $p_{\max}=2.02\times 10^4$]{\hspace*{0.06\columnwidth}
\includegraphics[height=0.32\columnwidth]{ACC16_F_501}\label{fig:F_501}\hspace*{0.06\columnwidth}}
}
\caption{Case I: visualizations of $\mathcal{M}(F)$}\label{fig:visES}
\end{figure}
\subsection{Case II: Large initial uncertainty}
For the second case, the matrix parameter is chosen as
\begin{align*}
F(0)= \mathrm{diag}[2,1,0.5]\exp(0.5\pi\hat e_1),
\end{align*}
where the initial mean attitude has a $90^\circ$ error, and the distribution is highly diffused, as $S(0)=\mathrm{diag}[2,1,0.5]$ is relatively small. This corresponds to the case with a large initial uncertainty.
\begin{figure} \centerline{ \subfigure[Attitude estimation error ($\mathrm{deg}$)]{\hspace*{0.02\columnwidth} \includegraphics[height=0.36\columnwidth]{ACC16_err_R_1}\label{fig:R_est_err_1}\hspace*{0.02\columnwidth}} \hfill \subfigure[Uncertainty measured by $1/s_i$]{\hspace*{0.02\columnwidth} \includegraphics[height=0.36\columnwidth]{ACC16_est_s_1}\label{fig:s_1}\hspace*{0.02\columnwidth}} } \caption{Case II: estimation results}\label{fig:ES_1} \end{figure} \begin{figure} \vspace*{-0.1cm} \centerline{ \subfigure[$t=0$, $p_{\max}=1.30\times 10^1$]{\hspace*{0.06\columnwidth} \includegraphics[height=0.32\columnwidth]{ACC16_F_1_1}\label{fig:F_1_1}\hspace*{0.06\columnwidth}} \hfill \subfigure[$t=0.08$, $p_{\max}=1.30\times 10^1$]{\hspace*{0.06\columnwidth} \includegraphics[height=0.32\columnwidth]{ACC16_F_5_1}\label{fig:F_5_1}\hspace*{0.06\columnwidth}} } \centerline{ \subfigure[$t=0.1$, $p_{\max}=3.86\times 10^3$]{\hspace*{0.06\columnwidth} \includegraphics[height=0.32\columnwidth]{ACC16_F_6_1}\label{fig:F_6_1}\hspace*{0.06\columnwidth}} \hfill \subfigure[$t=0.18$, $p_{\max}=3.71\times 10^3$]{\hspace*{0.06\columnwidth} \includegraphics[height=0.32\columnwidth]{ACC16_F_10_1}\label{fig:F_10_1}\hspace*{0.06\columnwidth}} } \centerline{ \subfigure[$t=1$, $p_{\max}=1.05\times 10^4$]{\hspace*{0.06\columnwidth} \includegraphics[height=0.32\columnwidth]{ACC16_F_11_1}\label{fig:F_11_1}\hspace*{0.06\columnwidth}} \hfill \subfigure[$t=10$, $p_{\max}=2.02\times 10^4$]{\hspace*{0.06\columnwidth} \includegraphics[height=0.32\columnwidth]{ACC16_F_501_1}\label{fig:F_501_1}\hspace*{0.06\columnwidth}} } \caption{Case II: visualizations of $\mathcal{M}(F)$}\label{fig:visES_1} \end{figure} The corresponding numerical simulation results are presented in \reffig{ES_1} and \ref{fig:visES_1}. Both the attitude estimation error and the uncertainty decrease over time, since there is no strong conflict between the measurement and the estimate, as opposed to the first case. In \reffig{visES_1}, it is illustrated that the estimated distribution becomes concentrated, especially after the first attitude measurement is received at $t=0.1$. The presented cases for attitude estimation are particularly challenging for the following reasons: (i) the estimator is initially strongly confident about an incorrect attitude with the maximum error of $180^\circ$, or the initial uncertainty is large; (ii) the considered attitude dynamics is swift and complex; (iii) both attitude and angular velocity measurement errors are relatively large; (iv) the attitude measurements are infrequent. These correspond to the cases where attitude estimators developed in terms of local coordinates or linearization tend to diverge. It is shown that the proposed approach, developed directly on the special orthogonal group, exhibits satisfactory results even in these challenging cases.
\section{Introduction}\label{sec1} Evolutionary algorithms are heuristic search methods inspired by biological evolution \cite{Vikhar}. Although evolutionary algorithms have long been verified to be effective and efficient in empirical studies, rigorous analyses of these algorithms did not emerge until the late 1990s. Considerable progress has been made in the theoretical understanding and analysis of evolutionary algorithms \cite{DoerrBook,NeumannBook,Zhoubook}. In particular, it is interesting to see that evolutionary algorithms have good approximation guarantees as well as good running times for many NP-hard problems \cite{Friedrich,Friedrich1,Oliveto,Qian3,Qian4,Qian1,Qian2,Yu}. We notice that most existing studies rely on the submodularity of their utility functions, and those studies which do not require submodularity still depend on parameters measuring how far their utility functions are from being submodular. It remains largely open whether evolutionary algorithms can still achieve good approximation ratios in the absence of submodularity. We also notice that previous studies on multi-objective evolutionary algorithms mainly focus on integral constraints. It is not clear whether real-valued constraints can be dealt with efficiently. In this paper, we aim to (partially) fill this gap by investigating the performance bounds of evolutionary algorithms for a broad class of minimum general cover problems, whose utility functions might be real-valued and are not necessarily submodular. A formal definition of this class of general cover problems is as follows: \begin{definition}[minimum general cover problem (MinGC)]\label{def:1} {\rm Suppose $X=\{v_1,...,v_n\}$ is an element set, $w:X\mapsto \mathbb R^+$ is a \emph{weight (or cost) function} on $X$, and $g:2^X\mapsto \mathbb R^+$, called the \emph{utility function}, is a real-valued set-monotone-nondecreasing function. The MinGC problem is to find a set $C\subseteq X$ solving \begin{align}\label{eq0313-1} \min_{C\subseteq X} & \ \sum_{x\in C}w(x)\\ \mbox{s.t.} & \ g(C)=g(X) \nonumber \end{align}} \end{definition} Note that MinGC is general enough to subsume many important problems, including the {\em minimum connected dominating set problem} (MinCDS) and the {\em minimum submodular cover problem} (MinSubmC), as special cases. In this paper, we develop a new general-purpose analytical framework, called {\em multi-phase bin-tracking analysis} (MultiBinTrack), to derive performance bounds for non-submodular utility functions without relying on approximate-submodularity. We develop this technique progressively, explaining the rationale behind the design of each component, and apply it to analyze the performance bound of the {\em global simple evolutionary multi-objective optimizer} (GSEMO) \cite{Giel0}, a simple and classic evolutionary algorithm, for the MinGC problem. The basic idea of this technique is to build a connection between an evolutionary algorithm and a greedy algorithm, which selects items iteratively based on their marginal utility-to-cost ratios. It is well known that the greedy strategy often has very good performance bounds for many coverage problems. A major contribution of this paper is to derive sufficient conditions under which GSEMO can achieve nearly the same approximation ratio as the greedy algorithm. Both MinCDS and integer-valued MinSubmC satisfy these conditions; hence GSEMO yields bounded approximation ratios for both problems in expected polynomial time.
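To make Definition \ref{def:1} concrete, the following sketch (our illustration, not part of the formal development; all names are hypothetical) encodes a small set cover instance as a MinGC instance, where the utility $g(C)$ counts the ground elements covered by $C$.
\begin{verbatim}
# A toy MinGC instance (illustrative): minimum set cover, where the
# "elements" of X are sets, w is the cost, and g counts covered points.
E = {1, 2, 3, 4, 5}                                 # points to cover
X = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}}
w = {"A": 2.0, "B": 1.0, "C": 1.0, "D": 1.0}

def g(C):
    """Monotone nondecreasing utility: number of points covered by C."""
    return len(set().union(*(X[v] for v in C))) if C else 0

# Feasibility in Definition 1: g(C) = g(X), i.e. C covers all of E.
assert g(X.keys()) == len(E)
\end{verbatim}
Here $g$ is a coverage function and hence submodular; the point of the MinGC formulation is that $g$ need not be.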
Furthermore, our framework gives a bi-criteria approximation algorithm for the {\em real-valued} MinSubmC problem that violates the feasibility constraint by only a small additive factor. It should be clarified that our main contribution is a new technique to analyze GSEMO (an existing general-purpose evolutionary algorithm) for the MinGC problem, rather than a new algorithm, in an attempt to reveal the deeper approximation mechanisms underlying this evolutionary algorithm. \subsection{Related Works}\label{sec1.1} Since the end of the last century, considerable progress has been made in understanding the theoretical performance of evolutionary algorithms \cite{DoerrBook,NeumannBook,Zhoubook}. In the following, we only examine performance bounds of evolutionary algorithms for the most closely related coverage problems. One classic example of such problems is the {\em minimum set cover} (MinSC) problem. Friedrich {\it et al.} \cite{Friedrich} showed that an approximation ratio of $(\ln n+1)$ can be achieved by the {\em global simple evolutionary multi-objective optimizer} (GSEMO) in expected time $O(n^2m + mn(\log m + \log c_{\max}))$, where $n$ is the number of elements to be covered, $m$ is the number of sets, and $c_{\max}$ is the maximum cost of a set. For the $k$-MinSC problem, in which every set has size at most $k$, \cite{Yu} introduced an evolutionary-algorithm framework which achieves the approximation ratio of a centralized approximation algorithm developed in \cite{Levin}. The {\em minimum vertex cover problem} (MinVC) is a special case of the MinSC problem, and has been a focus of many theoretical studies on evolutionary algorithms, including trade-offs between running time and approximation ratio \cite{Friedrich,Oliveto}, FPT algorithms \cite{Gao,Kratsch,Pourhassan}, and the dynamic setting \cite{Pourhassan2,Pourhassan1,ShiF}. A coverage function is a special submodular function. Various submodular optimization problems, due to their wide applications in artificial intelligence, have recently attracted a lot of attention from researchers studying theoretical aspects of evolutionary algorithms, especially submodular maximization under a cardinality constraint \cite{Qian3,Qian4,Qian2}. Friedrich and Neumann further studied submodular maximization under matroid constraints \cite{Friedrich1}. There have been several attempts to apply evolutionary algorithms to non-submodular optimization problems \cite{Qian3,Qian4,Qian2}. These works mostly focus on (utility) maximization rather than the (cost) minimization studied in this paper. Moreover, they often use a parameter called the \emph{submodularity ratio}, or similar concepts which may be called \emph{approximate-submodularity}, to bound the distance of a non-submodular function from submodularity. As a result, their performance bounds depend on the value of the approximate-submodularity. Note that the approximate-submodularity of the utility functions of many MinGC problems, such as the MinCDS problem, can be arbitrarily large, making existing solutions ineffective for the MinGC problem. Furthermore, in the above submodular or non-submodular optimization problems, the constraints are integer-valued. It is not clear whether good approximation ratios can be achieved by evolutionary algorithms when the constraints are real-valued, as in the real-valued MinSubmC problem. The remainder of this paper is organized as follows.
In Section \ref{sec3}, we give an overview of the technique of multi-phase bin-tracking analysis for GSEMO. In Section \ref{sec2}, we apply this technique to analyze the performance of GSEMO on the MinGC problem, and further apply the results to two special MinGC problems: the integer-valued minimum general cover problem (which includes the MinCDS problem) and the real-valued MinSubmC problem. Section \ref{sec4} concludes the paper and discusses future work. \section{Overview of Algorithm Design and Analysis}\label{sec3} In this section, we give a brief introduction to GSEMO, which is a classic general-purpose evolutionary algorithm. Then we give an overview of the technique of multi-phase bin-tracking analysis (MultiBinTrack). \subsection{Overview of GSEMO}\label{sec3.1} We first introduce some notation. A subset $S\subseteq X=\{v_1,\ldots,v_n\}$ can be identified with its {\em characteristic vector} $\textbf{x}\in \{0,1\}^n$, in which the $i$-th bit $x_i=1$ if and only if $v_i\in S$. In the following, we do not distinguish between a vector ${\bf x}$ and the set it represents, and use the term {\em individual} for both. Consider a minimization problem with a bi-objective function $(f_1({\bf x}), f_2({\bf x}))$. We say that an individual $\textbf{x}^\prime$ {\em weakly dominates} $\textbf{x}$, denoted $\textbf{x}^\prime\succeq \textbf{x}$, if $f_i(\textbf{x}^\prime)\leq f_i(\textbf{x})$ holds for every $i\in\{1, 2\}$. In this case, we also say that ${\bf x}'$ is {\em weakly better} than ${\bf x}$, or ${\bf x}$ is {\em weakly inferior} to ${\bf x}'$. We say that $\textbf{x}^\prime$ {\em dominates} $\textbf{x}$, denoted $\textbf{x}^\prime\succ\textbf{x}$, if $\textbf{x}^\prime\succeq \textbf{x}$ and there exists an $i\in\{1, 2\}$ with $f_i(\textbf{x}^\prime)<f_i(\textbf{x})$. In this case, we also say that ${\bf x}'$ is {\em better} than ${\bf x}$, or ${\bf x}$ is {\em inferior} to ${\bf x}'$. If neither $\textbf{x}^\prime\succeq \textbf{x}$ nor $\textbf{x}\succeq \textbf{x}^\prime$, then $\textbf{x}$ and $\textbf{x}^\prime$ are {\em incomparable}. A typical multi-objective evolutionary algorithm maintains a \emph{population} $P$, which is composed of a set of mutually incomparable individuals, i.e., $\forall\: \textbf{x}\in P$, the set $\{\textbf{x}^\prime\in P\colon \textbf{x}^\prime\succeq \textbf{x}, \textbf{x}^\prime\neq \textbf{x}\}=\emptyset$. It starts with some initial population $P=\{{\bf x}_0\}$. In each iteration, an individual ${\bf x}$ is picked uniformly at random from $P$ and mutated into an offspring ${\bf x}'$. If ${\bf x}'$ is not inferior to any individual in $P$, then ${\bf x}'$ is added into $P$ and those individuals which are weakly inferior to ${\bf x}'$ are deleted from $P$. There are various ways of performing the mutation. In GSEMO \cite{Giel0}, which is the focus of this paper, the mutation flips every bit of ${\bf x}$ independently with probability $1/n$. As discussed earlier, our main focus is on developing a novel technique to theoretically analyze GSEMO for the MinGC problem rather than inventing new algorithms. \subsection{Multi-Phase Bin-Tracking Analysis}\label{sec3.2} In this section, we give an overview of MultiBinTrack. Consider a problem of minimizing a bi-objective function $(f_1({\bf x}),f_2({\bf x}))$ such that $f_1(\cdot)$ is a utility function which measures feasibility, and $f_2(\cdot)$ is a cost function.
Suppose $f_1({\bf x})$ takes values from a discrete set $\{\xi_0,\xi_1,\ldots,\xi_{\beta}\}$, where $\xi_0<\xi_1<\cdots<\xi_{\beta}$, and ${\bf x}$ is a feasible solution if and only if $f_1({\bf x})=\xi_0$. Our goal is to find a feasible solution minimizing $f_2({\bf x})$. As GSEMO progresses, we maintain a group of $\beta+1$ bins $B_{\xi_0}, B_{\xi_1},\ldots, B_{\xi_{\beta}}$, each of which is initially empty. Slightly abusing notation, for each $i\in\{0,1,\ldots,\beta\}$, we use the same symbol $B_{\xi_i}$ to refer to the $i$-th bin as well as the set of individuals contained in that bin. It is important to clarify that this bin system is created only for the purpose of analysis; the implementation of GSEMO does not rely on it. Once a new individual ${\bf x}'$ is generated and inserted into $P$ by GSEMO, we add ${\bf x}'$ to $B_{f_1({\bf x}')}$ and delete all individuals that are weakly inferior to ${\bf x}'$ from the bin system if and only if ${\bf x}'$ satisfies some {\em quality control condition} $\pi$. Note that any individual ${\bf x}$ in $B_{\xi_i}$ has $f_1({\bf x})=\xi_i$, and $B_{\xi_0}\neq\emptyset$ implies that a feasible solution has been reached. However, a feasible solution might not be good. To ensure that an individual that can be put into $B_{\xi_0}$ has a good quality, the key is to find appropriate conditions $\pi$ to restrict those individuals that can enter the bin system. To bound the running time, we introduce a \emph{tracker} $I$, which tracks the smallest index of a non-empty bin, i.e., $I=\min\{i\in \{0,1,\ldots,\beta\}\colon B_{\xi_i}\neq \emptyset\}$. The analysis of time complexity involves two factors: $(a)$ prove that $I$ does not increase as more individuals are added into the bin system; $(b)$ starting from an arbitrary stage of GSEMO with $I=i$, estimate the expected time for $I$ to decrease by at least $1$; denote this expected time by $l_i$. \noindent Then the expected time it takes for $I$ to reach $0$, which indicates that we have successfully found a feasible solution in $B_{\xi_0}$, is at most $\sum_{i=1}^{\beta}l_i$. It turns out that the above framework of analysis is general enough to subsume the analyses used in many existing studies, including the maximum matroid base problem \cite{Qian1,Reichel}, the minimum set cover problem \cite{Friedrich,Yu}, and maximum submodular optimization \cite{Friedrich1,Qian3,Qian4}, as special cases. For example, consider the {\em minimum set cover problem} (MinSC). Given a set $E$ of $n$ elements and a collection of $m$ subsets $\mathcal S\subseteq 2^E$, where each set $S\in\mathcal S$ has a positive cost $c(S)$, the goal of MinSC is to select a subcollection $\mathcal F\subseteq \mathcal S$ that covers all elements, i.e., $\bigcup_{S\in\mathcal F}S=E$, and minimizes the cost $c(\mathcal F)=\sum_{S\in\mathcal F}c(S)$. The analysis in \cite{Friedrich} of the $H_n$-approximate evolutionary algorithm for MinSC (where $H_n=\sum_{i=1}^n1/i$ is the $n$-th Harmonic number) can be restated in the above framework as follows. Let $f_1({\bf x})$ be the number of elements left uncovered by ${\bf x}$ and $f_2({\bf x})=c({\bf x})$. Then $f_1({\bf x})$ can only take the discrete values $0,1,\ldots,n$, and $f_1({\bf x})=0$ indicates that ${\bf x}$ corresponds to a set cover.
For each $i\in\{0, 1,\ldots,n\}$ and any individual ${\bf x}$ with $f_1({\bf x})=i$, we say that ${\bf x}$ satisfies condition $\pi$ if and only if $c({\bf x})\leq (H_n-H_{i})opt$, where $opt$ is the optimal value (note that $opt$ is only used for the purpose of analysis). This indicates that every individual in $B_0$ is an $H_n$-approximate set cover. It can be shown that the tracker $I$ is monotone non-increasing, and the expected time it takes for $I$ to decrease by at least one is upper bounded by $O(mn)$; hence, the expected time to find an $H_n$-approximate solution is $O(n^2m)$, given that the starting population consists of the empty set. Unfortunately, when applying the above bin-tracking analysis to the MinGC problem, which subsumes the MinCDS problem and the real-valued MinSubmC problem as special cases, we encounter additional challenges. For the MinCDS problem, we find that the aforementioned bin-tracking analysis only works when $f_1({\bf x})$ is relatively large, that is, when ${\bf x}$ is relatively far from feasible. This motivates us to extend the bin-tracking analysis to {\em multi-phase} bin-tracking analysis, in which we conduct the bin-tracking analysis in multiple phases and adopt a different quality-control condition in each phase. This enables us to handle the case when $f_1({\bf x})$ is small. One challenge in this multi-phase analysis is how to concatenate the phases smoothly. We introduce the concept of ``advance'' to ensure both the quality control conditions and a smooth concatenation of the phases. These issues will be elaborated in Section \ref{sec3.3}, where we apply this technique to analyze the performance of GSEMO on the MinGC problem. Additional effort is required to deal with the {\em real-valued} MinSubmC problem, as the feasibility function takes values in a {\em continuous} range. \section{Solving MinGC}\label{sec2} In this section, we apply MultiBinTrack to analyze the performance of GSEMO for the MinGC problem (Definition \ref{def:1}). It is assumed that $g(\cdot)$ is real-valued, normalized ($g(\emptyset)=0$) and monotone nondecreasing, but not necessarily submodular. In Section \ref{sec0626-1}, we design a greedy algorithm for the MinGC problem and give sufficient conditions under which this algorithm achieves a theoretically guaranteed approximation ratio. Then, in Section \ref{sec3.3}, we show how to use MultiBinTrack to analyze the performance of GSEMO on MinGC. In Section \ref{sec0308-1}, we apply the results to some special cases of the MinGC problem, showing that GSEMO can achieve almost the same approximation ratios as the greedy algorithm in expected polynomial time. \subsection{Greedy Algorithm}\label{sec0626-1} In this section, we present a greedy algorithm {\sc Greedy} for the MinGC problem. The algorithm uses a greedy strategy, choosing a most cost-effective element in each iteration. What differs from other works is the sufficient conditions that we formulate for the algorithm to achieve a theoretically guaranteed approximation ratio for the MinGC problem. For two subsets $S, S'\subseteq X$, let $\Delta_Sg(S')=g(S\cup S')-g(S')$ be the {\em marginal profit} of $S$ over $S'$. {\sc Greedy} starts with an initial solution $C=\emptyset$.
In each subsequent iteration, {\sc Greedy} adds to $C$ an element $b$ satisfying \[b=\arg\max_v \{\Delta_vg(C)/w(v): v\in X\setminus C\},\] where $\Delta_vg(C)/w(v)$ is called the {\em cost-effectiveness} of element $v$ w.r.t.\ $C$. This process iterates until a feasible solution is reached. A detailed implementation of {\sc Greedy} is described in Algorithm \ref{algo1}. \begin{algorithm} [H] \caption{{\sc Greedy}} \begin{algorithmic}[1]\label{algo1} \STATE \textbf{Input:} A MinGC instance $(X,w,g)$. \STATE \textbf{Output:} A subset $C\subseteq X$ which is a feasible solution to MinGC. \STATE $C\leftarrow \emptyset$ \WHILE{ $g(C)<g(X)$} \STATE $b\leftarrow \arg\max_v \{\Delta_vg(C)/w(v): v\in X\setminus C\}$ \label{line4} \STATE $C\leftarrow C\cup \{b\}$ \ENDWHILE \RETURN $C$ \end{algorithmic} \end{algorithm} Assume, without loss of generality, that the weight of the cheapest element is $1$, and let $w_{\max}=\max_{v\in X}w(v)$ denote the weight of the most expensive element. The following parameter $\delta$ will be used in analyzing the approximation ratio: \begin{equation}\label{eq0810-1} \delta=\min_{C\subset X,v\in X\setminus C,g(C\cup\{v\})>g(C)}\{g(C\cup\{v\})-g(C)\}. \end{equation} Intuitively, $\delta$ measures the {\em degree of sparsity}, that is, the smallest gap between two distinct values of $g$. We always use $C^*$ to denote an optimal solution and let $opt=w(C^*)$. \begin{theorem}\label{thm1} Suppose a MinGC instance has the following properties: \begin{itemize} \item[$(\romannumeral1)$] for any element set $C\subset X$ with $g(C)<g(X)$, there is an element $v\in X\setminus C$ such that $\Delta_vg(C)>0$, and \item[$(\romannumeral2)$] there exists a constant $p$ such that for any $C\subset X$, the elements in $C^*\setminus C$ can be ordered as $v_1,\ldots, v_t,v_{t+1},\ldots, v_{\widehat{t}}$, where $t$ is the smallest index satisfying $g(C_t^*\cup C)= g(X)$, here $C^*_{i}=\{v_1,\ldots,v_i\}$ and $C^*_{0}=\emptyset$, and for all $i=1,\dots, t$, \begin{align}\label{eq0728-0} \Delta_{v_{i}}g(C^*_{i-1}\cup C)\leq \Delta_{v_{i}}g(C)+p. \end{align} \end{itemize} Then {\sc Greedy} achieves an approximation ratio of at most $(p+1)\frac{w_{\max}}{\delta}+\ln\frac{g(X)-p\cdot opt}{opt}$ (if $g(X)-(p+1)\cdot opt\leq0$, then $\ln\frac{g(X)-p\cdot opt}{opt}$ is viewed as $0$). \end{theorem} \begin{proof} Suppose the output of Algorithm \ref{algo1} is $C=\{b_1,\ldots,b_s\}$, where $b_i$ is the element selected in the $i$-th iteration. Denote $C_0=\emptyset$ and $C_i=\{b_1,\ldots,b_i\}$ for each $i\in\{1,\ldots,s\}$. For $i\in\{1,\ldots, s\}$, let $\{v_{i,1},\ldots,v_{i,t_i},v_{i,t_i+1},\ldots,v_{i,\hat t_i}\}$ be the ordered set of $C^*\setminus C_{i-1}$ satisfying the condition of this theorem, where $t_i$ is the smallest index satisfying $g(C_{i-1}\cup \{v_{i,1},\ldots,v_{i,t_i}\})= g(X)$. Denote $C_{i,j}^*=\{v_{i,1},\ldots,v_{i,j}\}$ for $j=1,\ldots,t_i$, and let $C_{i,0}^*=\emptyset$. By the definition of $\delta$ and the greedy choice of $b_i$, we have $\Delta_{b_i}g(C_{i-1})\geq \delta$ for any $i\in\{1,\ldots,s\}$. Hence \begin{align}\label{eq0730-1} \frac{\Delta_{b_i}g(C_{i-1})}{w(b_i)}\geq \frac{\delta}{w_{\max}}.
\end{align} If $g(X)\leq (p+1)opt$, then combining \eqref{eq0730-1} with $g(C_s)=g(X)$ and $g(\emptyset)=0$, we have \begin{align*} w(C_s)&=\sum_{i=1}^sw(b_i) \leq \sum_{i=1}^s\frac{w_{\max}}{\delta}\Delta_{b_i}g(C_{i-1}) \\ &=\frac{w_{\max}}{\delta}(g(C_s)-g(\emptyset))=\frac{w_{\max}}{\delta}g(X) \\ &\leq\frac{w_{\max}}{\delta}\big(p+1\big)opt, \end{align*} and the desired approximation ratio holds in this case. In the following we assume \begin{align}\label{eq0314-10} g(X)> (p+1)opt. \end{align} For $i\in\{0,1,\dots,s\}$, let $\alpha_i=g(X)-g(C_i)-p\cdot opt$. The following claim shows that $\alpha_i$ decreases geometrically as long as it remains positive. \textbf{Claim 1.} If $\alpha_{i-1}>0$, then \begin{align}\label{eq0314-7} \alpha_i\leq e^{-\frac{w(b_i)}{opt}}\alpha_{i-1}. \end{align} Consider the $i$-th iteration. By the greedy choice of $b_i$, we have \begin{align*} \frac{\Delta_{b_i}g(C_{i-1})}{w(b_i)}\geq \frac{\Delta_{v_{i,j}}g(C_{i-1})}{w(v_{i,j})},\ \forall j\in\{1,\ldots,t_i\}. \end{align*} It follows that \begin{align}\label{eq0314-1} \frac{\Delta_{b_i}g(C_{i-1})}{w(b_i)}\geq \frac{\sum_{j=1}^{t_i}\Delta_{v_{i,j}}g(C_{i-1})}{\sum_{j=1}^{t_i}w(v_{i,j})}. \end{align} Because the minimum weight is $1$, we have \begin{align}\label{eq0314-4} t_i\leq |C^*\setminus C_{i-1}|\leq w(C^*\setminus C_{i-1})\leq opt. \end{align} Combining \eqref{eq0728-0}, \eqref{eq0314-1}, \eqref{eq0314-4} and the fact that $\sum_{j=1}^{t_i}w(v_{i,j})\leq opt$, we have \begin{align} \frac{\Delta_{b_i}g(C_{i-1})}{w(b_i)} & \geq \frac{\sum_{j=1}^{t_i}\big(\Delta_{v_{i,j}}g(C_{i-1}\cup C_{i,j-1}^*)-p\big)}{opt}\nonumber \\ &=\frac{\sum_{j=1}^{t_i}\left(g(C_{i-1}\cup C_{i,j}^*)-g(C_{i-1}\cup C_{i,j-1}^*)\right) -p\cdot t_i}{opt}\nonumber\\ &\geq\frac{g(C_{i-1}\cup C_{i,t_i}^*)-g(C_{i-1})-p\cdot opt}{opt}\nonumber\\ & = \frac{g(X)-g(C_{i-1})-p\cdot opt}{opt}.\label{eq0314-5} \end{align} It follows that $\alpha_i=g(X)-g(C_i)-p\cdot opt$ satisfies \begin{equation}\label{eq0627-1} \frac{\alpha_{i-1}-\alpha_i}{w(b_i)}\geq \frac{\alpha_{i-1}}{opt}, \end{equation} and thus $$ \alpha_i\leq \left(1-\frac{w(b_i)}{opt}\right)\alpha_{i-1}\leq e^{-\frac{w(b_i)}{opt}}\alpha_{i-1}, $$ where the second inequality uses the fact $1+x\leq e^x$. Claim 1 is proved. Using inequality \eqref{eq0314-7} recursively, as long as $\alpha_{i-1}>0$, we have \begin{align}\label{eq0314-8} \alpha_i \leq e^{-\frac{\sum_{j=1}^iw(b_j)}{opt}}\alpha_0. \end{align} Note that assumption \eqref{eq0314-10} guarantees $\alpha_0>opt$. Since $\alpha_s=-p\cdot opt\leq opt$, there is an index $i_0$ such that $\alpha_{i_0}> opt$ and $\alpha_{i_0+1}\leq opt$. Let $w(b_{i_0+1})=d_1+d_2$ satisfy the following constraint: \begin{align}\label{eq0315-1} \frac{\alpha_{i_0}-opt}{d_1}=\frac{\alpha_{i_0}-\alpha_{i_0+1}}{w(b_{i_0+1})}=\frac{opt-\alpha_{i_0+1}}{d_2}. \end{align} {\bf Claim 2.} For the above index $i_0$, the following inequalities hold: \begin{align} & \sum_{i=1}^{i_0}w(b_i)+d_1\leq \ln\frac{\alpha_0}{opt}\cdot opt.\label{eq0314-9}\\ & \sum_{i=i_0+2}^sw(b_i)\leq \frac{w_{\max}}{\delta}\big(g(C_s)-g(C_{i_0+1})\big).\label{eq0314-12}\\ & d_2\leq \frac{w_{\max}}{\delta}(opt-\alpha_{i_0+1}).\label{eq0315-10} \end{align} Combining the first equality of \eqref{eq0315-1} with inequality \eqref{eq0627-1} (taking $i=i_0+1$), we have $$ opt\leq \left(1-\frac{d_1}{opt}\right)\alpha_{i_0}\leq e^{-\frac{d_1}{opt}}\alpha_{i_0}. $$ Combining this inequality with \eqref{eq0314-8}, we have $$ opt\leq e^{-\frac{\sum_{j=1}^{i_0}w(b_j)+d_1}{opt}}\alpha_0.
$$ Then inequality \eqref{eq0314-9} follows by taking logarithms and rearranging. By inequality \eqref{eq0730-1}, we have $$ \sum_{i=i_0+2}^sw(b_i)\leq \frac{w_{\max}}{\delta}\sum_{i=i_0+2}^s\Delta_{b_i}g(C_{i-1})=\frac{w_{\max}}{\delta}\big(g(C_s)-g(C_{i_0+1})\big). $$ Inequality \eqref{eq0314-12} is proved. Inequality \eqref{eq0315-10} follows by combining the second equality of \eqref{eq0315-1}, inequality \eqref{eq0730-1}, and the fact that $\alpha_{i_0}-\alpha_{i_0+1}=\Delta_{b_{i_0+1}}g(C_{i_0})$. Claim 2 is proved. Combining Claim 2 with the facts $g(C_s)= g(X)$ and $\alpha_{i_0+1}=g(X)-g(C_{i_0+1})-p\cdot opt$, \begin{align*} w(C) & =\sum_{i=1}^sw(b_i)= \sum_{i=1}^{i_0}w(b_i)+d_1 +d_2+ \sum_{i=i_0+2}^sw(b_i)\\ & \leq \ln\frac{\alpha_0}{opt}\cdot opt+\frac{ w_{\max}}{\delta}(opt-\alpha_{i_0+1})+\frac{w_{\max}}{\delta}\big(g(C_s)-g(C_{i_0+1})\big)\\ &= \ln\frac{\alpha_0}{opt}\cdot opt+\frac{w_{\max}}{\delta}\big(g(X)-g(C_{i_0+1})+opt-\alpha_{i_0+1}\big)\\ & =\left( \frac{w_{\max}}{\delta}(1+p) + \ln\frac{\alpha_0}{opt}\right)opt. \end{align*} The desired approximation ratio is proved. \end{proof} \subsection{GSEMO on MinGC} \label{sec3.3} In this section, we apply MultiBinTrack to show that under the conditions of Theorem \ref{thm1}, GSEMO achieves almost the same approximation ratio for the MinGC problem in expected polynomial time. A detailed GSEMO for MinGC is described in Algorithm \ref{algo2}. Given an instance of MinGC, the {\em fitness} of a solution ${\bf x}$ is captured by a bi-objective function $(f_1({\bf x}), f_2({\bf x}))$, where $f_1({\bf x})$ measures the portion left uncovered by ${\bf x}$ and $f_2({\bf x})$ denotes the weight of ${\bf x}$. Specifically, letting $S_\textbf{x}$ denote the subset of elements corresponding to the characteristic vector $\textbf{x}$, we define \begin{align}\label{eq0704-1} & f_1(\textbf{x})=\left\lfloor\frac{g(X)-g(\textbf{x})}{\delta} \right\rfloor\cdot\delta \ \mbox{and}\\ & f_2(\textbf{x})=w(\textbf{x}),\nonumber \end{align} where $g(\textbf{x})=g(S_\textbf{x})$ and $w(\textbf{x})=\sum_{x\in S_\textbf{x} }w(x)$. GSEMO starts from the population $P=\{{\bf 0}\}$ containing only the zero individual. In each subsequent iteration, it picks an individual ${\bf x}$ uniformly at random from $P$, and generates a new individual $\textbf{x}^\prime$ by flipping each bit of $\textbf{x}$ with probability $\frac{1}{n}$. We add ${\bf x}'$ into $P$ if $\textbf{x}^\prime$ is not inferior to any individual in $P$. If ${\bf x}'$ has been added to $P$, then we remove from $P$ all individuals which are weakly inferior to ${\bf x}'$. On termination, the algorithm outputs the best feasible solution stored in the current $P$. It should be noted that an evolutionary algorithm usually starts from a randomly generated initial solution. We let the algorithm start from the zero solution in order to focus on the most central part of the analysis. In fact, by an analysis similar to that in \cite{Friedrich}, one can show that the zero solution enters the population in expected polynomial time. It should also be pointed out that an evolutionary algorithm usually runs indefinitely, but we prefer to set a termination time for GSEMO. As we shall show later, if the termination time $T$ is set properly, a solution with guaranteed performance is obtained with high probability. \begin{algorithm} [H] \caption{{\sc GSEMO}} \begin{algorithmic}[1] \label{algo2} \STATE \textbf{Input:} $(X,f_1,f_2)$ with $|X|=n$ and the number of iterations $T$. \STATE \textbf{Output:} an individual $\textbf{x}$.
\STATE $P\leftarrow \{{\bf 0}\}$ \FOR{$t=1,2,\dots,T$} \STATE Select $\textbf{x}$ from $P$ uniformly at random;\label{line0606-1} \STATE Generate $\textbf{x}^\prime$ by flipping each bit of $\textbf{x}$ with probability $\frac{1}{n}$;\label{line5} \IF {($\nexists~\textbf{z}\in P$ with $\textbf{z}\succ \textbf{x}^\prime)$}\label{line0531-1} \STATE $P= P\setminus\{{\textbf{z}: \textbf{x}^\prime\succeq \textbf{z}}\wedge\textbf{z}\in P \}\cup \{\textbf{x}^\prime\}$\label{line0602-1} \ENDIF \ENDFOR \STATE Return $\arg\min_{{\bf x}}\{f_2({\bf x})\colon {\bf x}\in P,f_1({\bf x})=0\}$, if any. \end{algorithmic} \end{algorithm} \begin{remark}\label{rem1} {\rm If we simply used $f_1({\bf x})=g({\bf x})$ in GSEMO, then when $g(\cdot)$ is a real-valued function, there might be too many individuals entering $P$, and thus the time/space complexity might not be bounded. Hence we discretize the function $g({\bf x})$ into $f_1({\bf x})$ as in \eqref{eq0704-1}. As a result, the solution returned by Algorithm \ref{algo2} might violate the feasibility constraint by an additive error up to $\delta$. That is, an individual ${\bf x}$ satisfying $g({\bf x})> g(X)-\delta$ (or equivalently, $f_1({\bf x})=0$) is regarded as a nearly feasible solution. The goal is to find a {\em nearly feasible} solution minimizing $f_2({\bf x})$. Note that when $g(\cdot)$ is integer-valued, we may take $\delta=1$, and there is no loss in feasibility.} \end{remark} \begin{remark}\label{rem2} {\rm If the condition described in Theorem \ref{thm1} is satisfied, then the function $f_1(\cdot)$ satisfies the following chain of inequalities: \begin{align*} -\Delta_{v_{i}}f_1(C^*_{i-1}\cup C)&= \left \lfloor\frac{g(X)-g(C^*_{i-1}\cup C)}{\delta}\right \rfloor\delta-\left \lfloor\frac{g(X)-g(C^*_{i-1}\cup C \cup \{v_i\})}{\delta}\right \rfloor\delta \nonumber \\ &\leq \big(g(X)-g(C^*_{i-1}\cup C) \big)-\left(\frac{g(X)-g(C^*_{i-1}\cup C\cup \{v_i\})}{\delta}-1\right)\delta \nonumber \\ &=g(C^*_{i-1}\cup C\cup \{v_i\})-g(C^*_{i-1}\cup C)+\delta \nonumber \\ &\leq \Delta_{ v_i }g(C)+p+\delta \nonumber \\ &= \big(g(X)-g(C)\big)-\big(g(X)-g(C\cup \{v_i\})\big) +p+\delta\nonumber \\ &\leq \left(\left \lfloor\frac{g(X)-g(C)}{\delta}\right \rfloor\delta+\delta \right )-\left \lfloor\frac{g(X)-g(C\cup \{v_i\})}{\delta}\right \rfloor\delta+p+\delta\nonumber \\ &=-\Delta_{ v_i }f_1(C)+p+2\delta. \end{align*} Note that if $g(\cdot)$ is integer-valued, then there is no loss of $2\delta$ in the above inequality.} \end{remark} The next lemma estimates the number of bins used in the analysis. \begin{lemma}\label{lem0606-3} Let $\beta=\left \lfloor\frac{g(X)-g(\emptyset)}{\delta} \right \rfloor $. The population $P$ maintained by GSEMO (Algorithm \ref{algo2}) satisfies $|P|\leq \beta+1$ throughout the evolutionary process. \end{lemma} \begin{proof} By the monotonicity of $g(\cdot)$, we have $0=g({\bf 0})\leq g({\bf x})\leq g(X)$ for any individual ${\bf x}$. Then by the definition of $f_1(\cdot)$ in \eqref{eq0704-1}, $0\leq f_1({\bf x})\leq \left \lfloor (g(X)-g({\bf 0}))/\delta\right \rfloor\cdot\delta=f_1({\bf 0})=\beta\delta$. Since $f_1({\bf x})$ can only take values from $\{0, \delta,\ldots,\beta\delta\}$, it has at most $\beta+1$ possible values. According to line \ref{line0602-1} of Algorithm \ref{algo2}, for each $i\in \{0, \delta,\ldots,\beta\delta\}$, $P$ contains at most one individual whose $f_1$-value is $i$. Hence, the size of $P$ is at most $\beta+1$.
\end{proof} \begin{theorem}\label{thm2} If a MinGC instance satisfies the conditions described in Theorem \ref{thm1}, then in expected $O(\beta^2 n)$ time, GSEMO returns a nearly feasible solution with approximation ratio at most $\frac{w_{\max}}{\delta}\big(p+1+2\delta\big)+\ln\frac{f_1(\emptyset)-(p+2\delta )\cdot opt}{opt-\delta}$. Furthermore, in the case when $g(\cdot)$ is integer-valued, GSEMO returns a feasible solution in $O(g(X)^2n)$ time with approximation ratio at most $w_{\max}(1+p) + \ln\frac{g(X)-p\cdot opt}{opt}$. \end{theorem} \begin{proof} In the following, we apply the technique of MultiBinTrack introduced in Section \ref{sec3.2} to prove this theorem. First create $\beta+1$ bins $B_0, B_1, \ldots, B_{\beta}$, each of which is empty. Initially, $P=\{{\bf 0}\}$; add the initial solution ${\bf 0}$ to $B_{\beta}$. Recall that we use $I$ to track the smallest index of a non-empty bin; hence, $I=\beta$ initially. Once a new individual ${\bf x}$ is generated, suppose $\frac{f_1({\bf x})}{\delta}=i$; we add ${\bf x}$ to $B_{i}$ if and only if ${\bf x}$ ``advances'' some existing individual in the bin system (the meaning of advance and the related operations on the bin system will be clarified later). We divide the process into two phases. The first phase of analysis aims to bound the expected time it takes for $I$ to drop from $\beta$ to some value less than $\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor$, and the second phase of analysis aims to bound the expected time it takes for $I$ to further drop to $0$. It should be emphasized again that $opt$ is only used for the purpose of analysis. Next, we explain these two phases of analysis in detail. \textbf{In the first phase of analysis,} we say that an individual ${\bf x}$ satisfies a quality-control condition $\pi^{(1)}$ if \begin{align} f_1({\bf x})\leq \alpha^\prime_0e^{-\frac{f_2(\textbf{x})}{opt}}+(p+2\delta)\cdot opt,\label{eq0602-12} \end{align} where $\alpha^\prime_0=f_1(\emptyset)-(p+2\delta )\cdot opt$. We say that ${\bf x}'$ {\em advances} ${\bf x}$ in the first phase if either ${\bf x}'\succeq {\bf x}$ or the following two conditions are satisfied: \begin{align} & f_1(\textbf{x}^\prime)-(p+2\delta)\cdot opt \leq \left (1-\frac{f_2(\textbf{x}^\prime)-f_2(\textbf{x})}{opt}\right)\big(f_1(\textbf{x})-(p+2\delta)\cdot opt \big) \label{eq0316-2-2}\\ & \mbox{and}\ f_1(\textbf{x}^\prime)\leq f_1(\textbf{x})-\delta\ \mbox{and}\ f_2(\textbf{x}^\prime)\leq f_2(\textbf{x})+w_{\max}. \label{eq0316-2} \end{align} Note that if ${\bf x}'\succeq {\bf x}$, then ${\bf x}$ and ${\bf x}'$ satisfy condition \eqref{eq0316-2-2}. Also note that condition \eqref{eq0316-2} says that ${\bf x}'$ might be inferior to ${\bf x}$ in terms of $f_2$-value, but the gap is no larger than $w_{\max}$; at the same time, ${\bf x}'$ must be strictly better than ${\bf x}$ in terms of $f_1$-value by an additive amount of at least $\delta$. A newly generated individual ${\bf x}'$ {\em has a potential} to be added into the bin system if ${\bf x}'$ advances some existing individual ${\bf x}$ in the bin system. If, furthermore, ${\bf x}'$ is eligible to enter the population $P$, then we add ${\bf x}'$ to $B_{i}$, where $i=f_1({\bf x}')/\delta$. In order to be consistent with the population $P$, those individuals in the bin system that are deleted from $P$ because of the insertion of ${\bf x}'$ are also deleted from the bin system.
If ${\bf x}'$ has the above potential but is not eligible to enter $P$, then there is an individual ${\bf y}\in P$ with ${\bf y}\succ {\bf x}'$. In this case ${\bf y}$ advances ${\bf x}$, and we add ${\bf y}$ to $B_{j}$, where $j=f_1({\bf y})/\delta$. The above manipulation ensures that an individual can enter the bin system only when it advances some existing individual in the bin system, and individuals kept in the bin system also belong to the population. Note that the bin system only records those ``good'' individuals in $P$ for the purpose of analysis. First, we prove that the advance criterion \eqref{eq0316-2-2} guarantees condition $\pi^{(1)}$. {\bf Claim 1.} If ${\bf x}'$ is added to the bin system in the first phase, then ${\bf x}'$ satisfies $\pi^{(1)}$. This claim can be proved by induction. Initially, $P=\{{\bf 0}\}$, and ${\bf 0}$ trivially satisfies $\pi^{(1)}$. When ${\bf x}'$ is added into the bin system, ${\bf x}'$ must advance some existing individual ${\bf x}$ in the bin system. Suppose ${\bf x}\in B_i$. By the induction hypothesis, ${\bf x}$ satisfies property $\pi^{(1)}$, that is, inequality \eqref{eq0602-12}. Combining this with condition \eqref{eq0316-2-2}, we have \begin{align*} f_1({\bf x}') & \leq \left (1-\frac{f_2(\textbf{x}^\prime)-f_2(\textbf{x})}{opt}\right)\big(f_1(\textbf{x})-(p+2\delta)\cdot opt \big)+(p+2\delta)\cdot opt \\ & \leq e^{-\frac{f_2(\textbf{x}^\prime)-f_2(\textbf{x})}{opt}} \big(f_1(\textbf{x})-(p+2\delta)\cdot opt \big)+(p+2\delta)\cdot opt \\ & \leq e^{-\frac{f_2(\textbf{x}^\prime)-f_2(\textbf{x})}{opt}} \cdot \alpha^\prime_0e^{-\frac{f_2(\textbf{x})}{opt}}+(p+2\delta)\cdot opt \\ & = \alpha^\prime_0e^{-\frac{f_2(\textbf{x}^\prime)}{opt}}+(p+2\delta)\cdot opt. \end{align*} Hence, ${\bf x}'$ satisfies $\pi^{(1)}$. Claim 1 is proved. The next claim shows that the cost of an individual in the first phase is not too high. \textbf{Claim 2}. For any individual $\textbf{x}$ added into the bin system in the first phase, we have \begin{align}\label{eq0316-6} f_2(\textbf{x})\leq opt\cdot \ln\frac{\alpha^\prime_0}{opt-\delta}. \end{align} Suppose $\textbf{x}\in B_i$ with $i\geq \left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor$. By Claim 1, ${\bf x}$ satisfies inequality \eqref{eq0602-12}. Combining \eqref{eq0602-12} with the observation that $f_1(\textbf{x})=i\delta\geq\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor\delta \geq \left(\frac{(p+2\delta+1)opt}{\delta}-1\right)\delta$, and rearranging, Claim 2 follows. The next claim estimates the expected time it takes for $I$ to drop from $\beta$ to some value less than $\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor$. Here, we assume that $\beta\geq \left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor$; otherwise, we can skip the first phase of analysis and jump directly to the second phase. {\bf Claim 3.} The expected time it takes for $I$ to decrease from $\beta$ to some value less than $\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor$ is at most $e\big(\beta-\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor+1\big) (1+\beta)n$. Initially, $P=\{{\bf 0}\}$ and $I=\beta$. Note that \begin{equation}\label{eq0603-1} \mbox{$I$ does not increase over time.} \end{equation} In fact, consider an individual ${\bf x}\in B_I$. If ${\bf x}$ stays in the bin system, then $I$ does not increase. If ${\bf x}$ is deleted from the bin system, it must be due to the generation of an individual ${\bf x}'$ which is weakly better than ${\bf x}$.
Note that such an ${\bf x}'$ advances ${\bf x}$, and thus can enter the bin system. So in this case, the new $I$ is at most $\frac{f_1({\bf x}')}{\delta}\leq \frac{f_1({\bf x})}{\delta}$. Now, we estimate the expected time it takes for $I$ to decrease by at least $1$ in the first phase. Assume $I\geq \left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor$ and ${\bf x}\in B_I$. Let \begin{align}\label{eq0606-5} b_{\bf x}&= \arg\max_{v\in X\setminus S_{\bf x}}-\Delta_vf_1(S_{\bf x})/f_2(v), \end{align} where $S_{\bf x}$ is the set of elements corresponding to ${\bf x}$. Let ${\bf x}'$ be the individual obtained from ${\bf x}$ by changing the bit corresponding to $b_{\bf x}$ from $0$ to $1$. Note that \begin{equation}\label{eq0811-3} f_1({\bf x}')\leq f_1({\bf x})-\delta. \end{equation} Before proving \eqref{eq0811-3}, we first observe that as long as $f_1({\bf x})>0$, there always exists an element $v\in X\setminus S_{\bf x}$ such that $-\Delta_vf_1({\bf x})>0$. In fact, condition $(\romannumeral1)$ of Theorem \ref{thm1} guarantees that the element $v=\arg\max_{u\in X\setminus S_{\bf x}}\Delta_ug(S_{\bf x})$ satisfies $g(S_{\bf x}\cup \{v\})>g(S_{\bf x})$, and thus by the definition of $\delta$, we have $g(S_{\bf x}\cup \{v\})\geq g(S_{\bf x})+\delta$. It follows that the individual ${\bf y}$ corresponding to $S_{\bf x}\cup \{v\}$ satisfies \begin{align*} f_1({\bf x})-f_1({\bf y}) & =\delta\cdot\left(\left \lfloor\frac{g(X)-g({\bf x})}{\delta}\right \rfloor-\left \lfloor\frac{g(X)-g({\bf y})}{\delta}\right \rfloor\right)\\ & > \delta\cdot\left(\frac{g(X)-g({\bf x})}{\delta}-1-\frac{g(X)-g({\bf y})}{\delta}\right)\\ & =g({\bf y})-g({\bf x})-\delta\geq 0, \end{align*} where ``$>$'' holds by the observation that $a-1<\lfloor a\rfloor\leq a$ ($\forall a\in \mathbb R$). Then, by the choice of $b_{\bf x}$ in \eqref{eq0606-5}, we have $f_1({\bf x})-f_1({\bf x}')>0$. Note that the $f_1$-value of an individual can only be a multiple of $\delta$, so $f_1({\bf x})-f_1({\bf x}')>0$ implies $f_1({\bf x})-f_1({\bf x}')\geq \delta$. Next, we show that \begin{equation}\label{eq0811-1} \mbox{the above ${\bf x}'$ advances ${\bf x}$.} \end{equation} Suppose $C^*\setminus S_{\bf x}=\{v_1,\dots, v_t,\dots, v_{\hat{t}}\}$ is the decomposition described in Theorem \ref{thm1}. Similar to the proof of \eqref{eq0314-4}, we have $t\leq opt$. Combining this with the choice of $b_{\bf x}$ in \eqref{eq0606-5}, Remark \ref{rem2}, and the facts $f_1(S_{\bf x}\cup C^*_{t})=0$ and $f_2(C^*_{t})\leq opt$, we have \begin{align*} \frac{-\Delta_{b_{\bf x}}f_1(S_{\bf x})}{f_2(b_{\bf x})}&\geq \frac{-\sum_{j=1}^{t}\Delta_{v_j}f_1(S_{\bf x})}{\sum_{j=1}^{t}f_2(v_j)} \nonumber \\ &\geq \frac{\sum_{j=1}^{t}(-\Delta_{v_j}f_1(S_{\bf x}\cup C^*_{j-1})-p-2\delta)}{opt} \nonumber \\ &= \frac{\sum_{j=1}^{t}(-\Delta_{v_j}f_1(S_{\bf x}\cup C^*_{j-1}))-(p+2\delta)\cdot t}{opt} \nonumber \\ &\geq \frac{f_1(S_{\bf x})-f_1(S_{\bf x}\cup C^*_t)-(p+2\delta)\cdot opt}{opt} \nonumber \\ &=\frac{f_1(S_{\bf x})-(p+2\delta)\cdot opt}{opt}. \end{align*} Rearranging this inequality and using $f_2({\bf x}' )-f_2({\bf x} )=f_2(b_{\bf x} )$, we obtain inequality \eqref{eq0316-2-2}. Furthermore, by inequality \eqref{eq0811-3}, and because $f_2({\bf x}')=f_2({\bf x})+w(b_{\bf x})\leq f_2({\bf x})+w_{\max}$, the individuals ${\bf x}'$ and ${\bf x}$ satisfy \eqref{eq0316-2}. So ${\bf x}'$ advances ${\bf x}$.
As a consequence of \eqref{eq0811-1}, if ${\bf x}$ is mutated into ${\bf x}'$, then either ${\bf x}'$ or an individual ${\bf y}\in P$ with ${\bf y}\succ {\bf x}'$ can be added into the bin system. In this case, the tracker $I$ decreases to $\frac{f_1({\bf x}')}{\delta}$ or $\frac{f_1({\bf y})}{\delta}$, both of which are smaller than $I$. By our synchronous maintenance of the bin system and the population, individual ${\bf x}$ belongs to $P$. The probability that ${\bf x}$ is picked in line \ref{line0606-1} of Algorithm \ref{algo2} is $1/|P|\geq 1/(\beta+1)$ (by Lemma \ref{lem0606-3}), and the probability that ${\bf x}$ is mutated into the above ${\bf x}'$ is $\left(\frac{1}{n}\right)\left(1-\frac{1}{n}\right)^{n-1}\geq \frac{1}{en}$. Hence, $$ \Pr(I \ \mbox{decreases by at least 1})\geq\frac{1}{e(\beta+1)n}, $$ and thus the expected time it takes for $I$ to decrease by at least $1$ is at most $e(\beta+1)n$. Combining this with property \eqref{eq0603-1}, the total expected time it takes for $I$ to decrease from $\beta$ to some value less than $\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor$ is upper bounded by $e\big(\beta-\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor+1\big) (1+\beta )n$. Hence, Claim 3 is proved. \textbf{We next conduct the second phase of analysis.} Assume that $I$ has dropped to some value $\gamma<\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor$ at the end of the first-phase analysis. We say an individual ${\bf x}$ satisfies a quality-control condition $\pi^{(2)}$ if \begin{align}\label{eq0666-1} f_2(\textbf{x})\leq w_{\max}\left( \left \lfloor\frac{(1+p+2\delta)opt}{\delta}\right \rfloor-\frac{f_1(\textbf{x})}{\delta}\right)+\left(\ln\frac{\alpha^\prime_0}{opt- \delta}\right)opt. \end{align} Moreover, we say that ${\bf x}'$ {\em advances ${\bf x}$ in the second phase} if either ${\bf x}'\succeq {\bf x}$ or they satisfy relation \eqref{eq0316-2}. A newly generated individual ${\bf x}'$ has a potential to enter the bin system if ${\bf x}'$ advances some existing individual in the {\em second-phase bin system} (that is, ${\bf x}'$ advances some ${\bf x}\in B_j$ with $j<\left \lfloor\frac{(1+p+2\delta)opt}{\delta}\right \rfloor$). The manipulation of the bin system is similar to that in the first phase, using the new meaning of advance: if ${\bf x}'$ has the above potential, then either ${\bf x}'$ or an individual ${\bf y}\in P$ with ${\bf y}\succ {\bf x}'$ is added into the bin system, depending on whether ${\bf x}'$ is eligible to enter $P$, and a consistency operation is executed to remove inferior individuals from both $P$ and the bin system. Assume that ${\bf x}_1$ is the first individual entering the second-phase bin system (it is the last individual added at the end of the first phase, and ${\bf x}_1\in B_{\gamma}$). The next claim builds a bridge between the first and the second phases. {\bf Claim 4.} ${\bf x}_1$ satisfies condition $\pi^{(2)}$, and any individual that is added to the second-phase bin system satisfies $\pi^{(2)}$. Because ${\bf x}_1$ is the first individual entering the second-phase bin system, ${\bf x}_1$ must advance some existing individual ${\bf x}$ in the first-phase bin system. By Claim 2, ${\bf x}$ satisfies inequality \eqref{eq0316-6}. Then, by the criteria of advance in the second phase, we have $$ f_2({\bf x}_1)\leq f_2({\bf x})+w_{\max}\leq opt\cdot \ln\frac{\alpha^\prime_0}{opt- \delta}+ w_{\max}.
$$ Combining this with $\frac{f_1({\bf x}_1)}{\delta}=\gamma\leq\left \lfloor\frac{(1+p+2\delta)opt}{\delta}\right \rfloor -1$, individual ${\bf x}_1$ satisfies inequality \eqref{eq0666-1}. The first part of Claim 4 is proved. We next prove the second part of Claim 4 by induction. Consider any individual ${\bf x}'\neq {\bf x}_1$ which is added to the second-phase bin system; then ${\bf x}'$ advances some existing individual ${\bf x}$ in the second-phase bin system. By induction, ${\bf x}$ satisfies $\pi^{(2)}$, that is, inequality \eqref{eq0666-1}. If ${\bf x}'\succeq {\bf x}$, then ${\bf x}'$ also satisfies inequality \eqref{eq0666-1}. If ${\bf x}$ and ${\bf x}'$ satisfy relation \eqref{eq0316-2}, then \begin{align*} f_2(\textbf{x}^\prime) &\leq f_2(\textbf{x})+ w_{\max}\nonumber\\&\leq w_{\max} \left( \left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor-\frac{f_1(\textbf{x})}{\delta}\right)+\left(\ln\frac{\alpha^\prime_0}{opt-\delta}\right)opt+ w_{\max}\nonumber\\ &=w_{\max} \left( \left\lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor-\left(\frac{f_1(\textbf{x})}{\delta}-1\right)\right)+\left(\ln\frac{\alpha^\prime_0}{opt-\delta}\right)opt \nonumber \\ & \leq w_{\max}\left( \left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor-\frac{f_1(\textbf{x}^\prime)}{\delta}\right)+\left(\ln\frac{\alpha^\prime_0}{opt-\delta}\right)opt. \end{align*} In any case, ${\bf x}'$ also satisfies $\pi^{(2)}$. Claim 4 is proved. Note that if the first phase does not exist, that is, if $\frac{f_1({\bf 0})}{\delta}< \left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor$, then the initial solution ${\bf 0}$ satisfies condition $\pi^{(2)}$, and all the remaining arguments go through. Similar to the derivation in Claim 3, we have the following claim, which estimates the expected time it takes for $I$ to drop from $\gamma$ to $0$. {\bf Claim 5.} The expected time it takes for the tracker $I$ to drop from $\gamma$ to $0$ is at most $e\left(\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor-1\right) (1+\beta)n$. {\bf Putting the two phases together:} the total expected time it takes for $I$ to decrease from $\beta$ to $0$ is at most $e\big(\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor-1\big) (1+\beta)n+e\big(\beta-\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor+1\big) (1+\beta)n=e\beta(1+\beta)n=O(\beta^2n)$. Note that any individual ${\bf x}$ in $B_0$ satisfies $f_1({\bf x})=0$ and condition $\pi^{(2)}$; hence, ${\bf x}$ is a nearly feasible solution to the MinGC instance with $f_2({\bf x})\leq w_{\max}\left \lfloor\frac{(p+2\delta+1)opt}{\delta}\right \rfloor+\big(\ln\frac{\alpha^\prime_0}{opt-\delta}\big)opt\leq\big(\frac{w_{\max}}{\delta}(p+1+2\delta)+\ln\frac{\alpha^\prime_0}{opt-\delta}\big)opt$. The approximation ratio is proved. When $g(\cdot)$ is integer-valued, by Remarks \ref{rem1} and \ref{rem2}, we obtain the improved results claimed in the theorem. \end{proof} \subsection{Further Discussion}\label{sec0308-1} According to Theorem \ref{thm1} and Theorem \ref{thm2}, GSEMO returns a nearly feasible solution with an approximation ratio comparable to that of the greedy algorithm. In particular, when $g(\cdot)$ is integer-valued, there is no violation of feasibility and the approximation ratio of GSEMO coincides with that achieved by the greedy algorithm. In the following, we consider some special cases. For the {\em minimum submodular cover} (MinSubmC) problem, the utility function $g(\cdot)$ is submodular, which implies $p=0$.
The difficulty with a {\em real-valued} MinSubmC instance lies in the fact that the utility function $g(\cdot)$ might have too many distinct values to be manipulated efficiently by an evolutionary algorithm. Discretization is a natural way to address this problem. However, after discretization, submodularity is lost (see Remark \ref{rem2}). Nevertheless, this situation can be successfully dealt with using the condition formulated in Theorem \ref{thm1}, resulting in an approximation ratio of $\frac{w_{\max}}{\delta}\big(1+2\delta\big)+\ln\frac{ f_1({\bf 0})-2\delta\cdot opt}{opt-\delta}$. For an {\em integer-valued} MinSubmC instance, by the second half of Theorem \ref{thm2}, GSEMO obtains a feasible solution with an approximation ratio of at most $w_{\max}+\ln\frac{g(X)}{opt}$. In particular, for an unweighted instance in which $w\equiv 1$, by the submodularity of $g$, for any optimal solution $C^*$ we have $g(X)= g(C^*)\leq \sum_{v\in C^*}g(v)\leq \max_{v\in C^*}g(v)\cdot |C^*|\leq \max_{v\in X}g(v)\cdot opt$. Hence the approximation ratio is at most $1+\ln \max\limits_{x\in X}g(x)$. The expected running time is $O(g(X)^2n)$. This ratio matches the best one achieved by an approximation algorithm \cite{Ding-ZhuDu}. The {\em minimum connected dominating set} (MinCDS) problem is another special case of the MinGC problem. Given a connected graph $G=(V,E)$, a vertex set $C$ is a connected dominating set (CDS) if every vertex $v\in V\setminus C$ has at least one neighbor in $C$ and the subgraph of $G$ induced by $C$, denoted by $G[C]$, is connected. The goal of MinCDS is to find a CDS of minimum size. Taking $w\equiv 1$ and $g(C)=n-p(C)-q(C)$, where $p(C)$ is the number of connected components of $G[C]$ and $q(C)$ is the number of connected components of $G\langle C\rangle$, the spanning subgraph of $G$ induced by the edges incident with $C$, MinCDS becomes a MinGC problem with $p=1$ and $g(X)=n-2$, where $n$ is the number of vertices. To make use of Theorem \ref{thm2}, a crucial observation is: if we order $C^*$ as $\{v_1^*,\ldots,v_t^*\}$ such that for any $i=1,\ldots,t$, \begin{equation}\label{eq0309-1} \mbox{the induced subgraph $G[\{v_1^*,\ldots,v_i^*\}]$ is connected} \end{equation} (notice that such an ordering exists since $G[C^*]$ is connected), then the condition described in Theorem \ref{thm1} is satisfied with $p=1$. In fact, it can be proved that the function $-q(\cdot)$ is submodular, and thus $-\Delta_{v_{i}}q (C^*_{i-1}\cup C)\leq -\Delta_{v_{i}} q(C)$. However, $-p(C)$ is not submodular. In the worst case, $-\Delta_{v_{i}}p(C^*_{i-1}\cup C)$ can exceed $-\Delta_{v_{i}}p(C)$ by the number of connected components of $G[C^*_{i-1}]$. Under the ordering specified in \eqref{eq0309-1}, this gap is bounded by $1$. As a result, GSEMO yields a CDS with an approximation ratio of at most $2+\ln\frac{n-2-opt}{opt}$. Since $(n-2-opt)/opt\leq \delta_{\max}$, where $\delta_{\max}$ is the maximum degree of the graph, the approximation ratio is at most $2+\ln\delta_{\max}$, which coincides with the ratio obtained by the approximation algorithm in \cite{Zhouj1}. Furthermore, because $\beta=n-2$, the expected running time is $O(n^3)$. \section{Conclusion}\label{sec4} This paper proposes a technique called multi-phase bin-tracking analysis and uses it to analyze the performance bound of GSEMO for the MinGC problem. We show that for two important special cases of the MinGC problem, GSEMO yields approximation ratios matching those achieved by the greedy algorithm.
Our analysis provides a valuable framework for understanding how a greedy mechanism is embedded in an evolutionary algorithm. In fact, the key step of the bin-tracking analysis is to identify the situations in which there exists an evolutionary path that is no worse than a greedy path, so that the evolutionary process does not go astray. It is worth mentioning that although we restrict our attention to minimization problems, the proposed technique of multi-phase bin-tracking analysis can be easily modified to suit maximization problems as well; examples of such problems include the maximum matroid base problem \cite{Qian1,Reichel,Zhoubook} and the maximum submodular cover problem \cite{Friedrich1,Qian3,Qian4}. In fact, a one-phase bin-tracking analysis suffices for these problems. In the future, we would like to identify more combinatorial optimization problems that can be approximately solved by evolutionary algorithms. More importantly, we are interested in finding common structural properties shared among those problems that lead to performance guarantees. \section*{Acknowledgment} This research is supported in part by the National Natural Science Foundation of China (U20A2068, 11771013) and the Zhejiang Provincial Natural Science Foundation of China (LD19A010001).
\section{Introduction} In this note we are interested in cancellations in sums of multiplicative functions. It is well known that $$ M(x) := \sum_{n \leq x} \mu(n) = O(x^{1/2 + \varepsilon}) $$ is equivalent to the Riemann Hypothesis. On the other hand, it is also a classical result that $M(x) > x^{1/2 - \varepsilon}$ for a sequence of arbitrarily large $x$. It is in fact conjectured that $$ \overline{\underline{\lim_{x \rightarrow \infty}}} \ \frac{M(x)}{\sqrt{x} (\log\log\log x)^{\tfrac 54}} = \pm B $$ for some constant $B > 0$ (see \cite{ng}). Wintner \cite{wintner} initiated the study of what happens for a generic multiplicative function which is equally likely to be $1$ or $-1$ at the primes. Consider $f(p)$, a sequence of independent random variables taking values $\pm 1$ with probability $1/2$ each (i.e. Rademacher random variables), and define a multiplicative function supported on squarefree integers $n$ by $$ f(n) := \prod_{p | n} f(p) . $$ We shall refer to such a function as a {\em Rademacher random multiplicative function}. By the three series theorem, the Euler product $F(s) := \prod_{p} ( 1+ f(p) p^{-s} )$ converges almost surely for $\Re s > \tfrac 12$. From this Wintner deduced that $$ \sum_{n \leq x} f(n) \ll x^{1/2 + \varepsilon}\ \ \text{almost surely (a.s.)} $$ Since then the problem of the behavior of $\sum_{n \leq x} f(n)$ has attracted considerable attention \cite{chatsound, halasz, harpergp, harperlimits, hough, tenenbaum}. A closely related model is to let $f(p)$ be uniformly distributed on the complex unit circle (i.e. Steinhaus random variables), and then define $f(n) := \prod_{p^{\alpha} || n} f(p)^{\alpha}$ for all $n$. We shall refer to such a function as a {\em Steinhaus random multiplicative function}. Very recently, mean values of random multiplicative functions arose in connection with harmonic analysis. In his last paper, Helson \cite{helson} considered the question of generalizing Nehari's theorem to the infinite polydisk. He noticed that the generalization could be disproved if one could show that \begin{equation} \label{helson} \lim_{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T} \Big | \sum_{n \leq N} n^{-it} \Big | dt = o(\sqrt{N}). \end{equation} Using Bohr's identification, we have \begin{equation} \label{equivalent} \left( \mathbb{E} \Bigg | \sum_{n \leq N} f(n) \Bigg |^{2q} \right)^{1/2q} = \left( \lim_{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T} \Bigg | \sum_{n \leq N} n^{-it} \Bigg |^{2q} dt \right)^{1/2q} \end{equation} for all $2q > 0$, with $f(n)$ a Steinhaus random multiplicative function. Therefore (\ref{helson}) is equivalent to \begin{equation} \label{equivalent2} \mathbb{E} \Big | \sum_{n \leq N} f(n) \Big | = o(\sqrt{N}) , \end{equation} with $f(n)$ a Steinhaus random multiplicative function. Helson justified his belief in (\ref{helson}) by observing that $N(it) := \sum_{n \leq N} n^{-it}$ is the multiplicative analogue of the classical Dirichlet kernel $D(\theta) := \sum_{|n| \leq N} e^{2\pi i n \theta}$. Since $\| D \|_1 = o(\| D\|_2)$, Helson conjectured that the same phenomenon should occur for the ``multiplicative analogue'' $N(it)$. Another reason one might believe the large cancellation in (\ref{helson}) to be possible is that on the $\tfrac 12$-line one has $$ \lim_{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T} \Big | \sum_{n \leq N} \frac{1}{n^{1/2+it}} \Big | dt \ll \log^{1/4+o(1)} N , $$ as follows from the work of Bondarenko, Heap and Seip~\cite{bondhs}.
This bound is stronger than one would expect assuming only square-root cancellation, which would suggest a size more like $\log^{1/2} N$. Recently Ortega-Cerd\`{a} and Seip \cite{OrtegaSeip} proved that Nehari's theorem doesn't extend to the infinite polydisk. However the problem of establishing (\ref{helson}) remained. There are now reasons to believe that (\ref{helson}) is false. In a recent paper Bondarenko and Seip \cite{BondarenkoSeip} showed that the first absolute moment is at least $\sqrt{N} (\log N)^{-\delta + o(1)}$ for some small $\delta < 1$. Our primary goal in this note is to improve further on the lower bounds for (\ref{equivalent}). Our results also work for Rademacher random multiplicative functions. \begin{theorem} \label{thm:main} Let $f(n)$ be a Rademacher or Steinhaus random multiplicative function. Then, $$ \mathbb{E} \Bigg | \sum_{n \leq N} f(n) \Bigg | \gg \sqrt{N} (\log\log N)^{-3 + o(1)} $$ as $N \rightarrow \infty$. \end{theorem} The main input in the proof of Theorem \ref{thm:main} is the work~\cite{harpergp} of the first named author on lower bounds for sums of random multiplicative functions. Using H\"{o}lder's inequality, we can extend the result of Theorem \ref{thm:main} to $L^q$ norms. \begin{theorem} \label{thm:lpmain} Let $f(n)$ be a Rademacher or Steinhaus random multiplicative function and let $0 \leq q \leq 1$. Then, $$ \mathbb{E} \Bigg | \sum_{n \leq N} f(n) \Bigg |^{2q} \gg N^q (\log\log N)^{-6 + o(1)}. $$ \end{theorem} Theorem \ref{thm:main} and Theorem \ref{thm:lpmain} suggest it is rather unlikely that Helson's conjecture is true. See Conjecture \ref{theconjecture}, below. In addition to the above results, we establish an asymptotic estimate for the $2k$-th moment when $k$ is a positive integer. \begin{theorem} \label{thm:asymp} Let $k \in \mathbb{N}$. Suppose that $f(n)$ is a Steinhaus random multiplicative function. Then, as $N \rightarrow \infty$, $$ \mathbb{E} \Bigg | \sum_{n \leq N} f(n) \Bigg |^{2k} \sim \binom{2k-2}{k-1} k^{-(k-1)} \cdot c_k \gamma_k \cdot N^k \cdot (\log N)^{(k-1)^2}, $$ where $\gamma_k$ is the volume of the Birkhoff polytope $\mathcal{B}_{k}$, defined as the $(k-1)^2$ dimensional volume of the set of $(u_{i,j}) \in \mathbb{R}_{+}^{k^2}$ such that, \begin{align*} & \text{for each } i \leq k: \sum_{1 \leq j \leq k} u_{i,j} = 1 \\\text{ and } & \text{for each } j \leq k: \sum_{1 \leq i \leq k} u_{i,j} = 1 , \end{align*} and $$ c_k = \prod_{p} \Big (1 - \frac{1}{p} \Big)^{k^2} \cdot \Big (1 + \sum_{\alpha \geq 1} \frac{\binom{\alpha + k - 1}{k-1}^2}{p^{\alpha}} \Big ). $$ \end{theorem} Note that $\mathcal{B}_k$ is a $(k-1)^2$ dimensional object embedded in a $k^2$ dimensional space. The $(k-1)^2$ dimensional volume of $\mathcal{B}_k$ is equal (see e.g. section 2 of Chan and Robbins~\cite{Robbins}) to $k^{k-1}$ times the full-dimensional volume of the set of $(u_{i,j})_{i,j \leq k-1} \in \mathbb{R}^{(k-1)^2}$ such that, for all $i,j \leq k-1$, \begin{align*} \sum_{j \leq k-1} u_{i,j} \leq 1 \text{ and } \sum_{i \leq k-1} u_{i,j} \leq 1 \text{ and } \sum_{i,j \leq k-1} u_{i,j} \geq k-2. \end{align*} The latter is how the volume of $\mathcal{B}_{k}$ will actually arise in our calculations. It is worth pointing out that finding a closed formula for the volume of the Birkhoff polytope $\mathcal{B}_k$ is a notorious open question and would be of interest in enumerative combinatorics, statistics and computational geometry (see \cite{IgorPak}).
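As a concrete numerical companion to the reduced description above (an illustrative sketch, not part of the argument; the sample size and seed are arbitrary choices), a simple Monte Carlo over $[0,1]^{(k-1)^2}$ estimates the full-dimensional volume of the reduced set, and multiplying by $k^{k-1}$ should then recover $\text{Vol}(\mathcal{B}_k)$, which can be compared against the exact small-$k$ values recorded next.
\begin{verbatim}
import random

# Monte Carlo estimate of the full-dimensional volume of the set of
# (u_{i,j})_{i,j <= k-1} in [0,1]^{(k-1)^2} with all row sums <= 1,
# all column sums <= 1, and total sum >= k-2; multiplying by k^{k-1}
# should recover Vol(B_k) by the Chan--Robbins reduction quoted above.

def estimate_birkhoff_volume(k=3, samples=10**6, seed=0):
    rng = random.Random(seed)
    d = k - 1
    hits = 0
    for _ in range(samples):
        u = [[rng.random() for _ in range(d)] for _ in range(d)]
        if (all(sum(row) <= 1 for row in u)
                and all(sum(u[i][j] for i in range(d)) <= 1 for j in range(d))
                and sum(map(sum, u)) >= k - 2):
            hits += 1
    # the sample points are uniform in the unit cube, which has volume 1
    return k ** (k - 1) * hits / samples

print("Vol(B_3) estimate:", estimate_birkhoff_volume())  # exact: 9/8 = 1.125
\end{verbatim}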
There are evaluations of $\text{Vol}(\mathcal{B}_k)$ for small values of $k$, (see \cite{beckpixton} and \cite{Robbins}), $$ \text{Vol}(\mathcal{B}_3) = \frac{3 \cdot 3^2}{2^2!} \ , \ \text{Vol} (\mathcal{B}_4) = \frac{352 \cdot 4^3}{3^2!} \ , \ \text{Vol}(\mathcal{B}_5) = \frac{4718075 \cdot 5^4}{4^2!}, \ \ldots $$ and an asymptotic formula is known to hold \cite{McKay} $$ \text{Vol}(\mathcal{B}_k) \sim \sqrt{2\pi} e^{1/3} \cdot \frac{k^{-(k-1)^2} e^{k^2}}{(2\pi)^{k}} \ , \ k \rightarrow \infty. $$ In addition the asymptotic behavior of the Euler product $c_k$ is known (see \cite[Proposition]{ConreyGonek}), $$ \log c_k = -k^2 \log (2 e^{\gamma} \log k) + o(k^2) $$ where $\gamma$ is the Euler--Mascheroni constant. We note that Conrey and Gamburd~\cite{congamb} compute the even integer moments $$\lim_{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T} \Bigg | \sum_{n \leq N} n^{-1/2 - it} \Bigg |^{2k} dt$$ on the $\tfrac 12$-line, and unsurprisingly the answer is extremely similar to Theorem \ref{thm:asymp} (in particular an Euler product and a volume related to the Birkhoff polytope again appear). Conrey and Gamburd discuss the connection between their result and the moments of certain truncated characteristic polynomials of random matrices. In general, it seems reasonable to say that the arithmetic factor $c_k$ reflects local counting modulo different primes in the moment computation, whereas the geometric factor $\gamma_k$ reflects global counting of tuples $n_1 , ... , n_k$ and $m_1 , ... , m_k$ subject to the truncation $n_i , m_i \leq N$. We deduce Theorem \ref{thm:asymp} from a general result of La Bret\`{e}che \cite{breteche} on mean values of multiplicative functions in several variables. Theorem 3 has also been obtained independently by Granville and Soundararajan (unpublished), and also very recently (and independently) by Heap and Lindqvist \cite{Heap}. Additionally, Theorem 3 sheds light on the conjectural behavior of moments of the theta functions, $$ \theta(x, \chi) = \sum_{n \geq 1} \chi(n) e^{-\pi n^2 x / p} $$ with $p \geq 3$ a prime, and $\chi$ an even Dirichlet character modulo $p$. The rapidly decaying factor $ e^{-\pi n^2 x / p}$ essentially restricts the sum to those $n$ less than about $\sqrt{p}$ (if $x=1$, say), and the average behavior of $\chi(n)$ with $n \ll p^{1/2}$ is similar to that of a Steinhaus random multiplicative function. Therefore Theorem 3 leads to the conjecture that $$ \frac{1}{p} \sum_{\substack{\chi \mod p \\ \chi \text{ even}}} |\theta(1, \chi)|^{2k} \sim C_k p^{k/2} (\log p)^{(k-1)^2} \;\;\;\;\; \text{as} \; p \rightarrow \infty . $$ In unpublished recent work the same conjecture was stated by Marc Munsch on the basis of his lower bound for moments of $\theta(1, \chi)$. Louboutin and Munsch~\cite{MunschLouboutin} prove the conjecture for $k = 1$ and $k = 2$. Combining Theorem \ref{thm:lpmain} and Theorem \ref{thm:asymp} suggests the following ``counter-conjecture'' to Helson's claim (\ref{helson}). \begin{conjecture1} \label{theconjecture} If $f(n)$ is a Steinhaus random multiplicative function, then we have as $N \rightarrow \infty$, $$ \mathbb{E} \Bigg | \sum_{n \leq N} f(n) \Bigg |^{2q} \sim \begin{cases} C(q) N^q, & \text{ for } 0 \leq q \leq 1 \\ C(q) N^q (\log N)^{(q-1)^2}, & \text{ for } 1 \leq q . \end{cases} $$ \end{conjecture1} Conjecture \ref{theconjecture} suggests a possible line of attack on the problem of showing that for a positive proportion of even characters $\chi$ modulo $p$, we have $\theta(1, \chi) \neq 0$. 
This would be based on comparing the first and second \textit{absolute} moments, i.e $$ \sum_{\substack{\chi \mod p \\ \chi \text{ even}}} |\theta(1, \chi)| \ \ \text{and} \ \sum_{\substack{\chi \mod p \\ \chi \text{ even}}} |\theta(1,\chi)|^2. $$ We emphasise that we do not have a lot of evidence towards Conjecture 1 when $q \notin \mathbb{N}$, and perhaps especially when $0 < q < 1$, and it is conceivable the behaviour could be more complicated. However this is the simplest possible conjecture respecting the information that we now have. In addition for $q > 1$ it perhaps seems unlikely that the distribution of the tails of $\sum_{n < N} f(n)$ (in a large deviation regime) fluctuates so significantly that it would affect the exponent $(q-1)^2$ of the logarithm when $q$ goes from an integer to a fractional exponent. We also note that if we could obtain the order of magnitude for the $2q$-th moment suggested by the Conjecture \ref{theconjecture} for $q=\tfrac 12$, then since we know it trivially for $q=1$ a simple argument using H\"older's inequality (as in the proof of Theorem \ref{thm:lpmain}, below) would establish the order of magnitude suggested by the Conjecture for all $0 \leq q \leq 1$. Finally, following a question from the referee, we noticed that we can extend Theorem 3 to the Rademacher case. We omit the simple cases of $k = 1, 2$ in the theorem below, since both are different from the case $k \geq 3$. \begin{theorem} \label{thm:rademachermoments} Let $f(n)$ be a Rademacher random multiplicative function. Then, for $k \geq 3$ an integer, as $N \rightarrow \infty$, $$ \mathbb{E} \Big ( \sum_{n \leq N} f(n) \Big )^{k} \sim C_k \cdot N^{k/2} (\log N)^{k(k-3)/2} $$ with $C_k > 0$ constant. \end{theorem} Similarly as in Theorem \ref{thm:asymp} the constant $C_k$ splits into an arithmetic and geometric factor. The interested reader should have no trouble working out the details. Theorem \ref{thm:rademachermoments} has also been obtained independently by Heap and Lindqvist \cite{Heap}. At first glance it may seem strange that all the moments here (including the odd ones) are non-trivially large, but that is because in the Rademacher case there is no distinction between a term and its complex conjugate (and similarly if one calculated an expression like $\mathbb{E} \left| \sum_{n \leq N} f(n) \right|^{2k} \left(\sum_{n \leq N} f(n) \right)$ in the Steinhaus case, this would be non-trivially large provided $k \geq 1$). Note also that the moments are rather larger in the Rademacher case than the Steinhaus case, again because everything is real valued and so the terms exhibit less cancellation. \textbf{Acknowledgments} We are grateful to the referee for a careful reading of the paper and for asking several questions which led to Theorem 4 and stronger results in Theorem 3. \section{Lower bounds for the first moment} In this section we shall first prove the following result. \begin{proph1} Let $f(n)$ be a Rademacher random multiplicative function. There exist arbitrarily large values of $x$ for which $$ \EE\left| \sum_{n \leq x} f(n) \right| \geq \frac{\sqrt{x}}{(\log\log x)^{3+o(1)}} . $$ The same is true if $f(n)$ is a Steinhaus random multiplicative function. \end{proph1} The above proposition is actually a fairly straightforward deduction from the work of Harper~\cite{harpergp}. However, it is a bit unsatisfactory because it only gives a lower bound along some special sequence of $x$ values. 
With more work we can correct this defect, as in the following theorem announced in the Introduction: \begin{thmh1} Let $f(n)$ be a Rademacher random multiplicative function. Then for {\em all} large $x$ we have $$ \EE\left| \sum_{n \leq x} f(n) \right| \geq \frac{\sqrt{x}}{(\log\log x)^{3+o(1)}} . $$ The same is true if $f(n)$ is a Steinhaus random multiplicative function. \end{thmh1} The proof of Proposition 1 has two ingredients. The first is the observation, essentially due to Hal\'{a}sz~\cite{halasz}, that one can almost surely lower bound an average of $\left| \sum_{n \leq x} f(n) \right|$ in terms of the behaviour of $f(n)$ on primes only: more specifically, in the Rademacher case we almost surely have that, for any $y \geq 2$, $$ \int_{1}^{\infty} \frac{\left| \sum_{n \leq z} f(n) \right|}{z^{3/2 + 1/\log y}} dz \gg \sup_{t \geq 1} \ \exp \Big ( \sum_{p}\frac{f(p) \cos(t\log p)}{p^{1/2+1/\log y}} - \log t - \log\log(t+2)/2 \Big ) . $$ Here the implicit constant in the $\gg$ notation is absolute. The reader should note that the presence of the supremum over $t$ will be very significant here, since at any fixed $t$ the expected size of the right hand side would be too small to produce a useful result (about $\log^{1/4}y$, rather than about $\log y$ which is what we need). The second ingredient is a strong lower bound for the expected size of the right hand side, which we deduce from the work of Harper~\cite{harpergp}. We quote the relevant statements from Harper's work as a lemma now, for ease of reference later. \begin{lemh1}\textup{(See $\S 6.3$ of \cite{harpergp}.)} If $(f(p))_{p \; \text{prime}}$ are independent Rademacher random variables, then with probability $1-o(1)$ as $x \rightarrow \infty$ we have \begin{align*} \sup_{1 \leq t \leq 2(\log\log x)^{2}} \sum_{p} \frac{f(p) \cos(t\log p)}{p^{1/2+1/\log x}} \geq \log\log x - \log & \log\log x \\ - & O((\log\log\log x)^{3/4}) . \end{align*} If $(f(p))_{p \; \text{prime}}$ are independent Steinhaus random variables, then with probability $1-o(1)$ as $x \rightarrow \infty$ we have \begin{align*} \sup_{1 \leq t \leq 2(\log\log x)^{2}} \sum_{p} \Big ( \frac{\Re(f(p)p^{-it})}{p^{1/2 + 1/\log x}} + \frac{1}{2} \frac{\Re(f(p)^{2}p^{-2it})}{p^{1 + 2/\log x}} \Big ) \geq \log & \log x - \log\log\log x \\- & O((\log\log\log x)^{3/4}) . \end{align*} \end{lemh1} The first statement here is proved in the last paragraph in $\S 6.3$ of \cite{harpergp} (noting that the quantity $y$ there is $\log^{8}x$). The second statement can be proved by straightforward adaptation of that argument, the point being that the expectation and covariance structure of these random sums in the Steinhaus case are the same, up to negligible error terms, as in the Rademacher case, so the same arguments can be applied. (See the preprint \cite{harpertypicalmax} for an explicit treatment of some very similar Steinhaus random sums.) The argument in \cite{harpergp} is quite involved, but the basic aim is to show that, for the purpose of taking the supremum, the sums $\sum_{p} \frac{f(p) \cos(t\log p)}{p^{1/2+1/\log x}}$ behave somewhat independently at values of $t$ that are separated by $\gg 1/\log x$, so one has something like the supremum over $\log x$ independent samples.
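To get a feel for the random sums appearing in Lemma 1, one can sample Rademacher signs on the primes and take the supremum over a grid of $t$. The following is a purely illustrative sketch: the sum over primes is truncated, the grid is coarse, and the asymptotic regime of the lemma is far beyond any feasible computation; the values of $x$, the cutoff and the seed are arbitrary choices.
\begin{verbatim}
import math
import random

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(range(p*p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

x = 10**6                # plays the role of x in Lemma 1 (arbitrary)
cutoff = 10**4           # truncation of the sum over primes (arbitrary)
rng = random.Random(1)
ps = primes_up_to(cutoff)
logs = [math.log(p) for p in ps]
wts = [p ** (-(0.5 + 1 / math.log(x))) for p in ps]
f = [rng.choice((-1, 1)) for _ in ps]

# sup over a grid of 1 <= t <= 2(loglog x)^2 of
#   sum_p f(p) cos(t log p) / p^{1/2 + 1/log x}
t_max = 2 * math.log(math.log(x)) ** 2
grid = [1 + i * (t_max - 1) / 1000 for i in range(1001)]
sup = max(sum(fp * w * math.cos(t * lp)
              for fp, w, lp in zip(f, wts, logs)) for t in grid)
print("sup over t grid:", round(sup, 3),
      " loglog x:", round(math.log(math.log(x)), 3))
\end{verbatim}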
To prove Theorem 1 we introduce a third ingredient, namely we show that $\EE\left| \sum_{n \leq x} f(n) \right|$ may itself be lower bounded in terms of an integral average of $\EE\left| \sum_{n \leq z} f(n) \right|$, as follows: \begin{proph2} Let $f(n)$ be a Rademacher random multiplicative function. For any large $x$ we have $$ \EE\left|\sum_{n \leq x} f(n) \right| \gg \frac{\sqrt{x}}{\log x} \int_{1}^{\sqrt{x}} \left(\frac{\EE|\sum_{n \leq z} f(n)|}{\sqrt{z}} \right) \frac{dz}{z} . $$ The same is true if $f(n)$ is a Steinhaus random multiplicative function. \end{proph2} This uses the multiplicativity of $f(n)$ in an essential way (as does the proof of Proposition 1, of course). Theorem 1 then follows quickly by combining Proposition 2 with the proof of Proposition 1. \vspace{12pt} As the reader will see, the proof of Proposition 2 is based on a ``physical space'' decomposition of the sum $\sum_{n \leq x} f(n)$, which is somewhat related to the martingale arguments of Harper~\cite{harperlimits}. This is unlike the other arguments above, which work by establishing a connection between the integral average of $\sum_{n \leq x} f(n)$ and its Dirichlet series $\sum_{n} f(n)/n^{s}$ (on the ``Fourier space'' side). \subsection{Proof of Proposition 1} The proof of Proposition 1 is slightly cleaner in the Rademacher case, because then $f(p)^{2} \equiv 1$ for all primes $p$. So we shall give the proof in that case first, and afterwards explain the small changes that arise in the Steinhaus case. We know from work of Wintner~\cite{wintner} that almost surely $\sum_{n \leq x} f(n) = O_{\epsilon}(x^{1/2+\epsilon})$. Consequently, by partial summation the Dirichlet series $F(s) := \sum_{n} f(n)/n^{s}$ is almost surely convergent in the half plane $\Re(s) > 1/2$, and then by term by term integration it satisfies $$ F(s) = s \int_{1}^{\infty} \frac{\sum_{n \leq z} f(n)}{z^{s+1}} dz , \;\;\;\;\; \Re(s) > 1/2 . $$ In particular, $F(s)$ is almost surely a holomorphic function on the half plane $\Re(s) > 1/2$. On the other hand, since $f(n)$ is multiplicative we have for any $\Re(s) > 1$ that, in the Rademacher case, \begin{align*} F(s) = \prod_{p} \left(1+ \frac{f(p)}{p^{s}} \right) & = \exp \Big ( \sum_{p} \log\left(1+ \frac{f(p)}{p^{s}}\right) \Big ) \\ = \exp & \Big ( \sum_{p} \frac{f(p)}{p^{s}} - \frac{1}{2} \sum_{p} \frac{f(p)^{2}}{p^{2s}} + \sum_{k \geq 3} \frac{(-1)^{k+1}}{k} \sum_{p} \frac{f(p)^{k}}{p^{ks}} \Big ) . \end{align*} Therefore in the Rademacher case we have $$ s \int_{1}^{\infty} \frac{\sum_{n \leq z} f(n)}{z^{s+1}} dz = \exp \Big ( \sum_{p} \frac{f(p)}{p^{s}} - \frac{1}{2} \sum_{p} \frac{f(p)^{2}}{p^{2s}} + \sum_{k \geq 3} \frac{(-1)^{k+1}}{k} \sum_{p} \frac{f(p)^{k}}{p^{ks}} \Big ) $$ at least when $\Re(s) > 1$, since both sides are equal to $F(s)$. But all the sums involving $p^{2s}$ and $p^{ks}$ are clearly absolutely convergent whenever $\Re(s) > 1/2$, and therefore define holomorphic functions there. In addition, for any fixed $s$ with $\Re(s) > 1/2$ the series $\sum_{p} \frac{f(p)}{p^{s}}$ is a sum of independent random variables, and Kolmogorov's Three Series Theorem implies it converges almost surely. 
Since a Dirichlet series is a holomorphic function strictly to the right of its abscissa of convergence, we find that almost surely $\sum_{p} \frac{f(p)}{p^{s}}$ is a holomorphic function on the half plane $\Re(s) > 1/2$, and so almost surely we have, for all $\Re s > \tfrac 12$, $$ s \int_{1}^{\infty} \frac{\sum_{n \leq z} f(n)}{z^{s+1}} dz = \exp \Big ( \sum_{p} \frac{f(p)}{p^{s}} - \frac{1}{2} \sum_{p} \frac{f(p)^{2}}{p^{2s}} + \sum_{k \geq 3} \frac{(-1)^{k+1}}{k} \sum_{p} \frac{f(p)^{k}}{p^{ks}} \Big ) . $$ Next, if we write $s = \sigma + it$ and take absolute values on both sides then we find that, almost surely, \begin{align*} |s| \int_{1}^{\infty} \frac{\left|\sum_{n \leq z} f(n)\right|}{z^{\sigma+1}} dz & \geq \exp \Bigg ( \Re\Big(\sum_{p} \frac{f(p)}{p^{s}} - \frac{1}{2} \sum_{p} \frac{f(p)^{2}}{p^{2s}} + \sum_{k \geq 3} \frac{(-1)^{k+1}}{k} \sum_{p} \frac{f(p)^{k}}{p^{ks}} \Big ) \Bigg ) \\ = \exp \Big ( \sum_{p} & \frac{\Re(f(p)p^{-it})}{p^{\sigma}} - \frac{1}{2} \sum_{p} \frac{\Re(f(p)^{2}p^{-2it})}{p^{2\sigma}} + O(1) \Big ), \;\;\;\;\; \forall \sigma > 1/2 . \end{align*} If we take $\sigma = 1/2 + 1/\log y$ for a parameter $y \geq 2$, and we note that then $|s| \asymp |t|$ provided $t \geq 1$ (say), we have almost surely that for all $y \geq 2$, $$ \int_{1}^{\infty} \frac{\left|\sum_{n \leq z} f(n)\right|}{z^{3/2 + 1/\log y}} dz \gg \sup_{t \geq 1} \exp \Big ( \sum_{p} \frac{\Re(f(p)p^{-it})}{p^{1/2 + 1/\log y}} - \frac{1}{2} \sum_{p} \frac{\Re(f(p)^{2}p^{-2it})}{p^{1 + 2/\log y}} - \log t \Big ) . $$ In the Rademacher case the first sum over $p$ is $\sum_{p} \frac{f(p)\cos(t\log p)}{p^{1/2 + 1/\log y}}$, and (since $f(p)^{2}=1$) the second sum over $p$ is $\Re \sum_{p} \frac{1}{p^{1 + 2/\log y + 2it}} = \Re \log\zeta(1+2/\log y + 2it) + O(1)$, where $\zeta$ denotes the Riemann zeta function. Standard estimates (see e.g. Theorem 6.7 of Montgomery and Vaughan~\cite{mv}) imply that $|\log\zeta(1+2/\log y + 2it)| \leq \log\log(t+2) + O(1)$ for $t \geq 1$, so we have almost surely that for all $y \geq 2$, \begin{equation}\label{halineq} \int_{1}^{\infty} \frac{\left|\sum_{n \leq z} f(n)\right|}{z^{3/2 + 1/\log y}} dz \gg \sup_{t \geq 1} \exp \Big ( \sum_{p} \frac{f(p)\cos(t\log p)}{p^{1/2 + 1/\log y}} - \log t - \log\log(t+2)/2 \Big ). \end{equation} (The above argument and inequality \eqref{halineq} are essentially due to Hal\'{a}sz~\cite{halasz}, and are also related to the arguments of Wintner~\cite{wintner}. The only small difference is that Hal\'{a}sz restricted to $1 \leq t \leq 2$. See Appendix A of Harper~\cite{harpergp} for a presentation similar to the above.) \vspace{12pt} Now to prove Proposition 1, note that for any large parameters $x$ and $x_0 < x_1$ we have $$ \sup_{x_0 < z < x_1} \frac{\EE|\sum_{n \leq z} f(n)|}{\sqrt{z}} \geq \frac{1}{\log x} \int_{x_0}^{x_1} \frac{\EE\left|\sum_{n \leq z} f(n)\right|}{z^{3/2 + 1/\log x}} dz , $$ since $\int_{x_0}^{x_1} \frac{dz}{z^{1+1/\log x}} \leq \int_{1}^{\infty} \frac{dz}{z^{1+1/\log x}} = \log x$. Then by Cauchy--Schwarz we always have $\EE|\sum_{n \leq z} f(n)| \leq \sqrt{z}$, so \begin{eqnarray} \int_{x_0}^{x_1} \frac{\EE\left|\sum_{n \leq z} f(n)\right|}{z^{3/2 + 1/\log x}} dz & \geq & \int_{1}^{\infty} \frac{\EE\left|\sum_{n \leq z} f(n)\right|}{z^{3/2 + 1/\log x}} dz - \int_{1}^{x_0} \frac{dz}{z^{1+1/\log x}} - \int_{x_1}^{\infty} \frac{dz}{z^{1+1/\log x}} \nonumber \\ & \geq & \int_{1}^{\infty} \frac{\EE\left|\sum_{n \leq z} f(n)\right|}{z^{3/2 + 1/\log x}} dz - \log x_0 - \frac{\log x}{x_1^{1/\log x}} .
\nonumber \end{eqnarray} In particular, if we choose $x_0 = e^{\sqrt{\log x}}$ and $x_1 = e^{(\log x) \log\log x}$, say, then we have \begin{equation}\label{divineq} \sup_{x_0 < z < x_1} \frac{\EE|\sum_{n \leq z} f(n)|}{\sqrt{z}} \geq \frac{1}{\log x} \int_{1}^{\infty} \frac{\EE\left|\sum_{n \leq z} f(n)\right|}{z^{3/2 + 1/\log x}} dz - \frac{2}{\sqrt{\log x}} . \end{equation} Finally, in the Rademacher case Lemma 1 implies that, with probability $1-o(1)$ as $x \rightarrow \infty$, $$ \sup_{1 \leq t \leq 2(\log\log x)^{2}} \sum_{p} \frac{f(p) \cos(t\log p)}{p^{1/2+1/\log x}} \geq \log\log x - \log\log\log x - O((\log\log\log x)^{3/4}) . $$ This implies that with probability $1-o(1)$ one has $$ \sup_{t \geq 1} \exp \Big ( \sum_{p} \frac{f(p)\cos(t\log p)}{p^{1/2 + 1/\log x}} - \log t - \log\log(t+2)/2 \Big ) \geq \frac{\log x}{(\log\log x)^{3+o(1)}} , $$ and then by the Hal\'{a}sz type lower bound inequality \eqref{halineq} we deduce \begin{equation}\label{intexbound} \int_{1}^{\infty} \frac{\EE\left|\sum_{n \leq z} f(n)\right|}{z^{3/2 + 1/\log x}} dz \geq \frac{\log x}{(\log\log x)^{3+o(1)}} . \end{equation} Proposition 1 follows in the Rademacher case by combining this with \eqref{divineq}. \vspace{12pt} In the Steinhaus case the initial argument of Wintner~\cite{wintner} still works, so the first change that is needed in the preceding argument comes in the expression for the Euler product $F(s)$, which for $\Re(s) > 1$ is now \begin{eqnarray} F(s) = \prod_{p} \left(1+ \sum_{j=1}^{\infty} \frac{f(p)^{j}}{p^{js}}\right) & = & \exp \Big ( - \sum_{p} \log\left(1- \frac{f(p)}{p^{s}}\right) \Big ) \nonumber \\ & = & \exp \Big ( \sum_{p} \frac{f(p)}{p^{s}} + \frac{1}{2} \sum_{p} \frac{f(p)^{2}}{p^{2s}} + \sum_{k \geq 3} \frac{1}{k} \sum_{p} \frac{f(p)^{k}}{p^{ks}} \Big ) . \nonumber \end{eqnarray} Notice this is the same as we had in the Rademacher case, except now there are no alternating minus signs in the final exponential. The argument using the Three Series Theorem, etc. then continues as in the Rademacher case to yield that, almost surely, $$ s \int_{1}^{\infty} \frac{\sum_{n \leq z} f(n)}{z^{s+1}} dz = \exp \Big ( \sum_{p} \frac{f(p)}{p^{s}} + \frac{1}{2} \sum_{p} \frac{f(p)^{2}}{p^{2s}} + \sum_{k \geq 3} \frac{1}{k} \sum_{p} \frac{f(p)^{k}}{p^{ks}} \Big ) \;\;\;\;\; \forall \; \Re(s) > 1/2 . $$ Putting $s=1/2 + 1/\log y + it$ and taking absolute values on both sides, we deduce that almost surely, \begin{equation}\label{halineq2} \int_{1}^{\infty} \frac{\left|\sum_{n \leq z} f(n)\right|}{z^{3/2 + 1/\log y}} dz \gg \sup_{t \geq 1} \ \exp \Big ( \sum_{p} ( \frac{\Re(f(p)p^{-it})}{p^{1/2 + 1/\log y}} + \frac{1}{2} \frac{\Re(f(p)^{2}p^{-2it})}{p^{1 + 2/\log y}}) - \log t \Big ) \ , \forall y \geq 2 . \end{equation} Since we don't now have $f(p)^{2} \equiv 1$, we cannot remove the contribution of the prime squares using estimates for the zeta function. However, by the Steinhaus case of Lemma 1 we still have that, with probability $1-o(1)$ as $x \rightarrow \infty$, \begin{align*} \sup_{1 \leq t \leq 2(\log\log x)^{2}} \sum_{p} \Big ( \frac{\Re(f(p)p^{-it})}{p^{1/2 + 1/\log x}} + \frac{1}{2} \frac{\Re(f(p)^{2}p^{-2it})}{p^{1 + 2/\log x}} \Big ) \geq \log & \log x - \log\log\log x \\ - & O((\log\log\log x)^{3/4}) , \end{align*} and therefore with probability $1-o(1)$ we have $$ \sup_{t \geq 1} \ \exp \Big ( \sum_{p} \Big ( \frac{\Re(f(p)p^{-it})}{p^{1/2 + 1/\log y}} + \frac{1}{2} \frac{\Re(f(p)^{2}p^{-2it})}{p^{1 + 2/\log y}} \Big ) - \log t \Big ) \geq \frac{\log x}{(\log\log x)^{3+o(1)}} . 
$$ Combining this estimate with \eqref{halineq2} and \eqref{divineq} then proves Proposition 1 in the Steinhaus case. \subsection{Proofs of Theorem 1 and Proposition 2} \begin{proof}[Proof of Theorem 1, assuming Proposition 2] In view of Proposition 2, it will suffice to prove that for all large $x$ we have $$ \int_{1}^{\sqrt{x}} \left(\frac{\EE|\sum_{n \leq z} f(n)|}{\sqrt{z}} \right) \frac{dz}{z} \geq \frac{\log x}{(\log\log x)^{3+o(1)}} . $$ However, for any large parameter $y$ we have \begin{align*} \int_{1}^{\sqrt{x}} \left(\frac{\EE|\sum_{n \leq z} f(n)|}{\sqrt{z}} \right) \frac{dz}{z} & \geq \int_{1}^{\sqrt{x}} \frac{\EE|\sum_{n \leq z} f(n)|}{z^{3/2+1/\log y}} dz \\ & \geq \frac{\log y}{(\log\log y)^{3+o(1)}} - \int_{\sqrt{x}}^{\infty} \frac{\EE|\sum_{n \leq z} f(n)|}{z^{3/2+1/\log y}} dz , \end{align*} in view of the lower bound $\int_{1}^{\infty} \frac{\EE|\sum_{n \leq z} f(n)|}{z^{3/2+1/\log y}} dz \geq \frac{\log y}{(\log\log y)^{3+o(1)}}$ obtained in \eqref{intexbound}. By Cauchy--Schwarz we always have $\EE|\sum_{n \leq z} f(n)| \leq \sqrt{z}$, so the subtracted term here is at most $$ \int_{\sqrt{x}}^{\infty} \frac{dz}{z^{1+1/\log y}} = \frac{\log y}{(\sqrt{x})^{1/\log y}} . $$ If we choose $\log y$ somewhat smaller than $\log x$, say $\log y = (\log x)/(100\log\log\log x)$, we deduce that $$ \int_{1}^{\sqrt{x}} \left(\frac{\EE|\sum_{n \leq z} f(n)|}{\sqrt{z}} \right) \frac{dz}{z} \geq \frac{\log x}{(\log\log x)^{3+o(1)}} - \frac{\log x}{(\log\log x)^{50}} = \frac{\log x}{(\log\log x)^{3+o(1)}} , $$ as required. \end{proof} \begin{proof}[Proof of Proposition 2] The first part of the proof again differs slightly depending on whether we are in the Rademacher or the Steinhaus case. We will first work in the Rademacher case and then explain the small changes needed in the other situation. Let $A_t := \sum_{n \leq t} f(n)$. If we let $P(n)$ denote the largest prime factor of $n$, we have $$ \sum_{n \leq x} f(n) = \sum_{p \leq x} \sum_{n \leq x, P(n)=p} f(n) = \sum_{p \leq x} f(p) \sum_{m \leq x/p, P(m) < p} f(m) , $$ since $f$ is multiplicative. Here the inequality $P(m) < p$ in the final sum is strict because $f$ is supported on squarefree numbers. Notice here that if $p > \sqrt{x}$ then $x/p < \sqrt{x} < p$, so we automatically have $P(m) < p$ in the inner sums over $m$. Thus we can rewrite things slightly as \begin{align*} \sum_{n \leq x} f(n) & = \sum_{\sqrt{x} < p \leq x} f(p) \sum_{m \leq x/p} f(m) + \sum_{p \leq \sqrt{x}} f(p) \sum_{m \leq x/p, P(m) < p} f(m) \\ & =: \sum_{\sqrt{x} < p \leq x} f(p) A_{x/p} + B_x , \end{align*} say. Notice also that the random variables $A_{x/p}$ and $B_x$ are independent of the $f(p)$ for $\sqrt{x} < p \leq x$. We shall introduce a penultimate piece of notation, by defining the random variable $$ C_{x} := \sum_{\sqrt{x} < p \leq x} f(p) A_{x/p} . $$ Finally, let $\epsilon$ be a Rademacher random variable that is independent of everything else. Now since the $(f(p))_{\sqrt{x} < p \leq x}$ are symmetric random variables independent of $B_x$ and the $A_{x/p}$, it follows that $$ \sum_{n \leq x} f(n) = \sum_{\sqrt{x} < p \leq x} f(p) A_{x/p} + B_x \stackrel{d}{=} \epsilon \sum_{\sqrt{x} < p \leq x} f(p) A_{x/p} + B_x , $$ where $\stackrel{d}{=}$ denotes equality in distribution. 
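Before continuing with the proof, here is a quick sanity check of this decomposition (an illustrative sketch, not part of the argument; the cutoff $x = 1000$ and the seed are arbitrary choices). We sample a Rademacher $f$ and verify the identity $\sum_{n \le x} f(n) = C_x + B_x$ exactly; here $B_x$ is computed as the full sum of $f(n)$ over $n \le x$ with $P(n) \le \sqrt{x}$, which includes the $n = 1$ term.
\begin{verbatim}
import math
import random

x = 1000
rng = random.Random(0)

def primes_up_to(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p**0.5) + 1))]

ps = primes_up_to(x)
fp = {p: rng.choice((-1, 1)) for p in ps}

def f(n):
    """f multiplicative, supported on squarefree n, with f(p) = +/-1."""
    val = 1
    for p in ps:
        if p * p > n:
            break
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n is not squarefree
            val *= fp[p]
    if n > 1:
        val *= fp[n]  # the remaining factor is prime
    return val

def A(t):
    return sum(f(n) for n in range(1, t + 1))

def largest_prime_factor(n):
    lp, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            lp, n = d, n // d
        d += 1
    return max(lp, n)

sqx = math.isqrt(x)
C = sum(fp[p] * A(x // p) for p in ps if p > sqx)  # C_x
B = sum(f(n) for n in range(1, x + 1)
        if largest_prime_factor(n) <= sqx)         # B_x (incl. n = 1)
assert A(x) == C + B
print("A_x =", A(x), "= C_x + B_x =", C, "+", B)
\end{verbatim}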
Then if we {\em condition on the values of $B_x , C_x$}, we find the conditional expectation $$ \EE\left( \left|\epsilon \sum_{\sqrt{x} < p \leq x} f(p) A_{x/p} + B_x \right| \Bigg| B_x , C_x \right) = (1/2) |C_x + B_x| + (1/2) |-C_x + B_x| \geq |C_x| , $$ by the triangle inequality. Now if we average over values of $B_x, C_x$, and use the Tower Property of conditional expectations (the fact that the expectation of a conditional expectation is the unconditional expectation), we obtain $$ \EE\left|\sum_{n \leq x} f(n) \right| = \EE\left|\epsilon \sum_{\sqrt{x} < p \leq x} f(p) A_{x/p} + B_x \right| \geq \EE|C_x| . $$ On recalling the definitions of $C_x$ and $A_{x/p}$, we see we have proved the following: \begin{lemh2} For all large $x$ we have $$ \EE\left|\sum_{n \leq x} f(n)\right| \geq \EE\left| \sum_{\sqrt{x} < p \leq x} f(p) \sum_{m \leq x/p} f(m) \right| . $$ \end{lemh2} (In the Steinhaus case one has a weak inequality $P(m) \leq p$ in the definition of $B_x$, since $f$ is totally multiplicative, but this makes no difference to the argument just given. Instead of choosing $\epsilon$ to be a Rademacher random variable one can choose $\epsilon$ to be uniformly distributed on the unit circle, and then one obtains exactly the same conclusion in Lemma 2.) \vspace{12pt} Since the $f(p)$ are Rademacher or Steinhaus random variables independent of the ``coefficients'' $\sum_{m \leq x/p} f(m) = A_{x/p}$, an application of Khintchine's inequality (see e.g. Gut's textbook~\cite{gut}) yields that $$ \EE\left| \sum_{\sqrt{x} < p \leq x} f(p) \sum_{m \leq x/p} f(m) \right| \gg \EE\sqrt{\sum_{\sqrt{x} < p \leq x} \left|\sum_{m \leq x/p} f(m)\right|^{2}} . $$ It would be nice if we could find a way to exploit this (sharp) bound with the squares still in place on the inside, but to prove Proposition 2 we shall trade them away in order to remove the intractable squareroot. Thus by the Cauchy--Schwarz inequality and the fact that $\sum_{\sqrt{x} < p \leq x} 1/p = \log 2 + o(1)$ we have \begin{align*} \sum_{\sqrt{x} < p \leq x} \sqrt{\frac{1}{p}} \left|\sum_{m \leq x/p} f(m)\right| & \leq \sqrt{\sum_{\sqrt{x} < p \leq x} \frac{1}{p}} \sqrt{\sum_{\sqrt{x} < p \leq x} \left|\sum_{m \leq x/p} f(m)\right|^{2}} \\ & \ll \sqrt{\sum_{\sqrt{x} < p \leq x} \left|\sum_{m \leq x/p} f(m)\right|^{2}} . \end{align*} Combining this with the above, we deduce: \begin{lemh3} For all large $x$ we have $$ \EE\left|\sum_{n \leq x} f(n)\right| \gg \sum_{\sqrt{x} < p \leq x} \frac{1}{\sqrt{p}} \EE\left| \sum_{m \leq x/p} f(m) \right| \geq \frac{1}{\log x} \sum_{\sqrt{x} < p \leq x} \frac{\log p}{\sqrt{p}} \cdot \EE\left| \sum_{m \leq x/p} f(m) \right| . $$ \end{lemh3} \vspace{12pt} We have now almost finished the proof of Proposition 2. If we have two primes $z \leq p \leq p' \leq z+ z/\log^{1000}x$ for some $\sqrt{x} < z \leq x$ then \begin{align*} \left| \EE\left| \sum_{m \leq x/p} f(m) \right| - \EE\left| \sum_{m \leq x/p'} f(m) \right| \right| & \leq \EE\left| \sum_{x/p' < m \leq x/p} f(m) \right| \\ & \ll \sqrt{x(\frac{1}{p} - \frac{1}{p'}) + 1} \ll \sqrt{\frac{x}{p \log^{1000}x}} + 1 , \end{align*} by the Cauchy--Schwarz inequality and orthogonality of the $f(m)$. 
And we see $$ \frac{1}{\log x} \sum_{\sqrt{x} < p \leq x} \frac{\log p}{\sqrt{p}} \left( \sqrt{\frac{x}{p \log^{1000}x}} + 1 \right) \ll \frac{\sqrt{x}}{\log^{500}x} + \frac{1}{\log x} \sum_{\sqrt{x} < p \leq x} \frac{\log p}{\sqrt{p}} \ll \frac{\sqrt{x}}{\log x} , $$ which will make a negligible contribution in Proposition 2, so in Lemma 3 we may replace each term $\EE\left| \sum_{m \leq x/p} f(m) \right|$ by an averaged version $$ \frac{\log^{1000}x}{p} \int_{p}^{p(1+1/\log^{1000}x)} \EE\left| \sum_{m \leq x/t} f(m) \right| dt . $$ Since we know that primes are well distributed in intervals of relative length $1 + 1/\log^{1000}x$ (with density 1 when weighted by $\log p$) we can rewrite Lemma 3 as \begin{eqnarray} \EE\left|\sum_{n \leq x} f(n)\right| & \gg & \frac{1}{\log x} \sum_{\sqrt{x} < p \leq x} \log p \frac{\log^{1000}x}{p} \int_{p}^{p(1+1/\log^{1000}x)} \EE\left| \sum_{m \leq x/t} f(m) \right| \frac{dt}{\sqrt{t}} \nonumber \\ & \gg & \frac{1}{\log x} \int_{\sqrt{x}}^{x} \EE\left| \sum_{m \leq x/t} f(m) \right| \frac{dt}{\sqrt{t}} . \nonumber \end{eqnarray} Proposition 2 now follows by making the substitution $z = x/t$ in the integral. \end{proof} \section{Lower bounds for small moments - Proof of Theorem 2} The proof is a very simple argument using the Cauchy--Schwarz inequality and H\"{o}lder's inequality. Indeed, for any $0 \leq q \leq 1$ we have \begin{align*} \mathbb{E} \Big | \sum_{n \leq N} f(n) \Big | & \leq \mathbb{E} \Big [ \Big | \sum_{n \leq N} f(n) \Big |^{2q} \Big ]^{1/2} \cdot \mathbb{E} \Big [ \Big | \sum_{n \leq N} f(n) \Big |^{2 - 2q} \Big ]^{1/2} \\ & \leq \mathbb{E} \Big [ \Big | \sum_{n \leq N} f(n) \Big |^{2q} \Big ]^{1/2} \cdot \mathbb{E} \Big [ \Big | \sum_{n \leq N} f(n) \Big |^{2} \Big ]^{(1 - q)/2}. \end{align*} Since $\EE|\sum_{n \leq N} f(n)|^{2} \leq N$ and $\EE|\sum_{n \leq N} f(n)| \geq \sqrt{N}/(\log\log N)^{3+o(1)}$, by re-arranging we obtain the lower bound $$ \mathbb{E} \Big [ \Big | \sum_{n \leq N} f(n) \Big |^{2q} \Big ] \geq N^{q} (\log\log N)^{-6 + o(1)}. $$ \section{Asymptotics for even moments - Proof of Theorem 3} Note that \begin{align} \nonumber \mathbb{E} \Big | \sum_{n \leq X} f(n) \Big |^{2k} & = \sum_{\substack{n_1, \ldots, n_k \leq X \\ m_1, \ldots, m_k \leq X}} \mathbb{E} [ f(n_1) \ldots f(n_k) \overline{f(m_1) \ldots f(m_k)} ] \\ & \label{equation1} = \sum_{\substack{n_1, \ldots, n_k \leq X \\ m_1, \ldots, m_k \leq X \\ n_1 \ldots n_k = m_1 \ldots m_k}} 1 . \end{align} Now $$ g(n_1, \ldots, n_k, m_1, \ldots, m_k) = \mathbf{1}_{n_1 \ldots n_k = m_1 \ldots m_k} $$ is a multiplicative function of several variables\footnote{In other words $$g(n_1, \ldots, n_k, m_1, \ldots, m_k) g(u_1, \ldots, u_k, v_1, \ldots, v_k) = g(n_1 u_1, \ldots, n_k u_k, m_1 v_1, \ldots, m_k v_k)$$ for any natural numbers $n_i , m_i$ and $u_i , v_i$ whose least common multiples are coprime.} and our problem reduces to understanding the mean value of $$ \sum_{\substack{n_1, \ldots, n_k \leq X \\ m_1, \ldots, m_k \leq X}} g(n_1, \ldots, n_k, m_1, \ldots, m_k) . 
$$ We notice that the associated multiple Dirichlet series $$ \sum_{\substack{n_1, \ldots, n_k \\ m_1, \ldots , m_k}} \frac{g(n_1, \ldots, n_k, m_1, \ldots, m_k)}{n_1^{s_1} \ldots n_k^{s_k} m_1^{w_1} \ldots m_k^{w_k}} = \sum_{n} \sum_{n_1 n_2 \ldots n_k = n} \frac{1}{n_1^{s_1} \ldots n_k^{s_k}} \sum_{m_1 m_2 \ldots m_k = n} \frac{1}{m_1^{w_1} \ldots m_k^{w_k}} $$ is absolutely convergent for $\Re s_i, \Re w_i > \tfrac 12$ and moreover it factors as $$ H(s_1, \ldots, s_k, w_1, \ldots, w_k) \prod_{i=1}^{k} \prod_{j=1}^{k} \zeta(s_i + w_j) $$ with $H(s_1, \ldots, s_k, w_1, \ldots, w_k)$ absolutely convergent in the region $\Re s_i, \Re w_i > \tfrac 14$. In addition a direct check shows that $$ H(\tfrac 12,\ldots, \tfrac 12) = \prod_{p} \Big ( 1 - \frac{1}{p} \Big )^{k^2} \cdot \Big ( 1 + \frac{k^2}{p} + \sum_{\alpha \geq 2} \frac{\binom{\alpha + k - 1}{k - 1}^2}{p^{\alpha}} \Big ) > 0.$$ Therefore the main result of La Bret\`{e}che \cite{breteche} is applicable with the $k^2$ linear forms $\ell^{(i,j)}(s_1, \ldots, s_k, w_1, \ldots, w_k) := s_i + w_j$ with $1 \leq i,j \leq k$. We note that the rank of the collection of linear forms $\ell^{(i,j)}$ (inside the space of all $\C$-linear forms on $\C^{2k}$) is $2k -1$. Therefore it follows from La Bret\`{e}che's result that (\ref{equation1}) is equal to $$ (1 + o(1)) C_k X^k (\log X)^{k^2 - (2k -1)} . $$ Using Th\'eor\`eme 2 in La Bret\`{e}che's work allows us to recover the precise value of $C_k$. Indeed, according to Th\'eor\`eme 2 in \cite{breteche} we get that (\ref{equation1}) is equal to $$ (1 + o(1)) H(\tfrac 12, \ldots, \tfrac 12) \text{Vol}(A_k(X)) $$ where $A_k(X)$ is a subset of $[1,\infty)^{k^2}$ corresponding to tuples $(a_{i,j}) \in [1,\infty)^{k^2}$ with $1 \leq i , j \leq k$ such that \begin{align*} & \text{for each } j \leq k: \prod_{1 \leq i \leq k} a_{i,j} \leq X \\ \text{ and } & \text{for each } i \leq k: \prod_{1 \leq j \leq k} a_{i,j} \leq X . \end{align*} Therefore it remains to understand the asymptotic behavior of $$ \text{Vol}(A_k(X)) $$ as $X \rightarrow \infty$. Surprisingly, this is somewhat involved, and the rest of the proof is devoted to that. \begin{proph3} Let $k \geq 2$ be fixed. Then, $$ \text{Vol}(A_k(X)) \sim \binom{2k-2}{k-1} k^{-(k-1)} \cdot \text{Vol}(\mathcal{B}_k) \cdot X^k \cdot (\log X)^{(k-1)^2} $$ where $\text{Vol}(\mathcal{B}_k)$ corresponds to the $(k-1)^2$ dimensional volume of the Birkhoff polytope $\mathcal{B}_k \subset \mathbb{R}^{k^2}$. \end{proph3} The proof of the Proposition depends on the following Lemma. \begin{lemh4} \label{lem:mainlemma} Let $n \geq 1$ be fixed. Then as $X \rightarrow \infty$ we have $$ \iint_{\substack{0 \leq x_1, ..., x_n \leq \log X \\ 0 \leq y_1, \ldots, y_n \leq \log X}} \exp \Big ( \min (x_1 + ... + x_n, y_1 + ... + y_n )\Big ) dx_1 ... dy_n \sim \binom{2n}{n} X^{n} . $$ \end{lemh4} \begin{proof} Making the substitutions $v_i = \log X - x_i$ and $w_i = \log X - y_i$ in Lemma \ref{lem:mainlemma}, we see the integral there is the same as $$ X^{n} \iint_{\substack{0 \leq v_1, ..., v_n \leq \log X \\ 0 \leq w_1, \ldots, w_n \leq \log X}} \exp \Big (- \max (v_1 + ... + v_n, w_1 + ... + w_n )\Big ) dv_1 ... dw_n . $$ Here we can extend all the ranges of integration up to positive infinity, at the cost of a multiplicative error term $1+o(1)$. Then by symmetry \begin{eqnarray} && \iint_{\substack{0 \leq v_1, ..., v_n \\ 0 \leq w_1, \ldots, w_n}} \exp\Big (- \max(v_1 + ... + v_n, w_1 + ... + w_n) \Big ) dv_1 ...
dw_n \nonumber \\ & = & 2 \int_{0 \leq v_1, ..., v_n} \exp\Big (- (v_1 + ... + v_n) \Big ) \int_{w_1 + ... + w_n \leq v_1 + ... + v_n} dv_1 ... dw_n , \nonumber \end{eqnarray} and making the further substitution $v= v_1 + ... + v_n$ in the integral, we see the above is \begin{eqnarray} & = & 2 \int_{0}^{\infty} e^{-v} \left(\int_{v_1 + ... + v_{n-1} \leq v} dv_1 ... dv_{n-1} \right) \left(\int_{w_1 + ... + w_n \leq v} dw_1 ... dw_n \right) dv \nonumber \\ & = & 2 \int_{0}^{\infty} e^{-v} v^{2n-1} \left(\int_{v_1 + ... + v_{n-1} \leq 1} dv_1 ... dv_{n-1} \right) \left(\int_{w_1 + ... + w_n \leq 1} dw_1 ... dw_n \right) dv . \nonumber \end{eqnarray} Here the two integrals in brackets are simply the volume of the standard $n-1$ simplex and the standard $n$ simplex, which are well known to be $1/(n-1)!$ and $1/n!$ respectively. Therefore the above integral is equal to $$ \frac{2}{(n-1)! n!} \int_{0}^{\infty} e^{-v} v^{2n-1} dv = 2 \binom{2n-1}{n} = \binom{2n}{n}. $$ We conclude that the integral in the statement of Lemma \ref{lem:mainlemma} is equal to (as $X \rightarrow \infty$), $$ (1 + o(1)) \binom{2n}{n} X^n $$ as claimed. \end{proof} We are now ready to prove the Proposition, and thus finish the proof of Theorem 3. \begin{proof}[Proof of Proposition 3] Notice first that, if we set $u_{i,j} = \log a_{i,j}$, and if we write $$ c_j = \sum_{1 \leq i \leq k} u_{i,j} \text{ and } r_i = \sum_{1 \leq j \leq k} u_{i,j} $$ for all $i,j \leq k$, then we find $$ \text{Vol}(A_k(X)) = \int_{(u_{i,j})_{1 \leq i,j \leq k} \in [0,\infty)^{k^{2}} : c_j , r_i \leq \log X \; \forall i,j \leq k} \exp\left(\sum_{i, j \leq k} u_{i,j}\right) du_{1,1} ... du_{k,k} . $$ To prove the proposition we shall obtain upper and lower bounds for the integral on the right that are asymptotically equal. For convenience of writing, we start by introducing a little more notation. Let $ S_{k-1} := \sum_{i,j \leq k-1} u_{i,j}$. Let also $\mathcal{U}_{k, \varepsilon}(X)$ be the set of $u_{i,j}$ with $i,j \leq k-1$ for which $$ \sum_{i \leq k-1} u_{i,j} \leq \log X \text{ and } \sum_{j \leq k-1} u_{i,j} \leq \log X \text{ and } \sum_{i,j \leq k-1} u_{i,j} > (k-2- \varepsilon) \log X. $$ Considering the vector $\mathbf{u}$ of $u_{i,j}$ with $i,j \leq k-1$ as fixed, let $\mathcal{T}_{C, k}(\mathbf{u}, X)$ be the set of those $u_{k,j}$ with $j \leq k-1$ for which $$ c_j \leq \log X \text{ for all } j \leq k-1 . $$ Finally, again considering the $u_{i,j}$ with $i,j \leq k-1$ as fixed, let $\mathcal{T}_{R, k}(\mathbf{u}, X)$ be the set of those $u_{i,k}$ with $i \leq k-1$ for which $$ r_i \leq \log X \text{ for all } i \leq k-1. $$ We set $\varepsilon = 1/\sqrt{\log X}$, say. First seeking an upper bound, we note that if we have $S_{k-1} \leq (k-2-\varepsilon)\log X$ then $S_k := \sum_{i,j \leq k} u_{i,j} \leq (k-\varepsilon)\log X$, and therefore the part of the integral where $S_{k-1} \leq (k-2-\varepsilon)\log X$ contributes at most $$ X^{k-\varepsilon} \cdot \int_{(u_{i,j})_{1 \leq i,j \leq k} \in [0,\infty)^{k^{2}} : c_j , r_i \leq \log X \; \forall i,j \leq k} 1 du_{1,1} ... du_{k,k} \leq X^{k-\varepsilon} \log^{k^{2}}X . $$ This is asymptotically negligible (for any fixed $k$) by our choice of $\varepsilon$. Meanwhile, the part of the integral where $S_{k-1} > (k-2-\varepsilon)\log X$ is equal to \begin{align} \label{equ:integral} & \int_{\mathcal{U}_{k, \varepsilon}(X)} \exp ( S_{k-1} ) \int_{\mathcal{T}_{C,k}(\mathbf{u}, X)} \exp \Big ( u_{k,1} + ... + u_{k,k-1} \Big ) \times \\ \nonumber \times & \int_{\mathcal{T}_{R,k}(\mathbf{u}, X)} \exp \Big ( u_{1,k} + ...
+ u_{k-1,k} \Big ) \int_{u_{k,k} : c_k , r_k \leq \log X} \exp ( u_{k,k} ) \ du_{1,1} ... du_{k,k} . \end{align} Here the innermost integral is over those $$0 \leq u_{k,k} \leq \log X - \max(u_{k,1} + ... + u_{k,k-1}, u_{1,k} + ... + u_{k-1,k}),$$ assuming the upper range of integration is at least zero. Therefore the innermost integral is certainly bounded above (extending the lower limit to negative infinity, and then performing the integration) by $$ X \exp \Big (-\max(u_{k,1} + ... + u_{k,k-1}, u_{1,k} + ... + u_{k-1,k}) \Big ) . $$ Substituting this in, it follows that (\ref{equ:integral}) is less than \begin{equation} \label{equ:integral2} X \int_{\mathcal{U}_{k,\varepsilon}(X)} \int_{\mathcal{T}_{C,k}(\mathbf{u}, X)} \int_{\mathcal{T}_{R,k}(\mathbf{u},X)} \exp\Big ( \min(\sum_{1 \leq j \leq k-1} c_j , \sum_{1 \leq i \leq k-1} r_i)\Big ) \prod_{(i,j) \neq (k,k)} d u_{i,j} . \end{equation} At this point we change variables, letting $r_1,\ldots, r_{k-1}$ and $c_{1}, \ldots, c_{k-1}$ run through the interval $[0,\log X]$ so that $u_{i,k} = r_i - \sum_{1 \leq j \leq k-1} u_{i,j}$ and $u_{k,j} = c_j - \sum_{1 \leq i \leq k-1} u_{i,j}$. Since $u_{i,k} \geq 0$ and $u_{k, j} \geq 0$ this change of variable implies the additional condition that for all $i,j \leq k-1$, \begin{equation} \label{equ:cond1} \sum_{j \leq k-1} u_{i,j} \leq r_i \text{ and } \sum_{i \leq k-1} u_{i,j} \leq c_j . \end{equation} The Jacobian of this linear change of variable is equal to $1$ since the linear transformation taking the $(u_{i,j})$ with $(i,j) \neq (k,k)$ into $(r_\ell, c_\ell, u_{i,j})$ with $i,j,\ell \leq k-1$ is upper triangular with only $1$'s on the diagonal. Given $\mathbf{r} = (r_1, \ldots, r_{k-1})$ and $\mathbf{c} = (c_1, \ldots, c_{k-1})$ we let $\widetilde{\mathcal{U}}_{k,\varepsilon}(\mathbf{r}, \mathbf{c}, X)$ be the set of $u_{i,j}$ with $i, j \leq k-1$ satisfying the conditions (\ref{equ:cond1}) and the standing condition that \begin{equation} \label{equ:cond2} \sum_{i,j \leq k-1} u_{i,j} \geq (k-2 - \varepsilon) \log X , \end{equation} and we let $\widetilde{\mathcal{T}}_k(X)$ be the set of $0 \leq r_1 , \ldots, r_{k-1} \leq \log X$ and $0 \leq c_1 , \ldots, c_{k-1} \leq \log X$. Then (\ref{equ:integral2}) can be re-written as \begin{equation} \label{equ:integral3} X \int_{\widetilde{\mathcal{T}}_k(X)} \exp \Big ( \min ( \sum_{1 \leq j \leq k-1} c_j , \sum_{1 \leq i \leq k-1} r_i ) \Big ) \text{Vol} \Big ( \widetilde{\mathcal{U}}_{k,\varepsilon}(\mathbf{r}, \mathbf{c}, X) \Big ) \prod_{i \leq k-1} d c_i \ d r_i . \end{equation} Since $r_i, c_i \leq \log X$ for all $i \leq k-1$, we have \begin{align*} \text{Vol} & (\widetilde{\mathcal{U}}_{k,\varepsilon}(\mathbf{r}, \mathbf{c}, X)) \leq \text{Vol}(\widetilde{\mathcal{U}}_{k,\varepsilon}(\mathbf{\log X}, \mathbf{\log X}, X)) \\ & = (\log X)^{(k-1)^2} \cdot \text{Vol}(\widetilde{\mathcal{U}}_{k,\varepsilon}(\mathbf{1}, \mathbf{1}, e)) \sim (\log X)^{(k-1)^2} \cdot \text{Vol}(\widetilde{\mathcal{U}}_{k,0}(\mathbf{1}, \mathbf{1}, e)) \end{align*} as $X \rightarrow \infty$, where $\mathbf{\log X} := (\log X, \ldots, \log X)$ and $\mathbf{1} := (1, \ldots, 1)$, and where we recall for the final asymptotic that $\varepsilon = 1/\sqrt{\log X}$. As already mentioned in the introduction, $\text{Vol}(\widetilde{\mathcal{U}}_{k,0}(\mathbf{1},\mathbf{1},e)) = k^{-(k-1)} \text{Vol}(\mathcal{B}_k)$ where $\mathcal{B}_k$ is the Birkhoff polytope.
It follows that (\ref{equ:integral2}) is \begin{align*} \leq (1 + o(1)) X (\log X)^{(k-1)^2} & \cdot k^{-(k-1)} \text{Vol} ( \mathcal{B}_k) \\ & \times \int_{\widetilde{\mathcal{T}}_k(X)} \exp \Big ( \min( \sum_{1 \leq j \leq k-1} c_j, \sum_{1 \leq i \leq k-1} r_i ) \Big ) \prod_{i \leq k-1} d c_i d r_i \end{align*} and by Lemma \ref{lem:mainlemma} this is less than or equal to $$ (1 + o(1)) X (\log X)^{(k-1)^2} \cdot k^{-(k-1)} \text{Vol} ( \mathcal{B}_k) \cdot X^{k-1} \binom{2k-2}{k-1} , $$ thus finishing the proof of the upper bound. For the lower bound we restrict attention, as we may (due to positivity), to the part of the integral where $S_{k-1} > (k-2 + \varepsilon) \log X$ and each $r_i$, $c_i$ is $\geq (1 - \varepsilon) \log X$. The point of the former condition is that if it is satisfied then $$ u_{1,k} + u_{2,k} + \ldots + u_{k-1, k} \leq (k-1)\log X - S_{k-1} \leq (1 - \varepsilon ) \log X $$ and similarly $u_{k,1} + u_{k,2} + \ldots + u_{k,k-1} \leq (1 - \varepsilon) \log X$, and therefore $$ \log X - \max \Big ( u_{k,1} + \ldots + u_{k,k-1}, u_{1,k} + \ldots + u_{k-1,k} \Big ) > \varepsilon \log X = \sqrt{\log X} \rightarrow \infty. $$ Therefore, arguing as above, the innermost integral over $u_{k,k}$ in (\ref{equ:integral}) contributes $$ (1 + o(1)) X \exp \Big ( - \max(u_{k,1} + \ldots + u_{k,k-1}, u_{1,k} + \ldots + u_{k-1,k} ) \Big ) . $$ Proceeding as before we thus arrive at (\ref{equ:integral3}), but with the additional condition that $(1 - \varepsilon) \log X < r_i, c_i < \log X$ (and with the condition that $\sum_{i,j \leq k-1} u_{i,j} \geq (k-2 - \varepsilon) \log X$ replaced by the condition that $\sum_{i,j \leq k-1} u_{i,j} \geq (k-2 + \varepsilon) \log X$). It follows that on this set of $r_i$ and $c_i$ we have \begin{align*} \text{Vol}({\widetilde{\mathcal{U}}_{k,\varepsilon}}(\mathbf{r}, \mathbf{c}, X)) > \text{Vol} & ({\widetilde{\mathcal{U}}_{k, \varepsilon}}(\mathbf{(1 - \varepsilon) \log X}, \mathbf{(1 - \varepsilon) \log X}, X)) \\ & = (1 + o(1)) (\log X)^{(k-1)^2} \cdot k^{-(k-1)} \text{Vol}(\mathcal{B}_k) . \end{align*} Therefore we obtain the following lower bound: \begin{align*} (1 + o(1)) X (\log X)^{(k-1)^2}& \cdot k^{-(k-1)} \text{Vol}(\mathcal{B}_k) \\ & \times \iint_{\mathcal{\widetilde{T}}_{k,\varepsilon}(X)} \exp \Big ( \min ( \sum_{1 \leq j \leq k-1} c_j, \sum_{1 \leq i \leq k-1} r_i) \Big ) \prod_{i \leq k-1} dc_i \ dr_i , \end{align*} where $\widetilde{\mathcal{T}}_{k,\varepsilon}(X)$ is the set of $r_i, c_i$ satisfying $(1 - \varepsilon) \log X < r_i, c_i \leq \log X$ for all $i \leq k-1$. Note that the condition $(1 - \varepsilon) \log X<r_i, c_i$ can be dropped. Indeed the contribution to the integral of any tuple of $(r_1, \ldots, r_{k-1})$ or $(c_1,\ldots, c_{k-1})$ where at least one of the $c_i, r_i$ is $\leq (1 - \varepsilon) \log X$ is $\leq X^{k-1 - \varepsilon}$ and therefore negligible. Thus we can extend the integration to all of $c_i, r_i \leq \log X$. Because of this Lemma \ref{lem:mainlemma} is applicable, and we have therefore obtained the lower bound $$ \geq (1 + o(1)) \binom{2k-2}{k-1} \cdot k^{-(k-1)} \text{Vol}(\mathcal{B}_k) X^k \cdot (\log X)^{(k-1)^2} , $$ as claimed. Since we have obtained asymptotically matching upper and lower bounds the proof of the proposition is finished.
\end{proof} \section{Proof of Theorem 4} In the Rademacher case we have, letting $\square$ denote a generic square, \begin{align*} \mathbb{E} \Big ( \sum_{n \leq X} f(n) \Big )^{k} & = \sum_{\substack{n_1, \ldots, n_k \leq X}} \mathbb{E}[f(n_1) \ldots f(n_k)] \\ & = \sum_{\substack{n_1, \ldots, n_k \leq X \\ n_1 \ldots n_k = \square}} \mu^2(n_1) \ldots \mu^2(n_k) \end{align*} Let $g(n_1, \ldots, n_k)$ be a multiplicative function of several variables, supported on square-free $n_i$, and such that $g = 1$ when $n_1 \ldots n_k = \square$ and $g = 0$ otherwise. Then we find that the Dirichlet series $$ \sum_{n_1 = 1}^{\infty} \ldots \sum_{n_k = 1}^{\infty} \frac{g(n_1, \ldots, n_k)}{n_1^{s_1} \ldots n_k^{s_k}} $$ is equal to $$ \prod_{p} \Big ( 1 + \sum_{\substack{0 \leq \alpha_1, \ldots, \alpha_{k} \leq 1 \\ \alpha_1 + \ldots + \alpha_{k} \equiv 0 \mod{2}}} \frac{1}{p^{\alpha_1 s_1 + \ldots + \alpha_k s_k}} \Big ) . $$ This factors as $$ H(s_1, \ldots, s_{k}) \prod_{1 \leq i < j \leq k} \zeta(s_i + s_j) $$ with $$ H(\tfrac 12, \ldots, \tfrac 12) = \prod_{p} \Big ( 1 - \frac{1}{p} \Big )^{k(k-1)/2} \Big (1 + \sum_{1 \leq j \leq k/2} \frac{\binom{k}{2j}}{p^j} \Big ) $$ The main result of La Bret\`eche is applicable with $\binom{k}{2}$ linear forms $\ell^{(i,j)}(s_1, s_2, \ldots, s_k) = s_i + s_j$ defined for $1 \leq i < j \leq k$. The rank of these linear forms is equal to $k$ for $k \geq 3$ (for $k = 2$ the rank is equal to $1$ since there is only one form in that case). Therefore applying La Bret\`eche's result it follows that the moment is asymptotically $$ (1 + o(1)) C_k X^{k/2} (\log X)^{\binom{k}{2} - k}. $$ In order to determine the constant $C_k$ one could use Th\'eor\`eme 2 of La Bret\`eche, to conclude that the moment is asymptotically $$ (1 + o(1)) H(\tfrac 12, \ldots, \tfrac 12) \text{Vol}(B(X)) $$ where $B(X)$ is the set of $(u_{i,j})_{i < j} \in \mathbb{R}^{k(k-1)/2}$ such that \begin{align*} \text{ for all } 1 \leq i \leq k: \ \prod_{j < i} u_{j,i} \prod_{i < j} u_{i,j} \leq X \end{align*} and then proceed in a manner similar to Theorem 3. However we leave this computation to the interested reader.
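Before leaving the moment computations, here is a quick empirical cross-check (an illustrative sketch; the cutoff $N$, the number of trials and the seed are arbitrary choices) of the identity (\ref{equation1}) at the start of the proof of Theorem 3: for Steinhaus $f$, the $2k$-th moment counts solutions of $n_1 \cdots n_k = m_1 \cdots m_k$ with all variables at most $N$.
\begin{verbatim}
import cmath
import math
import random

# Check E|sum_{n<=N} f(n)|^{2k} = #{n_i, m_i <= N : n_1...n_k = m_1...m_k}
# for Steinhaus f, here with k = 2 and a small cutoff N.
N, k = 30, 2

count = sum(1
            for n1 in range(1, N + 1) for n2 in range(1, N + 1)
            for m1 in range(1, N + 1) for m2 in range(1, N + 1)
            if n1 * n2 == m1 * m2)

def primes_up_to(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p**0.5) + 1))]

rng = random.Random(0)
ps = primes_up_to(N)

def sample_moment(trials=2000):
    total = 0.0
    for _ in range(trials):
        # Steinhaus: independent uniform points on the unit circle at
        # the primes, extended completely multiplicatively.
        fp = {p: cmath.exp(2j * math.pi * rng.random()) for p in ps}
        def f(n):
            val = 1
            for p in ps:
                while n % p == 0:
                    val *= fp[p]
                    n //= p
            return val
        s = sum(f(n) for n in range(1, N + 1))
        total += abs(s) ** (2 * k)
    return total / trials

print("exact count:", count, " Monte Carlo estimate:", sample_moment())
\end{verbatim}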
\section{Omitted proofs from Section~\ref{sec:composition}} \label{app:composition} Given a 2-universal family of functions $H = \{h:X \to \{0,1\}^m\}$, a mechanism $M$ and adversary $\adv$, we construct a new randomized mechanism $M_H$ and new adversary $\adv_H$. The following lemma relates the success probability of the modified $\adv_H$ with respect to $D$ to that of $\adv$ with respect to $U_m$. \begin{mechanism}[H] \caption{$M_H:X \to Y$} \SetKwInOut{Output}{fixed} \Output{$H = \{h:X \to \{0,1\}^m\}$, and $M:\{0,1\}^m \to Y$ } \SetKwInOut{Input}{input} \Input{$\mathbf{x} \in X$} \BlankLine sample $h \in_R H$\; return $(h, M(h(\mathbf{x})))$ \end{mechanism} \noindent \begin{lemma} \label{lemma:unif-to-general} For any $\adv$ there exists $\adv_H$ such that for all $M$, $w_\mathsf{low}$, $w_\mathsf{high}$, $\alpha > 0$ and $D\in\Delta(X)$ with min-entropy $\lambda > m+2\log(1/\alpha^2)$: \begin{align*} \left|\mathsf{Succ}^{\adv_H,M_H}_{\lew_\mathsf{low}+\alpha}(n,D) - \mathsf{Succ}^{\adv, M}_{\lew_\mathsf{low}}(n,U_m)\right| \le n\alpha \end{align*} where $\mathsf{Succ}(n,D)$ (respectively, $\mathsf{Succ}(n,U_m)$) denotes the PSO success probability with respect to the distribution $D$ (respectively, $U_m$) as in Definition~\ref{def:success}. \end{lemma} \begin{proof} For a predicate $p$ on $\{0,1\}^m$ we define a corresponding predicate on $X$: $p_h(x) \triangleq p(h(x))$. On input $(h,y)\gets M_H(\mathbf{x})$, $A_H$ simulates $p\gets A(y)$ and outputs $p_h$. We call $h\in H$ \emph{good} if $h(D)$ is $\alpha$-close to $U_m$. By Corollary~\ref{cor:sd-fact}, $(1-\alpha)$-fraction of $h$ are good. By the goodness of $h$, if $\mathsf{weight}_{U_m}(p) \le w_\mathsf{low}$ then $\mathsf{weight}_D(p_h) \le w_\mathsf{low} + \alpha$. $$ \left| \mathsf{Succ}_{\le w_\mathsf{low}+\alpha}^{\adv_H,M_H}(n, D) -\mathsf{Succ}_{\lew_\mathsf{low} }^{\adv,M}(n,U_m)\right| \le \mathsf{SD}(U_m^n, h(D^n)) \le n\alpha \qedhere $$ \end{proof} \begin{corollary} \label{thm:count-composition} Let $\alpha = \mathrm{negl}(n)$ and $\lambda = m+2\log(1/\alpha^2)$. For any $m = \omega(\log(n))$, there exists a distribution over $O(m)$-many predicates $Q_h$, a negligible function $w_\mathsf{low}(n)$, and an adversary $\adv$ such that for all $D$ with min-entropy at least $\lambda$: $$\mathsf{Succ}_{\le w_\mathsf{low}}^{\adv,M_{\#Q_h}}(n) \ge 1 - \mathrm{negl}(n).$$ \end{corollary} \begin{proof} The success probability in Theorem~\ref{thm:count-composition-uniform} is easily amplified from $1/e$ to $1-\mathrm{negl}(n)$ by repetition. Applying Lemma~\ref{lemma:unif-to-general} to the result almost completes the proof; it remains to verify that the resulting mechanism $M_H$ can be written as $M_{\#Q_h}$ for some $Q_h = (q_0^h,\dots,q_m^h)$. To do so, take $q_i^h(x) = q_i(h(x))$, where $q_i$ is from the proof of Theorem~\ref{thm:count-composition-uniform}. \end{proof} \begin{remark} Corollary~\ref{thm:count-composition} is only meaningful as an example of a failure of composition if each $M_{\#q_i^h}$ taken in isolation is PSO secure, something that is \emph{not} provided by Lemma~\ref{lemma:unif-to-general}. However, $M_{\#q_i^h}$ is an instance of the counting mechanism and thus secure. \end{remark} \section{Omitted proofs from Section~\ref{sec:definitions}} \label{app:definitions} \begin{proof}[Proof of Claim~\ref{claim:baseline-exact}] For $w \in [0,1]$, let $P_w = \{p:\mathsf{weight}_D(p) = w\}$. 
First, we show that for all $w$, $ \mathsf{base}(n,P_w) \le B(n,w).$ For any fixed predicate $p$, $$\Pr_{\mathbf{x} \sim D^n}[\iso{p}{\mathbf{x}}] = {n \choose 1} \cdot \mathsf{weight}_D(p) \cdot (1 - \mathsf{weight}_D(p))^{n-1} = B(n, \mathsf{weight}_D(p)).$$ For a trivial adversary $\mathsf{T}$, let $\alpha_w(\mathsf{T}) = \Pr_{\mathsf{T}(\bot)}[\mathsf{weight}_D(p) = w]$. \begin{equation} \label{eqn:base-alpha} \mathsf{base}(n,P_w) = B(n,w)\cdot \sup_{\mbox{{\scriptsize Trivial }} \mathsf{T}} \alpha_w(\mathsf{T}) . \end{equation} If $w$ is realizable under $D$, then there exists a deterministic trivial adversary $\mathsf{T}$ with $\alpha_w(\mathsf{T}) = 1$; otherwise $\alpha_w(\mathsf{T}) = 0$ for all $\mathsf{T}$. For $W \subseteq [0,1]$, let $P_W = \{p:\mathsf{weight}_D(p) \in W\}$. By definition, $\mathsf{base}(n,P_W) \ge \sup_{w \in W} \mathsf{base}(n,P_w).$ Next, we show that in fact $\mathsf{base}(n,P_W) = \sup_{w \in W} \mathsf{base}(n,P_w).$ Suppose, towards contradiction, that $\mathsf{base}(n,P_W) > \sup_{w\in W} \mathsf{base}(n,P_w)$. Then there exists trivial $\mathsf{T}$ with $\mathsf{Succ}_{P_W}^{\mathsf{T},\bot}(n) > \sup_{w\in W} \mathsf{base}(n,P_w).$ There must also exist a deterministic trivial adversary $\mathsf{T}'$ with $\mathsf{Succ}_{P_W}^{\mathsf{T}',\bot}(n) \ge \mathsf{Succ}_{P_W}^{\mathsf{T},\bot}(n)$ but which always outputs predicates of a single weight $w'\in W$, a contradiction. Combining with \eqref{eqn:base-alpha}: for any $W$, $$\mathsf{base}(n,P_W) = \sup_{\substack{w\in W \\\mbox{\scriptsize realizable}}} B(n,w).$$ Because $B(n,w)$ monotonically increases as $w\to 1/n$, \begin{align*} \sup_{\substack{w\le w_\mathsf{low} \\\mbox{\scriptsize realizable}}} B(n,w) &= B(n,w^*_\mathsf{low}) \\ \sup_{\substack{w\ge w_\mathsf{high} \\\mbox{\scriptsize realizable}}} B(n,w) &= B(n,w^*_\mathsf{high}) \end{align*} \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:lhl-1}] We prove the lemma for $w\ge2^{-(m-1)}$; the proof for $w \le 1 - 2^{-(m-1)}$ is analogous. Identify the set $\{0,1\}^m$ with the set $\{0,1,\dots,2^m -1\}$ in the natural way. For $y \in \{0,1\}^m$, define the function $r(y) \triangleq \frac{y}{2^m-1}$, the projection of $y$ onto the interval $[0,1]$. Let $0 \le \Delta \le w$ be some constant to be chosen later, and let $w_m$ be the greatest multiple of $2^{-m}$ less or equal to $w-\Delta$. \begin{align} \Pr_{y \in_R \{0,1\}^{m}}[r(y) \le w-\Delta] &= \Pr_{y \in_R \{0,1\}^{m}}[r(y) \le w_m] \notag \\ &= w_m + 2^{-m} \notag\\ &\in [w-\Delta,w-\Delta+2^{-m}]\label{eqn:pr-r-le-w} \end{align} Let $H = \{h:X\to\{0,1\}^m\}$ be a 2-universal family of hash functions. For each $h\in H$ we define the predicate $p_h$: $$p_h(x) = \begin{cases} 1 & r(h(x)) \le w - \Delta \\ 0 & r(h(x)) > w - \Delta \end{cases}.$$ By the Leftover Hash Lemma, for every $\alpha > 0$ to be chosen later and every $\lambda \ge m + 2 \log(1/\alpha^2)$, if $D$ has min-entropy at least $\lambda$ then $$\bigl(h,h(x)\bigr)_{\substack{h\in_R H \\ x \sim D}}$$ is $\alpha^2$-close to the uniform distribution over $H\times \{0,1\}^m$ in total variation distance. By Corollary~\ref{cor:sd-fact}, $h(D)$ is $\alpha$-close to uniform over $\{0,1\}^m$ with probability at least $1-\alpha$ over $h\in_R H$. For such $h$, by \eqref{eqn:pr-r-le-w}, \begin{equation*} \label{eqn:good-hash-property} \mathsf{weight}_D(p_h) = \Pr_{x\gets D}[r(h(x)) \le w-\Delta] \in \bigl[w-\Delta - \alpha,\,\,\, w-\Delta+2^{-m}+\alpha\bigr]. \end{equation*} Set $\alpha = 2^{-m}$ and $\Delta = 2\alpha$.
Then $\mathsf{weight}_D(p_h) \in[w - 3\cdot 2^{-m}, w]$ with probability at least $1- \alpha = 1-2^{-m}$ whenever $\lambda \ge m + 2\log(1/\alpha^2) = 5m$, completing the proof. \end{proof} \section{Omitted proofs from Section~\ref{sec:k-anon}} \label{app:k-anon} \begin{proof}[Proof of Lemma~\ref{lemma:k-of-n-min-entropy}] We prove the second inequality first. The idea used in~\eqref{eq:entropy-444} is also used in~\eqref{eq:entropy-444-2}. \begin{align} 2^{-{H}_\infty(Y_j)} &= \max_{y} \Pr[Y_j = y] \notag \\ &\le \max_y \Pr[\exists \ell \in [n], Y_\ell = y ] \label{eq:entropy-444} \\ &\le n\cdot\max_y \Pr[Y_1 = y] \notag \\ &= 2^{\log n-{H}_\infty(Y_j)} \notag \end{align} The first inequality: \begin{align} 2^{-\widetilde{H}_\infty(Y_j \mid Y_I)} &= \operatorname*{\mathbb{E}}_{Y_I} \biggl[\max_{y} \Pr[Y_j = y \;\Big\vert\; Y_I]\biggr] \notag \\ &= \sum_{y_I} \Pr[Y_I = y_I] \cdot \left(\max_{y} \Pr[Y_j = y \;\Big\vert\; Y_I = y_I] \right) \notag \\ &= \sum_{y_I} \max_{y} \biggl(\Pr[Y_I = y_I] \cdot\Pr[Y_j = y \;\Big\vert\; Y_I = y_I] \biggr) \notag \\ &= \sum_{y_I} \max_{y} \biggl(\Pr[Y_j = y]\cdot \Pr[Y_I = y_I \;\Big\vert\; Y_j = y] \biggr) \notag \\ &\le \sum_{y_I} \max_{y} \biggl(\Pr[Y_j = y]\cdot \Pr[\forall i\in I, \exists\ell \in [n]\setminus\{j\}, Y_\ell = y_i \;\Big\vert\; Y_j = y] \biggr) \label{eq:entropy-444-2} \\ &= \sum_{y_I} \max_{y} \biggl(\Pr[Y_j = y]\cdot \Pr[\forall i\in I, \exists\ell \in [n]\setminus\{j\}, Y_\ell = y_i] \biggr) \notag \\ &= \sum_{y_I} \Pr[\forall i\in I, \exists\ell \in [n]\setminus\{j\}, Y_\ell = y_i] \cdot \max_{y} \Pr[Y_j = y] \notag \\ &\le \max_{y} \Pr[Y_j = y] \cdot \sum_{y_I} 1 \notag \\ &\le {n\choose k-1}\cdot 2^{-{H}_\infty(Y_j)} \notag \\ &\le 2^{(k-1)\log n - {H}_\infty(Y_j)} \notag \end{align} \end{proof} The proof of Claim~\ref{eqn:k-anon-proof-cond} applies the following corollary of the Leftover Hash Lemma. For random variables $Y_1,\dots,Y_k$, and $j \in [k]$, let $Y_{-j} = \{Y_i : i\neq j\}$. \begin{corollary}[Corollary to Leftover Hash Lemma (\ref{LHL})] \label{LHL:avg-min-entropy} For every $Y_1,\dots,Y_k$, if $\forall j\in[k]$, $\widetilde{H}_\infty(Y_j \;\Big\vert\; Y_{-j}) = \lambda \ge m + 2\log(1/\alpha^2)$, then $(h(Y_1), \dots, h(Y_k))_{h\in_R H}$ is $k\alpha^2$-close to uniform over $\left(\{0,1\}^m\right)^k$ in total variation distance. \end{corollary} \begin{proof} Fix $j \in [k]$. Since $\widetilde{H}_\infty(Y_j \mid Y_{-j}) \ge m + 2\log(1/\alpha^2)$, the (average min-entropy version of the) Leftover Hash Lemma implies that $h(Y_j)$ is $\alpha^2$-close to uniform given $h$ and $Y_{-j}$. Replacing the coordinates $h(Y_1),\dots,h(Y_k)$ by independent uniform strings one at a time, a standard hybrid argument accumulates total variation distance at most $k\alpha^2$. \end{proof} \begin{proof}[Proof of Claim~\ref{eqn:k-anon-proof-cond}] The construction of $q$ uses the Leftover Hash Lemma and is very similar to the construction of the predicates in Lemma~\ref{lemma:lhl-1}. Identify the set $\{0,1\}^m$ with the set $\{0,1,\dots,2^m -1\}$ in the natural way. For $y \in \{0,1\}^m$, define the function $r(y) \triangleq \frac{y}{2^m-1}$, the projection of $y$ onto the interval $[0,1]$. Let $w_\phi$ be the multiple of $2^{-m}$ closest to $1/k_\phi$. Observe that $|B(k_\phi,w_\phi) - B(k_\phi,1/k_\phi)| \le \left|w_\phi - \frac{1}{k_\phi} \right| \cdot \max_{w' \in [0,1]} \left|\dv{B}{w}(w')\right| \le 2^{-m}n$. Let $H = \{h:X\to\{0,1\}^m\}$ be a 2-universal family of hash functions. For each $h\in H$ define the predicate $q_h$: $$q_h(x) = \begin{cases} 1 & r(h(x)) < w_\phi \\ 0 & r(h(x)) \ge w_\phi \end{cases}.$$ Because $w_\phi$ is a multiple of $2^{-m}$, \begin{align*} \Pr_{y \in_R \{0,1\}^{m}}[r(y) < w_\phi] = w_\phi . \end{align*} By Lemma~\ref{lemma:k-of-n-min-entropy}, $\mathbf{x}_{\phi}$ (viewed as a $k_\phi$-tuple of random variables) satisfies the average min-entropy hypothesis of Corollary~\ref{LHL:avg-min-entropy}.
Applying that Corollary: \begin{align*} \Pr_{\mathbf{x}_{\phi},q_h}\biggl[\iso{q_h}{\mathbf{x}_\phi} \;\Big\vert\; |\mathbf{x}_\phi| = k_\phi \land \mathsf{weight}_D(\phi)\le w_\mathsf{low}\biggr] &\ge \Pr_{y_1,\dots,y_{k_\phi} \in_R \{0,1\}^m}[\exists \mbox{ unique } j\in[k_\phi] : r(y_j) < w_\phi] - k_\phi \alpha^2\\ &= B(k_\phi,w_\phi) - k_\phi \alpha^2 \\ & \ge B(k_\phi,1/k_\phi) - 2^{-m}n - k_\phi \alpha^2. \end{align*} \end{proof} \section{Properties of PSO security} \label{sec:composition} Two desirable properties of privacy concepts are (i) immunity to post-processing: further processing of the outcome of a mechanism, without access to the data, should not increase privacy risks; and (ii) closure under composition: a combination of two or more mechanisms, each satisfying the requirements of the privacy concept, is a mechanism that also satisfies the requirements (potentially with worse parameters). Differential privacy is an example of a privacy concept that is immune to post-processing and is closed under composition. In this section we prove that PSO security withstands post-processing but not composition. We give two demonstrations of the latter. In the first we consider mechanisms which count the number of dataset rows satisfying a property. We show that releasing a count satisfies Definition~\ref{def:security-against-singling-out}. However, there exists a collection of $\omega(\log(n))$ counts which allows an adversary to isolate a row with probability arbitrarily close to one using a predicate with negligible weight. For the second demonstration, we construct a (less natural) pair of mechanisms that individually satisfy Definition~\ref{def:security-against-singling-out} but together allow the recovery of a row in the dataset. This latter construction borrows ideas from~\cite{NSSSU18}. An immediate conclusion is that PSO security is distinct from differential privacy. More importantly, not being closed under composition is a significant weakness of the notion of PSO security. Our constructions rely on very simple mechanisms that would likely be deemed secure against singling out under other formulations of the concept. It may well be that non-closure under composition is inherent for singling out. From a legal or policy point of view, we believe that a privacy concept which is not closed under composition (or not immune to post-processing) should not be accepted as sufficient. Pragmatically, the fact that PSO security is not closed under composition suggests that this concept can be used for {\em disqualifying} privacy technologies (if they are not PSO secure) but also that this concept must be combined with other requirements if it is used for approving technologies. \subsection{Post Processing} For any non-interactive mechanism $M$, let $F$ be a (possibly non-uniform) algorithm taking inputs of the form $M(\mathbf{x})$. Let $F\circ M$ be the mechanism that on input $\mathbf{x}$ returns $F(M(\mathbf{x}))$. \begin{lemma}[Postprocessing] \label{lemma:postprocessing} If $M$ is $(\varepsilon, \delta, w_\mathsf{low},w_\mathsf{high})$-PSO secure, then $F\circ M$ is too. \end{lemma} \begin{proof} We show something stronger: for all $M$, $F$, $\adv$ there exists $\adv_F$ such that for all $n$, $P$, $D$: $\mathsf{Succ}_P^{\adv_F,M}(n) = \mathsf{Succ}_P^{\adv,F\circ M}(n)$. On input $M(\mathbf{x})$, $\adv_F$ simulates $\adv$ on input $F(M(\mathbf{x}))$ and returns the resulting predicate $p$.
The distribution of $\adv_F$'s output with mechanism $M$ is identical to that of $\adv$ with mechanism $F\circ M$, proving the lemma. \end{proof} \noindent The definition and proof above extend to the case where the mechanism $M$ is interactive. \subsection{Example PSO-secure mechanisms} \label{sec:counting-leakage} This section presents two PSO-secure mechanisms. These examples are useful for developing intuition for the PSO security notion. Additionally, they are the foundation for the examples of self-composition failures in the next section. \subsubsection{Counting Mechanism} For any predicate $q:X \to \{0,1\}$, we define the corresponding Counting Mechanism: \begin{mechanism}[H] \caption{Counting Mechanism $M_{\#q}$} \SetKwInOut{Input}{input} \Input{$\mathbf{x}$} \BlankLine return $|\{1\leq i \leq n : q(x_i) = 1\}|$ \end{mechanism} \noindent For example, consider the least-significant bit predicate $\mathsf{lsb}$, which takes as input a string $x \in \{0,1\}^*$ and outputs $x[1]$. The corresponding Counting Mechanism $M_{\#\mathsf{lsb}}$ returns the sum of the first column of $\mathbf{x}$. The security of the Counting Mechanism is a corollary of the following proposition. \begin{proposition} \label{lemma:mech-small-codomain} For all $\adv$, $P$, $M:X^n\to Y$: $\mathsf{Succ}_P^{\adv,M}(n) \le |Y|\cdot \mathsf{base}(n, P),$ where $Y$ is the codomain of $M$. \end{proposition} \begin{proof} For any $\adv$, we define a \emph{trivial adversary} $\mathsf{T}$ such that $\mathsf{Succ}^{\mathsf{T},\bot}_P(n) \ge \frac{1}{|Y|}\cdot\mathsf{Succ}^{\adv,M}_P(n).$ The proposition follows by definition of $\mathsf{base}(n,P)$. $\mathsf{T}$ samples a random $y\in_R Y$ and returns $p \gets \adv(y)$. \begin{equation*} \mathsf{Succ}_P^{\mathsf{T},\bot}(n) = \Pr_{\substack{\mathbf{x} \gets D^n \\ y\in_R Y \\ p\gets \adv(y)}}[\iso{p}{\mathbf{x}} \land p\in P] \ge \frac{\mathsf{Succ}^{\adv,M}_P(n)}{|Y|} \end{equation*} The inequality follows from the fact that for all datasets $\mathbf{x}$, there exists $y^* = y^*(\mathbf{x}) \in Y$ such that \begin{equation*} \Pr_{p\gets \adv(y^*)}[\iso{p}{\mathbf{x}} \land p \in P] \ge \Pr_{p\gets \adv(M(\mathbf{x}))}[\iso{p}{\mathbf{x}} \land p \in P], \end{equation*} and that for all $\mathbf{x}$, $ \Pr_{y\in_R Y}[y = y^*] \ge \frac{1}{|Y|}. $ \end{proof} \begin{corollary} \label{thm:count-bit} $M_{\#q}$ is PSO secure. \end{corollary} As exact counts are not differentially private, this corollary demonstrates that differential privacy is not necessary for PSO security. \subsubsection{Predicate Mechanism} For any predicate $q:X \to \{0,1\}$, we define the corresponding Predicate Mechanism: \begin{mechanism}[H] \caption{Predicate Mechanism $M_q$} \SetKwInOut{Input}{input} \Input{$\mathbf{x}$} \BlankLine return $(q(x_1), q(x_2),\dots,q(x_n))$ \end{mechanism} \noindent \begin{theorem} \label{thm:mech-one-bit} $M_q$ is PSO secure. \end{theorem} We prove the security of $M_q$ by showing that its output is ``no more helpful'' to the PSO adversary than the counts returned by $M_{\#q}$. \begin{proposition}[Permutation Proposition] \label{lemma:permutation} For a permutation $\sigma:[n]\to[n]$ of $n$ elements and a dataset $\mathbf{x}=(x_1,\dots,x_n)$, define $\sigma(\mathbf{x}) = (x_{\sigma(1)},x_{\sigma(2)},\dots,x_{\sigma(n)})$. For any mechanism $M$, let $M\circ \sigma$ be the mechanism that on input $\mathbf{x}$ returns $M(\sigma(\mathbf{x}))$.
For all $\adv$, $P$, $D$, and $\sigma$: $\mathsf{Succ}_P^{\adv,M}(n) = \mathsf{Succ}_P^{\adv, M\circ\sigma}(n).$ \end{proposition} \begin{proof} For all $\sigma$, the distributions $D^n$ and $\sigma(D^n)$ are identical. For all $p$ and $\mathbf{x}$, $\iso{p}{\mathbf{x}}$ if and only if $\iso{p}{\sigma(\mathbf{x})}$. Using these two observations: \begin{align*} \mathsf{Succ}_P^{\adv,M}(n) &= \Pr_{\substack{\mathbf{x} \gets D^n \\ p\gets\mathsf{A}(M(\mathbf{x}))}}[\iso{p}{\mathbf{x}} \land p \in P] \\ &= \Pr_{\substack{\mathbf{x} \gets D^n \\ p\gets\mathsf{A}(M\circ\sigma(\mathbf{x}))}}[\iso{p}{\sigma(\mathbf{x})} \land p \in P] \\ &= \Pr_{\substack{\mathbf{x} \gets D^n \\ p\gets\mathsf{A}(M\circ\sigma(\mathbf{x}))}}[\iso{p}{\mathbf{x}} \land p \in P] \\ &=\mathsf{Succ}_P^{\mathsf{A},M\circ\sigma}(n) \tag*{\qedhere} \end{align*} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:mech-one-bit}] Consider $M_1$ that on input $\mathbf{x}$ samples a random permutation $\sigma$ and returns $M_q\circ \sigma(\mathbf{x})$. By the Permutation Proposition, $\mathsf{Succ}_P^{\adv,M_1}(n) = \mathsf{Succ}_P^{\adv,M_q}(n)$. Next, consider the randomized algorithm $F$ that on input $m\in [n]$ outputs a uniformly random bitstring $y \in \{0,1\}^n$ of Hamming weight $m$. By the postprocessing lemma (Lemma~\ref{lemma:postprocessing}) and the security of $M_{\#q}$, the mechanism $M_2 = F\circ M_{\#q}$ is PSO secure. $M_1$ and $M_2$ are the same mechanism: on every input $\mathbf{x}$, the output distributions are identical. Therefore $M_q$ is PSO secure. \end{proof} \subsection{Failure to Compose} \label{sec:failureToCompose} \subsubsection{Failure to compose $\omega(\log n)$ times} \label{sec:failcomposelog} The security of a single count (Corollary~\ref{thm:count-bit}) easily extends to $O(\log n)$-many counts (even adaptively chosen), as the size of the codomain grows polynomially. However, our next theorem states that a fixed set of $\omega(\log(n))$ counts suffices to predicate single out with probability close to $e^{-1}$ (which can be amplified to $1-\mathrm{negl}(n)$). \begin{theorem} \label{thm:count-composition-uniform} For a collection of predicates $Q = (q_0,\dots,q_m)$, let $M_{\#Q}(\mathbf{x}) \triangleq (M_{\#q_0}(\mathbf{x}),\dots,M_{\#q_m}(\mathbf{x}))$. Let $X = \{0,1\}^m$ and let $D = U_m$ be the uniform distribution over $X$. There exists $Q$ and an adversary $\adv$ such that $$\mathsf{Succ}^{\adv,M_{\#Q}}_{\le2^{-m}}(n) \ge B(n,1/n) - \mathrm{negl}(n). $$ \end{theorem} \noindent Choosing $m = \omega(\log(n))$ yields $2^{-m} = \mathrm{negl}(n)$. \begin{proof} Let $q_0$ be any predicate with $\mathsf{weight}_{U_m}(q_0) \le 1/n$ such that $\Pr_{\mathbf{x} \gets U_m^n}[\iso{q_0}{\mathbf{x}}] \ge B(n,1/n) - \mathrm{negl}(n).$ For instance, $q_0(x) = 1$ iff $x < 2^m/n$ (where in the last inequality we treat $x$ as a number written in binary).\footnote{Or use Claim~\ref{baseline-lower-bound} with $w_\mathsf{low}(n) = 1/n$, and Remark~\ref{remark:baseline-derandomized}.} For $i \in \{1,\dots,m\}$, define the predicate $q_i(x) \triangleq (q_0(x) \land x[i])$, and let $y_i = M_{\#q_i}(\mathbf{x})$. Consider the deterministic adversary $\adv$ that on input $M_{\#Q}(\mathbf{x}) = (y_0,\dots,y_m)$ outputs the predicate $$p(x) = q_0(x) \land \left(\bigwedge_{i = 1}^m \bigl(x[i] = y_i\bigr)\right).$$ Observe that $\iso{q_0}{\mathbf{x}}\implies\iso{p}{\mathbf{x}}$ and that by construction $\mathsf{weight}_{U_m}(p) \le 2^{-m}$.
Thus \begin{align*} \mathsf{Succ}_{\le2^{-m}}^{\adv,M_{\#Q}}(n) & = \Pr_{\substack{\mathbf{x}\gets U_m^n \\ p\gets \adv(M_{\#Q}(\mathbf{x}))}}[\iso{p}{\mathbf{x}}] \\ & \ge \Pr_{\substack{\mathbf{x}\gets U_m^n \\ p\gets \adv(M_{\#Q}(\mathbf{x}))}}[\iso{q_0}{\mathbf{x}}]\\ & \ge B(n,1/n) - \mathrm{negl}(n) \tag*{\qedhere} \end{align*} \end{proof} \begin{remark} When the attack succeeds, all the predicates $q_i$ match 0 or 1 rows in $\mathbf{x}$. It may seem that an easy way to counter the attack is by masking low counts, a common measure taken, e.g., in contingency tables. However, it is easy to modify the attack to only use predicates matching $\Theta(n)$ rows using one extra query. This means that restricting the mechanism to suppress low counts cannot prevent this type of attack. Let $q^*$ be a predicate with $\mathsf{weight}_{U_m}(q^*) = 1/2$ (e.g., parity of the bits), and let $q_i^* = q_i \lor q^*$. The attack succeeds whenever $M_{\#q_0^*}(\mathbf{x}) = M_{\#q^*}(\mathbf{x}) + 1$. If $q^*(x)$ and $q_0(x)$ are independent, then this occurs with probability at least $\frac{1}{2}\cdot B(n,1/n) - \mathrm{negl}(n)$. As before, the probability can be amplified to $1-\mathrm{negl}(n)$. \end{remark} While a single count is PSO secure for \emph{any} data distribution, the above attack against $\omega(\log(n))$ counts applies only to the uniform distribution $U_m$. Using the Leftover Hash Lemma, we can generically extend the attack to general distributions $D$ with moderate min-entropy, at the cost of randomizing the attacked mechanism (i.e., set of counts). Informally, we hash the data to a smaller domain where its image will be almost uniformly distributed, and adapt the attack appropriately. See Appendix~\ref{app:composition} for details. Theorem~\ref{thm:count-composition-uniform} can be extended to the predicate mechanism $M_Q$; this follows from the observation that $M_{\#Q}$ can be implemented by postprocessing $M_Q$. But in fact a much stronger attack is possible. \begin{claim} \label{claim:leakage-composition-uniform} For a collection of predicates $Q = (q_1,\dots,q_m)$, let $M_{Q}(\mathbf{x}) \triangleq (M_{q_1}(\mathbf{x}),\dots,M_{q_m}(\mathbf{x}))$. Let $X = \{0,1\}^m$ and let $D = U_m$ be the uniform distribution over $X$. For $m = \omega(\log(n))$, there exists $Q$ and an adversary $\adv$ such that $\adv$ fully predicate singles out against $M_Q$ and $D$. \end{claim} \begin{proof}[Proof Outline] For $i \in [m]$, define the predicate $q_i(x) = x[i]$, the $i$th bit of $x$. Let $Q_\mathsf{bits} = (q_1,\dots,q_m)$. For each row $j\in[n]$ and column $i\in[m]$, $M_{Q_\mathsf{bits}}(\mathbf{x})$ outputs the bit $x_j[i]$. The adversary outputs the collection of predicates $\{p_j\}_{j\in[n]}$ where $$p_j(x) = \bigwedge_{i=1}^m \bigl(x[i] = x_j[i]\bigr).\qedhere$$ \end{proof} \input{extract-and-encrypt} \subsubsection{Singling out and failure to compose} The failure to compose demonstrated in Section~\ref{sec:failcomposelog} capitalizes on the use of multiple counting queries. Such queries underlie a large variety of statistical analyses and machine learning algorithms. We expect that other attempts to formalize security against singling out would also allow counting queries. If so, our negative composition results may generalize beyond the notion of PSO security. The failure to compose demonstrated in Section~\ref{sec:failcomposetwice} is more contrived.
We expect that other attempts to formalize security against singling out would allow mechanisms like $M_\mathsf{ext}$, where the output is uniform even conditioned on the input. It is less clear to us whether a mechanism like $M_\mathsf{enc}$ would be allowed under other possible formalizations of security against singling out. If an alternate formalization is to compose, it likely must forbid $M_\mathsf{enc}$. \section{Security against predicate singling out (PSO security)} \label{sec:definitions} We consider a setting in which a data controller has in its possession a dataset $\mathbf{x} = (x_1, \ldots, x_n)$ consisting of $n$ rows sampled i.i.d.\ from a distribution $D\in\Delta(X)$. The data controller publishes the output of an \emph{anonymization mechanism} $\mathsf{M}$ applied to the dataset $\mathbf{x}$. A predicate singling out (PSO) adversary $\adv$ is a non-uniform Turing machine that, given access to the mechanism's output $M(\mathbf{x})$, produces a predicate $p:X\to\{0,1\}$.\footnotemark~ We abuse notation and write $\mathsf{A}(M(\mathbf{x}))$, regardless of whether $M$ is an interactive or non-interactive mechanism. For now, we assume all adversaries have complete knowledge of $D$ and are computationally unbounded; we reexamine these choices in Section~\ref{sec:knowledge-discussion} below. \footnotetext{ As is typical in cryptography, strengthening the adversary to be non-uniform (including possibly having full knowledge of the distribution $D$) yields a stronger security definition. See Section~\ref{sec:knowledge-discussion} for further discussion. } Intuitively, the adversary's goal is to output a predicate $p$ that isolates a row in $\mathbf{x}$, where we associate the Article~29 WP Opinion on Anonymisation Techniques notion of ``isolat[ing] some or all records which identify an individual in [a] dataset'' with the production of a description that matches exactly one row in the dataset. Mathematically, the description would be in the form of a predicate mapping data universe elements into $\{0,1\}$. \begin{definition}[Row isolation] A predicate $p$ \emph{isolates a row in $\mathbf{x}$} if there exists a unique $x \in \mathbf{x}$ such that $p(x) = 1$. I.e., if $p(\mathbf{x}) = 1/n$. We denote this event $\iso{p}{\mathbf{x}}$. \end{definition} It is tempting to require that a mechanism $M$ only allow a negligible probability of isolating a row, but this intuition is problematic. An adversary that does not have access to $M$---a \emph{trivial} adversary---can output a predicate $p$ with $\mathsf{weight}_D(p)\approx 1/n$ and hence isolate a row in $\mathbf{x}$ with probability ${n \choose 1} \cdot \mathsf{weight}_D(p)\cdot (1-\mathsf{weight}_D(p))^{n-1}\approx e^{-1}\approx 37\%$. In Section~\ref{sec:bounding-base} we will see that in many cases the trivial adversary need not know the distribution to produce such a predicate. \medskip Instead of considering the absolute probability that an adversary outputs a predicate that isolates a row, we consider the increase in probability relative to a \emph{baseline risk}: the probability of isolation by a trivial adversary. \begin{definition}[Trivial Adversary] A predicate singling out adversary $\mathsf{T}$ is \emph{trivial} if the distribution over outputs of $\mathsf{T}$ is independent of $M(\mathbf{x})$. That is, $\mathsf{T}(M(\mathbf{x})) = \mathsf{T}(\bot)$. \end{definition} An unrestricted trivial adversary can isolate a row with probability about $1/e$.
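As a concrete illustration of this baseline, the following Python sketch (our own illustration; the universe, distribution, and predicate are chosen for simplicity and are not part of the formal development) simulates a trivial adversary that fixes a single predicate of weight exactly $1/n$ and measures how often it isolates a row of a fresh dataset $\mathbf{x} \gets D^n$.
\begin{verbatim}
# Monte Carlo sketch of the trivial baseline: a fixed predicate of
# weight 1/n isolates a row with probability n*(1/n)*(1-1/n)^(n-1),
# which is roughly 1/e for moderate n.
import random

def isolates(pred, dataset):
    # True iff pred matches exactly one row of the dataset.
    return sum(map(pred, dataset)) == 1

def trivial_isolation_rate(n, trials=20_000):
    universe_size = n * n              # X = {0,...,n^2 - 1}, D uniform
    p = lambda x: x < n                # weight_D(p) = n/n^2 = 1/n
    hits = 0
    for _ in range(trials):
        x = [random.randrange(universe_size) for _ in range(n)]
        hits += isolates(p, x)
    return hits / trials

n = 100
print(trivial_isolation_rate(n))       # empirically ~0.37
print((1 - 1 / n) ** (n - 1))          # exact: (1-1/n)^(n-1)
\end{verbatim}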
Towards a more expressive notion of the baseline risk, we restrict adversaries to output a predicate from a particular class of {\em admissible} predicates $P \subseteq \{p:X\to \{0,1\}\}$, i.e., a subset of predicates on $X$.\footnote{More formally, we restrict the adversary to an ensemble of admissible predicates $\mathcal{P} = \{P_n\}_{n\in\N}$, where $P_n \subseteq \{p:X_n\to \{0,1\}\}$, a subset of predicates on $X_n$.} \begin{definition}[Adversarial success probability] \label{def:success} Let $D$ be a distribution over $X$. For mechanism $M$, an adversary $\adv$, a set of admissible predicates $P$, and $n\in \mathbb{N}$, let $$\mathsf{Succ}_{P}^{\mathsf{A}, M}(n,D) \triangleq \Pr_{\substack{\mathbf{x}\gets D^n \\ p\gets \mathsf{A}(M(\mathbf{x}))}}[\iso{p}{\mathbf{x}} \land p\in P].$$ \end{definition} \begin{definition}[Baseline] \label{def:baseline} For $n\in\N$ and set of admissible predicates $P$, $$ \mathsf{base}_D(n,P) \triangleq \sup_{\mbox{\scriptsize Trivial }\mathsf{T}} ~ \mathsf{Succ}_{P}^{\mathsf{T},\bot}(n,D) $$ \end{definition} We typically omit the parameter $D$ when the distribution is clear from context. In this work, we focus on two classes of admissible predicates parameterized by the \emph{weight} of the predicate $p$. \begin{definition}[Predicate families $\mathsf{low}$ and $\mathsf{high}$] For $0\le w_\ell(n) \le 1/n \le w_h(n) \le 1$ we define the predicate families $$ \mathsf{low} = \{p: \mathsf{weight}_D(p) \leq w_\ell(n)\} \quad\mbox{and}\quad \mathsf{high} = \{p: \mathsf{weight}_D(p) \geq w_h(n)\}.$$ \end{definition} We will consider the success probability of adversaries restricted to these admissible predicates and will denote them $\mathsf{Succ}_{\le w_\ell}^{\mathsf{A},M}$ and $\mathsf{Succ}_{\ge w_h}^{\mathsf{A},M}$ as shown in Figure~\ref{fig:succ-low}. \begin{figure}[h] \centering \medskip \includegraphics[width=\textwidth]{diagram.pdf} \caption{\label{fig:succ-low} $\mathsf{Succ}_{\le w_{\mathsf{low}}}^{\mathsf{A},M}(n,D) = \Pr_{D,M,\mathsf{A}}[b = \mathsf{true}].$} \end{figure} \subsection{Security against predicate singling out} We now have the tools for presenting our definition of security against singling out. We require that no adversary should have significantly higher probability of isolating a row than that of a trivial adversary, conditioned on both outputting predicates from the same class of admissible predicates. \begin{definition}[Security against predicate singling out\label{def:security-against-singling-out}] For $\epsilon(n) > 0$, $\delta(n) > 0$, $0 \le w_\mathsf{low}(n) \le 1/n \le w_\mathsf{high}(n) \le 1$, we say a mechanism $M$ is \emph{$(\epsilon, \delta, w_\mathsf{low},w_\mathsf{high})$ secure against predicate singling out} ($(\epsilon, \delta, w_\mathsf{low},w_\mathsf{high})$\emph{-PSO secure}) if for all $\mathsf{A}$, $D$, $n$, $w_\ell \le w_\mathsf{low}$, and $w_h \ge w_\mathsf{high}$: \begin{align} \mathsf{Succ}_{\le w_\ell}^{\mathsf{A},M}(n,D) &\le e^{\epsilon(n)} \cdot \mathsf{base}_D(n,\mathsf{low}) + \delta(n), \nonumber\\ \mathsf{Succ}_{\ge w_h}^{\mathsf{A},M}(n,D) &\le e^{\epsilon(n)} \cdot \mathsf{base}_D(n,\mathsf{high}) + \delta(n). \label{eq:succ_w_high} \end{align} We often omit explicit reference to the parameter $n$ for $\epsilon$, $\delta$, $w_\mathsf{low}$, and $w_\mathsf{high}$.
We say a mechanism is \emph{secure against predicate singling out} (\emph{PSO secure}) if for all $w_\mathsf{low} = \mathrm{negl}(n)$, $w_\mathsf{high} = \omega(\frac{\log n }{n})$ there exists $\delta = \mathrm{negl}(n)$ such that $M$ is $(0,\delta,w_\mathsf{low},w_\mathsf{high})$-PSO secure. \end{definition} The definition is strengthened as $\epsilon$ and $\delta$ get smaller, and as $w_\mathsf{low}$ and $w_\mathsf{high}$ get closer to $1/n$. As shown below, when $w_\mathsf{low} = \mathrm{negl}(n)$ the baseline is negligible. This is probably the most important regime of Definition~\ref{def:security-against-singling-out} as such predicates are likely to not only isolate a row in the dataset but also an individual in the entire population. The baseline is also negligible when $w_\mathsf{high} = \omega(\log n / n)$. It is not clear to the authors how beneficial finding a predicate in this regime may be to an attacker. The reader may decide to ignore Equation~\ref{eq:succ_w_high} in Definition~\ref{def:security-against-singling-out} (as is depicted in Figure~\ref{fig:succ-low}). We include the high weight regime in our analysis so as not to overlook potential singling out risks which rely on high weight predicates. We also define a strong notion of predicate singling out, where an adversary can simultaneously isolate all rows of a dataset. \begin{definition}[Fully Predicate Singling Out] \label{def:blatant} An adversary $\adv$ \emph{fully predicate singles out} against a mechanism $M$ and distribution $D$ if (with high probability) it outputs a collection of $n$ negligible-weight predicates $p_i$, each of which isolates a different row of the input dataset $\mathbf{x}$. More formally, if \begin{equation} \Pr_{\substack{\mathbf{x}\gets D^n \\ (p_1,\dots,p_n)\gets \adv(M(\mathbf{x}))}}[\forall i \neq j \in [n]: \iso{p_i}{\mathbf{x}} \land \mathsf{weight}_D(p_i) = \mathrm{negl}(n) \land (p_i \land p_j)(\mathbf{x}) = 0 ] > 1 - \mathrm{negl}(n). \end{equation} \end{definition} \paragraph{Examples.} On input $(x_1,\dots,x_n)$ the mechanism $M_f$ outputs $(f(x_1),\dots,f(x_n))$ for some possibly randomized function $f$. Whether $M_f$ prevents predicate singling out depends on $f$. On one extreme, if $f(x) = x$ and $|X|\gg n$, then $M_f$ provides no protection. On the other extreme, if $f(x)$ is completely random, $M_f(\mathbf{x})$ contains no information about $\mathbf{x}$ and provides no benefit to the adversary. More formally, for all $\mathbf{x}$ the output of $M_f(\mathbf{x})$ is uniform; this allows us to construct a trivial adversary $\mathsf{T}$ that perfectly simulates any adversary $\adv$.\footnotemark \footnotetext{Uniformity without conditioning on $\mathbf{x}$ may not be enough. For example, if the data itself is uniform, then the output of the identity function is also uniform. See also footnote~\ref{foot:uniformity-1}.} If $f$ is invertible, then it offers no more protection than the identity function. However, $f$ being many-to-one does not by itself give any assurance. For instance, suppose the data is uniform over $\{0,1\}^n$ and $f:\{0,1\}^n \to \{0,1\}^{n/2}$ outputs the last $n/2$ bits of an input $x$. $M_f$ is \emph{not} secure. Indeed, it allows fully predicate singling out. For any $y_i = f(x_i)$ in the output, the adversary can output the predicate $p_i: x \mapsto \indic{f(x) = y_i}$. $\Pr[\iso{p_i}{\mathbf{x}}] = 1-\mathrm{negl}(n)$ and $\mathsf{weight}_{U_n}(p_i) = 2^{-n/2}= \mathrm{negl}(n)$.
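The last example can be checked with a short simulation. The following Python sketch is our own illustration, with parameters chosen for readability; in particular, to keep the dataset size and the row length separate, we use $k$-bit rows in place of the $n$-bit rows of the example above.
\begin{verbatim}
# Sketch of the example above: rows are uniform k-bit strings, f keeps
# the low k//2 bits, and the adversary isolates every row i with the
# predicate p_i(x) = 1{f(x) = y_i}.
import random

k, n = 64, 100

def f(x):                              # many-to-one: keep low k//2 bits
    return x & ((1 << (k // 2)) - 1)

dataset = [random.getrandbits(k) for _ in range(n)]
output = [f(x) for x in dataset]       # the release M_f(x)

predicates = [(lambda x, y=y: f(x) == y) for y in output]

# Each p_i has weight 2^(-k/2) under the uniform distribution and,
# barring a collision among the low halves (prob. ~ n^2 * 2^(-k/2)),
# isolates row i.
print(all(sum(p(x) for x in dataset) == 1 for p in predicates))
\end{verbatim}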
\subsection{Bounding the baseline} \label{sec:bounding-base} In this section, we characterize the baseline over intervals in terms of a simple function $B(n,w)$. For $n\ge2$ and a predicate $p$ of weight $w$, the probability over $\mathbf{x}\sim D^n$ that $p$ isolates a row in $\mathbf{x}$ is \begin{equation*} B(n,w) \triangleq n \cdot w \cdot (1-w)^{n-1} \end{equation*} $B(n,w)$ is maximized at $w = 1/n$ and strictly decreases moving away from the maximum. It is helpful to recall that $(1-1/n)^n \approx e^{-1}$ even for relatively small values of $n$. $(1-1/n)^{n-1}$ also approaches $e^{-1}$ as $n\to\infty$, and does so from above. As made formal in Claim~\ref{claim:baseline-exact} (proof in Appendix~\ref{app:definitions}), a trivial adversary maximizes its probability of isolating a row by outputting a predicate $p$ with $\mathsf{weight}_D(p)$ as close as possible to $1/n$ (the weight that maximizes $B(n,w)$). The set of possible values for $\mathsf{weight}_D(p)$ depends not only on $w_\mathsf{low}$ and $w_\mathsf{high}$, but also on the distribution. We say that a weight $w \in [0,1]$ is \emph{realizable} under distribution $D$ if there exists $p$ such that $\mathsf{weight}_D(p) = w$. The baseline is characterized by $B(n,w)$. \begin{claim} \label{claim:baseline-exact} For every $n > 0$, $w_\mathsf{low}$, $w_\mathsf{high}$ and $D$, $$ \mathsf{base}_D(n,\mathsf{low}_n) = B(n,w^*_\mathsf{low}(n)) \quad\mbox{and}\quad \mathsf{base}_D(n,\mathsf{high}_n) = B(n,w^*_\mathsf{high}(n)),$$ where $$w^*_\mathsf{low}(n) = \sup \{w \le w_\mathsf{low}(n) : w \mbox{ realizable}\} \quad\mbox{and}\quad w^*_\mathsf{high}(n) = \inf \{w \ge w_\mathsf{high}(n) : w \mbox{ realizable}\}. $$ \end{claim} Because $B(n,w)$ increases as $w$ approaches $1/n$, the baseline has a simple upper bound. \begin{corollary} \label{clm:baselineUpperBound} For every $w_\mathsf{low}$, $w_\mathsf{high}$, $n\in\N$ and distribution $D$, $$\mathsf{base}_D(n,\mathsf{low}_n) \le B(n,w_\mathsf{low}(n)) \quad \mbox{and}\quad \mathsf{base}_D(n,\mathsf{high}_n) \le B(n,w_\mathsf{high}(n)).$$ \end{corollary} The dependence on the realizability of weights under $D$ makes the exact baseline unwieldy. For example, the difference between the true baseline and the upper bound can be as large as $1/e$. Thankfully, the $B(n,w)$ upper bound is nearly tight when the underlying distribution has moderate min-entropy. Moreover, the corresponding lower bound is achievable by an efficient uniform trivial adversary who is oblivious of the distribution (see Section~\ref{sec:knowledge-discussion}). \begin{claim}[Baseline Lower Bound] \label{baseline-lower-bound} Let $c>0$ and $0\le w_\mathsf{low}(n) \le 1/n \le w_\mathsf{high}(n) \le 1$. If $D$ has min-entropy at least $\lambda> 5(c+\log n + 2)$, then $\mathsf{base}_D(n,\mathsf{low}_n) \ge B(n,w_\mathsf{low}(n)) - 2^{-c}$ and $\mathsf{base}_D(n,\mathsf{high}_n) \ge B(n,w_\mathsf{high}(n)) - 2^{-c}$. \end{claim} Informally, the assumption that $D$ has min-entropy $\lambda$ implies two useful facts. First, the set of realizable weights is dense: for any $w$, there exists a realizable $w'$ such that $|w-w'|$ is small. Second, the Leftover Hash Lemma allows us to construct an efficient uniform adversary who can find a predicate with weight $w'$ without any knowledge of the distribution. The following lemma (proved in Appendix~\ref{app:definitions}) captures these properties: \begin{lemma} \label{lemma:lhl-1} For $m\in\N$ and a set $X$, let $H = \{h:X\to\{0,1\}^m\}$ be a 2-universal family of hash functions.
For any $w \ge 2^{-(m-1)}$ (respectively, $w \le 1-2^{-(m-1)}$), there exists a collection of predicates $P_H = \{p_h\}_{h\in H}$ such that for all distributions $D$ over $X$ with min-entropy at least $\lambda = 5m$, with probability at least $1-2^{-m}$ over $h\in_R H$ it holds that $\mathsf{weight}_D(p_h) \in [w - 3\cdot2^{-m}, w]$ (respectively, $\mathsf{weight}_D(p_h) \in [w, w + 3\cdot2^{-m}]$). \end{lemma} \begin{proof}[Proof of Claim~\ref{baseline-lower-bound}] We prove the claim for $w_\mathsf{low}(n)$; the proof for $w_\mathsf{high}(n)$ is analogous. Let $m\ge c+\log n +2$. Either $w_\mathsf{low}(n) \le 2^{-(c+\log n)}$ or $w_\mathsf{low}(n) \ge 2^{-(m-1)}$. If $w_\mathsf{low}(n) \le 2^{-(c+\log n)}$, then $B(n,w_\mathsf{low}) \le nw_\mathsf{low} \le 2^{-c}$, making the claim trivial. It remains to consider $w_\mathsf{low}(n) \ge 2^{-(m-1)}$. Let $P_H$ be the family of predicates from Lemma~\ref{lemma:lhl-1} instantiated with $w = w_\mathsf{low}(n)$, let $W_\mathsf{low} \triangleq [w_\mathsf{low}(n) - 3\cdot2^{-m},\, w_\mathsf{low}(n)]$ be the interval guaranteed by the lemma, and let $\mathsf{T}_H$ be a trivial adversary that outputs a random $p_h \in_R P_H$. Recall that for any predicate $p$, $\Pr_{\mathbf{x} \sim D^n}[\iso{p}{\mathbf{x}}]=B(n, \mathsf{weight}_D(p))$. By Lemma~\ref{lemma:lhl-1} \begin{align*} \mathsf{Succ}_\mathsf{low}^{\mathsf{T}_H,\bot}(n) &\ge \Pr_{\mathbf{x} \sim D^n, h\in_R H}[\iso{p_h}{\mathbf{x}} \land \mathsf{weight}_D(p_h) \in W_\mathsf{low}] \\ &= \Pr_{h\in_R H}[\mathsf{weight}_D(p_h) \in W_\mathsf{low}]\cdot \Pr_{\mathbf{x} \sim D^n, h\in_R H}[\iso{p_h}{\mathbf{x}} \mid \mathsf{weight}_D(p_h) \in W_\mathsf{low}] \\ &\ge (1 - 2^{-m})\cdot B(n,w_\mathsf{low}(n) - 3\cdot 2^{-m}) \end{align*} Observing that $\left|\dv{B}{w}(w)\right| \le \dv{B}{w}(0) =n$, $\mathsf{Succ}_\mathsf{low}^{\mathsf{T}_H,\bot}(n) \ge B(n,w_\mathsf{low}(n)) - 3\cdot2^{-m}n - 2^{-m} \ge B(n,w_\mathsf{low}(n)) - 2^{-(m-\log n - 2)} \ge B(n,w_\mathsf{low}(n)) - 2^{-c}$. \end{proof} \begin{remark} \label{remark:baseline-derandomized} The proof of Claim~\ref{baseline-lower-bound} requires only that it is possible to sample a predicate such that $\mathsf{weight}_D(p_h) \in W_\mathsf{low}$. If we switch the order of quantifiers in the claim by allowing the trivial adversary to depend on the distribution $D$, then the proof (and thus the trivial adversary) can be derandomized. Indeed, for any $D$ with sufficient min-entropy, there are many $p_h$ that can be used. This observation is used in the proof of Theorem~\ref{thm:count-composition-uniform}. \end{remark} \subsection{Reflections on modelling assumptions} \label{sec:knowledge-discussion} In many ways, Definition~\ref{def:security-against-singling-out} requires a very high level of protection, similar to what is standard in the foundations of cryptography. The definition requires a mechanism to provide security for all distributions $D$ and against non-uniform, computationally unbounded adversaries.\footnote{\label{foot:knowledge} It is reasonable to limit the adversary in Definition~\ref{def:security-against-singling-out} to polynomial time. If we restricted our attention to distributions with moderate min-entropy, our results would remain qualitatively the same: our trivial adversaries and lower bounds are all based on efficient and uniform algorithms; our upper bounds are against unbounded adversaries.
Relatedly, restricting to distributions with moderate min-entropy would allow us to switch the order of quantifiers of $D$ and $\mathsf{T}$ in the definition of the baseline without affecting our qualitative results.} The main weakness in the required protection is that it considers only data that is i.i.d., whereas real-life data cannot generally be modeled as i.i.d. Any mechanism that purports to be a universal anonymizer of data under the GDPR---by transforming personal data into non-personal data---must prevent singling out. Our definition is intended to capture a necessary condition for a mechanism to be considered as rendering data sufficiently anonymized under the GDPR. Any mechanism that prevents singling out in all cases must prevent it in the special case that the data is i.i.d. from a distribution $D$. We view a failure to provide security against predicate singling out (Definition~\ref{def:security-against-singling-out}), or the existence of an adversary that fully predicate singles out (Definition~\ref{def:blatant}), as strong evidence that a mechanism does not provide security against singling out; hence, it does not protect from identification, as per the analysis in Section~\ref{sec:soGDPR}. On the other hand, satisfying Definition~\ref{def:security-against-singling-out} is {\em not sufficient} for arguing that a mechanism renders data sufficiently anonymized under the GDPR. Singling out is only one of the many ``means reasonably likely to be used'' to identify a person in a data release.\footnote{Article~29 Working Party Opinion on Anonymisation techniques~\cite{wp-anonymisation} enumerates three criteria for identification: singling out, linkage, and inference.} Furthermore, the definition considers only i.i.d.\ data; satisfying it may not even be sufficient to conclude that a mechanism prevents singling out in all relevant circumstances. \subsubsection{Failure to compose twice} \label{sec:failcomposetwice} \renewcommand{\l}{{\frac{n}{2}}} Borrowing ideas from~\cite{NSSSU18}, we construct two mechanisms $M_\mathsf{ext}$ and $M_\mathsf{enc}$ which are individually secure against singling out (for arbitrary distributions), but which together allow an adversary to single out with high probability when the data is uniformly distributed over the universe $X = \{0,1\}^m$. With more work, the composition attack can be extended to more general universes and to distributions with sufficient min-entropy. We divide the input dataset into three parts: a source of randomness $\mathbf{x}_\mathsf{ext} = (x_1,\ldots,x_{\l})$, a message $\row_n$, and a holdout set $\mathbf{x}_\mathsf{hold} = (x_{\l+1},\ldots,x_{n-1})$ used in the proof. $M_\mathsf{ext}(\mathbf{x})$ outputs an encryption secret key $\mathsf{s}$ based on the rows in $\mathbf{x}_\mathsf{ext}$, using the von Neumann extractor. \begin{mechanism}[H] \caption{$M_\mathsf{ext}$} \SetKw{KwBy}{by} \SetKwInOut{Input}{input} \Input{$\mathbf{x}$} \BlankLine $\mathsf{s} \gets \emptyset$, the empty string\; \For{$i \gets 1$ \KwTo $\l$ \KwBy $2$}{ \uIf{$\mathsf{lsb}(x_i) = 0 \land \mathsf{lsb}(x_{i+1}) = 1$}{$\mathsf{s} \gets \mathsf{s}\|0$} \uIf{$\mathsf{lsb}(x_i) = 1 \land \mathsf{lsb}(x_{i+1}) = 0$}{$\mathsf{s} \gets \mathsf{s}\|1$} } \uIf{$|\mathsf{s}| \ge m$}{return $\mathsf{s}[1:m]$, the first $m$ bits of $\mathsf{s}$} \uElse{return $\bot$} \end{mechanism} $M_\mathsf{enc}(\mathbf{x})$ runs $\mathsf{s}\gets M_\mathsf{ext}(\mathbf{x})$. If $\mathsf{s} \neq \bot$, it outputs $\mathsf{s}\oplus \row_n$ (using $\mathsf{s}$ as a one-time pad to encrypt $\row_n$); otherwise, it outputs $\bot$.
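Before analyzing the pair formally, the following Python sketch (our own illustration, with small concrete parameters; it is not part of the formal development) implements both mechanisms and the composition attack described next: given the two outputs together, an adversary recovers $x_n$ exactly and isolates it with a predicate of weight $2^{-m}$.
\begin{verbatim}
# Sketch of M_ext (von Neumann extractor on the lsb's of the first n/2
# rows), M_enc (one-time pad on the last row), and the attack that
# recovers x_n from the pair of outputs.
import random

def lsb(x): return x & 1

def M_ext(x, m):
    s, half = [], len(x) // 2
    for i in range(0, half - 1, 2):    # disjoint pairs within x_ext
        a, b = lsb(x[i]), lsb(x[i + 1])
        if (a, b) == (0, 1): s.append(0)
        if (a, b) == (1, 0): s.append(1)
    return s[:m] if len(s) >= m else None   # None stands for "bot"

def M_enc(x, m):
    s = M_ext(x, m)
    if s is None: return None
    xn = [(x[-1] >> i) & 1 for i in range(m)]    # bits of the last row
    return [si ^ bi for si, bi in zip(s, xn)]    # one-time pad

n, m = 1024, 32                        # m = omega(log n), m <= n/16
x = [random.getrandbits(m) for _ in range(n)]
s, c = M_ext(x, m), M_enc(x, m)
if s is not None:                      # fails with negligible probability
    bits = [si ^ ci for si, ci in zip(s, c)]
    recovered = sum(b << i for i, b in enumerate(bits))
    p = lambda row: row == recovered   # predicate of weight 2^(-m)
    print(recovered == x[-1], sum(map(p, x)) == 1)
\end{verbatim}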
Alone, neither $\mathsf{s}$ nor $\mathsf{s}\oplus \row_n$ allows the adversary to single out, but using both an adversary can recover $\row_n$ and thereby single it out. \begin{theorem} \label{thm:two-composition} $M_\mathsf{ext}$ and $M_\mathsf{enc}$ are secure against predicate singling out (Definition~\ref{def:security-against-singling-out}). For $m = \omega(\log(n))$ and $m\le n/16$, $X =\{0,1\}^m$, and $D = U_m$ the uniform distribution over $X$, there exists an adversary $\adv$ such that $$\mathsf{Succ}_{\le2^{-m}}^{\adv,M_\mathsf{ExtEnc}}(n) \ge 1- \mathrm{negl}(n),$$ where $M_\mathsf{ExtEnc} = (M_\mathsf{ext},M_\mathsf{enc})$. \end{theorem} \begin{proof} Let $\mathbf{x} = (\mathbf{x}_\mathsf{ext}, \mathbf{x}_\mathsf{hold},x_n)$ be as described above. \begin{proof}[Security of $M_\mathsf{ext}$.] This is a special case of the security of the predicate mechanism $M_q$ (Theorem~\ref{thm:mech-one-bit}) and post-processing, with $q = \mathsf{lsb}$. \footnotemark \footnotetext{\label{foot:uniformity-1} The security of $M_\mathsf{ext}$ does not follow from the mere fact that its output is nearly uniform. For example, the mechanism that outputs $x_1$ may be uniform, but it trivially allows singling out. Security would follow if the output were nearly uniform \emph{conditioned} on $\mathbf{x}$.} In fact, $M_\mathsf{ext}$ is even $(\ln(2),0,1/n,1/n)$-PSO secure. We provide a brief outline of the proof. Consider a related mechanism $M_\mathsf{ext}^\top$ that outputs $\top$ if $|\mathsf{s}| \ge m$ and $\bot$ otherwise. By Proposition~\ref{lemma:mech-small-codomain}, $M_\mathsf{ext}^\top$ is $(\ln(2),0,1/n,1/n)$-PSO secure. The security of $M_\mathsf{ext}$ can be reduced to that of $M_\mathsf{ext}^\top$ using a generalization of Proposition~\ref{lemma:permutation} to distributions of permutations. \end{proof} \begin{proof}[Security of $M_\mathsf{enc}$.] For $\adv$, $w_\mathsf{low}(n) = \mathrm{negl}(n)$, and $w_\mathsf{high}(n) =\omega(\log(n)/n)$, let $$ \gamma_\mathsf{low} = \mathsf{Succ}_{\le w_\mathsf{low}}^{\adv,M_\mathsf{enc}}(n) \quad\mbox{and}\quad \gamma_\mathsf{high} = \mathsf{Succ}_{\ge w_\mathsf{high}}^{\adv,M_\mathsf{enc}}(n). $$ We must show that $\gamma_\mathsf{low}$ and $\gamma_\mathsf{high}$ are negligible. It is easy to bound $\gamma_\mathsf{high}$ using the holdout set $\mathbf{x}_\mathsf{hold}$, which is independent of the output of $M_\mathsf{enc}$: $$\gamma_\mathsf{high} \le \Pr_{\mathbf{x},M_\mathsf{enc},\adv}[p(\mathbf{x}_\mathsf{hold}) \le 1 \mid \mathsf{weight}_{D}(p) \ge w_\mathsf{high}] \le n\cdot(1-w_\mathsf{high})^{n/2-2} = n\cdot\bigl(1-\omega(\log(n)/n)\bigr)^{\Omega(n)} = \mathrm{negl}(n).$$ \noindent To bound $\gamma_\mathsf{low}$, we consider the two possible values of $p(x_n)$. Write $\gamma_\mathsf{low} = \gamma_\mathsf{low}^0 + \gamma_\mathsf{low}^1$ where \begin{align*} \gamma_\mathsf{low}^b \triangleq \Pr\left[\iso{p}{\mathbf{x}} \land \mathsf{weight}_{D}(p) \le w_\mathsf{low} \land p(x_n) = b \right] \end{align*} If $\adv$ singles out and $p(x_n) = 1$, then it must have gleaned information about $x_n$ from the ciphertext $\mathsf{s} \oplus x_n$, which should be impossible. The von Neumann extractor guarantees that either $\mathsf{s} = \bot$ or $\mathsf{s}$ is uniformly distributed in $\{0,1\}^m$. Either way, the output of $M_\mathsf{enc}(\mathbf{x})$ is information-theoretically independent of $\row_n$.
Therefore $$\gamma_\mathsf{low}^1 \le \Pr_{\mathbf{x},M_\mathsf{enc},\mathsf{A}}[p(\row_n) = 1 \mid \mathsf{weight}_{D}(p) \le w_\mathsf{low}] \le w_\mathsf{low}=\mathrm{negl}(n).$$ If $\adv$ singles out and $p(x_n) = 0$, then it is effectively singling out against the sub-dataset $\mathbf{x}_{-n} = (x_1,\dots,x_{n-1})$. That is \begin{align*} \gamma_\mathsf{low}^0 &=\Pr_{\mathbf{x},M_\mathsf{enc},\mathsf{A}}[\iso{p}{\mathbf{x}} \land \mathsf{weight}_{D}(p) \le w_\mathsf{low} \land p(\row_n) = 0] \\ &= \Pr_{\mathbf{x},M_\mathsf{enc},\mathsf{A}}[\iso{p}{\mathbf{x}_{-n}} \land \mathsf{weight}_{D}(p) \le w_\mathsf{low} \land p(\row_n) = 0] \end{align*} We construct an adversary $\mathsf{B}$ that tries to single out against mechanism $M_\mathsf{ext}$ using $\mathsf{A}$. We assume that $\mathsf{B}$ can sample from $D$.\footnotemark\ On input $\mathsf{s}$, $\mathsf{B}$ samples $\row_n'\sim D$ and runs $p\gets\mathsf{A}(\mathsf{s}\oplus\row_n')$. \begin{align*} \mathsf{Succ}_{\le w_\mathsf{low}}^{\mathsf{B},M_\mathsf{ext}}(n) &\ge \Pr\left[\iso{p}{\mathbf{x}_{-n}} \land \mathsf{weight}_{D}(p) \le w_\mathsf{low} \land p(x_n') = 0 \land p(x_n) = 0\right] \\ &\ge \Pr\left[\iso{p}{\mathbf{x}_{-n}} \land \mathsf{weight}_{D}(p) \le w_\mathsf{low} \land p(x_n') = 0\right]\cdot \Pr[p(x_n) = 0 \mid \mathsf{weight}_{D}(p) \le w_\mathsf{low}] \\ &\ge \gamma_\mathsf{low}^0 \cdot (1 - w_\mathsf{low}) \\ &\ge \gamma_\mathsf{low}^0 \cdot (1 - \mathrm{negl}(n)) \end{align*} Therefore $\gamma_\mathsf{low}^0$ is negligible. \footnotetext{\label{foot:uniformity-2} It is tempting to try to remove this assumption by picking $\row_n'$ arbitrarily, say $\row_n' = 0^m$. Because $\mathsf{s}$ is uniform, the ciphertexts $\mathsf{s}\oplus \row_n$ and $\mathsf{s}\oplus \row_n'$ are identically distributed and perfectly indistinguishable. This intuition is misleading (see also footnote~\ref{foot:uniformity-1}).} \end{proof} \begin{proof}[Insecurity of $M_\mathsf{ExtEnc}$ for $D = U_m$.] The output of $M_\mathsf{ExtEnc}(\mathbf{x})$ is a pair $(\mathsf{s}, \mathsf{c})$. If $(\mathsf{s},\mathsf{c}) = (\bot, \bot)$, $\adv$ aborts. The for-loop in $M_\mathsf{ext}$ examines $n/4$ disjoint pairs of rows, each of which yields a uniform bit with probability $1/2$, so it extracts $n/8$ uniform bits in expectation. By a Chernoff bound, for $m\le n/16$, $\Pr_\mathbf{x}[\mathsf{s} = \bot] \le e^{-n/64} = \mathrm{negl}(n)$. If $(\mathsf{s},\mathsf{c}) \neq (\bot, \bot)$, $\adv$ recovers $\row_n = \mathsf{c}\oplus \mathsf{s}$ and outputs the predicate $$p(x) = \bigl(x = \row_n\bigr).$$ By the choice of $m = \omega(\log(n))$, $\mathsf{weight}_{U_m}(p) = 2^{-m} = \mathrm{negl}(n)$. $\Pr[\iso{p}{\mathbf{x}} \mid \mathsf{s} \neq \bot] = 1 - \Pr[\exists j\neq n: x_j =x_n] \ge 1 - n\cdot 2^{-m} \ge 1- \mathrm{negl}(n)$. The bound on $\mathsf{Succ}_{\le2^{-m}}^{\adv,M_\mathsf{ExtEnc}}$ follows, completing the proof of the claim and the theorem. \end{proof} \end{proof} \section{Differential Privacy, generalization and PSO security} \label{sec:generalization} \subsection{Preliminaries from differential privacy} For $\mathbf{x}, \mathbf{x}' \in X^n$, we write $\mathbf{x}\sim\mathbf{x}'$ if the two datasets differ on exactly one element $x_i$. \begin{definition}[Differential Privacy~\cite{DMNS06, dwork2006our}] A randomized mechanism $M: X^n \rightarrow T$ is $(\epsilon,\delta)$-differentially private if for all $\mathbf{x}\sim\mathbf{x}'\in X^n$ and for all events $S\subseteq T$, $$\Pr[M(\mathbf{x})\in S] \leq e^\epsilon \Pr[M(\mathbf{x}')\in S] + \delta,$$ where the probability is taken over the randomness of the mechanism $M$.
\end{definition} \begin{lemma}[Basic and Parallel Composition~\cite{dwork2006our, mcsherryPINQ}] Let $M$ be $(\epsilon,\delta)$-differentially private and $M'$ be $(\epsilon',\delta')$-differentially private. The mechanism $M' \circ M:\mathbf{x} \mapsto M'(M(\mathbf{x}),\mathbf{x})$ is $(\epsilon + \epsilon', \delta+\delta')$-differentially private. Let $(\mathbf{x}_1,\dots,\mathbf{x}_\ell)$ be a partition of $\mathbf{x}$ into disjoint datasets. The mechanism $M^\ell:(\mathbf{x}_1,\dots,\mathbf{x}_\ell) \mapsto (M(\mathbf{x}_1),\dots,M(\mathbf{x}_\ell))$ is $(\epsilon,\delta)$-differentially private. \end{lemma} \begin{theorem}[Exponential Mechanism~\cite{mcsherry2007mechanism}\label{thm:expmech}] For domain $X^n$ and outcome space $\mathcal{R}$, let $u:X^n \times \mathcal{R} \to \mathbb{R}$ be a utility function. The sensitivity of $u$ is $\Delta u = \max_{r\in\mathcal{R}} \max_{\mathbf{x}\sim\mathbf{x}'} |u(\mathbf{x},r) - u(\mathbf{x}',r)|$. For a dataset $\mathbf{x}$, let $\mathop{\rm{opt}}\nolimits_u(\mathbf{x}) = \max_{r\in\mathcal{R}} u(\mathbf{x},r)$ and let $\mathcal{R}_{\mathop{\rm{opt}}\nolimits} = \{r\in \mathcal{R} : u(\mathbf{x},r) = \mathop{\rm{opt}}\nolimits_u(\mathbf{x})\}$. For any $\varepsilon > 0$, there exists an $(\varepsilon,0)$-differentially private mechanism $M_\mathsf{Exp}^{\varepsilon}$ that takes as input a dataset $\mathbf{x}\in X^n$, a utility function $u$, and an outcome space $\mathcal{R}$, and outputs an element of $\mathcal{R}$, such that for all $\mathbf{x}$ and all $t>0$: $$\Pr\left[ u\bigl(\mathbf{x}, M_\mathsf{Exp}^{\varepsilon}(\mathbf{x},u,\mathcal{R})\bigr) \le \mathop{\rm{opt}}\nolimits_u(\mathbf{x}) - \frac{ 2\Delta u}{\varepsilon} \left( \ln \left( \frac{|\mathcal{R}|}{|\mathcal{R}_{\mathop{\rm{opt}}\nolimits}|} \right) +t \right) \right] \le e^{-t}.$$ \end{theorem} Our analysis of how PSO security relates to differential privacy is through a connection of both concepts to statistical generalization. For differential privacy, this connection was established in~\cite{DFHPRR15, BNSSSU16}. We will also use a variant of the latter result from~\cite{NS2015}:\footnote{The proof of Equation~\ref{eq:generalization1} of Lemma~\ref{lemma:generalization} is identical to that of Lemma~3.3 in~\cite{NS2015}, skipping the last inequality in the proof. The proof of Equation~\ref{eq:generalization2} is analogous.} \begin{lemma}[Generalization lemma] \label{lemma:generalization} Let $\mathsf{A}:(X^n)^\ell \to 2^X \times [\ell]$ be an $(\epsilon, \delta)$-differentially private algorithm that operates on $\ell$ sub-datasets and outputs a predicate $p:X\to\{0,1\}$ and an index $i \in [\ell]$. Let $\vec{\mathbf{x}} = (\mathbf{x}_1,\dots,\mathbf{x}_\ell)$ where every $\mathbf{x}_i\sim D^n$ is a dataset containing i.i.d. elements from $D$, and let $(p,i) \gets \mathsf{A}(\vec{\mathbf{x}})$.
Then \begin{eqnarray} \operatorname*{\mathbb{E}}_{\vec{\mathbf{x}} \sim (D^n)^\ell} \left[\operatorname*{\mathbb{E}}_{(p,i)\gets\mathsf{A}(\vec{\mathbf{x}})}\left[p(\mathbf{x}_i)\right]\right] & \le & e^\epsilon\cdot\operatorname*{\mathbb{E}}_{\vec{\mathbf{x}} \sim (D^n)^\ell} \left[\operatorname*{\mathbb{E}}_{(p,i)\gets\mathsf{A}(\vec{\mathbf{x}})}\left[\mathsf{weight}_{D}(p)\right]\right] + \ell\delta \label{eq:generalization1} \\ \operatorname*{\mathbb{E}}_{\vec{\mathbf{x}} \sim (D^n)^\ell} \left[\operatorname*{\mathbb{E}}_{(p,i)\gets\mathsf{A}(\vec{\mathbf{x}})}\left[p(\mathbf{x}_i)\right]\right] & \ge & e^{-\epsilon}\left(\operatorname*{\mathbb{E}}_{\vec{\mathbf{x}} \sim (D^n)^\ell} \left[\operatorname*{\mathbb{E}}_{(p,i)\gets\mathsf{A}(\vec{\mathbf{x}})}\left[\mathsf{weight}_{D}(p)\right]\right] - \ell\delta\right). \label{eq:generalization2} \end{eqnarray} \end{lemma} \subsection{Differential privacy implies PSO security} \begin{theorem} \label{thm:generalization} For all $\varepsilon = O(1)$, $\delta = \mathrm{negl}(n)$, $w_\mathsf{low}\le 1/n$, and $w_\mathsf{high}(n) = \omega(\log n / n)$, if $M$ is $(\varepsilon,\delta)$-differentially private, then $M$ is $(\varepsilon',\delta',w_\mathsf{low},w_\mathsf{high})$-PSO secure for $$ \varepsilon' = \varepsilon + (n-1)\ln\left(\frac{1}{1-w_\mathsf{low}}\right) \quad\mbox{and}\quad \delta' = \mathrm{negl}(n). $$ \end{theorem} \noindent For $w_\mathsf{low} = o(1/n)$, $\varepsilon' = \varepsilon + o(1)$.\footnote{For all $w_\mathsf{low}\le 1/n$ and $n$, $\varepsilon' < \varepsilon + 1$ by the fact that $(1-w_\mathsf{low})^{n-1} \ge (1-1/n)^{n-1} > e^{-1}$.} \begin{proof} The theorem consists of Claims~\ref{claim:generalization-low} and~\ref{claim:generalization-heavy}, each using one part of the generalization lemma. That lemma holds even when the distribution $D$ is known, a fact used in both proofs. \begin{claim} \label{claim:generalization-low} If $M$ is $(\epsilon,\delta)$-d.p., then for all $\mathsf{A}$ and $w_\mathsf{low}\in [0,1/n]$ $$\mathsf{Succ}^{\adv,M}_{\le w_\mathsf{low}}(n) \le e^{\varepsilon'} \cdot \mathsf{base}(n,w_\mathsf{low}) + n\delta.$$ \end{claim} \begin{claim} \label{claim:generalization-heavy} For $\varepsilon = O(1)$ and $\delta = \mathrm{negl}(n)$, if $M$ is $(\epsilon,\delta)$-d.p., then for all $\mathsf{A}$ and all $w_\mathsf{high} = \omega(\log n /n)$, $$\alpha\triangleq\mathsf{Succ}^{\adv,M}_{\ge w_\mathsf{high}}(n) \le \mathrm{negl}(n).$$ \end{claim} \begin{proof}[Proof of Claim~\ref{claim:generalization-low}] Let $w^* = \sup\{w\le w_\mathsf{low} : w \mbox{ realizable under } D\}$. Given $p\gets\mathsf{A}(M(\mathbf{x}))$, $w_\mathsf{low}$, and $D$, define the predicate $p^*$: $$p^*(x) \equiv \begin{cases} p(x)& \mbox{if } \mathsf{weight}_{D}(p) \le w_\mathsf{low} \\ 0& \mbox{if } \mathsf{weight}_{D}(p) > w_\mathsf{low} \end{cases}$$ Observe that $\mathsf{weight}_{D}(p^*) \le w^*$. The predicate $p^*$ can be computed from $p$, $D$, and $w_\mathsf{low}$ without further access to $\mathbf{x}$. Because differential privacy is closed under post-processing, if $M$ is $(\varepsilon,\delta)$-differentially private, then the computation that produces $p^*$ is as well.
\begin{align*} \mathsf{Succ}_{\le w_\mathsf{low}}^{\adv,M}(n) &\le \Pr_{\mathbf{x},p}[p(\mathbf{x}) \ge 1/n \land \mathsf{weight}_{D}(p) \le w^*] \\ &\le n \cdot \operatorname*{\mathbb{E}}_{\mathbf{x},p} [p^*(\mathbf{x})] \\ &\le n\cdot(e^\epsilon w^* + \delta) \;\;\,\quad\quad \quad\quad\quad \mbox{by Lemma~\ref{lemma:generalization}, $\ell=1$} \\ &= e^\varepsilon \frac{\mathsf{base}(n,w^*)}{(1-w^*)^{n-1}} + n\delta \quad\quad\quad \mbox{by Claim~\ref{claim:baseline-exact}} \\ &\le e^\varepsilon \frac{\mathsf{base}(n,w^*)}{(1-w_\mathsf{low})^{n-1}} + n\delta \\ &= e^{\varepsilon'} \mathsf{base}(n,w_\mathsf{low}) + \delta' \;\quad \quad\quad\mbox{by Claim~\ref{claim:baseline-exact}} \tag*{\qedhere} \end{align*} \end{proof} \begin{proof}[Proof of Claim~\ref{claim:generalization-heavy}] Fix an adversary $\mathsf{A}$. We construct an algorithm $\mathsf{B}$ in an attempt to violate the Generalization Lemma for $\ell = O(\frac{\log n}{\alpha})$: \begin{adversary}[H] \SetKwInOut{Input}{Input} \SetKw{KwBy}{by} \Input{$D$, $\vec{\mathbf{x}}\sim (D^n)^\ell$} \BlankLine $I \gets \emptyset$, the empty set\; \For{$i\gets 1,\dots,\ell$}{ $p_i \gets \mathsf{A}(M(\mathbf{x}_i))$\; $u_i = -p_i(\mathbf{x}_i)$\; \uIf{$\mathsf{weight}_{D}(p_i) \ge w_\mathsf{high}$}{$I \gets I\cup\{i\}$} } Let $u:i\mapsto -p_i(\mathbf{x}_i)$ for $i\in I$\; $i^* \gets M_\mathsf{Exp}^{\varepsilon}(\vec{\mathbf{x}},I,u)$\; return $(i^*, p_{i^*})$ \end{adversary} $M$ is $(\varepsilon,\delta)$-differentially private, and $M_\mathsf{Exp}^\varepsilon$ (Theorem~\ref{thm:expmech}) is $(\varepsilon,0)$-differentially private. By basic and parallel composition, $\mathsf{B}$ is $(2\varepsilon,\delta)$-differentially private. Define the event $\mathsf{PSO}$ to be the event that $\adv$ successfully predicate singles out on one of the sub-datasets with a high-weight predicate: $\mathsf{PSO} = \{\exists i \in [\ell] : \iso{p_i}{\mathbf{x}_i} \land \mathsf{weight}_{D}(p_i) \ge w_\mathsf{high}\}$. By the choice of $\ell$, $\Pr[\mathsf{PSO}] = 1- \left(1 - \alpha\right)^{\ell} \ge 1-\frac{1}{n}$. Conditioned on $\mathsf{PSO}$, $\max_{i\in I} u(i) \ge -1/n$. $\Delta u = 1/n$, and $|I| \le \ell$. The Exponential Mechanism guarantees that $$\Pr_{\vec{\mathbf{x}}; (p_{i^*},i^*) \gets \mathsf{B}(\vec{\mathbf{x}})}\biggl[p_{i^*}(\mathbf{x}_{i^*}) \ge \frac{1}{n} + \frac{2}{n\varepsilon}(\ln\ell + t) \mid\mathsf{PSO}\biggr]\le e^{-t}.$$ Choosing $t = \ln n$ and using the fact that $p_{i^*}(\mathbf{x}_{i^*}) \le 1$, \begin{align*} \operatorname*{\mathbb{E}}_{\vec{\mathbf{x}}; (p_{i^*},i^*) \gets \mathsf{B}(\vec{\mathbf{x}})}[p_{i^*}(\mathbf{x}_{i^*})\mid\mathsf{PSO}] &\le \frac{1}{n} + \frac{2}{n\varepsilon}(\ln \ell + \ln n) + \frac{1}{n}.\notag \end{align*} \begin{align*} \operatorname*{\mathbb{E}}_{\vec{\mathbf{x}}; (p_{i^*},i^*) \gets \mathsf{B}(\vec{\mathbf{x}})}\left[p_{i^*}(\mathbf{x}_{i^*})\right] &= \Pr[\neg\mathsf{PSO}]\operatorname*{\mathbb{E}}[p_{i^*}(\mathbf{x}_{i^*})\mid\neg\mathsf{PSO}] + \Pr[\mathsf{PSO}] \operatorname*{\mathbb{E}}[p_{i^*}(\mathbf{x}_{i^*})\mid\mathsf{PSO}] \notag\\ &\le \Pr[\neg\mathsf{PSO}]+ \operatorname*{\mathbb{E}}[p_{i^*}(\mathbf{x}_{i^*})\mid\mathsf{PSO}] \notag\\ &< \frac{3}{n} + \frac{2}{n\varepsilon}(\ln \ell + \ln n).
\\ \operatorname*{\mathbb{E}}_{\vec{\mathbf{x}}; (p_{i^*},i^*) \gets \mathsf{B}(\vec{\mathbf{x}})}\left[\mathsf{weight}_{D}(p_{i^*})\right] &= \Pr[\neg\mathsf{PSO}]\operatorname*{\mathbb{E}}[\mathsf{weight}_{D}(p_{i^*})\mid\neg\mathsf{PSO}] + \Pr[\mathsf{PSO}] \operatorname*{\mathbb{E}}[\mathsf{weight}_{D}(p_{i^*})\mid\mathsf{PSO}] \notag \\ &\ge \Pr[\mathsf{PSO}]\cdot\operatorname*{\mathbb{E}}[\mathsf{weight}_{D}(p_{i^*})\mid\mathsf{PSO}] \notag\\ &\ge \left(1-\frac{1}{n}\right)\cdot w_\mathsf{high}\notag\\ &> \frac{3w_\mathsf{high}}{4} \end{align*} Applying Lemma~\ref{lemma:generalization} (Equation~\ref{eq:generalization2}) to the $(2\varepsilon,\delta)$-d.p.\ mechanism $\mathsf{B}$, and recalling that $t = \ln n$, $$\frac{3}{n} + \frac{2}{n\varepsilon}(\ln \ell + \ln n) \ge e^{-2\varepsilon}\left(\frac{3w_\mathsf{high}}{4} - \ell\delta\right).$$ If $\delta = \omega(\frac{\alpha}{n})$, then $\alpha = o(n\delta)$, and by the assumption that $\delta$ is negligible, $\alpha = \mathrm{negl}(n)$. Otherwise $\delta = O(\frac{\alpha}{n})= O(\frac{\log n}{n\ell})$ and $$\frac{2}{\varepsilon}(\ln \ell + \ln n) \ge \frac{3nw_\mathsf{high}- O(\log n)}{4e^{2\varepsilon}}.$$ For $\varepsilon = O(1)$ and $w_\mathsf{high} = \omega(\frac{\log n}{n})$, this forces $\ln \ell + \ln n =\omega(\log n)$. By the choice of $\ell = O(\frac{\log n}{\alpha})$, $\alpha = \mathrm{negl}(n)$. \end{proof} \end{proof} \section{Introduction} Data privacy laws---like HIPAA, FERPA, and Title~13 in the US, and the GDPR in the EU---govern the use of sensitive personal information.\footnote{HIPAA is the Health Insurance Portability and Accountability Act. FERPA is the Family Educational Rights and Privacy Act. Title~13 of the US Code mandates the role of the US Census. GDPR is the EU General Data Protection Regulation.} These laws delineate the boundaries of appropriate use of personal information and impose steep penalties upon rule breakers. To adhere to these laws, practitioners need to apply suitable controls and statistical disclosure limitation techniques. Many commonly used techniques, including $k$-anonymity, bucketing, rounding, pseudonymization, and swapping, offer privacy protections that are seemingly intuitive but only poorly understood. And while there is a vast literature of best practices, a litany of successful privacy attacks demonstrates that these techniques often fall short of the sort of privacy envisioned by legal standards.\footnote{See, e.g.,~\cite{BrokenPromises}.} A more disciplined approach is needed. However, there is a significant conceptual gap between legal and mathematical thinking around data privacy. Privacy regulations are grounded in legal concepts such as personally-identifiable information (PII), linkage, distinguishability, anonymization, risk, and inference. In contrast, much of the recent progress in data privacy technology is rooted in mathematical privacy models such as differential privacy~\cite{DMNS06} that offer a foundational treatment of privacy, with formal privacy guarantees. And while such techniques are being actively developed in academia, industry, and government, there is a basic disconnect between the legal and mathematical conceptions. The effect is uncertainty as to which technical offerings adequately match expectations expressed in legal standards~\cite{NW18}. \paragraph{Bridging between legal and technical concepts of privacy.} We aim to address this uncertainty by translating between the legal and the technical. To do so, we begin with a concept appearing in the law, then model some aspect of it mathematically.
With the mathematical formalism in hand, we can better understand the \emph{requirements} of the law, their \emph{implications}, and the \emph{techniques} that might satisfy them. This is part of a larger effort to bridge between legal and technical conceptions of privacy. An earlier work analyzed the privacy requirements of FERPA and modeled them in a game-based definition, as is common in cryptography. The definition was used to argue that the use of differentially private analyses suffices for satisfying a wide range of interpretations of FERPA~\cite{Bridging}. An important feature of FERPA that enabled this analysis is that FERPA and its accompanying documents contain a rather detailed description of a privacy attacker and the attacker's goals. In this work we focus on the concept of {\em singling out} from the GDPR. More specifically, we examine what it means for a data anonymization mechanism to ensure \emph{security against singling out} in a data release. Preventing singling out attacks in a dataset is a necessary (though perhaps not sufficient) precondition for a dataset to be considered effectively anonymized and thereby free from regulatory restrictions under the GDPR. Ultimately, our goal is to better understand a concept foundational to the GDPR, enabling a rigorous mathematical examination of whether certain classes of techniques (e.g., $k$-anonymity, differential privacy, pseudonymization) provide an important legal protection. We are not the first to study this issue. ``Opinion on Anonymisation Techniques''~\cite{wp-anonymisation} provides guidance about the use of various privacy technologies---including $k$-anonymity and differential privacy---as anonymization techniques. Its analysis is centered on asking whether each technology effectively mitigates three risks: ``singling out, linkability, and inference.'' For instance, \cite{wp-anonymisation} concludes that with $k$-anonymity singling out is no longer a risk whereas with differential privacy it ``may not'' be a risk. Though similar in purpose to our work, its technical analyses are informal and coarse. Having reconsidered these questions with mathematical rigor, we encourage revisiting the conclusions in~\cite{wp-anonymisation}. \subsection{Singling out in the GDPR} \label{sec:soGDPR} We begin with the text of the GDPR. It consists of {\em articles} detailing the obligations placed on processors of personal data as well as {\em recitals} containing explanatory remarks. Article~1 of the regulation delineates its scope as ``lay[ing] down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data.'' The GDPR places no restrictions on the processing of non-personal data, even if this data is the result of \emph{anonymizing} personal data.\footnote{Recital 26 emphasizes this point: ``The principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.''} Personal data is defined in Article~4 to mean ``any information relating to an identified or identifiable natural person; an identifiable natural person is one who can be identified, directly or indirectly.'' What it means for a person to be ``identified, directly or indirectly'' is not elaborated in the articles of the GDPR.
Recital~26 sheds a little more light: ``To determine whether a natural person is identifiable account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly.'' Singling out is one way to identify a person in data, and only data that does not allow singling out may be excepted from the regulation.\footnote{Interestingly, singling out is the only criterion for identifiability explicitly mentioned in the GDPR, the only occurrence of the term being the quoted passage from Recital~26.} For insight as to the regulation's meaning, we refer to two documents prepared by the Article~29 Data Protection Working Party, an advisory body set up by the EU Data Protection Directive.\footnote{Formally, Directive on the protection of individuals with regard to the processing of personal data and on the free movement of such data. 95/46/EC.} ``Opinion on the Concept of Personal Data''~\cite{wp-personal-data} elaborates on the meaning of ``identifiable, directly or indirectly.'' A person is identified ``within a group of persons [when] he or she is distinguished from all other members of the group.'' One way of distinguishing a person from a group is by specifying ``criteria which allows him to be recognized by narrowing down the group to which he belongs.'' If the group is narrowed down to an individual, that individual has been singled out.\footnotemark\ Looking ahead, we will call this \emph{isolating} an individual in the dataset and argue that not every instance of isolation should be considered a singling out attack. \footnotetext{The notion of ``singling out'' is not defined in the Opinion on the Concept of Personal Data~\cite{wp-personal-data}. It is used in~\cite{wp-personal-data} four times, each consistent with the above interpretation. Our interpretation coincides with and was initially inspired by that of \cite{Diffix}, defining ``singling out'' as occurring when an analyst correctly makes a statement of the form `There is exactly one user that has these attributes.'} We highlight three additional insights that inform our work. First, identification does not require a name or any other traditional identifier. For instance, singling out can be done with a ``small or large'' collection of seemingly innocuous traits (e.g., ``the man wearing a black suit''). Indeed, this is what is meant by ``indirectly identifiable.'' An example of singling out in practice cited by \cite{wp-anonymisation} showed that four locations sufficed to uniquely identify 95\% of people in a pseudonymized dataset of time-stamped locations. This is considered singling out even though no method of linking such location traces to individuals' names was identified. Second, identifiable data may come in many forms, including microdata, aggregate statistics, news articles, encrypted data, video footage, and server logs. What's important is not the form of the data; it's whether the data permits an individual to be singled out. We apply this same principle to the manner in which an individual is singled out within a dataset. Most examples focus on specifying a collection of attributes (e.g., four time-stamped locations) that match a single person in the data. The collection of attributes corresponds to a \emph{predicate}: a function that assigns to each person in the dataset a value $0$ or $1$ (interpreted as $\mathsf{false}$ or $\mathsf{true}$ respectively).
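To make this concrete, a predicate specified by a small collection of attributes might look as follows. This is our illustrative sketch only; the record fields are hypothetical and not drawn from the regulation or the cited opinions.
\begin{verbatim}
# A predicate p: X -> {0,1} specified by a collection of attributes.
def p(row):
    return int(row["suit"] == "black" and row["height_cm"] > 190)

# p isolates in a dataset if exactly one row satisfies it.
def isolates(p, dataset):
    return sum(p(row) for row in dataset) == 1
\end{verbatim}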
We interpret the regulation as considering data to be personal if an individual can be distinguished within a dataset using any predicate, not only those that correspond to specifying collections of attributes. Just as ``small or large'' collections of attributes may be used to single out, we allow these predicates to be simple or complex. Third, whether or not a collection of attributes identifies a person is context-dependent. ``A very common family name will not be sufficient to identify someone - i.e. to single someone out - from the whole of a country's population, while it is likely to achieve identification of a pupil in a classroom.'' Both the \emph{prevalence} of the name and the \emph{size} of the group are important in the example, and will be important in our formalization. \subsection{Our contributions} \subsubsection{Defining security against predicate singling out} In this work, we formalize and analyze \emph{predicate singling out}, a notion which is intended to partially model the GDPR's notion of singling out. Following the discussion above, we begin with the idea that singling out an individual from a group involves specifying a predicate that uniquely distinguishes the individual, which we call \emph{isolation}. Using this terminology, an intuitive interpretation of the GDPR's requirement is that to be considered secure against singling out, a function of the data must prevent isolation. Trying to make this idea formal, we will see that it requires some refinement. We restrict our attention to datasets $\mathbf{x} = (x_1,\dots,x_n)$ of size $n$, where each row $x_i$ is sampled according to some underlying \emph{probability distribution} $D$ over a universe $X$. The dataset $\mathbf{x}$ is assumed to contain personal data corresponding to individuals, with at most one row per individual. For example, $\mathbf{x}$ might consist of home listings, hospital records, internet browsing history, or any other personal information. A mechanism $M$ takes $\mathbf{x}$ as input and outputs some data release $M(\mathbf{x})$, be it a map of approximate addresses, aggregate statistics about disease, or pseudonymized internet histories. We call $M$ an \emph{anonymization mechanism} because it purportedly anonymizes the personal data $\mathbf{x}$. An \emph{adversary} $\adv$ attempts to output a \emph{predicate} $p:X\rightarrow\{0,1\}$ that \emph{isolates} a row in $\mathbf{x}$, i.e., there exists $i$ such that $p(x_i) = 1$ and $p(x_j) = 0$ for all $j\neq i$. We emphasize that it is rows in the original dataset $\mathbf{x}$ on which the predicate acts, not the output $\mathbf{y}$. In part, this is a byproduct of our desire to make no assumptions on the form of $M$'s output. While it might make sense to apply a predicate to pseudonymized microdata, it is far from clear what it would mean for a synthetic dataset or for aggregate statistics. Observe that this choice also rules out predicates $p$ that ``isolate'' rows by referring to their position in $\mathbf{x}$ (i.e., ``the seventh row''). $M$ \emph{prevents isolation} if no adversary $\adv$ isolates a row in $\mathbf{x}$, except with very small probability over the randomness of sampling $\mathbf{x} \gets D^n$, the mechanism $\mathbf{y} \gets M(\mathbf{x})$, and the adversary $\adv(\mathbf{y})$. Unfortunately, this is impossible to achieve by any mechanism $M$.
To wit, there is a \emph{trivial adversary}---one that does not look at $\mathbf{y}$, denoted $\mathsf{T}(\bot)$---that isolates a row with probability approximately $0.37$. The adversary simply outputs a predicate $p$ that matches a $1/n$ fraction of the distribution $D$. For example, for a dataset of $n=365$ people selected at random from the United States population, $\mathsf{T}(\bot)$ may simply output $p= \mbox{(born on March 15th)}$. This predicate will isolate a row with probability $${365 \choose 1}\cdot \frac{1}{365}\cdot\left(1-\frac{1}{365}\right)^{364} \approx 37\%.$$ Isolation is hence not necessarily indicative of a failure to protect against singling out, as $\mathsf{T}(\bot)$ would succeed with $\approx37\%$ probability (for any $n$) even if $M$ does not output anything at all. Furthermore, a trivial adversary need not know the distribution $D$ to isolate with probability $\approx 37\%$, as long as $D$ has sufficient min-entropy (Section~\ref{sec:bounding-base}). A trivial adversary can give us a \emph{baseline} against which to measure isolation success. But the baseline should not simply be a 37\% chance of success. Consider the earlier example of a dataset of 365 random Americans. What if an adversary output predicates like $p=(\mbox{born on March 15th} \wedge \mbox{vegan} \wedge \mbox{speaks Dutch} \wedge \mbox{concert pianist})$, and managed to isolate 10\% of the time? Though 10\% is much less than 37\%, the predicate is extremely specific and unlikely to isolate a person by chance. We formalize this intuition by considering the baseline risk of isolation as a function of the \emph{weight} of $p$, i.e., the chance that $p$ matches a random row sampled from the distribution $D$. The baseline for predicates of weight $1/n$ is 37\%, but the baseline for an extremely specific predicate may be much lower. The more specific the predicate, the closer the baseline gets to zero. Our primary focus in this paper is on the regime of predicate weights where the baseline is negligible, corresponding to predicates with negligible weight.\footnote{For completeness, we also consider in Section~\ref{sec:definitions} predicates of weight $\omega(\log n/n)$, where the baseline is also negligible.} We get: \smallskip \noindent {\bf Definition~\ref{def:security-against-singling-out}} (informal) An adversary \emph{predicate singles out} a row in $\mathbf{x}$ if it outputs a predicate that isolates a row with probability significantly higher than the baseline risk. A mechanism $M$ is \emph{secure against predicate singling out} (\emph{PSO secure}) if no adversary can use its output to predicate single out. \subsubsection{Analyzing security against predicate singling out} Having formulated security against singling out, our next goal is to understand the guarantee it offers, what mechanisms satisfy it, and how this concept relates to existing privacy concepts, including differential privacy and $k$-anonymity. Two desirable properties of a privacy concept are robustness to \emph{post-processing} and to \emph{composition}. The former requires that if a mechanism $M$ is deemed secure, then anything that can be computed using the outcome of $M$ should also be deemed secure. Hence, the outcome may be reused without creating additional privacy risk. For instance, if a PSO-secure mechanism $M$ outputs microdata, then any statistics that can be computed from that microdata should also be PSO-secure. It follows directly from the definition of PSO security that it is robust to post-processing.
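As an aside, the $\approx 37\%$ isolation rate of the trivial adversary described above is easy to reproduce numerically. The following minimal sketch is ours and not part of the formal development; the seed and trial count are arbitrary, and any predicate of weight $1/n$ behaves the same way.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, trials = 365, 100_000
# Each row matches a weight-1/n predicate independently,
# e.g. "born on March 15th" for uniformly random birthdays.
matches = rng.random((trials, n)) < 1.0 / n
print((matches.sum(axis=1) == 1).mean())  # ~0.368, i.e. about 1/e
\end{verbatim}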
We would like the privacy risk of multiple data releases to be not significantly greater than the accumulated risks of the individual releases. In this case, we say that the privacy concept composes. We prove that PSO security does not compose, and give two examples of this failure. First, we show that releasing aggregate statistics is PSO-secure but fails to compose super-logarithmically many times. A collection of $\omega(\log(n))$ counts may allow an adversary to isolate a row with probability arbitrarily close to one using a predicate with negligible weight (and negligible baseline). Second, we construct a less natural pair of mechanisms that individually are PSO-secure but together allow the recovery of a row in the dataset. The first mechanism extracts and outputs a secret encryption key from one part of $\mathbf{x}$. The second extracts the same key and uses it to encrypt the last row $x_n\in\mathbf{x}$, outputting the corresponding ciphertext. The mechanisms individually prevent predicate singling out, but together completely fail. Next, we ask whether existing privacy concepts guarantee PSO security. We already know that differential privacy is not necessary for PSO security, as exact counts are PSO-secure but not differentially private. However, differential privacy does provide PSO security. The proof relies on the connection between differential privacy and statistical generalization guarantees~\cite{DFHPRR15, BNSSSU16}. We show that predicate singling out implies a form of overfitting to the underlying dataset. If $M$ is differentially private it prevents this form of overfitting, and hence protects against predicate singling out. Finally, we examine $k$-anonymity~\cite{SamaratiS98} and show that it does not prevent predicate singling out attacks. Instead, it may enable an adversary to predicate single out with probability approximately 37\% using extremely low-weight predicates for which the baseline risk is negligible. Briefly, the attack begins by observing that typical $k$-anonymizers ``almost'' predicate single out. They reveal predicates---usually, collections of attributes---that are satisfied by only $k$ rows in the dataset. In an effort to make the $k$-anonymized data as useful as possible, these predicates are as descriptive and specific as possible. To predicate single out a row from the dataset of size $n$ using the $k$-anonymous output, it roughly suffices to predicate single out a row from any grouping of $k$ rows in the output. \subsection{Implication for the GDPR} Precisely formalizing predicate singling out attacks allows us to examine with mathematical rigor the extent to which specific algorithms and paradigms protect against them. In particular, we show that $k$-anonymity fails to prevent predicate singling out, but that differential privacy prevents predicate singling out. Our conclusions contrast with those of the Article~29 Working Party: they conclude that $k$-anonymity eliminates the risk of singling out while differential privacy ``may not''~\cite{wp-anonymisation}. These disagreements may raise a doubt about whether our modeling indeed matches the regulators' intent. Our goal in interpreting the text of the GDPR and related documents, and in defining predicate singling out, is to provide a precise mathematical formalism to capture some aspect of the concept of personal data (as elucidated in the regulation and in \cite{wp-personal-data}) and the associated concept of anonymization.
We want to render mathematically \emph{falsifiable} a legal claim that a given algorithmic technique anonymizes personal data by providing a \emph{necessary} condition for such anonymizers. We argue that predicate singling out succeeds in this role. A number of modeling choices limit the scope of our definition, but this narrowing poses no issue for a necessary condition. Specifically, (i) we only consider randomly sampled datasets; (ii) we only consider an attacker who has no additional knowledge of the dataset besides the output of a mechanism; (iii) we do not require that isolation be impossible, instead comparing to a baseline risk of isolation. A technique that purports to anonymize all personal data against all attackers must at least do so against randomly sampled data and against limited attackers. And unless the idea of anonymization mechanisms is completely vacuous, one must compare against a baseline risk. We must be careful not when narrowing our definition's scope, but when expanding it. The most significant expansion\footnote{We discuss additional subtleties in Section~\ref{sec:knowledge-discussion}.} is our choice to parameterize the baseline risk by the weight of a predicate. But this is a minimal expansion, done only to prevent a severe weakness. Not doing so would mean that a mechanism that published the first row of the dataset 20\% of the time could be said to ``prevent singling out.'' Any meaningful instantiation of ``preventing singling out'' should rule out such mechanisms. Ours is a natural way of doing so. This does not mean that our modeling is the only one possible. The starting point for the analysis is not a mathematical formalism but rather a (somewhat incomplete) description in natural language. It is certainly plausible that alternative mathematical formalizations of singling out could be extracted from the very same text. We look forward to seeing such formalizations emerge. Finally, one may still claim that the assessments made in~\cite{wp-anonymisation} should be taken as ground truth and that the Article~29 WP meant for any interpretation of singling out to be consistent with these assessments. That is, the protection provided by $k$-anonymity implicitly defines the meaning of singling out (partially or in full). We believe, however, that such a position would be hard to justify. To the best of our knowledge, the assessments made by the Article~29 WP were not substantiated by a mathematical analysis. Furthermore, we caution against defining privacy implicitly as the guarantee provided by particular techniques; this approach is doomed to fail. In particular, defining privacy as the result of applying practices such as suppression of directly identifying information has proved problematic, a choice that unfortunately pervades current legal privacy standards. \paragraph{Is predicate singling out a good privacy concept?} A predicate singling out attack can be a stepping stone towards a greater harm, even in settings where isolation alone may not be harmful. It may enable linking a person's record in the dataset to some external source of information~\cite{narayanan2008robust}, or targeting of individuals for differential treatment. As such, it is meaningful as a mode of privacy failure, both in the GDPR context and otherwise. And, while we believe that PSO security is relevant for the GDPR as a necessary property of techniques that anonymize personal data, we do not consider it a sufficiently protective privacy concept by itself.
First, singling out is a specific mode of privacy failure. It is not clear that ruling out this failure mode is sufficient for privacy (in particular, two other failure modes are mentioned in~\cite{wp-anonymisation}: linkage and inference). Second, our definition considers a setting where the underlying data is chosen i.i.d.\ from some (unknown) underlying distribution, an assumption that is not true in many real-life contexts. PSO security may not prevent singling out in such contexts. Lastly, we believe that self-composition is an essential property of any reasonable privacy definition. However, as we show in Section~\ref{sec:failureToCompose}, security against singling out does not self-compose. \section{Does $k$-anonymity provide PSO security?} \label{sec:k-anon} $k$-anonymity~\cite{SamaratiS98, sweeney2002k} is a strategy intended to help a data holder ``release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful''~\cite{sweeney2002k}. It is achieved by making each individual in a data release indistinguishable from at least $k-1$ individuals. Typically, a $k$-anonymized dataset is produced by subjecting it to a sequence of generalization and suppression operations. The Article~29 Working Party Opinion on Anonymisation Techniques concludes that $k$-anonymity prevents singling out~\cite{wp-anonymisation}. In this section, we analyze the extent to which $k$-anonymity provides PSO security. We show that a $k$-anonymized dataset typically provides an attacker with information sufficient to predicate single out with constant probability. This result challenges the determination of the Article~29 Working Party.\footnote{Our results hold equally for $\ell$-diversity~\cite{MKGV07} and $t$-closeness~\cite{LiLV07}, which the Article~29 Working Party also concludes prevent singling out.} \subsection{Preliminaries} Let $(A_1, \dots,A_m)$ be \emph{attribute domains}. A dataset $\mathbf{x}=(x_1,\ldots,x_n)$ is a collection of rows $x_i =(a_{i,1},\ldots,a_{i,m})$ where $a_{i,j}\in A_j$. For subsets $\widehat{a}_{i,j} \subseteq A_j$, we view $y_i = (\widehat{a}_{i,1},\dots,\widehat{a}_{i,m})$ as a set in the natural way, writing $x_i \in y_i$ if $\forall j\in [m]$, $a_{i,j} \in \widehat{a}_{i,j}$. We say that a dataset $\mathbf{y} = (y_1,\dots,y_n)$ is derived from $\mathbf{x}$ by generalization and suppression if $\forall i\in [n]$, $x_i \in y_i$. For example, if $(A_1, A_2,A_3)$ correspond to ``5 digit ZIP Code,'' ``Gender,'' and ``Year of Birth,'' then it may be that $x_i=(91015, F, 1972)$ and $y_i = (91010\text{--}91019, F, 1970\text{--}1975)$. $k$-anonymity aims to capture a sort of anonymity of a crowd: a data release $\mathbf{y}$ is $k$-anonymous if any individual row in the release cannot be distinguished from $k-1$ other individuals. Let $\mathsf{count}(\mathbf{y},y) \triangleq |\{i\in[n]: y_i = y\}|$ be the number of rows in $\mathbf{y}$ which agree with $y$.\footnotemark \footnotetext{Often $\mathsf{count}$ is parameterized by a subset $Q$ of the attribute domains called a \emph{quasi-identifier}. This parameterization does not affect our analysis, and we omit it for simplicity.} \begin{definition}[$k$-Anonymity (rephrased from~\cite{sweeney2002k})] For $k\ge2$, a dataset $\mathbf{y}$ is \emph{$k$-anonymous} if $\mathsf{count}(\mathbf{y},y_i) \ge k$ for all $i\in[n]$.
An algorithm is called a \emph{$k$-anonymizer} if on an input dataset $\mathbf{x}$ its output is a $k$-anonymous $\mathbf{y}$ which is derived from $\mathbf{x}$ by generalization and suppression. \end{definition} Our goal is to relate $k$-anonymity and PSO security. It will be convenient to define a generalization of $k$-anonymity---\emph{predicate $k$-anonymity}---which captures the core property of $k$-anonymity but relaxes its strict syntactic requirements. For a predicate $\phi:X \to \{0,1\}$ and dataset $\mathbf{x}$, let $\mathbf{x}_\phi = \{x \in \mathbf{x} : \phi(x) = 1\}$. We assume that $|\mathbf{x}_\phi|$ is computable given the output of the $k$-anonymizer, but this does not qualitatively affect the results in this section. \begin{definition}[\label{def:predicate_anonymity}Predicate $k$-anonymity] Let $\mathsf{Anon}$ be an algorithm mapping a dataset $\mathbf{x} \in X^n$ to a collection of predicates $\Phi = \{\phi:X\to\{0,1\}\}$. For $k\ge 2$ we call $\mathsf{Anon}$ \emph{predicate $k$-anonymous} if for all $\phi \in \Phi$, $|\mathbf{x}_\phi| \ge k$. \end{definition} $k$-anonymity is a special case of predicate $k$-anonymity that considers only specific collections of predicates $\Phi$ induced by a dataset $\mathbf{y}$: $$\Phi = \{ \phi_y(x) = 1 \iff x \in y\}_{y\in\mathbf{y}}.\footnotemark$$ \footnotetext{See also the definition of $k$-anonymity for face images \cite[Definition~2.10]{k-faces}. Using the notation of that paper, it is also a special case of predicate $k$-anonymity, with $\Phi = \{\phi_{\Gamma_d}(\Gamma) = 1 \iff f(\Gamma) = \Gamma_d \}_{\Gamma_d \in H_d}$.} \begin{definition} A predicate $k$-anonymizer is \emph{$k_{\mathsf{max}}$-bounded} if $\forall \mathbf{x}, \exists \phi \in \Phi$ such that $|\mathbf{x}_\phi| \le k_{\mathsf{max}}$. \end{definition} \subsection{Illustrative examples} Before presenting a formal technical analysis, we provide two illustrative examples of very simple $k$-anonymizers that fail to provide security against predicate singling out. For both examples, let $D = U_n$ be the uniform distribution over $\{0,1\}^n$. The dataset $\mathbf{x}$ consists of $n$ records sampled i.i.d.\ from $D$. \paragraph{Bit suppression.} This $k$-anonymizer processes groups of $k$ rows in index order and suppresses all bit locations where the $k$ rows disagree. Namely, for each group $g$ of $k$ rows $(x_{gk+1},\dots,x_{gk+k})$ it outputs $k$ copies of the string $y_g \in \{0,1,\star\}^n$ where $y_g[j] = b\in\{0,1\}$ if $x_{gk+1}[j]=\cdots=x_{gk+k}[j]=b$ (i.e., all the $k$ rows in the group have $b$ as their $j$th bit) and $y_g[j] = \star$ otherwise. In the terminology of Definition~\ref{def:predicate_anonymity}, the predicate $\phi_g(x)$ evaluates to $1$ if $y_g[j] \in \{x[j], \star\}$ for all $j\in[n]$ and evaluates to $0$ otherwise. Namely, $\phi_g(x)$ checks whether $x$ agrees with $y_g$ (and hence with all of $x_{gk+1},\dots,x_{gk+k}$) on all non-suppressed bits. In expectation, $n/2^{k}$ positions of $y_g$ are not suppressed. For large enough $n$, with high probability over the choice of $\mathbf{x}$, at least $\frac{n}{2\cdot 2^k}$ positions in $y_g$ are not suppressed. In this case, $\mathsf{weight}_{D}(\phi_g) \le 2^{-\frac{n}{2\cdot 2^k}}$, which is a negligible function of $n$ for any constant $k$. We now show how $\phi_g$ can be used adversarially. In expectation $n(1-2^{-k})\geq 3n/4$ positions of $y_g$ are suppressed. For large enough $n$, with high probability over the choice of $\mathbf{x}$, at least $n/2$ of the positions in $y_g$ are suppressed.
Denote these positions $i_1,\ldots,i_{n/2}$. Define the predicate $p_k(x)$ that evaluates to $1$ if the binary number resulting from concatenating $x[i_1],x[i_2],\ldots,x[i_{n/2}]$ is smaller than $2^{n/2}/k$ and to $0$ otherwise. Note that $\mathsf{weight}_{D}(p_k)\approx 1/k$ and hence $p_k$ isolates within the group $g$ with probability $\approx 1/e\approx 0.37$, as was the case with the trivial adversary described at the beginning of Section~\ref{sec:definitions}. An attacker observing $\phi_g$ can now define a predicate $p(x)=\phi_g(x)\wedge p_k(x)$. By the analysis above, $\mathsf{weight}(p)$ is negligible (as it is bounded by $\mathsf{weight}(\phi_g)$) and $p(x)$ isolates a row in $\mathbf{x}$ with probability $\approx 0.37$. Hence, the $k$-anonymizer of this example fails to protect against singling out. Theorem~\ref{thm:k-anon-attack} below captures the intuition from our bit suppression example and generalizes it, hence demonstrating that $k$-anonymity would not typically protect against predicate singling out. We note that Theorem~\ref{thm:k-anon-attack} does not capture all possible ways in which the outcome of a $k$-anonymizer can be exploited, in particular, the following simple example. \paragraph{Interval Buckets.} This $k$-anonymizer sorts the rows in lexicographic order and outputs the intervals $[a_g,b_g] = [x_{gk+1},x_{gk+k}]$ (where the indices are after sorting and renaming). The corresponding predicate is $\phi_{a_g,b_g}(x) = 1$ if $x\in [a_g,b_g]$. Observe that either of the endpoints $a_g$ or $b_g$ reveals a row in $\mathbf{x}$, and hence an adversary can predicate single out with probability 1 using predicates of weight $2^{-n}$. \subsection{$k$-Anonymity enables predicate singling out} \begin{theorem} \label{thm:k-anon-attack} For any $k_{\mathsf{max}} \ge 2$, there exists an (efficient, uniform, randomized) algorithm $\mathsf{A}$ such that for all $D$ with min-entropy $\lambda \ge m + 2\log(1/\alpha^2) + k_{\mathsf{max}}\log n$ (for $m \in \N$ and $\alpha > 0$), all predicate anonymizers $\mathsf{Anon}$ that are $k_{\mathsf{max}}$-bounded, and all $w_\mathsf{low} > 0$: $$ \mathsf{Succ}_{\le w_\mathsf{low}}^{\adv,\mathsf{Anon}}(n) \ge \eta\cdot (e^{-1} - 2^{-m}n - k\alpha^2), $$ where $$\eta \triangleq \Pr_{\substack{\mathbf{x}\gets D^n\\\phi\gets\mathsf{Anon}(\mathbf{x})}}\left[\mathsf{weight}_{D}(\phi) \le w_\mathsf{low}(n)\right].$$ \end{theorem} For distributions with sufficient min-entropy ($m = \omega(\log n)$, $\alpha = \mathrm{negl}(n)$), the adversary's success probability is approximately $\eta/e \approx \eta \cdot B(k,1/k)$. To predicate single out, the adversary must output a predicate that both isolates $\mathbf{x}$ and has low weight. The theorem shows that these two requirements essentially decompose: $\eta$ is the probability that the predicate $k$-anonymizer outputs a low-weight predicate and $B(k,1/k)$ is the probability that a trivial adversary predicate singles out a dataset of size $k$. Algorithms for $k$-anonymity generally try to preserve as much information in the dataset as possible. We expect such algorithms to typically yield low-weight predicates and correspondingly high values of $\eta$. \begin{proof}[Proof of Theorem~\ref{thm:k-anon-attack}] On input $\Phi\gets \mathsf{Anon}(\mathbf{x})$, $\mathsf{A}$ selects $\phi \in \Phi$ such that $2\le |\mathbf{x}_\phi| \le k_{\mathsf{max}}$. $\mathsf{A}$ will construct some predicate $q$ and output the conjunction $p\triangleq \phi \land q$.
Noting that $\mathsf{weight}_{D}(p) \le \mathsf{weight}_{D}(\phi)$, and that $\iso{q}{\mathbf{x}_\phi} \implies \iso{p}{\mathbf{x}},$ \begin{align} \mathsf{Succ}_{\le w_\mathsf{low}}^{\adv,\mathsf{Anon}}(n) &\ge \Pr\left[\iso{q}{\mathbf{x}_\phi} \land \mathsf{weight}_{D}(\phi) \le w_\mathsf{low}\right] \notag \\ &= \eta \cdot \Pr\left[\iso{q}{\mathbf{x}_\phi} \;\big\vert\; \mathsf{weight}_{D}(\phi)\le w_\mathsf{low}\right] \label{eqn:k-anon-proof-cond-1} \end{align} \begin{claim} \label{eqn:k-anon-proof-cond} There exists $\mathsf{A}$ such that for all $k_\phi\ge 2$ \begin{equation*} \Pr_{\substack{\mathbf{x}\gets D^n\\\phi\gets\mathsf{Anon}(\mathbf{x})\\p\gets\mathsf{A}(\phi,w)}}\biggl[\iso{q}{\mathbf{x}_\phi} \;\Big\vert\; |\mathbf{x}_\phi| = k_\phi \land \mathsf{weight}_{D}(\phi)\le w_\mathsf{low}\biggr] \ge B(k_\phi,1/k_\phi) - 2^{-m}n - k\alpha^2. \end{equation*} \end{claim} \noindent The claim is discussed below and proved in Appendix~\ref{app:k-anon}. Using the claim we get: \begin{align} \Pr\left[\iso{q}{\mathbf{x}_\phi} \;\big\vert\; \mathsf{weight}_{D}(\phi)\le w_\mathsf{low}\right] &= \sum_{k_\phi = k}^{k_{\mathsf{max}}} \Pr\left[|\mathbf{x}_\phi| = k_\phi\right] \cdot \Pr\left[\iso{q}{\mathbf{x}_\phi} \;\big\vert\; |\mathbf{x}_\phi| = k_\phi \land \mathsf{weight}_{D}(\phi)\le w_\mathsf{low}\right] \notag \\ &\ge \sum_{k_\phi} \Pr\left[|\mathbf{x}_\phi| = k_\phi\right] \cdot \biggl( B(k_\phi,1/k_\phi) - 2^{-m}n - k\alpha^2 \biggr) \notag \\ &= \operatorname*{\mathbb{E}}_{k_\phi} \left[B(k_\phi,1/k_\phi)\right] - 2^{-m}n - k\alpha^2 \notag \\ &\ge e^{-1} - 2^{-m}n - k\alpha^2 \label{eqn:k-anon-proof-449} \end{align} The last inequality follows from the fact that for all $k_\phi \ge 2$, $(1-1/k_\phi)^{k_\phi-1} > e^{-1}$. Combining~\eqref{eqn:k-anon-proof-449} with~\eqref{eqn:k-anon-proof-cond-1} completes the proof. \end{proof} The proof of Claim~\ref{eqn:k-anon-proof-cond} uses the Leftover Hash Lemma in a manner closely resembling Lemma~\ref{lemma:lhl-1}, but with an additional challenge. The earlier application of the LHL proved that a random hash function selected appropriately isolates a row with probability close to $e^{-1}$. It relied on the fact that each row was sampled i.i.d.\ from a distribution with sufficient min-entropy. In contrast, the rows in $\mathbf{x}_\phi$ are a function of $\mathsf{Anon}$ and the whole dataset $\mathbf{x}$. They are not independently distributed, and even their marginal distributions may be different from $D$. We can use the LHL to prove the claim if we can show that the rows in $\mathbf{x}_\phi$ still have sufficient (conditional) min-entropy. The following lemma (proved in Appendix~\ref{app:k-anon}) does exactly that. \begin{lemma} \label{lemma:k-of-n-min-entropy} Let $Y_1,\dots,Y_n$ be i.i.d.\ random variables and let $F$ be a (randomized) function mapping $(Y_1,\dots,Y_n)$ to $(j,I)$ where $j\in[n]$ and $I\subseteq [n]\setminus\{j\}$ of size $|I| = k-1$. Let $Y_I = \{Y_i\}_{i\in I}$. Then $$\widetilde{H}_\infty(Y_j \;\big\vert\; Y_I) \ge {H}_\infty(Y_j) - (k-1)\log n \ge {H}_\infty(Y_1) - k\log n.$$ \end{lemma} \section*{Acknowledgment} The authors thank Uri Stemmer for discussions of the generalization properties of differential privacy and Adam Sealfon for suggesting Proposition~\ref{lemma:mech-small-codomain}. Work supported by the U.S.\ Census Bureau under cooperative agreement no.~CB16ADR0160001.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S.\ Census Bureau. Aloni Cohen was additionally supported by NSF award CNS-1413920, the 2018 Facebook Fellowship, and MIT's RSA Professorship and Fintech Initiative. \bibliographystyle{alpha} \section{Preliminaries} \label{sec:preliminaries} \paragraph{Notation.} A dataset $\mathbf{x}=(x_1,\ldots,x_n)$ consists of $n$ elements taken from the data domain $X = \{0,1\}^d$. We consider datasets where each entry $x_i$ is independently sampled from a fixed probability distribution $D\in\Delta(X)$ over $X$. We denote by $U_d$ a uniform random variable over $\{0,1\}^d$. For the purposes of asymptotic analyses, we will use the number of rows $n\in \N$ in a dataset as the complexity parameter. Furthermore, the parameter $d = d(n)$ is a function of $n$, but we typically omit the dependence.\footnotemark~ A function $f(n)$ is negligible, denoted $f(n) = \mathrm{negl}(n)$, if $f(n) = n^{-\omega(1)}$. \footnotetext{More formally, we can consider an ensemble of data domains $\mathcal{X}= \{X_n = \{0,1\}^{d(n)}\}_{n\in \N}$ and an ensemble of distributions $\mathcal{D} = \{D_n\}_{n\in\N}$, where $D_n\in\Delta(X_n)$.} A mechanism $M$ is a Turing Machine that takes as input a dataset $\mathbf{x} \in X^n$. A mechanism $M$ may be randomized and/or interactive. A predicate is a binary-valued function $p:X\rightarrow\{0,1\}$. We define $\mathsf{weight}_D(p) \triangleq \operatorname*{\mathbb{E}}_{x \sim D}[p(x)]$. For a dataset $\mathbf{x}\in X^n$ we define $p(\mathbf{x}) \triangleq \frac{1}{n}\sum_{i=1}^n p(x_i)$. We occasionally use indicator notation $\indic{}()$ to define a predicate: for example, $p(x) = \indic{x\in A}$ equals 1 if $x\in A$ and 0 otherwise. \subsection{Preliminaries from randomness extraction} \begin{definition}[Min-entropy, average min-entropy \cite{fuzzy}] Let $Y_1,Y_2$ be two random variables. The \emph{min-entropy} of $Y_1$ is $${H}_\infty(Y_1) = -\log\left(\max_{y} \Pr[Y_1 = y] \right).$$ The \emph{average min-entropy}\footnotemark of $Y_1$ given $Y_2$ is $$\widetilde{H}_\infty(Y_1 \mid Y_2) = -\log\left(\operatorname*{\mathbb{E}}_{Y_2} \biggl[\max_{y} \Pr[Y_1 = y \mid Y_2]\biggr]\right).$$ \end{definition} \footnotetext{In \cite{smith2009foundations} this same quantity is called conditional min-entropy and denoted $H_\infty$.} \begin{fact} For all $Y_1$ and $Y_2$: ${H}_\infty(Y_1) \ge \widetilde{H}_\infty(Y_1 \mid Y_2) \ge {H}_\infty(Y_1) - \log (|\mathsf{supp}(Y_2)|)$, where $\mathsf{supp}(Y_2)$ is the support of $Y_2$. \end{fact} \begin{definition}[2-universal hash functions] $H = \{h:\{0,1\}^d \to \{0,1\}^m\}$ is a 2-universal family of hash functions if $\Pr_{h \sim H} [h(x) = h(x')] = 2^{-m}$ for all $x\neq x'\in \{0,1\}^d$, where the probability is over the selection of $h$ uniformly at random from $H$. \end{definition} As an example, for $a,b\in\{0,1\}^d$ let $h_{a,b}(x)$ be the function that returns the first $m$ bits of $ax + b$, where the arithmetic is in the field $GF(2^d)$. Then $H=\{h_{a,b}:a,b\in\{0,1\}^d\}$ is 2-universal. \begin{definition}[Statistical distance] The statistical distance of random variables $Y_1,Y_2$ with support $\{0,1\}^d$ is $\mathsf{SD}(Y_1,Y_2) = \frac{1}{2} \sum_{y\in \{0,1\}^d} \bigl|\Pr[Y_1=y] - \Pr[Y_2=y]\bigr|$. If $\mathsf{SD}(Y_1,Y_2) < \alpha$ we say that $Y_1$ and $Y_2$ are $\alpha$-close.
\end{definition} \begin{lemma}[Generalized Leftover Hash Lemma~\cite{fuzzy}] \label{LHL} Let $\lambda\in \N$, $\alpha > 0$, $Y_1$ a random variable over $\{0,1\}^d$, and $Y_2$ a random variable. Let $H = \{h:\{0,1\}^d \to \{0,1\}^m\}$ be a 2-universal family of hash functions where $m \le \lambda-2\log(1/\alpha^2)+2$. For every $Y_1$ and $Y_2$ with $\widetilde{H}_\infty(Y_1 \mid Y_2)\ge\lambda$, $(h,h(Y_1),Y_2)$ is $\alpha^2$-close to $(h, U_m, Y_2)$ in total variation distance, where $h\in_R H$ is a uniformly random function from the hash family and $U_m$ is uniform over $\{0,1\}^m$. \end{lemma} \begin{corollary} \label{cor:sd-fact} For $Y$ with ${H}_\infty(Y)\ge\lambda$ and $H$ as in Lemma~\ref{LHL}, $h(Y)$ is ${\alpha}$-close to uniform with probability at least $1-\alpha$ over $h\in_R H$. \end{corollary} \begin{proof} Let $H_{> \alpha} = \{h\in H : h(Y)~\mbox{is not}~\alpha\mbox{-close to uniform}\}$. We have $\alpha^2 \geq \mathsf{SD}\bigl((h,h(Y)), (h,U_m)\bigr) \ge \Pr[h \in H_{> \alpha}] \cdot \alpha$. Hence $\Pr[h \in H_{> \alpha}] \le \alpha$. \end{proof}
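As a concrete illustration of the example family $h_{a,b}$ defined above, the following sketch (ours; the choices $d=8$, $m=3$, the reduction polynomial, and the two test inputs are arbitrary, and we read ``first $m$ bits'' as the most significant bits) checks the 2-universality condition $\Pr_h[h(x)=h(x')]=2^{-m}$ by enumerating all $(a,b)$ pairs.
\begin{verbatim}
IRRED = 0x11B  # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    # Carry-less multiplication modulo IRRED, i.e. in GF(2^8).
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= IRRED
        b >>= 1
    return r

d, m = 8, 3
x, xp = 0x53, 0xCA          # two distinct inputs
collisions = 0
for a in range(2 ** d):
    for b in range(2 ** d):
        # h_{a,b}: first m bits of a*x + b in GF(2^d)
        if (gf_mul(a, x) ^ b) >> (d - m) == (gf_mul(a, xp) ^ b) >> (d - m):
            collisions += 1
print(collisions / 4 ** d)  # 2^-m = 0.125
\end{verbatim}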
1,941,325,220,899
arxiv
\section{Introduction} Automatic Speaker Verification (ASV) has become an increasingly attractive option for biometric authentication. Past research has shown that ASV systems are subject to malicious presentation attacks. Presentation attacks, or spoofing attacks, refer to attempts at bypassing ASV systems by mimicking the voice characteristics of the target speaker. Spoofing attacks fall into four widely recognized categories: impersonation, replay, text-to-speech (TTS) and voice conversion (VC). To defend against these attacks, a standalone anti-spoofing system is developed in parallel to the ASV system~\cite{wu2015spoofing}. Recent efforts on anti-spoofing development mainly originated from the biennial ASVspoof Challenges~\cite{evans2013spoofing,wu2015asvspoof,delgado2018asvspoof}. Previous ASVspoof Challenges focused on promoting awareness and fostering solutions to spoofing attacks generated from TTS, VC and replay~\cite{evans2013spoofing,wu2015asvspoof,delgado2018asvspoof}. ASVspoof 2019 aims to address all previous attacks and further extends previous editions of ASVspoof in three aspects: \begin{itemize} \item Update TTS and VC attacks with state-of-the-art technologies, especially those based on neural networks. \item Create a more controlled setup for replay attacks, covering acoustic and microphone conditions and predefined replay device qualities. \item Adopt an evaluation metric to assess the impact of standalone anti-spoofing systems on a fixed ASV system. \end{itemize} The ASVspoof 2019 Challenge is composed of two sub-challenges: Physical Access (PA) and Logical Access (LA). LA considers spoofing attacks generated with TTS and VC, and PA refers to spoofing attacks from replay. Research work on anti-spoofing can be divided into three categories: Feature Learning~\cite{todisco2016new,sahidullah2015comparison,alam2018boosting,saranyadecision,suthokumar2018modulation,sailor2018auditory,chen2010speaker,leon2012synthetic,wu2012detecting,wu2013synthetic}, Statistical Modeling~\cite{delgado2018asvspoof,adiban2017sut,wu2012study,khoury2014introducing}, and Deep Neural Network (DNN)~\cite{chen2015robust,lavrentyeva2017audio,lai2018attentive,valenti2018end,chen2017resnet,chettri2018study,shim2018replay,qian2016deep}. Having witnessed the successes of DNNs in ASVspoof 2017, we decided to explore and extend several DNN-based systems for the ASVspoof 2019 Challenge. Our objective is to identify and design the core components of a working pipeline for a DNN-based approach to anti-spoofing. These components (feature engineering, DNN models, network optimization, and system fusion) make up our anti-spoofing system, which we term Anti-Spoofing with Squeeze-Excitation and Residual neTworks, or ASSERT. The main contribution of this paper is two-fold: \begin{enumerate} \item \label{item:one} We conducted experiments on the effectiveness of several DNN models in detecting spoofing attacks generated from audio replay, TTS and VC. The DNN models are based on variants of the Squeeze-Excitation Network (SENet)~\cite{hu2018squeeze} and ResNet~\cite{he2016deep}. To our knowledge, we were the first to introduce SENet and ResNet with statistical pooling to address anti-spoofing, and we also extended our previous work in~\cite{lai2018attentive} such that the DNNs are deeper yet faster to train. \item \label{item:two} We presented an ablation study covering feature engineering, network optimization, and fusion schemes for training DNN models for anti-spoofing.
We believe these collective strategies are vital for the performance of DNNs. In addition, we compared ASSERT with our implementation of i-vectors baselines~\cite{dehak2011front}. Results on the ASVspoof 2019 corpus demonstrated that ASSERT achieved significant improvements over the baseline systems, with more than $93\%$ and $17\%$ relative improvements on PA and LA respectively. Our fusion system was ranked \nth{3} in the PA sub-challenge, and \nth{14} in the LA sub-challenge. \end{enumerate} The rest of the paper is organized as follows. Section~\ref{sec:assert} details ASSERT, from the feature engineering approaches and proposed DNN models to the optimization and fusion schemes. Section~\ref{sec:exp} compares the results of ASSERT with the baseline systems on the ASVspoof 2019 corpus. We end the paper with concluding remarks in Section~\ref{sec:conclusion}. \begin{figure}[tb] \centering \includegraphics[width=0.9\columnwidth]{figures/feature-engineering-4.pdf} \caption{Illustration of the Unified Feature Map approach. Low-level acoustic features are first extracted, and the utterance is repeated to form a unified feature map. Then, the feature map is broken down into segments of length $M$ frames with overlap $L$ frames, before being input to the DNN models.} \label{fig:unified_feature_map} \end{figure} \section{ASSERT} \label{sec:assert} This section presents an overview of each component of ASSERT: input feature representations to the DNN models, the DNN models and their parameters, along with the network optimization and fusion schemes. The feature preparation is either a unified feature map or the whole utterance. Both approaches are based on some low-level acoustic features. The DNN models are variants of squeeze-excitation and residual networks: SENet34, SENet50, Mean-Std ResNet, Dilated ResNet, and Attentive-Filtering Network. \subsection{Feature Engineering} \label{sec:feature_eng} \noindent\textbf{Acoustic Features:} We extracted two different acoustic features: constant Q cepstral coefficients (CQCC)~\cite{todisco2016new} and log power magnitude spectra (logspec). Following~\cite{delgado2018asvspoof}, we extracted 30-dimensional CQCC features, including the 0'th order cepstral coefficient and without CMVN. The dimension of logspec is 257. For both CQCC and logspec, we \textbf{did not} apply voice activity detection or any normalization to the acoustic features, as we empirically found that omitting them yields better results. \vspace{1mm} \noindent\textbf{Unified Feature Map\footnote{This is not practical for long utterances, but spoofed speech is mostly recorded in short duration (less than 10 seconds).}:} We followed previous work~\cite{lai2018attentive} and created a unified feature map as input to the DNN models. Since the lengths of evaluation utterances were not known beforehand, we first extended all utterances to a multiple of $M$ frames. Then, the extended feature map was broken down into segments of length $M$ frames. The segments can have an overlap of $L$ frames. For the 2019 ASVspoof Challenge, $M$ is set to 400, and $L$ is set to either 0 or 200. Figure~\ref{fig:unified_feature_map} is an illustration of this feature engineering approach; a sketch in code is given below. There may be multiple segments per utterance. We simply averaged the DNN outputs over all segments for each utterance.
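For concreteness, the segmentation just described can be sketched as follows. This is our illustration (NumPy; array shapes and names are our assumptions), not the authors' released code.
\begin{verbatim}
import numpy as np

def unified_segments(feat, M=400, L=200):
    # feat: (T, F) low-level features for one utterance.
    T, F = feat.shape
    # Repeat the utterance until its length is a multiple of M.
    T_ext = int(np.ceil(T / M)) * M
    reps = int(np.ceil(T_ext / T))
    ext = np.tile(feat, (reps, 1))[:T_ext]
    # Cut segments of M frames with overlap L (0 <= L < M).
    starts = range(0, T_ext - M + 1, M - L)
    return np.stack([ext[s:s + M] for s in starts])  # (num_seg, M, F)
\end{verbatim}
At inference time, the DNN scores of all segments of an utterance are averaged, matching the description above.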
\vspace{1mm} \noindent\textbf{Whole Utterance:} In addition to the Unified Feature Map, we considered another feature engineering approach: training models with the whole utterance (variable-length input). For each minibatch during training, utterances are zero-padded to match the length of the longest utterance. Padding frames are subsequently removed in the pooling layer of the DNNs. \begin{figure}[tb]\centering \includegraphics[width=.8\columnwidth]{figures/attentive-filtering+dilated-network-5.pdf} \caption{\textbf{(Bottom Left)} Dilated ResNet consists of four blocks followed by a fully-connected layer. Each block has five residual units, a max-pooling layer, and a dilated convolution layer. \textbf{(Right)} Attentive-Filtering applies an attention-based masking prior to a Dilated ResNet. The input feature map goes through four downsampling and four upsampling units. Skip connections are used throughout. Dilation indicates the dilation rate of each convolution layer. Bilinear indicates bilinear upsampling.} \label{fig:dilated+attentive_filtering_resnet_architecture} \end{figure} \begin{table}[tb]\centering \caption{Model parameters of SENet34, SENet50, Mean-Std ResNet, and Dilated ResNet. BN stands for a bottleneck residual block. Basic and Bottleneck residual blocks are described in the original ResNet~\cite{he2016deep}.} \label{tbl:model_parameter_summary} \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}clcccc@{}} \toprule Model & Config. & Block1 & Block2 & Block3 & Block4 \\ \midrule & unit type & Basic & Basic & Basic & Basic \\ SENet34 & num. of unit & 3 & 4 & 6 & 3 \\ & channels & 16 & 32 & 64 & 128 \\ \midrule & unit type & BN & BN & BN & BN \\ SENet50 & num. of unit & 3 & 4 & 6 & 3 \\ & channels & 16 & 32 & 64 & 128 \\ \midrule & unit type & Basic & Basic & Basic & Basic \\ Mean-Std & num. of unit & 3 & 4 & 6 & 3 \\ ResNet & dilation rate & 1 & 1 & 1 & 1 \\ & channels & 16 & 32 & 64 & 128 \\ \midrule & unit type & Basic & Basic & Basic & Basic \\ Dilated & num. of unit & 5 & 5 & 5 & 5 \\ ResNet & dilation rate & 2 & 4 & 4 & 8 \\ & channels & 8 & 16 & 32 & 64 \\ \bottomrule \end{tabular} } \end{table} \subsection{DNN model} \label{sec:dnn_model} \noindent\textbf{Squeeze-Excitation Network:} Given recent achievements in spoofing countermeasures from different DNN architectures~\cite{lavrentyeva2017audio,lai2018attentive}, we explored an extension of ResNet, the Squeeze-Excitation Network (SENet), for ASVspoof 2019. SENet has attained impressive image classification results by appending a channel-wise transform to existing DNN building blocks, such as the residual unit~\cite{hu2018squeeze}. We implemented two variants of SENets: SENet34 with a ResNet34 backbone, and SENet50 with a ResNet50 backbone. Table~\ref{tbl:model_parameter_summary} contains the model parameters of SENet34 and SENet50. SENet34 and SENet50 were trained with unified feature maps of logspec, with each minibatch containing 64 feature maps. \vspace{1mm} \noindent\textbf{Mean-Std ResNet:} Recent work in speaker recognition~\cite{cai2018novel, villalbajhu} has demonstrated that ResNet~\cite{he2016deep} with pooling achieves comparable results to x-vectors~\cite{snyder2018x}. Therefore, we introduced ResNet with pooling for anti-spoofing. Specifically, we employed Mean-Std ResNet, where, after frame-level features are extracted by a ResNet34, the mean and standard deviation are estimated over timesteps to represent the whole utterance~\cite{snyder2018x}. Table~\ref{tbl:model_parameter_summary} contains the model parameters of Mean-Std ResNet. Since the pooling layer accounts for variable-length input, we train Mean-Std ResNet with the whole utterance.
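A minimal sketch of such masked mean-std pooling over zero-padded minibatches is given below. This is our PyTorch illustration (tensor layout, names, and the epsilon constant are our assumptions), not the authors' released code.
\begin{verbatim}
import torch

def masked_mean_std_pool(x, lengths, eps=1e-8):
    # x: (batch, time, feat); lengths: (batch,) valid frame counts.
    batch, time, _ = x.shape
    mask = (torch.arange(time, device=x.device)[None, :]
            < lengths[:, None]).float().unsqueeze(-1)  # (batch, time, 1)
    n = lengths.clamp(min=1).float()[:, None]           # (batch, 1)
    mean = (x * mask).sum(dim=1) / n                     # (batch, feat)
    var = ((x - mean.unsqueeze(1)) ** 2 * mask).sum(dim=1) / n
    return torch.cat([mean, (var + eps).sqrt()], dim=1)  # (batch, 2*feat)
\end{verbatim}
Padded frames are excluded by the mask, so the utterance-level statistics match those computed on the unpadded input.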
For training, both CQCC and logspec were used, with each minibatch containing 64 and 32 full utterances, respectively. \vspace{1mm} \noindent\textbf{Dilated ResNet:} Following previous work~\cite{lai2018attentive}, we applied Dilated ResNet to ASVspoof 2019. Different from Mean-Std ResNet, Dilated ResNet contains a dilated convolution layer in each residual block~\cite{yu2015multi}. We also extended the original dilated residual block to multiple residual units. Figure~\ref{fig:dilated+attentive_filtering_resnet_architecture} is a sketch of the Dilated ResNet, and Table~\ref{tbl:model_parameter_summary} contains its model parameters. Contrary to Mean-Std ResNet, Dilated ResNet does not have any pooling layer and thus only accepts fixed-size input. We trained Dilated ResNet under the same conditions as the SENets. \vspace{1mm} \noindent\textbf{Attentive-Filtering Network:} Lastly, we applied Attentive-Filtering Network to ASVspoof 2019, in which an attention-based feature masking is applied prior to a Dilated ResNet~\cite{lai2018attentive}. The feature masking is composed of four downsampling and four upsampling units. The downsampling unit is based on max-pooling and dilated convolution layers, while the upsampling unit is based on convolution and bilinear upsampling layers. The number of channels for all convolutions is set to 8, and the final non-linearity is selected as the sigmoid function. Skip connections are included between downsampled and upsampled intermediate representations. Furthermore, we also extended the original attentive filtering proposed in~\cite{lai2018attentive} by replacing the bilinear upsampling method with transposed convolution and self-attention~\cite{vaswani2017attention}. We only report results with bilinear upsampling in this paper. Figure~\ref{fig:dilated+attentive_filtering_resnet_architecture} is a visualization of the composition of the Attentive-Filtering Network. The Attentive-Filtering Network was trained under the same conditions as the SENets. \subsection{Network Optimization} \label{sec:optimization} We present the optimization schemes used for training all the DNN models described above. \vspace{1mm} \noindent\textbf{Training Objective:} The objective of anti-spoofing is to classify whether an utterance is bonafide or spoofed. A straightforward training objective is, therefore, binary classification~\cite{lavrentyeva2017audio,lai2018attentive}. \cite{shim2018replay} showed that it could be beneficial to optimize the networks by classifying noise classes in audio replay, so we further trained our models with multi-class classification. We designed the multi-class labels according to the conditions under which spoofed utterances are generated. For LA, this is simply the system ID (bonafide, SS\_1, SS\_2, SS\_4, US\_1, VC\_1, VC\_4), e.g., SS\_1 corresponds to a ``system using neural-network acoustic models and WaveNet vocoder.'' For PA, spoofed utterances are recorded under different environment IDs and attack IDs. In this work, we adopted the attack ID (bonafide, AA, AB, AC, BA, BB, BC, CA, CB, CC) as the multi-class label. Table~\ref{tbl:multi_class_label} gives a comparison of the multi-class classification labels for the LA and PA conditions. For computing EER during the inference stage, we took the log-probability of the bonafide class as the score for an utterance. \begin{table}[tb]\centering \caption{Multi-class labels for LA and PA.} \label{tbl:multi_class_label} \begin{tabular}{l|cc} \toprule data condition & LA & PA \\ \hline num.
of classes & 7 & 10 \\ \hline label type & system IDs & attack IDs \\ \hline \begin{tabular}[l]{@{}l@{}}label \\ descriptions\end{tabular} & \begin{tabular}[c]{@{}c@{}c@{}} bonafide, SS\_1 \\ SS\_2, SS\_4 \\ US\_1, VC\_1, VC\_4\end{tabular} & \begin{tabular}[c]{@{}c@{}c@{}} bonafide, AA, AB \\ AC, BA, BB \\ BC, CA, CB, CC\end{tabular} \\ \bottomrule \end{tabular} \end{table} \vspace{1mm} \noindent\textbf{Optimizer:} We followed the optimization strategy described in~\cite{vaswani2017attention}. The DNN models are optimized by Adam with $\beta_{1} = 0.9$, $\beta_{2} = 0.98$, and weight decay $10^{-9}$. The learning rate scheduler increases the learning rate linearly for the first $1000$ warm-up steps and then decreases it proportionally to the inverse square root of the step number~\cite{vaswani2017attention}. Finally, after every training epoch, we selected the best model by one of two criteria: \textit{dev} EER or \textit{dev} classification accuracy of the class labels. \subsection{Fusion and Calibration} \label{sec:fusion} We followed the greedy fusion scheme described in~\cite{villalbajhu} to select the best system combination for our primary submission to ASVspoof 2019. Fusion and calibration were performed with logistic regression using the Bosaris toolkit~\cite{brummer2013bosaris}. Considering the priors defined in the evaluation plan, we set the effective prior to $0.672$ for PA and to $0.707$ for LA. \begin{table*}[t] \caption{Ablation study of single system results on ASVspoof 2019. Due to space constraints, for each DNN model we only include the \textbf{top two} performing systems in the table. Under the Training Objective column, MCE stands for multi-class cross entropy, BCE stands for binary cross entropy, and acc/EER stands for the model selection criterion after each training epoch.} \label{tab:single_system_results} \centering \begin{tabular}{@{}lcccccccc@{}} \toprule Model & Acoustic & Feature & Training & Model & \multicolumn{2}{c}{PA development} & \multicolumn{2}{c}{LA development} \\ & Feature & Engineering & Objective & Params. & t-DCF$_{norm}^{min}$ & EER ($\%$) & t-DCF$_{norm}^{min}$ & EER ($\%$) \\ \midrule CQCC-GMM & CQCC + $\Delta$ + $\Delta\Delta$ & N/A & EM & 138k & 0.195 & 9.87 & 0.012 & 0.43 \\ LFCC-GMM & LFCC + $\Delta$ + $\Delta\Delta$ & N/A & EM & 92k & 0.255 & 11.96 & 0.066 & 2.71 \\ 100-i-vectors & CQCC + $\Delta$ + $\Delta\Delta$ & N/A & EM & 593k & 0.306 & 12.37 & 0.155 & 5.18 \\ 200-i-vectors & CQCC + $\Delta$ + $\Delta\Delta$ & N/A & EM & 2339k & 0.322 & 12.52 & 0.121 & 4.12 \\ \midrule SENet34 & logspec & unified, L=200 & BCE + acc. & 1344k & \bf 0.015 & \bf 0.575 & \bf 0 & \bf 0 \\ & logspec & unified, L=200 & BCE + EER & 1344k & 0.017 & 0.686 & 0 & 0 \\ \midrule SENet50 & logspec & unified, L=200 & MCE + EER & 1095k & 0.021 & 0.799 & 0 & 0 \\ & logspec & unified, L=200 & BCE + EER & 1093k & 0.017 & 0.631 & 0 & 0 \\ \midrule Mean-Std & logspec & whole & BCE + acc. & 1389k & 0.022 & 0.832 & 0 & 0 \\ ResNet & CQCC & whole & MCE + acc. & 1390k & 0.041 & 1.429 & 0.001 & 0.040 \\ \midrule Dilated & logspec & unified, L=200 & MCE + EER & 593k & 0.029 & 1.072 & 0 & 0 \\ ResNet & logspec & unified, L=200 & BCE + EER & 592k & 0.024 & 0.780 & 0 & 0 \\ \midrule Attentive- & logspec & unified, L=200 & MCE + EER & 600k & 0.027 & 1.057 & 0 & 0 \\ Filtering Net & logspec & unified, L=200 & BCE + acc.
& 599k & 0.021 & 0.740 & 0 & 0 \\ \bottomrule \end{tabular} \end{table*} \section{Experiments} \label{sec:exp} \subsection{Baseline System} \label{sec:exp_baseline} \noindent\textbf{LFCC-GMM and CQCC-GMM:} The official baseline systems are based on 20-dimensional linear frequency cepstral coefficients (LFCC)~\cite{sahidullah2015comparison} and 30-dimensional constant Q cepstral coefficients (CQCC)~\cite{todisco2016new}, both of which include static, delta and double delta coefficients. The backend is a 2-class GMM with 512 components. \vspace{1mm} \noindent\textbf{i-vectors:} We implemented i-vectors~\cite{dehak2011front}, as previous work has demonstrated that i-vectors yield better results than GMMs in anti-spoofing~\cite{delgado2018asvspoof,adiban2017sut}. We experimented with 100-dimensional i-vectors with a 64-component UBM and 200-dimensional i-vectors with a 128-component UBM. The i-vector extractors were trained on 30-dimensional CQCC features, and the backend for our i-vectors is a Gaussian linear generative model~\cite{martinez2011language}. \subsection{Experimental Setup} \label{sec:experimental_setup} \noindent\textbf{Dataset:} All experiments in this work were conducted within the ASVspoof 2019 Challenge. The ASVspoof 2019 corpus is divided into two subsets: PA and LA. PA contains 48600 spoof and 5400 bonafide utterances in the training partition, and 24300 spoof and 5400 bonafide utterances in the development partition. The spoofed utterances were recorded under 27 different acoustic and 9 replay configurations. LA contains 22800 spoof and 2580 bonafide utterances in the training partition, and 22296 spoof and 2548 bonafide utterances in the development partition. The spoofed utterances were generated according to 2 VC and 4 TTS algorithms. \vspace{1mm} \noindent\textbf{Evaluation Metric:} We evaluated the effectiveness of spoofing countermeasures with the \textit{minimum} normalized tandem detection cost function (t-DCF)~\cite{kinnunen2018t} and EER. t-DCF takes into account the detection error rates of a fixed automatic speaker verification system, provided by the organizers in the case of the ASVspoof 2019 Challenge. \vspace{1mm} \noindent\textbf{Implementation:} We used the training partition to train our DNN models. The development partition was used for model selection during validation and for system combination. We \textbf{did not} use any external data or data augmentation techniques for development. CQCC-GMM and LFCC-GMM were adopted directly from the MATLAB script\footnote{\href{http://www.asvspoof.org}{http://www.asvspoof.org}}. Acoustic features and i-vectors were extracted with Kaldi~\cite{povey2011kaldi}. DNNs were implemented in PyTorch. \subsection{Experimental Results of Single Systems} \label{sec:exp_single_results} Table~\ref{tab:single_system_results} compares ASSERT with the baseline spoofing countermeasure systems on the \textit{dev} partition. The first observation is that, surprisingly, the i-vectors baselines performed worse than the GMMs, which is contrary to prior work on the ASVspoof 2017 corpus~\cite{delgado2018asvspoof,adiban2017sut}. ASSERT attains substantial improvements over the baseline GMM and i-vectors systems on both PA and LA. In general, for training the proposed DNN models, logspec outperforms CQCC, and the unified feature map with overlap outperforms both the one without overlap and the whole utterance. On the other hand, there are mixed results on using the multi-class or binary training objective, and on model selection with \textit{dev} EER or \textit{dev} classification accuracy.
We empirically found that the best single system is based on SENet34 trained with the unified feature map with overlap of logspec and a binary cross-entropy loss with \textit{dev} accuracy model selection. This system obtains 92\% and 94\% relative improvements over CQCC-GMM on \textit{dev} t-DCF and EER for PA, and 100\% relative improvements for LA. \subsection{Evaluation Results} \label{sec:exp_fusion_results} Table~\ref{tab:submission_results} summarizes our primary and single system submissions to the ASVspoof 2019 Challenge. The single system is based on SENet34 (logspec), and the primary system is a combination of five single systems based on SENet34 (logspec), Mean-Std ResNet (CQCC, logspec), SENet50 (logspec) and Dilated ResNet (logspec). Systems are trained separately for PA and LA. We can observe that ASSERT generalizes well across \textit{dev} and \textit{eval} for PA; nevertheless, it overfits on \textit{dev} for LA. Our primary system further gains 93\% and 95\% relative improvements over CQCC-GMM on \textit{eval} t-DCF and EER for PA, and 27\% and 17\% relative improvements over LFCC-GMM on \textit{eval} t-DCF and EER for LA. \begin{table}[t] \caption{Primary, single and baseline systems for ASVspoof 2019. The single system is based on SENet34; the primary system is a fusion of five systems based on SENet34, Mean-Std ResNet (CQCC, logspec), SENet50 and Dilated ResNet.} \label{tab:submission_results} \centering \begin{tabular}{@{}lcccc@{}} \toprule System & \multicolumn{2}{c}{Development} & \multicolumn{2}{c}{Evaluation} \\ \cmidrule(rl){2-3} \cmidrule(rl){4-5} & t-DCF$_{norm}^{min}$ & EER & t-DCF$_{norm}^{min}$ & EER \\ \midrule PA-single & 0.015 & 0.575 & 0.036 & 1.29 \\ PA-primary & \bf 0.003 & \bf 0.129 & \bf 0.016 & \bf 0.59 \\ PA-baseline & 0.195 & 9.87 & 0.245 & 11.04 \\ \midrule LA-single & 0 & 0 & 0.216 & 11.75 \\ LA-primary & \bf 0 & \bf 0 & \bf 0.155 & \bf 6.70 \\ LA-baseline & 0.066 & 2.71 & 0.212 & 8.09 \\ \bottomrule \end{tabular} \end{table} \section{Conclusions} \label{sec:conclusion} We introduced ASSERT -- several variants of squeeze-excitation and residual networks, optimization and fusion schemes, along with feature engineering approaches -- for anti-spoofing. Our fusion system attained considerable improvements over the baseline systems on the ASVspoof 2019 corpus. We believe this paper serves as preliminary work towards a more comprehensive study on DNN-based countermeasures for speech spoofing attacks, while meta-data analysis and model refinements on LA should be further investigated. \vspace{1mm} \noindent\textbf{Acknowledgments} \label{sec:acknowledgment} The authors thank the ASVspoof 2019 committee for designing the corpus and organizing the challenge. \bibliographystyle{IEEEtran}
\section{Introduction} Quasi-Monte Carlo (QMC) sampling has gained increasing popularity in numerical integration over the unit cube $[0,1]^d$ (see, e.g., \cite{lecu:2009, dick:2010, dick:2013}). It is known that functions with finite variation in the sense of Hardy and Krause can be integrated with an error of $O(n^{-1}(\log n)^d)$, compared to $O(n^{-1/2})$ for ordinary Monte Carlo sampling; see \cite{nied:1992} for details. In this paper, we consider an alternative numerical integration method based on Hilbert's space filling curve (HSFC), as introduced in \cite{he:owen:2016}. An HSFC is a continuous mapping $H(x)$ from $[0,1]$ to $[0,1]^d$ for $d\geq1$. We take the convention that $H(x)=x$ for $d=1$. Formally, the Hilbert curve is defined as the limit of a sequence of recursively constructed curves. An illustration of the generative process of the HSFC with increasing recursion order for $d=2$ is presented in Figure~\ref{fig:hsfc}. For detailed definitions and properties of the HSFC, we refer to \cite{he:owen:2016}. Let us consider the problem of estimating an integral over the $d$-dimensional unit cube $[0,1]^d$: \begin{equation}\label{eq:problem} \mu=\int_{[0,1]^d}f(X) \mathrm{d} X. \end{equation} The HSFC-based estimate takes the form \begin{equation}\label{eq:hilbertest} \hat{\mu}_{n}=\frac 1 n \sum_{i=1}^n f(H(x_i)), \end{equation} where $x_i$ are some well-chosen points in $[0,1]$. In this paper, we focus on using the van der Corput sequence (in base $b\ge 2$) with the nested uniform scrambling of \cite{owen:1995} as the inputs $x_i$, for which the estimate is extensible and unbiased. The nested uniform scrambling method is a randomization technique commonly used in randomized QMC; see \cite{owen:1995} for details and \cite{lecu:2002} for a survey of various randomized QMC methods. \cite{mato:1998} proposed a random linear scrambling method that requires less randomness and storage. \cite{he:owen:2016} established convergence rates of the extensible estimate for functions that are Lipschitz continuous or piecewise Lipschitz continuous. More precisely, for Lipschitz continuous functions, they derived a root mean-squared error (RMSE) of $O(n^{-1/2-1/d})$. For piecewise Lipschitz continuous functions, an RMSE of $O(n^{-1/2-1/(2d)})$ was obtained. \cite{schr:2016} compared the star discrepancies and RMSEs of the van der Corput and the golden ratio generator sequences. \begin{figure}[ht] \includegraphics[width=\hsize]{hsfc} \caption{First five steps of the recursive construction of the HSFC for $d=2$.}\label{fig:hsfc} \end{figure} However, upper bounds on the RMSE say little about the error distribution. It is often of interest to obtain asymptotically valid confidence interval guarantees, as in ordinary Monte Carlo sampling. The central limit theorem (CLT) is routinely invoked to compute a confidence interval for the estimate based on a normal approximation. Indeed, \cite{loh:2003} showed that the nested uniform scrambled $(0,m,d)$-net (in base $b$) estimate has an asymptotic normal distribution for smooth enough functions. Recently, building on the work of \cite{loh:2003}, \cite{basu:2016} showed that the scrambled geometric net estimate has an asymptotic normal distribution for certain smooth functions defined on products of suitable subsets of $\Re^d$.
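To make the construction concrete, the following minimal Python sketch generates a nested uniform scrambled van der Corput sequence in base $b$ and forms the estimate \eqref{eq:hilbertest}. It is illustrative only: the function \texttt{hilbert\_map}, implementing $H(x)$ for a given $d$, is assumed to be supplied externally (e.g., via the Butz algorithm), and scrambling the all-zero tail digits is replaced by an equivalent-in-distribution uniform offset.
\begin{verbatim}
import numpy as np

def scrambled_vdc(m, b=2, rng=None):
    # Nested uniform scrambling (Owen, 1995) of the first b^m
    # van der Corput points in base b.
    rng = np.random.default_rng() if rng is None else rng
    n, perms = b ** m, {}
    x = np.empty(n)
    for i in range(n):
        k, digits = i, []
        for _ in range(m):           # base-b digits of the point index,
            digits.append(k % b)     # least significant digit first
            k //= b
        prefix, xi = (), 0.0
        for j, d in enumerate(digits):
            if prefix not in perms:                 # one uniformly random
                perms[prefix] = rng.permutation(b)  # permutation per node
            xi += perms[prefix][d] * float(b) ** (-(j + 1))
            prefix += (d,)
        # scrambling the remaining (all-zero) digits is equivalent in
        # distribution to adding an independent Unif(0, b^-m) offset
        x[i] = xi + rng.random() * float(b) ** (-m)
    return x

def hsfc_estimate(f, m, hilbert_map, b=2, rng=None):
    # HSFC-based estimate of mu; hilbert_map: [0,1] -> [0,1]^d
    return np.mean([f(hilbert_map(xi)) for xi in scrambled_vdc(m, b, rng)])
\end{verbatim}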
However, in most cases, randomly-shifted lattice rule estimates (another branch of randomized QMC techniques) may be far from normally distributed; see \cite{lecu:2010} for discussions and examples. In this paper we study the asymptotic normality of the HSFC-based estimate \eqref{eq:hilbertest} with sample sizes $n=b^m$, $m=0,1,2,\dots$. In practice, we often choose $b=2$ because the base $b=2$ is the same as that used to approximate (and define) the Hilbert curve; see \cite{butz:1971} for the algorithm. The main contribution of our paper is twofold. First, for nontrivial functions in $C^1([0,1]^d)$, we establish a lower bound on $\mathrm{Var}(\hat{\mu}_{n})$ which matches the upper bound $O(n^{-1-2/d})$ found in \cite{he:owen:2016}. A similar lower bound is established for piecewise smooth functions. Second, we prove that the asymptotic normality of the HSFC-based estimate $\hat{\mu}_{n}$ holds for three classes of functions. In other words, we show that $$\frac{\hat{\mu}_n-\mu}{\sqrt{\mathrm{Var}(\hat{\mu}_{n})}}\to N(0,1)$$ in distribution as $n=b^m\to \infty$. These results can also be applied to stratified sampling on a regular grid with sample sizes $n=m^d$, but the HSFC-based estimate we study does not require the highly composite sample sizes that grid sampling requires, particularly for large $d$. The main idea in proving the asymptotic normality is based on the Lyapunov CLT (see, e.g., \cite{chung:2001}), which is quite different from the techniques used in \cite{loh:2003} and \cite{basu:2016}. Our proofs do not rely on the upper bounds established in \cite{he:owen:2016}. The rest of the paper is organized as follows. In Section~\ref{sec:grid}, we study the asymptotic normality of stratified sampling on a regular grid with $n=m^d$, which can be viewed as a special case of the HSFC-based estimate \eqref{eq:hilbertest} using the scrambled van der Corput sequence. In Section~\ref{sec:main}, we give lower bounds on $\mathrm{Var}(\hat{\mu}_{n})$ and establish the asymptotic normality of the estimate $\hat{\mu}_{n}$ for three cases of integrands. In Section~\ref{sec:numer}, we give empirical verification of asymptotic normality for HSFC sampling and other competing QMC methods. Section~\ref{eq:final} concludes this paper. \section{Grid-based Stratified Sampling}\label{sec:grid} In this section, we consider regular grid sampling with sample sizes $n=m^d$. The $d$-dimensional unit cube $[0,1]^d$ can be split into $m^d$ congruent subcubes with sides of length $1/m$, say, $E_i$, $i=1,\dots,n$. The grid-based stratified estimate of the integral \eqref{eq:problem} is given by \begin{equation}\label{eq:est} \tilde{\mu}_n=\frac{1}{n}\sum_{i=1}^{n}f(U_{i}), \end{equation} where $U_i\sim \mathrm{Unif}(E_i)$ independently. Denote by $\nabla f(X)=(\frac{\partial f(X)}{\partial X_1} ,\dots,\frac{\partial f(X)}{\partial X_d})^\top$ the gradient vector of $f(X)$, and let $\Vert\cdot\Vert$ be the usual Euclidean norm. The next lemma, which was proved in \cite{owen:2013}, gives the asymptotic variance of $\tilde{\mu}_n$. We also prove it here because we make extensive use of the result. \begin{lemma}\label{lem:asymvar} Assume that $f(X)\in C^1([0,1]^d)$. Then \begin{equation}\label{eq:asyVar} \lim_{n\rightarrow\infty}n^{1+\frac{2}{d}}\mathrm{Var}(\tilde{\mu}_n) =\frac{1}{12}\int_{[0,1]^{d}}\Vert\nabla f(X)\Vert^{2}\mathrm{d} X, \end{equation} where the limit is taken through values $n=m^d$ as $m\to \infty$.
\end{lemma} \begin{proof} Note that $U_i$ is uniformly distributed within the cube $E_i$ with sides of length $1/m$. Let $c_i$ be the center of $E_i$. Since $f\in C^1([0,1]^d)$, the first-order Taylor approximation gives \begin{equation} f(U_i)=L_i+R_i, \end{equation} where $L_i=f(c_i)+\nabla f(c_i)^\top(U_i-c_i)$ and $R_i=o(1/m)$. For the linear term $L_i$, we have $$\mathrm{Var}(L_i)=\frac 1{12m^2}\sum_{j=1}^d \left(\frac{\partial f(c_i)}{\partial X_j}\right)^2=\frac 1{12m^2}\Vert\nabla f(c_i)\Vert^2,$$ since $U_i-c_i\sim \mathrm{Unif} [-1/(2m),1/(2m)]^d$. For the error term $R_i$, we have $\mathrm{Var}(R_i)=o(1/m^2)$. Also, $\mathrm{Cov}(L_i,R_i)=o(1/m^2)$ by the Cauchy--Schwarz inequality. As a result, \begin{align*} \mathrm{Var}(\tilde{\mu}_n)&=\frac{1}{n^2}\sum_{i=1}^n\mathrm{Var}(f(U_i))\\ &=\frac{1}{n^2}\sum_{i=1}^n\mathrm{Var}(L_i)+o\left(\frac{1}{nm^2}\right)\\ &=\frac{1}{12n^2m^2}\sum_{i=1}^n\Vert\nabla f(c_i)\Vert^2 + o\left(\frac{1}{nm^2}\right). \end{align*} Since $m=n^{1/d}$ and $$\lim_{n\to \infty}\frac 1 n\sum_{i=1}^n\Vert\nabla f(c_i)\Vert^2=\int_{[0,1]^{d}}\Vert\nabla f(X)\Vert^{2}\mathrm{d} X,$$ we conclude that \eqref{eq:asyVar} holds. \end{proof} \begin{theorem}\label{thm:gridsCLT} If $f(X)\in C^1([0,1]^d)$ and $\sigma^2=(1/12)\int_{[0,1]^{d}}\Vert\nabla f(X)\Vert^{2}\mathrm{d} X>0$, then \begin{equation}\label{eq:gridsCLT} \frac{\tilde{\mu}_n-\mu}{\sigma n^{-1/2-1/d}}\to N(0,1), \end{equation} in distribution as $n=m^d\to\infty$. \end{theorem} \begin{proof} Since $f\in C^1([0,1]^d)$, $f$ is Lipschitz continuous. Then, for any $\delta>0$, \begin{equation}\label{eq:bound2pdelta} \mathbb{E}[\vert f(U_{i})-\mathbb{E}[f(U_{i})]\vert^{2+\delta}] \leq C_{d,\delta}n^{-\frac{2+\delta}{d}}, \end{equation} where $C_{d,\delta}>0$ is some constant that depends only on $d$ and $\delta$, and we used the fact that the diameter of $E_i$ is $\sqrt{d}n^{-1/d}$. Let $s_{n}^{2}=\sum_{i=1}^{n}\sigma_{i}^{2}$, where $\sigma_{i}^{2}=\mathrm{Var}(f(U_{i}))$. From Lemma~\ref{lem:asymvar}, we have \begin{equation} \lim_{n\rightarrow\infty}\frac{s_{n}^{2}}{n^{1-\frac{2}{d}}}=\frac{1}{12}\int_{[0,1]^{d}}\Vert\nabla f(X)\Vert^{2}\mathrm{d} X>0. \end{equation} Therefore, the Lyapunov condition \begin{align*} \lim_{n\rightarrow\infty}\frac{1}{s_{n}^{2+\delta}}\sum_{i=1}^{n}\mathbb{E}[\vert f(U_{i})-\mathbb{E}[f(U_{i})]\vert^{2+\delta}] &\leq\limsup_{n\rightarrow\infty}C_{d,\delta} \frac{n^{1-\frac{2+\delta}{d}}}{s_{n}^{2+\delta}} \\&=\limsup_{n\rightarrow\infty}C_{d,\delta}n^{-\frac{\delta}{2}}\left(\frac{n^{1-\frac 2 d}}{s_n^2}\right)^{\frac{2+\delta}{2}}=0 \end{align*} is satisfied. Using the Lyapunov CLT, we get \eqref{eq:gridsCLT}. \end{proof} To avoid the trivial case of constant functions (which yields an identically zero variance), Theorem~\ref{thm:gridsCLT} assumes that $\int_{[0,1]^{d}}\Vert\nabla f(X)\Vert^{2}\mathrm{d} X>0$. Theorem~\ref{thm:gridsCLT} establishes the asymptotic normality of $\tilde{\mu}_n$. The grid-based stratified estimate has variance $O(n^{-1-2/d})$, compared to the Monte Carlo variance $O(n^{-1})$. This rate actually holds for all Lipschitz continuous functions, a class that contains the functions $C^1([0,1]^d)$ considered in Theorem~\ref{thm:gridsCLT}. \section{HSFC-based Sampling}\label{sec:main} In this section, we study the HSFC-based estimate given by \eqref{eq:hilbertest}, where $x_i$ are the first $n=b^m$ points of the scrambled van der Corput sequence in base $b\ge 2$. Let $a_1,\dots,a_n$ be the first $n$ points of the van der Corput sequence in base $b$ \cite{vdc:1935}.
The integer $i-1\ge 0$ is written in base $b$ as $i-1=\sum_{j=1}^\infty a_{ij} b^{j-1}$ for $a_{ij}\in\{0,\dots,b-1\}$. Then $a_i$ is defined by $$a_i=\sum_{j=1}^\infty a_{ij}b^{-j}.$$ The scrambled version of $a_1,\dots,a_n$ is $x_1,\dots,x_n$ written as $x_i = \sum_{j=1}^{\infty} x_{ij}b^{-j}$, where $x_{ij}$ are defined through random permutations of the $a_{ij}$. These permutations depend on $a_{ik}$, for $k<j$. More precisely, $x_{i1}=\pi(a_{i1})$, $x_{i2}=\pi_{a_{i1}}(a_{i2})$ and generally for $j\ge 2$ $$x_{ij}=\pi_{a_{i1}\dots a_{ij-1}}(a_{ij}).$$ Each random permutation is uniformly distributed over the $b!$ permutations of $\{0,\dots,b-1\}$, and the permutations are mutually independent. In this setting, thanks to the nice properties of the nested uniform scrambling, the data values in the scrambled sequence can be reordered such that $x_i\sim \mathrm{Unif}(I_i)$ independently, where $I_i=[(i-1)/b^m,i/b^m)$ for $i=1,\dots,b^m$. Let $E_{i}=H(I_{i})$. As used in \cite{he:owen:2016}, the estimate \eqref{eq:hilbertest} can be rewritten as \begin{equation*} \hat{\mu}_n = \frac 1n\sum_{i=1}^n f(X^{(i)}), \end{equation*} where $X^{(i)}=H(x_i)\sim\mathrm{Unif}(E_i)$. This implies that the HSFC-based sampling is actually a stratified sampling scheme, because $\{E_i\}_{i=1}^n$ is a split of $[0,1]^d$. Figure~\ref{fig:grids} illustrates such splits of $[0,1]^2$ when $b=2$. \cite{he:owen:2016} proved the unbiasedness of $\hat{\mu}_{n}$ for any $f\in L^2([0,1]^d)$ and gave some upper bounds for $\mathrm{Var}(\hat{\mu}_{n})$ under certain assumptions on the class of integrands $f$. Their proofs make use of the properties of the HSFC presented in the next lemma, which are also important in studying the asymptotic normality of the HSFC-based sampling. Denote by $\lambda_d(\cdot)$ the Lebesgue measure on $\Re^d$. \begin{lemma}\label{lem:ei} Let $A=H([p,q])$ for $0\leq p<q\le 1$. Then $\lambda_d(A)=\lambda_1([p,q])=q-p$. If $x\sim \mathrm{Unif}([p,q])$, then $H(x)\sim \mathrm{Unif}(A)$. Let $r$ be the diameter of $A$. Then $r\leq 2\sqrt{d+3}(q-p)^{1/d}$. \end{lemma} \subsection{Smooth Functions}\label{sec:smooth} \cite{loh:2003} and \cite{basu:2016} focused on smooth functions whose mixed partial gradient satisfies a H\"older condition, which was first studied in \cite{owen:1997}. Here we work with a weaker smoothness condition, in the sense that $f(X)\in C^1([0,1]^d)$ as required in Theorem~\ref{thm:gridsCLT}. \begin{theorem}\label{thm:HilbertCLTsmooth} Assume that $f(X)\in C^1([0,1]^d)$ and $\sigma^2=\int_{[0,1]^{d}}\Vert\nabla f(X)\Vert^{2}\mathrm{d} X>0$. Then for all sufficiently large $n$, we have \begin{equation}\label{eq:lbsmooth} \mathrm{Var}(\hat{\mu}_{n})\geq \frac{\sigma^2}{96}2^{-2/d-d}n^{-1-2/d}. \end{equation} Also, \begin{equation}\label{eq:HilbertCLT} \frac{\hat{\mu}_{n}-\mu}{\sqrt{\mathrm{Var}(\hat{\mu}_{n})}}\to N(0,1), \end{equation} in distribution as $n=b^m\to\infty$. \end{theorem} \begin{proof} Let $\ell = \lceil\frac{\log_2n+1}{d}\rceil$ (we write $\ell$ to avoid confusion with the exponent $m$ in $n=b^m$). Then for each $i$ there exists an interval $J_i$ of the form $[(k-1)/2^{d\ell},k/2^{d\ell}]$ such that $J_i\subset I_i$. This is because $\lambda_1(I_i)\geq 2\lambda_1(J_i)$. Let $s_{n}^{2}=\sum_{i=1}^{n}\sigma_{i}^{2}$, where $\sigma_{i}^{2}=\mathrm{Var}(f(X^{(i)}))$. Let $\mu_i=\mathrm{E}[f(X^{(i)})]$, and let $\tilde{E}_i=H(J_i)$. Note that $\tilde{E}_i\subset E_i$ due to $J_i\subset I_i$. Let $\mu'_i=\widetilde{\mathrm{E}}[f(X^{(i)})]$, where the expectation is taken with $X^{(i)}\sim \mathrm{Unif}(\tilde{E}_i)$.
By some basic algebra, we find \begin{align*} \sigma_{i}^{2}&=\mathrm{Var}(f(X^{(i)}))=\frac{1}{\lambda_d(E_i)}\int_{E_i}[f(X)-\mu_i]^2 \mathrm{d} X \\ &\ge \frac{1}{\lambda_d(E_i)}\int_{\tilde{E}_i}[f(X)-\mu_i]^2 \mathrm{d} X\\ &= \frac{1}{\lambda_d(E_i)}\int_{\tilde{E}_i}\left([f(X)-\mu'_i]^2+2(f(X)-\mu'_i)(\mu'_i-\mu_i)+ (\mu'_i-\mu_i)^2\right)\mathrm{d} X\\ &=\frac{\lambda_d(\tilde{E}_i)}{\lambda_d(E_i)}\left(\widetilde{\mathrm{Var}}(f(X^{(i)}))+(\mu'_i-\mu_i)^2\right)\\ &\ge 2^{-(1+d)}\widetilde{\mathrm{Var}}(f(X^{(i)})), \end{align*} where $\widetilde{\mathrm{Var}}$ is taken over $X^{(i)}\sim\mathrm{Unif}(\tilde{E}_i)$, and we used two results obtained by applying Lemma~\ref{lem:ei}, namely $\lambda_d(E_i)=1/n$ and \begin{equation}\label{eq:1} \lambda_d(\tilde{E}_i)=\lambda_1(J_i)=\frac{1}{2^{d\ell}}\ge\frac{1}{ 2^{1+d}n}=2^{-(1+d)}\lambda_d(E_i). \end{equation} We thus have \begin{equation} \frac{s_{n}^{2}}{n^{1-\frac{2}{d}}}\geq \frac{1}{2^{1+d}n^{1-\frac{2}{d}}}\sum_{i=1}^n\widetilde{\mathrm{Var}}(f(X^{(i)}))=:K_1(n). \end{equation} Notice that $\tilde{E}_i$ is a cube with sides of length $2^{-\ell}$. Following the proof of Theorem~\ref{thm:gridsCLT}, we have \begin{equation}\label{eq:tvar} \widetilde{\mathrm{Var}}(f(X^{(i)})) = \frac{1}{12\cdot 2^{2\ell}}\Vert\nabla f(c_i)\Vert^{2}+o(2^{-2\ell}), \end{equation} where $c_i$ is the center of $\tilde{E}_i$. Therefore, \begin{align} \liminf_{n\to\infty} K_1(n)&=\liminf_{n\to\infty}\frac{1}{2^{1+d}n^{1-\frac{2}{d}}}\left(\frac{1}{12\cdot 2^{2\ell}}\sum_{i=1}^n \Vert\nabla f(c_i)\Vert^{2}+o(2^{-2\ell}n)\right)\notag\\ &\geq \liminf_{n\to \infty} \frac{1}{2^{1+d}n^{1-2/d}}\frac{1}{12\cdot 2^{2(1+d)/d}n^{2/d}}\sum_{i=1}^n \Vert\nabla f(c_i)\Vert^{2}\label{eq:lb1}\\ &=\frac{1}{96\cdot 2^{2/d+d}}\int_{[0,1]^{d}}\Vert\nabla f(X)\Vert^{2}\mathrm{d} X>0.\label{eq:lb2} \end{align} The inequality \eqref{eq:lb1} is due to $\ell\le(\log_2 n+1)/d+1$. The equality \eqref{eq:lb2} is due to $c_i\in E_i$ and the fact that $\{E_1,\dots,E_n\}$ is a split of $[0,1]^d$. As a result, \begin{equation}\label{eq:lowerbound} \liminf_{n\to\infty} \frac{s_{n}^{2}}{n^{1-\frac{2}{d}}}\ge \liminf_{n\to\infty} K_1(n)\ge\frac{1}{96\cdot 2^{2/d+d}}\int_{[0,1]^{d}}\Vert\nabla f(X)\Vert^{2}\mathrm{d} X>0. \end{equation} Combining \eqref{eq:lowerbound} with $\mathrm{Var}(\hat{\mu}_{n})=s_n^2/n^2$ establishes the inequality \eqref{eq:lbsmooth}. Similar to \eqref{eq:bound2pdelta}, for any $\delta>0$, there exists a constant $C_{d,\delta}$ depending on $d$ and $\delta$ such that \begin{equation}\label{eq:upperbound} \mathbb{E}\left[\left|f(X^{(i)})-\mathbb{E}[f(X^{(i)})]\right|^{2+\delta}\right] \leq C_{d,\delta}n^{-\frac{2+\delta}{d}}, \end{equation} because the diameter of $E_i$ is not larger than $2\sqrt{d+3}n^{-1/d}$ by Lemma~\ref{lem:ei}. Using \eqref{eq:lowerbound} and \eqref{eq:upperbound}, the Lyapunov condition \begin{align*} \lim_{n\rightarrow\infty}\frac{1}{s_{n}^{2+\delta}}\sum_{i=1}^{n}\mathbb{E}\left[\left|f(X^{(i)}) -\mathbb{E}[f(X^{(i)})]\right|^{2+\delta}\right] &\leq\limsup_{n\rightarrow\infty}\frac{C_{d,\delta}n^{1-\frac{2+\delta}{d}}}{s_{n}^{2+\delta}}\\ &=\limsup_{n\rightarrow\infty} C_{d,\delta}n^{-\delta/2}\left(\frac{n^{1-2/d}}{s_n^2}\right)^{(2+\delta)/2}=0 \end{align*} is satisfied. Finally, using the Lyapunov CLT, we obtain \eqref{eq:HilbertCLT}. \end{proof} From the proof of Theorem 4 in \cite{he:owen:2016}, we find that \begin{equation}\label{eq:upperbd} \mathrm{Var}(\hat{\mu}_{n})\le 4M^2(d+3)n^{-1-2/d} \end{equation} for any Lipschitz function $f$ with modulus $M$.
Theorem~\ref{thm:HilbertCLTsmooth} gives an asymptotic lower bound of order $n^{-1-2/d}$ for $\mathrm{Var}(\hat{\mu}_{n})$. Therefore, the rate $O(n^{-1-2/d})$ is tight for $\mathrm{Var}(\hat{\mu}_{n})$ if $f\in C^1([0,1]^d)$ and $\int_{[0,1]^{d}}\Vert\nabla f(X)\Vert^{2}\mathrm{d} X>0$. To prove the asymptotic normality, we only require the lower bound, as shown in the proof of Theorem~\ref{thm:HilbertCLTsmooth}. \subsection{Piecewise Smooth Functions}\label{sec:disc} In this subsection, we focus on piecewise smooth functions of the form $f(X)=g(X)1_{\Omega}(X)$, where $\partial\Omega$ admits a $(d-1)$-dimensional Minkowski content as defined below. This class of functions was also studied in \cite{he:owen:2016}. \begin{definition} For a set $\Omega\subset[0,1]^d$, define \begin{equation}\label{eq:fmc} \mathcal{M}(\partial \Omega)=\lim_{\epsilon\downarrow 0}\frac{\lambda_d((\partial \Omega)_\epsilon)}{2\epsilon}, \end{equation} where $(A)_\epsilon:=\{x+y|x\in A,\Vert y\Vert\leq \epsilon\}$. If $\mathcal{M}(\partial \Omega)$ exists and is finite, then $\partial \Omega$ is said to admit a $(d-1)$-dimensional Minkowski content. \end{definition} In the terminology of geometry, $\mathcal{M}(\partial \Omega)$ is known as the surface area of the set $\Omega$. The Minkowski content has a clear intuitive basis, compared to the Hausdorff measure, which provides an alternative way to quantify the surface area. We should note that the Minkowski content coincides with the Hausdorff measure, up to a constant factor, in regular cases. It is known that the boundary of any convex set in $[0,1]^d$ has a $(d-1)$-dimensional Minkowski content, since the surface area of a convex set in $[0,1]^d$ is bounded by the surface area of the unit cube $[0,1]^d$, which is $2d$. More generally, \cite{Ambr:2008} found that $\partial \Omega$ admits a ($d-1$)-dimensional Minkowski content when $\Omega$ has a Lipschitz boundary. Let \begin{align*} &\mathcal{T}_{\mathrm{int}}=\{1\leq i\leq n|E_{i}\subset\Omega\}, \\ &\mathcal{T}_{\mathrm{bdy}}=\{1\leq i\leq n|E_{i}\cap\Omega\neq\emptyset\} \backslash\mathcal{T}_{\mathrm{int}}, \end{align*} be the index sets of the $E_i$ that are interior to $\Omega$ and at the boundary of $\Omega$, respectively. Denote by $|A|$ the cardinality of a set $A$. \begin{lemma}\label{lem:bdy} If $\partial\Omega$ admits a $(d-1)$-dimensional Minkowski content, then $|\mathcal{T}_{\mathrm{bdy}}|=O(n^{1-1/d})$. \end{lemma} \begin{proof} This was shown in the proof of Theorem 4 in \cite{he:owen:2016}; we reproduce the argument here for completeness. From Lemma~\ref{lem:ei}, the diameter of $E_i$, denoted by $r_i$, satisfies $r_i\leq 2\sqrt{d+3}n^{-1/d}$. Let $\epsilon=2\sqrt{d+3}n^{-1/d}$. From~\eqref{eq:fmc}, for any fixed $\delta>2\mathcal{M}(\partial \Omega)$, there exists $\epsilon_0>0$ such that $\lambda_d((\partial \Omega)_\epsilon)<\delta \epsilon$ whenever $\epsilon<\epsilon_0$. Assume that $n>(2\sqrt{d+3}/\epsilon_0)^d$. Thus $r_i\le \epsilon<\epsilon_0$. Note that $\cup_{i\in \mathcal{T}_{\mathrm{bdy}}}E_i\subset (\partial \Omega)_\epsilon$. This leads to $$|\mathcal{T}_{\mathrm{bdy}}|\leq \frac{\lambda_d((\partial \Omega)_\epsilon)}{\lambda_d(E_i)}\leq \frac{\delta\epsilon}{n^{-1}}=2\sqrt{d+3}\delta n^{1-1/d},$$ which completes the proof. \end{proof} \begin{theorem}\label{thm:disCLT} Let $f(X)=g(X)1_{\Omega}(X)$, where $g(X)\in C^1([0,1]^d)$, $\Omega\subset[0,1]^d$ and $\partial\Omega$ admits a $(d-1)$-dimensional Minkowski content.
Suppose that $\sigma_\Omega^2=\int_{\Omega}\Vert\nabla g(X)\Vert^{2}\mathrm{d} X>0$. Then for all sufficiently large $n$, \begin{equation}\label{eq:lbdis} \mathrm{Var}(\hat{\mu}_{n})\geq \frac{\sigma_\Omega^2}{96}2^{-2/d-d}n^{-1-2/d}. \end{equation} If $d>2$, then \begin{equation}\label{eq:disHilbertCLT} \frac{\hat{\mu}_{n}-\mu}{\sqrt{\mathrm{Var}(\hat{\mu}_{n})}}\to N(0,1), \end{equation} in distribution as $n=b^m\to\infty$. \end{theorem} \begin{proof} Following the notation in the proof of Theorem~\ref{thm:HilbertCLTsmooth}, we have \begin{equation}\label{eq:decom} s_{n}^{2}=\sum_{i=1}^{n}\sigma_{i}^{2} =\sum_{i\in\mathcal{T}_{\mathrm{int}}}\mathrm{Var}(g(X^{(i)})) +\sum_{i\in\mathcal{T}_{\mathrm{bdy}}}\mathrm{Var}(g(X^{(i)})1_{\Omega}(X^{(i)})). \end{equation} Similar to the proof of Theorem~\ref{thm:HilbertCLTsmooth}, since $g\in C^1([0,1]^d)$, for $i\in\mathcal{T}_{\mathrm{int}}$ we have \begin{equation*} \mathrm{Var}(g(X^{(i)})) \ge 2^{-(1+d)}\widetilde{\mathrm{Var}}(g(X^{(i)})), \qquad \widetilde{\mathrm{Var}}(g(X^{(i)})) = \frac{1}{12\cdot 2^{2\ell}}\Vert\nabla g(c_i)\Vert^{2}+o(2^{-2\ell}), \end{equation*} where $c_i$ is the center of the cube $\tilde{E}_i\subset E_i$ defined there. From \eqref{eq:decom}, we find that \begin{equation} s_{n}^{2}\geq\sum_{i\in\mathcal{T}_{\mathrm{int}}}\mathrm{Var}(g(X^{(i)}))\ge 2^{-(1+d)}\left(\frac{1}{12\cdot 2^{2\ell}}\sum_{i\in\mathcal{T}_{\mathrm{int}}}\Vert\nabla g(c_i)\Vert^{2}+o(2^{-2\ell}\vert\mathcal{T}_{\mathrm{int}}\vert)\right). \end{equation} Note that \begin{align} \int_{\Omega}\Vert\nabla g(X)\Vert^{2}\mathrm{d} X&=\lim_{n\to\infty}\frac1n\sum_{i=1}^{n}\Vert\nabla g(c_i)\Vert^{2}1_\Omega(c_i)\notag\\&=\lim_{n\to\infty}\frac1n\left(\sum_{i\in\mathcal{T}_{\mathrm{int}}}\Vert\nabla g(c_i)\Vert^{2}+\sum_{i\in\mathcal{T}_{\mathrm{bdy}}}\Vert\nabla g(c_i)\Vert^{2}1_\Omega(c_i)\right)\label{eq:nonbdy}\\ &=\lim_{n\to\infty}\frac1n\sum_{i\in\mathcal{T}_{\mathrm{int}}}\Vert\nabla g(c_i)\Vert^{2},\notag \end{align} where we picked $c_i\in E_i\backslash\Omega\neq\emptyset$ for $i\in\mathcal{T}_{\mathrm{bdy}}$ so that the last term of \eqref{eq:nonbdy} is actually zero. Therefore, similar to \eqref{eq:lowerbound}, we have \begin{equation}\label{eq:lowerbound2} \liminf_{n\to\infty} \frac{s_{n}^{2}}{n^{1-\frac{2}{d}}}\ge\frac{1}{96\cdot 2^{2/d+d}}\int_{\Omega}\Vert\nabla g(X)\Vert^{2}\mathrm{d} X>0, \end{equation} which establishes \eqref{eq:lbdis}. On the other hand, \begin{align*} &\sum_{i=1}^{n}\mathbb{E}\left[\left|f(X^{(i)})-\mathbb{E}[f(X^{(i)})]\right|^{2+\delta}\right] \\ &=\sum_{i\in\mathcal{T}_{\mathrm{int}}}\mathbb{E}\left[\left|g(X^{(i)})-\mathbb{E}[g(X^{(i)})]\right|^{2+\delta}\right] +\sum_{i\in\mathcal{T}_{\mathrm{bdy}}}\mathbb{E}\left[\left|g(X^{(i)})1_{\Omega}(X^{(i)})-\mathbb{E}[g(X^{(i)})1_{\Omega}(X^{(i)})]\right|^{2+\delta}\right]. \end{align*} Again, for any $\delta>0$, \begin{equation*} \mathbb{E}[\vert g(X^{(i)})-\mathbb{E}[g(X^{(i)})]\vert^{2+\delta}] \leq C_{d,\delta}n^{-\frac{2+\delta}{d}}, \end{equation*} where $C_{d,\delta}>0$ is some constant that depends only on $d$ and $\delta$. This leads to \begin{equation} \sum_{i\in\mathcal{T}_{\mathrm{int}}}\mathbb{E}[|g(X^{(i)})-\mathbb{E}[g(X^{(i)})]|^{2+\delta}]\leq C_{d,\delta}n^{1-\frac{2+\delta}{d}}. \end{equation} It follows from \eqref{eq:lowerbound2} that \begin{equation}\label{eq:cond1} \limsup_{n\rightarrow\infty}\frac{1}{s_{n}^{2+\delta}} \sum_{i\in\mathcal{T}_{\mathrm{int}}}\mathbb{E}\left[\left|g(X^{(i)})-\mathbb{E}[g(X^{(i)})]\right|^{2+\delta}\right] \le \limsup_{n\rightarrow\infty}\frac{C_{d,\delta}n^{1-\frac{2+\delta}{d}}}{s_{n}^{2+\delta}}=0.
\end{equation} By the continuity of $g$, there is a constant $D$ with $|g(X)|\leq D$ for all $X\in[0,1]^{d}$. Therefore, \begin{equation} \sum_{i\in\mathcal{T}_{\mathrm{bdy}}}\mathbb{E}\left[\left|g(X^{(i)})1_{\Omega}(X^{(i)})-\mathbb{E}[g(X^{(i)})1_{\Omega}(X^{(i)})]\right|^{2+\delta}\right] \leq (2D)^{2+\delta}|\mathcal{T}_{\mathrm{bdy}}|. \end{equation} By Lemma~\ref{lem:bdy}, we have $|\mathcal{T}_{\mathrm{bdy}}|=O(n^{1-1/d})$. As a result, \begin{align} &\limsup_{n\rightarrow\infty}\frac{1}{s_{n}^{2+\delta}} \sum_{i\in\mathcal{T}_{\mathrm{bdy}}}\mathbb{E}\left[\left|g(X^{(i)})1_{\Omega}(X^{(i)})-\mathbb{E}[g(X^{(i)})1_{\Omega}(X^{(i)})]\right|^{2+\delta}\right] \\ &\leq\limsup_{n\rightarrow\infty} \frac{(2D)^{2+\delta}|\mathcal{T}_{\mathrm{bdy}}|}{s_{n}^{2+\delta}} \notag\\ &=\limsup_{n\rightarrow\infty}(2D)^{2+\delta}n^{\frac{1+\delta}d-\frac\delta 2}\left(\frac{n^{1-\frac 2d}}{s_{n}^2}\right)^{\frac{2+\delta}2}\frac{|\mathcal{T}_{\mathrm{bdy}}|}{n^{1-1/d}}=0,\label{eq:cond2} \end{align} provided that $d>2$ and $\delta>2/(d-2)$, so that $(1+\delta)/d-\delta/2<0$. Combining \eqref{eq:cond1} and \eqref{eq:cond2}, the Lyapunov condition is verified. The asymptotic normality then follows by applying the Lyapunov CLT again. \end{proof} \cite{he:owen:2016} gave an upper bound of $O(n^{-1-1/d})$ for $\mathrm{Var}(\hat{\mu}_{n})$ if $f(X)=g(X)1_{\Omega}(X)$, where $g$ is Lipschitz continuous. Theorem~\ref{thm:disCLT} provides a lower bound of order $n^{-1-2/d}$. For discontinuous integrands, we cannot expect a lower bound that asymptotically matches the upper bound in general, because taking $\Omega=[0,1]^d$ recovers the smooth case, for which the rate $n^{-1-2/d}$ in \eqref{eq:lbdis} is tight. To establish the asymptotic normality, Theorem~\ref{thm:disCLT} requires $d>2$. It is not clear in general whether the asymptotic normality holds for $d=1,2$. If the last term of \eqref{eq:decom} admitted a matching lower bound of order $n^{1-1/d}$, one would have the asymptotic normality for $d=2$. For $d=1$, let us consider the function $f(X)=g(X)1_{\{X>\theta\}}(X)$ for some $\theta \in [0,1]$. If $\theta$ is a multiple of $b^{-m_0}$ for some $m_0>0$, then the error over the set $\mathcal{T}_{\mathrm{bdy}}$ vanishes whenever $m\ge m_0$. The Lyapunov condition is then verified by \eqref{eq:cond1} alone. As a result, the asymptotic normality holds for this case. If $\theta$ does not have a terminating $b$-adic representation, we may require some additional conditions to ensure the asymptotic normality. Theorem~\ref{thm:disCLT} also requires that $\int_{\Omega}\Vert\nabla g(X)\Vert^{2}\mathrm{d} X>0$. That condition actually rules out the case in which $f$ is an indicator function. The analysis of indicator functions is presented in the next subsection. \subsection{Indicator Functions}\label{sec:ind} We now consider indicator functions of the form $f(X)=1_{\Omega}(X)$. Recall that $\mathcal{T}_{\mathrm{bdy}}$ denotes the index set of the $E_i$ that touch the boundary of $\Omega$. In this case, the variance of the estimate reduces to \begin{equation}\label{eq:indvar} \mathrm{Var}(\hat{\mu}_n) = \frac{1}{n^2}\sum_{i\in \mathcal{T}_{\mathrm{bdy}}}\mathrm{Var}(1_{\Omega}(X^{(i)})), \end{equation} where $\mathrm{Var}(1_{\Omega}(X^{(i)}))=n\lambda_d(E_{i}\cap\Omega)(1-n\lambda_d(E_{i}\cap\Omega))$. Motivated by the proof of Theorem~\ref{thm:disCLT}, one needs to derive a suitable lower bound for $s_n^2=n^2\mathrm{Var}(\hat{\mu}_n)$ in order to apply the Lyapunov CLT. Note that $s_n^2\leq |\mathcal{T}_{\mathrm{bdy}}|/4$.
Assume that $\partial\Omega$ admits a $(d-1)$-dimensional Minkowski content; then $s_n^2$ has an upper bound of order $n^{1-1/d}$, since $|\mathcal{T}_{\mathrm{bdy}}|=O(n^{1-1/d})$ by Lemma~\ref{lem:bdy}. It is easy to see that if $s_n^2\ge c n^{1-1/d}$ for some constant $c>0$, then the Lyapunov condition is satisfied for any $d>1$. However, it is possible that $\mathcal{T}_{\mathrm{bdy}}=\emptyset$ along a strictly increasing sequence of sample sizes $n_k$, $k\ge 1$, if $\Omega$ is a cube. This leads to an identically zero variance and hence $s_{n_k}^2=0$. Therefore, to study the asymptotic normality for indicator functions, we need the following assumption on $\Omega$, instead of the Minkowski content condition. \begin{assumption}\label{assump:ind} For $\Omega\subset [0,1]^d$, there exist a constant $c>0$ and an $N_0\ge 1$ such that for any $n\geq N_0$, \begin{equation}\label{eq:asump:ind1} \inf_{i\in\mathcal{T}_{\mathrm{bdy}}}\mathrm{Var}(1_{\Omega}(X^{(i)}))\ge c. \end{equation} Moreover, \begin{equation}\label{eq:asump:ind2} \lim_{n\rightarrow\infty}|\mathcal{T}_{\mathrm{bdy}}|=\infty. \end{equation} \end{assumption} \begin{theorem}\label{thm:indCLT} Let $f(X)=1_{\Omega}(X)$, where $\Omega$ satisfies Assumption~\ref{assump:ind}. Then \begin{equation}\label{eq:indHilbertCLT} \frac{\hat{\mu}_{n}-\mu}{\sqrt{\mathrm{Var}(\hat{\mu}_{n})}}\to N(0,1), \end{equation} in distribution as $n=b^m\to\infty$. \end{theorem} \begin{proof} By \eqref{eq:indvar} and \eqref{eq:asump:ind1}, we have $s_n^2\ge c |\mathcal{T}_{\mathrm{bdy}}|$. The Lyapunov condition \begin{align*} \lim_{n\rightarrow\infty}\frac{1}{s_{n}^{2+\delta}}\sum_{i=1}^{n}\mathbb{E}[\vert f(X^{(i)})-\mathbb{E}[f(X^{(i)})]\vert^{2+\delta}] &\leq\limsup_{n\rightarrow\infty}\frac{2^{2+\delta}|\mathcal{T}_{\mathrm{bdy}}|}{(c |\mathcal{T}_{\mathrm{bdy}}|)^{(2+\delta)/2}}\\ &=\limsup_{n\rightarrow\infty} \left(\frac{2}{\sqrt{c}}\right)^{2+\delta}|\mathcal{T}_{\mathrm{bdy}}|^{-\delta/2}=0 \end{align*} is satisfied for any $\delta>0$, where we used the condition \eqref{eq:asump:ind2} and $c>0$. Applying the Lyapunov CLT, we obtain \eqref{eq:indHilbertCLT}. \end{proof} Note that for $d=1$, the condition \eqref{eq:asump:ind2} does not hold if $\Omega$ is a union of $k$ disjoint intervals in $[0,1]$, where $k$ is a given positive integer. This is because $|\mathcal{T}_{\mathrm{bdy}}|\le 2k$ for all possible $n$. Actually, for such cases, the CLT does not hold, since the integration error is distributed over (at most) $k+1$ possible values for any $n=b^m$; see also \cite{lecu:2010} for discussions on randomly-shifted lattice rules. Define $A_n(c)=\{i\in \mathcal{T}_{\mathrm{bdy}}|\mathrm{Var}(1_{\Omega}(X^{(i)}))\ge c\}$. Assumption~\ref{assump:ind} can be weakened slightly to the requirement that there exist $c>0$ and $\delta>0$ such that $$\limsup_{n\rightarrow\infty} \frac{|\mathcal{T}_{\mathrm{bdy}}|}{|A_n(c)|^{1+\delta}}=0.$$ If, additionally, $\partial\Omega$ admits a $(d-1)$-dimensional Minkowski content, it suffices to verify $$\limsup_{n\rightarrow\infty} n^{1-1/d}|A_n(c)|^{-1-\delta}=0,$$ or equivalently, $|A_n(c)|^{-1}=o(n^{(1/d-1)/(1+\delta)})$. This is because $|\mathcal{T}_{\mathrm{bdy}}|=O(n^{1-1/d})$. However, it may be hard to verify Assumption~\ref{assump:ind} for general $\Omega$. As an illustrative example, we next show that the assumption holds for the case $\Omega=\{X=(X_1,X_2)\in[0,1]^2|X_1+X_2\ge 1\}$. We restrict our attention to the van der Corput sequence in base $b=2$, so that $n=2^m$.
In this case, $E_i$ is a square with sides of length $1/\sqrt{n}$ when $m$ is even; when $m$ is odd, $E_i$ is a rectangle with width $\sqrt{2/n}$ and height $1/\sqrt{2n}$; see Figure~\ref{fig:grids} for illustrations. We thus have \begin{equation*} |\mathcal{T}_{\mathrm{bdy}}|= \begin{cases} \sqrt{n},&m \text{ is even},\\ \sqrt{2n},&m \text{ is odd}. \end{cases} \end{equation*} Moreover, for all $i\in \mathcal{T}_{\mathrm{bdy}}$, we find that \begin{equation*} \mathrm{Var}(1_{\Omega}(X^{(i)}))= \begin{cases} 1/4,&m \text{ is even},\\ 3/16,&m \text{ is odd}. \end{cases} \end{equation*} Therefore, Assumption~\ref{assump:ind} is satisfied with $c=3/16$ and $N_0=1$, so that the CLT holds for this example. Similarly, it is easy to see that the CLT still holds for the set $\Omega=\{X=(X_1,\dots,X_d)\in[0,1]^d|\sum_{i=1}^dX_i\ge d/2\}$ in $d$ dimensions. \begin{figure}[ht] \includegraphics[width = \hsize]{grids} \caption{Five splits of $[0,1]^2$ for HSFC stratification; the dotted line is $X_1+X_2=1$.}\label{fig:grids} \end{figure} \section{Numerical Results}\label{sec:numer} In this section, we present some numerical studies to assess the normality of the standardized errors. We also examine the lower bound established in Theorem~\ref{thm:HilbertCLTsmooth} for smooth functions. We consider the integrals of the following functions: \begin{itemize} \item a smooth function, $f_1(X)=12^{d/2}\prod_{i=1}^d(X_i-\frac{1}{2})$, \item a piecewise smooth function, $f_2(X)=(X_1-X_2)1_{\{\sum_{i=1}^d X_i\ge d/2\}}(X)$, and \item an indicator function, $f_3(X)=1_{\{\sum_{i=1}^d X_i\ge d/2\}}(X)$. \end{itemize} Note that for any $d\ge 2$, the exact values of these integrals are $\mu = 0$, $\mu = 0$, and $\mu = 1/2$, respectively. The smooth function $f_1$ was studied in \cite{owen:1997} and satisfies the smoothness condition required in \cite{loh:2003}. The scrambled $(t,m,d)$-net integration of this smooth function has a variance of $O(n^{-3}(\log n)^{d-1})$ \cite{owen:1997}, and it enjoys asymptotic normality when $t=0$, as confirmed by \cite{loh:2003}. However, for the last two discontinuous functions, there is no theoretical guarantee supporting asymptotic normality for scrambled net quadratures, since these functions do not fit into the class of smooth functions required in \cite{loh:2003}. We make comparisons with randomized Sobol' points, which use the nested uniform scrambling of \cite{owen:1995} or the linear scrambling of \cite{mato:1998}. We use the C++ library of T. Kollig and A. Keller (\url{http://www.uni-kl.de/AG-Heinrich/Sample Pack.html}) to generate the nested uniform scrambled Sobol' (NUS--Sobol') points. To generate the linear scrambled Sobol' (LS--Sobol') points, we make use of the generator \verb|scramble| in MATLAB. To calculate Hilbert's mapping function $H(x)$, we use the C++ source code in \cite{lawder:2000}, which is based on the algorithm in \cite{butz:1971}. To estimate the variances of these estimators, we use $R$ independent replications $\hat{\mu}_{n}^{(1)},\dots,\hat{\mu}_{n}^{(R)}$ of the sampling schemes. We then estimate the variances by the corresponding empirical variances \begin{equation*} \hat{\sigma}^2_n = \frac 1{R-1}\sum_{i=1}^R (\hat{\mu}_{n}^{(i)}-\bar{\mu})^2, \end{equation*} where $\bar{\mu} = (1/R) \sum_{i=1}^R \hat{\mu}_n^{(i)}$. To assess asymptotic normality, we plot the kernel-smoothed density of the standardized errors $$Z_i=\frac{\hat{\mu}_n^{(i)}-\mu}{\hat{\sigma}_n},\ i=1,\dots,R,$$ using the function \verb|ksdensity| in MATLAB.
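For concreteness, this replication procedure can be sketched in a few lines of Python (a minimal sketch, not the MATLAB/C++ pipeline used for the reported results; \texttt{estimator} stands for any randomized unbiased estimator of $\mu$, such as the \texttt{hsfc\_estimate} sketch in the introduction, and \texttt{gaussian\_kde} plays the role of \texttt{ksdensity}):
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def standardized_errors(estimator, mu, R=1000, seed=0):
    # R independent replications of the randomized estimator
    rng = np.random.default_rng(seed)
    est = np.array([estimator(rng) for _ in range(R)])
    sigma_hat = est.std(ddof=1)     # empirical standard deviation
    return (est - mu) / sigma_hat   # Z_1, ..., Z_R

# kernel-smoothed density of the Z_i, to be compared with N(0,1):
# Z = standardized_errors(lambda rng: my_estimate(rng), mu)
# density = gaussian_kde(Z)
\end{verbatim}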
In our experiments, we take $R=1000$ and $n=2^{14}=16384$, in order to get good accuracy in the estimation of the target density. Now consider the smooth function $f_1(X)$. We find that $f_1\in C^1([0,1]^d)$, and \begin{align*} \int_{[0,1]^{d}}\Vert\nabla f_1(X)\Vert^{2}\mathrm{d} X&=d\int_{[0,1]^{d}}\left(\frac{\partial f_1(X)}{\partial X_1}\right)^{2}\mathrm{d} X\\ &=12^{d}d\int_{[0,1]^{d-1}}\prod_{i=2}^d\left(X_i-\frac{1}{2}\right)^2\mathrm{d} X\\ &=12d>0. \end{align*} Therefore, by Theorem~\ref{thm:HilbertCLTsmooth}, the HSFC-based estimate obeys the CLT for $f_1$. The lower bound in \eqref{eq:lbsmooth} becomes $$\mathrm{Var}(\hat{\mu}_n)\geq 2^{-3-d-2/d}dn^{-1-2/d}.$$ Note that $f_1(X)$ is a Lipschitz function whose modulus $M$ satisfies $$M\le \sum_{i=1}^d \sup_{X\in[0,1]^d}\left\lvert\frac{\partial f_1(X)}{\partial X_i}\right\rvert=12^{d/2}2^{1-d}d.$$ Together with \eqref{eq:upperbd}, we obtain the upper bound $$\mathrm{Var}(\hat{\mu}_n)\leq 16(d+3)3^dd^2n^{-1-2/d}.$$ Figure~\ref{fig:bounds} shows the natural logarithm of the empirical variances of the HSFC-based estimator for $n=2^m$, $m=0,\dots,18$. The true variance of Monte Carlo sampling is $1/n$ for all $d\geq 1$. The lower bound and the upper bound above are also presented. We observe that the empirical variances decay at the rate $n^{-2}$ for $d=2$, and at the rate $n^{-5/4}$ for $d=8$. This supports the claim that the rate $n^{-1-2/d}$ for HSFC sampling is tight for smooth functions. Figure~\ref{fig:density1} displays smoothed density estimates of the standardized errors for plain Monte Carlo, LS--Sobol', NUS--Sobol', and HSFC. As expected, a nearly normal distribution appears for both the Monte Carlo and HSFC schemes. For the nested uniform scrambling scheme, a nearly normal distribution is also observed for $d=2$. This is because the Sobol' sequence is a $(t,d)$-sequence in base $b=2$ with $t=0$ for $d=2$ and $t>0$ for $d=8$ \cite{dick:2008}. Therefore, the CLT holds for $d=2$, as confirmed by \cite{loh:2003}. For $d=8$, on the other hand, the density of the standardized errors does not look like a normal distribution. Even worse, for the linear scrambling scheme, the distribution of the standardized errors is very different from the normal distribution. It looks rather spiky for $d=2$. \begin{figure}[ht] \includegraphics[width=\hsize]{bounds} \caption{Decay of the empirical variance of HSFC sampling as a function of sample size in a log-log scale for $d=2,8$. The lower bound and the upper bound for the variance are included. The true variance of Monte Carlo (MC) sampling is also presented for comparison.}\label{fig:bounds} \end{figure} \begin{figure}[ht] \includegraphics[width=\hsize]{density1} \caption{Empirical verification of asymptotic normality for the integration of the smooth function $f_1(X)$ with plain Monte Carlo (MC), LS--Sobol', NUS--Sobol', and HSFC; the dotted curve is the true density of the standard normal $N(0,1)$.}\label{fig:density1} \end{figure} For the two discontinuous functions $f_2(X)$ and $f_3(X)$, the CLT holds for HSFC sampling (see Sections~\ref{sec:disc} and \ref{sec:ind} for details). Figures~\ref{fig:density2} and \ref{fig:density3} show smoothed density estimates of the standardized errors for the two functions, respectively. As expected, a nearly normal distribution appears for both the Monte Carlo and HSFC schemes with $d=2,8$.
More interestingly, a nearly normal distribution is also observed for the nested uniform scrambling scheme in all cases, although it is not clear whether the CLT holds for scrambled net integration of discontinuous functions. Similar to the case of the smooth function, the integration error distribution for the linear scrambling scheme is far from the normal distribution, particularly for $d=2$. Compared to the nested uniform scrambling, the linear scrambling requires less randomness, and therefore its samples may be strongly dependent. That might explain why the CLT does not hold in most cases for randomized QMC with linear scrambling. \begin{figure}[ht] \includegraphics[width=\hsize]{density2} \caption{Empirical verification of asymptotic normality for the integration of the piecewise smooth function $f_2(X)$ with plain Monte Carlo (MC), LS--Sobol', NUS--Sobol', and HSFC; the dotted curve is the true density of the standard normal $N(0,1)$.}\label{fig:density2} \end{figure} \begin{figure}[ht] \includegraphics[width=\hsize]{density3} \caption{Empirical verification of asymptotic normality for the integration of the indicator function $f_3(X)$ with plain Monte Carlo (MC), LS--Sobol', NUS--Sobol', and HSFC; the dotted curve is the true density of the standard normal $N(0,1)$.}\label{fig:density3} \end{figure} \section{Concluding Remarks}\label{eq:final} \cite{loh:2003} showed that the scrambled net estimate has an asymptotic normal distribution for certain smooth functions. In a very recent work, \cite{basu:2016} found that the scrambled geometric net estimate has an asymptotic normal distribution for certain smooth functions defined on products of suitable subsets of $\Re^d$. The smoothness conditions required in those two papers are more restrictive than the smoothness condition required in Section~\ref{sec:smooth}. The proofs in both \cite{loh:2003} and \cite{basu:2016} relied on establishing a lower bound on the variance of the estimate that matches the upper bound up to constants. The proofs in this paper rely on establishing a suitable lower bound and then making use of the Lyapunov CLT. We also proved the asymptotic normality of the HSFC-based stratified estimate for certain discontinuous functions. To the best of our knowledge, it is not clear whether the asymptotic normality of the scrambled net estimate holds for discontinuous functions. \cite{he:wang:2015} provided some upper bounds on scrambled net variances for piecewise smooth functions of the same form $f(X)=g(X)1_{\Omega}(X)$ studied in Section~\ref{sec:disc}, but with $g$ of bounded variation in the sense of Hardy and Krause instead. For future research, following the procedures in \cite{loh:2003}, it is desirable to establish a matching lower bound for the variance of scrambled net integration of discontinuous functions. \cite{he:owen:2016} used the randomized van der Corput sequence in base $b$ as the input of HSFC sampling. This makes the sampling scheme extensible. As in \cite{loh:2003} and \cite{basu:2016}, the analysis in this paper is based on sample sizes of the form $n=b^m$, not on arbitrary $n$. This scheme turns out to be a kind of stratified sampling. In contrast to the usual grid sampling, it is extensible and does not require highly composite sample sizes, particularly for large $d$. The results on HSFC sampling can also be applied to the usual grid sampling. \section*{Acknowledgments} The authors gratefully thank Professor Art B. Owen for the helpful comments.
Zhijian He is supported by the National Science Foundation of China under Grant 71601189. Lingjiong Zhu is supported by the NSF Grant DMS-1613164.
\section*{Introduction} Let $A$ be a finite-dimensional Hopf algebra over a field $\Bbbk $ of characteristic zero such that the coradical $H$ of $A$ is a sub-Hopf algebra (i.e. $A$ has the dual Chevalley property). Denote by $\mathcal{D}\left( A\right)$ the diagram of $A$. The main aim of this paper (see Theorem \ref{teo:main}) is to prove that, if the third Hochschild cohomology group in ${_{H}^{H}\mathcal{YD}}$ of the algebra $\mathcal{D}\left( A\right)$ with coefficients in $\Bbbk $ vanishes, in symbols $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathcal{D}\left( A\right) ,\Bbbk \right) =0$, then $A$ is quasi-isomorphic to the Radford-Majid bosonization $E\#H$ of some connected bialgebra $E$ in ${_{H}^{H}\mathcal{YD}}$ with $\mathrm{gr}\left( E\right) \cong \mathcal{D}\left( A\right) $ as bialgebras in ${_{H}^{H}\mathcal{YD}}$. The paper is organized as follows. Let $H$ be a Hopf algebra over a field $\Bbbk$. In Section \ref{sec:1} we investigate the properties of coalgebras with multiplication and unit in the category ${_{H}^{H}\mathcal{YD}}$ (in particular of coquasi-bialgebras) and their associated graded coalgebras. The main result of this section, Theorem \ref{teo:grHopf}, establishes that the associated graded coalgebra $\mathrm{gr}Q$ of a connected coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$ is a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$. In Section \ref{sec:2} we study the deformation of coquasi-bialgebras in ${_{H}^{H}\mathcal{YD}}$ by means of gauge transformations. In Proposition \ref{pro:deformSmash} we investigate its behaviour with respect to bosonization, while in Proposition \ref{pro:grgaugeYD} we do so with respect to the associated graded coalgebra. In Section \ref{sec:3} we consider the associated graded coalgebra in case the Hopf algebra $H$ is semisimple and cosemisimple (e.g. $H$ is finite-dimensional cosemisimple over a field of characteristic zero). In particular, in Theorem \ref{teo:GelakiYD}, we prove that a finite-dimensional connected coquasi-bialgebra $Q$ in ${_{H}^{H}\mathcal{YD}}$ is gauge equivalent to a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$ whenever $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q,\Bbbk \right) =0$. This result is inspired by \cite[Proposition 2.3]{EG}. In Section \ref{sec:4}, we focus on the link between $\mathrm{H}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right)$ and the invariants of $\mathrm{H}^{n}\left( B,\Bbbk \right)$, where $B$ is a bialgebra in ${_{H}^{H}\mathcal{YD}}$. In particular, in Proposition \ref{pro:D(H)} we show that $\mathrm{H}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right)$ is isomorphic to $\mathrm{H}^{n}\left( B,\Bbbk \right) ^{D(H)}$, which is a subspace of $\mathrm{H}^{n}\left( B,\Bbbk \right) ^{H}\cong\mathrm{H}^{n}\left( B\#H,\Bbbk \right)$; see Corollary \ref{coro:K}. Section \ref{sec:5} is devoted to the proof of the main result of the paper, the aforementioned Theorem \ref{teo:main}. In Section \ref{sec:6} we provide examples where $\mathrm{H}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right)=0$ in case $B$ is the Nichols algebra $\cB(V)$ of a Yetter-Drinfeld module $V$. In particular we show that $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \cB(V),\Bbbk \right)$ can be zero although $\mathrm{H}^3\left( \cB(V)\#H,\Bbbk \right)$ is non-trivial. \section*{Preliminaries} Given a category ${\mathcal{C}}$ and objects $M,N\in {\mathcal{C}}$, the notation ${\mathcal{C}}\left( M,N\right) $ stands for the set of morphisms in ${\mathcal{C}}$.
This notation will be mainly applied to the case where ${\mathcal{C}}$ is the category of vector spaces $\mathbf{Vec}_{\Bbbk }$ over a field $\Bbbk $ or ${\mathcal{C}}$ is the category of Yetter-Drinfeld modules ${_{H}^{H}\mathcal{YD}}$ over a Hopf algebra $H$. The set of natural numbers including $0$ is denoted by $\N_0$ while $\N$ denotes the same set without $0$. \section{Yetter-Drinfeld}\label{sec:1} \begin{definition} Let $C$ be a coalgebra. Denote by $C_{n}$ the $n$-th term of the coradical filtration of $C$ and set $C_{-1}:=0.$ For every $x\in C,$ we set \begin{equation*} \left\vert x\right\vert :=\min \left\{ i\in \N_0:x\in C_{i}\right\} \qquad \text{and}\qquad \overline{x}:=x+C_{\left\vert x\right\vert -1}. \end{equation*} Note that, for $x=0$, we have $\left\vert x\right\vert =0.$ One can define the associated graded coalgebra \begin{equation*} \mathrm{gr}C:=\oplus _{i\in \N_0}\frac{C_{i}}{C_{i-1}} \end{equation*} with structure given, for every $x\in C$, by \begin{eqnarray} \Delta _{\mathrm{gr}C}\left( \overline{x}\right) &=&\sum_{0\leq i\leq \left\vert x\right\vert }\left( x_{1}+C_{i-1}\right) \otimes \left( x_{2}+C_{\left\vert x\right\vert -i-1}\right) , \label{form:DeltaGr} \\ \varepsilon _{\mathrm{gr}C}\left( \overline{x}\right) &=&\delta _{\left\vert x\right\vert ,0}\varepsilon _{C}\left( x\right) . \label{form:EpsGr} \end{eqnarray} \end{definition} \begin{claim} \label{claim:basis}For every $i\in \N_0$, take a basis $\left\{ \overline{x^{i,j}}\mid j\in B_{i}\right\} $ of the $\Bbbk $-module $C_{i}/C_{i-1}$ with $\overline{x^{i,j}}\neq \overline{x^{i,l}}$ for $j\neq l$ and \begin{equation*} \left\vert x^{i,j}\right\vert =i. \end{equation*} Then $\left\{ x^{i,j}\mid 0\leq i\leq n,j\in B_{i}\right\} $ is a basis of $C_{n}$ and $\left\{ x^{i,j}\mid i\in \N_0,j\in B_{i}\right\} $ is a basis of $C$. Assume that $C$ has a distinguished grouplike element $1=1_{C}\neq 0$ and take $i>0.$ If $\varepsilon \left( x^{i,j}\right) \neq 0$ then we have that \begin{equation*} \overline{x^{i,j}-\varepsilon \left( x^{i,j}\right) 1}=\overline{x^{i,j}} \end{equation*} so that we can take $x^{i,j}-\varepsilon \left( x^{i,j}\right) 1$ in place of $x^{i,j}.$ In other words we can assume \begin{equation} \varepsilon \left( x^{i,j}\right) =0,\text{ for every }i>0,j\in B_{i}. \label{form:XijNorm} \end{equation} It is well known that there is a $\Bbbk $-linear isomorphism $\varphi :C\rightarrow \mathrm{gr}C$ defined on the basis by $\varphi \left( x^{i,j}\right) :=\overline{x^{i,j}}.$ We compute \begin{equation*} \varepsilon _{\mathrm{gr}C}\varphi \left( x^{i,j}\right) =\varepsilon _{\mathrm{gr}C} \left( \overline{x^{i,j}}\right) \overset{(\ref{form:EpsGr})}{=}\delta _{i,0}\varepsilon \left( x^{0,j}\right) \overset{(\ref{form:XijNorm})}{=}\varepsilon \left( x^{i,j}\right) . \end{equation*} Hence we obtain \begin{equation} \varepsilon _{\mathrm{gr}C}\circ \varphi =\varepsilon . \label{form:Gel1} \end{equation} \end{claim} Let $H$ be a Hopf algebra. A \textbf{coalgebra with multiplication and unit} in ${_{H}^{H}\mathcal{YD}}$ is a datum $\left( Q,m,u,\Delta ,\varepsilon \right) $ where $\left( Q,\Delta ,\varepsilon \right) $ is a coalgebra in ${_{H}^{H}\mathcal{YD}}$, $m:Q\otimes Q\rightarrow Q$ is a coalgebra morphism in ${_{H}^{H}\mathcal{YD}}$ called multiplication (which may fail to be associative) and $u:\Bbbk \rightarrow Q$ is a coalgebra morphism in ${_{H}^{H}\mathcal{YD}}$ called unit.
In this case we set $1_{Q}:=u\left( 1_{\Bbbk }\right) .$ Note that, for every $h\in H,k\in \Bbbk $, we have \begin{eqnarray} h1_{Q} &=&hu\left( 1_{\Bbbk }\right) =u\left( h1_{\Bbbk }\right) =u\left( \varepsilon _{H}\left( h\right) 1_{\Bbbk }\right) =\varepsilon _{H}\left( h\right) u\left( 1_{\Bbbk }\right) =\varepsilon _{H}\left( h\right) 1_{Q}, \label{form:1Qlin} \\ \left( 1_{Q}\right) _{-1}\otimes \left( 1_{Q}\right) _{0} &=&\left( u\left( 1_{\Bbbk }\right) \right) _{-1}\otimes \left( u\left( 1_{\Bbbk }\right) \right) _{0}=\left( 1_{\Bbbk }\right) _{-1}\otimes u\left( \left( 1_{\Bbbk }\right) _{0}\right) =1_{H}\otimes u\left( 1_{\Bbbk }\right) =1_{H}\otimes 1_{Q}. \label{form:1Qcolin} \end{eqnarray} \begin{proposition} \label{pro:CMU}Let $H$ be a Hopf algebra and let $\left( Q,m,u,\Delta ,\varepsilon \right) $ be a coalgebra with multiplication and unit in ${_{H}^{H}\mathcal{YD}}$. If $Q_{0}$ is a subcoalgebra of $Q$ in ${_{H}^{H}\mathcal{YD}}$ such that $Q_{0}\cdot Q_{0}\subseteq Q_{0},$ then $Q_{n}$ is a subcoalgebra of $Q$ in ${_{H}^{H}\mathcal{YD}}$ for every $n\in \N_0$. Moreover $Q_{a}\cdot Q_{b}\subseteq Q_{a+b}$ for every $a,b\in \N_0$ and the graded coalgebra $\mathrm{gr}Q,$ associated with the coradical filtration of $Q,$ is a coalgebra with multiplication and unit in ${_{H}^{H}\mathcal{YD}}$ with respect to the usual coalgebra structure and with multiplication and unit defined by \begin{eqnarray} m_{\mathrm{gr}Q}\left( \left( x+Q_{a-1}\right) \otimes \left( y+Q_{b-1}\right) \right) &:=&xy+Q_{a+b-1}, \label{eq:mgr} \\ u_{\mathrm{gr}Q}\left( k\right) &:=&k1_{Q}+Q_{-1}. \notag \end{eqnarray} \end{proposition} \begin{proof} The coalgebra structure of $Q$ induces a coalgebra structure on $\mathrm{gr}Q$. Since $Q_{0}$ is a subcoalgebra of $Q$ in ${_{H}^{H}\mathcal{YD}}$ and, for $n\geq 1$, one has $Q_{n}=Q_{n-1}\wedge _{Q}Q_{0},$ then inductively one proves that $Q_{n}$ is a subcoalgebra of $Q$ in ${_{H}^{H}\mathcal{YD}}$. As a consequence one gets that $\mathrm{gr}Q$ is a coalgebra in ${_{H}^{H}\mathcal{YD}}$ (this construction can be performed in the setting of monoidal categories under suitable assumptions, see e.g. \cite[Theorem 2.10]{AM}). Let us prove that $\mathrm{gr}Q$ inherits also a multiplication and unit. Let us check that $Q_{a}\cdot Q_{b}\subseteq Q_{a+b}$ for every $a,b\in \N_0$. We proceed by induction on $n=a+b.$ If $n=0$ there is nothing to prove. Let $n\geq 1$ and assume that $Q_{i}\cdot Q_{j}\subseteq Q_{i+j}$ for every $i,j\in \N_0$ such that $0\leq i+j\leq n-1$. Let $a,b\in \N_0$ be such that $n=a+b$.
Since $\Delta \left( Q_{a}\right) \subseteq \sum_{i=0}^{a}Q_{i}\otimes Q_{a-i}$ and $c_{Q,Q}\left( Q_{u}\otimes Q_{v}\right) \subseteq Q_{v}\otimes Q_{u},$ where $c_{Q,Q}$ denotes the braiding in ${_{H}^{H}\mathcal{YD}}$, using the compatibility condition between $\Delta $ and $m,$ one easily gets that $\Delta \left( Q_{a}\cdot Q_{b}\right) \subseteq Q_{a+b-1}\otimes Q+Q\otimes Q_{0}.$ \begin{invisible} We compute \begin{eqnarray*} \Delta \left( Q_{a}\cdot Q_{b}\right) &=&\Delta m\left( Q_{a}\otimes Q_{b}\right) =\left( m\otimes m\right) \Delta _{Q\otimes Q}\left( Q_{a}\otimes Q_{b}\right) \\ &=&\left( m\otimes m\right) \left( Q\otimes c_{Q,Q}\otimes Q\right) \left( \Delta \otimes \Delta \right) \left( Q_{a}\otimes Q_{b}\right) \\ &\subseteq &\left( m\otimes m\right) \left( Q\otimes c_{Q,Q}\otimes Q\right) \left( \left( \sum_{i=0}^{a}Q_{i}\otimes Q_{a-i}\right) \otimes \left( \sum_{j=0}^{b}Q_{j}\otimes Q_{b-j}\right) \right) \\ &\subseteq &\sum_{i=0}^{a}\sum_{j=0}^{b}\left( m\otimes m\right) \left( Q_{i}\otimes c_{Q,Q}\left( Q_{a-i}\otimes Q_{j}\right) \otimes Q_{b-j}\right) \\ &\subseteq &\sum_{i=0}^{a}\sum_{j=0}^{b}\left( m\otimes m\right) \left( Q_{i}\otimes Q_{j}\otimes Q_{a-i}\otimes Q_{b-j}\right) \\ &\subseteq &\sum_{i=0}^{a}\sum_{j=0}^{b}\left( Q_{i}\cdot Q_{j}\otimes Q_{a-i}\cdot Q_{b-j}\right) \subseteq \sum_{\substack{ 0\leq i\leq a, \\ 0\leq j\leq b, \\ i+j<a+b}}\left( Q_{i}\cdot Q_{j}\otimes Q_{a-i}\cdot Q_{b-j}\right) +\left( Q_{a}\cdot Q_{b}\otimes Q_{0}\cdot Q_{0}\right) \\ &\subseteq &\sum_{\substack{ 0\leq i\leq a, \\ 0\leq j\leq b, \\ i+j<a+b}}Q_{i+j}\otimes Q+Q\otimes Q_{0}\subseteq Q_{a+b-1}\otimes Q+Q\otimes Q_{0}. \end{eqnarray*} \end{invisible} Therefore $Q_{a}\cdot Q_{b}\subseteq Q_{a+b}.$ This property implies that we have a well-defined map in ${_{H}^{H}\mathcal{YD}}$ \begin{equation*} m_{\mathrm{gr}Q}^{a,b}:\frac{Q_{a}}{Q_{a-1}}\otimes \frac{Q_{b}}{Q_{b-1}}\rightarrow \frac{Q_{a+b}}{Q_{a+b-1}} \end{equation*} defined, for $x\in Q_{a}$ and $y\in Q_{b},$ by (\ref{eq:mgr}). This can be seen as the graded component of a morphism in ${_{H}^{H}\mathcal{YD}}$ that we denote by $m_{\mathrm{gr}Q}:\mathrm{gr}Q\otimes \mathrm{gr}Q\rightarrow \mathrm{gr}Q$. Let us check that $m_{\mathrm{gr}Q}$ is a coalgebra morphism in ${_{H}^{H}\mathcal{YD}}$. Consider a basis of $Q$ with terms of the form $x^{i,j}$ as in \ref{claim:basis}. Hence we can write the comultiplication in the form \begin{equation*} \Delta \left( x^{a,u}\right) =\sum_{s+t\leq a}\sum_{l,m}\eta _{s,t,l,m}^{a,u}x^{s,l}\otimes x^{t,m}. \end{equation*} Now, using (\ref{form:DeltaGr}), one gets that \begin{equation} \Delta _{\mathrm{gr}Q}\left( \overline{x^{a,u}}\right) =\sum_{0\leq i\leq a}\sum_{l,m}\eta _{i,a-i,l,m}^{a,u}\overline{x^{i,l}}\otimes \overline{x^{a-i,m}}.
\label{form:deltagr}
\end{equation}
Using that $\Delta _{\mathrm{gr}Q\otimes \mathrm{gr}Q}=\left( \mathrm{gr}Q\otimes c_{\mathrm{gr}Q,\mathrm{gr}Q}\otimes \mathrm{gr}Q\right) \left( \Delta _{\mathrm{gr}Q}\otimes \Delta _{\mathrm{gr}Q}\right) $ and (\ref{form:deltagr}), it is straightforward to check that $\left( m_{\mathrm{gr}Q}\otimes m_{\mathrm{gr}Q}\right) \Delta _{\mathrm{gr}Q\otimes \mathrm{gr}Q}\left( \overline{x^{a,u}}\otimes \overline{x^{b,v}}\right) =\Delta _{\mathrm{gr}Q}m_{\mathrm{gr}Q}\left( \overline{x^{a,u}}\otimes \overline{x^{b,v}}\right) .$ Moreover, since $\varepsilon _{\mathrm{gr}Q\otimes \mathrm{gr}Q}=\varepsilon _{\mathrm{gr}Q}\otimes \varepsilon _{\mathrm{gr}Q},$ we get that $\varepsilon _{\mathrm{gr}Q}m_{\mathrm{gr}Q}\left( \overline{x^{a,u}}\otimes \overline{x^{b,v}}\right) =\varepsilon _{\mathrm{gr}Q\otimes \mathrm{gr}Q}\left( \overline{x^{a,u}}\otimes \overline{x^{b,v}}\right) .$ This proves that $m_{\mathrm{gr}Q}$ is a coalgebra morphism in ${_{H}^{H}\mathcal{YD}}$.

The fact that $u_{\mathrm{gr}Q}:\Bbbk \rightarrow \mathrm{gr}Q,$ defined by $u_{\mathrm{gr}Q}\left( k\right) :=k1_{Q}+Q_{-1},$ is a coalgebra morphism in ${_{H}^{H}\mathcal{YD}}$ easily follows by means of (\ref{form:1Qlin}) and (\ref{form:1Qcolin}).
\end{proof}

\begin{definition}[{\protect\cite[Definition 5.2]{ABM}}]
\label{def: dual quasi braided} Let $H$ be a Hopf algebra. Recall that a \emph{coquasi-bialgebra} $(Q,m,u,\Delta ,\varepsilon ,\alpha )$ in the pre-braided monoidal category $_{H}^{H}\mathcal{YD}$ is a coalgebra $\left( Q,\Delta ,\varepsilon \right) $ in $_{H}^{H}\mathcal{YD}$ together with coalgebra homomorphisms $m:Q\otimes Q\rightarrow Q$ and $u:\Bbbk \rightarrow Q$ in $_{H}^{H}\mathcal{YD}$ and a convolution invertible element $\alpha \in {_{H}^{H}\mathcal{YD}}\left( Q^{\otimes 3},\Bbbk \right) $ (\emph{braided reassociator}) such that
\begin{eqnarray}
&&\alpha \left( Q\otimes Q\otimes m\right) \ast \alpha \left( m\otimes Q\otimes Q\right) =\left( \varepsilon \otimes \alpha \right) \ast \alpha \left( Q\otimes m\otimes Q\right) \ast \left( \alpha \otimes \varepsilon \right) ,  \label{form: alpha 3-cocycle} \\
&&\alpha \left( Q\otimes u\otimes Q\right) =\alpha \left( u\otimes Q\otimes Q\right) =\alpha \left( Q\otimes Q\otimes u\right) =\varepsilon _{Q\otimes Q},  \label{form: alpha unital} \\
&&m\left( Q\otimes m\right) \ast \alpha =\alpha \ast m\left( m\otimes Q\right) ,  \label{form: m quasi assoc} \\
&&m\left( u\otimes Q\right) =\mathrm{Id}_{Q}=m\left( Q\otimes u\right) .  \label{form: m unital}
\end{eqnarray}
Here $\ast $ denotes the convolution product and $Q^{\otimes 3}$ is the tensor product of coalgebras in $_{H}^{H}\mathcal{YD}$, whence it depends on the braiding of this category. Note that in (\ref{form: alpha unital}) any one of the three equalities, such as $\alpha \left( u\otimes Q\otimes Q\right) =\varepsilon _{Q\otimes Q},$ implies that $\alpha $ is unital.
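For instance, if $H=\Bbbk ,$ then ${_{H}^{H}\mathcal{YD}}$ is just $\mathbf{Vec}_{\Bbbk }$ with trivial braiding, and the notion above reduces to the classical notion of coquasi-bialgebra (also called dual quasi-bialgebra), i.e. the formal dual of Drinfeld's notion of quasi-bialgebra.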
\end{definition}

\begin{theorem}
\label{teo:grHopf}Let $H$ be a Hopf algebra and let $\left( Q,m,u,\Delta ,\varepsilon ,\omega \right) $ be a connected coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$. Then $\mathrm{gr}Q$ is a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$.
\end{theorem}

\begin{proof}
By Proposition \ref{pro:CMU}, we know that $\mathrm{gr}Q$ is a coalgebra with multiplication and unit in ${_{H}^{H}\mathcal{YD}}$. We have to check that the multiplication is associative and unitary. Given two coalgebras $D,E$ in ${_{H}^{H}\mathcal{YD}}$ endowed with coalgebra filtrations $\left( D_{\left( n\right) }\right) _{n\in \N_0}$ and $\left( E_{\left( n\right) }\right) _{n\in \N_0}$ in ${_{H}^{H}\mathcal{YD}}$ such that $D_{\left( 0\right) }$ and $E_{\left( 0\right) }$ are one-dimensional, let us check that $C_{\left( n\right) }:=\sum_{0\leq i\leq n}D_{\left( i\right) }\otimes E_{\left( n-i\right) }$ gives a coalgebra filtration on $C:=D\otimes E$ in ${_{H}^{H}\mathcal{YD}}.$ First note that the coalgebra structure of $C$ depends on the braiding. Thus, we have
\begin{eqnarray*}
\Delta _{C}\left( C_{\left( n\right) }\right) &=&\left( D\otimes c_{D,E}\otimes E\right) \left( \Delta _{D}\otimes \Delta _{E}\right) \left( \sum_{i=0}^{n}D_{\left( i\right) }\otimes E_{\left( n-i\right) }\right) \\
&\subseteq &\left( D\otimes c_{D,E}\otimes E\right) \left( \sum_{i=0}^{n}\sum_{a=0}^{i}\sum_{b=0}^{n-i}D_{\left( a\right) }\otimes D_{\left( i-a\right) }\otimes E_{\left( b\right) }\otimes E_{\left( n-i-b\right) }\right) \\
&\subseteq &\sum_{i=0}^{n}\sum_{a=0}^{i}\sum_{b=0}^{n-i}D_{\left( a\right) }\otimes c_{D,E}\left( D_{\left( i-a\right) }\otimes E_{\left( b\right) }\right) \otimes E_{\left( n-i-b\right) } \\
&\subseteq &\sum_{i=0}^{n}\sum_{a=0}^{i}\sum_{b=0}^{n-i}D_{\left( a\right) }\otimes c_{D_{\left( i-a\right) },E_{\left( b\right) }}\left( D_{\left( i-a\right) }\otimes E_{\left( b\right) }\right) \otimes E_{\left( n-i-b\right) } \\
&\subseteq &\sum_{i=0}^{n}\sum_{a=0}^{i}\sum_{b=0}^{n-i}D_{\left( a\right) }\otimes E_{\left( b\right) }\otimes D_{\left( i-a\right) }\otimes E_{\left( n-i-b\right) } \\
&\subseteq &\sum_{i=0}^{n}\sum_{w=0}^{n}\sum_{\substack{ 0\leq a\leq i, \\ 0\leq b\leq n-i \\ a+b=w}}D_{\left( a\right) }\otimes E_{\left( b\right) }\otimes D_{\left( i-a\right) }\otimes E_{\left( n-i-b\right) } \\
&\subseteq &\sum_{w=0}^{n}C_{\left( w\right) }\otimes C_{\left( n-w\right) }.
\end{eqnarray*}
Moreover, by \cite[Proposition 11.1.1]{Sw}, we have that the coradical of $C$ is contained in $D_{\left( 0\right) }\otimes E_{\left( 0\right) }$ and hence it is one-dimensional.

This argument can be used to produce a coalgebra filtration on $C:=Q\otimes Q\otimes Q$ using as a filtration on $Q$ the coradical filtration. Let $n>0$ and let $w\in C_{\left( n\right) }=\sum_{i+j+k\leq n}Q_{i}\otimes Q_{j}\otimes Q_{k}.$ By \cite[Lemma 3.69]{AMS}, we have that
\begin{equation*}
\Delta _{C}\left( w\right) -w\otimes \left( 1_{Q}\right) ^{\otimes 3}-\left( 1_{Q}\right) ^{\otimes 3}\otimes w\in C_{\left( n-1\right) }\otimes C_{\left( n-1\right) }.
\end{equation*}
Thus we get
\begin{equation*}
w_{1}\otimes w_{2}\otimes w_{3}-\Delta _{C}\left( w\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}-\Delta _{C}\left( \left( 1_{Q}\right) ^{\otimes 3}\right) \otimes w\in \Delta _{C}\left( C_{\left( n-1\right) }\right) \otimes C_{\left( n-1\right) }
\end{equation*}
and hence, tensoring the first relation by $\left( 1_{Q}\right) ^{\otimes 3}$ on the right and adding it to the second one, we get
\begin{equation*}
w_{1}\otimes w_{2}\otimes w_{3}-w\otimes \left( 1_{Q}\right) ^{\otimes 3}\otimes \left( 1_{Q}\right) ^{\otimes 3}-\left( 1_{Q}\right) ^{\otimes 3}\otimes w\otimes \left( 1_{Q}\right) ^{\otimes 3}-\left( 1_{Q}\right) ^{\otimes 6}\otimes w\in C_{\left( n-1\right) }\otimes C_{\left( n-1\right) }\otimes C_{\left( n-1\right) }.
\end{equation*}
For brevity, we set $\nu _{n}\left( z\right) :=m\left( Q\otimes m\right) \left( z\right) +Q_{n-1}$ for every $z\in C.$ Thus, by applying to the last displayed relation $C_{\left( n-1\right) }\otimes m\left( Q\otimes m\right) \otimes C_{\left( n-1\right) }$ and factoring out the middle term by $Q_{n-1},$ we get
\begin{eqnarray*}
&&\left[
\begin{array}{c}
w_{1}\otimes \nu _{n}\left( w_{2}\right) \otimes w_{3}-w\otimes \nu _{n}\left( \left( 1_{Q}\right) ^{\otimes 3}\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}+ \\
-\left( 1_{Q}\right) ^{\otimes 3}\otimes \nu _{n}\left( w\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}-\left( 1_{Q}\right) ^{\otimes 3}\otimes \nu _{n}\left( \left( 1_{Q}\right) ^{\otimes 3}\right) \otimes w
\end{array}
\right] \\
&\in &C_{\left( n-1\right) }\otimes \left( \frac{\nu _{n}\left( C_{\left( n-1\right) }\right) }{Q_{n-1}}\right) \otimes C_{\left( n-1\right) }\subseteq C_{\left( n-1\right) }\otimes \frac{Q_{n-1}}{Q_{n-1}}\otimes C_{\left( n-1\right) }=0.
\end{eqnarray*}
Thus we can express the first term with respect to the remaining ones as follows:
\begin{eqnarray*}
&&w_{1}\otimes \nu _{n}\left( w_{2}\right) \otimes w_{3} \\
&=&w\otimes \nu _{n}\left( \left( 1_{Q}\right) ^{\otimes 3}\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}+\left( 1_{Q}\right) ^{\otimes 3}\otimes \nu _{n}\left( w\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}+\left( 1_{Q}\right) ^{\otimes 3}\otimes \nu _{n}\left( \left( 1_{Q}\right) ^{\otimes 3}\right) \otimes w \\
&=&w\otimes \left( 1_{Q}+Q_{n-1}\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}+\left( 1_{Q}\right) ^{\otimes 3}\otimes \nu _{n}\left( w\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}+\left( 1_{Q}\right) ^{\otimes 3}\otimes \left( 1_{Q}+Q_{n-1}\right) \otimes w \\
&\overset{n>0}{=}&\left( 1_{Q}\right) ^{\otimes 3}\otimes \nu _{n}\left( w\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}.
\end{eqnarray*}
We have thus proved that, for $n>0$ and $w\in C_{\left( n\right) },$
\begin{equation}
w_{1}\otimes \nu _{n}\left( w_{2}\right) \otimes w_{3}=\left( 1_{Q}\right) ^{\otimes 3}\otimes \nu _{n}\left( w\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}.  \label{form:Dragos}
\end{equation}
The same equation trivially holds also in the case $n=0,$ as $C_{\left( n\right) }$ is one-dimensional.
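Indeed, since $Q$ is connected, $C_{\left( 0\right) }=\Bbbk \left( 1_{Q}\right) ^{\otimes 3}$ and $\left( 1_{Q}\right) ^{\otimes 3}$ is grouplike in $C,$ so that for $w=\left( 1_{Q}\right) ^{\otimes 3}$ both sides of (\ref{form:Dragos}) are equal to $\left( 1_{Q}\right) ^{\otimes 3}\otimes \nu _{0}\left( w\right) \otimes \left( 1_{Q}\right) ^{\otimes 3}.$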
Let $x,y,z\in Q$. Then $x\otimes y\otimes z\in C_{\left( \left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert \right) }$ so that
\begin{eqnarray*}
\left( \overline{x}\cdot \overline{y}\right) \cdot \overline{z} &=&\left( \left( x+Q_{\left\vert x\right\vert -1}\right) \cdot \left( y+Q_{\left\vert y\right\vert -1}\right) \right) \cdot \left( z+Q_{\left\vert z\right\vert -1}\right) \\
&=&\left( \left( xy\right) +Q_{\left\vert x\right\vert +\left\vert y\right\vert -1}\right) \cdot \left( z+Q_{\left\vert z\right\vert -1}\right) \\
&=&\left( xy\right) z+Q_{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert -1} \\
&=&\omega ^{-1}\left( \left( x\otimes y\otimes z\right) _{1}\right) \nu _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert }\left( \left( x\otimes y\otimes z\right) _{2}\right) \omega \left( \left( x\otimes y\otimes z\right) _{3}\right) \\
&\overset{(\ref{form:Dragos})}{=}&\omega ^{-1}\left( 1_{Q}\otimes 1_{Q}\otimes 1_{Q}\right) \nu _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert }\left( x\otimes y\otimes z\right) \omega \left( 1_{Q}\otimes 1_{Q}\otimes 1_{Q}\right) \\
&=&\nu _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert }\left( x\otimes y\otimes z\right) \\
&=&x\left( yz\right) +Q_{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert -1}=\overline{x}\cdot \left( \overline{y}\cdot \overline{z}\right) .
\end{eqnarray*}
Therefore the multiplication is associative. It is also unitary as
\begin{equation*}
\overline{x}\cdot \overline{1_{Q}}=\left( x+Q_{\left\vert x\right\vert -1}\right) \cdot \left( 1_{Q}+Q_{-1}\right) =x\cdot 1_{Q}+Q_{\left\vert x\right\vert -1}=x+Q_{\left\vert x\right\vert -1}=\overline{x}
\end{equation*}
and similarly $\overline{1_{Q}}\cdot \overline{x}=\overline{x}$ for every $x\in Q.$
\end{proof}

\section{Gauge deformation}\label{sec:2}

\begin{definition}
Let $H$ be a Hopf algebra and let $\left( Q,m,u,\Delta ,\varepsilon ,\omega \right) $ be a coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$. A \textbf{gauge transformation} for $Q$ is a morphism $\gamma :Q\otimes Q\rightarrow \Bbbk $ in ${_{H}^{H}\mathcal{YD}}$ which is convolution invertible in ${_{H}^{H}\mathcal{YD}}$ and which is also unitary on both entries, i.e. $\gamma \left( 1_{Q}\otimes x\right) =\varepsilon \left( x\right) =\gamma \left( x\otimes 1_{Q}\right) $ for every $x\in Q$.
\end{definition}
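Note that $\varepsilon _{Q\otimes Q},$ being the unit of the convolution algebra $\mathbf{Vec}_{\Bbbk }\left( Q\otimes Q,\Bbbk \right) ,$ is always a gauge transformation: it is a morphism in ${_{H}^{H}\mathcal{YD}}$, it is its own convolution inverse and it is clearly unitary on both entries. We refer to it as the trivial gauge transformation.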
\begin{remark}
\label{rem:gamma-1gauge}For $\gamma $ as above, let us check that $\gamma ^{-1}$ is unitary whence a gauge transformation too. First note that, for all $x\in Q,$ by means of (\ref{form:1Qcolin}) and (\ref{form:1Qlin}), one gets
\begin{eqnarray}
\left( 1_{Q}\otimes x\right) _{1}\otimes \left( 1_{Q}\otimes x\right) _{2} &=&1_{Q}\otimes x_{1}\otimes 1_{Q}\otimes x_{2},  \label{form:delta1x} \\
\left( x\otimes 1_{Q}\right) _{1}\otimes \left( x\otimes 1_{Q}\right) _{2} &=&x_{1}\otimes 1_{Q}\otimes x_{2}\otimes 1_{Q}.  \label{form:deltax1}
\end{eqnarray}
Thus
\begin{equation*}
\gamma ^{-1}\left( 1_{Q}\otimes x\right) =\gamma ^{-1}\left( 1_{Q}\otimes x_{1}\right) \varepsilon \left( x_{2}\right) =\gamma ^{-1}\left( 1_{Q}\otimes x_{1}\right) \gamma \left( 1_{Q}\otimes x_{2}\right) =\left( \gamma ^{-1}\ast \gamma \right) \left( 1_{Q}\otimes x\right) =\varepsilon \left( x\right)
\end{equation*}
and similarly $\gamma ^{-1}\left( x\otimes 1_{Q}\right) =\varepsilon \left( x\right) .$
\end{remark}
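Let us also note in passing (we shall make no use of this fact) that, by (\ref{form:delta1x}) and (\ref{form:deltax1}), the convolution product of two gauge transformations is again unitary on both entries, whence it is a gauge transformation. Together with Remark \ref{rem:gamma-1gauge}, this shows that the gauge transformations for $Q$ form a group with respect to $\ast .$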
\begin{lemma}
\label{lem:InvYD}Let $H$ be a Hopf algebra and let $C$ be a coalgebra in ${_{H}^{H}\mathcal{YD}}$. Given a map $\gamma \in {_{H}^{H}\mathcal{YD}}\left( C,\Bbbk \right) ,$ we have that $\gamma $ is convolution invertible in ${_{H}^{H}\mathcal{YD}}\left( C,\Bbbk \right) $ if and only if it is convolution invertible in $\mathbf{Vec}_{\Bbbk }\left( C,\Bbbk \right) $. Moreover the inverse is the same.
\end{lemma}

\begin{proof}
Assume there is a $\Bbbk $-linear map $\gamma ^{-1}:C\rightarrow \Bbbk $ which is a convolution inverse of $\gamma $ in $\mathbf{Vec}_{\Bbbk }\left( C,\Bbbk \right) $. By \cite[Remark 2.4(ii)]{ABM-cocycleproj}, $\gamma ^{-1}$ is left $H$-linear. Let us check that $\gamma ^{-1}$ is left $H$-colinear:
\begin{align*}
c_{-1}\otimes \gamma ^{-1}\left( c_{0}\right) & =\left( c_{1}\right) _{-1}1_{H}\otimes \gamma ^{-1}\left( \left( c_{1}\right) _{0}\right) \gamma \left( c_{2}\right) \gamma ^{-1}\left( c_{3}\right) \\
& =\left( c_{1}\right) _{-1}\left( c_{2}\right) _{-1}\otimes \gamma ^{-1}\left( \left( c_{1}\right) _{0}\right) \gamma \left( \left( c_{2}\right) _{0}\right) \gamma ^{-1}\left( c_{3}\right) \\
& \overset{(\ast )}{=}\left( c_{1}\right) _{-1}\otimes \gamma ^{-1}\left( \left( \left( c_{1}\right) _{0}\right) _{1}\right) \gamma \left( \left( \left( c_{1}\right) _{0}\right) _{2}\right) \gamma ^{-1}\left( c_{2}\right) \\
& =\left( c_{1}\right) _{-1}\otimes \left( \gamma ^{-1}\ast \gamma \right) \left( \left( c_{1}\right) _{0}\right) \gamma ^{-1}\left( c_{2}\right) \\
& =\left( c_{1}\right) _{-1}\otimes \varepsilon _{C}\left( \left( c_{1}\right) _{0}\right) \gamma ^{-1}\left( c_{2}\right) \\
& \overset{(\ast )}{=}1_{H}\otimes \varepsilon _{C}\left( c_{1}\right) \gamma ^{-1}\left( c_{2}\right) =1_{H}\otimes \gamma ^{-1}\left( c\right) ,
\end{align*}
where in $(\ast )$ we used that the comultiplication and the counit of $C$ are left $H$-colinear, respectively. Thus $\gamma $ is convolution invertible in ${_{H}^{H}\mathcal{YD}}\left( C,\Bbbk \right) $. The other implication is obvious.
\end{proof}

\begin{proposition}
\label{pro:deformYD}Let $H$ be a Hopf algebra and let $\left( Q,m,u,\Delta ,\varepsilon ,\omega \right) $ be a coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$. Let $\gamma :Q\otimes Q\rightarrow \Bbbk $ be a gauge transformation in ${_{H}^{H}\mathcal{YD}}$. Then
\begin{equation*}
Q^{\gamma }:=\left( Q,m^{\gamma },u,\Delta ,\varepsilon ,\omega ^{\gamma }\right)
\end{equation*}
is a coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$, where
\begin{eqnarray*}
m^{\gamma } &:=&\gamma \ast m\ast \gamma ^{-1}, \\
\omega ^{\gamma } &:=&\left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \omega \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) .
\end{eqnarray*}
\end{proposition}

\begin{proof}
The proof is analogous to the dual version of \cite[Proposition XV.3.2]{Kassel}. We include some details for the reader's sake. Note that $Q^{\gamma }$ has the same underlying coalgebra as $Q,$ which is a coalgebra in ${_{H}^{H}\mathcal{YD}}$. The unit is also the same and hence it is a coalgebra map in ${_{H}^{H}\mathcal{YD}}$. Since $m^{\gamma }$ is the convolution product of morphisms in ${_{H}^{H}\mathcal{YD}},$ it follows that $m^{\gamma }$ is in ${_{H}^{H}\mathcal{YD}}$ as well. Since $m$ is a coalgebra map in ${_{H}^{H}\mathcal{YD}}$ and $\gamma $ is convolution invertible with convolution inverse $\gamma ^{-1},$ it follows that $m^{\gamma }$ is a coalgebra map in ${_{H}^{H}\mathcal{YD}}$. By means of (\ref{form:delta1x}) and (\ref{form:deltax1}), one gets that $m^{\gamma }\left( 1_{Q}\otimes x\right) =x=m^{\gamma }\left( x\otimes 1_{Q}\right) .$ Let us consider now $\omega ^{\gamma }.$ Since it is the convolution product of morphisms in ${_{H}^{H}\mathcal{YD}},$ it follows that $\omega ^{\gamma }$ is in ${_{H}^{H}\mathcal{YD}}$ as well.
Let us check that $\omega ^{\gamma }$ is unitary. Consider the map $\alpha _{2}:Q\otimes Q\rightarrow Q\otimes Q\otimes Q$ defined by $\alpha _{2}\left( x\otimes y\right) =x\otimes 1_{Q}\otimes y.$ The equalities (\ref{form:deltax1}) and (\ref{form:1Qcolin}) yield
\begin{eqnarray*}
\left( \alpha _{2}\left( x\otimes y\right) \right) _{1}\otimes \left( \alpha _{2}\left( x\otimes y\right) \right) _{2} &=&\alpha _{2}\left( x_{1}\otimes \left( x_{2}\right) _{-1}y_{1}\right) \otimes \alpha _{2}\left( \left( x_{2}\right) _{0}\otimes y_{2}\right) \\
&=&\alpha _{2}\left( \left( x\otimes y\right) _{1}\right) \otimes \alpha _{2}\left( \left( x\otimes y\right) _{2}\right)
\end{eqnarray*}
so that $\alpha _{2}$ is comultiplicative.
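Since $\alpha _{2}$ is comultiplicative, and clearly counitary, precomposition with $\alpha _{2}$ is compatible with the convolution product, i.e. $\left( f\ast g\right) \alpha _{2}=f\alpha _{2}\ast g\alpha _{2}$ for all $f,g\in \mathbf{Vec}_{\Bbbk }\left( Q^{\otimes 3},\Bbbk \right) .$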
Thus
\begin{equation*}
\omega ^{\gamma }\alpha _{2}=\left( \varepsilon \otimes \gamma \right) \alpha _{2}\ast \gamma \left( Q\otimes m\right) \alpha _{2}\ast \omega \alpha _{2}\ast \gamma ^{-1}\left( m\otimes Q\right) \alpha _{2}\ast \left( \gamma ^{-1}\otimes \varepsilon \right) \alpha _{2}
\end{equation*}
and, computing the factors of this convolution product, one gets
\begin{gather*}
\left( \varepsilon \otimes \gamma \right) \alpha _{2}=\varepsilon \otimes \varepsilon ,\quad \gamma \left( Q\otimes m\right) \alpha _{2}=\gamma ,\quad \omega \alpha _{2}=\varepsilon \otimes \varepsilon , \\
\gamma ^{-1}\left( m\otimes Q\right) \alpha _{2}=\gamma ^{-1},\quad \left( \gamma ^{-1}\otimes \varepsilon \right) \alpha _{2}=\varepsilon \otimes \varepsilon ,
\end{gather*}
and hence $\omega ^{\gamma }\alpha _{2}=\gamma \ast \gamma ^{-1}=\varepsilon \otimes \varepsilon ,$ which means that $\omega ^{\gamma }\left( x\otimes 1_{Q}\otimes y\right) =\varepsilon \left( x\right) \varepsilon \left( y\right) $ for every $x,y\in Q.$ Similarly, considering $\alpha _{1}:Q\otimes Q\rightarrow Q\otimes Q\otimes Q$ defined by $\alpha _{1}\left( x\otimes y\right) =1_{Q}\otimes x\otimes y,$ one proves that $\omega ^{\gamma }\left( 1_{Q}\otimes x\otimes y\right) =\varepsilon \left( x\right) \varepsilon \left( y\right) .$ A symmetric argument shows that $\omega ^{\gamma }\left( x\otimes y\otimes 1_{Q}\right) =\varepsilon \left( x\right) \varepsilon \left( y\right) .$

Note that, by Lemma \ref{lem:InvYD}, $\omega ^{\gamma }$ is convolution invertible in ${_{H}^{H}\mathcal{YD}}\left( Q^{\otimes 3},\Bbbk \right) $ as it is convolution invertible in $\mathbf{Vec}_{\Bbbk }\left( Q^{\otimes 3},\Bbbk \right) $.

Let us check that the multiplication is quasi-associative. By \cite[Lemma 2.10 formula (2.7)]{ABM}, we have
\begin{eqnarray*}
m^{\gamma }\left( Q\otimes \gamma \ast m\ast \gamma ^{-1}\right) &=&\left( \varepsilon \otimes \gamma \right) \ast m^{\gamma }\left( Q\otimes m\right) \ast \left( \varepsilon \otimes \gamma ^{-1}\right) , \\
\left( \varepsilon \otimes \gamma ^{-1}\right) \ast \left( \varepsilon \otimes \gamma \right) &=&\varepsilon \otimes \left( \gamma ^{-1}\ast \gamma \right) =\varepsilon \otimes \varepsilon \otimes \varepsilon , \\
m^{\gamma }\left( m^{\gamma }\otimes Q\right) &=&m^{\gamma }\left( \gamma \ast m\ast \gamma ^{-1}\otimes Q\right) =\left( \gamma \otimes \varepsilon \right) \ast m^{\gamma }\left( m\ast \gamma ^{-1}\otimes Q\right) \\
&=&\left( \gamma \otimes \varepsilon \right) \ast m^{\gamma }\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) , \\
\left( \gamma ^{-1}\otimes \varepsilon \right) \ast \left( \gamma \otimes \varepsilon \right) &=&\left( \left( \gamma ^{-1}\ast \gamma \right) \otimes \varepsilon \right) =\varepsilon \otimes \varepsilon \otimes \varepsilon .
\end{eqnarray*}
By using these equalities one obtains
\begin{eqnarray*}
m^{\gamma }\left( Q\otimes m^{\gamma }\right) \ast \omega ^{\gamma } &=&\left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast m\left( Q\otimes m\right) \ast \omega \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) , \\
\omega ^{\gamma }\ast m^{\gamma }\left( m^{\gamma }\otimes Q\right) &=&\left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \omega \ast m\left( m\otimes Q\right) \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) ,
\end{eqnarray*}
so that, by (\ref{form: m quasi assoc}) for $Q,$ we get $\omega ^{\gamma }\ast m^{\gamma }\left( m^{\gamma }\otimes Q\right) =m^{\gamma }\left( Q\otimes m^{\gamma }\right) \ast \omega ^{\gamma }.$
It remains to check that $\omega ^{\gamma }$ is a reassociator. By \cite[Lemma 2.10 formula (2.7)]{ABM}, we have
\begin{eqnarray*}
\omega ^{\gamma }\left( Q\otimes Q\otimes \gamma \ast m\ast \gamma ^{-1}\right) &=&\left( \varepsilon \otimes \varepsilon \otimes \gamma \right) \ast \omega ^{\gamma }\left( Q\otimes Q\otimes m\right) \ast \left( \varepsilon \otimes \varepsilon \otimes \gamma ^{-1}\right) , \\
\omega ^{\gamma }\left( \gamma \ast m\ast \gamma ^{-1}\otimes Q\otimes Q\right) &=&\left( \gamma \otimes \varepsilon \otimes \varepsilon \right) \ast \omega ^{\gamma }\left( m\otimes Q\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \otimes \varepsilon \right) , \\
\left( \gamma \otimes \varepsilon \otimes \varepsilon \right) \ast \left( \varepsilon \otimes \varepsilon \otimes \gamma \right) &=&\gamma \otimes \gamma =\left( \varepsilon \otimes \varepsilon \otimes \gamma \right) \ast \left( \gamma \otimes \varepsilon \otimes \varepsilon \right) .
\end{eqnarray*}
By using these equalities one obtains
\begin{eqnarray*}
&&\omega ^{\gamma }\left( Q\otimes Q\otimes m^{\gamma }\right) \ast \omega ^{\gamma }\left( m^{\gamma }\otimes Q\otimes Q\right) \\
&&=\left[
\begin{array}{c}
\left( \varepsilon \otimes \varepsilon \otimes \gamma \right) \ast \left( \varepsilon \otimes \gamma \left( Q\otimes m\right) \right) \ast \gamma \left( Q\otimes m\left( Q\otimes m\right) \right) \\
\ast \omega \left( Q\otimes Q\otimes m\right) \ast \omega \left( m\otimes Q\otimes Q\right) \\
\ast \gamma ^{-1}\left( m\left( m\otimes Q\right) \otimes Q\right) \ast \left( \gamma ^{-1}\left( m\otimes Q\right) \otimes \varepsilon \right) \ast \left( \gamma ^{-1}\otimes \varepsilon \otimes \varepsilon \right)
\end{array}
\right]
\end{eqnarray*}
and
\begin{eqnarray*}
&&\left( \varepsilon \otimes \omega ^{\gamma }\right) \ast \omega ^{\gamma }\left( Q\otimes m^{\gamma }\otimes Q\right) \ast \left( \omega ^{\gamma }\otimes \varepsilon \right) \\
&&=\left[
\begin{array}{c}
\left( \varepsilon \otimes \varepsilon \otimes \gamma \right) \ast \left( \varepsilon \otimes \gamma \left( Q\otimes m\right) \right) \ast \gamma \left( Q\otimes m\left( Q\otimes m\right) \right) \\
\ast \left( \varepsilon \otimes \omega \right) \ast \omega \left( Q\otimes m\otimes Q\right) \ast \left( \omega \otimes \varepsilon \right) \\
\ast \gamma ^{-1}\left( m\left( m\otimes Q\right) \otimes Q\right) \ast \left( \gamma ^{-1}\left( m\otimes Q\right) \otimes \varepsilon \right) \ast \left( \gamma ^{-1}\otimes \varepsilon \otimes \varepsilon \right)
\end{array}
\right] .
\end{eqnarray*}
Therefore, by (\ref{form: alpha 3-cocycle}) for $\omega ,$
\begin{equation*}
\omega ^{\gamma }\left( Q\otimes Q\otimes m^{\gamma }\right) \ast \omega ^{\gamma }\left( m^{\gamma }\otimes Q\otimes Q\right) =\left( \varepsilon \otimes \omega ^{\gamma }\right) \ast \omega ^{\gamma }\left( Q\otimes m^{\gamma }\otimes Q\right) \ast \left( \omega ^{\gamma }\otimes \varepsilon \right) .
\end{equation*}
\end{proof}
Let $\gamma :E\otimes E\rightarrow \Bbbk $ be a gauge transformation in ${_{H}^{H}\mathcal{YD}}$. Set
\begin{equation*}
\Gamma :\left( E\#H\right) \otimes \left( E\#H\right) \rightarrow \Bbbk :\left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \mapsto \gamma \left( x\otimes hx^{\prime }\right) \varepsilon _{H}\left( h^{\prime }\right) .
\end{equation*}
Then $\Gamma $ is a gauge transformation and $\left( E\#H\right) ^{\Gamma }=E^{\gamma }\#H$ as ordinary coquasi-bialgebras.
\end{proposition}
\begin{proof}
By \cite[Lemma 2.15 and what follows]{ABM}, we have that $\Gamma $ is convolution invertible, $H$-bilinear and $H$-balanced. Moreover, $\Gamma ^{-1}\left( \left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \right) =\gamma ^{-1}\left( x\otimes hx^{\prime }\right) \varepsilon _{H}\left( h^{\prime }\right) .$ If $\alpha :\left( E\#H\right) \otimes \left( E\#H\right) \rightarrow E\#H$ is $H$-bilinear and $H$-balanced, it is easy to check that $\Gamma \ast \alpha \ast \Gamma ^{-1}$ is $H$-bilinear and $H$-balanced too.
\begin{invisible}
We check this for our sake. Note that $E\#H$ is an $H$-bimodule coalgebra with respect to $\left( r\#h\right) l=r\#hl$ and $l\left( s\#h\right) =l_{1}s\#l_{2}h$. Let, in general, $A$ be an $H$-bimodule coalgebra and let $\alpha :A\otimes A\rightarrow A$ be an $H$-bilinear and $H$-balanced map. Let $\Gamma :A\otimes A\rightarrow \Bbbk $ be an $H$-bilinear and $H$-balanced map. Then
\begin{eqnarray*}
\left( \Gamma \ast \alpha \right) \left( hxh^{\prime }\otimes x^{\prime }h^{\prime \prime }\right) &=&\Gamma \left( \left( hxh^{\prime }\right) _{1}\otimes \left( x^{\prime }h^{\prime \prime }\right) _{1}\right) \alpha \left( \left( hxh^{\prime }\right) _{2}\otimes \left( x^{\prime }h^{\prime \prime }\right) _{2}\right) \\
&=&\Gamma \left( h_{1}x_{1}h_{1}^{\prime }\otimes x_{1}^{\prime }h_{1}^{\prime \prime }\right) \alpha \left( h_{2}x_{2}h_{2}^{\prime }\otimes x_{2}^{\prime }h_{2}^{\prime \prime }\right) \\
&=&\varepsilon _{H}\left( h_{1}\right) \Gamma \left( x_{1}\otimes h_{1}^{\prime }x_{1}^{\prime }\right) \varepsilon _{H}\left( h_{1}^{\prime \prime }\right) h_{2}\alpha \left( x_{2}\otimes h_{2}^{\prime }x_{2}^{\prime }\right) h_{2}^{\prime \prime } \\
&=&\Gamma \left( x_{1}\otimes h_{1}^{\prime }x_{1}^{\prime }\right) h\alpha \left( x_{2}\otimes h_{2}^{\prime }x_{2}^{\prime }\right) h^{\prime \prime } \\
&=&h\Gamma \left( x_{1}\otimes \left( h^{\prime }x^{\prime }\right) _{1}\right) \alpha \left( x_{2}\otimes \left( h^{\prime }x^{\prime }\right) _{2}\right) h^{\prime \prime } \\
&=&h\left( \Gamma \ast \alpha \right) \left( x\otimes h^{\prime }x^{\prime }\right) h^{\prime \prime }
\end{eqnarray*}
and
\begin{eqnarray*}
\left( \alpha \ast \Gamma \right) \left( hxh^{\prime }\otimes x^{\prime }h^{\prime \prime }\right) &=&\alpha \left( \left( hxh^{\prime }\right) _{1}\otimes \left( x^{\prime }h^{\prime \prime }\right) _{1}\right) \Gamma \left( \left( hxh^{\prime }\right) _{2}\otimes \left( x^{\prime }h^{\prime \prime }\right) _{2}\right) \\
&=&\alpha \left( h_{1}x_{1}h_{1}^{\prime }\otimes x_{1}^{\prime }h_{1}^{\prime \prime }\right) \Gamma \left( h_{2}x_{2}h_{2}^{\prime }\otimes x_{2}^{\prime }h_{2}^{\prime \prime }\right) \\
&=&h_{1}\alpha \left( x_{1}\otimes h_{1}^{\prime }x_{1}^{\prime }\right) h_{1}^{\prime \prime }\varepsilon _{H}\left( h_{2}\right) \Gamma \left( x_{2}\otimes h_{2}^{\prime }x_{2}^{\prime }\right) \varepsilon _{H}\left( h_{2}^{\prime \prime }\right) \\
&=&h\alpha \left( x_{1}\otimes h_{1}^{\prime }x_{1}^{\prime }\right) h^{\prime \prime }\Gamma \left( x_{2}\otimes h_{2}^{\prime }x_{2}^{\prime }\right) \\
&=&h\alpha \left( x_{1}\otimes \left( h^{\prime }x^{\prime }\right) _{1}\right) \Gamma \left( x_{2}\otimes \left( h^{\prime }x^{\prime }\right) _{2}\right) h^{\prime \prime } \\
&=&h\left( \alpha \ast \Gamma \right) \left( x\otimes h^{\prime }x^{\prime }\right) h^{\prime \prime }.
\end{eqnarray*}
\end{invisible}
In particular, since
\begin{equation*}
m_{E\#H}\left( \left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \right) =m\left( x\otimes h_{1}x^{\prime }\right) \otimes h_{2}h^{\prime }
\end{equation*}
we have that $m_{E\#H}$ is $H$-bilinear and $H$-balanced, where $E\#H$ carries the left $H$-diagonal action and the right regular action over $H$.
\begin{invisible}
We have
\begin{eqnarray*}
m_{E\#H}\left( l\left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \right) &=&m_{E\#H}\left( \left( l_{1}x\#l_{2}h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \right) \\
&=&m_{E}\left( l_{1}x\otimes \left( l_{2}h_{1}\right) x^{\prime }\right) \otimes \left( l_{3}h_{2}\right) h^{\prime } \\
&=&m_{E}\left( l_{1}x\otimes l_{2}\left( h_{1}x^{\prime }\right) \right) \otimes l_{3}\left( h_{2}h^{\prime }\right) \\
&=&l_{1}m_{E}\left( x\otimes h_{1}x^{\prime }\right) \otimes l_{2}\left( h_{2}h^{\prime }\right) \\
&=&l\left( m_{E}\left( x\otimes h_{1}x^{\prime }\right) \otimes h_{2}h^{\prime }\right) =lm_{E\#H}\left( \left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \right) ,
\end{eqnarray*}
\begin{eqnarray*}
m_{E\#H}\left( \left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) l\right) &=&m_{E\#H}\left( \left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }l\right) \right) =m_{E}\left( x\otimes h_{1}x^{\prime }\right) \otimes h_{2}\left( h^{\prime }l\right) \\
&=&m_{E}\left( x\otimes h_{1}x^{\prime }\right) \otimes \left( h_{2}h^{\prime }\right) l=\left( m_{E}\left( x\otimes h_{1}x^{\prime }\right) \otimes \left( h_{2}h^{\prime }\right) \right) l \\
&=&m_{E\#H}\left( \left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \right) l,
\end{eqnarray*}
and
\begin{eqnarray*}
m_{E\#H}\left( \left( x\#h\right) l\otimes \left( x^{\prime }\#h^{\prime }\right) \right) &=&m_{E\#H}\left( \left( x\#hl\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \right) \\
&=&m_{E}\left( x\otimes \left( h_{1}l_{1}\right) x^{\prime }\right) \otimes \left( h_{2}l_{2}\right) h^{\prime }=m_{E}\left( x\otimes h_{1}\left( l_{1}x^{\prime }\right) \right) \otimes h_{2}\left( l_{2}h^{\prime }\right) \\
&=&m_{E\#H}\left( \left( x\#h\right) \otimes \left( l_{1}x^{\prime }\#l_{2}h^{\prime }\right) \right) =m_{E\#H}\left( \left( x\#h\right) \otimes l\left( x^{\prime }\#h^{\prime }\right) \right) .
\end{eqnarray*}
\end{invisible}
Thus $m_{\left( E\#H\right) ^{\Gamma }}=\Gamma \ast m_{E\#H}\ast \Gamma ^{-1}$ is $H$-bilinear and $H$-balanced. Moreover, since $E^{\gamma }$ is also a coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$, we have that $m_{E^{\gamma }\#H}:\left( E\#H\right) \otimes \left( E\#H\right) \rightarrow E\#H$ is $H$-bilinear and $H$-balanced too.
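Let us record explicitly the reduction we are about to use (a short computation using only bilinearity and balancedness, spelled out here for the reader's convenience): any $H$-bilinear and $H$-balanced map $\alpha :\left( E\#H\right) \otimes \left( E\#H\right) \rightarrow E\#H$ is determined by its values on elements of the form $\left( x\#1_{H}\right) \otimes \left( x^{\prime }\#1_{H}\right) $, since
\begin{equation*}
\alpha \left( \left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \right) =\alpha \left( \left( x\#1_{H}\right) \otimes \left( h_{1}x^{\prime }\#1_{H}\right) \right) h_{2}h^{\prime }.
\end{equation*}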
\begin{invisible}
\begin{eqnarray*}
m_{E^{\gamma }\#H}\left( l\left( x\#h\right) s\otimes \left( x^{\prime }\#h^{\prime }\right) t\right) &=&m_{E^{\gamma }\#H}\left( \left( l_{1}x\#l_{2}hs\right) \otimes \left( x^{\prime }\#h^{\prime }t\right) \right) \\
&=&\left( \gamma \ast m_{E}\ast \gamma ^{-1}\right) \left( l_{1}x\otimes l_{2}h_{1}s_{1}x^{\prime }\right) \#l_{3}h_{2}s_{2}h^{\prime }t \\
&=&l_{1}\left( \gamma \ast m_{E}\ast \gamma ^{-1}\right) \left( x\otimes h_{1}s_{1}x^{\prime }\right) \#l_{2}h_{2}s_{2}h^{\prime }t \\
&=&l\left[ \left( \gamma \ast m_{E}\ast \gamma ^{-1}\right) \left( x\otimes h_{1}s_{1}x^{\prime }\right) \#h_{2}s_{2}h^{\prime }\right] t \\
&=&lm_{E^{\gamma }\#H}\left( \left( x\#h\right) \otimes \left( s_{1}x^{\prime }\#s_{2}h^{\prime }\right) \right) t \\
&=&lm_{E^{\gamma }\#H}\left( \left( x\#h\right) \otimes s\left( x^{\prime }\#h^{\prime }\right) \right) t.
\end{eqnarray*}
\end{invisible}
Therefore, in order to check that $m_{\left( E\#H\right) ^{\Gamma }}=m_{E^{\gamma }\#H},$ it suffices to prove that they coincide on elements of the form $\left( x\#1_{H}\right) \otimes \left( x^{\prime }\#1_{H}\right) .$
\begin{invisible}
If $\alpha :\left( E\#H\right) \otimes \left( E\#H\right) \rightarrow E\#H$ is $H$-bilinear and $H$-balanced, then
\begin{eqnarray*}
\alpha \left( \left( x\#h\right) \otimes \left( x^{\prime }\#h^{\prime }\right) \right) &=&\alpha \left( \left( x\#1_{H}\right) h\otimes \left( x^{\prime }\#h^{\prime }\right) \right) \\
&=&\alpha \left( \left( x\#1_{H}\right) \otimes h\left( x^{\prime }\#h^{\prime }\right) \right) \\
&=&\alpha \left( \left( x\#1_{H}\right) \otimes \left( h_{1}x^{\prime }\#h_{2}h^{\prime }\right) \right) \\
&=&\alpha \left( \left( x\#1_{H}\right) \otimes \left( h_{1}x^{\prime }\#1_{H}\right) \right) h_{2}h^{\prime }.
\end{eqnarray*}
\end{invisible}
Let us consider the multiplication
\begin{eqnarray*}
&&m_{\left( E\#H\right) ^{\Gamma }}\left( \left( x\#1_{H}\right) \otimes \left( x^{\prime }\#1_{H}\right) \right) \\
&=&\left( \Gamma \ast m_{E\#H}\ast \Gamma ^{-1}\right) \left( \left( x\#1_{H}\right) \otimes \left( x^{\prime }\#1_{H}\right) \right) \\
&=&\Gamma \left( \left( x\#1_{H}\right) _{1}\otimes \left( x^{\prime }\#1_{H}\right) _{1}\right) \cdot m_{E\#H}\left( \left( x\#1_{H}\right) _{2}\otimes \left( x^{\prime }\#1_{H}\right) _{2}\right) \cdot \Gamma ^{-1}\left( \left( x\#1_{H}\right) _{3}\otimes \left( x^{\prime }\#1_{H}\right) _{3}\right) .
\end{eqnarray*}
Now, from
\begin{equation*}
\Delta _{E\#H}\left( x\#h\right) =\sum \left( x^{\left( 1\right) }\#{x^{\left( 2\right) }}_{\left\langle -1\right\rangle }h_{1}\right) \otimes \left( {x^{\left( 2\right) }}_{\left\langle 0\right\rangle }\#h_{2}\right)
\end{equation*}
we get
\begin{eqnarray*}
&&\left( x\#1_{H}\right) _{1}\otimes \left( x\#1_{H}\right) _{2}\otimes \left( x\#1_{H}\right) _{3} \\
&=&\sum \left( x^{\left( 1\right) }\#{x^{\left( 2\right) }}_{\left\langle -1\right\rangle }{x^{\left( 3\right) }}_{\left\langle -2\right\rangle }\right) \otimes \left( {x^{\left( 2\right) }}_{\left\langle 0\right\rangle }\#{x^{\left( 3\right) }}_{\left\langle -1\right\rangle }\right) \otimes \left( {x^{\left( 3\right) }}_{\left\langle 0\right\rangle }\#1_{H}\right) .
\end{eqnarray*}
\begin{invisible}
\begin{eqnarray*}
&&\left( x\#1_{H}\right) _{1}\otimes \left( x\#1_{H}\right) _{2}\otimes \left( x\#1_{H}\right) _{3} \\
&=&\sum \Delta _{E\#H}\left( x^{\left( 1\right) }\#\left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\right) \otimes \left( \left( x^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\#1_{H}\right) \\
&=&\sum \left( \left( x^{\left( 1\right) }\right) ^{\left( 1\right) }\#\left( \left( x^{\left( 1\right) }\right) ^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( \left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\right) _{1}\right) \otimes \left( \left( \left( x^{\left( 1\right) }\right) ^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\#\left( \left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\right) _{2}\right) \otimes \left( \left( x^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\#1_{H}\right) \\
&=&\sum \left( x^{\left( 1\right) }\#\left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( x^{\left( 3\right) }\right) _{\left\langle -2\right\rangle }\right) \otimes \left( \left( x^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\#\left( x^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }\right) \otimes \left( \left( x^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\#1_{H}\right) .
\end{eqnarray*}
\end{invisible}
so that
\begin{eqnarray*}
&&m_{\left( E\#H\right) ^{\Gamma }}\left( \left( x\#1_{H}\right) \otimes \left( x^{\prime }\#1_{H}\right) \right) \\
&=&\Gamma \left( \left( x\#1_{H}\right) _{1}\otimes \left( x^{\prime }\#1_{H}\right) _{1}\right) \cdot m_{E\#H}\left( \left( x\#1_{H}\right) _{2}\otimes \left( x^{\prime }\#1_{H}\right) _{2}\right) \cdot \Gamma ^{-1}\left( \left( x\#1_{H}\right) _{3}\otimes \left( x^{\prime }\#1_{H}\right) _{3}\right) \\
&=&\left[
\begin{array}{c}
\sum \Gamma \left( x^{\left( 1\right) }\#{x^{\left( 2\right) }}_{\left\langle -1\right\rangle }{x^{\left( 3\right) }}_{\left\langle -2\right\rangle }\otimes x^{\prime \left( 1\right) }\#x^{\prime \left( 2\right) }{}_{\left\langle -1\right\rangle }x^{\prime \left( 3\right) }{}_{\left\langle -2\right\rangle }\right) \\
\cdot m_{E\#H}\left( {x^{\left( 2\right) }}_{\left\langle 0\right\rangle }\#{x^{\left( 3\right) }}_{\left\langle -1\right\rangle }\otimes x^{\prime \left( 2\right) }{}_{\left\langle 0\right\rangle }\#x^{\prime \left( 3\right) }{}_{\left\langle -1\right\rangle }\right) \\
\cdot \Gamma ^{-1}\left( {x^{\left( 3\right) }}_{\left\langle 0\right\rangle }\#1_{H}\otimes x^{\prime \left( 3\right) }{}_{\left\langle 0\right\rangle }\#1_{H}\right)
\end{array}\right] \\
&=&\left[
\begin{array}{c}
\sum \gamma \left( x^{\left( 1\right) }\otimes {x^{\left( 2\right) }}_{\left\langle -1\right\rangle }{x^{\left( 3\right) }}_{\left\langle -2\right\rangle }x^{\prime \left( 1\right) }\right) \\
\cdot m_{E\#H}\left( {x^{\left( 2\right) }}_{\left\langle 0\right\rangle }\#{x^{\left( 3\right) }}_{\left\langle -1\right\rangle }\otimes x^{\prime \left( 2\right) }\#x^{\prime \left( 3\right) }{}_{\left\langle -1\right\rangle }\right) \\
\cdot \gamma ^{-1}\left( {x^{\left( 3\right) }}_{\left\langle 0\right\rangle }\otimes x^{\prime \left( 3\right) }{}_{\left\langle 0\right\rangle }\right)
\end{array}\right] \\
&=&\left[
\begin{array}{c}
\sum \gamma \left( x^{\left( 1\right) }\otimes {x^{\left( 2\right) }}_{\left\langle -1\right\rangle }{x^{\left( 3\right) }}_{\left\langle -2\right\rangle }x^{\prime \left( 1\right) }\right) \\
\cdot m\left( {x^{\left( 2\right) }}_{\left\langle 0\right\rangle }\otimes {x^{\left( 3\right) }}_{\left\langle -2\right\rangle }x^{\prime \left( 2\right) }\right) \otimes x^{\left( 3\right) }{}_{\left\langle -1\right\rangle }x^{\prime \left( 3\right) }{}_{\left\langle -1\right\rangle } \\
\cdot \gamma ^{-1}\left( {x^{\left( 3\right) }}_{\left\langle 0\right\rangle }\otimes x^{\prime \left( 3\right) }{}_{\left\langle 0\right\rangle }\right)
\end{array}\right] \\
&=&\left[
\begin{array}{c}
\sum \gamma \left( x^{\left( 1\right) }\otimes {x^{\left( 2\right) }}_{\left\langle -1\right\rangle }x^{\left( 3\right) }{}_{\left\langle -2\right\rangle }x^{\prime \left( 1\right) }\right) \\
\cdot m\left( {x^{\left( 2\right) }}_{\left\langle 0\right\rangle }\otimes {x^{\left( 3\right) }}_{\left\langle -1\right\rangle }x^{\prime \left( 2\right) }\right) \otimes \left( {x^{\left( 3\right) }}_{\left\langle 0\right\rangle }\otimes x^{\prime \left( 3\right) }\right) _{\left\langle -1\right\rangle } \\
\cdot \gamma ^{-1}\left( {x^{\left( 3\right) }}_{\left\langle 0\right\rangle }\otimes x^{\prime \left( 3\right) }{}_{\left\langle 0\right\rangle }\right)
\end{array}\right] \\
&\overset{\gamma ^{-1}\text{ colin.}}{=}&\left[
\begin{array}{c}
\sum \gamma \left( x^{\left( 1\right) }\otimes {x^{\left( 2\right) }}_{\left\langle -1\right\rangle }{x^{\left( 3\right) }}_{\left\langle -2\right\rangle }x^{\prime \left( 1\right) }\right) \cdot m\left( {x^{\left( 2\right) }}_{\left\langle 0\right\rangle }\otimes {x^{\left( 3\right) }}_{\left\langle -1\right\rangle }x^{\prime \left( 2\right) }\right) \otimes 1_{H} \\
\cdot \gamma ^{-1}\left( {x^{\left( 3\right) }}_{\left\langle 0\right\rangle }\otimes x^{\prime \left( 3\right) }\right)
\end{array}\right] \\
&=&\left[
\begin{array}{c}
\sum \gamma \left( x^{\left( 1\right) }\otimes {x^{\left( 2\right) }}_{\left\langle -1\right\rangle }{x^{\left( 3\right) }}_{\left\langle -2\right\rangle }x^{\prime \left( 1\right) }\right) m\left( {x^{\left( 2\right) }}_{\left\langle 0\right\rangle }\otimes {x^{\left( 3\right) }}_{\left\langle -1\right\rangle }x^{\prime \left( 2\right) }\right) \\
\gamma ^{-1}\left( {x^{\left( 3\right) }}_{\left\langle 0\right\rangle }\otimes x^{\prime \left( 3\right) }\right)
\end{array}\right] \otimes 1_{H}.
\end{eqnarray*}
Now we have
\begin{equation*}
\sum \left( x\otimes y\right) ^{\left( 1\right) }\otimes \left( x\otimes y\right) ^{\left( 2\right) }=\sum x^{\left( 1\right) }\otimes x^{\left( 2\right) }{}_{\left\langle -1\right\rangle }y^{\left( 1\right) }\otimes x^{\left( 2\right) }{}_{\left\langle 0\right\rangle }\otimes y^{\left( 2\right) }
\end{equation*}
so that
\begin{invisible}
\begin{eqnarray*}
&&\sum \left( x\otimes y\right) ^{\left( 1\right) }\otimes \left( x\otimes y\right) ^{\left( 2\right) }\otimes \left( x\otimes y\right) ^{\left( 3\right) } \\
&=&\sum \left( x^{\left( 1\right) }\otimes \left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }y^{\left( 1\right) }\right) ^{\left( 1\right) }\otimes \left( x^{\left( 1\right) }\otimes \left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }y^{\left( 1\right) }\right) ^{\left( 2\right) }\otimes \left( x^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes y^{\left( 2\right) } \\
&=&\sum \left( x^{\left( 1\right) \left( 1\right) }\otimes \left( x^{\left( 1\right) \left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( \left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }y^{\left( 1\right) }\right) ^{\left( 1\right) }\otimes \left( x^{\left( 1\right) \left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes \left( \left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }y^{\left( 1\right) }\right) ^{\left( 2\right) }\right) \otimes \left( x^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes y^{\left( 2\right) } \\
&=&\sum \left( x^{\left( 1\right) }\otimes \left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( \left( x^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }y^{\left( 1\right) }\right) ^{\left( 1\right) }\otimes \left( x^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes \left( \left( x^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }y^{\left( 1\right) }\right) ^{\left( 2\right) }\right) \otimes \left( x^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\otimes y^{\left( 2\right) } \\
&=&\sum \left( x^{\left( 1\right) }\otimes \left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( x^{\left( 3\right) }\right) _{\left\langle -2\right\rangle }\left( y^{\left( 1\right) }\right) ^{\left( 1\right) }\otimes \left( x^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes \left( x^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }\left( y^{\left( 1\right) }\right) ^{\left( 2\right) }\right) \otimes \left( x^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\otimes y^{\left( 2\right) } \\
&=&\sum \left( x^{\left( 1\right) }\otimes \left( x^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( x^{\left( 3\right) }\right) _{\left\langle -2\right\rangle }y^{\left( 1\right) }\right) \otimes \left( \left( x^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes \left( x^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }y^{\left( 2\right) }\right) \otimes \left( \left( x^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\otimes y^{\left( 3\right) }\right)
\end{eqnarray*}
i.e.
\end{invisible}
\begin{eqnarray*}
&&\sum \left( x\otimes y\right) ^{\left( 1\right) }\otimes \left( x\otimes y\right) ^{\left( 2\right) }\otimes \left( x\otimes y\right) ^{\left( 3\right) } \\
&=&\sum \left( x^{\left( 1\right) }\otimes x^{\left( 2\right) }{}_{\left\langle -1\right\rangle }x^{\left( 3\right) }{}_{\left\langle -2\right\rangle }y^{\left( 1\right) }\right) \otimes \left( x^{\left( 2\right) }{}_{\left\langle 0\right\rangle }\otimes x^{\left( 3\right) }{}_{\left\langle -1\right\rangle }y^{\left( 2\right) }\right) \otimes \left( x^{\left( 3\right) }{}_{\left\langle 0\right\rangle }\otimes y^{\left( 3\right) }\right) .
\end{eqnarray*}
Using this equality we can proceed with our computation:
\begin{eqnarray*}
&&m_{\left( E\#H\right) ^{\Gamma }}\left( \left( x\#1_{H}\right) \otimes \left( x^{\prime }\#1_{H}\right) \right) \\
&=&\left[
\begin{array}{c}
\sum \gamma \left( x^{\left( 1\right) }\otimes x^{\left( 2\right) }{}_{\left\langle -1\right\rangle }x^{\left( 3\right) }{}_{\left\langle -2\right\rangle }x^{\prime \left( 1\right) }\right) \\
m\left( x^{\left( 2\right) }{}_{\left\langle 0\right\rangle }\otimes x^{\left( 3\right) }{}_{\left\langle -1\right\rangle }x^{\prime \left( 2\right) }\right) \gamma ^{-1}\left( {x^{\left( 3\right) }}_{\left\langle 0\right\rangle }\otimes x^{\prime \left( 3\right) }\right)
\end{array}\right] \otimes 1_{H} \\
&=&\left[ \sum \gamma \left( \left( x\otimes x^{\prime }\right) ^{\left( 1\right) }\right) \cdot m\left( \left( x\otimes x^{\prime }\right) ^{\left( 2\right) }\right) \cdot \gamma ^{-1}\left( \left( x\otimes x^{\prime }\right) ^{\left( 3\right) }\right) \right] \#1_{H} \\
&=&\left( \gamma \ast m\ast \gamma ^{-1}\right) \left( x\otimes x^{\prime }\right) \#1_{H} \\
&=&m_{E^{\gamma }}\left( x\otimes x^{\prime }\right) \#1_{H} \\
&=&m_{E^{\gamma }\#H}\left( \left( x\#1_{H}\right) \otimes \left( x^{\prime }\#1_{H}\right) \right) .
\end{eqnarray*}
Moreover, $u_{\left( E\#H\right) ^{\Gamma }}=u_{E\#H}=1_{E}\#1_{H}=1_{E^{\gamma }}\#1_{H}=u_{E^{\gamma }\#H}.$ As a coalgebra $\left( E\#H\right) ^{\Gamma }$ coincides with $E\#H$ and hence with $E^{\gamma }\#H$. Finally, let us check that $\omega _{E^{\gamma }\#H}$ and $\omega _{\left( E\#H\right) ^{\Gamma }}$ coincide. To this aim, let us use the maps $\mho _{H,-}^{\ast }$ of \cite[Lemma 2.15]{ABM}. First note that $\omega _{E^{\gamma }\#H}=\mho _{H,E^{\gamma }}^{3}\left( \omega _{E^{\gamma }}\right) $ by \cite[Proposition 5.3]{ABM}.
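For the reader's convenience we also record the elementwise form of $\mho _{H,E}^{3}$ used below (this is how the map of \cite[Lemma 2.15]{ABM} acts in the computations that follow): for a morphism $f:E\otimes E\otimes E\rightarrow \Bbbk $ in ${_{H}^{H}\mathcal{YD}}$,
\begin{equation*}
\mho _{H,E}^{3}\left( f\right) \left( r\#h\otimes s\#l\otimes t\#k\right) =f\left( r\otimes h_{1}s\otimes h_{2}lt\right) \varepsilon _{H}\left( k\right) .
\end{equation*}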
Now
\begin{eqnarray*}
\omega _{\left( E\#H\right) ^{\Gamma }} &=&\left( \varepsilon _{E\#H}\otimes \Gamma \right) \ast \Gamma \left( E\#H\otimes m_{E\#H}\right) \ast \omega _{E\#H}\ast \Gamma ^{-1}\left( m_{E\#H}\otimes E\#H\right) \ast \left( \Gamma ^{-1}\otimes \varepsilon _{E\#H}\right) \\
&=&\left( \mho _{H,E}^{1}\left( \varepsilon \right) \otimes \mho _{H,E}^{2}\left( \gamma \right) \right) \ast \mho _{H,E}^{2}\left( \gamma \right) \left( E\#H\otimes m_{E\#H}\right) \ast \mho _{H,E}^{3}\left( \omega \right) \\
&&\ast \mho _{H,E}^{2}\left( \gamma ^{-1}\right) \left( m_{E\#H}\otimes E\#H\right) \ast \left( \mho _{H,E}^{2}\left( \gamma ^{-1}\right) \otimes \mho _{H,E}^{1}\left( \varepsilon \right) \right) .
\end{eqnarray*}
One easily checks that
\begin{eqnarray*}
\mho _{H,E}^{1}\left( \varepsilon \right) \otimes \mho _{H,E}^{2}\left( \gamma \right) &=&\mho _{H,E^{\gamma }}^{3}\left( \varepsilon \otimes \gamma \right) , \\
\mho _{H,E}^{2}\left( \gamma \right) \left( E\#H\otimes m_{E\#H}\right) &=&\mho _{H,E^{\gamma }}^{3}\left( \gamma \left( E\otimes m\right) \right) , \\
\mho _{H,E}^{2}\left( \gamma ^{-1}\right) \left( m_{E\#H}\otimes E\#H\right) &=&\mho _{H,E^{\gamma }}^{3}\left( \gamma ^{-1}\left( m\otimes E\right) \right) , \\
\mho _{H,E}^{2}\left( \gamma ^{-1}\right) \otimes \mho _{H,E}^{1}\left( \varepsilon _{E}\right) &=&\mho _{H,E^{\gamma }}^{3}\left( \gamma ^{-1}\otimes \varepsilon \right) .
\end{eqnarray*}
Thus we obtain
\begin{eqnarray*}
\omega _{\left( E\#H\right) ^{\Gamma }} &=&\mho _{H,E^{\gamma }}^{3}\left( \varepsilon \otimes \gamma \right) \ast \mho _{H,E^{\gamma }}^{3}\left( \gamma \left( E\otimes m\right) \right) \ast \mho _{H,E}^{3}\left( \omega \right) \ast \mho _{H,E^{\gamma }}^{3}\left( \gamma ^{-1}\left( m\otimes E\right) \right) \ast \mho _{H,E^{\gamma }}^{3}\left( \gamma ^{-1}\otimes \varepsilon \right) \\
&=&\mho _{H,E^{\gamma }}^{3}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( E\otimes m\right) \ast \omega \ast \gamma ^{-1}\left( m\otimes E\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \\
&=&\mho _{H,E^{\gamma }}^{3}\left( \omega _{E^{\gamma }}\right) =\omega _{E^{\gamma }\#H}.
\end{eqnarray*}
\begin{invisible}
We have
\begin{eqnarray*}
\left[ \mho _{H,E}^{1}\left( \varepsilon \right) \otimes \mho _{H,E}^{2}\left( \gamma \right) \right] \left( r\#h\otimes s\#l\otimes t\#k\right) &=&\varepsilon _{E\#H}\left( r\#h\right) \mho _{H,E}^{2}\left( \gamma \right) \left( s\#l\otimes t\#k\right) \\
&=&\varepsilon \left( r\right) \varepsilon _{H}\left( h\right) \gamma \left( s\otimes lt\right) \varepsilon _{H}\left( k\right) \\
&&\overset{\gamma \text{ lin.}}{=}\varepsilon \left( r\right) \gamma \left( h_{1}s\otimes h_{2}lt\right) \varepsilon _{H}\left( k\right) \\
&=&\left( \varepsilon \otimes \gamma \right) \left( r\otimes h_{1}s\otimes h_{2}lt\right) \varepsilon _{H}\left( k\right) \\
&=&\mho _{H,E^{\gamma }}^{3}\left( \varepsilon \otimes \gamma \right) \left( r\#h\otimes s\#l\otimes t\#k\right) ,
\end{eqnarray*}
\begin{eqnarray*}
&&\mho _{H,E}^{2}\left( \gamma \right) \left( E\#H\otimes m_{E\#H}\right) \left( r\#h\otimes s\#l\otimes t\#k\right) \\
&=&\mho _{H,E}^{2}\left( \gamma \right) \left( r\#h\otimes m\left( s\otimes l_{1}t\right) \#l_{2}k\right) =\gamma \left( r\otimes hm\left( s\otimes l_{1}t\right) \right) \varepsilon _{H}\left( l_{2}k\right) \\
&&\overset{m\text{ lin.}}{=}\gamma \left( r\otimes m\left( h_{1}s\otimes h_{2}lt\right) \right) \varepsilon _{H}\left( k\right) \\
&=&\gamma \left( E\otimes m\right) \left( r\otimes h_{1}s\otimes h_{2}lt\right) \varepsilon _{H}\left( k\right) \\
&=&\mho _{H,E^{\gamma }}^{3}\left( \gamma \left( E\otimes m\right) \right) \left( r\#h\otimes s\#l\otimes t\#k\right) ,
\end{eqnarray*}
\begin{eqnarray*}
&&\mho _{H,E}^{2}\left( \gamma ^{-1}\right) \left( m_{E\#H}\otimes E\#H\right) \left( r\#h\otimes s\#l\otimes t\#k\right) \\
&=&\mho _{H,E}^{2}\left( \gamma ^{-1}\right) \left( m\left( r\otimes h_{1}s\right) \#h_{2}l\otimes t\#k\right) =\gamma ^{-1}\left( m\left( r\otimes h_{1}s\right) \otimes h_{2}lt\right) \varepsilon _{H}\left( k\right) \\
&=&\gamma ^{-1}\left( m\otimes E\right) \left( r\otimes h_{1}s\otimes h_{2}lt\right) \varepsilon _{H}\left( k\right) =\mho _{H,E^{\gamma }}^{3}\left( \gamma ^{-1}\left( m\otimes E\right) \right) \left( r\#h\otimes s\#l\otimes t\#k\right) ,
\end{eqnarray*}
\begin{eqnarray*}
&&\left( \mho _{H,E}^{2}\left( \gamma ^{-1}\right) \otimes \mho _{H,E}^{1}\left( \varepsilon _{E}\right) \right) \left( r\#h\otimes s\#l\otimes t\#k\right) \\
&=&\gamma ^{-1}\left( r\otimes hs\right) \varepsilon _{H}\left( l\right) \varepsilon \left( t\right) \varepsilon _{H}\left( k\right) \\
&=&\gamma ^{-1}\left( r\otimes h_{1}s\right) \varepsilon \left( h_{2}lt\right) \varepsilon _{H}\left( k\right) \\
&=&\left( \gamma ^{-1}\otimes \varepsilon \right) \left( r\otimes h_{1}s\otimes h_{2}lt\right) \varepsilon _{H}\left( k\right) \\
&=&\mho _{H,E^{\gamma }}^{3}\left( \gamma ^{-1}\otimes \varepsilon \right) \left( r\#h\otimes s\#l\otimes t\#k\right) .
\end{eqnarray*}
\end{invisible}
\end{proof}
\begin{proposition}
\label{pro:grgaugeYD} Let $H$ be a Hopf algebra and let $\left( Q,m,u,\Delta ,\varepsilon ,\omega \right) $ be a connected coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$. Let $\gamma :Q\otimes Q\rightarrow \Bbbk $ be a gauge transformation in ${_{H}^{H}\mathcal{YD}}$. Then $\mathrm{gr}\left( Q^{\gamma }\right) $ and $\mathrm{gr}\left( Q\right) $ coincide as bialgebras in ${_{H}^{H}\mathcal{YD}}$.
\end{proposition}
\begin{proof}
By Proposition \ref{pro:deformYD}, $Q^{\gamma }$ is a coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$. It is obviously connected as it coincides with $Q$ as a coalgebra.
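In particular, since $Q^{\gamma }=Q$ as a coalgebra, the coradical filtrations of $Q^{\gamma }$ and $Q$ coincide, so that
\begin{equation*}
\mathrm{gr}\left( Q^{\gamma }\right) =\bigoplus\limits_{n\in \N_0}Q_{n}/Q_{n-1}=\mathrm{gr}\left( Q\right)
\end{equation*}
as coalgebras in ${_{H}^{H}\mathcal{YD}}$; only the multiplications and the units remain to be compared.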
By Theorem \ref{teo:grHopf}, both $\mathrm{gr}Q$ and $\mathrm{gr}\left( Q^{\gamma }\right) $ are connected bialgebras in ${_{H}^{H}\mathcal{YD}}$. Let us check that they coincide. Note that, by Remark \ref{rem:gamma-1gauge}, we have that $\gamma ^{-1}$ is a gauge transformation, hence it is trivial on $\Bbbk 1_{Q}\otimes 1_{Q}.$ Let $C:=Q\otimes Q$. Let $n>0$ and let $w\in C_{\left( n\right) }=\sum_{i+j\leq n}Q_{i}\otimes Q_{j}.$ By \cite[Lemma 3.69]{AMS}, we have that $\Delta _{C}\left( w\right) -w\otimes \left( 1_{Q}\right) ^{\otimes 2}-\left( 1_{Q}\right) ^{\otimes 2}\otimes w\in C_{\left( n-1\right) }\otimes C_{\left( n-1\right) }$. Thus we get
\begin{equation*}
w_{1}\otimes w_{2}\otimes w_{3}-\Delta _{C}\left( w\right) \otimes \left( 1_{Q}\right) ^{\otimes 2}-\Delta _{C}\left( \left( 1_{Q}\right) ^{\otimes 2}\right) \otimes w\in \Delta _{C}\left( C_{\left( n-1\right) }\right) \otimes C_{\left( n-1\right) }
\end{equation*}
and hence
\begin{equation*}
w_{1}\otimes w_{2}\otimes w_{3}-w\otimes \left( 1_{Q}\right) ^{\otimes 2}\otimes \left( 1_{Q}\right) ^{\otimes 2}-\left( 1_{Q}\right) ^{\otimes 2}\otimes w\otimes \left( 1_{Q}\right) ^{\otimes 2}-\left( 1_{Q}\right) ^{\otimes 4}\otimes w\in C_{\left( n-1\right) }\otimes C_{\left( n-1\right) }\otimes C_{\left( n-1\right) }.
\end{equation*}
Since $m\left( C_{\left( n-1\right) }\right) \subseteq Q_{n-1}$ we get
\begin{equation*}
w_{1}\otimes m\left( w_{2}\right) \otimes w_{3}-w\otimes 1_{Q}\otimes \left( 1_{Q}\right) ^{\otimes 2}-\left( 1_{Q}\right) ^{\otimes 2}\otimes m\left( w\right) \otimes \left( 1_{Q}\right) ^{\otimes 2}-\left( 1_{Q}\right) ^{\otimes 3}\otimes w\in C_{\left( n-1\right) }\otimes Q_{n-1}\otimes C_{\left( n-1\right) }
\end{equation*}
and hence
\begin{equation}
w_{1}\otimes \left( m\left( w_{2}\right) +Q_{n-1}\right) \otimes w_{3}=\left( 1_{Q}\right) ^{\otimes 2}\otimes \left( m\left( w\right) +Q_{n-1}\right) \otimes \left( 1_{Q}\right) ^{\otimes 2}. \label{form:gr1}
\end{equation}
Let $x,y\in Q$. We compute
\begin{eqnarray*}
\overline{x}\cdot _{\gamma }\overline{y} &=&\left( x+Q_{\left\vert x\right\vert -1}\right) \cdot _{\gamma }\left( y+Q_{\left\vert y\right\vert -1}\right) \\
&=&\left( x\cdot _{\gamma }y\right) +Q_{\left\vert x\right\vert +\left\vert y\right\vert -1} \\
&=&\gamma \left( \left( x\otimes y\right) _{1}\right) m\left( \left( x\otimes y\right) _{2}\right) \gamma ^{-1}\left( \left( x\otimes y\right) _{3}\right) +Q_{\left\vert x\right\vert +\left\vert y\right\vert -1} \\
&=&\gamma \left( \left( x\otimes y\right) _{1}\right) \left( m\left( \left( x\otimes y\right) _{2}\right) +Q_{\left\vert x\right\vert +\left\vert y\right\vert -1}\right) \gamma ^{-1}\left( \left( x\otimes y\right) _{3}\right) \\
&\overset{(\ref{form:gr1})}{=}&\gamma \left( \left( 1_{Q}\right) ^{\otimes 2}\right) \left( m\left( x\otimes y\right) +Q_{\left\vert x\right\vert +\left\vert y\right\vert -1}\right) \gamma ^{-1}\left( \left( 1_{Q}\right) ^{\otimes 2}\right) \\
&=&m\left( x\otimes y\right) +Q_{\left\vert x\right\vert +\left\vert y\right\vert -1}=\left( x\cdot y\right) +Q_{\left\vert x\right\vert +\left\vert y\right\vert -1}=\overline{x}\cdot \overline{y}.
\end{eqnarray*}
Note that $Q^{\gamma }$ and $Q$ have the same unit, so that $\mathrm{gr}Q$ and $\mathrm{gr}\left( Q^{\gamma }\right) $ have the same unit as well.
\end{proof}
\section{(Co)semisimple case}\label{sec:3}
Assume $H$ is a semisimple and cosemisimple Hopf algebra (e.g. $H$ is finite-dimensional cosemisimple over a field of characteristic zero). Note that $H$ is then separable (see e.g.
\cite[Corollary 3.7]{St} or \cite[Theorem 2.34]{AMS}), whence finite-dimensional. Let $\left( Q,m,u,\Delta ,\varepsilon \right) $ be a f.d. coalgebra with multiplication and unit in ${_{H}^{H}\mathcal{YD}}$. Assume that the coradical $Q_{0}$ is a subcoalgebra of $Q$ in ${_{H}^{H}\mathcal{YD}}$ such that $Q_{0}\cdot Q_{0}\subseteq Q_{0}.$ Let $y^{n,i}$ with $1\leq i\leq \dim \left( Q_{n}/Q_{n-1}\right) $ be a basis for $Q_{n}/Q_{n-1}.$ Consider, for every $n>0,$ the exact sequence in ${_{H}^{H}\mathcal{YD}}$ given by
\begin{equation*}
\xymatrixrowsep{25pt}\xymatrixcolsep{1cm}
\xymatrix{0\ar[r]&Q_{n-1}\ar[r]^{s_n}&Q_n\ar[r]^{\pi_n}&\frac{Q_n}{Q_{n-1}}\ar[r]&0}
\end{equation*}
Now, since $H$ is semisimple and cosemisimple, by \cite[Proposition 7]{Ra} the Drinfeld double $D(H)$ is semisimple. By a result essentially due to Majid (see \cite[Proposition 10.6.16]{Mo}) and by \cite[Proposition 6]{RT}, we get that the category ${_{H}^{H}\mathcal{YD}}\cong {_{D(H)}}\mathfrak{M}$ is a semisimple category. Therefore $\pi _{n}$ cosplits, i.e. there is a morphism $\sigma _{n}:\left( Q_{n}/Q_{n-1}\right) \rightarrow Q_{n}$ in ${_{H}^{H}\mathcal{YD}}$ such that $\pi _{n}\sigma _{n}=\mathrm{Id}.$ Let $u_{n}:\Bbbk \rightarrow Q_{n}$ be the corestriction of the unit $u:\Bbbk \rightarrow Q$ and let $\varepsilon _{n}=\varepsilon _{\mid Q_{n}}:Q_{n}\rightarrow \Bbbk $ be the counit of the subcoalgebra $Q_{n}.$ Set
\begin{equation*}
\sigma _{n}^{\prime }:=\sigma _{n}-u_{n}\circ \varepsilon _{n}\circ \sigma _{n}.
\end{equation*}
This is a morphism in ${_{H}^{H}\mathcal{YD}}$. Moreover,
\begin{eqnarray*}
\pi _{n}\circ \sigma _{n}^{\prime } &=&\pi _{n}\circ \sigma _{n}-\pi _{n}\circ u_{n}\circ \varepsilon _{n}\circ \sigma _{n}\overset{n>0}{=}\mathrm{Id}_{Q_{n}/Q_{n-1}}-0=\mathrm{Id}_{Q_{n}/Q_{n-1}}, \\
\varepsilon _{n}\circ \sigma _{n}^{\prime } &=&\varepsilon _{n}\circ \sigma _{n}-\varepsilon _{n}\circ u_{n}\circ \varepsilon _{n}\circ \sigma _{n}=\varepsilon _{n}\circ \sigma _{n}-\varepsilon _{n}\circ \sigma _{n}=0.
\end{eqnarray*}
Therefore, without loss of generality we can assume that $\varepsilon _{n}\circ \sigma _{n}=0.$ A standard argument on split short exact sequences shows that there exists a morphism $p_{n}:Q_{n}\rightarrow Q_{n-1}$ in ${_{H}^{H}\mathcal{YD}}$ such that $s_{n}p_{n}+\sigma _{n}\pi _{n}=\mathrm{Id}_{Q_{n}}$, $p_{n}s_{n}=\mathrm{Id}_{Q_{n-1}}$ and $p_{n}\sigma _{n}=0$. We set
\begin{equation*}
x^{n,i}:=\sigma _{n}\left( y^{n,i}\right) .
\end{equation*}
Therefore
\begin{equation*}
y^{n,i}=\pi _{n}\sigma _{n}\left( y^{n,i}\right) =\pi _{n}\left( x^{n,i}\right) =x^{n,i}+Q_{n-1}=\overline{x^{n,i}}.
\end{equation*}
These terms $x^{n,i}$ define a $\Bbbk $-basis for $Q.$ As $Q$ is finite-dimensional, there exists $d\in \N_0$ such that $Q=Q_{d}$; we fix $d$ minimal. For all $0\leq a\leq b,$ define the maps
\begin{eqnarray*}
p_{a,b} &:&Q_{b}\rightarrow Q_{a},\qquad p_{a,b}:=p_{a+1}\circ p_{a+2}\circ \cdots \circ p_{b-1}\circ p_{b}, \\
s_{b,a} &:&Q_{a}\rightarrow Q_{b},\qquad s_{b,a}:=s_{b}\circ s_{b-1}\circ \cdots \circ s_{a+2}\circ s_{a+1}.
\end{eqnarray*}
Clearly one has
\begin{equation*}
p_{a,b}\circ s_{b,a}=\mathrm{Id}_{Q_{a}}\text{.}
\end{equation*}
Thus, for $0\leq i,a\leq b$ we have
\begin{equation}
p_{i,b}\circ s_{b,a}=\left\{
\begin{array}{cc}
p_{i,b}\circ s_{b,i}\circ s_{i,a} & i>a \\
p_{i,a}\circ p_{a,b}\circ s_{b,a} & i\leq a
\end{array}\right. =\left\{
\begin{array}{cc}
s_{i,a} & i>a \\
p_{i,a} & i\leq a
\end{array}\right. \label{form:ps1}
\end{equation}
Thus we get an isomorphism $\varphi :Q\rightarrow \mathrm{gr}Q$ of objects in ${_{H}^{H}\mathcal{YD}}$ given by
\begin{eqnarray*}
\varphi \left( x\right) &:=&p_{0,d}\left( x\right) +\pi _{1}p_{1,d}\left( x\right) +\pi _{2}p_{2,d}\left( x\right) +\cdots +\pi _{d-2}p_{d-2,d}\left( x\right) +\pi _{d-1}p_{d-1,d}\left( x\right) +\pi _{d}\left( x\right) \\
&=&\sum_{0\leq t\leq d}\pi _{t}p_{t,d}\left( x\right) ,\text{ for every }x\in Q,
\end{eqnarray*}
where we set
\begin{equation*}
\pi _{0}=\mathrm{Id}_{Q_{0}},\qquad p_{d,d}=\mathrm{Id}_{Q_{d}}.
\end{equation*}
For $0\leq n\leq d,$ we have
\begin{eqnarray*}
\varphi \left( x^{n,i}\right) &=&\varphi \left( s_{d,n}\left( x^{n,i}\right) \right) =\varphi \left( s_{d,n}\sigma _{n}\left( y^{n,i}\right) \right) =\sum_{0\leq t\leq d}\pi _{t}p_{t,d}s_{d,n}\left( \sigma _{n}\left( y^{n,i}\right) \right) \\
&=&\sum_{n<t\leq d}\pi _{t}p_{t,d}s_{d,n}\left( \sigma _{n}\left( y^{n,i}\right) \right) +\sum_{0\leq t\leq n}\pi _{t}p_{t,d}s_{d,n}\left( \sigma _{n}\left( y^{n,i}\right) \right) \\
&\overset{(\ref{form:ps1})}{=}&\sum_{n<t\leq d}\pi _{t}s_{t,n}\left( \sigma _{n}\left( y^{n,i}\right) \right) +\sum_{0\leq t<n}\pi _{t}p_{t,n}\left( \sigma _{n}\left( y^{n,i}\right) \right) +\pi _{n}p_{n,d}s_{d,n}\left( \sigma _{n}\left( y^{n,i}\right) \right) \\
&=&\sum_{n<t\leq d}\pi _{t}s_{t,t-1}s_{t-1,n}\left( \sigma _{n}\left( y^{n,i}\right) \right) +\sum_{0\leq t<n}\pi _{t}p_{t,n-1}p_{n-1,n}\left( \sigma _{n}\left( y^{n,i}\right) \right) + \\
&&+\pi _{n}p_{n,d}s_{d,n}\left( \sigma _{n}\left( y^{n,i}\right) \right) \\
&=&\sum_{n<t\leq d}\pi _{t}s_{t}s_{t-1,n}\sigma _{n}\left( y^{n,i}\right) +\sum_{0\leq t<n}\pi _{t}p_{t,n-1}p_{n}\sigma _{n}\left( y^{n,i}\right) +\pi _{n}\sigma _{n}\left( y^{n,i}\right) \\
&=&0+0+y^{n,i}=y^{n,i}.
\end{eqnarray*}
Hence $\varphi \left( x^{n,i}\right) =y^{n,i}.$ Since $y^{n,i}$ with $1\leq i\leq \dim \left( Q_{n}/Q_{n-1}\right) =:d_{n}$ form a basis for $Q_{n}/Q_{n-1}$, we have that
\begin{equation*}
hy^{n,i}\in \frac{Q_{n}}{Q_{n-1}},\qquad \left( y^{n,i}\right) _{-1}\otimes \left( y^{n,i}\right) _{0}\in H\otimes \frac{Q_{n}}{Q_{n-1}}.
\end{equation*}
Therefore there are $\chi _{t,i}^{n}\in H^{\ast }$ and $h_{i,t}^{n}\in H$ such that
\begin{equation}
hy^{n,i}=\sum_{1\leq t\leq d_{n}}\chi _{t,i}^{n}\left( h\right) y^{n,t},\qquad \left( y^{n,i}\right) _{-1}\otimes \left( y^{n,i}\right) _{0}=\sum_{1\leq t\leq d_{n}}h_{i,t}^{n}\otimes y^{n,t}. \label{eq:chi}
\end{equation}
We have
\begin{eqnarray*}
h\left( h^{\prime }y^{n,i}\right) &=&\sum_{1\leq s\leq d_{n}}\chi _{s,i}^{n}\left( h^{\prime }\right) hy^{n,s}=\sum_{1\leq s\leq d_{n}}\chi _{s,i}^{n}\left( h^{\prime }\right) \sum_{1\leq t\leq d_{n}}\chi _{t,s}^{n}\left( h\right) y^{n,t} \\
&=&\sum_{1\leq s\leq d_{n}}\sum_{1\leq t\leq d_{n}}\chi _{t,s}^{n}\left( h\right) \chi _{s,i}^{n}\left( h^{\prime }\right) y^{n,t}, \\
\left( hh^{\prime }\right) y^{n,i} &=&\sum_{1\leq t\leq d_{n}}\chi _{t,i}^{n}\left( hh^{\prime }\right) y^{n,t}
\end{eqnarray*}
and hence
\begin{equation*}
\chi _{t,i}^{n}\left( hh^{\prime }\right) =\sum_{1\leq s\leq d_{n}}\chi _{t,s}^{n}\left( h\right) \chi _{s,i}^{n}\left( h^{\prime }\right) .
\end{equation*}
Moreover,
\begin{equation*}
y^{n,i}=1_{H}y^{n,i}=\sum_{1\leq t\leq d_{n}}\chi _{t,i}^{n}\left( 1_{H}\right) y^{n,t}
\end{equation*}
and hence
\begin{equation*}
\chi _{t,i}^{n}\left( 1_{H}\right) =\delta _{t,i}.
\end{equation*}
We also have
\begin{eqnarray*}
\left( y^{n,i}\right) _{-1}\otimes \left( \left( y^{n,i}\right) _{0}\right) _{-1}\otimes \left( \left( y^{n,i}\right) _{0}\right) _{0} &=&\sum_{1\leq s\leq d_{n}}h_{i,s}^{n}\otimes \left( y^{n,s}\right) _{-1}\otimes \left( y^{n,s}\right) _{0} \\
&=&\sum_{1\leq s\leq d_{n}}h_{i,s}^{n}\otimes \sum_{1\leq t\leq d_{n}}h_{s,t}^{n}\otimes y^{n,t} \\
&=&\sum_{1\leq s\leq d_{n}}\sum_{1\leq t\leq d_{n}}h_{i,s}^{n}\otimes h_{s,t}^{n}\otimes y^{n,t}, \\
\left( \left( y^{n,i}\right) _{-1}\right) _{1}\otimes \left( \left( y^{n,i}\right) _{-1}\right) _{2}\otimes \left( y^{n,i}\right) _{0} &=&\sum_{1\leq t\leq d_{n}}\Delta _{H}\left( h_{i,t}^{n}\right) \otimes y^{n,t}
\end{eqnarray*}
so that
\begin{equation*}
\Delta _{H}\left( h_{i,t}^{n}\right) =\sum_{1\leq s\leq d_{n}}h_{i,s}^{n}\otimes h_{s,t}^{n}.
\end{equation*}
Moreover
\begin{equation*}
y^{n,i}=\varepsilon _{H}\left( \left( y^{n,i}\right) _{-1}\right) \left( y^{n,i}\right) _{0}=\sum_{1\leq t\leq d_{n}}\varepsilon _{H}\left( h_{i,t}^{n}\right) y^{n,t}
\end{equation*}
and hence
\begin{equation*}
\varepsilon _{H}\left( h_{i,t}^{n}\right) =\delta _{i,t}.
\end{equation*}
Finally,
\begin{eqnarray*}
\left( h_{1}y^{n,i}\right) _{-1}h_{2}\otimes \left( h_{1}y^{n,i}\right) _{0} &=&\sum_{1\leq s\leq d_{n}}\chi _{s,i}^{n}\left( h_{1}\right) \left( y^{n,s}\right) _{-1}h_{2}\otimes \left( y^{n,s}\right) _{0} \\
&=&\sum_{1\leq s\leq d_{n}}\chi _{s,i}^{n}\left( h_{1}\right) \sum_{1\leq t\leq d_{n}}h_{s,t}^{n}h_{2}\otimes y^{n,t} \\
&=&\sum_{1\leq s\leq d_{n}}\sum_{1\leq t\leq d_{n}}h_{s,t}^{n}\chi _{s,i}^{n}\left( h_{1}\right) h_{2}\otimes y^{n,t}, \\
h_{1}\left( y^{n,i}\right) _{-1}\otimes h_{2}\left( y^{n,i}\right) _{0} &=&\sum_{1\leq s\leq d_{n}}h_{1}h_{i,s}^{n}\otimes h_{2}y^{n,s}=\sum_{1\leq s\leq d_{n}}h_{1}h_{i,s}^{n}\otimes \sum_{1\leq t\leq d_{n}}\chi _{t,s}^{n}\left( h_{2}\right) y^{n,t} \\
&=&\sum_{1\leq s\leq d_{n}}\sum_{1\leq t\leq d_{n}}h_{1}\chi _{t,s}^{n}\left( h_{2}\right) h_{i,s}^{n}\otimes y^{n,t}.
\end{eqnarray*}
Therefore, we get
\begin{equation*}
\sum_{1\leq s\leq d_{n}}h_{s,t}^{n}\chi _{s,i}^{n}\left( h_{1}\right) h_{2}=\sum_{1\leq s\leq d_{n}}h_{1}\chi _{t,s}^{n}\left( h_{2}\right) h_{i,s}^{n}.
\end{equation*}
We have
\begin{eqnarray*}
hx^{n,i} &=&h\sigma _{n}\left( y^{n,i}\right) =\sigma _{n}\left( hy^{n,i}\right) =\sigma _{n}\left( \sum_{1\leq t\leq d_{n}}\chi _{t,i}^{n}\left( h\right) y^{n,t}\right) =\sum_{1\leq t\leq d_{n}}\chi _{t,i}^{n}\left( h\right) x^{n,t}, \\
\left( x^{n,i}\right) _{-1}\otimes \left( x^{n,i}\right) _{0} &=&\left( \sigma _{n}\left( y^{n,i}\right) \right) _{-1}\otimes \left( \sigma _{n}\left( y^{n,i}\right) \right) _{0}=\left( y^{n,i}\right) _{-1}\otimes \sigma _{n}\left( \left( y^{n,i}\right) _{0}\right) =\sum_{1\leq t\leq d_{n}}h_{i,t}^{n}\otimes x^{n,t}, \\
\varepsilon _{Q}\left( x^{n,i}\right) &=&\varepsilon _{n}\left( x^{n,i}\right) =\varepsilon _{n}\sigma _{n}\left( y^{n,i}\right) =0\text{ for }n>0.
\end{eqnarray*}
If $Q$ is connected, then $d_{0}=1$ so we may assume $y^{0,0}:=1_{Q}+Q_{-1}$. Since $\pi _{0}=\mathrm{Id}_{Q_{0}}$ we get
\begin{equation*}
\sigma _{0}=\mathrm{Id}_{Q_{0}}\circ \sigma _{0}=\pi _{0}\circ \sigma _{0}=\mathrm{Id}_{Q_{0}}
\end{equation*}
and hence
\begin{equation*}
x^{0,0}=\sigma _{0}\left( y^{0,0}\right) =\sigma _{0}\left( 1_{Q}+Q_{-1}\right) =1_{Q}.
\end{equation*}
Since, by Proposition \ref{pro:CMU}, $Q_{a}\cdot Q_{a^{\prime }}\subseteq Q_{a+a^{\prime }}$ for every $a,a^{\prime }\in \N_0$, we can write the product of two elements of the basis in the form
\begin{equation}
x^{a,l}x^{a^{\prime },l^{\prime }}=\sum_{u\leq a+a^{\prime }}\sum_{v}\mu _{u,v}^{a,l,a^{\prime },l^{\prime }}x^{u,v}. \label{form:GelYD1}
\end{equation}
We compute
\begin{eqnarray*}
\overline{x^{a,l}}\cdot \overline{x^{a^{\prime },l^{\prime }}} &=&\left( x^{a,l}+Q_{a-1}\right) \left( x^{a^{\prime },l^{\prime }}+Q_{a^{\prime }-1}\right) \\
&=&\left( x^{a,l}x^{a^{\prime },l^{\prime }}\right) +Q_{a+a^{\prime }-1} \\
&&\overset{(\ref{form:GelYD1})}{=}\left( \sum_{u\leq a+a^{\prime }}\sum_{v}\mu _{u,v}^{a,l,a^{\prime },l^{\prime }}x^{u,v}\right) +Q_{a+a^{\prime }-1} \\
&=&\left( \sum_{v}\mu _{a+a^{\prime },v}^{a,l,a^{\prime },l^{\prime }}x^{a+a^{\prime },v}\right) +Q_{a+a^{\prime }-1} \\
&=&\sum_{v}\mu _{a+a^{\prime },v}^{a,l,a^{\prime },l^{\prime }}\left( x^{a+a^{\prime },v}+Q_{a+a^{\prime }-1}\right) \\
&=&\sum_{v}\mu _{a+a^{\prime },v}^{a,l,a^{\prime },l^{\prime }}\overline{x^{a+a^{\prime },v}},
\end{eqnarray*}
which gives
\begin{equation}
\overline{x^{a,l}}\cdot \overline{x^{a^{\prime },l^{\prime }}}=\sum_{v}\mu _{a+a^{\prime },v}^{a,l,a^{\prime },l^{\prime }}\overline{x^{a+a^{\prime },v}}. \label{form:GelYD2}
\end{equation}
\begin{remark}
\label{rem:HochYD}Let $H$ be a Hopf algebra and let $\left( A,m_{A},u_{A}\right) $ be an algebra in ${_{H}^{H}\mathcal{YD}}$. Let $\varepsilon _{A}:A\rightarrow \Bbbk $ be an algebra map in ${_{H}^{H}\mathcal{YD}}$. The Hochschild cohomology in a monoidal category is known, see e.g. \cite{AMS-Hoch}. Consider $\Bbbk $ as an $A$-bimodule in ${_{H}^{H}\mathcal{YD}}$ through $\varepsilon _{A}$. Then, following \cite[1.24]{AMS-Hoch}, we can consider an analogue of the standard complex
\begin{equation*}
\xymatrixrowsep{25pt}\xymatrixcolsep{1cm}
\xymatrix{\yd( \Bbbk ,\Bbbk) \ar[r]^{\partial ^0}& \yd(A ,\Bbbk) \ar[r]^{\partial ^1}&\yd( A^{\otimes2} ,\Bbbk) \ar[r]^-{\partial ^2}&\yd( A^{\otimes3} ,\Bbbk) \ar[r]^-{\partial ^3}&\cdots}
\end{equation*}
Explicitly, given $f$ in the corresponding domain of $\partial ^{n},$ for $n=0,1,2,3$, we have
\begin{eqnarray*}
\partial ^{0}\left( f\right) &=&f\left( 1\right) \varepsilon _{A}-\varepsilon _{A}f\left( 1\right) =0, \\
\partial ^{1}\left( f\right) &=&f\otimes \varepsilon _{A}-fm_{A}+\varepsilon _{A}\otimes f, \\
\partial ^{2}\left( f\right) &=&f\otimes \varepsilon _{A}-f\left( A\otimes m_{A}\right) +f\left( m_{A}\otimes A\right) -\varepsilon _{A}\otimes f, \\
\partial ^{3}\left( f\right) &=&f\otimes \varepsilon _{A}-f\left( A\otimes A\otimes m_{A}\right) +f\left( A\otimes m_{A}\otimes A\right) -f\left( m_{A}\otimes A\otimes A\right) +\varepsilon _{A}\otimes f.
\end{eqnarray*}
For every $n\geq 1$ denote by
\begin{equation*}
\mathrm{Z}_{{\mathcal{YD}}}^{n}\left( A,\Bbbk \right) :=\mathrm{ker}\left( \partial ^{n}\right) ,\qquad \mathrm{B}_{{\mathcal{YD}}}^{n}\left( A,\Bbbk \right) :=\mathrm{Im}\left( \partial ^{n-1}\right) \qquad \text{and}\qquad \mathrm{H}_{{\mathcal{YD}}}^{n}\left( A,\Bbbk \right) :=\frac{\mathrm{Z}_{{\mathcal{YD}}}^{n}\left( A,\Bbbk \right) }{\mathrm{B}_{{\mathcal{YD}}}^{n}\left( A,\Bbbk \right) }
\end{equation*}
the abelian groups of $n$-cocycles, of $n$-coboundaries and the $n$-th Hochschild cohomology group in ${_{H}^{H}\mathcal{YD}}$ of the algebra $A$ with coefficients in $\Bbbk $.
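As a sanity check, one can verify directly that $\partial ^{2}\partial ^{1}=0$: expanding the formulas above and using that $\varepsilon _{A}$ is an algebra map, all terms cancel in pairs except the two involving $fm_{A}$, namely
\begin{equation*}
\partial ^{2}\partial ^{1}\left( f\right) =fm_{A}\left( A\otimes m_{A}\right) -fm_{A}\left( m_{A}\otimes A\right) =0,
\end{equation*}
which vanishes by the associativity of $m_{A}$.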
We point out that the construction above works for an arbitrary $A$-bimodule $M$ in ${_{H}^{H}\mathcal{YD}}$ instead of $\Bbbk $.
\end{remark}
The next result is inspired by \cite[Proposition 2.3]{EG}. Two coquasi-bialgebras $Q$ and $Q^{\prime }$ in ${_{H}^{H}\mathcal{YD}}$ will be called \textbf{gauge equivalent} whenever there is some gauge transformation $\gamma :Q\otimes Q\rightarrow \Bbbk $ in ${_{H}^{H}\mathcal{YD}}$ such that $Q^{\gamma }\cong Q^{\prime }$ as coquasi-bialgebras in ${_{H}^{H}\mathcal{YD}}$; see Proposition \ref{pro:deformYD} for the structure of $Q^{\gamma }$.
\begin{theorem}
\label{teo:GelakiYD} Let $H$ be a semisimple and cosemisimple Hopf algebra and let $\left( Q,m,u,\Delta ,\varepsilon ,\omega \right) $ be a f.d. connected coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$. If $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q,\Bbbk \right) =0$ then $Q$ is gauge equivalent to a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$.
\end{theorem}
\begin{proof}
For $t\in \N_0$ and $x,y,z$ in the basis of $Q$, we set
\begin{equation*}
\omega _{t}\left( x\otimes y\otimes z\right) :=\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\omega \left( x\otimes y\otimes z\right) .
\end{equation*}
Let us check that it defines a morphism $\omega _{t}:Q\otimes Q\otimes Q\rightarrow \Bbbk $ in ${_{H}^{H}\mathcal{YD}}$. It is left $H$-linear as, by means of (\ref{eq:chi}), the definition of $\omega _{t}$ and the $H$-linearity of $\omega $, we can prove that $\omega _{t}\left( h\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) \right) =\varepsilon _{H}\left( h\right) \omega _{t}\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) .$
\begin{invisible}
\begin{eqnarray*}
&&\omega _{t}\left( h\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) \right) \\
&=&\omega _{t}\left( h_{1}x^{n,i}\otimes h_{2}x^{n^{\prime },i^{\prime }}\otimes h_{3}x^{n^{\prime \prime },i^{\prime \prime }}\right) \\
&&\overset{(\ref{eq:chi})}{=}\omega _{t}\left( \sum_{1\leq w\leq d_{n}}\chi _{w,i}^{n}\left( h_{1}\right) x^{n,w}\otimes \sum_{1\leq w^{\prime }\leq d_{n^{\prime }}}\chi _{w^{\prime },i^{\prime }}^{n^{\prime }}\left( h_{2}\right) x^{n^{\prime },w^{\prime }}\otimes \sum_{1\leq w^{\prime \prime }\leq d_{n^{\prime \prime }}}\chi _{w^{\prime \prime },i^{\prime \prime }}^{n^{\prime \prime }}\left( h_{3}\right) x^{n^{\prime \prime },w^{\prime \prime }}\right) \\
&=&\sum_{1\leq w\leq d_{n}}\sum_{1\leq w^{\prime }\leq d_{n^{\prime }}}\sum_{1\leq w^{\prime \prime }\leq d_{n^{\prime \prime }}}\chi _{w,i}^{n}\left( h_{1}\right) \chi _{w^{\prime },i^{\prime }}^{n^{\prime }}\left( h_{2}\right) \chi _{w^{\prime \prime },i^{\prime \prime }}^{n^{\prime \prime }}\left( h_{3}\right) \omega _{t}\left( x^{n,w}\otimes x^{n^{\prime },w^{\prime }}\otimes x^{n^{\prime \prime },w^{\prime \prime }}\right) \\
&=&\sum_{1\leq w\leq d_{n}}\sum_{1\leq w^{\prime }\leq d_{n^{\prime }}}\sum_{1\leq w^{\prime \prime }\leq d_{n^{\prime \prime }}}\chi _{w,i}^{n}\left( h_{1}\right) \chi _{w^{\prime },i^{\prime }}^{n^{\prime }}\left( h_{2}\right) \chi _{w^{\prime \prime },i^{\prime \prime }}^{n^{\prime \prime }}\left( h_{3}\right) \delta _{n+n^{\prime }+n^{\prime \prime },t}\omega \left( x^{n,w}\otimes x^{n^{\prime },w^{\prime }}\otimes x^{n^{\prime \prime },w^{\prime \prime }}\right) \\
&&\overset{(\ref{eq:chi})}{=}\delta _{n+n^{\prime }+n^{\prime \prime },t}\omega \left( h_{1}x^{n,i}\otimes h_{2}x^{n^{\prime },i^{\prime }}\otimes h_{3}x^{n^{\prime \prime },i^{\prime \prime }}\right) \\
&=&\delta _{n+n^{\prime }+n^{\prime \prime },t}\omega \left( h\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) \right) \\
&=&\delta _{n+n^{\prime }+n^{\prime \prime },t}\varepsilon _{H}\left( h\right) \omega \left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) =\varepsilon _{H}\left( h\right) \omega _{t}\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) .
\end{eqnarray*}
\end{invisible}
Moreover, it is left $H$-colinear as, by means of (\ref{eq:chi}), the definition of $\omega _{t}$ and the $H$-colinearity of $\omega $, we can prove that
\begin{equation*}
\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) _{\left\langle -1\right\rangle }\otimes \omega _{t}\left( \left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) _{\left\langle 0\right\rangle }\right) =1_{H}\otimes \omega _{t}\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) .
\end{equation*}
\begin{invisible}
\begin{eqnarray*}
&&\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) _{\left\langle -1\right\rangle }\otimes \omega _{t}\left( \left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) _{\left\langle 0\right\rangle }\right) \\
&=&\left( x^{n,i}\right) _{-1}\left( x^{n^{\prime },i^{\prime }}\right) _{-1}\left( x^{n^{\prime \prime },i^{\prime \prime }}\right) _{-1}\otimes \omega _{t}\left( \left( x^{n,i}\right) _{0}\otimes \left( x^{n^{\prime },i^{\prime }}\right) _{0}\otimes \left( x^{n^{\prime \prime },i^{\prime \prime }}\right) _{0}\right) \\
&&\overset{(\ref{eq:chi})}{=}\sum_{1\leq w\leq d_{n}}h_{i,w}^{n}\sum_{1\leq w^{\prime }\leq d_{n^{\prime }}}h_{i^{\prime },w^{\prime }}^{n^{\prime }}\sum_{1\leq w^{\prime \prime }\leq d_{n^{\prime \prime }}}h_{i^{\prime \prime },w^{\prime \prime }}^{n^{\prime \prime }}\otimes \omega _{t}\left( x^{n,w}\otimes x^{n^{\prime },w^{\prime }}\otimes x^{n^{\prime \prime },w^{\prime \prime }}\right) \\
&=&\sum_{1\leq w\leq d_{n}}h_{i,w}^{n}\sum_{1\leq w^{\prime }\leq d_{n^{\prime }}}h_{i^{\prime },w^{\prime }}^{n^{\prime }}\sum_{1\leq w^{\prime \prime }\leq d_{n^{\prime \prime }}}h_{i^{\prime \prime },w^{\prime \prime }}^{n^{\prime \prime }}\otimes \delta _{n+n^{\prime }+n^{\prime \prime },t}\omega \left( x^{n,w}\otimes x^{n^{\prime },w^{\prime }}\otimes x^{n^{\prime \prime },w^{\prime \prime }}\right) \\
&&\overset{(\ref{eq:chi})}{=}\delta _{n+n^{\prime }+n^{\prime \prime },t}\left( x^{n,i}\right) _{-1}\left( x^{n^{\prime },i^{\prime }}\right) _{-1}\left( x^{n^{\prime \prime },i^{\prime \prime }}\right) _{-1}\otimes \omega \left( \left( x^{n,i}\right) _{0}\otimes \left( x^{n^{\prime },i^{\prime }}\right) _{0}\otimes \left( x^{n^{\prime \prime },i^{\prime \prime }}\right) _{0}\right) \\
&=&\delta _{n+n^{\prime }+n^{\prime \prime },t}\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) _{-1}\otimes \omega \left( \left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) _{0}\right) \\
&=&\delta _{n+n^{\prime }+n^{\prime \prime },t}\left( \omega \left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) \right) _{-1}\otimes \left( \omega \left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) \right) _{0} \\
&=&1_{H}\otimes \delta _{n+n^{\prime }+n^{\prime \prime },t}\omega \left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) \\
&=&1_{H}\otimes \omega _{t}\left( x^{n,i}\otimes x^{n^{\prime },i^{\prime }}\otimes x^{n^{\prime \prime },i^{\prime \prime }}\right) .
\end{eqnarray*}
\end{invisible}
Clearly, for $x,y,z\in Q$ in the basis, one has
\begin{equation*}
\sum\limits_{t\in \N_0}\omega _{t}\left( x\otimes y\otimes z\right) =\sum\limits_{t\in \N_0}\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\omega \left( x\otimes y\otimes z\right) =\omega \left( x\otimes y\otimes z\right)
\end{equation*}
so that we can formally write
\begin{equation}
\omega =\sum\limits_{t\in \N_0}\omega _{t}. \label{eq:omegapiece}
\end{equation}
Since $\varepsilon $ is trivial on elements in the basis of strictly positive degree, one gets
\begin{equation}
\omega _{0}=\varepsilon \otimes \varepsilon \otimes \varepsilon . \label{form:Omega0YD}
\end{equation}
\begin{invisible}
In fact
\begin{eqnarray*}
\omega _{0}\left( x\otimes y\otimes z\right) &=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,0}\omega \left( x\otimes y\otimes z\right) \\
&=&\delta _{\left\vert x\right\vert ,0}\delta _{\left\vert y\right\vert ,0}\delta _{\left\vert z\right\vert ,0}\omega \left( x\otimes y\otimes z\right) \\
&=&\delta _{\left\vert x\right\vert ,0}\delta _{\left\vert y\right\vert ,0}\delta _{\left\vert z\right\vert ,0}xyz\omega \left( 1_{\Bbbk }\otimes 1_{\Bbbk }\otimes 1_{\Bbbk }\right) \\
&=&\delta _{\left\vert x\right\vert ,0}\delta _{\left\vert y\right\vert ,0}\delta _{\left\vert z\right\vert ,0}xyz=\delta _{\left\vert x\right\vert ,0}\delta _{\left\vert y\right\vert ,0}\delta _{\left\vert z\right\vert ,0}\varepsilon \left( x\right) \varepsilon \left( y\right) \varepsilon \left( z\right) \\
&\overset{(\ast )}{=}&\varepsilon \left( x\right) \varepsilon \left( y\right) \varepsilon \left( z\right) =\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( x\otimes y\otimes z\right) ,
\end{eqnarray*}
where in $(\ast )$ we used that $\varepsilon $ is trivial on elements in the basis of strictly positive degree.
\end{invisible}
If $\omega =\omega _{0}$ then $Q$ is a (connected) bialgebra in ${_{H}^{H}\mathcal{YD}}$ and the proof is finished. Thus we can assume $\omega \neq \omega _{0}$ and set
\begin{eqnarray*}
s &:&=\min \left\{ i\in \N :\omega _{i}\neq 0\right\} , \\
\overline{\omega }_{s} &:&=\omega _{s}\left( \varphi ^{-1}\otimes \varphi ^{-1}\otimes \varphi ^{-1}\right) , \\
\overline{Q} &:&=\mathrm{gr}Q.
\end{eqnarray*}
Note that $\overline{\omega }_{s}$ is a morphism in ${_{H}^{H}\mathcal{YD}}$ as a composition of morphisms in ${_{H}^{H}\mathcal{YD}}$. Let $n\in \N_0$, let $C^{4}=Q\otimes Q\otimes Q\otimes Q$ and let $u\in C_{\left( n\right) }^{4}=\sum_{i+j+k+l\leq n}Q_{i}\otimes Q_{j}\otimes Q_{k}\otimes Q_{l}$.
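Before proceeding, note for later use that (\ref{eq:omegapiece}), (\ref{form:Omega0YD}) and the minimality of $s$ yield the decomposition
\begin{equation*}
\omega =\varepsilon \otimes \varepsilon \otimes \varepsilon +\omega _{s}+\sum\limits_{t>s}\omega _{t},
\end{equation*}
so that $\omega _{s}$ is the lowest-degree piece measuring the failure of $Q$ to be a bialgebra in ${_{H}^{H}\mathcal{YD}}$.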
A direct computation rewriting the cocycle condition using (\ref{eq:omegapiece}) proves that, for every $n\in \N_0$ and $u\in C_{\left( n\right) }^{4}$,
\begin{eqnarray}
&&\sum\limits_{0\leq i+j\leq n}\left[ \omega _{i}\left( Q\otimes Q\otimes m\right) \ast \omega _{j}\left( m\otimes Q\otimes Q\right) \right] \left( u\right)  \label{form:goldenYD} \\
&=&\sum\limits_{0\leq a+b+c\leq n}\left[ \left( \varepsilon \otimes \omega _{a}\right) \ast \omega _{b}\left( Q\otimes m\otimes Q\right) \ast \left( \omega _{c}\otimes \varepsilon \right) \right] \left( u\right) .  \notag
\end{eqnarray}
The next aim is to check that $\left[ \overline{\omega }_{s}\right] \in \mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q,\Bbbk \right) $, i.e. that
\begin{equation*}
\overline{\omega }_{s}\left( m_{\overline{Q}}\otimes \overline{Q}\otimes \overline{Q}\right) +\overline{\omega }_{s}\left( \overline{Q}\otimes \overline{Q}\otimes m_{\overline{Q}}\right) =\left( \varepsilon _{\overline{Q}}\otimes \overline{\omega }_{s}\right) +\overline{\omega }_{s}\left( \overline{Q}\otimes m_{\overline{Q}}\otimes \overline{Q}\right) +\left( \overline{\omega }_{s}\otimes \varepsilon _{\overline{Q}}\right) .
\end{equation*}
This is achieved by evaluating the two sides of the equality above on $\overline{u}:=\overline{x}\otimes \overline{y}\otimes \overline{z}\otimes \overline{t}$, where $x,y,z,t$ are elements in the basis, and using (\ref{form:GelYD2}). If $\overline{u}$ has homogeneous degree greater than $s$, then both sides are zero. Otherwise, i.e.
if $\overline{u}$ has homogeneous degree at most $s$, one has $\overline{\omega }_{s}\left( m_{\overline{Q}}\otimes \overline{Q}\otimes \overline{Q}\right) \left( \overline{u}\right) =\omega _{s}\left( m_{Q}\otimes Q\otimes Q\right) \left( u\right) $ and similarly for the other pieces, so that one has to check that
\begin{equation*}
\omega _{s}\left( m\otimes Q\otimes Q\right) \left( u\right) +\omega _{s}\left( Q\otimes Q\otimes m\right) \left( u\right) =\left( \varepsilon \otimes \omega _{s}\right) \left( u\right) +\omega _{s}\left( Q\otimes m\otimes Q\right) \left( u\right) +\left( \omega _{s}\otimes \varepsilon \right) \left( u\right) .
\end{equation*}
This equality follows by using (\ref{form:goldenYD}) and the definition of $s$.
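Indeed, evaluate (\ref{form:goldenYD}) for $n=s$ on such a $u$: by minimality of $s$ we have $\omega _{i}=0$ for $0<i<s$, so only the summands with all indices in $\left\{ 0,s\right\} $ survive; the summands with all indices equal to $0$ coincide on both sides (both give $\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u\right) $), and what remains is
\begin{eqnarray*}
&&\left[ \omega _{0}\left( Q\otimes Q\otimes m\right) \ast \omega _{s}\left( m\otimes Q\otimes Q\right) +\omega _{s}\left( Q\otimes Q\otimes m\right) \ast \omega _{0}\left( m\otimes Q\otimes Q\right) \right] \left( u\right) \\
&=&\left[
\begin{array}{c}
\left( \varepsilon \otimes \omega _{0}\right) \ast \omega _{0}\left( Q\otimes m\otimes Q\right) \ast \left( \omega _{s}\otimes \varepsilon \right) +\left( \varepsilon \otimes \omega _{0}\right) \ast \omega _{s}\left( Q\otimes m\otimes Q\right) \ast \left( \omega _{0}\otimes \varepsilon \right) + \\
+\left( \varepsilon \otimes \omega _{s}\right) \ast \omega _{0}\left( Q\otimes m\otimes Q\right) \ast \left( \omega _{0}\otimes \varepsilon \right)
\end{array}
\right] \left( u\right) ,
\end{eqnarray*}
which reduces to the displayed equality since, by (\ref{form:Omega0YD}), convolving with $\omega _{0}=\varepsilon \otimes \varepsilon \otimes \varepsilon $ acts as the identity.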
By assumption $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q,\Bbbk \right) =0$, so that there exists a morphism $\overline{v}:\overline{Q}\otimes \overline{Q}\rightarrow \Bbbk $ in ${_{H}^{H}\mathcal{YD}}$ such that
\begin{equation*}
\overline{\omega }_{s}=\partial ^{2}\overline{v}=\overline{v}\otimes \varepsilon _{\overline{Q}}-\overline{v}\left( \overline{Q}\otimes m_{\overline{Q}}\right) +\overline{v}\left( m_{\overline{Q}}\otimes \overline{Q}\right) -\varepsilon _{\overline{Q}}\otimes \overline{v}.
\end{equation*}
Explicitly, on elements in the basis we get
\begin{equation*}
\overline{\omega }_{s}\left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) =\overline{v}\left( \overline{x}\otimes \overline{y}\right) \varepsilon _{\overline{Q}}\left( \overline{z}\right) -\overline{v}\left( \overline{x}\otimes \overline{y}\cdot \overline{z}\right) +\overline{v}\left( \overline{x}\cdot \overline{y}\otimes \overline{z}\right) -\varepsilon _{\overline{Q}}\left( \overline{x}\right) \overline{v}\left( \overline{y}\otimes \overline{z}\right) .
\end{equation*}
Define $\overline{\zeta }:\overline{Q}\otimes \overline{Q}\rightarrow \Bbbk $ on the basis by setting
\begin{equation*}
\overline{\zeta }\left( \overline{x}\otimes \overline{y}\right) :=\delta _{\left\vert x\right\vert +\left\vert y\right\vert ,s}\overline{v}\left( \overline{x}\otimes \overline{y}\right) .
\end{equation*}
As we have done for $\omega _{t}$, one can check that $\overline{\zeta }$ is a morphism in ${_{H}^{H}\mathcal{YD}}$.
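A convenient way to see this: writing $p_{s}:\overline{Q}\otimes \overline{Q}\rightarrow \overline{Q}\otimes \overline{Q}$ for the projection onto the homogeneous component of degree $s$, we have
\begin{equation*}
\overline{\zeta }=\overline{v}\circ p_{s},\qquad p_{s}\left( \overline{x}\otimes \overline{y}\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert ,s}\,\overline{x}\otimes \overline{y},
\end{equation*}
and $p_{s}$ is a morphism in ${_{H}^{H}\mathcal{YD}}$ because, by (\ref{eq:chi}), the $H$-action and the $H$-coaction preserve the degree of the basis elements.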
Moreover on elements in the basis we get
\begin{eqnarray*}
&&\left( \partial ^{2}\overline{\zeta }\right) \left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) \\
&=&\left( \overline{\zeta }\otimes \varepsilon _{\overline{Q}}\right) \left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) -\overline{\zeta }\left( \overline{Q}\otimes m_{\overline{Q}}\right) \left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) +\overline{\zeta }\left( m_{\overline{Q}}\otimes \overline{Q}\right) \left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) -\left( \varepsilon _{\overline{Q}}\otimes \overline{\zeta }\right) \left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) \\
&=&\overline{\zeta }\left( \overline{x}\otimes \overline{y}\right) \varepsilon _{\overline{Q}}\left( \overline{z}\right) -\overline{\zeta }\left( \overline{x}\otimes \overline{y}\cdot \overline{z}\right) +\overline{\zeta }\left( \overline{x}\cdot \overline{y}\otimes \overline{z}\right) -\varepsilon _{\overline{Q}}\left( \overline{x}\right) \overline{\zeta }\left( \overline{y}\otimes \overline{z}\right) .
\end{eqnarray*}
By using (\ref{form:GelYD2}), one gets
\begin{equation*}
\overline{\zeta }\left( \overline{x}\otimes \overline{y}\cdot \overline{z}\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\overline{v}\left( \overline{x}\otimes \overline{y}\cdot \overline{z}\right) \qquad \text{and}\qquad \overline{\zeta }\left( \overline{x}\cdot \overline{y}\otimes \overline{z}\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\overline{v}\left( \overline{x}\cdot \overline{y}\otimes \overline{z}\right) .
\end{equation*}
By means of these equalities one gets
\begin{eqnarray*}
\left( \partial ^{2}\overline{\zeta }\right) \left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) &=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \partial ^{2}\overline{v}\right) \left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\overline{\omega }_{s}\left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( x\otimes y\otimes z\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega \left( x\otimes y\otimes z\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega \left( x\otimes y\otimes z\right) =\omega _{s}\left( x\otimes y\otimes z\right) =\overline{\omega }_{s}\left( \overline{x}\otimes \overline{y}\otimes \overline{z}\right) .
\end{eqnarray*}
Therefore $\partial ^{2}\overline{\zeta }=\overline{\omega }_{s}$ as well. This means that, replacing $\overline{v}$ by $\overline{\zeta }$ if necessary, we can assume that $\overline{v}\left( \overline{x}\otimes \overline{y}\right) =0$ for $\left\vert x\right\vert +\left\vert y\right\vert \neq s.$ Equivalently
\begin{equation}
\overline{v}\left( \overline{x}\otimes \overline{y}\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert ,s}\overline{v}\left( \overline{x}\otimes \overline{y}\right) \text{ for }x,y\text{ in the basis.}  \label{form:vbarYD}
\end{equation}
Set
\begin{equation*}
v:=\overline{v}\circ \left( \varphi \otimes \varphi \right) \qquad \text{and}\qquad \gamma :=\left( \varepsilon \otimes \varepsilon \right) +v.
\end{equation*}
In particular, one gets
\begin{equation}
v\left( x\otimes y\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert ,s}v\left( x\otimes y\right) \text{ for }x,y\text{ in the basis.}  \label{form:vYD}
\end{equation}
Note also that both $v$ and $\gamma $ are morphisms in ${_{H}^{H}\mathcal{YD}}$, as they are obtained as compositions or sums of morphisms in this category. Let us check that $\gamma $ is a gauge transformation on $Q$ in ${_{H}^{H}\mathcal{YD}}$.
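That is, we have to verify that $\gamma $ is unital and convolution invertible:
\begin{equation*}
\gamma \left( x\otimes 1_{Q}\right) =\varepsilon \left( x\right) =\gamma \left( 1_{Q}\otimes x\right) \qquad \text{and}\qquad \gamma \ast \gamma ^{-1}=\varepsilon \otimes \varepsilon =\gamma ^{-1}\ast \gamma \text{ for some }\gamma ^{-1}:Q\otimes Q\rightarrow \Bbbk .
\end{equation*}
These are exactly the two properties checked next.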
Recall that $x^{0,0}=1_{Q}$ is in the basis. For $x$ in the basis, we have $\gamma \left( x\otimes 1_{Q}\right) =\varepsilon \left( x\right) +v\left( x\otimes 1_{Q}\right) .$ Note that
\begin{eqnarray*}
0 &=&\delta _{\left\vert x\right\vert ,s}\varepsilon \left( x\right) =\delta _{\left\vert x\right\vert +\left\vert 1_{Q}\right\vert +\left\vert 1_{Q}\right\vert ,s}\omega \left( x\otimes 1_{Q}\otimes 1_{Q}\right) \\
&=&\omega _{s}\left( x\otimes 1_{Q}\otimes 1_{Q}\right) =\overline{\omega }_{s}\left( \overline{x}\otimes \overline{1_{Q}}\otimes \overline{1_{Q}}\right) \\
&=&\overline{v}\left( \overline{x}\otimes \overline{1_{Q}}\right) \varepsilon _{\overline{Q}}\left( \overline{1_{Q}}\right) -\overline{v}\left( \overline{x}\otimes \overline{1_{Q}}\cdot \overline{1_{Q}}\right) +\overline{v}\left( \overline{x}\cdot \overline{1_{Q}}\otimes \overline{1_{Q}}\right) -\varepsilon _{\overline{Q}}\left( \overline{x}\right) \overline{v}\left( \overline{1_{Q}}\otimes \overline{1_{Q}}\right) \\
&\overset{(\ref{form:vbarYD})}{=}&\overline{v}\left( \overline{x}\otimes \overline{1_{Q}}\right) -\overline{v}\left( \overline{x}\otimes \overline{1_{Q}}\right) +\overline{v}\left( \overline{x}\otimes \overline{1_{Q}}\right) -\varepsilon _{\overline{Q}}\left( \overline{x}\right) \delta _{\left\vert 1_{Q}\right\vert +\left\vert 1_{Q}\right\vert ,s}\overline{v}\left( \overline{1_{Q}}\otimes \overline{1_{Q}}\right) \\
&=&v\left( x\otimes 1_{Q}\right)
\end{eqnarray*}
so that $v\left( x\otimes 1_{Q}\right) =0$ and hence $\gamma \left( x\otimes 1_{Q}\right) =\varepsilon \left( x\right) +v\left( x\otimes 1_{Q}\right) =\varepsilon \left( x\right) .$ Similarly one proves $\gamma \left( 1_{Q}\otimes x\right) =\varepsilon \left( x\right) .$ Hence $\gamma $ is unital. Note that the coalgebra $C=Q\otimes Q$ is connected as $Q$ is. Thus, in order to prove that $\gamma :Q\otimes Q\rightarrow \Bbbk $ is convolution invertible, it suffices to check (see \cite[Lemma 5.2.10]{Mo}) that $\gamma _{\mid \Bbbk 1_{Q}\otimes \Bbbk 1_{Q}}$ is convolution invertible. But for $k,k^{\prime }\in \Bbbk $ we have
\begin{equation*}
\gamma \left( k1_{Q}\otimes k^{\prime }1_{Q}\right) =kk^{\prime }\gamma \left( 1_{Q}\otimes 1_{Q}\right) =kk^{\prime }\varepsilon \left( 1_{Q}\right) =kk^{\prime }=\left( \varepsilon \otimes \varepsilon \right) \left( k1_{Q}\otimes k^{\prime }1_{Q}\right) .
\end{equation*}
Hence $\gamma _{\mid \Bbbk 1_{Q}\otimes \Bbbk 1_{Q}}=\left( \varepsilon \otimes \varepsilon \right) _{\mid \Bbbk 1_{Q}\otimes \Bbbk 1_{Q}}$, which is convolution invertible. Thus there is a $\Bbbk $-linear map $\gamma ^{-1}:Q\otimes Q\rightarrow \Bbbk $ such that
\begin{equation*}
\gamma \ast \gamma ^{-1}=\varepsilon \otimes \varepsilon =\gamma ^{-1}\ast \gamma .
\end{equation*}
Note that, by Lemma \ref{lem:InvYD}, $\gamma \in {_{H}^{H}\mathcal{YD}}$ implies $\gamma ^{-1}\in {_{H}^{H}\mathcal{YD}}$. Therefore $\gamma $ is a gauge transformation in ${_{H}^{H}\mathcal{YD}}$. By Proposition \ref{pro:deformYD}, $Q^{\gamma }$ is a coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$. By Proposition \ref{pro:grgaugeYD}, $\mathrm{gr}Q^{\gamma }$ and $\mathrm{gr}Q$ coincide as bialgebras in ${_{H}^{H}\mathcal{YD}}$. Hence $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q^{\gamma },\Bbbk \right) =\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q,\Bbbk \right) =0.$ Therefore $Q^{\gamma }$ fulfills the same requirements as $Q$ in the statement.
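For the record, the inverse also admits an explicit description, a standard fact for connected coalgebras: since $v$ vanishes on the coradical $\Bbbk 1_{Q}\otimes \Bbbk 1_{Q}$ (indeed $v\left( 1_{Q}\otimes 1_{Q}\right) =0$ by (\ref{form:vYD}), as $s\geq 1$), the geometric series
\begin{equation*}
\gamma ^{-1}=\sum_{n\geq 0}\left( \left( \varepsilon \otimes \varepsilon \right) -\gamma \right) ^{\ast n}=\sum_{n\geq 0}\left( -1\right) ^{n}v^{\ast n}
\end{equation*}
is locally finite and provides the convolution inverse. Moreover, by (\ref{form:vYD}), $v^{\ast n}$ vanishes on tensors of total degree at most $2s-1$ for every $n\geq 2$; this recovers formula (\ref{form:gamma-1}) below.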
Let us check that $\left( \omega ^{\gamma }\right) _{t}=0$ for $1\leq t\leq s$ (this will complete the proof by an induction process, as $Q$ is finite-dimensional). Note that the definition of $\gamma $ and (\ref{form:vYD}) imply
\begin{equation}
\gamma \left( x\otimes y\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert ,0}\gamma \left( x\otimes y\right) +\delta _{\left\vert x\right\vert +\left\vert y\right\vert ,s}\gamma \left( x\otimes y\right) \text{ for }x,y\text{ in the basis.}  \label{form:gamma1YD}
\end{equation}
Let $C^{2}=Q\otimes Q$ and let $C_{\left( n\right) }^{2}=\sum_{i+j\leq n}Q_{i}\otimes Q_{j}$. For $u\in C_{\left( 2s-1\right) }^{2}$ we have
\begin{equation*}
\left[ \gamma \ast \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \right] \left( u\right) =\left( \varepsilon \otimes \varepsilon \right) \left( u\right) -v\left( u\right) +v\left( u\right) -v\left( u_{1}\right) v\left( u_{2}\right) \overset{(\ref{form:vYD})}{=}\left( \varepsilon \otimes \varepsilon \right) \left( u\right) .
\end{equation*}
Therefore $\left[ \gamma \ast \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \right] _{|C_{\left( 2s-1\right) }^{2}}=\left( \varepsilon \otimes \varepsilon \right) _{|C_{\left( 2s-1\right) }^{2}}.$ By uniqueness of the convolution inverse, we deduce
\begin{equation}
\gamma ^{-1}\left( u\right) =\left( \varepsilon \otimes \varepsilon \right) \left( u\right) -v\left( u\right) ,\text{ for }u\in C_{\left( 2s-1\right) }^{2}.
\label{form:gamma-1}
\end{equation}
Let $x,y,z$ be in the basis. Set $\overline{u}:=\overline{x}\otimes \overline{y}\otimes \overline{z}$ and $u:=x\otimes y\otimes z.$ We compute
\begin{eqnarray*}
\left( \omega ^{\gamma }\right) _{s}\left( u\right) &=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega ^{\gamma }\left( u\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \omega \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \left( u\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \left( \omega _{0}+\omega _{s}\right) \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \left( u\right) \\
&\overset{(\ref{form:Omega0YD})}{=}&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left[
\begin{array}{c}
\left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) + \\
\left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \omega _{s}\ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right)
\end{array}
\right] \left( u\right) \\
&=&\left[
\begin{array}{c}
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes \gamma \right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes \gamma \right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \omega _{s}\left( u_{3}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{4}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{5}\right)
\end{array}
\right] .
\end{eqnarray*}
Now, all the terms appearing in the last two lines, except $\omega _{s}$, vanish outside degrees $0$ and $s$, and coincide with $\varepsilon \otimes \varepsilon \otimes \varepsilon $ in degree $0$. On the other hand, $\omega _{s}$ vanishes outside degree $s$. Since $\gamma :=\left( \varepsilon \otimes \varepsilon \right) +v$ and in view of (\ref{form:gamma-1}), the factor $\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}$ forces the following simplification
\begin{equation*}
\left( \omega ^{\gamma }\right) _{s}\left( u\right) =\left[
\begin{array}{c}
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left[ \left( \varepsilon \otimes v\right) \left( u\right) +v\left( Q\otimes m\right) \left( u\right) -v\left( m\otimes Q\right) \left( u\right) -\left( v\otimes \varepsilon \right) \left( u\right) \right] + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u\right)
\end{array}
\right] .
\end{equation*}
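Note that the first bracketed summand is exactly $-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \partial ^{2}v\right) \left( u\right) $, where $\partial ^{2}v=v\otimes \varepsilon -v\left( Q\otimes m\right) +v\left( m\otimes Q\right) -\varepsilon \otimes v$. Arguing as in the computation of $\partial ^{2}\overline{\zeta }$ above (the factor $\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}$ makes the multiplications of $Q$ and of $\mathrm{gr}Q$ interchangeable here), the equality $\partial ^{2}\overline{v}=\overline{\omega }_{s}$ shows that this summand equals $-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u\right) $; this is the cancellation behind $\left( \omega ^{\gamma }\right) _{s}\left( u\right) =0$.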
u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \omega _{s}\left( u_{3}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{4}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{5}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \omega _{s}\left( u_{3}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{4}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{5}\right \end{array \right] \\ &\overset{\left\vert u_{1}\right\vert +\left\vert u_{3}\right\vert \leq s}{= &\left[ \begin{array}{c} \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \omega _{s}\left( u_{3}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{4}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{5}\right) + \end{array \right] \\ &=&\left[ \begin{array}{c} \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\gamma \left( Q\otimes m\right) \left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\gamma \left( Q\otimes m\right) \left( u_{1}\right) \cdot \omega _{s}\left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) \end{array \right] \\ &=&\left[ \begin{array}{c} \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \left( \varepsilon \otimes \varepsilon \right) +v\right) \left( Q\otimes m\right) \left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma 
^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \left( \left( \varepsilon \otimes \varepsilon \right) +v\right) \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \left( \varepsilon \otimes \varepsilon \right) +v\right) \left( Q\otimes m\right) \left( u_{1}\right) \cdot \omega _{s}\left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) \end{array \right] \\ &=&\left[ \begin{array}{c} \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes \varepsilon \right) \left( Q\otimes m\right) \left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \left( \varepsilon \otimes \varepsilon \right) \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot v\left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes \varepsilon \right) \left( Q\otimes m\right) \left( u_{1}\right) \cdot \omega _{s}\left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot \omega _{s}\left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right \end{array \right] \\ &\overset{\left\vert u_{1}\right\vert +\left\vert u_{2}\right\vert \leq s}{= &\left[ \begin{array}{c} \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\gamma ^{-1}\left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( 
\varepsilon \otimes v\right) \left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) +0+ \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \end{array \right] \\ &\overset{(\ref{form:gamma-1})}{=}&\left[ \begin{array}{c} \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u_{1}\right) \cdot \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right \end{array \right] \\ &=&\left[ \begin{array}{c} \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes \varepsilon \right) \left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) + \\ -\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot \left( \varepsilon \otimes \varepsilon \right) \left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ -\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot v\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \left( \varepsilon \otimes \varepsilon \right) \left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ -\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot v\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) + \\ \delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( 
u_{1}\right) \cdot \left( \varepsilon \otimes \varepsilon \right) \left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) \\
-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u_{1}\right) \cdot v\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right)
\end{array}\right] \\
&\overset{\left\vert u_{1}\right\vert +\left\vert u_{2}\right\vert \leq s}{=}&\left[
\begin{array}{c}
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \gamma ^{-1}\otimes \varepsilon \right) \left( u\right) + \\
-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) -0+ \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) -0+ \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) -0
\end{array}\right] \\
&\overset{(\ref{form:gamma-1})}{=}&\left[
\begin{array}{c}
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \otimes \varepsilon \right) \left( u\right) + \\
-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \otimes \varepsilon \right) \left( u_{2}\right) + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot \left( \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \otimes \varepsilon \right) \left( u_{2}\right) + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \left( \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \otimes \varepsilon \right) \left( u_{2}\right) + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u_{1}\right) \cdot \left( \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \otimes \varepsilon \right) \left( u_{2}\right)
\end{array}\right] \\
&=&\left[
\begin{array}{c}
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u\right) -\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( v\otimes \varepsilon \right) \left( u\right) \\
-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u_{2}\right) + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left(
m\otimes Q\right) \left( u_{1}\right) \cdot \left( v\otimes \varepsilon \right) \left( u_{2}\right) + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot \left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u_{2}\right) + \\
-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u_{1}\right) \cdot \left( v\otimes \varepsilon \right) \left( u_{2}\right) + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u_{2}\right) + \\
-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u_{1}\right) \cdot \left( v\otimes \varepsilon \right) \left( u_{2}\right) + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u_{1}\right) \cdot \left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u_{2}\right) + \\
-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u_{1}\right) \cdot \left( v\otimes \varepsilon \right) \left( u_{2}\right)
\end{array}\right] \\
&\overset{\left\vert u_{1}\right\vert +\left\vert u_{2}\right\vert \leq s}{=}&\left[
\begin{array}{c}
0-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( v\otimes \varepsilon \right) \left( u\right) \\
-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( m\otimes Q\right) \left( u\right) +0+ \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u\right) -0+ \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left( \varepsilon \otimes v\right) \left( u\right) -0+ \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u\right) -0
\end{array}\right] \\
&=&\left[
\begin{array}{c}
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left[ \left( \varepsilon \otimes v\right) \left( u\right) +v\left( Q\otimes m\right) \left( u\right) -v\left( m\otimes Q\right) \left( u\right) -\left( v\otimes \varepsilon \right) \left( u\right) \right] + \\
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\omega _{s}\left( u\right)
\end{array}\right] .
\end{eqnarray*}
\end{invisible}
Now $\omega _{s}\left( u\right) =\overline{\omega }_{s}\left( \overline{u}\right) $ while one proves that $\left( \varepsilon \otimes v\right) \left( u\right) =\left( \varepsilon _{\overline{Q}}\otimes \overline{v}\right) \left( \overline{u}\right) ,$ $\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( m\otimes Q\right) \left( u\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\overline{v}\left( m_{\overline{Q}}\otimes \overline{Q}\right) \left( \overline{u}\right) $ and similarly for the other pieces of the equality.
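For instance, writing $u=x\otimes y\otimes z$ with homogeneous tensor factors, the first of these identities follows from $v\left( y\otimes z\right) =\overline{v}\left( \overline{y}\otimes \overline{z}\right) $ together with the fact that $\varepsilon $ vanishes in positive degrees, so that $\varepsilon \left( x\right) =\delta _{\left\vert x\right\vert ,0}\varepsilon \left( x\right) =\varepsilon _{\overline{Q}}\left( \overline{x}\right) $:
\begin{equation*}
\left( \varepsilon \otimes v\right) \left( u\right) =\varepsilon \left( x\right) \overline{v}\left( \overline{y}\otimes \overline{z}\right) =\varepsilon _{\overline{Q}}\left( \overline{x}\right) \overline{v}\left( \overline{y}\otimes \overline{z}\right) =\left( \varepsilon _{\overline{Q}}\otimes \overline{v}\right) \left( \overline{u}\right) .
\end{equation*}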
\begin{invisible}
Here is the full computation:
\begin{eqnarray*}
\left( \varepsilon \otimes v\right) \left( u\right) &=&\left( \varepsilon \otimes v\right) \left( x\otimes y\otimes z\right) =\varepsilon \left( x\right) v\left( y\otimes z\right) =\varepsilon \left( x\right) \overline{v}\left( \overline{y}\otimes \overline{z}\right) \\
&=&\delta _{\left\vert x\right\vert ,0}\varepsilon \left( x\right) \overline{v}\left( \overline{y}\otimes \overline{z}\right) =\varepsilon _{\overline{Q}}\left( \overline{x}\right) \overline{v}\left( \overline{y}\otimes \overline{z}\right) =\left( \varepsilon _{\overline{Q}}\otimes \overline{v}\right) \left( \overline{u}\right) ,
\end{eqnarray*}
and taking $x=x^{a,l}$ and $y=x^{a^{\prime },l^{\prime }}$:
\begin{eqnarray*}
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( m\otimes Q\right) \left( u\right) &=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( x\cdot y\otimes z\right) \overset{(\ref{form:GelYD1})}{=}\delta _{a+a^{\prime }+\left\vert z\right\vert ,s}v\left( \sum_{w\leq a+a^{\prime }}\sum_{t}\mu _{w,t}^{a,l,a^{\prime },l^{\prime }}x^{w,t}\otimes z\right) \\
&=&\delta _{a+a^{\prime }+\left\vert z\right\vert ,s}\sum_{w\leq a+a^{\prime }}\sum_{t}\mu _{w,t}^{a,l,a^{\prime },l^{\prime }}v\left( x^{w,t}\otimes z\right) \\
&=&\delta _{a+a^{\prime }+\left\vert z\right\vert ,s}\sum_{w\leq a+a^{\prime }}\sum_{t}\mu _{w,t}^{a,l,a^{\prime },l^{\prime }}\overline{v}\left( \overline{x^{w,t}}\otimes \overline{z}\right) \\
&&\overset{(\ref{form:vbarYD})}{=}\delta _{a+a^{\prime }+\left\vert z\right\vert ,s}\sum_{w\leq a+a^{\prime }}\sum_{t}\mu _{w,t}^{a,l,a^{\prime },l^{\prime }}\delta _{w+\left\vert z\right\vert ,s}\overline{v}\left( \overline{x^{w,t}}\otimes \overline{z}\right) \\
&=&\delta _{a+a^{\prime }+\left\vert z\right\vert ,s}\sum_{t}\mu _{a+a^{\prime },t}^{a,l,a^{\prime },l^{\prime }}\overline{v}\left( \overline{x^{a+a^{\prime },t}}\otimes \overline{z}\right) \\
&&\overset{(\ref{form:GelYD2})}{=}\delta _{a+a^{\prime }+\left\vert z\right\vert ,s}\overline{v}\left( \overline{x^{a,l}}\cdot \overline{x^{a^{\prime },l^{\prime }}}\otimes \overline{z}\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\overline{v}\left( \overline{x}\cdot \overline{y}\otimes \overline{z}\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\overline{v}\left( m_{\overline{Q}}\otimes \overline{Q}\right) \left( \overline{u}\right) .
\end{eqnarray*}
Similarly one gets $\left( v\otimes \varepsilon \right) \left( u\right) =\left( \overline{v}\otimes \varepsilon _{\overline{Q}}\right) \left( \overline{u}\right) $ and $\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}v\left( Q\otimes m\right) \left( u\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\overline{v}\left( \overline{Q}\otimes m_{\overline{Q}}\right) \left( \overline{u}\right) .$
\end{invisible}
Thus one gets
\begin{eqnarray*}
\left( \omega ^{\gamma }\right) _{s}\left( u\right) &=&\left[
\begin{array}{c}
\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\left[ \left( \varepsilon _{\overline{Q}}\otimes \overline{v}\right) \left( \overline{u}\right) +\overline{v}\left( \overline{Q}\otimes m_{\overline{Q}}\right) \left( \overline{u}\right) -\overline{v}\left( m_{\overline{Q}}\otimes \overline{Q}\right) \left( \overline{u}\right) -\left(
\overline{v}\otimes \varepsilon _{\overline{Q}}\right) \left( \overline{u}\right) \right] + \\
+\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\overline{\omega }_{s}\left( \overline{u}\right)
\end{array}\right] \\
&=&-\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\partial ^{2}\overline{v}+\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,s}\overline{\omega }_{s}\left( \overline{u}\right) =0.
\end{eqnarray*}
For $0\leq t\leq s-1$, analogously to the above, we compute
\begin{eqnarray*}
\left( \omega ^{\gamma }\right) _{t}\left( u\right) &=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\omega ^{\gamma }\left( u\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \omega \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \left( u\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \omega _{0}\ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \left( u\right) \\
&\overset{(\ref{form:Omega0YD})}{=}&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \left( u\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u\right) =\delta _{0,t}\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u\right) .
\end{eqnarray*}
\begin{invisible}
Here are the details:
\begin{eqnarray*}
&&\left( \omega ^{\gamma }\right) _{t}\left( u\right) =\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\omega ^{\gamma }\left( u\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \omega \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \left( u\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \left( \sum\limits_{t^{\prime }\in \N_0}\omega _{t^{\prime }}\right) \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \left( u\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \omega _{0}\ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \left( u\right) \\
&\overset{(\ref{form:Omega0YD})}{=}&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left[ \left( \varepsilon \otimes \gamma \right) \ast \gamma \left( Q\otimes m\right) \ast \gamma ^{-1}\left( m\otimes Q\right) \ast \left( \gamma ^{-1}\otimes \varepsilon \right) \right] \left( u\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \varepsilon \otimes \gamma \right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \left( \varepsilon \otimes \varepsilon \right) +v\right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) \\
&\overset{\left\vert u_{1}\right\vert <s}{=}&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \varepsilon \otimes \varepsilon \right) \left( u_{1}\right) \cdot \gamma \left( Q\otimes m\right) \left( u_{2}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{3}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{4}\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\gamma \left( Q\otimes m\right) \left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \left( \varepsilon \otimes \varepsilon \right) +v\right) \left( Q\otimes m\right) \left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) \\
&\overset{\left\vert u_{1}\right\vert <s}{=}&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \varepsilon \otimes \varepsilon \right) \left( Q\otimes m\right) \left( u_{1}\right) \cdot \gamma ^{-1}\left( m\otimes
Q\right) \left( u_{2}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{3}\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\gamma ^{-1}\left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) \\
&\overset{(\ref{form:gamma-1})}{=}&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) \\
&\overset{\left\vert u_{1}\right\vert <s}{=}&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \varepsilon \otimes \varepsilon \right) \left( m\otimes Q\right) \left( u_{1}\right) \cdot \left( \gamma ^{-1}\otimes \varepsilon \right) \left( u_{2}\right) \\
&=&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \gamma ^{-1}\otimes \varepsilon \right) \left( u\right) \\
&\overset{(\ref{form:gamma-1})}{=}&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \left( \left( \varepsilon \otimes \varepsilon \right) -v\right) \otimes \varepsilon \right) \left( u\right) \\
&\overset{\left\vert u_{1}\right\vert <s}{=}&\delta _{\left\vert x\right\vert +\left\vert y\right\vert +\left\vert z\right\vert ,t}\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u\right) \\
&=&\delta _{0,t}\left( \varepsilon \otimes \varepsilon \otimes \varepsilon \right) \left( u\right) =\delta _{0,t}\left( \varepsilon _{\overline{Q}}\otimes \varepsilon _{\overline{Q}}\otimes \varepsilon _{\overline{Q}}\right) \left( \overline{u}\right) .
\end{eqnarray*}
\end{invisible}
Therefore we can now repeat the argument on $\omega ^{\gamma }$ instead of $\omega .$ Deforming several times, we eventually obtain a reassociator, say $\omega ^{\prime },$ whose first nontrivial component $\omega _{t}^{\prime }$, with $t\neq 0$, would have to occur in a degree exceeding the dimension of $Q.$ In other words, $\omega ^{\prime }=\omega _{0}^{\prime },$ which is trivial. Hence $Q$ is gauge equivalent to a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$.
\end{proof}

\section{Invariants}\label{sec:4}

Given a $\Bbbk $-algebra $A,$ we denote by $\mathrm{H}^{n}\left( A,-\right) $ the $n$-th right derived functor of $\mathrm{Hom}_{A,A}\left( A,-\right) $ in the category of $A$-bimodules. In other words, for every $A$-bimodule $M$, $\mathrm{H}^{n}\left( A,M\right) $ is the Hochschild cohomology group of $A$ with coefficients in $M$. Denote by $\mathrm{Z}^{n}\left( A,M\right) $ and $\mathrm{B}^{n}\left( A,M\right) $ the abelian groups of $n$-cocycles and of $n$-coboundaries, respectively.

Let $H$ be a Hopf algebra, let $B$ be a left $H$-module algebra and let $M$ be a $B\#H$-bimodule, where $B\#H$ denotes the smash product algebra, see e.g. \cite[Definition 4.1.3]{Mo}. Then $\mathrm{H}^{n}\left( B,M\right) $ becomes an $H$-bimodule as follows.
Its structure of left $H$-module is given via $\varepsilon _{H}$ and its structure of right $H$-module is defined, for every $f\in \mathrm{Z}^{n}\left( B,M\right) $ and $h\in H,$ by setting
\begin{equation*}
\left[ f\right] h:=\left[ \chi _{n}^{h}\left( M\right) \left( f\right) \right]
\end{equation*}
where, for every $k\in \Bbbk ,b_{1},\ldots ,b_{n}\in B,$ we set
\begin{eqnarray*}
\chi _{0}^{h}\left( M\right) \left( f\right) \left( k\right) := &&\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( k\right) \left( 1_{B}\#h_{2}\right) \text{ for }n=0,\text{ while for }n\geq 1 \\
\chi _{n}^{h}\left( M\right) \left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n}\right) := &&\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}b_{n}\right) \left( 1_{B}\#h_{n+2}\right) .
\end{eqnarray*}
Moreover
\begin{equation}
\partial ^{n}\circ \chi _{n}^{h}\left( M\right) =\chi _{n+1}^{h}\left( M\right) \circ \partial ^{n},\text{ for every }n\geq -1, \label{form:PartialChi}
\end{equation}
where $\partial ^{n}:\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},M\right) \rightarrow \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \left( n+1\right) },M\right) $ denotes the differential of the usual Hochschild cohomology. Denote by $\mathrm{H}^{n}\left( B,M\right) ^{H}$ the space of $H$-invariant elements of $\mathrm{H}^{n}\left( B,M\right) $.

\begin{proposition}
\label{pro:Dragos}Let $H$ be a semisimple Hopf algebra and let $B$ be a left $H$-module algebra. Denote by $A:=B\#H$. Then, for each $n\in \N_0$ and for every $A$-bimodule $M$,
\begin{equation*}
\mathrm{H}^{n}\left( B\#H,M\right) \cong \mathrm{H}^{n}\left( B,M\right) ^{H}.
\end{equation*}
\end{proposition}

\begin{proof}
We will apply \cite[Equation (3.6.1)]{St}. To this aim we have to prove first that $A/B$ is an $H$-Galois extension such that $A$ is flat as a left and right $B$-module. Now, $A=B\#_{\xi }H$ for $\xi :H\otimes H\rightarrow B$ defined by $\xi \left( x,y\right) =\varepsilon _{H}\left( x\right) \varepsilon _{H}\left( y\right) 1_{A},$ cf. \cite[Definition 7.1.1]{Mo}. Moreover, a direct computation shows that $\iota :B\rightarrow A:b\mapsto b\#1_{H}$ is a right $H$-extension, where $A$ is regarded as a right $H$-comodule via $\rho :A\rightarrow A\otimes H:b\#h\mapsto \left( b\#h_{1}\right) \otimes h_{2}.$ Thus, by \cite[Proposition 7.2.7]{Mo}, we know that $\iota :B\rightarrow A$ is $H$-cleft and hence, by \cite[Theorem 8.2.4]{Mo}, it is $H$-Galois. The $B$-bimodule structure of $A$ is induced by $\iota $ so that, explicitly, we have
\begin{eqnarray*}
b^{\prime }\left( b\#h\right) &=&\left( b^{\prime }\#1_{H}\right) \left( b\#h\right) =b^{\prime }b\#h, \\
\left( b\#h\right) b^{\prime } &=&\left( b\#h\right) \left( b^{\prime }\#1_{H}\right) =b\left( h_{1}b^{\prime }\right) \#h_{2}.
\end{eqnarray*}
Note that $A=B\#H$ is flat as a left $B$-module as $H$ is a free $\Bbbk $-module ($\Bbbk $ is a field). Now consider the map $\alpha :H\otimes B\rightarrow A$ defined by setting $\alpha \left( h\otimes b\right) :=h_{1}b\#h_{2}$ (note that it is defined as the braiding in ${_{H}^{H}\mathcal{YD}}$).
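Indeed, the braiding of ${_{H}^{H}\mathcal{YD}}$ is given by $c_{V,W}\left( v\otimes w\right) =v_{-1}w\otimes v_{0},$ so that, regarding $H$ as coacting on itself via $\Delta _{H},$ one finds $c_{H,B}\left( h\otimes b\right) =h_{1}b\otimes h_{2},$ which is $\alpha \left( h\otimes b\right) $ under the identification $B\otimes H\cong B\#H$ as $\Bbbk $-modules.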
We have
\begin{equation*}
\alpha \left( h\otimes bb^{\prime }\right) =h_{1}\left( bb^{\prime }\right) \#h_{2}=\left( h_{1}b\right) \left( h_{2}b^{\prime }\right) \#h_{3}=\left( h_{1}b\#h_{2}\right) b^{\prime }=\alpha \left( h\otimes b\right) b^{\prime }
\end{equation*}
so that $\alpha $ is right $B$-linear, where $H\otimes B$ is regarded as a right $B$-module via $\left( h\otimes b\right) b^{\prime }:=h\otimes bb^{\prime }.$ Now $H$ is semisimple and hence separable (see \cite[Corollary 3.7]{St}). Thus $H$ is finite-dimensional and hence it has bijective antipode $S_{H}$. Thus $\alpha $ is invertible with inverse given by $\alpha ^{-1}\left( b\#h\right) :=h_{2}\otimes S_{H}^{-1}\left( h_{1}\right) b.$ Therefore $\alpha $ is an isomorphism of right $B$-modules and hence $A$ is flat as a right $B$-module as $H\otimes B$ is. We now have the hypotheses necessary to apply \cite[Equation (3.6.1)]{St} and obtain
\begin{equation*}
\mathrm{H}^{n}\left( A,M\right) \cong \mathrm{Hom}_{-,H}\left( \Bbbk ,\mathrm{H}^{n}\left( B,M\right) \right) =\mathrm{Hom}_{\Bbbk }\left( \Bbbk ,\mathrm{H}^{n}\left( B,M\right) \right) ^{H}\cong \mathrm{H}^{n}\left( B,M\right) ^{H}.
\end{equation*}
\begin{invisible}
Let us check for ourselves that the structures of $\mathrm{H}^{n}\left( B,M\right) $ are the ones claimed. First note that $\mathrm{H}^{n}\left( B,M\right) $, through the isomorphism, is regarded as a left $H$-module via $\varepsilon _{H}$, i.e.
\begin{equation*}
\mathrm{H}^{n}\left( B,M\right) ^{H}=\left\{ z\in \mathrm{H}^{n}\left( B,M\right) \mid \varepsilon _{H}\left( h\right) z=zh,\text{ for every }h\in H\right\} .
\end{equation*}
We would like to express explicitly the right $H$-module structure of $\mathrm{H}^{n}\left( B,M\right) $. It is the one of \cite[Proposition 2.4]{St}. First, following the results recalled above, the map making $A$ cleft is $\gamma :H\rightarrow A$ given by $\gamma \left( h\right) =1_{B}\#h$ whose convolution inverse is defined by $\gamma ^{-1}\left( h\right) =\gamma S\left( h\right) =1_{B}\#S\left( h\right) .$ Note that $\gamma $ is an algebra map as
\begin{eqnarray*}
\gamma \left( h\right) \gamma \left( l\right) &=&\left( 1_{B}\#h\right) \left( 1_{B}\#l\right) =\left( 1_{B}\left( h_{1}1_{B}\right) \right) \#\left( h_{2}l\right) =\left( 1_{B}\left( \varepsilon _{H}\left( h_{1}\right) 1_{B}\right) \right) \#\left( h_{2}l\right) =\gamma \left( hl\right) , \\
\gamma \left( 1_{H}\right) &=&1_{B}\#1_{H}.
\end{eqnarray*}
Now we have $\gamma ^{-1}\left( h\right) =\gamma S\left( h\right) $ so that
\begin{eqnarray*}
\gamma \left( h_{1}\right) \gamma ^{-1}\left( h_{2}\right) &=&\gamma \left( h_{1}\right) \gamma \left( S\left( h_{2}\right) \right) =\gamma \left( h_{1}S\left( h_{2}\right) \right) =\varepsilon _{H}\left( h\right) \gamma \left( 1_{H}\right) =\varepsilon _{H}\left( h\right) 1_{B}\#1_{H}, \\
\gamma ^{-1}\left( h_{1}\right) \gamma \left( h_{2}\right) &=&\gamma \left( S\left( h_{1}\right) \right) \gamma \left( h_{2}\right) =\gamma \left( S\left( h_{1}\right) h_{2}\right) =\varepsilon _{H}\left( h\right) 1_{B}\#1_{H}.
\end{eqnarray*}
Using $\gamma $ one has that the composition inverse of the canonical map
\begin{equation*}
\beta :A\otimes _{B}A\rightarrow A\otimes H:\left( b\#h\right) \otimes \left( b^{\prime }\#h^{\prime }\right) \mapsto \left( b\#h\right) \rho \left( b^{\prime }\#h^{\prime }\right) =\left( b\left( h_{1}b^{\prime }\right) \#h_{2}h_{1}^{\prime }\right) \otimes h_{2}^{\prime }
\end{equation*}
is given by
\begin{eqnarray*}
\beta ^{-1}\left( \left( b\#h\right) \otimes l\right) &=&\left( b\#h\right) \gamma ^{-1}\left( l_{1}\right) \otimes _{B}\gamma \left( l_{2}\right) \\
&=&\left( b\#h\right) \left( 1_{B}\#S\left( l_{1}\right) \right) \otimes _{B}\left( 1_{B}\#l_{2}\right) \\
&=&\left( b\left( h_{1}1_{B}\right) \right) \#\left( h_{2}S\left( l_{1}\right) \right) \otimes _{B}\left( 1_{B}\#l_{2}\right) \\
&=&\left( b\#hS\left( l_{1}\right) \right) \otimes _{B}\left( 1_{B}\#l_{2}\right) .
\end{eqnarray*}
Hence
\begin{equation*}
r_{i}\left( h\right) \otimes _{B}l_{i}\left( h\right) =\beta ^{-1}\left( \left( 1_{B}\#1_{H}\right) \otimes h\right) =\left( 1_{B}\#S\left( h_{1}\right) \right) \otimes _{B}\left( 1_{B}\#h_{2}\right) .
\end{equation*}
Now $\mathrm{H}^{0}\left( B,M\right) =M^{B}$ and the right $H$-module structure of the latter is given, for every $h\in H,m\in M^{B},$ by
\begin{equation*}
mh=r_{i}\left( h\right) ml_{i}\left( h\right) =\left( 1_{B}\#S\left( h_{1}\right) \right) m\left( 1_{B}\#h_{2}\right) .
\end{equation*}
Consider a short exact sequence of $A$-bimodules
\begin{equation*}
0\rightarrow M\overset{s}{\rightarrow }I\overset{p}{\rightarrow }C\rightarrow 0
\end{equation*}
with $I$ injective as an $A$-bimodule (then $\mathrm{H}^{n}\left( B,I\right) =0\,$ for every $n\geq 1$ by the proof of \cite[Lemma 2.2]{St}). This short sequence induces a short exact sequence of chain complexes
\begin{equation*}
\begin{array}{ccccccccc}
0 & \rightarrow & M^{B} & \overset{s^{B}}{\rightarrow } & I^{B} & \overset{p^{B}}{\rightarrow } & C^{B} & & \\
& & \partial ^{-1}\downarrow & & \partial ^{-1}\downarrow & & \partial ^{-1}\downarrow & & \\
0 & \rightarrow & \mathrm{Hom}_{\Bbbk }\left( \Bbbk ,M\right) & \overset{\varphi ^{0}=\mathrm{Hom}_{\Bbbk }\left( \Bbbk ,s\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( \Bbbk ,I\right) & \overset{\psi ^{0}=\mathrm{Hom}_{\Bbbk }\left( \Bbbk ,p\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( \Bbbk ,C\right) & \rightarrow & 0 \\
& & \partial ^{0}\downarrow & & \partial ^{0}\downarrow & & \partial ^{0}\downarrow & & \\
0 & \rightarrow & \mathrm{Hom}_{\Bbbk }\left( B,M\right) & \overset{\varphi ^{1}=\mathrm{Hom}_{\Bbbk }\left( B,s\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B,I\right) & \overset{\psi ^{1}=\mathrm{Hom}_{\Bbbk }\left( B,p\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B,C\right) & \rightarrow & 0 \\
& & \partial ^{1}\downarrow & & \partial ^{1}\downarrow & & \partial ^{1}\downarrow & & \\
0 & \rightarrow & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes 2},M\right) & \overset{\varphi ^{2}=\mathrm{Hom}_{\Bbbk }\left( B^{\otimes 2},s\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes 2},I\right) & \overset{\psi ^{2}=\mathrm{Hom}_{\Bbbk }\left( B^{\otimes 2},p\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes 2},C\right) & \rightarrow & 0 \\
& & \partial ^{2}\downarrow & & \partial ^{2}\downarrow & & \partial ^{2}\downarrow & & \\
0 & \rightarrow & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes 3},M\right) & \overset{\varphi ^{3}=\mathrm{Hom}_{\Bbbk }\left( B^{\otimes 3},s\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left(
B^{\otimes 3},I\right) & \overset{\psi ^{3}=\mathrm{Hom}_{\Bbbk }\left( B^{\otimes 3},p\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes 3},C\right) & \rightarrow & 0 \\
& & \partial ^{3}\downarrow & & \partial ^{3}\downarrow & & \partial ^{3}\downarrow & & \\
& & \vdots & & \vdots & & \vdots & &
\end{array}
\end{equation*}
It is straightforward to prove that the above squares commute.
\end{invisible}
\begin{invisible}
In fact, for $b_{1},\ldots ,b_{n+1}\in B$,
\begin{eqnarray*}
&&\varphi ^{n+1}\partial ^{n}\left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n+1}\right) \\
&=&s\left( \sum\limits_{i=0}^{n+1}\left( -1\right) ^{i}\partial _{i}^{n}\left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n+1}\right) \right) \\
&=&s\left(
\begin{array}{c}
f\left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n}\right) b_{n+1}-f\left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n-1}\otimes b_{n}b_{n+1}\right) \\
+f\left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n-1}b_{n}\otimes b_{n+1}\right) +\cdots +\left( -1\right) ^{n+1}b_{1}f\left( b_{2}\otimes b_{3}\otimes \cdots \otimes b_{n}\otimes b_{n+1}\right)
\end{array}
\right) \\
&=&\left(
\begin{array}{c}
sf\left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n}\right) b_{n+1}-sf\left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n-1}\otimes b_{n}b_{n+1}\right) \\
+sf\left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n-1}b_{n}\otimes b_{n+1}\right) +\cdots +\left( -1\right) ^{n+1}b_{1}sf\left( b_{2}\otimes b_{3}\otimes \cdots \otimes b_{n}\otimes b_{n+1}\right)
\end{array}
\right) \\
&=&\sum\limits_{i=0}^{n+1}\left( -1\right) ^{i}\partial _{i}^{n}\left( sf\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n+1}\right) \\
&=&\partial ^{n}\left( sf\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n+1}\right) =\partial ^{n}\varphi ^{n}\left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n+1}\right) .
\end{eqnarray*}
Now consider the following commutative diagram
\begin{equation*}
\begin{array}{ccccccccc}
0 & \rightarrow & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},M\right) & \overset{\varphi ^{n}=\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},s\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},I\right) & \overset{\psi ^{n}=\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},p\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},C\right) & \rightarrow & 0 \\
& & \chi _{n}^{h}\left( M\right) \downarrow & & \chi _{n}^{h}\left( I\right) \downarrow & & \chi _{n}^{h}\left( C\right) \downarrow & & \\
0 & \rightarrow & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},M\right) & \overset{\varphi ^{n}=\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},s\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},I\right) & \overset{\psi ^{n}=\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},p\right) }{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},C\right) & \rightarrow & 0
\end{array}
\end{equation*}
where, for every $h\in H,$ we set
\begin{eqnarray*}
\chi _{0}^{h}\left( M\right) \left( f\right) \left( k\right) &:&=\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( k\right) \left( 1_{B}\#h_{2}\right) , \\
\chi _{1}^{h}\left( M\right) \left( f\right) \left( b_{1}\right) &:&=\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\right) \left( 1_{B}\#h_{3}\right) \text{ and for }n>1 \\
\chi _{n}^{h}\left( M\right) \left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n}\right) &:&=\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}b_{n}\right) \left( 1_{B}\#h_{n+2}\right) .
\end{eqnarray*}
Let us check that the diagram above commutes for each $n\in \N_0$. We have
\begin{eqnarray*}
&&\varphi ^{n}\chi _{n}^{h}\left( M\right) \left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n}\right) \\
&=&s\left( \left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}b_{n}\right) \left( 1_{B}\#h_{n+2}\right) \right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) \left( sf\right) \left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}b_{n}\right) \left( 1_{B}\#h_{n+2}\right) \\
&=&\chi _{n}^{h}\left( I\right) \left( sf\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n}\right) =\chi _{n}^{h}\left( I\right) \varphi ^{n}\left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n}\right)
\end{eqnarray*}
so that the left-hand square commutes, and similarly the right-hand one does. One also checks that $\chi _{n}^{h}\left( M\right) $ commutes with the differentials, i.e.
that the following diagram commutes:
\begin{equation*}
\begin{array}{ccccccccc}
0 & \rightarrow & M^{B} & \overset{\partial ^{-1}}{\longrightarrow } & \mathrm{Hom}_{\Bbbk }\left( \Bbbk ,M\right) & \overset{\partial ^{0}}{\longrightarrow } & \mathrm{Hom}_{\Bbbk }\left( B,M\right) & \overset{\partial ^{1}}{\longrightarrow } & \cdots \\
& & \chi _{-1}^{h}\left( M\right) =\varrho _{0}^{h}\left( M\right) \downarrow & & \chi _{0}^{h}\left( M\right) \downarrow & & \chi _{1}^{h}\left( M\right) \downarrow & & \\
0 & \rightarrow & M^{B} & \overset{\partial ^{-1}}{\longrightarrow } & \mathrm{Hom}_{\Bbbk }\left( \Bbbk ,M\right) & \overset{\partial ^{0}}{\longrightarrow } & \mathrm{Hom}_{\Bbbk }\left( B,M\right) & \overset{\partial ^{1}}{\longrightarrow } & \cdots
\end{array}
\end{equation*}
We compute
\begin{eqnarray*}
\left( \partial ^{-1}\chi _{-1}^{h}\left( M\right) \left( m\right) \right) \left( k\right) &=&\left( \partial ^{-1}\left( mh\right) \right) \left( k\right) =k\left( mh\right) =\left( 1_{B}\#S\left( h_{1}\right) \right) km\left( 1_{B}\#h_{2}\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) \partial ^{-1}\left( m\right) \left( k\right) \left( 1_{B}\#h_{2}\right) =\left( \chi _{0}^{h}\left( M\right) \partial ^{-1}\left( m\right) \right) \left( k\right) ,
\end{eqnarray*}
\begin{eqnarray*}
\left( \partial ^{0}\chi _{0}^{h}\left( M\right) \left( f\right) \right) \left( b_{1}\right) &=&\chi _{0}^{h}\left( M\right) \left( f\right) \left( 1\right) b_{1}-b_{1}\chi _{0}^{h}\left( M\right) \left( f\right) \left( 1\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( 1\right) \left( 1_{B}\#h_{2}\right) b_{1}-b_{1}\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( 1\right) \left( 1_{B}\#h_{2}\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( 1\right) \left( 1_{B}\#h_{2}\right) \left( b_{1}\#1_{H}\right) -\left( b_{1}\#1_{H}\right) \left( 1_{B}\#S\left( h_{1}\right) \right) f\left( 1\right) \left( 1_{B}\#h_{2}\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( 1\right) \left( h_{2}b_{1}\#h_{3}\right) -\left( b_{1}\#S\left( h_{1}\right) \right) f\left( 1\right) \left( 1_{B}\#h_{2}\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) \left( f\left( 1\right) \left( h_{2}b_{1}\#1_{H}\right) \right) \left( 1_{B}\#h_{3}\right) -\left( 1_{B}\#S\left( h_{1}\right) \right) \left( \left( h_{2}b_{1}\#1_{H}\right) f\left( 1\right) \right) \left( 1_{B}\#h_{3}\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) \left( f\left( 1\right) \left( h_{2}b_{1}\right) \right) \left( 1_{B}\#h_{3}\right) -\left( 1_{B}\#S\left( h_{1}\right) \right) \left( \left( h_{2}b_{1}\right) f\left( 1\right) \right) \left( 1_{B}\#h_{3}\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) \partial ^{0}\left( f\right) \left( h_{2}b_{1}\right) \left( 1_{B}\#h_{3}\right) =\left( \chi _{1}^{h}\left( M\right) \partial ^{0}\left( f\right) \right) \left( b_{1}\right)
\end{eqnarray*}
\begin{eqnarray*}
&&\left( \partial ^{n}\chi _{n}^{h}\left( M\right) \left( f\right) \right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n+1}\right) \\
&=&\sum\limits_{i=0}^{n+1}\left( -1\right) ^{i}\partial _{i}^{n}\chi _{n}^{h}\left( M\right) \left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n+1}\right) \\
&=&\left(
\begin{array}{c}
\chi _{n}^{h}\left( M\right) \left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n}\right) b_{n+1}-\chi _{n}^{h}\left( M\right) \left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n-1}\otimes b_{n}b_{n+1}\right) \\
+\chi _{n}^{h}\left( M\right)
\left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n-1}b_{n}\otimes b_{n+1}\right) +\cdots +\left( -1\right) ^{n+1}b_{1}\chi _{n}^{h}\left( M\right) \left( f\right) \left( b_{2}\otimes b_{3}\otimes \cdots \otimes b_{n}\otimes b_{n+1}\right)
\end{array}
\right) \\
&=&\left(
\begin{array}{c}
\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}b_{n}\right) \left( 1_{B}\#h_{n+2}\right) b_{n+1}+ \\
-\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}\left( b_{n}b_{n+1}\right) \right) \left( 1_{B}\#h_{n+2}\right) \\
+\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n}\left( b_{n-1}b_{n}\right) \otimes h_{n+1}b_{n+1}\right) \left( 1_{B}\#h_{n+2}\right) +\cdots \\
+\left( -1\right) ^{n+1}b_{1}\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{2}\otimes h_{3}b_{3}\otimes \cdots \otimes h_{n+1}b_{n+1}\right) \left( 1_{B}\#h_{n+2}\right)
\end{array}
\right) \\
&=&\left(
\begin{array}{c}
\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}b_{n}\right) \left( 1_{B}\#h_{n+2}\right) \left( b_{n+1}\#1_{H}\right) + \\
-\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes \left( h_{n+1}b_{n}\right) \left( h_{n+2}b_{n+1}\right) \right) \left( 1_{B}\#h_{n+3}\right) \\
+\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes \left( h_{n}b_{n-1}\right) \left( h_{n+1}b_{n}\right) \otimes h_{n+2}b_{n+1}\right) \left( 1_{B}\#h_{n+3}\right) +\cdots \\
+\left( -1\right) ^{n+1}\left( b_{1}\#1_{H}\right) \left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{2}\otimes h_{3}b_{3}\otimes \cdots \otimes h_{n+1}b_{n+1}\right) \left( 1_{B}\#h_{n+2}\right)
\end{array}
\right) \\
&=&\left(
\begin{array}{c}
\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}b_{n}\right) \left( h_{n+2}b_{n+1}\#h_{n+3}\right) + \\
-\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes \left( h_{n+1}b_{n}\right) \left( h_{n+2}b_{n+1}\right) \right) \left( 1_{B}\#h_{n+3}\right) \\
+\left( 1_{B}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes \left( h_{n}b_{n-1}\right) \left( h_{n+1}b_{n}\right) \otimes h_{n+2}b_{n+1}\right) \left( 1_{B}\#h_{n+3}\right) +\cdots \\
+\left( -1\right) ^{n+1}\left( b_{1}\#S\left( h_{1}\right) \right) f\left( h_{2}b_{2}\otimes h_{3}b_{3}\otimes \cdots \otimes h_{n+1}b_{n+1}\right) \left( 1_{B}\#h_{n+2}\right)
\end{array}
\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) \left(
\begin{array}{c}
f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}b_{n}\right) \left( h_{n+2}b_{n+1}\#1_{H}\right) \\
-f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes \left( h_{n+1}b_{n}\right) \left( h_{n+2}b_{n+1}\right) \right) + \\
+f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes \left( h_{n}b_{n-1}\right) \left( h_{n+1}b_{n}\right) \otimes h_{n+2}b_{n+1}\right) +\cdots \\
+\left( -1\right) ^{n+1}\left( h_{2}b_{1}\#1_{H}\right) f\left( h_{3}b_{2}\otimes h_{4}b_{3}\otimes \cdots \otimes h_{n+2}b_{n+1}\right)
\end{array}
\right) \left( 1_{B}\#h_{n+3}\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) \left(
\begin{array}{c}
f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+1}b_{n}\right) h_{n+2}b_{n+1} \\
-f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes \left( h_{n+1}b_{n}\right) \left( h_{n+2}b_{n+1}\right) \right) + \\
+f\left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes \left( h_{n}b_{n-1}\right) \left( h_{n+1}b_{n}\right) \otimes h_{n+2}b_{n+1}\right) +\cdots \\
+\left( -1\right) ^{n+1}h_{2}b_{1}f\left( h_{3}b_{2}\otimes h_{4}b_{3}\otimes \cdots \otimes h_{n+2}b_{n+1}\right)
\end{array}
\right) \left( 1_{B}\#h_{n+3}\right) \\
&=&\left( 1_{B}\#S\left( h_{1}\right) \right) \partial ^{n}\left( f\right) \left( h_{2}b_{1}\otimes h_{3}b_{2}\otimes \cdots \otimes h_{n+2}b_{n+1}\right) \left( 1_{B}\#h_{n+3}\right) \\
&=&\left( \chi _{n+1}^{h}\left( M\right) \partial ^{n}\left( f\right) \right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n+1}\right) .
\end{eqnarray*}
Therefore we obtain a commutative diagram with exact rows
\begin{equation*}
\begin{array}{ccccccccc}
0 & \rightarrow & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \ast },M\right) & \overset{\varphi ^{\ast }}{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \ast },I\right) & \overset{\psi ^{\ast }}{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \ast },C\right) & \rightarrow & 0 \\
& & \chi _{\ast }^{h}\left( M\right) \downarrow & & \chi _{\ast }^{h}\left( I\right) \downarrow & & \chi _{\ast }^{h}\left( C\right) \downarrow & & \\
0 & \rightarrow & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \ast },M\right) & \overset{\varphi ^{\ast }}{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \ast },I\right) & \overset{\psi ^{\ast }}{\rightarrow } & \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \ast },C\right) & \rightarrow & 0
\end{array}
\end{equation*}
This yields a commutative square for each $n\in \N_0$:
\begin{equation*}
\begin{array}{ccc}
\mathrm{H}^{n-1}\left( B,C\right) & \overset{\delta ^{n-1}}{\rightarrow } & \mathrm{H}^{n}\left( B,M\right) \\
\varrho _{n-1}^{h}\left( C\right) \downarrow & & \downarrow \varrho _{n}^{h}\left( M\right) \\
\mathrm{H}^{n-1}\left( B,C\right) & \overset{\delta ^{n-1}}{\rightarrow } & \mathrm{H}^{n}\left( B,M\right)
\end{array}
\end{equation*}
where
\begin{equation*}
\varrho _{n}^{h}\left( M\right) \left( \left[ f\right] \right) :=\left[ \chi _{n}^{h}\left( M\right) \left( f\right) \right] ,
\end{equation*}
see e.g. \cite[Proposition 0.4, page 6]{Br} in its dual form. Coming back to our short exact sequence
\begin{equation*}
0\rightarrow M\overset{s}{\rightarrow }I\overset{p}{\rightarrow }C\rightarrow 0,
\end{equation*}
from $\mathrm{H}^{n}\left( B,I\right) =0$ for every $n\geq 1$ (as observed above) we get a commutative diagram
\begin{equation*}
\begin{array}{ccccccc}
\mathrm{H}^{n-1}\left( B,I\right) & \overset{\mathrm{H}^{n-1}\left( B,p\right) }{\rightarrow } & \mathrm{H}^{n-1}\left( B,C\right) & \overset{\delta ^{n-1}}{\rightarrow } & \mathrm{H}^{n}\left( B,M\right) & \overset{\mathrm{H}^{n}\left( B,s\right) }{\rightarrow } & \mathrm{H}^{n}\left( B,I\right) =0 \\
\varrho _{n-1}^{h}\left( I\right) \downarrow & & \varrho _{n-1}^{h}\left( C\right) \downarrow & & \downarrow \varrho _{n}^{h}\left( M\right) & & \\
\mathrm{H}^{n-1}\left( B,I\right) & \overset{\mathrm{H}^{n-1}\left( B,p\right) }{\rightarrow } & \mathrm{H}^{n-1}\left( B,C\right) & \overset{\delta ^{n-1}}{\rightarrow } & \mathrm{H}^{n}\left( B,M\right) & &
\end{array}
\end{equation*}
so that the map $\varrho _{n}^{h}\left( M\right) $ is uniquely determined by the universal property of the cokernel in the upper sequence. This map is the one used in \cite[Proposition 2.4]{St}, which is constructed using \cite[Theorem 7.5, page 78]{Br}.
Summing up, the right $H$-module structure of $\mathrm{H}^{n}\left( B,M\right) $ is given by $\left[ f\right] h:=\varrho _{n}^{h}\left( M\right) \left( \left[ f\right] \right) =\left[ \chi _{n}^{h}\left( M\right) \left( f\right) \right] .$
\end{invisible}
\end{proof}

\begin{remark}\label{rem:DV}
Proposition \ref{pro:Dragos} in the particular case when $M=\Bbbk$ and $B$ is finite-dimensional is \cite[Theorem 2.17]{SVay}. Note that, in the notations therein, one has $E(B)=\oplus_{n \in \N_0}E_n(B,\Bbbk)$ where $E_n(B,\Bbbk)=\mathrm{Ext}^n_B(\Bbbk,\Bbbk)\cong\mathrm{H}^n(B,\Bbbk)$. The latter isomorphism is \cite[Corollary 4.4, page 170]{CE}.
\end{remark}

Let $H$ be a Hopf algebra and let $B$ be a bialgebra in the braided category ${_{H}^{H}\mathcal{YD}}$. Denote by $A:=B\#H$ the Radford-Majid bosonization of $B$ by $H,$ see e.g. \cite[Theorem 1]{Ra-TheStruct}. Note that $A$ is endowed with an algebra map $\varepsilon _{A}:A\rightarrow \Bbbk $ defined by $\varepsilon _{A}\left( b\#h\right) =\varepsilon _{B}\left( b\right) \varepsilon _{H}\left( h\right) $ so that we can regard $\Bbbk $ as an $A$-bimodule via $\varepsilon _{A}.$ Then we can consider $\mathrm{H}^{n}\left( B,\Bbbk \right) $ as an $H$-bimodule as follows. Its structure of left $H$-module is given via $\varepsilon _{H}$ and its structure of right $H$-module is defined, for every $f\in \mathrm{Z}^{n}\left( B,\Bbbk \right) $ and $h\in H,$ by setting
\begin{equation*}
\left[ f\right] h:=\left[ fh\right] ,
\end{equation*}
where $\left( fh\right) \left( z\right) =f\left( hz\right) ,$ for every $z\in B^{\otimes n}$. The latter is the usual right $H$-module structure of $\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) .$ Indeed, for every $n\geq -1$, the vector space $\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) $ is an $H$-bimodule with respect to this right $H$-module structure and the left one induced by $\varepsilon _{H}.$

\begin{corollary}
\label{coro:K}Let $H$ be a semisimple Hopf algebra and let $B$ be a bialgebra in the braided category ${_{H}^{H}\mathcal{YD}}$. Set $A:=B\#H.$ Then, for each $n\in \N_0$,
\begin{equation*}
\mathrm{H}^{n}\left( B\#H,\Bbbk \right) \cong \mathrm{H}^{n}\left( B,\Bbbk \right) ^{H}
\end{equation*}
and the differential $\partial ^{n}:\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) \rightarrow \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \left( n+1\right) },\Bbbk \right) $ of the usual Hochschild cohomology is $H$-bilinear.
\end{corollary}

\begin{proof}
In the particular case $M=\Bbbk ,$ the right $H$-module structure used in Proposition \ref{pro:Dragos} simplifies as follows. It is defined, for every $f\in \mathrm{Z}^{n}\left( B,\Bbbk \right) $ and $h\in H,$ by setting
\begin{equation*}
\left[ f\right] h:=\left[ \chi _{n}^{h}\left( \Bbbk \right) \left( f\right) \right]
\end{equation*}
where, for every $k\in \Bbbk ,b_{1},\ldots ,b_{n}\in B,$ we set
\begin{eqnarray*}
\chi _{0}^{h}\left( \Bbbk \right) \left( f\right) \left( k\right) &:&=\varepsilon _{H}\left( h\right) f\left( k\right) \text{ for }n=0,\text{ while for }n\geq 1 \\
\chi _{n}^{h}\left( \Bbbk \right) \left( f\right) \left( b_{1}\otimes b_{2}\otimes \cdots \otimes b_{n}\right) &:&=f\left( h_{1}b_{1}\otimes h_{2}b_{2}\otimes \cdots \otimes h_{n}b_{n}\right) .
\end{eqnarray*}
More concisely, $\chi _{n}^{h}\left( \Bbbk \right) \left( f\right) \left( z\right) =f\left( hz\right) $ for every $n\in \N_0$ and $z\in B^{\otimes n}$, i.e.
$\left[ f\right] h:=\left[ fh\right] $ where $fh:=\chi _{n}^{h}\left( \Bbbk \right) \left( f\right) .$ Now consider the differential $\partial ^{n}:\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) \rightarrow \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \left( n+1\right) },\Bbbk \right) $ of the usual Hochschild cohomology. Note that, for each $n\in \N_0$, $\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) $ is regarded as a bimodule over $H$ using the left $H$-module structures of its arguments. By (\ref{form:PartialChi}), we have
\begin{equation*}
\partial ^{n}\chi _{n}^{h}\left( \Bbbk \right) \left( f\right) =\chi _{n+1}^{h}\left( \Bbbk \right) \partial ^{n}\left( f\right) .
\end{equation*}
Since $\chi _{n}^{h}\left( \Bbbk \right) \left( f\right) =fh$, the last displayed equality becomes $\partial ^{n}\left( fh\right) =\partial ^{n}\left( f\right) h\,$ for every $n\in \N_0$. Thus $\partial ^{n}$ is right $H$-linear. Since $hf=\varepsilon _{H}\left( h\right) f$ for every $f\in \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) ,h\in H,$ we get that $\partial ^{n}$ is also left $H$-linear, whence $H$-bilinear.
\end{proof}

\begin{remark}
Note that, in the context of the proof of \cite[Proposition 5.1]{EG}, one has
\begin{equation*}
\mathrm{H}^{3}\left( \mathcal{B}\left( V\right) \#\mathbb{C}\left[ \mathbb{Z}_{p}\right] ,\mathbb{C}\right) \cong \mathrm{H}^{3}\left( \mathcal{B}\left( V\right) ,\mathbb{C}\right) ^{\mathbb{Z}_{p}}.
\end{equation*}
This is a particular case of Corollary \ref{coro:K} where $H=\mathbb{C}\left[ \mathbb{Z}_{p}\right] ,$\thinspace $V\in {_{H}^{H}\mathcal{YD}}$ and $B=\mathcal{B}\left( V\right) $.
\end{remark}

\begin{proposition}
\label{pro:Prerad}Let $\mathcal{C}$ and $\mathcal{D}$ be abelian categories. Let $r,\omega :\mathcal{C}\rightarrow \mathcal{D}$ be exact functors such that $r$ is a subfunctor of $\omega ,$ i.e. there is a natural transformation $\eta :r\rightarrow \omega $ which is a monomorphism when evaluated on objects. If $X$ is a subobject of $Y$ then $r\left( X\right) =\omega \left( X\right) \cap r\left( Y\right) .$ Moreover, for every morphism $f:X\rightarrow Y$ in $\mathcal{C}$ one has
\begin{eqnarray*}
\mathrm{ker}\left( r\left( f\right) \right) &=&r\left( \mathrm{ker}\left( f\right) \right) =\omega \left( \mathrm{ker}\left( f\right) \right) \cap r\left( X\right) =\mathrm{ker}\left( \omega \left( f\right) \right) \cap r\left( X\right) , \\
\mathrm{Im}\left( r\left( f\right) \right) &=&\mathrm{Im}\left( \omega \left( f\right) \right) \cap r\left( Y\right) =r\left( \mathrm{Im}\left( f\right) \right) .
\end{eqnarray*}
\end{proposition}

\begin{proof}
The proof is similar to \cite[Proposition 1.7, page 138]{Sten}.
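In outline: one applies the exact functors $r$ and $\omega $ to the exact sequence $0\rightarrow X\overset{s}{\rightarrow }Y\overset{p}{\rightarrow }Y/X\rightarrow 0$ and uses that $\eta $ is a monomorphism on objects to check that $\left( r\left( X\right) ,\eta Y\circ r\left( s\right) \right) $ satisfies the universal property of the intersection $\omega \left( X\right) \cap r\left( Y\right) $ inside $\omega \left( Y\right) ;$ the displayed identities for kernels and images then follow by applying this to $\mathrm{ker}\left( f\right) $ and to $\mathrm{Coker}\left( f\right) ,$ together with the exactness of $r$ and $\omega .$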
\begin{invisible}
Consider an exact sequence $0\rightarrow X\overset{s}{\rightarrow }Y\overset{p}{\rightarrow }Y/X\rightarrow 0.$ By exactness we get a commutative diagram as follows
\begin{equation*}
\begin{array}{ccccccccc}
& & & & 0 & & & & \\
& & & & \downarrow & & & & \\
0 & \rightarrow & r\left( X\right) & \overset{r\left( s\right) }{\rightarrow } & r\left( Y\right) & \overset{r\left( p\right) }{\rightarrow } & r\left( Y/X\right) & \rightarrow & 0 \\
& & \downarrow \eta X & & \downarrow \eta Y & & \downarrow \eta \left( Y/X\right) & & \\
0 & \rightarrow & \omega \left( X\right) & \overset{\omega \left( s\right) }{\rightarrow } & \omega \left( Y\right) & \overset{\omega \left( p\right) }{\rightarrow } & \omega \left( Y\right) /\omega \left( X\right) \cong \omega \left( Y/X\right) & \rightarrow & 0 \\
& & & & \downarrow \tau & & & & \\
& & & & \omega \left( Y\right) /r\left( Y\right) & & & & \\
& & & & \downarrow & & & & \\
& & & & 0 & & & &
\end{array}
\end{equation*}
where all rows and columns are exact sequences. Let us prove that $\left( r\left( X\right) ,\eta Y\circ r\left( s\right) \right) =\omega \left( X\right) \cap r\left( Y\right) .$ We have
\begin{equation*}
\omega \left( p\right) \circ \left( \eta Y\circ r\left( s\right) \right) =\eta \left( Y/X\right) \circ r\left( p\right) \circ r\left( s\right) =0=\tau \circ \left( \eta Y\circ r\left( s\right) \right) .
\end{equation*}
Let $\alpha :Z\rightarrow \omega \left( Y\right) $ be a morphism such that $\tau \circ \alpha =0=\omega \left( p\right) \circ \alpha .$ The first equality entails that there exists a morphism $\alpha ^{\prime }:Z\rightarrow r\left( Y\right) $ such that $\eta Y\circ \alpha ^{\prime }=\alpha .$ We have
\begin{equation*}
\eta \left( Y/X\right) \circ r\left( p\right) \circ \alpha ^{\prime }=\omega \left( p\right) \circ \eta Y\circ \alpha ^{\prime }=\omega \left( p\right) \circ \alpha =0.
\end{equation*}
Since $\eta \left( Y/X\right) $ is a monomorphism, we deduce that $r\left( p\right) \circ \alpha ^{\prime }=0$ so that there is a morphism $\alpha ^{\prime \prime }:Z\rightarrow r\left( X\right) $ such that $r\left( s\right) \circ \alpha ^{\prime \prime }=\alpha ^{\prime }.$ Thus
\begin{equation*}
\left( \eta Y\circ r\left( s\right) \right) \circ \alpha ^{\prime \prime }=\eta Y\circ \alpha ^{\prime }=\alpha .
\end{equation*}
Since $\eta Y\circ r\left( s\right) $ is a monomorphism, we deduce that $\alpha ^{\prime \prime }$ is uniquely determined by the equality above. This proves that $\left( r\left( X\right) ,\eta Y\circ r\left( s\right) \right) =\mathrm{ker}\left( \Delta \left( \tau ,\omega \left( p\right) \right) \right) =\omega \left( X\right) \cap r\left( Y\right) ,$ where $\Delta \left( \tau ,\omega \left( p\right) \right) $ denotes the diagonal morphism of $\left( \tau ,\omega \left( p\right) \right) $. Let $f:X\rightarrow Y$ be a morphism in $\mathcal{C}$. We have that $r\left( \mathrm{ker}\left( f\right) \right) =\omega \left( \mathrm{ker}\left( f\right) \right) \cap r\left( X\right) .$ By exactness we get $r\left( \mathrm{ker}\left( f\right) \right) =\mathrm{ker}\left( r\left( f\right) \right) $ and $\omega \left( \mathrm{ker}\left( f\right) \right) =\mathrm{ker}\left( \omega \left( f\right) \right) .$
Thus
\begin{eqnarray*}
\mathrm{Im}\left( r\left( f\right) \right) &=&\mathrm{ker}\left( \mathrm{Coker}\left( r\left( f\right) \right) \right) =\mathrm{ker}\left( r\left( \mathrm{Coker}\,f\right) \right) \\
&=&r\left( \mathrm{ker}\left( \mathrm{Coker}\,f\right) \right) =\omega \left( \mathrm{ker}\left( \mathrm{Coker}\,f\right) \right) \cap r\left( Y\right) \\
&=&\mathrm{ker}\left( \mathrm{Coker}\left( \omega \left( f\right) \right) \right) \cap r\left( Y\right) =\mathrm{Im}\left( \omega \left( f\right) \right) \cap r\left( Y\right) .
\end{eqnarray*}
Note that from the above computation one gets $\mathrm{Im}\left( r\left( f\right) \right) =r\left( \mathrm{ker}\left( \mathrm{Coker}\,f\right) \right) =r\left( \mathrm{Im}\left( f\right) \right) $.
\end{invisible}
\end{proof}

\begin{remark}
\label{rem:ExactInv}From Corollary \ref{coro:K}, we have
\begin{eqnarray*}
\mathrm{H}^{n}\left( B,\Bbbk \right) ^{H} &=&\left\{ \left[ f\right] \mid f\in \mathrm{Z}^{n}\left( B,\Bbbk \right) ,\varepsilon _{H}\left( h\right) \left[ f\right] =\left[ f\right] h,\text{ for every }h\in H\right\} \\
&=&\left\{ \left[ f\right] \mid f\in \mathrm{Z}^{n}\left( B,\Bbbk \right) ,\left[ \varepsilon _{H}\left( h\right) f\right] =\left[ fh\right] ,\text{ for every }h\in H\right\}
\end{eqnarray*}
where, for every $z\in B^{\otimes n}$, we have
\begin{equation*}
\left( fh\right) \left( z\right) =f\left( hz\right) .
\end{equation*}
Note that, for any $H$-bimodule $M$, one has
\begin{equation*}
\mathrm{Hom}_{H,H}\left( H,M\right) \cong M^{H}=\left\{ m\in M\mid hm=mh,\text{ for every }h\in H\right\} .
\end{equation*}
Note also that $H$ is a separable $\Bbbk $-algebra, whence it is projective in the category of $H$-bimodules. As a consequence, $\mathrm{Hom}_{H,H}\left( H,{-}\right) \cong \left( {-}\right) ^{H}:{_{H}}\mathfrak{M}_{H}\rightarrow \mathfrak{M}$ is an exact functor (here ${_{H}}\mathfrak{M}_{H}$ is the category of $H$-bimodules and $\mathfrak{M}$ the category of $\Bbbk $-vector spaces). By Proposition \ref{pro:Prerad} applied to the case when $r:=\left( {-}\right) ^{H}:{_{H}}\mathfrak{M}_{H}\rightarrow \mathfrak{M}$ and $\omega $ is the forgetful functor, for every morphism $f:X\rightarrow Y$ of $H$-bimodules one has
\begin{equation*}
\mathrm{ker}\left( f^{H}\right) =\mathrm{ker}\left( f\right) \cap X^{H}=\left( \mathrm{ker}\left( f\right) \right) ^{H}\qquad \text{and}\qquad \mathrm{Im}\left( f^{H}\right) =\mathrm{Im}\left( f\right) \cap Y^{H}=\left( \mathrm{Im}\left( f\right) \right) ^{H}.
\end{equation*}
Still by Corollary \ref{coro:K}, we know that the differential $\partial ^{n}:\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) \longrightarrow \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \left( n+1\right) },\Bbbk \right) $ of the usual Hochschild cohomology is $H$-bilinear. Thus we can apply the argument above to get
\begin{eqnarray*}
\mathrm{ker}\left( \left( \partial ^{n}\right) ^{H}\right) &=&\mathrm{ker}\left( \partial ^{n}\right) \cap \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) ^{H}=\left( \mathrm{ker}\left( \partial ^{n}\right) \right) ^{H}\qquad \text{and}\qquad \\
\mathrm{Im}\left( \left( \partial ^{n-1}\right) ^{H}\right) &=&\mathrm{Im}\left( \partial ^{n-1}\right) \cap \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) ^{H}=\left( \mathrm{Im}\left( \partial ^{n-1}\right) \right) ^{H}.
\end{eqnarray*}
Now $\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) ^{H}=\mathrm{Hom}_{H,-}\left( B^{\otimes n},\Bbbk \right) $ so that we get
\begin{eqnarray*}
\mathrm{Z}_{H\text{-}\mathrm{Mod}}^{n}\left( B,\Bbbk \right) &=&\mathrm{Z}^{n}\left( B,\Bbbk \right) \cap \mathrm{Hom}_{H,-}\left( B^{\otimes n},\Bbbk \right) =\mathrm{Z}^{n}\left( B,\Bbbk \right) ^{H}\qquad \text{and}\qquad \\
\mathrm{B}_{H\text{-}\mathrm{Mod}}^{n}\left( B,\Bbbk \right) &=&\mathrm{B}^{n}\left( B,\Bbbk \right) \cap \mathrm{Hom}_{H,-}\left( B^{\otimes n},\Bbbk \right) =\mathrm{B}^{n}\left( B,\Bbbk \right) ^{H},
\end{eqnarray*}
where $\mathrm{Z}_{H\text{-}\mathrm{Mod}}^{n}\left( B,\Bbbk \right) $ and $\mathrm{B}_{H\text{-}\mathrm{Mod}}^{n}\left( B,\Bbbk \right) $ denote the abelian groups of $n$-cocycles and of $n$-coboundaries, respectively, for the cohomology of the algebra $B$ with coefficients in $\Bbbk $ computed in the monoidal category $H$-$\mathrm{Mod}$ of left $H$-modules. The corresponding $n$-th Hochschild cohomology group is
\begin{equation*}
\mathrm{H}_{H\text{-}\mathrm{Mod}}^{n}\left( B,\Bbbk \right) :=\frac{\mathrm{Z}_{H\text{-}\mathrm{Mod}}^{n}\left( B,\Bbbk \right) }{\mathrm{B}_{H\text{-}\mathrm{Mod}}^{n}\left( B,\Bbbk \right) }=\frac{\mathrm{Z}^{n}\left( B,\Bbbk \right) ^{H}}{\mathrm{B}^{n}\left( B,\Bbbk \right) ^{H}}\cong \left( \frac{\mathrm{Z}^{n}\left( B,\Bbbk \right) }{\mathrm{B}^{n}\left( B,\Bbbk \right) }\right) ^{H}=\mathrm{H}^{n}\left( B,\Bbbk \right) ^{H}.
\end{equation*}
\end{remark}

Denote by $D\left( H\right) $ the Drinfeld double, see e.g. the first structure of \cite[Theorem 7.1.1]{Maj}.

\begin{proposition}
\label{pro:D(H)}In the setting of Corollary \ref{coro:K} assume that $H$ is also cosemisimple. Then, for $n\in \N_0$,
\begin{equation*}
\mathrm{Z}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right) =\mathrm{Z}^{n}\left( B,\Bbbk \right) ^{D(H)},\quad \mathrm{B}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right) =\mathrm{B}^{n}\left( B,\Bbbk \right) ^{D(H)}\quad \text{and}\quad \mathrm{H}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right) \cong \mathrm{H}^{n}\left( B,\Bbbk \right) ^{D(H)},
\end{equation*}
where $\mathrm{Z}^{n}\left( B,\Bbbk \right) $ and $\mathrm{B}^{n}\left( B,\Bbbk \right) $ are regarded as $D\left( H\right) $-subbimodules of $\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) $ whose structure is induced by the left $D\left( H\right) $-module structures of its arguments. Moreover $\mathrm{H}^{n}\left( B,\Bbbk \right) ^{D(H)}$ is a subspace of $\mathrm{H}^{n}\left( B,\Bbbk \right) ^{H}.$
\end{proposition}

\begin{proof}
For brevity, in this proof we denote $D(H)$ by $D$. Consider the analogue of the standard complex as in Remark \ref{rem:HochYD}:
\begin{equation*}
\xymatrixrowsep{25pt}\xymatrixcolsep{1cm}\xymatrix{\yd( \Bbbk ,\Bbbk) \ar[r]^{\partial ^0}& \yd(B ,\Bbbk) \ar[r]^{\partial ^1}&\yd( B^{\otimes2} ,\Bbbk) \ar[r]^-{\partial ^2}&\cdots}
\end{equation*}
where $\partial ^{n}$ is induced by the differential $\partial ^{n}:\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) \longrightarrow \mathrm{Hom}_{\Bbbk }\left( B^{\otimes \left( n+1\right) },\Bbbk \right) $ of the ordinary Hochschild cohomology. Now, since $H$ is semisimple, it is finite-dimensional (whence it has bijective antipode) so that, by a result essentially due to Majid (see \cite[Proposition 10.6.16]{Mo}) and by \cite[Proposition 6]{RT}, we get a category isomorphism ${_{H}^{H}\mathcal{YD}}\cong {_{D}}\mathfrak{M}$.
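In particular, under this isomorphism a $\Bbbk $-linear map between objects of ${_{H}^{H}\mathcal{YD}}$ is a morphism of Yetter-Drinfeld modules if and only if it is left $D$-linear, so that $\yd\left( V,W\right) =\mathrm{Hom}_{D,-}\left( V,W\right) $ for all $V,W\in {_{H}^{H}\mathcal{YD}}$.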
Thus the complex above can be rewritten as follows:
\begin{equation*}
\xymatrixrowsep{25pt}\xymatrixcolsep{0.5cm}\xymatrix{\mathrm{Hom}_{D,-}( \Bbbk ,\Bbbk) \ar[r]^{\partial ^0}& \mathrm{Hom}_{D,-}(B ,\Bbbk) \ar[r]^{\partial ^1}& \mathrm{Hom}_{D,-}( B^{\otimes2} ,\Bbbk) \ar[r]^-{\partial ^2}&\cdots}
\end{equation*}
Now, since, for each $n\in \N_0$, we have $\mathrm{Hom}_{D,-}\left( B^{\otimes n},\Bbbk \right) =\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) ^{D}$, we obtain the complex
\begin{equation*}
\xymatrixrowsep{25pt}\xymatrixcolsep{0.5cm}\xymatrix{\mathrm{Hom}_\Bbbk( \Bbbk ,\Bbbk)^{D} \ar[r]^{\partial ^0}& \mathrm{Hom}_\Bbbk(B ,\Bbbk)^{D} \ar[r]^{\partial ^1}& \mathrm{Hom}_\Bbbk( B^{\otimes2} ,\Bbbk)^{D} \ar[r]^-{\partial ^2}&\cdots}
\end{equation*}
We will write $\left( \partial ^{n}\right) ^{D}$ instead of $\partial ^{n}$ when we would like to stress that the map considered is the one induced on invariants. Thus we will write, equivalently,
\begin{equation*}
\xymatrixrowsep{25pt}\xymatrixcolsep{45pt}\xymatrix{\mathrm{Hom}_\Bbbk( \Bbbk ,\Bbbk)^{D} \ar[r]^{(\partial ^0)^{D}}& \mathrm{Hom}_\Bbbk(B ,\Bbbk)^{D} \ar[r]^{(\partial ^1)^{D}}& \mathrm{Hom}_\Bbbk( B^{\otimes2} ,\Bbbk)^{D} \ar[r]^-{(\partial ^2)^{D}}&\cdots}
\end{equation*}
Now, assume $H$ is also cosemisimple. Since $H$ is both semisimple and cosemisimple, by \cite[Proposition 7]{Ra} the Hopf algebra $D$ is semisimple as an algebra. Thus, as in Remark \ref{rem:ExactInv} in the case of $H$, the functor $\left( -\right) ^{D}:{_{D}}\mathfrak{M}_{D}\rightarrow \mathfrak{M}$ is exact (here ${_{D}}\mathfrak{M}_{D}$ is the category of $D$-bimodules and $\mathfrak{M}$ the category of $\Bbbk $-vector spaces). By Proposition \ref{pro:Prerad} applied to the case when $r:=\left( {-}\right) ^{D}:{_{D}}\mathfrak{M}_{D}\rightarrow \mathfrak{M}$ and $\omega $ is the forgetful functor, for every morphism $f:X\rightarrow Y$ of $D$-bimodules one has
\begin{equation*}
\mathrm{ker}\left( f^{D}\right) =\mathrm{ker}\left( f\right) \cap X^{D}=\left( \mathrm{ker}\left( f\right) \right) ^{D}\qquad \text{and}\qquad \mathrm{Im}\left( f^{D}\right) =\mathrm{Im}\left( f\right) \cap Y^{D}=\left( \mathrm{Im}\left( f\right) \right) ^{D}.
\end{equation*}
In particular we get
\begin{eqnarray*}
\mathrm{ker}\left( \left( \partial ^{n}\right) ^{D}\right) &=&\mathrm{ker}\left( \partial ^{n}\right) \cap \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) ^{D}=\left( \mathrm{ker}\left( \partial ^{n}\right) \right) ^{D}\qquad \text{and} \\
\mathrm{Im}\left( \left( \partial ^{n-1}\right) ^{D}\right) &=&\mathrm{Im}\left( \partial ^{n-1}\right) \cap \mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) ^{D}=\left( \mathrm{Im}\left( \partial ^{n-1}\right) \right) ^{D},
\end{eqnarray*}
and hence
\begin{eqnarray*}
\mathrm{Z}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right) &=&\mathrm{Z}^{n}\left( B,\Bbbk \right) \cap \mathrm{Hom}_{D,-}\left( B^{\otimes n},\Bbbk \right) =\mathrm{Z}^{n}\left( B,\Bbbk \right) ^{D}\qquad \text{and} \\
\mathrm{B}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right) &=&\mathrm{B}^{n}\left( B,\Bbbk \right) \cap \mathrm{Hom}_{D,-}\left( B^{\otimes n},\Bbbk \right) =\mathrm{B}^{n}\left( B,\Bbbk \right) ^{D}.
\end{eqnarray*}
Then we obtain
\begin{equation*}
\mathrm{H}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right) =\frac{\mathrm{Z}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right) }{\mathrm{B}_{{\mathcal{YD}}}^{n}\left( B,\Bbbk \right) }=\frac{\mathrm{Z}^{n}\left( B,\Bbbk \right) ^{D}}{\mathrm{B}^{n}\left( B,\Bbbk \right) ^{D}}\cong \mathrm{H}^{n}\left( B,\Bbbk \right) ^{D}.
\end{equation*}
Let us prove the last part of the statement. The correspondence between the left $D$-module structure and the structure of Yetter-Drinfeld module over $H$ is written explicitly in \cite[Proposition 7.1.6]{Maj}. In particular $D=H^{\ast }\otimes H$ and, given $V\in {_{H}^{H}\mathcal{YD}}$, the two structures are related by the following equality: $\left( f\otimes h\right) \rhd v=f\left( \left( h\rhd v\right) _{-1}\right) \left( h\rhd v\right) _{0}$ for every $f\in H^{\ast }$, $h\in H$, $v\in V$. Thus $\left( \varepsilon _{H}\otimes h\right) \rhd v=h\rhd v$. Moreover $H$ is a Hopf subalgebra of $D$ via $h\mapsto \varepsilon _{H}\otimes h$, where $D$ is considered with the first structure of \cite[Theorem 7.1.1]{Maj}. Since the $D$-bimodule structure of $\mathrm{H}^{n}\left( B,\Bbbk \right) $ is induced by the one of $\mathrm{Hom}_{\Bbbk }\left( B^{\otimes n},\Bbbk \right) $, which comes from the left $D$-module structures of its arguments, and similarly for the $H$-bimodule structure of $\mathrm{H}^{n}\left( B,\Bbbk \right) $, we deduce that $\mathrm{H}^{n}\left( B,\Bbbk \right) ^{D}$ is a subspace of $\mathrm{H}^{n}\left( B,\Bbbk \right) ^{H}$.
\end{proof}

\begin{example}
In the setting of the proof of \cite[Theorem 4.1.3]{An-Basic}, a Nichols algebra $\mathcal{B}\left( V\right) $ such that $\mathrm{H}^{3}\left( \mathcal{B}\left( V\right) ,\Bbbk \right) ^{\mathbb{Z}_{m}}=0$ is considered, where $\Bbbk $ is a field of characteristic zero. By Proposition \ref{pro:D(H)} applied in the case $H=\Bbbk \mathbb{Z}_{m}$ and $B=\mathcal{B}\left( V\right) $, we have that $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathcal{B}\left( V\right) ,\Bbbk \right) \cong \mathrm{H}^{3}\left( \mathcal{B}\left( V\right) ,\Bbbk \right) ^{D(H)}$ is a subspace of $\mathrm{H}^{3}\left( \mathcal{B}\left( V\right) ,\Bbbk \right) ^{H}=\mathrm{H}^{3}\left( \mathcal{B}\left( V\right) ,\Bbbk \right) ^{\mathbb{Z}_{m}}=0$. Thus we get $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathcal{B}\left( V\right) ,\Bbbk \right) =0$. Therefore, in view of Theorem \ref{teo:GelakiYD}, if $\left( Q,m,u,\Delta ,\varepsilon ,\omega \right) $ is a f.d. connected coquasi-bialgebra in ${_{H}^{H}\mathcal{YD}}$ such that $\mathrm{gr}Q\cong \mathcal{B}\left( V\right) $ (as above) as augmented algebras in ${_{H}^{H}\mathcal{YD}}$ (the counit must be the same in order to have the same Yetter-Drinfeld module structure on $\Bbbk $), then we can conclude that $Q$ is gauge equivalent to a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$.
\end{example}

\begin{remark}
Let $A$ be a finite-dimensional coquasi-bialgebra with the dual Chevalley property, i.e. the coradical $H$ of $A$ is a coquasi-subbialgebra of $A$ (in particular $H$ is cosemisimple). Assume the coquasi-bialgebra structure of $H$ has trivial reassociator (i.e. it is an ordinary bialgebra) and also assume it has an antipode (i.e. it is a Hopf algebra). Then, by \cite[Corollary 6.4]{AP}, $\mathrm{gr}A$ is isomorphic to $R\#H$ as a coquasi-bialgebra, where $R$ is a suitable connected bialgebra in ${_{H}^{H}\mathcal{YD}}$. Note that $R\#H$ is the usual Radford-Majid bosonization as $H$ has trivial reassociator, see \cite[Definition 5.4]{AP}. Hence we can compute
\begin{equation*}
\mathrm{H}^{3}\left( \mathrm{gr}A,\Bbbk \right) =\mathrm{H}^{3}\left( R\#H,\Bbbk \right) .
\end{equation*}
Assume further that $H$ is semisimple.
Then, by Corollary \ref{coro:K}, we have
\begin{equation*}
\mathrm{H}^{n}\left( R\#H,\Bbbk \right) \cong \mathrm{H}^{n}\left( R,\Bbbk \right) ^{H},
\end{equation*}
so that $\mathrm{H}^{3}\left( \mathrm{gr}A,\Bbbk \right) \cong \mathrm{H}^{3}\left( R,\Bbbk \right) ^{H}$. Thus, if $\mathrm{H}^{3}\left( R,\Bbbk \right) ^{H}=0$, one gets $\mathrm{H}^{3}\left( \mathrm{gr}A,\Bbbk \right) =0$, which is the analogue of the condition of \cite[Proposition 2.3]{EG} (note that our $A$ is the dual of the one considered therein) guaranteeing that $A$ is gauge equivalent to an ordinary Hopf algebra, provided $A$ has a quasi-antipode and $\Bbbk =\mathbb{C}$. Next we will give another approach to arrive at the same conclusion, requiring only $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( R,\Bbbk \right) =0$. Note that a priori $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( R,\Bbbk \right) \cong \mathrm{H}^{3}\left( R,\Bbbk \right) ^{D\left( H\right) }$ is smaller than $\mathrm{H}^{3}\left( R,\Bbbk \right) ^{H}$.
\end{remark}

\section{Dual Chevalley}\label{sec:5}

The main aim of this section is to prove Theorem \ref{teo:main}. Let $A$ be a Hopf algebra over a field $\Bbbk $ of characteristic zero such that the coradical $H$ of $A$ is a sub-Hopf algebra (i.e. $A$ has the dual Chevalley property). Assume $H$ is finite-dimensional so that $H$ is semisimple. By \cite[Theorem I]{ABM}, there is a gauge transformation $\zeta :A\otimes A\rightarrow \Bbbk $ such that $A^{\zeta }$ is isomorphic, as a coquasi-bialgebra, to the bosonization $Q\#H$ of a connected coquasi-bialgebra $Q$ in ${_{H}^{H}\mathcal{YD}}$ by $H$. By construction $\zeta $ is $H$-bilinear and $H$-balanced: this follows from \cite[Proposition 5.7]{ABM} (note that the gauge transformation $v_{B}:B\otimes B\rightarrow \Bbbk $, used therein for $B:=R\#_{\xi }H$, is $H$-bilinear and $H$-balanced, as observed in the proof) and the fact that there is an $H$-bilinear Hopf algebra isomorphism $\psi :B\rightarrow A$ (see \cite[Proof of Theorem I, page 36 and Theorem 6.1]{ABM}, which is a consequence of \cite[Theorem 3.64]{AMS}), where $\left( R,\xi \right) $ is a suitable connected pre-bialgebra with cocycle in ${_{H}^{H}\mathcal{YD}}$ (note that $\zeta =v_{B}\circ \left( \psi ^{-1}\otimes \psi ^{-1}\right) $): here by connected pre-bialgebra we mean that the coradical $R_{0}$ of $R$ is $\Bbbk 1_{R}$ (by the properties of $1_{R}$ this implies that $R_{0}$ is a subcoalgebra of $R$ in ${_{H}^{H}\mathcal{YD}}$). Assume that $A$ is finite-dimensional. Then $Q\#H$, and hence $Q$, is finite-dimensional. Thus, by Theorem \ref{teo:GelakiYD}, if $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q,\Bbbk \right) =0$, then $Q$ is gauge equivalent to a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$.
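For the reader's convenience, the reduction performed so far can be summarized as follows:
\begin{equation*}
A^{\zeta }\cong Q\#H\text{ with }Q\text{ connected and f.d. in }{_{H}^{H}\mathcal{YD}},\qquad \mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q,\Bbbk \right) =0\ \Longrightarrow \ Q\text{ gauge equivalent to a connected bialgebra in }{_{H}^{H}\mathcal{YD}}.
\end{equation*}
The remainder of the section turns the vanishing hypothesis into one expressed directly in terms of $A$, namely through its diagram $\mathcal{D}\left( A\right) $ (see Theorem \ref{teo:main}).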
First let us check which condition on $A$ guarantees that $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q,\Bbbk \right) =0$. Note that by construction $Q=R^{v}$ (see \cite[Proposition 5.7]{ABM}), where $v:=\left( \lambda \xi \right) ^{-1}$ is the convolution inverse of $\lambda \xi $ and $\lambda :H\rightarrow \Bbbk $ denotes the total integral on $H$. Thus we can rewrite $\mathrm{gr}\left( Q\right) $ as $\mathrm{gr}\left( R^{v}\right) $. Moreover $v_{B}$ is given by $v_{B}\left( \left( r\#h\right) \otimes \left( r^{\prime }\#h^{\prime }\right) \right) =v\left( r\otimes hr^{\prime }\right) \varepsilon _{H}\left( h^{\prime }\right) $ for every $r,r^{\prime }\in R$, $h,h^{\prime }\in H$. By \cite[Proposition 2.5]{AMStu-Small}, $\mathrm{gr}\left( R\right) $ inherits the pre-bialgebra structure in ${_{H}^{H}\mathcal{YD}}$ of $R$. This is proved by checking that $R_{i}\cdot R_{j}\subseteq R_{i+j}$ for every $i,j\in \N_0$, where $R_{i}$ denotes the $i$-th term of the coradical filtration of $R$. Moreover $R_{i}$ is a subcoalgebra of $R$ in ${_{H}^{H}\mathcal{YD}}$.

\begin{lemma}
\label{lem:ciccio}Keep the above hypotheses and notations. Then $\mathrm{gr}\left( R^{v}\right) $ and $\mathrm{gr}\left( R\right) $ coincide as bialgebras in ${_{H}^{H}\mathcal{YD}}$, where the structures of $\mathrm{gr}\left( R\right) $ are induced by the ones of $\left( R,\xi \right) $.
\end{lemma}

\begin{proof}
By Theorem \ref{teo:grHopf}, $\mathrm{gr}\left( R^{v}\right) =\mathrm{gr}\left( Q\right) $ is a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$. Note that $R^{v}$ and $R$ coincide as coalgebras in ${_{H}^{H}\mathcal{YD}}$, so that $\mathrm{gr}\left( R^{v}\right) $ and $\mathrm{gr}\left( R\right) $ coincide as coalgebras in ${_{H}^{H}\mathcal{YD}}$. They also have the same unit. It remains to check that their multiplications coincide too. Since $\xi $ is unital, by \cite[Proposition 4.8]{AMS}, we have that $v$ is unital and this is equivalent to $v^{-1}$ being unital (see the proof therein). Let $C:=R\otimes R$. Let $n>0$ and let $w\in C_{\left( n\right) }=\sum_{i+j\leq n}R_{i}\otimes R_{j}$. By \cite[Lemma 3.69]{AMS}, we have that
\begin{equation*}
\Delta _{C}\left( w\right) -w\otimes \left( 1_{R}\right) ^{\otimes 2}-\left( 1_{R}\right) ^{\otimes 2}\otimes w\in C_{\left( n-1\right) }\otimes C_{\left( n-1\right) }.
\end{equation*}
Thus we get
\begin{equation*}
w_{1}\otimes w_{2}\otimes w_{3}-\Delta _{C}\left( w\right) \otimes \left( 1_{R}\right) ^{\otimes 2}-\Delta _{C}\left( \left( 1_{R}\right) ^{\otimes 2}\right) \otimes w\in \Delta _{C}\left( C_{\left( n-1\right) }\right) \otimes C_{\left( n-1\right) }
\end{equation*}
and hence
\begin{equation*}
w_{1}\otimes w_{2}\otimes w_{3}-w\otimes \left( 1_{R}\right) ^{\otimes 2}\otimes \left( 1_{R}\right) ^{\otimes 2}-\left( 1_{R}\right) ^{\otimes 2}\otimes w\otimes \left( 1_{R}\right) ^{\otimes 2}-\left( 1_{R}\right) ^{\otimes 4}\otimes w\in C_{\left( n-1\right) }\otimes C_{\left( n-1\right) }\otimes C_{\left( n-1\right) }.
\end{equation*}
Since $m\left( C_{\left( n-1\right) }\right) \subseteq \sum_{i+j\leq n-1}m\left( R_{i}\otimes R_{j}\right) \subseteq R_{n-1}$, we get
\begin{equation*}
w_{1}\otimes m\left( w_{2}\right) \otimes w_{3}-w\otimes 1_{R}\otimes \left( 1_{R}\right) ^{\otimes 2}-\left( 1_{R}\right) ^{\otimes 2}\otimes m\left( w\right) \otimes \left( 1_{R}\right) ^{\otimes 2}-\left( 1_{R}\right) ^{\otimes 3}\otimes w\in C_{\left( n-1\right) }\otimes R_{n-1}\otimes C_{\left( n-1\right) }
\end{equation*}
and hence
\begin{equation}
w_{1}\otimes \left( m\left( w_{2}\right) +R_{n-1}\right) \otimes w_{3}=\left( 1_{R}\right) ^{\otimes 2}\otimes \left( m\left( w\right) +R_{n-1}\right) \otimes \left( 1_{R}\right) ^{\otimes 2}.  \label{form:gr2}
\end{equation}
Let $x,y\in R$. We compute
\begin{eqnarray*}
\overline{x}\cdot _{v}\overline{y} &=&\left( x+R_{\left\vert x\right\vert -1}\right) \cdot _{v}\left( y+R_{\left\vert y\right\vert -1}\right) \\
&=&\left( x\cdot _{v}y\right) +R_{\left\vert x\right\vert +\left\vert y\right\vert -1}=m^{v}\left( x\otimes y\right) +R_{\left\vert x\right\vert +\left\vert y\right\vert -1} \\
&=&v\left( \left( x\otimes y\right) _{1}\right) m\left( \left( x\otimes y\right) _{2}\right) v^{-1}\left( \left( x\otimes y\right) _{3}\right) +R_{\left\vert x\right\vert +\left\vert y\right\vert -1} \\
&=&v\left( \left( x\otimes y\right) _{1}\right) \left( m\left( \left( x\otimes y\right) _{2}\right) +R_{\left\vert x\right\vert +\left\vert y\right\vert -1}\right) v^{-1}\left( \left( x\otimes y\right) _{3}\right) \\
&\overset{(\ref{form:gr2})}{=}&v\left( \left( 1_{R}\right) ^{\otimes 2}\right) \left( m\left( x\otimes y\right) +R_{\left\vert x\right\vert +\left\vert y\right\vert -1}\right) v^{-1}\left( \left( 1_{R}\right) ^{\otimes 2}\right) \\
&=&m\left( x\otimes y\right) +R_{\left\vert x\right\vert +\left\vert y\right\vert -1}=\left( x\cdot y\right) +R_{\left\vert x\right\vert +\left\vert y\right\vert -1}=\overline{x}\cdot \overline{y}.
\end{eqnarray*}
\end{proof}

The following result is inspired by \cite[Theorem 3.71]{AMS}.

\begin{lemma}
\label{lem:CoradSmash}Let $H$ be a cosemisimple Hopf algebra. Let $C$ be a left $H$-comodule coalgebra such that $C_{0}$ is a one-dimensional left $H$-comodule subcoalgebra of $C$. Let $B=C\#H$ be the smash coproduct of $C$ by $H$, i.e. the coalgebra defined by
\begin{eqnarray}
\Delta _{B}\left( c\#h\right) &=&\sum \left( c_{1}\#\left( c_{2}\right) _{\left\langle -1\right\rangle }h_{1}\right) \otimes \left( \left( c_{2}\right) _{\left\langle 0\right\rangle }\#h_{2}\right) ,  \label{eq:DeltaCosmash} \\
\varepsilon _{B}\left( c\#h\right) &=&\varepsilon _{C}\left( c\right) \varepsilon _{H}\left( h\right) .  \notag
\end{eqnarray}
Then, for every $n\in \N_0$, we have $B_{n}=C_{n}\#H$.
\end{lemma}

\begin{proof}
Since $C_{0}$ is a subcoalgebra of $C$ in ${^{H}}\mathfrak{M}$ and, for $n\geq 1$, one has $C_{n}=C_{n-1}\wedge _{C}C_{0}$, one proves inductively that $C_{n}$ is a subcoalgebra of $C$ in ${^{H}}\mathfrak{M}$. Set $B_{\left( n\right) }:=C_{n}\#H$ for every $n\in \N_0$.
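Recall (see e.g. \cite{Sw}) the standard description of the coradical filtration of any coalgebra through wedge products: for $n>0$,
\begin{equation*}
B_{n}=B_{n-1}\wedge _{B}B_{0}=\Delta _{B}^{-1}\left( B_{n-1}\otimes B+B\otimes B_{0}\right) ,
\end{equation*}
and similarly for $C$; we will use this description freely in the induction below.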
Let us check that $B_{\left( n\right) }=B_{n}$ by induction on $n\in \N_0$. Let $n=0$. First note that $B=\cup _{m\in \N_0}B_{\left( m\right) }$ and, since $\Delta _{C}\left( C_{m}\right) \subseteq \sum_{0\leq i\leq m}C_{i}\otimes C_{m-i}$, we also have
\begin{eqnarray*}
\Delta _{B}\left( B_{\left( m\right) }\right) &=&\Delta _{B}\left( C_{m}\#H\right) \subseteq \sum_{0\leq i\leq m}\sum \left( C_{i}\#\left( C_{m-i}\right) _{\left\langle -1\right\rangle }\left( H\right) _{1}\right) \otimes \left( \left( C_{m-i}\right) _{\left\langle 0\right\rangle }\#\left( H\right) _{2}\right) \\
&\subseteq &\sum_{0\leq i\leq m}\left( C_{i}\#H\right) \otimes \left( C_{m-i}\#H\right) =\sum_{0\leq i\leq m}B_{\left( i\right) }\otimes B_{\left( m-i\right) }.
\end{eqnarray*}
Therefore $\left( B_{\left( m\right) }\right) _{m\in \N_0}$ is a coalgebra filtration for $B$ and hence, by \cite[Proposition 11.1.1]{Sw}, we get that $B_{\left( 0\right) }\supseteq B_{0}$. Since $C_{0}$ is one-dimensional, there is a grouplike element $1_{C}\in C_{0}$ such that $C_{0}=\Bbbk 1_{C}$. Moreover, the fact that $C_{0}$ is a subcoalgebra of $C$ in ${^{H}}\mathfrak{M}$ implies $\sum \left( 1_{C}\right) _{\left\langle -1\right\rangle }\otimes \left( 1_{C}\right) _{\left\langle 0\right\rangle }=1_{H}\otimes 1_{C}$.
\begin{invisible}
Since $\rho \left( C_{0}\right) \subseteq H\otimes C_{0}$, we get that $\rho \left( 1_{C}\right) =x\otimes 1_{C}$ for some $x\in H$. Since $C_{0}$ is a subcoalgebra of $C$ in ${^{H}}\mathfrak{M}$, the counit $\varepsilon :C_{0}\rightarrow \Bbbk $ is left $H$-colinear, i.e.
\begin{equation*}
1_{H}\otimes 1_{\Bbbk }=\rho _{\Bbbk }\left( 1_{\Bbbk }\right) =\rho _{\Bbbk }\left( \varepsilon \left( 1_{C}\right) \right) =\left( H\otimes \varepsilon \right) \rho \left( 1_{C}\right) =x\otimes \varepsilon \left( 1_{C}\right) =x\otimes 1_{\Bbbk },
\end{equation*}
and hence $x=1_{H}$.
\end{invisible}
Let $\sigma :H\rightarrow C\otimes H:h\mapsto 1_{C}\otimes h$ be the canonical injection. We have
\begin{eqnarray*}
\Delta _{B}\sigma \left( h\right) &=&\Delta _{B}\left( 1_{C}\otimes h\right) =\sum \left( 1_{C}\#\left( 1_{C}\right) _{\left\langle -1\right\rangle }h_{1}\right) \otimes \left( \left( 1_{C}\right) _{\left\langle 0\right\rangle }\#h_{2}\right) \\
&=&\sum \left( 1_{C}\#1_{H}h_{1}\right) \otimes \left( 1_{C}\#h_{2}\right) =\sum \sigma \left( h_{1}\right) \otimes \sigma \left( h_{2}\right) =\left( \sigma \otimes \sigma \right) \Delta _{H}\left( h\right) , \\
\varepsilon _{B}\sigma \left( h\right) &=&\varepsilon _{B}\left( 1_{C}\otimes h\right) =\varepsilon _{C}\left( 1_{C}\right) \varepsilon _{H}\left( h\right) =\varepsilon _{H}\left( h\right) ,
\end{eqnarray*}
so that $\sigma $ is a coalgebra map. Since $H$ is cosemisimple and $\sigma $ an injective coalgebra map, we deduce that also $\sigma \left( H\right) =C_{0}\otimes H=B_{\left( 0\right) }$ is a cosemisimple subcoalgebra of $B$, whence $B_{\left( 0\right) }\subseteq B_{0}$. Let $n>0$ and assume that $B_{i}=B_{\left( i\right) }$ for $0\leq i\leq n-1$. Let $\sum\limits_{i\in I}c_{i}\#h_{i}\in B_{n}$. Then
\begin{equation*}
\Delta _{B}\left( \sum\limits_{i\in I}c_{i}\#h_{i}\right) \in B_{n-1}\otimes B+B\otimes B_{0}=C_{n-1}\otimes H\otimes C\otimes H+C\otimes H\otimes C_{0}\otimes H.
\end{equation*}
Let $p_{n}:C\rightarrow \frac{C}{C_{n}}$ be the canonical projection.
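Note that, by the wedge description recalled above, we have
\begin{equation*}
C_{n}=C_{n-1}\wedge _{C}C_{0}=\mathrm{ker}\left( \left( p_{n-1}\otimes p_{0}\right) \Delta _{C}\right) ;
\end{equation*}
this is the identity behind the following computation.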
If we apply $p_{n-1}\otimes \varepsilon _{H}\otimes p_{0}\otimes H$, we get
\begin{eqnarray*}
0 &=&\left( p_{n-1}\otimes \varepsilon _{H}\otimes p_{0}\otimes H\right) \Delta _{B}\left( \sum\limits_{i\in I}c_{i}\#h_{i}\right) \\
&=&\left( p_{n-1}\otimes \varepsilon _{H}\otimes p_{0}\otimes H\right) \left( \sum\limits_{i\in I}\left( \left( c_{i}\right) _{1}\#\left( \left( c_{i}\right) _{2}\right) _{\left\langle -1\right\rangle }\left( h_{i}\right) _{1}\right) \otimes \left( \left( \left( c_{i}\right) _{2}\right) _{\left\langle 0\right\rangle }\#\left( h_{i}\right) _{2}\right) \right) \\
&=&\left( p_{n-1}\otimes p_{0}\otimes H\right) \left( \sum\limits_{i\in I}\left( c_{i}\right) _{1}\otimes \left( c_{i}\right) _{2}\otimes h_{i}\right) =\left( \left( p_{n-1}\otimes p_{0}\right) \Delta _{C}\otimes H\right) \left( \sum\limits_{i\in I}c_{i}\#h_{i}\right) .
\end{eqnarray*}
Thus $\sum\limits_{i\in I}c_{i}\#h_{i}\in \mathrm{ker}\left( \left( p_{n-1}\otimes p_{0}\right) \Delta _{C}\otimes H\right) =\left[ \mathrm{ker}\left( \left( p_{n-1}\otimes p_{0}\right) \Delta _{C}\right) \right] \otimes H=C_{n}\otimes H=B_{\left( n\right) }$. Thus $B_{n}\subseteq B_{\left( n\right) }$. On the other hand, from $\Delta _{C}\left( C_{n}\right) \subseteq C_{n-1}\otimes C+C\otimes C_{0}$ we deduce
\begin{eqnarray*}
\Delta _{B}\left( B_{\left( n\right) }\right) &=&\Delta _{B}\left( C_{n}\otimes H\right) \\
&\subseteq &\sum \left( \left( C_{n}\right) _{1}\#\left( \left( C_{n}\right) _{2}\right) _{\left\langle -1\right\rangle }\left( H\right) _{1}\right) \otimes \left( \left( \left( C_{n}\right) _{2}\right) _{\left\langle 0\right\rangle }\#\left( H\right) _{2}\right) \\
&\subseteq &\sum \left( C_{n-1}\#\left( C\right) _{\left\langle -1\right\rangle }H\right) \otimes \left( \left( C\right) _{\left\langle 0\right\rangle }\#H\right) +\sum \left( C\#\left( C_{0}\right) _{\left\langle -1\right\rangle }H\right) \otimes \left( \left( C_{0}\right) _{\left\langle 0\right\rangle }\#H\right) \\
&\subseteq &\left( C_{n-1}\#H\right) \otimes \left( C\#H\right) +\left( C\#H\right) \otimes \left( C_{0}\#H\right) \\
&=&B_{\left( n-1\right) }\otimes B+B\otimes B_{\left( 0\right) }=B_{n-1}\otimes B+B\otimes B_{0}
\end{eqnarray*}
and hence $B_{\left( n\right) }\subseteq B_{n}$.
\end{proof}

\begin{definition}
Let $A$ be a Hopf algebra over a field $\Bbbk $ such that the coradical $H$ of $A$ is a sub-Hopf algebra (i.e. $A$ has the dual Chevalley property). Set $G:=\mathrm{gr}\left( A\right) $. There are two canonical Hopf algebra maps
\begin{eqnarray*}
\sigma _{G} &:&H\rightarrow \mathrm{gr}\left( A\right) :h\mapsto h+A_{-1}, \\
\pi _{G} &:&\mathrm{gr}\left( A\right) \rightarrow H:a+A_{n-1}\mapsto a\delta _{n,0},\qquad n\in \N_0\text{.}
\end{eqnarray*}
The diagram of $A$ (see \cite[page 659]{AS-Lifting}) is the vector space
\begin{equation*}
\mathcal{D}\left( A\right) :=\left\{ d\in \mathrm{gr}\left( A\right) \mid \sum d_{1}\otimes \pi _{G}\left( d_{2}\right) =d\otimes 1_{H}\right\} .
\end{equation*}
It is a bialgebra in ${_{H}^{H}\mathcal{YD}}$ as follows.
$\mathcal{D}\left( A\right) $ is a subalgebra of $G$. The left $H$-action, the left $H$-coaction of $\mathcal{D}\left( A\right) $, the comultiplication and the counit are given respectively by
\begin{gather*}
h\vartriangleright d:=\sum \sigma _{G}\left( h_{1}\right) d\sigma _{G}S\left( h_{2}\right) ,\qquad \rho \left( d\right) =\sum \pi _{G}\left( d_{1}\right) \otimes d_{2}, \\
\Delta _{\mathcal{D}\left( A\right) }\left( d\right) :=\sum d_{1}\sigma _{G}S_{H}\pi _{G}\left( d_{2}\right) \otimes d_{3},\qquad \varepsilon _{\mathcal{D}\left( A\right) }\left( d\right) =\varepsilon _{G}\left( d\right) .
\end{gather*}
\end{definition}

Although the following result seems to be folklore, we include its statement here for future reference.

\begin{proposition}
\label{pro:D(f)}Let $A$ be a Hopf algebra over a field $\Bbbk $ such that the coradical $H$ of $A$ is a sub-Hopf algebra. Let $A^{\prime }$ be a Hopf algebra over the same field $\Bbbk $ and let $f:A^{\prime }\rightarrow A$ be an isomorphism of Hopf algebras. Then $H^{\prime }:=f^{-1}\left( H\right) \cong H$ is the coradical of $A^{\prime }$ and it is a sub-Hopf algebra of $A^{\prime }$. Thus we can identify $H^{\prime }$ with $H$. Moreover $f$ induces an isomorphism $\mathcal{D}\left( f\right) :\mathcal{D}\left( A^{\prime }\right) \rightarrow \mathcal{D}\left( A\right) $ of bialgebras in ${_{H}^{H}\mathcal{YD}}$.
\end{proposition}

\begin{invisible}
\begin{proof}
Set $G:=\mathrm{gr}\left( A\right) $ and set $G^{\prime }:=\mathrm{gr}\left( A^{\prime }\right) $. Using the identification $H^{\prime }\rightarrow H:h^{\prime }\mapsto f\left( h^{\prime }\right) $ we can rewrite the canonical bialgebra maps
\begin{eqnarray*}
\sigma _{G^{\prime }} &:&H^{\prime }\rightarrow \mathrm{gr}\left( A^{\prime }\right) :h^{\prime }\mapsto h^{\prime }+A_{-1}^{\prime }, \\
\pi _{G^{\prime }} &:&\mathrm{gr}\left( A^{\prime }\right) \rightarrow H^{\prime }:a^{\prime }+A_{n-1}^{\prime }\mapsto a^{\prime }\delta _{n,0},\qquad n\in \N_0,
\end{eqnarray*}
as follows:
\begin{eqnarray*}
\sigma _{G^{\prime }} &:&H\rightarrow \mathrm{gr}\left( A^{\prime }\right) :h\mapsto f^{-1}\left( h\right) +A_{-1}^{\prime }, \\
\pi _{G^{\prime }} &:&\mathrm{gr}\left( A^{\prime }\right) \rightarrow H:a^{\prime }+A_{n-1}^{\prime }\mapsto f\left( a^{\prime }\delta _{n,0}\right) ,\qquad n\in \N_0\text{.}
\end{eqnarray*}
Since $f$ is an isomorphism it induces an isomorphism $A_{n}^{\prime }\cong A_{n}$ and hence an isomorphism of graded Hopf algebras $\mathrm{gr}\left( f\right) :G^{\prime }=\mathrm{gr}\left( A^{\prime }\right) \rightarrow \mathrm{gr}\left( A\right) =G:a^{\prime }+A_{n-1}^{\prime }\mapsto f\left( a^{\prime }\right) +A_{n-1}$ for every $n\in \N_0$, $a^{\prime }\in A_{n}^{\prime }\backslash A_{n-1}^{\prime }$. We have
\begin{eqnarray*}
\pi _{G}\mathrm{gr}\left( f\right) \left( a^{\prime }+A_{n-1}^{\prime }\right) &=&\pi _{G}\left( f\left( a^{\prime }\right) +A_{n-1}\right) =f\left( a^{\prime }\right) \delta _{n,0}=f\left( a^{\prime }\delta _{n,0}\right) =\pi _{G^{\prime }}\left( a^{\prime }+A_{n-1}^{\prime }\right) , \\
\mathrm{gr}\left( f\right) \sigma _{G^{\prime }}\left( h\right) &=&\mathrm{gr}\left( f\right) \left( f^{-1}\left( h\right) +A_{-1}^{\prime }\right) =h+A_{-1}=\sigma _{G}\left( h\right) ,
\end{eqnarray*}
so that
\begin{equation*}
\pi _{G}\circ \mathrm{gr}\left( f\right) =\pi _{G^{\prime }}\qquad \text{and}\qquad \mathrm{gr}\left( f\right) \circ \sigma _{G^{\prime }}=\sigma _{G}.
\end{equation*}
Let $a^{\prime }+A_{n-1}^{\prime }\in \mathcal{D}\left( A^{\prime }\right) $ and consider $d:=\mathrm{gr}\left( f\right) \left( a^{\prime }+A_{n-1}^{\prime }\right) =f\left( a^{\prime }\right) +A_{n-1}$. Then
\begin{eqnarray*}
\sum d_{1}\otimes \pi _{G}\left( d_{2}\right) &=&\sum \left( \mathrm{gr}\left( f\right) \left( a^{\prime }+A_{n-1}^{\prime }\right) \right) _{1}\otimes \pi _{G}\left( \left( \mathrm{gr}\left( f\right) \left( a^{\prime }+A_{n-1}^{\prime }\right) \right) _{2}\right) \\
&=&\sum \mathrm{gr}\left( f\right) \left( \left( a^{\prime }+A_{n-1}^{\prime }\right) _{1}\right) \otimes \pi _{G}\mathrm{gr}\left( f\right) \left( \left( a^{\prime }+A_{n-1}^{\prime }\right) _{2}\right) \\
&=&\sum \mathrm{gr}\left( f\right) \left( \left( a^{\prime }+A_{n-1}^{\prime }\right) _{1}\right) \otimes \pi _{G^{\prime }}\left( \left( a^{\prime }+A_{n-1}^{\prime }\right) _{2}\right) \\
&=&\mathrm{gr}\left( f\right) \left( a^{\prime }+A_{n-1}^{\prime }\right) \otimes 1_{H^{\prime }}=d\otimes 1_{H^{\prime }}.
\end{eqnarray*}
Therefore $\mathrm{gr}\left( f\right) $ induces an isomorphism of vector spaces $\mathcal{D}\left( f\right) :\mathcal{D}\left( A^{\prime }\right) \rightarrow \mathcal{D}\left( A\right) :d^{\prime }\mapsto \mathrm{gr}\left( f\right) \left( d^{\prime }\right) $. Let us check that $\mathcal{D}\left( f\right) $ is a morphism in ${_{H}^{H}\mathcal{YD}}$. For every $d^{\prime }\in \mathcal{D}\left( A^{\prime }\right) $, we have
\begin{eqnarray*}
\mathcal{D}\left( f\right) \left( h\vartriangleright d^{\prime }\right) &=&\mathcal{D}\left( f\right) \left( \sum \sigma _{G^{\prime }}\left( h_{1}\right) d^{\prime }\sigma _{G^{\prime }}S\left( h_{2}\right) \right) \\
&=&\mathrm{gr}\left( f\right) \left( \sum \sigma _{G^{\prime }}\left( h_{1}\right) d^{\prime }\sigma _{G^{\prime }}S\left( h_{2}\right) \right) \\
&=&\sum \mathrm{gr}\left( f\right) \sigma _{G^{\prime }}\left( h_{1}\right) \cdot \mathrm{gr}\left( f\right) \left( d^{\prime }\right) \cdot \mathrm{gr}\left( f\right) \sigma _{G^{\prime }}S\left( h_{2}\right) \\
&=&\sum \sigma _{G}\left( h_{1}\right) \cdot \mathcal{D}\left( f\right) \left( d^{\prime }\right) \cdot \sigma _{G}S\left( h_{2}\right) =h\vartriangleright \mathcal{D}\left( f\right) \left( d^{\prime }\right)
\end{eqnarray*}
and also
\begin{eqnarray*}
\rho \mathcal{D}\left( f\right) \left( d^{\prime }\right) &=&\rho \mathrm{gr}\left( f\right) \left( d^{\prime }\right) =\sum \pi _{G}\left( \left( \mathrm{gr}\left( f\right) \left( d^{\prime }\right) \right) _{1}\right) \otimes \left( \mathrm{gr}\left( f\right) \left( d^{\prime }\right) \right) _{2} \\
&=&\sum \pi _{G}\left( \mathrm{gr}\left( f\right) \left( d_{1}^{\prime }\right) \right) \otimes \mathrm{gr}\left( f\right) \left( d_{2}^{\prime }\right) \\
&=&\sum \pi _{G^{\prime }}\left( d_{1}^{\prime }\right) \otimes \mathrm{gr}\left( f\right) \left( d_{2}^{\prime }\right) =\left( H\otimes \mathrm{gr}\left( f\right) \right) \rho \left( d^{\prime }\right) =\left( H\otimes \mathcal{D}\left( f\right) \right) \rho \left( d^{\prime }\right) .
\end{eqnarray*}
Now $\mathcal{D}\left( A^{\prime }\right) $ is a subalgebra of $G^{\prime }$ and $\mathcal{D}\left( A\right) $ is a subalgebra of $G$. Since $\mathrm{gr}\left( f\right) :G^{\prime }\rightarrow G$ is multiplicative and unitary, so is $\mathcal{D}\left( f\right) :\mathcal{D}\left( A^{\prime }\right) \rightarrow \mathcal{D}\left( A\right) $.
Moreover we have
\begin{eqnarray*}
\Delta _{\mathcal{D}\left( A\right) }\mathcal{D}\left( f\right) \left( d^{\prime }\right) &=&\sum \left( \mathcal{D}\left( f\right) \left( d^{\prime }\right) \right) _{1}\sigma _{G}S_{H}\pi _{G}\left( \left( \mathcal{D}\left( f\right) \left( d^{\prime }\right) \right) _{2}\right) \otimes \left( \mathcal{D}\left( f\right) \left( d^{\prime }\right) \right) _{3} \\
&=&\sum \left( \mathrm{gr}\left( f\right) \left( d^{\prime }\right) \right) _{1}\sigma _{G}S_{H}\pi _{G}\left( \left( \mathrm{gr}\left( f\right) \left( d^{\prime }\right) \right) _{2}\right) \otimes \left( \mathrm{gr}\left( f\right) \left( d^{\prime }\right) \right) _{3} \\
&=&\sum \mathrm{gr}\left( f\right) \left( d_{1}^{\prime }\right) \cdot \sigma _{G}S_{H}\pi _{G}\mathrm{gr}\left( f\right) \left( d_{2}^{\prime }\right) \otimes \mathrm{gr}\left( f\right) \left( d_{3}^{\prime }\right) \\
&=&\sum \mathrm{gr}\left( f\right) \left( d_{1}^{\prime }\right) \cdot \mathrm{gr}\left( f\right) \sigma _{G^{\prime }}S_{H}\pi _{G^{\prime }}\left( d_{2}^{\prime }\right) \otimes \mathrm{gr}\left( f\right) \left( d_{3}^{\prime }\right) \\
&=&\left( \mathrm{gr}\left( f\right) \otimes \mathrm{gr}\left( f\right) \right) \left( \sum d_{1}^{\prime }\sigma _{G^{\prime }}S_{H}\pi _{G^{\prime }}\left( d_{2}^{\prime }\right) \otimes d_{3}^{\prime }\right) \\
&=&\left( \mathrm{gr}\left( f\right) \otimes \mathrm{gr}\left( f\right) \right) \Delta _{\mathcal{D}\left( A^{\prime }\right) }\left( d^{\prime }\right) =\left( \mathcal{D}\left( f\right) \otimes \mathcal{D}\left( f\right) \right) \Delta _{\mathcal{D}\left( A^{\prime }\right) }\left( d^{\prime }\right)
\end{eqnarray*}
and
\begin{equation*}
\varepsilon _{\mathcal{D}\left( A\right) }\mathcal{D}\left( f\right) \left( d^{\prime }\right) =\varepsilon _{G}\mathrm{gr}\left( f\right) \left( d^{\prime }\right) =\varepsilon _{G^{\prime }}\left( d^{\prime }\right) =\varepsilon _{\mathcal{D}\left( A^{\prime }\right) }\left( d^{\prime }\right) .
\end{equation*}
\end{proof}
\end{invisible}

\begin{proposition}
\label{pro:ciccio}Keep the hypotheses and notations of the beginning of the section. Then $\mathcal{D}\left( A\right) \cong \mathcal{D}\left( R\#_{\xi }H\right) \cong \mathrm{gr}\left( R\right) $ as bialgebras in ${_{H}^{H}\mathcal{YD}}$.
\end{proposition}

\begin{proof}
Apply Proposition \ref{pro:D(f)} to the canonical isomorphism $\psi :B:=R\#_{\xi }H\rightarrow A$ that we recalled at the beginning of the section to get that $\mathcal{D}\left( R\#_{\xi }H\right) \cong \mathcal{D}\left( A\right) $. Note that, by $H$-linearity, we have
\begin{equation*}
\psi \left( 1_{R}\#h\right) =\psi \left( \left( 1_{R}\#1_{H}\right) \left( 1_{R}\#h\right) \right) =\psi \left( \left( 1_{R}\#1_{H}\right) h\right) =\psi \left( 1_{R}\#1_{H}\right) h=h,
\end{equation*}
so that $\psi \left( \Bbbk 1_{R}\otimes H\right) =H$ and hence $H^{\prime }=\psi ^{-1}\left( H\right) =\Bbbk 1_{R}\otimes H$ with the notation of Proposition \ref{pro:D(f)}. Thus $B_{0}=\Bbbk 1_{R}\otimes H=R_{0}\otimes H$, so that we can identify $B_{0}$ with $H$ via the canonical isomorphism $H\rightarrow R_{0}\otimes H:h\mapsto 1_{R}\otimes h$.
Its inverse is $R_{0}\otimes H\rightarrow H:r\otimes h\mapsto \varepsilon _{R}\left( r\right) h$. With this identification, and setting $G:=\mathrm{gr}\left( B\right) $, we can consider the canonical bialgebra maps
\begin{eqnarray*}
\sigma _{G} &:&H\rightarrow \mathrm{gr}\left( B\right) :h\mapsto 1_{R}\#h+\left( R\#_{\xi }H\right) _{-1}, \\
\pi _{G} &:&\mathrm{gr}\left( B\right) \rightarrow H:r\#h+\left( R\#_{\xi }H\right) _{n-1}\mapsto \varepsilon _{R}\left( r\right) h\delta _{n,0},\text{ where }r\#h\in \left( R\#_{\xi }H\right) _{n},n\in \N_0\text{.}
\end{eqnarray*}
Since the underlying coalgebra of $B$ is exactly the smash coproduct of $R$ by $H$ and $\left( R,\xi \right) $ is a connected pre-bialgebra with cocycle in ${_{H}^{H}\mathcal{YD}}$, by Lemma \ref{lem:CoradSmash} we have that $B_{n}=R_{n}\otimes H$. Let us compute $\mathcal{D}:=\mathcal{D}\left( B\right) $. As a vector space it is
\begin{equation*}
\mathcal{D}:=\left\{ d\in G\mid \sum d_{1}\otimes \pi _{G}\left( d_{2}\right) =d\otimes 1_{H}\right\} .
\end{equation*}
By \cite[Lemma 2.1]{AS-Lifting}, we have that $\mathcal{D}=\oplus _{n\in \N_0}\mathcal{D}^{n}$ where $\mathcal{D}^{n}=\mathcal{D}\cap G^{n}=\mathcal{D}\cap \frac{B_{n}}{B_{n-1}}$. Let $d:=\overline{\sum\limits_{i\in I}r_{i}\#h_{i}}\in \mathcal{D}^{n}$, where we can assume $\sum\limits_{i\in I}r_{i}\#h_{i}\in B_{n}\backslash B_{n-1}$ and, for every $i\in I$, $r_{i}\#h_{i}\in B_{n}\backslash B_{n-1}$. Then $\overline{\sum\limits_{i\in I}r_{i}\#h_{i}}=\sum\limits_{i\in I}\overline{r_{i}\#h_{i}}$ and hence the fact that $d$ is coinvariant rewrites as
\begin{equation}
\sum\limits_{i\in I}\left( \overline{r_{i}\#h_{i}}\right) _{1}\otimes \pi _{G}\left( \left( \overline{r_{i}\#h_{i}}\right) _{2}\right) =\sum\limits_{i\in I}\overline{r_{i}\#h_{i}}\otimes 1_{H}.  \label{eq: pig1}
\end{equation}
By definition of $\pi _{G}$ and (\ref{form:DeltaGr}), the left-hand side becomes
\begin{equation*}
\sum\limits_{i\in I}\left( \overline{r_{i}\#h_{i}}\right) _{1}\otimes \pi _{G}\left( \left( \overline{r_{i}\#h_{i}}\right) _{2}\right) =\sum\limits_{i\in I}\left( \left( r_{i}\#\left( h_{i}\right) _{1}\right) +B_{n-1}\right) \otimes \left( h_{i}\right) _{2}.
\end{equation*}
\begin{invisible}
Here is the computation:
\begin{eqnarray*}
&&\sum\limits_{i\in I}\left( \overline{r_{i}\#h_{i}}\right) _{1}\otimes \pi _{G}\left( \left( \overline{r_{i}\#h_{i}}\right) _{2}\right) \overset{(\ref{form:DeltaGr})}{=}\sum\limits_{i\in I}\sum_{0\leq t\leq n}\left( \left( r_{i}\#h_{i}\right) _{1}+B_{t-1}\right) \otimes \pi _{G}\left( \left( r_{i}\#h_{i}\right) _{2}+B_{n-t-1}\right) \\
&=&\sum\limits_{i\in I}\sum_{0\leq t\leq n}\left( \left( r_{i}^{\left( 1\right) }\#\left( r_{i}^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( h_{i}\right) _{1}\right) +B_{t-1}\right) \otimes \pi _{G}\left( \left( \left( r_{i}^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\#\left( h_{i}\right) _{2}\right) +B_{n-t-1}\right) \\
&=&\sum\limits_{i\in I}\sum_{0\leq t\leq n}\left( \left( r_{i}^{\left( 1\right) }\#\left( r_{i}^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( h_{i}\right) _{1}\right) +B_{t-1}\right) \otimes \varepsilon _{R}\left( \left( r_{i}^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\right) \left( h_{i}\right) _{2}\delta _{n-t,0} \\
&=&\sum\limits_{i\in I}\left( \left( r_{i}^{\left( 1\right) }\#\left( h_{i}\right) _{1}\right) +B_{n-1}\right) \otimes \varepsilon _{R}\left( r_{i}^{\left( 2\right) }\right) \left( h_{i}\right) _{2} \\
&=&\sum\limits_{i\in I}\left( \left( r_{i}\#\left( h_{i}\right) _{1}\right) +B_{n-1}\right) \otimes \left( h_{i}\right) _{2}
\end{eqnarray*}
\end{invisible}
so that (\ref{eq: pig1}) becomes
\begin{equation*}
\sum\limits_{i\in I}\left( \left( r_{i}\#\left( h_{i}\right) _{1}\right) +B_{n-1}\right) \otimes \left( h_{i}\right) _{2}=\sum\limits_{i\in I}\overline{r_{i}\#h_{i}}\otimes 1_{H}=\sum\limits_{i\in I}\left( r_{i}\#h_{i}+B_{n-1}\right) \otimes 1_{H},
\end{equation*}
i.e.
\begin{equation*}
\sum\limits_{i\in I}\left( r_{i}\#\left( h_{i}\right) _{1}\right) \otimes \left( h_{i}\right) _{2}-\sum\limits_{i\in I}r_{i}\#h_{i}\otimes 1_{H}\in B_{n-1}\otimes H=R_{n-1}\otimes H\otimes H.
\end{equation*}
If we apply $R\otimes \varepsilon _{H}\otimes H$, we get
\begin{equation*}
\sum\limits_{i\in I}r_{i}\otimes h_{i}-\sum\limits_{i\in I}r_{i}\varepsilon _{H}\left( h_{i}\right) \otimes 1_{H}\in R_{n-1}\otimes H=B_{n-1}.
\end{equation*}
Thus $\overline{\sum\limits_{i\in I}r_{i}\#h_{i}}=\sum\limits_{i\in I}\overline{r_{i}\#h_{i}}=\sum\limits_{i\in I}\left( r_{i}\#h_{i}+B_{n-1}\right) =\left( \sum\limits_{i\in I}r_{i}\varepsilon _{H}\left( h_{i}\right) \otimes 1_{H}\right) +B_{n-1}$. Since $\sum\limits_{i\in I}r_{i}\#h_{i}\in B_{n}\backslash B_{n-1}$, we get that $\left( \sum\limits_{i\in I}r_{i}\varepsilon _{H}\left( h_{i}\right) \right) \otimes 1_{H}\notin B_{n-1}$, hence $\sum\limits_{i\in I}r_{i}\varepsilon _{H}\left( h_{i}\right) \notin R_{n-1}$, and we can write
\begin{equation*}
\overline{\sum\limits_{i\in I}r_{i}\#h_{i}}=\overline{\left( \sum\limits_{i\in I}r_{i}\varepsilon _{H}\left( h_{i}\right) \right) \otimes 1_{H}}.
\end{equation*}
Therefore we have proved that the map
\begin{equation*}
\varphi _{n}:\frac{R_{n}}{R_{n-1}}\rightarrow \mathcal{D}^{n}:\overline{r}\mapsto \overline{r\otimes 1_{H}},
\end{equation*}
which is well-defined as $\mathcal{D}^{n}=\mathcal{D}\cap G^{n}=\mathcal{D}\cap \frac{B_{n}}{B_{n-1}}=\mathcal{D}\cap \frac{R_{n}\otimes H}{R_{n-1}\otimes H}$, is also surjective. It is also injective, as $\varphi _{n}\left( \overline{r}\right) =\varphi _{n}\left( \overline{s}\right) $ implies $r\otimes 1_{H}-s\otimes 1_{H}\in B_{n-1}=R_{n-1}\otimes H$ and hence, by applying $R\otimes \varepsilon _{H}$, we get $r-s\in R_{n-1}$, i.e. $\overline{r}=\overline{s}$. Therefore $\varphi _{n}$ is an isomorphism such that $\overline{\sum\limits_{i\in I}r_{i}\#h_{i}}=\varphi _{n}\left( \overline{\sum\limits_{i\in I}r_{i}\varepsilon _{H}\left( h_{i}\right) }\right) $ and hence
\begin{equation*}
\varphi _{n}^{-1}\left( \overline{\sum\limits_{i\in I}r_{i}\#h_{i}}\right) =\overline{\sum\limits_{i\in I}r_{i}\varepsilon _{H}\left( h_{i}\right) }.
\end{equation*}
Clearly this extends to a graded $\Bbbk $-linear isomorphism
\begin{equation*}
\varphi :\mathrm{gr}\left( R\right) \rightarrow \mathcal{D}.
\end{equation*}
Let us check that $\varphi $ is a morphism in ${_{H}^{H}\mathcal{YD}}$. First note that, for every $r\in R_{n}$, we have
\begin{eqnarray*}
\varphi \left( r+R_{n-1}\right) &=&\delta _{\left\vert r\right\vert ,n}\varphi \left( r+R_{n-1}\right) =\delta _{\left\vert r\right\vert ,n}\varphi _{n}\left( r+R_{n-1}\right) =\delta _{\left\vert r\right\vert ,n}\varphi _{n}\left( \overline{r}\right) \\
&=&\delta _{\left\vert r\right\vert ,n}\overline{r\otimes 1_{H}}=\delta _{\left\vert r\right\vert ,n}\left( r\otimes 1_{H}+\left( R\#_{\xi }H\right) _{n-1}\right) =r\otimes 1_{H}+\left( R\#_{\xi }H\right) _{n-1}.
\end{eqnarray*}
Thus
\begin{equation}
\varphi \left( r+R_{n-1}\right) =r\otimes 1_{H}+\left( R\#_{\xi }H\right) _{n-1},\text{ for every }r\in R_{n}\text{.}  \label{form:phiTrick}
\end{equation}
For every $r\in R_{n}\backslash R_{n-1}$, by using (\ref{form:phiTrick}), it is straightforward to prove that $h\vartriangleright \varphi \left( \overline{r}\right) =\varphi \left( h\overline{r}\right) $.
\begin{invisible}
In fact we have
\begin{eqnarray*}
h\vartriangleright \varphi \left( \overline{r}\right) &=&\sum \sigma _{G}\left( h_{1}\right) \varphi \left( \overline{r}\right) \sigma _{G}S\left( h_{2}\right) \\
&&\overset{(\ref{form:phiTrick})}{=}\sum \left( 1_{R}\#h_{1}+\left( R\#_{\xi }H\right) _{-1}\right) \left( r\#1_{H}+\left( R\#_{\xi }H\right) _{n-1}\right) \left( 1_{R}\#S\left( h_{2}\right) +\left( R\#_{\xi }H\right) _{-1}\right) \\
&=&\sum \left( 1_{R}\#h_{1}\right) \left( r\#1_{H}\right) \left( 1_{R}\#S\left( h_{2}\right) \right) +\left( R\#_{\xi }H\right) _{n-1} \\
&=&\sum \left( h_{1}r\#h_{2}S\left( h_{3}\right) \right) +\left( R\#_{\xi }H\right) _{n-1} \\
&=&\left( hr\#1_{H}\right) +\left( R\#_{\xi }H\right) _{n-1}\overset{(\ref{form:phiTrick})}{=}\varphi \left( hr+R_{n-1}\right) \\
&=&\varphi \left( h\left( r+R_{n-1}\right) \right) =\varphi \left( h\overline{r}\right) .
\end{eqnarray*}
\end{invisible}
Moreover, by applying (\ref{form:DeltaGr}), (\ref{eq:DeltaCosmash}), the definition of $\pi _{G}$ and (\ref{form:phiTrick}), we get that $\rho \varphi \left( \overline{r}\right) =\left( H\otimes \varphi \right) \rho \left( \overline{r}\right) $.
\begin{invisible}
In fact we have
\begin{eqnarray*}
\rho \varphi \left( \overline{r}\right) &=&\sum \pi _{G}\left( \varphi \left( \overline{r}\right) _{1}\right) \otimes \varphi \left( \overline{r}\right) _{2} \\
&=&\sum \pi _{G}\left( \left( \overline{r\otimes 1_{H}}\right) _{1}\right) \otimes \left( \overline{r\otimes 1_{H}}\right) _{2} \\
&&\overset{(\ref{form:DeltaGr})}{=}\sum_{0\leq i\leq n}\pi _{G}\left( \left( r\otimes 1_{H}\right) _{1}+\left( R\#_{\xi }H\right) _{i-1}\right) \otimes \left( \left( r\otimes 1_{H}\right) _{2}+\left( R\#_{\xi }H\right) _{n-i-1}\right) .
\end{eqnarray*}
We compute
\begin{equation}
\Delta _{B}\left( r\otimes 1_{H}\right) =\sum r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H},  \label{form:DeltaSmash1}
\end{equation}
so that
\begin{eqnarray*}
&&\rho \varphi \left( \overline{r}\right) \overset{(\ref{form:DeltaSmash1})}{=}\sum_{0\leq i\leq n}\sum \pi _{G}\left( r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }+\left( R\#_{\xi }H\right) _{i-1}\right) \otimes \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}+\left( R\#_{\xi }H\right) _{n-i-1}\right) \\
&=&\sum_{0\leq i\leq n}\sum \left( \varepsilon _{R}\left( r^{\left( 1\right) }\right) \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\delta _{i,0}\right) \otimes \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}+\left( R\#_{\xi }H\right) _{n-i-1}\right) \\
&=&\sum r_{\left\langle -1\right\rangle }\otimes \left( r_{\left\langle 0\right\rangle }\otimes 1_{H}+\left( R\#_{\xi }H\right) _{n-1}\right) \\
&&\overset{(\ref{form:phiTrick})}{=}r_{\left\langle -1\right\rangle }\otimes \varphi \left( r_{\left\langle 0\right\rangle }+R_{n-1}\right) \\
&=&\left( H\otimes \varphi \right) \left( r_{\left\langle -1\right\rangle }\otimes \left( r_{\left\langle 0\right\rangle }+R_{n-1}\right) \right) =\left( H\otimes \varphi \right) \rho \left( r+R_{n-1}\right) =\left( H\otimes \varphi \right) \rho \left( \overline{r}\right) .
\end{eqnarray*}
\end{invisible}
Let us check that $\varphi $ is a morphism of bialgebras in ${_{H}^{H}\mathcal{YD}}$. Fix $r\in R_{n}\backslash R_{n-1}$. Using the definition of $\Delta _{\mathcal{D}}$, (\ref{form:DeltaGr}), (\ref{eq:DeltaCosmash}), the definition of $\pi _{G}$, the definition of $\sigma _{G}$, (\ref{form:phiTrick}) and (\ref{form:DeltaGr}) again, we obtain $\Delta _{\mathcal{D}}\varphi \left( \overline{r}\right) =\left( \varphi \otimes \varphi \right) \Delta _{\mathrm{gr}\left( R\right) }\left( \overline{r}\right) $.
\begin{invisible}
Let us check $\varphi $ is comultiplicative.
\begin{eqnarray*}
\Delta _{\mathcal{D}}\varphi \left( \overline{r}\right) &=&\Delta _{\mathcal{D}}\left( \overline{r\otimes 1_{H}}\right) \\
&=&\sum \left( \overline{r\otimes 1_{H}}\right) _{1}\sigma _{G}S_{H}\pi _{G}\left( \left( \overline{r\otimes 1_{H}}\right) _{2}\right) \otimes \left( \overline{r\otimes 1_{H}}\right) _{3} \\
&&\overset{(\ref{form:DeltaGr})}{=}\sum \sum_{a+b+c=n}\left( \left( r\otimes 1_{H}\right) _{1}+B_{a-1}\right) \sigma _{G}S_{H}\pi _{G}\left( \left( r\otimes 1_{H}\right) _{2}+B_{b-1}\right) \otimes \left( \left( r\otimes 1_{H}\right) _{3}+B_{c-1}\right) .
\end{eqnarray*}
We compute
\begin{eqnarray*}
&&\left( B\otimes \Delta _{B}\right) \Delta _{B}\left( r\otimes 1_{H}\right) \\
&&\overset{(\ref{form:DeltaSmash1})}{=}\sum r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\otimes \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\right) ^{\left( 1\right) }\otimes \left( \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\right) ^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\otimes \left( \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\right) ^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H} \\
&=&\sum r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( r^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes \left( \left( r^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\right) _{\left\langle -1\right\rangle }\otimes \left( \left( r^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\right) _{\left\langle 0\right\rangle }\otimes 1_{H} \\
&=&\sum \left( r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( r^{\left( 3\right) }\right) _{\left\langle -2\right\rangle }\right) \otimes \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes \left( r^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }\right) \otimes \left( \left( r^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}\right)
\end{eqnarray*}
so that
\begin{eqnarray*}
\Delta _{\mathcal{D}}\varphi \left( \overline{r}\right) &=&\sum \sum_{a+b+c=n}\left( \left( r\otimes 1_{H}\right) _{1}+B_{a-1}\right) \sigma _{G}S_{H}\pi _{G}\left( \left( r\otimes 1_{H}\right) _{2}+B_{b-1}\right) \otimes \left( \left( r\otimes 1_{H}\right) _{3}+B_{c-1}\right) \\
&=&\left[
\begin{array}{c}
\sum \sum_{a+b+c=n}\left( \left( r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( r^{\left( 3\right) }\right) _{\left\langle -2\right\rangle }\right) +B_{a-1}\right) \sigma _{G}S_{H}\pi _{G}\left( \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes \left( r^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }\right) +B_{b-1}\right) \\
\otimes \left( \left( \left( r^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}\right) +B_{c-1}\right)
\end{array}
\right] \\
&=&\left[
\begin{array}{c}
\sum \sum_{a+b+c=n}\left( \left( r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\left( r^{\left( 3\right) }\right) _{\left\langle -2\right\rangle }\right) +B_{a-1}\right) \sigma _{G}S_{H}\left( \varepsilon _{R}\left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\right) \left( r^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }\right) \delta _{b,0} \\
\otimes \left( \left( \left( r^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}\right) +B_{c-1}\right)
\end{array}
\right] \\
&=&\left[
\begin{array}{c}
\sum \sum_{a+c=n}\left( \left( r^{\left( 1\right) }\otimes \left( r^{\left( 3\right) }\right) _{\left\langle -2\right\rangle }\right) +B_{a-1}\right) \sigma _{G}S_{H}\left( \varepsilon _{R}\left( r^{\left( 2\right) }\right) \left( r^{\left( 3\right) }\right) _{\left\langle -1\right\rangle }\right) \\
\otimes \left( \left( \left( r^{\left( 3\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}\right) +B_{c-1}\right)
\end{array}
\right] \\
&=&\sum \sum_{a+c=n}\left( \left( r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -2\right\rangle }\right) +B_{a-1}\right) \sigma _{G}S_{H}\left( \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\right) \otimes \left( \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}\right) +B_{c-1}\right) \\
&=&\sum \sum_{a+c=n}\left( \left( r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -2\right\rangle }\right) +B_{a-1}\right) \left( 1_{R}\otimes S_{H}\left( \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\right) +B_{-1}\right) \otimes \left( \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}\right) +B_{c-1}\right) \\
&=&\sum \sum_{a+c=n}\left( \left( r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -2\right\rangle }\right) \left( 1_{R}\otimes S_{H}\left( \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\right) \right) +B_{a-1}\right) \otimes \left( \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}\right) +B_{c-1}\right) \\
&=&\sum \sum_{a+c=n}\left( \left( r^{\left( 1\right) }\otimes \left( r^{\left( 2\right) }\right) _{\left\langle -2\right\rangle }S_{H}\left( \left( r^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }\right) \right) +B_{a-1}\right) \otimes \left( \left( \left( r^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes 1_{H}\right) +B_{c-1}\right) \\
&=&\sum \sum_{a+c=n}\left( \left( r^{\left( 1\right) }\otimes 1_{H}\right) +B_{a-1}\right) \otimes \left( \left( r^{\left( 2\right) }\otimes 1_{H}\right) +B_{c-1}\right) \\
&&\overset{(\ref{form:phiTrick})}{=}\left( \varphi \otimes \varphi \right) \left( \sum_{a+c=n}\left( r^{\left( 1\right) }+R_{a-1}\right) \otimes \left( r^{\left( 2\right) }+R_{c-1}\right) \right) \\
&&\overset{(\ref{form:DeltaGr})}{=}\left( \varphi \otimes \varphi \right) \Delta _{\mathrm{gr}\left( R\right) }\left( \overline{r}\right) .
\end{eqnarray*}
\end{invisible}
Let us check that $\varphi $ is counitary:
\begin{eqnarray*}
\varepsilon _{\mathcal{D}}\varphi \left( \overline{r}\right) &=&\varepsilon _{G}\varphi \left( \overline{r}\right) =\varepsilon _{G}\left( \overline{r\otimes 1_{H}}\right) \overset{(\ref{form:EpsGr})}{=}\delta _{n,0}\varepsilon _{B}\left( r\otimes 1_{H}\right) \\
&=&\delta _{n,0}\varepsilon _{R}\left( r\right) \overset{(\ref{form:EpsGr})}{=}\varepsilon _{\mathrm{gr}\left( R\right) }\left( \overline{r}\right) .
\end{eqnarray*}
Let us check that $\varphi $ is multiplicative. Let $s\in R_{m}\backslash R_{m-1}$.
Then, by definition of $\varphi $, of $m_{\mathcal{D}}$ and of the multiplication of $R\#_{\xi }H$, we have that
\begin{equation*}
m_{\mathcal{D}}\left( \varphi \otimes \varphi \right) \left( \overline{s}\otimes \overline{r}\right) =\sum \left( s^{\left( 1\right) }\left( \left( s^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }r^{\left( 1\right) }\right) \#\xi \left( \left( s^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes r^{\left( 2\right) }\right) \right) +\left( R\#_{\xi }H\right) _{m+n-1}.
\end{equation*}
\begin{invisible}
Explicitly we have
\begin{eqnarray*}
m_{\mathcal{D}}\left( \varphi \otimes \varphi \right) \left( \overline{s}\otimes \overline{r}\right) &=&m_{\mathcal{D}}\left( \overline{s\otimes 1_{H}}\otimes \overline{r\otimes 1_{H}}\right) \\
&=&m_{G}\left( \left( s\otimes 1_{H}+\left( R\#_{\xi }H\right) _{m-1}\right) \otimes \left( r\otimes 1_{H}+\left( R\#_{\xi }H\right) _{n-1}\right) \right) \\
&=&\left( s\#1_{H}\right) \left( r\#1_{H}\right) +\left( R\#_{\xi }H\right) _{m+n-1} \\
&=&\sum \left( s^{\left( 1\right) }\left( \left( s^{\left( 2\right) }\right) _{\left\langle -1\right\rangle }r^{\left( 1\right) }\right) \#\xi \left( \left( s^{\left( 2\right) }\right) _{\left\langle 0\right\rangle }\otimes r^{\left( 2\right) }\right) \right) +\left( R\#_{\xi }H\right) _{m+n-1}.
\end{eqnarray*}
\end{invisible}
Now write $\sum s^{\left( 1\right) }\otimes s^{\left( 2\right) }=\sum_{0\leq i\leq m}s_{i}\otimes s_{m-i}^{\prime }$ for some $s_{i},s_{i}^{\prime }\in R_{i}$ and similarly $\sum r^{\left( 1\right) }\otimes r^{\left( 2\right) }=\sum_{0\leq j\leq n}r_{j}\otimes r_{n-j}^{\prime }$ for some $r_{j},r_{j}^{\prime }\in R_{j}$. Then
\begin{eqnarray*}
m_{\mathcal{D}}\left( \varphi \otimes \varphi \right) \left( \overline{s}\otimes \overline{r}\right) &=&\sum_{\substack{ 0\leq i\leq m \\ 0\leq j\leq n}}\left( s_{i}\left( \left( s_{m-i}^{\prime }\right) _{\left\langle -1\right\rangle }r_{j}\right) \#\xi \left( \left( s_{m-i}^{\prime }\right) _{\left\langle 0\right\rangle }\otimes r_{n-j}^{\prime }\right) \right) +\left( R\#_{\xi }H\right) _{m+n-1} \\
&=&\sum_{\substack{ 0\leq i\leq m \\ 0\leq j\leq n}}\delta _{i,m}\delta _{j,n}\left( s_{i}\left( \left( s_{m-i}^{\prime }\right) _{\left\langle -1\right\rangle }r_{j}\right) \#\xi \left( \left( s_{m-i}^{\prime }\right) _{\left\langle 0\right\rangle }\otimes r_{n-j}^{\prime }\right) \right) +\left( R\#_{\xi }H\right) _{m+n-1} \\
&=&\sum \left( s_{m}\left( \left( s_{0}^{\prime }\right) _{\left\langle -1\right\rangle }r_{n}\right) \#\xi \left( \left( s_{0}^{\prime }\right) _{\left\langle 0\right\rangle }\otimes r_{0}^{\prime }\right) \right) +\left( R\#_{\xi }H\right) _{m+n-1} \\
&&\overset{R_{0}=\Bbbk 1_{R}}{=}\sum s_{m}\left( \left( s_{0}^{\prime }\right) _{\left\langle -1\right\rangle }r_{n}\right) \#\varepsilon _{R}\left( \left( s_{0}^{\prime }\right) _{\left\langle 0\right\rangle }\right) \varepsilon _{R}\left( r_{0}^{\prime }\right) 1_{H}+\left( R\#_{\xi }H\right) _{m+n-1} \\
&=&\sum s_{m}\varepsilon _{R}\left( s_{0}^{\prime }\right) r_{n}\varepsilon _{R}\left( r_{0}^{\prime }\right) \#1_{H}+\left( R\#_{\xi }H\right) _{m+n-1} \\
&=&\sum_{\substack{ 0\leq i\leq m \\ 0\leq j\leq n}}\delta _{i,m}\delta _{j,n}\left( s_{i}\varepsilon _{R}\left( s_{m-i}^{\prime }\right) r_{j}\varepsilon _{R}\left( r_{n-j}^{\prime }\right) \#1_{H}\right) +\left( R\#_{\xi }H\right) _{m+n-1} \\
&=&\sum_{\substack{ 0\leq i\leq m \\ 0\leq j\leq n}}\left( s_{i}\varepsilon _{R}\left( s_{m-i}^{\prime }\right) r_{j}\varepsilon _{R}\left( r_{n-j}^{\prime }\right) \#1_{H}\right) +\left( R\#_{\xi }H\right) _{m+n-1} \\
&=&\sum \left( s^{\left( 1\right) }\varepsilon _{R}\left( s^{\left( 2\right) }\right) r^{\left( 1\right) }\varepsilon _{R}\left( r^{\left( 2\right) }\right) \#1_{H}\right) +\left( R\#_{\xi }H\right) _{m+n-1} \\
&=&\left( sr\#1_{H}\right) +\left( R\#_{\xi }H\right) _{m+n-1}\overset{(\ref{form:phiTrick})}{=}\varphi \left( sr+R_{m+n-1}\right) \\
&=&\varphi \left( \left( s+R_{m-1}\right) \left( r+R_{n-1}\right) \right) =\varphi m_{\mathrm{gr}\left( R\right) }\left( \overline{s}\otimes \overline{r}\right) .
\end{eqnarray*}
Let us check that $\varphi $ is unitary. We have
\begin{equation*}
\varphi \left( 1_{\mathrm{gr}\left( R\right) }\right) =\varphi \left( 1_{R}+R_{-1}\right) =\varphi \left( \overline{1_{R}}\right) =\overline{1_{R}\otimes 1_{H}}=\left( 1_{R}\otimes 1_{H}\right) +\left( R\#_{\xi }H\right) _{-1}=1_{B}+B_{-1}=1_{G}.
\end{equation*}
\end{proof}

Summing up, we have proved that
\begin{equation*}
\mathrm{gr}\left( Q\right) \overset{Q=R^{v}}{=}\mathrm{gr}\left( R^{v}\right) \overset{\text{Lem. \ref{lem:ciccio}}}{\cong }\mathrm{gr}\left( R\right) \overset{\text{Pro. \ref{pro:ciccio}}}{\cong }\mathcal{D}\left( R\#_{\xi }H\right) \overset{\text{Pro. \ref{pro:D(f)}}}{\cong }\mathcal{D}\left( A\right)
\end{equation*}
as bialgebras in ${_{H}^{H}\mathcal{YD}}$. Therefore $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathcal{D}\left( A\right) ,\Bbbk \right) =0$ (the Hochschild cohomology in ${_{H}^{H}\mathcal{YD}}$ of the algebra $\mathcal{D}\left( A\right) $ with values in $\Bbbk $) if, and only if, $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathrm{gr}Q,\Bbbk \right) =0$. In this case, by the foregoing, we get that $Q$ is gauge equivalent to a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$.

Now let $E$ be a connected bialgebra in ${_{H}^{H}\mathcal{YD}}$ and let $\gamma :E\otimes E\rightarrow \Bbbk $ be a gauge transformation in ${_{H}^{H}\mathcal{YD}}$ such that $Q=E^{\gamma }$. We proved that $A^{\zeta }\cong Q\#H\cong E^{\gamma }\#H$ as coquasi-bialgebras. By Proposition \ref{pro:deformSmash}, we have that $\left( E\#H\right) ^{\Gamma }=E^{\gamma }\#H$ as ordinary coquasi-bialgebras. Recall that two coquasi-bialgebras $A$ and $A^{\prime }$ are called \textbf{gauge equivalent} or \textbf{quasi-isomorphic} whenever there is some gauge transformation $\gamma :A\otimes A\rightarrow \Bbbk $ in $\mathbf{Vec}_{\Bbbk }$ such that $A^{\gamma }\cong A^{\prime }$ as coquasi-bialgebras. We point out that, if $A$ and $A^{\prime }$ are ordinary bialgebras and $A^{\gamma }\cong A^{\prime }$, then $\gamma $ turns out to be a unitary cocycle. This is encoded in the triviality of the reassociators of $A$ and $A^{\prime }$.

\begin{theorem}
\label{teo:main}Let $A$ be a finite-dimensional Hopf algebra over a field $\Bbbk $ of characteristic zero such that the coradical $H$ of $A$ is a sub-Hopf algebra (i.e. $A$ has the dual Chevalley property). If $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathcal{D}\left( A\right) ,\Bbbk \right) =0$, then $A$ is quasi-isomorphic to the Radford-Majid bosonization $E\#H$ of some connected bialgebra $E$ in ${_{H}^{H}\mathcal{YD}}$ by $H$. Moreover $\mathrm{gr}\left( E\right) \cong \mathcal{D}\left( A\right) $ as bialgebras in ${_{H}^{H}\mathcal{YD}}$.
\end{theorem}

\begin{proof}
By the foregoing, $A^{\zeta }\cong Q\#H\cong E^{\gamma }\#H=\left( E\#H\right) ^{\Gamma }$ as coquasi-bialgebras.
Now $A$ is quasi-isomorphic to $A^{\zeta }$, which is quasi-isomorphic to $E\#H$, so that $A$ is quasi-isomorphic to $E\#H.$ Moreover
\begin{equation*}
\mathrm{gr}\left( E\right) =\mathrm{gr}\left( E^{\gamma }\right) =\mathrm{gr}\left( Q\right) \cong \mathcal{D}\left( A\right) ,
\end{equation*}
where the first equality holds by Proposition \ref{pro:grgaugeYD}.
\begin{invisible}
Let us check for our sake that quasi-isomorphism is an equivalence relation. First recall that two coquasi-bialgebras $A$ and $B$ are called quasi-isomorphic whenever $A^{\alpha }\cong B$ as coquasi-bialgebras for some gauge transformation $\alpha :A\otimes A\rightarrow \Bbbk $ for $A$. Write $A\sim B$ in this case.

Clearly $A\sim A$ taking $\alpha :=\varepsilon _{A}\otimes \varepsilon _{A}.$

If $A\sim B$, then there is $\alpha $ and an isomorphism of coquasi-bialgebras $\sigma :B\rightarrow A^{\alpha }.$ Apply \cite[Proposition 2.5]{ABM} to this morphism and $v:=\alpha ^{-1}$, which is a gauge transformation for $A^{\alpha },$ see \cite[Remark 2.2(ii)]{ABM}. Then $\beta :=v\circ \left( \sigma \otimes \sigma \right) $ is a gauge transformation for $B$ and $\sigma :B^{\beta }\rightarrow \left( A^{v}\right) ^{v^{-1}}$ is also an isomorphism of coquasi-bialgebras. By \cite[Remark 2.2(ii)]{ABM} we have $\left( A^{v}\right) ^{v^{-1}}\cong A$ and hence $B^{\beta }\cong A$, which means $B\sim A.$

If $A\sim B$ and $B\sim C$, then there are $\alpha $ and $\beta $ such that $A^{\alpha }\cong B$ and $B^{\beta }\cong C.$ Call $\sigma :A^{\alpha }\rightarrow B$ the first isomorphism. Apply again \cite[Proposition 2.5]{ABM} with $v:=\beta .$ Then $\alpha ^{\prime }:=v\circ \left( \sigma \otimes \sigma \right) $ is a gauge transformation for $A^{\alpha }$ and $\sigma :\left( A^{\alpha }\right) ^{\alpha ^{\prime }}\rightarrow B^{\beta }$ is an isomorphism of coquasi-bialgebras. Thus $\left( A^{\alpha }\right) ^{\alpha ^{\prime }}\cong B^{\beta }\cong C.$ Since $A$ and $A^{\alpha }$ have the same coalgebra structure, we have that the monoids $\left( \mathrm{Hom}_{\Bbbk }\left( A^{\alpha }\otimes A^{\alpha },\Bbbk \right) ,\ast ,\varepsilon _{A^{\alpha }\otimes A^{\alpha }}\right) $ and $\left( \mathrm{Hom}_{\Bbbk }\left( A\otimes A,\Bbbk \right) ,\ast ,\varepsilon _{A\otimes A}\right) $ are the same. Thus we can regard $\alpha ^{\prime }$ and its inverse as elements in $\mathrm{Hom}_{\Bbbk }\left( A\otimes A,\Bbbk \right) $ and we get that $\alpha ^{\prime }\ast \alpha :A\otimes A\rightarrow \Bbbk $ is convolution invertible with convolution inverse $\alpha ^{-1}\ast \left( \alpha ^{\prime }\right) ^{-1}.$ Since $A$ and $A^{\alpha }$ also have the same unit, we get that $\alpha ^{\prime }\ast \alpha $ is unitary too and hence a gauge transformation. Let us check that
\begin{equation*}
\left( A^{\alpha }\right) ^{\alpha ^{\prime }}=A^{\alpha ^{\prime }\ast \alpha }.
\end{equation*}
Note that
\begin{eqnarray*}
&&\omega _{A^{\alpha ^{\prime }\ast \alpha }} \\
&=&\left( \varepsilon _{A}\otimes \left( \alpha ^{\prime }\ast \alpha \right) \right) \ast \left( \alpha ^{\prime }\ast \alpha \right) \left( A\otimes m_{A}\right) \ast \omega _{A}\ast \left( \alpha ^{\prime }\ast \alpha \right) ^{-1}\left( m_{A}\otimes A\right) \ast \left( \left( \alpha ^{\prime }\ast \alpha \right) ^{-1}\otimes \varepsilon _{A}\right) \\
&=&\left(
\begin{array}{c}
\left( \varepsilon _{A}\otimes \alpha ^{\prime }\right) \ast \left( \varepsilon _{A}\otimes \alpha \right) \ast \alpha ^{\prime }\left( A\otimes m_{A}\right) \ast \alpha \left( A\otimes m_{A}\right) \\
\ast \omega _{A}\ast \left( \alpha ^{-1}\ast \left( \alpha ^{\prime }\right) ^{-1}\right) \left( m_{A}\otimes A\right) \ast \left( \left( \alpha ^{-1}\ast \left( \alpha ^{\prime }\right) ^{-1}\right) \otimes \varepsilon _{A}\right)
\end{array}
\right) \\
&=&\left(
\begin{array}{c}
\left( \varepsilon _{A}\otimes \alpha ^{\prime }\right) \ast \left( \varepsilon _{A}\otimes \alpha \right) \ast \alpha ^{\prime }\left( A\otimes m_{A}\right) \ast \alpha \left( A\otimes m_{A}\right) \\
\ast \omega _{A}\ast \alpha ^{-1}\left( m_{A}\otimes A\right) \ast \left( \alpha ^{\prime }\right) ^{-1}\left( m_{A}\otimes A\right) \ast \left( \alpha ^{-1}\otimes \varepsilon _{A}\right) \ast \left( \left( \alpha ^{\prime }\right) ^{-1}\otimes \varepsilon _{A}\right)
\end{array}
\right) \\
&=&\left(
\begin{array}{c}
\left( \varepsilon _{A}\otimes \alpha ^{\prime }\right) \ast \alpha ^{\prime }\left( A\otimes \left( \alpha \ast m_{A}\right) \right) \ast \alpha \left( A\otimes m_{A}\right) \\
\ast \omega _{A}\ast \alpha ^{-1}\left( m_{A}\otimes A\right) \ast \left( \alpha ^{\prime }\right) ^{-1}\left( \left( m_{A}\ast \alpha ^{-1}\right) \otimes A\right) \ast \left( \left( \alpha ^{\prime }\right) ^{-1}\otimes \varepsilon _{A}\right)
\end{array}
\right) \\
&=&\left(
\begin{array}{c}
\left( \varepsilon _{A}\otimes \alpha ^{\prime }\right) \ast \alpha ^{\prime }\left( A\otimes \left( m_{A^{\alpha }}\ast \alpha \right) \right) \ast \alpha \left( A\otimes m_{A}\right) \\
\ast \omega _{A}\ast \alpha ^{-1}\left( m_{A}\otimes A\right) \ast \left( \alpha ^{\prime }\right) ^{-1}\left( \left( \alpha ^{-1}\ast m_{A^{\alpha }}\right) \otimes A\right) \ast \left( \left( \alpha ^{\prime }\right) ^{-1}\otimes \varepsilon _{A}\right)
\end{array}
\right) \\
&=&\left(
\begin{array}{c}
\left( \varepsilon _{A}\otimes \alpha ^{\prime }\right) \ast \alpha ^{\prime }\left( A\otimes m_{A^{\alpha }}\right) \ast \left( \varepsilon _{A}\otimes \alpha \right) \ast \alpha \left( A\otimes m_{A}\right) \\
\ast \omega _{A}\ast \alpha ^{-1}\left( m_{A}\otimes A\right) \ast \left( \alpha ^{-1}\otimes \varepsilon _{A}\right) \ast \left( \alpha ^{\prime }\right) ^{-1}\left( m_{A^{\alpha }}\otimes A\right) \ast \left( \left( \alpha ^{\prime }\right) ^{-1}\otimes \varepsilon _{A}\right)
\end{array}
\right) \\
&=&\left( \varepsilon _{A^{\alpha }}\otimes \alpha ^{\prime }\right) \ast \alpha ^{\prime }\left( A^{\alpha }\otimes m_{A^{\alpha }}\right) \ast \omega _{A^{\alpha }}\ast \left( \alpha ^{\prime }\right) ^{-1}\left( m_{A^{\alpha }}\otimes A^{\alpha }\right) \ast \left( \left( \alpha ^{\prime }\right) ^{-1}\otimes \varepsilon _{A^{\alpha }}\right) \\
&=&\omega _{\left( A^{\alpha }\right) ^{\alpha ^{\prime }}}.
\end{eqnarray*}
Moreover
\begin{eqnarray*}
m_{A^{\alpha ^{\prime }\ast \alpha }} &=&\left( \alpha ^{\prime }\ast \alpha \right) \ast m_{A}\ast \left( \alpha ^{\prime }\ast \alpha \right) ^{-1} \\
&=&\alpha ^{\prime }\ast \left( \alpha \ast m_{A}\ast \alpha ^{-1}\right) \ast \left( \alpha ^{\prime }\right) ^{-1} \\
&=&\alpha ^{\prime }\ast m_{A^{\alpha }}\ast \left( \alpha ^{\prime }\right) ^{-1}=m_{\left( A^{\alpha }\right) ^{\alpha ^{\prime }}}.
\end{eqnarray*}
Thus $\left( A^{\alpha }\right) ^{\alpha ^{\prime }}=A^{\alpha ^{\prime }\ast \alpha }$ as coquasi-bialgebras. We conclude that $A^{\alpha ^{\prime }\ast \alpha }\cong C$ and hence $A\sim C.$
\end{invisible}
\end{proof}

More generally, given a (finite-dimensional) Hopf algebra $A$ whose coradical $H$ is a sub-Hopf algebra, if $H$ is moreover semisimple, then we expect that $A$ is quasi-isomorphic to the Radford-Majid bosonization $E\#H$ of some connected bialgebra $E$ in ${_{H}^{H}\mathcal{YD}}$ by $H$. See e.g. \cite[Corollary 3.4 and the proof therein]{Grunenfelder-Mastnak} and \cite{AAGMV,AAG} for a further clue in this direction.

\section{Examples}
\label{sec:6}

We notice that the Hochschild cohomology of finite-dimensional Nichols algebras has been computed only in a few examples. Here we consider those Nichols algebras and compute $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathcal{B}\left( V\right) ,\Bbbk \right)$.

\subsection{Braidings of Cartan type}

Let $A=(a_{ij})_{1\le i,j\le\theta}$ be a finite Cartan matrix, $\Delta$ the corresponding root system, $(\alpha_i)_{1\le i\le\theta}$ a set of simple roots and $W$ its Weyl group. Let $w_0=s_{i_1}\cdots s_{i_M}$ be a reduced expression of the element $w_0\in W$ of maximal length as a product of simple reflections, and set $\beta_j=s_{i_1}\cdots s_{i_{j-1}}(\alpha_{i_j})$, $1\le j\le M$. Then $\beta_j\neq\beta_k$ if $j\neq k$ and $\Delta^+=\{\beta_j|1\le j\le M\}$, see \cite[page 25 and Proposition 3.6]{Hiller}.
\begin{invisible}
This can be deduced as follows. By \cite[page 25]{Hiller}, there is $w_0\in W$ such that $w_0(\Delta^+)=\Delta^-$ and $M=l(w_0)=|\Delta^+|$. Since $\Delta^+=-\Delta^-$, we also have $w_0(\Delta^-)=\Delta^+$. By \cite[Proposition 3.6]{Hiller} we have $\Delta^+=\Delta^+\cap w_0(\Delta^-)=\{\beta_1,\ldots, \beta_M\}$. Since $M=l(w_0)=|\Delta^+|$, the roots $\beta_1,\ldots, \beta_M$ are distinct.
\end{invisible}
Let $\Gamma$ be a finite abelian group and $\widehat{\Gamma}$ its group of characters. $\cD=(\Gamma,(g_i)_{1\le i\le\theta},(\chi_i)_{1\le i\le\theta},A)$ is a \emph{datum of finite Cartan type} \cite{AS-Classif} associated to $\Gamma$ and $A$ if $g_i\in\Gamma$, $\chi_j\in\widehat{\Gamma}$, $1\le i,j\le\theta$, satisfy $\chi_i(g_i)\neq 1$, $\chi_i(g_j)\chi_j(g_i)=\chi_i(g_i)^{a_{ij}}$ for all $i,j$. Set $\bq=(q_{ij})_{1\le i,j\le\theta}$, where $q_{ij}=\chi_j(g_i)$. In what follows $V$ denotes the Yetter-Drinfeld module over $\Bbbk\Gamma$, $\dim V=\theta$, with a fixed basis $x_1,\ldots, x_\theta$, where the action and the coaction on each $x_i$ are given by $\chi_i$ and $g_i$, respectively. Then the associated braiding is $c(x_i\otimes x_j)=q_{ij}x_j\otimes x_i$ for all $i,j$. Let $\cB_\bq=\cB(V)$. The tensor algebra $T(V)$ is $\N_0^\theta$-graded, with each $x_i$ of degree $\alpha_i$. For $\beta=\sum_{i=1}^\theta a_i\alpha_i\in\Delta^+$, set
\begin{align*}
g_\beta &= g_1^{a_1}\cdots g_\theta^{a_\theta} , & \chi_\beta &= \chi_1^{a_1}\cdots \chi_\theta^{a_\theta}, & q_\beta&=\chi_\beta(g_\beta).
\end{align*}
Given $\alpha,\beta\in\Delta^+$, we denote $q_{\alpha\beta}=\chi_\beta(g_\alpha)$. We assume as in \cite{AS-Classif,MPSW} that \emph{the order of $q_{ii}$ is odd for all $i$, and not divisible by 3 for each connected component of the Dynkin diagram of $A$ of type $G_2$}. Therefore the order of $q_{ii}$ is the same for all the $i$ in the same connected component $J$. Given $\beta\in J$, we denote by $N_\beta$ the order of the corresponding $q_{ii}$ in $J$, which is also the order of $q_\beta$. By \cite{L} there exist homogeneous elements $x_{\beta}$ of degree $\beta$, $\beta\in\Delta^+$, such that the Nichols algebra $\cB_\bq$ of $V$ is presented by generators $x_1,\dots,x_\theta$ and relations
\begin{align*}
(\ad_c x_i)^{1-a_{ij}}x_j&=0, & &1\le i\neq j\le\theta; \\
x_\beta^{N_\beta}&=0, & &\beta\in \Delta^+.
\end{align*}
Moreover $\{x_{\beta_1}^{n_1}\dots x_{\beta_M}^{n_M}| 0\le n_i<N_{\beta_i}\}$ is a basis of $\cB_\bq$. We shall prove that $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathcal{B}_\bq,\Bbbk \right)=0$. We first need some technical results.

\begin{lemma}\label{lemma:non trivial pair 1}
Let $\alpha,\beta\in\Delta^+$. Then either $g_\alpha g_\beta^{N_\beta}\neq e$, or else $\chi_\alpha \chi_\beta^{N_\beta}\neq \epsilon$.
\end{lemma}

\begin{proof}
Suppose on the contrary that $g_\alpha g_\beta^{N_\beta}=e$ and $\chi_\alpha \chi_\beta^{N_\beta}=\epsilon$. Then
$$ q_\alpha=\chi_\alpha^{-1}(g_\alpha^{-1})= \chi_\beta^{N_\beta}(g_\beta^{N_\beta})=q_\beta^{N_\beta^2}=1, $$
since $q_\beta$ is a root of unity of order $N_\beta$. But this is a contradiction, since $q_\alpha\neq 1$.
\end{proof}

\begin{lemma}\label{lemma:non trivial pair 2}
Let $\alpha,\beta,\gamma\in\Delta^+$ be pairwise different. Then either $g_\alpha g_\beta g_\gamma \neq e$, or else $\chi_\alpha \chi_\beta \chi_\gamma \neq \epsilon$.
\end{lemma}

\begin{proof}
Suppose on the contrary that $g_\alpha g_\beta g_\gamma=e$ and $\chi_\alpha \chi_\beta \chi_\gamma=\epsilon$. Then
\begin{align}\label{eq:conditions alpha,beta,gamma}
q_{\alpha}&= \chi_\alpha^{-1}(g_\alpha^{-1})= \chi_\beta\chi_\gamma(g_\beta g_\gamma)=q_\beta q_\gamma q_{\beta\gamma}q_{\gamma\beta}, & q_\beta&=q_\alpha q_\gamma q_{\alpha\gamma}q_{\gamma\alpha}, & q_\gamma&=q_\alpha q_\beta q_{\alpha\beta}q_{\beta\alpha}.
\end{align}
Notice that $\alpha,\beta,\gamma$ belong to the same connected component. Indeed, if $\gamma$ belongs to a different connected component, then $q_{\beta\gamma}q_{\gamma\beta}=q_{\alpha\gamma}q_{\gamma\alpha}=1$. Thus $q_\beta=q_\alpha q_\gamma = q_\beta q_\gamma^2$, so $q_\gamma^2=1$, which is a contradiction. Therefore we may assume that the Dynkin diagram is connected. One can prove that $q_{s_i(\alpha)}=q_{\alpha}$ for every $\alpha\in\Delta$. As we observed that $\Delta^+=\{\beta_j|1\le j\le M\}$, we deduce that for every $\beta\in\Delta^+$ there is some $j$ such that $q_{\beta}=q_{j}$. One can prove that there is some $q\in\Bbbk$ such that $q_{\alpha}=q^{(\alpha,\alpha)/2}$ and $q_{\alpha\gamma}q_{\gamma\alpha}=q^{(\alpha,\gamma)}$, where $(\cdot,\cdot)$ is the invariant bilinear form on the simple Lie algebra $\mathfrak{g}$ associated with the finite Cartan matrix \cite[Ch. VI, $\S1$, Proposition 3 and Definition 3]{B}, and the basis of the root system given in \cite[Ch. VI, $\S 4$]{B} should be normalized in such a way that $q=q_\delta$ and $(\delta,\delta)=2$ for each short root $\delta\in\Delta$. Note that $q_{\alpha}=q^{(\alpha,\alpha)/2}\neq 1$ for all $\alpha$ as $(\alpha,\alpha)\neq 0$.
Thus
\begin{itemize}
\item $q_\alpha=q_\beta=q_\gamma=q$ if the Dynkin diagram is simply laced,
\item $q_\alpha,q_\beta,q_\gamma\in\{q,q^2\}$ if the Dynkin diagram has a double arrow,
\item $q_\alpha,q_\beta,q_\gamma\in\{q,q^3\}$ if the Dynkin diagram is of type $G_2$.
\end{itemize}
If the Dynkin diagram is simply laced, then, by \eqref{eq:conditions alpha,beta,gamma}, we have $q_{\beta\gamma}q_{\gamma\beta}=q_{\alpha\gamma}q_{\gamma\alpha}=q_{\alpha\beta}q_{\beta\alpha}=q^{-1}$. Then $q^{(\alpha,\gamma)}=q^{-1}$. Now set $n(\alpha,\beta):=2(\alpha,\beta)/(\beta,\beta)=(\alpha,\beta)$. Then $n(\alpha,\beta)$ is symmetric whence, by \cite[Ch. VI, $\S1$, page 148]{B}, we have $(\alpha,\gamma)=-1$ as the order of $q$ is odd, so $\alpha+\gamma\in\Delta^+$, by \cite[Ch. VI, $\S1$, Corollary, page 149]{B}. Now the same argument we used above shows that also $(\alpha,\beta)=-1=(\gamma,\beta)$ and hence $(\alpha+\gamma,\beta)=-2$, so $\alpha+\beta+\gamma\in\Delta^+$, since $\alpha+\gamma\neq -\beta$ (as $\alpha+\gamma$ and $\beta$ are both in $\Delta^+$). But $q_{\alpha+\beta+\gamma}= q_{\alpha}q_\beta q_\gamma q_{\beta\gamma}q_{\gamma\beta}q_{\alpha\gamma}q_{\gamma\alpha} q_{\alpha\beta}q_{\beta\alpha}=1$, which is a contradiction.

If the Dynkin diagram has a double arrow, then $q_{\alpha}$, $q_\beta$, $q_\gamma\in\{q,q^2\}$. If $q_{\alpha}=q_\beta=q_\gamma$, then the proof follows as in the simply-laced case because $n(u,v)=n(v,u)$ for $u,v\in\{\alpha,\beta,\gamma\}$. If $q_\alpha=q_\beta=q$ and $q_\gamma=q^2$, then $q_{\beta\gamma}q_{\gamma\beta}=q_{\alpha\gamma}q_{\gamma\alpha}=q^{-2}$ and $q_{\alpha\beta}q_{\beta\alpha}=1$, by \eqref{eq:conditions alpha,beta,gamma}. Then a simple calculation yields $(\beta,\gamma)=-2$, so that $\beta+\gamma\in\Delta^+$. One also gets $(\alpha,\beta)=0$ and $(\alpha,\gamma)=-2$, so that $(\alpha,\beta+\gamma)=(\alpha,\beta)+(\alpha,\gamma)=-2<0$ by the conditions on the order of $q$, so again $\alpha+\beta+\gamma\in\Delta^+$; but again we obtain $q_{\alpha+\beta+\gamma}=1$, which is a contradiction. The proof for $q_\alpha=q_\beta=q^2$ and $q_\gamma=q$ follows analogously. Finally, if the Dynkin diagram is of type $G_2$, then a similar analysis gives a contradiction.
\end{proof}

For each $1\le k\le M$, set $\cB_\bq(k)$ as the subspace of $\cB_\bq$ spanned by $\{x_{\beta_1}^{n_1}\dots x_{\beta_k}^{n_k}| 0\le n_i<N_{\beta_i}\}$. By \cite{DP} this gives an algebra filtration, and the graded algebra $\Gr\cB_\bq$ associated to this filtration is presented by generators $\bx_\beta$, $\beta\in\Delta^+$, and relations
\begin{align*}
\bx_\beta\bx_\gamma &= q_{\beta\gamma} \bx_\gamma\bx_\beta, & \bx_\beta^{N_\beta}&=0, & \beta & <\gamma\in\Delta^+.
\end{align*}
In \cite{MPSW} $\Gr\cB_\bq$ is viewed as an algebra in $\ydg$, which (as an algebra) is the Nichols algebra of Cartan type $A_1\times\dots\times A_1$, $M$ copies, with action and coaction on $\bx_\beta$ given by $\chi_\beta$ and $g_\beta$, respectively. By \cite[Theorem 4.1]{MPSW}, $\mathrm{H}^\bullet(\Gr\cB_\bq,\Bbbk)$ is the algebra generated by $\xi_\beta$, $\eta_\beta$, $\beta\in\Delta^+$, where $\deg\xi_\beta=2$, $\deg\eta_\beta=1$, with relations
\begin{align*}
\xi_\beta\xi_\gamma &= q_{\beta\gamma}^{N_\beta N_\gamma} \xi_\gamma\xi_\beta, & \eta_\beta\xi_\gamma &= q_{\beta\gamma}^{N_\gamma} \xi_\gamma\eta_\beta, & \eta_\beta\eta_\gamma &= -q_{\beta\gamma} \eta_\gamma\eta_\beta, & \beta,\gamma & \in\Delta^+.
\end{align*}
As we assume that all the $q_{ii}$ have odd order, we deduce in particular from the last equality that $\eta_\beta^2=0$ for all $\beta\in\Delta^+$. As an algebra in $\ydg$, the action and coaction on $\xi_\beta$ are given by $\chi_\beta^{-N_\beta}$, $g_\beta^{-N_\beta}$, while the action and coaction on $\eta_\beta$ are given by $\chi_\beta^{-1}$, $g_\beta^{-1}$.

\begin{theorem}
$\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \cB_\bq,\Bbbk \right)=0$.
\end{theorem}

\begin{proof}
First we will prove that $\mathrm{H}^{3}\left( \Gr\cB_\bq,\Bbbk \right)^{D} =0$ for $D:=D(\Bbbk\Gamma).$ Now, the invariants are with respect to the ${D}$-bimodule structure that $\mathrm{H}^{3}\left( \Gr\cB_\bq,\Bbbk \right)$ inherits from $\mathrm{Hom}\left( (\Gr\cB_\bq)^{\otimes 3},\Bbbk \right)$ (this is a ${D}$-bimodule as its arguments are left ${D}$-modules). Since the left $D$-module structure is induced by the one of $\Bbbk$, it is trivial. Thus the invariants of $\mathrm{H}^{3}\left( \Gr\cB_\bq,\Bbbk \right)$ as a $D$-bimodule reduce to its invariants as a right $D$-module. Since right $D$-modules are equivalent to left $D$-modules, via the antipode of $D$, which is invertible as $D$ is finite-dimensional, the right $D$-module structure of $\mathrm{H}^{3}\left( \Gr\cB_\bq,\Bbbk \right)$ becomes the structure of an object in $\ydg$ described above. Thus, in order to prove that $\mathrm{H}^{3}\left( \Gr\cB_\bq,\Bbbk \right)^{D} =0$, we just have to check that the invariants of $\mathrm{H}^{3}\left( \Gr\cB_\bq,\Bbbk \right)$ as a left-left Yetter-Drinfeld module are zero. Now, by the defining relations of $\mathrm{H}^\bullet(\Gr\cB_\bq,\Bbbk)$, a basis $B$ of $\mathrm{H}^3(\Gr\cB_\bq,\Bbbk)$ is given by $\{\xi_\alpha\eta_\beta\}\cup\{\eta_\alpha\eta_\beta\eta_\gamma|\alpha<\beta<\gamma\}$. If $v\in\mathrm{H}^3(\Gr\cB_\bq,\Bbbk)$ is invariant, then $v$ is written as a linear combination of elements in the trivial component. Indeed, write $v=\sum_{b\in B} c_b \, b$ for some $c_b\in\Bbbk$, and let $g_b$, $\chi_b$ be the elements describing the component of $b\in B$. Then
\begin{align*}
v &= g\cdot v = \sum_{b\in B} c_b \, g\cdot b = \sum_{b\in B} c_b \chi_b(g) \, b, & \mbox{for all }& g\in \Gamma, \\
1\otimes v &= \rho(v) = \sum_{b\in B} c_b \, \rho(b) = \sum_{b\in B} c_b g_b \otimes b.
\end{align*}
If $c_b\neq 0$, then $\chi_b(g)=1$ for all $g\in \Gamma$, so $\chi_b=\epsilon$ and $g_b=1$. Thus $b$ is invariant. We have thus proved that the existence of an invariant $v\neq 0$ implies the existence of an invariant $b\in B$. Hence, if $B$ has no invariant element, then there is no invariant element at all. Note that, for all $h\in H$, we have $h\cdot(\xi_\alpha\eta_\beta)=(\chi_\alpha^{-N_\alpha}\chi_\beta^{-1})(h)\xi_\alpha\eta_\beta$ and $\rho(\xi_\alpha\eta_\beta)=g_\alpha^{-N_\alpha}g_\beta^{-1}\otimes\xi_\alpha\eta_\beta$, so that, by Lemma \ref{lemma:non trivial pair 1}, the element $\xi_\alpha\eta_\beta$ is not $D$-invariant. A similar argument, using Lemma \ref{lemma:non trivial pair 2}, shows that $\eta_\alpha\eta_\beta\eta_\gamma$ is not $D$-invariant either. Thus the elements in $B$ are not $D$-invariant, so $\mathrm{H}^{3}\left( \Gr\cB_\bq,\Bbbk \right)^{D} =0$.

Since the elements in $\{x_{\beta_1}^{n_1}\dots x_{\beta_k}^{n_k}| 0\le n_i<N_{\beta_i}\}$ are eigenvectors for $D$, we can mimic the argument in \cite[Section 5]{MPSW} by taking into account the spectral sequence associated to the filtration of algebras therein; see for example \cite[Corollary 5.5]{MPSW} for a similar argument.
Thus $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \cB_\bq,\Bbbk \right)\cong\mathrm{H}^{3}\left( \cB_\bq,\Bbbk \right)^{D} =0$.
\end{proof}

\begin{remark}
Notice that $\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \cB_\bq,\Bbbk \right)\cong \mathrm{H}^{3}\left( \cB_\bq,\Bbbk \right)^{D(\Bbbk\Gamma)} =0$ although $\mathrm{H}^3\left( \cB_\bq\#\Bbbk\Gamma,\Bbbk \right)\cong \mathrm{H}^{3}\left( \cB_\bq,\Bbbk \right)^{\Gamma}$ can be non-trivial, see for example \cite[Example 5.8]{MPSW}.
\end{remark}

\subsection{Braidings of non-diagonal type}

For $n\geq3$, $\mathcal{FK}_{n}$ denotes the quadratic algebra \cite{FK} with a presentation by generators $x_{(ij)}$, $1\leq i<j\leq n$, and relations
\begin{align*}
x_{(ij)}^{2}&=0,& &1\leq i<j\leq n,\\
x_{(ij)}x_{(jk)}&=x_{(jk)}x_{(ik)}+x_{(ik)}x_{(ij)},& &1\leq i<j<k\leq n,\\
x_{(jk)}x_{(ij)}&=x_{(ik)}x_{(jk)}+x_{(ij)}x_{(ik)},& &1\leq i<j<k\leq n,\\
x_{(ij)}x_{(kl)}&=x_{(kl)}x_{(ij)},& & \# \{i,j,k,l\}=4.
\end{align*}
According to \cite{MiS}, each $\mathcal{FK}_n$ is a graded bialgebra in the category of Yetter-Drinfeld modules over the symmetric group $S_n$, generated as an algebra by the vector space $V_n$ with basis $\{x_{(ij)}\mid 1\leq i<j\leq n\}$. The action is described by identifying $(ij)$ with the corresponding transposition in $S_n$ and then considering the conjugation twisted by the sign, while the coaction is given by declaring $x_\sigma$ a homogeneous element of degree $\sigma$. Then the braiding on $V_n$ becomes
\[ c(x_\sigma\otimes x_\tau)=\chi(\sigma,\tau)x_{\sigma\tau\sigma^{-1}}\otimes x_\sigma,\quad\quad \chi(\sigma,\tau)= \begin{cases} 1 & \sigma(i)<\sigma(j), \tau=(ij), \, i<j,\\ -1 & \text{otherwise,} \end{cases} \]
where $\sigma$ and $\tau$ are transpositions. Moreover $\mathcal{FK}_n$ projects onto the Nichols algebra $\cB(V_n)$. For $n=3,4,5$, it is known that $\mathcal{FK}_n=\cB(V_n)$, of dimension $12$, $576$ and $8294400$, respectively. The Hochschild cohomology of $\mathcal{FK}_3$ can be deduced from the results in \cite{SVay} as follows.

\begin{theorem}
$\mathrm{H}_{\Bbbk S_3\text{-}\mathrm{Mod}}^{\bullet}\left( \mathcal{FK}_3 ,\Bbbk \right)$ is isomorphic to the graded algebra
\begin{align*}
&\Bbbk[X,U,V]/(U^2V-VU^2), & \mbox{where }\deg U=\deg V=2, & \, \deg X=4.
\end{align*}
\end{theorem}

\begin{proof}
By \cite[Theorem 4.19]{SVay}, we have that $E(B\#\Bbbk S_3)$ is isomorphic to the algebra in the claim, where $B=\mathcal{FK}_3$. By \cite[Theorem 2.17]{SVay}, we know that $E(B\#\Bbbk S_3)\cong E(B)^{\Bbbk S_3}$ as graded algebras. As observed in Remark \ref{rem:DV}, we have that $E(B)\cong \mathrm{H}^{\bullet}\left( B ,\Bbbk \right)$. By Remark \ref{rem:ExactInv}, we have $\mathrm{H}^{\bullet}\left( B ,\Bbbk \right)^{\Bbbk S_3}\cong\mathrm{H}_{\Bbbk S_3\text{-}\mathrm{Mod}}^{\bullet}\left( \mathcal{FK}_3 ,\Bbbk \right)$.
\end{proof}

From this result we get $\mathrm{H}_{\Bbbk S_3\text{-}\mathrm{Mod}}^{3}\left( \mathcal{FK}_3 ,\Bbbk \right)=0$, so that, by Proposition \ref{pro:D(H)}, we conclude the following.

\begin{corollary}
$\mathrm{H}_{{\mathcal{YD}}}^{3}\left( \mathcal{FK}_3,\Bbbk \right)=0$.
\end{corollary}
\section{Introduction}

The lattice Boltzmann method (LBM), which developed from lattice gas automata (LGA)\cite{HPP,FHP,Humieres1986,Succi1989,higuera1989a,higuera1989b,succi2001lattice} and can be regarded as a powerful discrete scheme for the continuum Boltzmann equation\cite{he1997theory}, has attracted considerable attention for modeling the hydrodynamics of flow systems. After more than two decades of development, LBM has been successfully applied to the simulation of fluid flow\cite{Koelman1991,qian1992,chen1998lattice,aidun2010lattice}. By the Chapman-Enskog analysis and Taylor expansion\cite{qian1993,sterling1996,He1997,Qian1998}, the standard lattice Boltzmann equation (LBE) can recover the macroscopic Navier-Stokes equations that are usually used to describe simple or single-phase flow. For both flow through porous media at the representative-elementary-volume scale and multiphase flow based on the idea of interpenetrating continua (namely, the multi-fluid model), the volume-averaged Navier-Stokes equations\cite{kuipers1992,VANS} provide an appropriate description. To date, two algorithms, namely the interphase slip algorithm (IPSA)\cite{spalding1980numerical} and the implicit multi-field (IMF) method \cite{harlow1975numerical}, are mainly applied to solve such equations. Additionally, when the coupling of phases in multiphase flows is very strong for the multi-fluid model, interphase coupling algorithms, such as the partial elimination algorithm (PEA)\cite{spalding1980numerical} and the simultaneous solution of non-linearly coupled equations (SINCE)\cite{karema1999efficiency}, are applied to avoid the divergence of iterative sequential solvers. However, almost all the mentioned algorithms are implicit/semi-implicit schemes, which suffer from relatively low scalability and parallel efficiency.

LBM is a promising method for large-scale fast simulation of multiphase flow problems due to its advantages of an explicit solver, simple coding and natural parallelism\cite{chen1998lattice,aidun2010lattice}. In the literature, various LBE formulations for modeling the volume-averaged Navier-Stokes equations have been reported\cite{guo2002lattice,wang2005two,sankaranarayanan2008lattice,sungkorn2012simulations,eggels1995numerical,wang2013lattice}. Guo et al.\cite{guo2002lattice} put forward a generalized lattice Boltzmann model for isothermal incompressible flows in porous media. Wang et al.\cite{wang2005two} reformulated the lattice Boltzmann equation as a dimensionless one, and added a modified pressure term to recover the volume-averaged Navier-Stokes equations through the Chapman-Enskog analysis. Sankaranarayanan et al.\cite{sankaranarayanan2008lattice} modified the equilibrium distribution by introducing the volume fraction, and amended the continuous Boltzmann equation to account for the particle temperature so as to ensure mass and momentum conservation in gas-solid flow. Unfortunately, the macroscopic equations derived from all the above-mentioned LBEs are not equivalent to the full volume-averaged Navier-Stokes equations. Sungkorn and Derksen\cite{sungkorn2012simulations} reformulated the volume-averaged Navier-Stokes equations by moving all the volume fraction terms to the right-hand side (RHS) as a source (forcing) term, and solved the reformulated equations by the FCHC method\cite{eggels1995numerical}.
Recently, Wang et al.\cite{wang2013lattice} proposed a modified LBE that reasonably accounts for the effects of both the local solid volume fraction and the local relative velocity between particles and fluid, but the corresponding macroscopic equations of the modified LBE were not established. Therefore, a rigorous theoretical derivation from the LBE to the volume-averaged Navier-Stokes equations is highly desirable.

\section{Lattice Boltzmann model}

The lattice Boltzmann equation with the BGK approximation can be expressed as
\begin{equation} \label{e.BGK}
{f_{i}}({\bf{x}} + {{\bf{e}}_i}\delta t,t + \delta t) - {f_{i}}({\bf{x}},t) = -\frac{1}{\tau }[{f_{i}}({\bf{x}},t) - f_{i}^{\mathrm{eq}}({\bf{x}},t)],
\end{equation}
where ${f_{i}}({\bf{x}},t)$ is the single-particle distribution function indicating the probability of finding a particle with velocity ${\bf e}_i$ at position $\bf{x}$ and time $t$, $\delta t$ is the discrete time step, $\tau$ is the relaxation time, and $f_{i}^{\mathrm{eq}}({\bf{x}},t)$ is the equilibrium distribution function. It is well known that the lattice Bhatnagar-Gross-Krook (LBGK) model is one of the most popular LB models and can recover the Navier-Stokes equations\cite{qian1992} given as follows:
\begin{subequations} \label{e.NS}
\begin{align}
&\frac{\partial }{{\partial t}}\rho + \nabla \cdot (\rho {\bf{u}}) = 0,\label{e.NSA} \\
&\frac{\partial }{{\partial t}}(\rho {\bf{u}}) + \nabla \cdot (\rho {\bf{uu}}) = - \nabla p + \nu \nabla \cdot \{ \rho [\nabla {\bf{u}} + {(\nabla {\bf{u}})^{\rm{T}}}]\},\label{e.NSB}
\end{align}
\end{subequations}
where ${\rho }$ is the fluid density, ${\bf{u}}$ is the fluid velocity, $\nu $ is the kinematic viscosity, and $p$ is the pressure. In order to find a model for the volume-averaged Navier-Stokes equations, we begin by comparing the volume-averaged Navier-Stokes equations with the standard Navier-Stokes equations. To make the illustration clearer, we neglect the forcing term in the derivation. The volume-averaged Navier-Stokes equations\cite{kuipers1992,VANS} are extensively used to describe flow through porous media, multiphase flow and other complex flows, and can be expressed as follows:
\begin{subequations} \label{e.VNS1}
\begin{align}
&\frac{\partial }{{\partial t}}(\varepsilon \rho ) + \nabla \cdot (\varepsilon \rho {\bf{u}}) = 0,\label{e.VNS1A} \\
&\frac{\partial }{{\partial t}}(\varepsilon \rho {\bf{u}}) + \nabla \cdot (\varepsilon \rho {\bf{uu}}) = - \varepsilon \nabla p + \nu \nabla \cdot \{\varepsilon \rho [\nabla {\bf{u}} + {(\nabla {\bf{u}})^{\rm{T}}}]\}, \label{e.VNS1B}
\end{align}
\end{subequations}
where $\varepsilon$ represents the volume fraction in multiphase flow or the local porosity in porous media flow. Comparing eqs.~(\ref{e.NS}) and eqs.~(\ref{e.VNS1}), it is readily found that the variable $\varepsilon$ appears before $\rho$ in the volume-averaged Navier-Stokes equations. Therefore, it is natural for us to introduce $\varepsilon$ before $\rho$ in the equilibrium distribution function, and we get
\begin{equation} \label{e.d2q9eq}
f_{i}^{\mathrm{eq}}({\bf{x}},t) = {w_i}{\varepsilon }{\rho }[1 + \frac{{{{\bf{e}}_i} \cdot {\bf{u}}}}{{c_s^2}} + \frac{{{{({{\bf{e}}_i} \cdot {\bf{u}})}^2}}}{{2c_s^4}} - \frac{{\bf{u}} \cdot {\bf{u}}}{{2c_s^2}}],
\end{equation}
where $w_i$ is the weight coefficient and $c_s$ is the lattice sound speed.
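As an illustration, the equilibrium distribution of eq.~(\ref{e.d2q9eq}) can be evaluated on the standard D2Q9 lattice as in the following minimal Python sketch. This is our own illustration, not part of the original scheme: the lattice velocities, weights and $c_s^2=1/3$ are the usual D2Q9 values, and all identifiers are ours.
\begin{verbatim}
import numpy as np

# Standard D2Q9 lattice: velocities e_i, weights w_i, c_s^2 = 1/3.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
CS2 = 1.0 / 3.0

def f_eq(eps, rho, u):
    # w_i*eps*rho*[1 + e.u/cs2 + (e.u)^2/(2 cs2^2) - u.u/(2 cs2)]
    eu = E @ u                  # e_i . u, one value per direction
    uu = u @ u                  # u . u
    return W * eps * rho * (1 + eu/CS2 + eu**2/(2*CS2**2) - uu/(2*CS2))

feq = f_eq(0.5, 1.0, np.array([0.05, 0.0]))
print(feq.sum())    # zeroth moment ~ eps*rho = 0.5
print(E.T @ feq)    # first moment  ~ eps*rho*u = [0.025, 0.]
\end{verbatim}
The printed sums reproduce the zeroth- and first-order moment relations used in the Chapman-Enskog analysis below.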
By applying the Chapman-Enskog analysis and Taylor expansion, and omitting the high-order terms, the LBGK model with the above equilibrium distribution recovers the following equations:
\begin{subequations} \label{e.VNS2}
\begin{align}
& \qquad\qquad \frac{\partial }{{\partial t}}(\varepsilon \rho ) + \nabla \cdot (\varepsilon \rho {\bf{u}}) = 0,\label{e.VNS2A} \\
\begin{split}
\frac{\partial }{{\partial t}}(\varepsilon \rho {\bf{u}}) + \nabla \cdot (\varepsilon \rho {\bf{uu}}) = & - \nabla(\varepsilon p)\\& + \nu \nabla \cdot \{\varepsilon \rho [\nabla {\bf{u}} + {(\nabla {\bf{u}})^{\rm{T}}}]\},\label{e.VNS2B}\\
\end{split}
\end{align}
\end{subequations}
where $p=c_s^2\rho$ and $\nu=c_s^2(\tau-0.5)\delta t$. We can see that the pressure term $-\nabla(\varepsilon p)$ is different from $-\varepsilon \nabla p$ in the volume-averaged Navier-Stokes equations. Furthermore, from the identity for the gradient of a product of two scalars,
\begin{equation}
- \varepsilon \nabla p=p\nabla \varepsilon -\nabla(\varepsilon p),
\end{equation}
we can add a discrete counterpart of $p\nabla \varepsilon$ (denoted by $P_i$) to the discrete Boltzmann equation with the BGK approximation. Therefore, the proposed model can be expressed as
\begin{equation} \label{e.vlbe}
\begin{split}
{f_{i}}({\bf{x}} + {{\bf{e}}_i}\delta t,t + \delta t) - {f_{i}}({\bf{x}},t) =&-\frac{1}{\tau }[{f_{i}}({\bf{x}},t) - f_{i}^{\mathrm{eq}}({\bf{x}},t)]\\&+ \delta t P_i,
\end{split}
\end{equation}
where $f_{i}^{\mathrm{eq}}$ is given by eq.~(\ref{e.d2q9eq}), and the additional term $P_i$ can be written as\cite{force2002discrete}
\begin{equation} \label{e.pi}
P_i = (1 - {\frac{1}{2\tau }}){w_i}({\frac{{{\bf{e}}_i} - {\bf{u}}}{c_s^2}} + {\frac{{{\bf{e}}_i} \cdot {\bf{u}}} {c_s^4}}{{\bf{e}}_i})\cdot p\nabla \varepsilon.
\end{equation}
As a result, the macroscopic equations corresponding to eq.~(\ref{e.vlbe}) are eqs.~(\ref{e.VNS1}).

\section{Chapman-Enskog analysis}

In the following, we derive the corresponding macroscopic equations from the proposed lattice Boltzmann equation through the Chapman-Enskog analysis. With the equilibrium distribution function $f_{i}^{\mathrm{eq}}$ defined in eq.~(\ref{e.d2q9eq}), we get the following moment relations:
\begin{subequations}
\begin{align}
&\sum\limits_i {f_{i}{^{\mathrm{eq}}}} = {\varepsilon }{\rho },\\
&\sum\limits_i {{{\bf{e}}_i}f_{i}{^{\mathrm{eq}}}} = {\varepsilon }{\rho }{\bf{u}},\\
&\sum\limits_i {{{\bf{e}}_i}{{\bf{e}}_i}f_{i}^{\mathrm{eq}}} = c_s^2{\varepsilon }{\rho }{\delta _{\alpha \beta }} + {\varepsilon }{\rho }{u_\alpha }{u_\beta },\\
&\sum\limits_i {{{\bf{e}}_i}{{\bf{e}}_i}{{\bf{e}}_i}f_{i}^{\mathrm{eq}}} = c_s^2{\varepsilon }{\rho }({\delta _{\alpha \beta }}{u_\gamma } + {\delta _{\alpha \gamma }}{u_\beta } + {\delta _{\beta \gamma }}{u_\alpha }).
\end{align}
\end{subequations}
With the additional term $P_i$, it is noted that
\begin{subequations}
\begin{align}
&\sum\limits_i {P_i} = 0,\\
&\sum\limits_i{{{\bf{e}}_i}P_i} = (1 - {\frac{1}{2\tau }})c_s^2{\rho }\nabla {\varepsilon },\\
&\sum\limits_i {{{\bf{e}}_i}{{\bf{e}}_i}P_i}= (1 - {\frac{1}{2\tau }})({\bf{u}}c_s^2{\rho }\nabla {\varepsilon } + c_s^2{\rho }\nabla {\varepsilon }{\bf{u}}).
\end{align}
\end{subequations}
Based on the Chapman-Enskog analysis and Taylor expansion, the following expansions are introduced:
\begin{equation}
{f_{i}} = f_{i}^{(0)} + \lambda f_{i}^{(1)} + {\lambda ^2}f_{i}^{(2)},
\end{equation}
\begin{equation}
\frac{\partial }{{\partial t}} = \lambda \frac{\partial }{{\partial {t_1}}} + {\lambda ^2}\frac{\partial }{{\partial {t_2}}},
\end{equation}
\begin{equation}
\nabla = \lambda {\nabla _1},
\end{equation}
\begin{equation}
\begin{split}
{f_{i}}({\bf{x}} + {{\bf{e}}_i}\delta t,t + \delta t) = &{f_{i}}({\bf{x}},t) +\delta t(\frac{\partial }{{\partial t}} + {{\bf{e}}_i}\cdot\nabla ){f_{i}}({\bf{x}},t) \\
&+ \frac{{{\delta t}^2}}{2}{(\frac{\partial }{{\partial t}} + {{\bf{e}}_i}\cdot\nabla )^2}{f_{i}}({\bf{x}},t),
\end{split}
\end{equation}
where $\lambda$ is an expansion parameter, which is proportional to the ratio of the lattice spacing to a characteristic macroscopic length. Applying the above expansions to eq.~(\ref{e.vlbe}), we obtain the following equations at consecutive orders of the parameter $\lambda$:
\begin{widetext}
\begin{eqnarray}
\label{e.0}{\rm O}({\lambda ^0}):& & f_{i}^{(0)} = f_{i}^{\mathrm{eq}} ,\\
\label{e.1}{\rm O}({\lambda ^1}):& & (\frac{\partial }{{\partial {t_1}}} + {{\bf{e}}_i}\cdot{\nabla _1})f_{i}^{(0)} = - \frac{{f_{i}^{(1)}}}{{\tau \delta t}}+\frac{P_{i}}{\lambda},\\
\label{e.2}{\rm O}({\lambda ^2}):& & \frac{{\partial f_{i}^{(0)}}}{{\partial {t_2}}} + (1 - {\frac{1}{2\tau }})(\frac{\partial }{{\partial {t_1}}} + {{\bf{e}}_i}\cdot{\nabla _1})f_{i}^{(1)} =- \frac{{f_i^{(2)}}}{{\tau \delta t}}- {\frac{\delta t}{2\lambda}}(\frac{\partial }{{\partial {t_1}}} + {{\bf{e}}_i}\cdot{\nabla _1})P_{i}.
\end{eqnarray}
\end{widetext}
The zero-order velocity moment of eq.~(\ref{e.1}) is
\begin{equation}
\sum\limits_i {(\frac{\partial }{{\partial {t_1}}} + {{\bf{e}}_i} \cdot {\nabla _1})f_{i}^{(0)}} = -\sum\limits_i { \frac{{f_{i}^{(1)}}}{{\tau \delta t}}}+ \sum\limits_i {\frac{P_{i}}{\lambda} },
\end{equation}
from which we can get
\begin{equation} \label{e.11vmoment}
\frac{\partial }{{\partial {t_1}}}({\varepsilon }{\rho }) + {\nabla _1} \cdot ({\varepsilon }{\rho }{{\bf{u}}}) = 0.
\end{equation}
The first-order velocity moment of eq.~(\ref{e.1}) is
\begin{equation}
\sum\limits_i {{{\bf{e}}_i}(\frac{\partial }{{\partial {t_1}}} + {{\bf{e}}_i}\cdot{\nabla _1})f_{i}^{(0)}} = - \sum\limits_i {\frac{{{\bf{e}}_i}{f_{i}^{(1)}}}{{\tau \delta t}}} + \sum\limits_i {\frac{{{\bf{e}}_i}P_{i}}{\lambda} } ,
\end{equation}
from which we obtain
\begin{equation} \label{e.12vmoment}
\frac{\partial }{{\partial {t_1}}}({\varepsilon }{\rho }{{\bf{u}}}) + {\nabla _1} \cdot (c_s^2{\varepsilon }{\rho }{\delta _{\alpha \beta }} + {\varepsilon }{\rho }{u_{\alpha}}{u_{\beta}})=\frac{ c_s^2{\rho }\nabla {\varepsilon }}{\lambda}.
\end{equation}
The zero-order velocity moment of eq.~(\ref{e.2}) is
\begin{equation}
\begin{split}
& \sum\limits_i {\frac{{\partial f_{i}^{(0)}}}{{\partial {t_2}}}} + (1 - {\frac{1}{2\tau }})\sum\limits_i {(\frac{\partial }{{\partial {t_1}}} + {{\bf{e}}_i}\cdot{\nabla _1})f_{i}^{(1)}}\\
&= - \sum\limits_i {\frac{f_i^{(2)}}{{\tau \delta t}}} - \sum\limits_i {\frac{\delta t} { 2\lambda }(\frac{\partial }{{\partial {t_1}}} + {{\bf{e}}_i}\cdot{\nabla _1})P_{i}} ,
\end{split}
\end{equation}
from which we readily get
\begin{equation} \label{e.21vmoment}
\frac{{\partial( {\varepsilon }{\rho })}}{{\partial {t_2}}} = 0.
\end{equation}
Since the time derivative term can be expressed as
\begin{equation} \label{e.timederivative}
\begin{split}
&\frac{\partial }{{\partial {t_1}}}({\varepsilon }{\rho }{u_{\alpha}}{u_{\beta}})\\&= - {u_{\alpha}}{u_{\beta}}\frac{\partial }{{\partial {t_1}}}({\varepsilon }{\rho }) + {u_{\alpha}}\frac{\partial }{{\partial {t_1}}}({\varepsilon }{\rho }{u_{\beta}}) + {u_{\beta}}\frac{\partial }{{\partial {t_1}}}({\varepsilon }{\rho }{u_{\alpha}}) \\
& = \frac{1}{\lambda}({u_{\alpha}}c_s^2\rho\nabla \varepsilon +{u_{\beta}}c_s^2\rho\nabla \varepsilon) - ({u_{\alpha}} + {u_{\beta}}){\nabla _1} (c_s^2{\varepsilon }{\rho })\\
&\quad- {\nabla _1} \cdot ({\varepsilon }{\rho }{u_{\alpha}}{u_{\beta}}{{\bf{u}}}),
\end{split}
\end{equation}
the momentum flux tensor can be simplified as
\begin{equation} \label{e.fluxtensor}
\begin{split}
&\sum\limits_i {{{\bf{e}}_i}{{\bf{e}}_i}f_{i}^{(1)}}\\
&= \tau \delta t({u_{\alpha}} + {u_{\beta}}){\nabla _1} ({\varepsilon }{\rho }{u_{\alpha}}{u_{\beta}})\\
&\quad - \tau \delta t c_s^2{\varepsilon }{\rho }{\nabla _1}\cdot({\delta _{\alpha\gamma}}{u_\beta} + {\delta _{\gamma\beta}}{u_\alpha})\\
&\quad- {\frac{\delta t}{2\lambda }}({\bf{u}}{c_s^2\rho\nabla \varepsilon} + c_s^2{\rho }\nabla {\varepsilon }{\bf{u}})\\
&\quad - \tau \delta t[{u_{\alpha}}{u_{\beta}}{\nabla _1} \cdot ({\varepsilon }{\rho }{{\bf{u}}})] .\\
\end{split}
\end{equation}
Once eq.~(\ref{e.fluxtensor}) has been substituted into the first-order velocity moment of eq.~(\ref{e.2}), we obtain
\begin{equation} \label{e.22vmoment}
\begin{split}
&\frac{\partial}{{\partial {t_2}}} ({\varepsilon }{\rho }{\bf{u}}) + (1 - {\frac{1}{2\tau }}) \frac{\partial }{{\partial {t_1}}}( - {\frac{1}{2\lambda}}\delta t{c_s^2\rho\nabla \varepsilon})\\
&- (1 - {\frac{1}{2\tau }}){\nabla _1}\cdot[\tau \delta tc_s^2{\varepsilon }{\rho }{\nabla _1}\cdot({\delta _{\alpha\gamma}}{u_\beta} + {\delta _{\gamma\beta}}{u_\alpha})] \\
& + (1 - {\frac{1}{2\tau }}){\nabla _1}\cdot[\tau \delta t({u_{\alpha}} + {u_{\beta}}){\nabla _1} \cdot ({\varepsilon }{\rho }{u_{\alpha}}{u_{\beta}})] \\
&- (1 - {\frac{1}{2\tau }}){\nabla _1}\cdot\{\tau \delta t[{u_{\alpha}}{u_{\beta}}{\nabla _1} \cdot ({\varepsilon }{\rho }{{\bf{u}}})]\} \\
& = - {\frac{\delta t}{2\lambda}} \frac{\partial }{{\partial {t_1}}}[(1 - {\frac{1}{2\tau }})c_s^2\rho\nabla \varepsilon]. \\
\end{split}
\end{equation}
In the above work, four macroscopic equations, namely eqs.~(\ref{e.11vmoment}), (\ref{e.12vmoment}), (\ref{e.21vmoment}) and (\ref{e.22vmoment}), have been obtained. Combining these equations, we have
\begin{subequations} \label{e.VNS4}
\begin{align}
&\qquad\qquad\frac{\partial }{{\partial t}}({\varepsilon }{\rho }) + \nabla \cdot ({\varepsilon }{\rho }{{\bf{u}}}) = 0,\label{e.VNS4A} \\
\begin{split}
&\frac{\partial }{{\partial t}}({\varepsilon }{\rho }{{\bf{u}}}) + \nabla \cdot ({\varepsilon }{\rho }{u_{\alpha}}{u_{\beta}}) = - {\varepsilon }\nabla (c_s^2{\rho })\\
&\quad + \tau \delta tc_s^2(1 - {\frac{1}{2\tau }})\nabla\cdot [{\varepsilon }{\rho }\nabla\cdot ({\delta _{\alpha\gamma}}{u_\beta} + {\delta _{\gamma\beta}}{u_\alpha})] \\
& \quad+ (1 - {\frac{1}{2\tau }})\nabla \cdot\{\tau \delta t[{u_{\alpha}}{u_{\beta}}\nabla \cdot ({\varepsilon }{\rho }{{\bf{u}}})]\} \\
&\quad- (1 - {\frac{1}{2\tau }})\nabla \cdot[\tau \delta t({u_{\alpha}} + {u_{\beta}})\nabla \cdot ({\varepsilon }{\rho }{u_{\alpha}}{u_{\beta}})].
\\
\end{split} \label{e.VNS4B}
\end{align}
\end{subequations}
Omitting the high-order terms, which are divergence terms representing compressibility effects, we recover eqs.~(\ref{e.VNS1}), where $p=c_s^2\rho$ and $\nu=c_s^2(\tau-0.5)\delta t$. As a consequence, eq.~(\ref{e.vlbe}) recovers the volume-averaged Navier-Stokes equations without a forcing term. An additional discrete forcing term should be added when recovering the volume-averaged Navier-Stokes equations with a forcing term\cite{kuipers1992,VANS}, namely
\begin{subequations} \label{e.VNS3}
\begin{align}
&\frac{\partial }{{\partial t}}(\varepsilon \rho ) + \nabla \cdot (\varepsilon \rho {\bf{u}}) = 0,\label{e.VNS3A} \\
\begin{split}
\frac{\partial }{{\partial t}}(\varepsilon \rho {\bf{u}}) &+ \nabla \cdot (\varepsilon \rho{\bf{uu}}) = - \varepsilon \nabla p \\&+ \nu \nabla \cdot \{\varepsilon \rho [\nabla {\bf{u}}+ {(\nabla {\bf{u}})^{\rm{T}}}]\}+{\bf{F}}.
\end{split} \label{e.VNS3B}
\end{align}
\end{subequations}
Therefore, the final lattice Boltzmann equation for the volume-averaged Navier-Stokes equations with a forcing term can be expressed as
\begin{equation}
\begin{split}
{f_{i}}({\bf{x}} + {{\bf{e}}_i}\delta t,t + \delta t) - {f_{i}}({\bf{x}},t) =&-\frac{1}{\tau }[{f_{i}}({\bf{x}},t) - f_{i}^{\mathrm{eq}}({\bf{x}},t)]\\&+ \delta t P_i+ \delta t F_i,
\end{split}
\end{equation}
where $P_i$ is given by eq.~(\ref{e.pi}), and $F_i$ is the discrete forcing term\cite{force2002discrete} given as
\begin{equation} \label{e.Fi}
F_i = (1 - \frac{1}{2\tau }){w_i}({\frac{{{\bf{e}}_i} - {\bf{u}}}{c_s^2}} + {\frac{{{\bf{e}}_i} \cdot {\bf{u}}} {c_s^4}}{{\bf{e}}_i})\cdot{\bf{F}}.
\end{equation}
The macroscopic values of fluid density and velocity are obtained from the moments of the particle distribution function:
\begin{equation} \label{e.0moment}
{\varepsilon }{\rho } = \sum\limits_i {{f_i}},
\end{equation}
\begin{equation} \label{e.1moment}
{\varepsilon }{\rho }{\bf{u}} = \sum\limits_i {{{\bf{e}}_i}{f_i}} + {\frac{1}{2}}\delta tc_s^2\rho\nabla \varepsilon + {\frac{1}{2}}\delta t{\bf{F}}.
\end{equation}

\section{Examples and discussions}

To validate the proposed lattice Boltzmann model, a two-dimensional Couette flow through porous media (fig.~\ref{fig.1}) is simulated. The fluid flows through a porous medium of porosity $\varepsilon$ between two plates separated by a distance $H$. The flow is driven by the upper plate, which moves with a constant velocity $u_0$ along the $x$ direction. A periodic boundary condition is applied in the $x$ direction. A no-slip boundary condition\cite{wang2006cpc} and a velocity boundary condition\cite{zouhe1997boundary} are imposed on the bottom and upper plates, respectively. The flow experiences a resistance through the porous medium given as\cite{guo2002lattice}
\begin{equation}
{\bf{F}} = - \frac{{{\varepsilon ^2}\nu }}{K}{\bf{u}} - \frac{{{\varepsilon ^3}{F_\varepsilon }}}{{\sqrt K }}\left| {\bf{u}} \right|{\bf{u}}.
\end{equation}
Here, $\nu$ is the kinematic viscosity, and $K$ and $F_\varepsilon$ represent the permeability and the geometric function, respectively. Based on Ergun's experimental investigations\cite{Ergun}, $K$ and $F_\varepsilon$ can be expressed as\cite{vafai1984}
\begin{equation}
K=\frac{{{\varepsilon^3}d_p^2}}{{150{{(1 - \varepsilon )}^2}}},
\end{equation}
\begin{equation}
{F_\varepsilon } = \frac{{1.75}}{{\sqrt {150{\varepsilon ^3}} }},
\end{equation}
where $d_p$ is the solid particle diameter.
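For concreteness, the resistance closure above can be evaluated as in the following minimal Python sketch. This is our own illustration: the function name is ours and the numerical values are arbitrary, given in lattice units.
\begin{verbatim}
import numpy as np

def ergun_drag(u, eps, nu, d_p):
    # F = -(eps^2*nu/K)*u - (eps^3*F_eps/sqrt(K))*|u|*u,
    # with Ergun-based permeability K and geometric function F_eps.
    K = eps**3 * d_p**2 / (150.0 * (1.0 - eps)**2)
    F_eps = 1.75 / np.sqrt(150.0 * eps**3)
    return (-(eps**2 * nu / K) * u
            - (eps**3 * F_eps / np.sqrt(K)) * np.linalg.norm(u) * u)

# Arbitrary illustrative values (lattice units):
print(ergun_drag(np.array([0.01, 0.0]), eps=0.1, nu=0.05, d_p=2.0))
\end{verbatim}
In a simulation, the resulting $\bf{F}$ enters the discrete forcing term $F_i$ of eq.~(\ref{e.Fi}) and the velocity moment of eq.~(\ref{e.1moment}).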
To characterize the flow through porous media, two dimensionless numbers, namely the Reynolds number $Re$ and the Darcy number $Da$, are defined as
\begin{equation}
Re = \frac{{LU}}{\nu },
\end{equation}
\begin{equation}
Da = \frac{K}{{{L^2}}},
\end{equation}
where $L$ and $U$ are the characteristic length and velocity, respectively. When the flow between the two plates is fully developed, the velocity in the $x$ direction satisfies the following equation\cite{guo2002lattice}:
\begin{equation} \label{e.velocity}
\left\{
\begin{split}
&\nu\frac{{{\partial ^2}u}}{{\partial {y^2}}} - \frac{\varepsilon \nu }{K}u - \frac{{{\varepsilon}^2{F_\varepsilon }}}{{\sqrt K }}{u^2} = 0, \\
&u(0) = 0, \quad u(H) = {u_0}. \\
\end{split}
\right.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=6cm]{fig1.png}\\
\caption{Illustration of the Couette flow through porous media.}
\label{fig.1}
\end{figure}
In the simulations, the porous medium is assumed to have a constant porosity $\varepsilon$ (chosen as 0.1), the characteristic velocity is set to $u_0$, and the domain is divided into an $80\times80$ square lattice (the characteristic length $L$ is 80). When $Re=50$, the relaxation time $\tau$ is set to 0.6574; otherwise $\tau$ is set to 0.8. The fluid density $\rho$ and fluid velocity $u$ are initialized as 1 and 0, respectively. The non-equilibrium extrapolation scheme\cite{guo2002boundary} is applied for both the no-slip boundary condition and the velocity boundary condition. Fig.~\ref{fig.2} shows the profiles of $u/U$ versus $y/H$ at $x=H/2$. The simulation results are found to be in excellent agreement with the reference solutions, demonstrating that the present model handles the Couette flow through porous media very well.
\begin{figure}
\centering
\subfigure[]{\includegraphics[width=7cm]{fig2a.png}}\\
\subfigure[]{\includegraphics[width=7cm]{fig2b.png}}
\caption{Velocity profiles of the Couette flow along the $y$ axis at different $Re$ and $Da$. Symbols: the solution of the new model. Lines: finite difference solution of eq.~(\ref{e.velocity}), considered as the reference solution.}
\label{fig.2}
\end{figure}

\section{Conclusions}

This letter is concerned with the lattice Boltzmann equation for solving the volume-averaged Navier-Stokes equations. Although many numerical schemes have been developed for solving the volume-averaged Navier-Stokes equations, they are implicit/semi-implicit schemes. Here, we proposed a new LB model which is an explicit scheme for solving these equations. The proposed LB model provides an effective way of modeling porous flow and two-phase flow. We would like to point out that the volume fraction (or local porosity) $\varepsilon$ can be treated as a field variable, so that the model can be extended to solve the two-fluid model equations in future work.

\acknowledgments
The authors wish to thank Prof. Jinghai Li for his encouragement and help in improving the manuscript. The support from the National Natural Science Foundation of China (Grant No. 21106155) and the Chinese Academy of Sciences (Grant No. XDA07080303) is gratefully acknowledged.
\section{Introduction}

\IEEEPARstart{E}{stimation} of optical flow is a computationally intensive task. Modern artificial systems achieve ever higher accuracy on the task at the expense of more complex algorithms and more powerful computing hardware such as GPUs. On the other hand, many biological systems are able to estimate optical flow both effectively and power-efficiently. Examples include insects such as bees and drosophila, which rely heavily on optical flow for navigation \cite{joesch2010and}, but must estimate optical flow under severe size, weight, and power constraints. While the optical flow estimated by such insects may not rival modern computer vision approaches in terms of accuracy, the estimated flow is good enough for the insects to use in navigation, and impressive considering the tight constraints under which the estimates were generated.

In artificial systems, early optical flow estimation approaches relied on methods such as velocity-tuned filters in the frequency domain \cite{watson1983look, heeger1987model}, phase-based methods \cite{barron1994d}, gradient-based methods, and correlation-based methods \cite{sutton1983determination}. The accuracy of these methods for optical flow estimation on sequences of images has long since been surpassed by even more computationally intensive methods which estimate flow at multiple scales \cite{alvarez1999scale}, and more recently by convolutional neural networks \cite{dosovitskiy2015flownet}, which mostly run on power-inefficient GPUs. Dedicated ASICs (e.g. PX4Flow \cite{honegger2013open}, optical mouse sensors) are less accurate and more general, but more power efficient.

Direction selective neurons are found as early as the retina in frogs \cite{lettvin1959frog}, cats \cite{barlow1965}, rabbits, primates, and many other animals. A key difference between optical flow estimation in biological and artificial systems lies in the format of the data from which optical flow is extracted. Both approaches start off with light from a scene focused on an image sensor, but most artificial systems convert this light signal into a sequence of static digital images from which optical flow must later be estimated. The retina works on a different principle, transducing the light signal into a continuous electrical signal, and later into discrete spikes for communication to the lateral geniculate nucleus and downstream visual cortex.

Not all artificial methods rely on static images though. Tanner and Mead \cite{tanner1986integrated} introduced the first analog two-dimensional optical flow chip, extracting global motion from a given scene on dedicated hardware. Delbruck \cite{delbruck1993silicon} also implemented such a network in VLSI. More recently, with the new event-based cameras, some spike-based techniques have been proposed \cite{benosman2012asynchronous}, but they require a dedicated computer for processing. An extensive review of motion estimation designs can be found in \cite{orchard2014bioinspired}.

Over the last decade, so-called silicon retinae have matured to a level where they are now publicly available (described later in Section~\ref{sec:event-based_sensors}). These bio-inspired devices provide a spiking output more similar to that of the biological retina. Recently, many papers have proposed different methods for processing such data, including for estimating optical flow. Benosman~\textit{et~al.} \cite{benosman2012asynchronous} proposed a plane-fitting approach which estimates the motion of sharp edges.
Later, Barranco~\textit{et~al.} \cite{barranco2014contour} proposed a method which also estimates flow at edges, but their method estimates the magnitude of the spatial and temporal image gradients at the edge and then relies on a gradient-based method for optical flow estimation. Bardow~\textit{et~al.} \cite{bardow2016simultaneous} apply a variational method which simultaneously estimates both the image grayscale values and optical flow from events. By enforcing spatial and temporal smoothness constraints, their method generates optical flow estimates even for image regions where no gradient information is available.

Some optical flow estimation methods take bio-inspiration a step further and make use of biologically inspired computation with spiking neurons. Examples include Brosch~\textit{et~al.} implementing a phase-based method \cite{brosch2016event}, Conradt~\textit{et~al.} implementing a Reichardt detector \cite{spike-reichardt_distilled.pdf}, and Orchard~\textit{et~al.} \cite{orchard2013spiking}, who simulated a method relying on synaptic delays.

In this paper we propose a Spiking Neural Network variant of the Barlow \& Levick model \cite{barlow1965}, using a silicon retina coupled with IBM's TrueNorth Neurosynaptic System. The method exploits the precise spike timing provided by the silicon retina to reliably extract motion direction and amplitude from a scene in real time. Such methods promise to enable low-power visual motion estimation: TrueNorth is estimated to consume $70$~mW and the silicon retina consumes approximately $10$~mW (chip only, omitting the FPGA used for communication, which can be removed for dedicated applications). This setup can therefore easily be used in embedded applications such as drones or autonomous driving.

Section~\ref{sec:background} introduces the model, IBM's TrueNorth Neurosynaptic System, and the silicon retina. Details of the model implementation are given in Section~\ref{sec::implementation}. Section~\ref{sec:testing} describes how the model was tested, and results of testing are described in Section~\ref{sec:results}, followed by discussion in Section~\ref{sec:discussion}.

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{tdevent}
\caption{Working principle of the ATIS camera. (left) A significant relative change of the luminance (programmable threshold $n$) generates an ON (respectively OFF) event when there is an increase (resp. decrease) in luminosity. (right) Snapshot of events over $100$~ms for a car passing in front of the camera. White dots represent ON events, black dots OFF events.}
\label{fig::atis}
\end{figure}

\section{Background}\label{sec:background}

\subsection{Direction Sensitive (DS) Unit}

The basis of the network, introduced in \cite{giulioni2016event}, is described in Fig.~\ref{fig::flow1}. The original model, inspired by the neural circuitry found in the rabbit's retina by Barlow and Levick \cite{barlow1965}, is based on inhibition-based direction-sensitive units combined into motion detectors. Fig.~\ref{fig::flow1} shows a simplified Direction Selective (DS) neuron which responds (spikes) to the detection of motion in its preferred direction (rightward). The DS neuron (gray) receives inputs from two neighbouring ATIS pixels (bipolar cells). Each ATIS pixel (orange and blue) will generate an output spike when stimulated by the passing of an edge. For motion in the preferred direction (rightward), the edge will pass over the orange pixel first, and the blue pixel second.
When the edge passes over the orange pixel, a spike is generated and the DS neuron is excited. The excitatory input triggers a burst of spikes from the DS neuron. The burst continues until the edge passes over the blue pixel, at which point the blue pixel will generate an output spike which inhibits the DS neuron, causing the spike burst to end. The duration of the spike burst encodes the time taken for the edge to pass from the orange pixel to the blue pixel, thus providing information on the edge velocity in the preferred direction.

For motion in the opposite direction (leftward), the inhibition from the blue pixel will arrive before the excitation from the orange pixel. Due to the initial inhibition, the later excitation from the orange pixel is not sufficient to drive the DS neuron membrane potential above threshold, and thus the DS neuron does not spike in response to leftward motion. The full direction is obtained with a combination of four single DS units, as shown in Fig.~\ref{fig::flow2}.

In implementation, there is a possibility of receiving an isolated noise spike from the orange pixel, which would cause the DS neuron to begin bursting and continue bursting indefinitely. To handle this case, a delayed copy of the spikes from the orange pixel is used to inhibit the DS neuron, thus limiting the maximum burst length to the length of the delay used.

\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{flow_DS_cell}
\caption{Direction Sensitive (DS) unit. A stimulus moving from left to right first excites the output neuron (gray) via the yellow cell; the output is self-excitatory and keeps spiking until it receives a strong inhibitory input from the blue cell. A movement in the other direction (right to left) does not produce any output, so the unit is selective for a specific direction. Moreover, the time between the beginning and the end of the output burst gives information about the (inverse) velocity. Inspired by \cite{giulioni2016event}.}
\label{fig::flow1}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{flow_DS_full}
\caption{Assembly of the four DS units (North, South, West, East). The direction of the movement is obtained by considering, for each pixel, a vertical and a horizontal component. Inspired by \cite{giulioni2016event}.}
\label{fig::flow2}
\end{figure}

\subsection{Event-based Sensor}\label{sec:event-based_sensors}

Conventional image sensors sample the visual scene at a fixed temporal period (framerate). All pixels acquire light in a synchronous fashion by integrating photons over a fixed time period. When observing a dynamic scene, this framerate, no matter its value, will always be wrong because there is no relation whatsoever between the temporal dynamics of the scene and the acquisition period. This leads to simultaneous over-sampling and under-sampling of different parts of the scene, motion blur for moving objects, and data redundancy for the static background.

Event-based sensors \cite{posch2014retinomorphic}, also known as silicon retinae, are an alternative to fixed-frequency imaging devices. In these devices, a time-varying signal is sampled along its amplitude axis instead of its time axis, leading to a non-uniform sampling rate that matches the temporal dynamics of the signal, as shown in Fig.~\ref{fig::atis}. The Asynchronous Time-based Image Sensor (ATIS\cite{posch11}) is a QVGA array of asynchronous independent pixels, each combining a change detection unit and an absolute gray level measurement unit.
Each time a pixel detects a significant change in luminance in its field of view, an event is generated. Each event can be represented as a triplet $ev_i = \left(\textbf{x}_i,t_i,p_i\right)$, $ev_i$ being the $i$-th event, where $\textbf{x}_i$ gives the event's spatial coordinates, $t_i$ its timestamp, and $p_i$ its polarity, indicating whether the change is an increase or decrease in luminance. In this work, only the change detection unit is used. \subsection{The TrueNorth Environment} A TrueNorth chip \cite{merolla2014million} consists of a network of 64$\times$64=4096 neurosynaptic cores with programmable connectivity, synapses, and neuron parameters. Connectivity between neurons follows a block-wise scheme. Each core has 256 input axons, with programmable connectivity to any of the 256 neurons in that core. Each neuron's output can be routed to a single axon anywhere on the chip. All communications to, from, and within the chip are performed asynchronously \cite{merolla2016deep}. Each TrueNorth neurosynaptic core is made of 256 axons, a 256$\times$256 synapse crossbar, and 256 neurons (Fig.~\ref{fig::TNcore}a). In this paper, the NS1e board was used, containing 4096 cores ($1M^+$ neurons, $268M^+$ synapses), embedded on a board with a Zynq FPGA containing an ARM core (Fig.~\ref{fig::TNcore}b). \begin{figure}[t!] \centering \begin{subfigure}[c]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{TN_core} \caption{} \end{subfigure}% ~ \begin{subfigure}[c]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{TN_ATIS_interface.jpg} \caption{} \end{subfigure} \caption{a) The TrueNorth topology. Each neurosynaptic core has 256 axons, a 256$\times$256 synapse crossbar, and 256 neurons. Information flows from axons to neurons gated by binary synapses, where each axon fans out, in parallel, to all neurons, thus achieving a 256-fold reduction in communication volume compared to a point-to-point approach. Network operation is governed by a discrete time step. In a time step, if the synapse value for a particular axon-neuron pair is non-zero and the axon is active, then the neuron updates its state by the synaptic weight corresponding to the axon type. Next, each neuron applies a leak, and any neuron whose state exceeds its threshold fires a spike \cite{cassidy2013cognitive}. b) IBM's NS1e board (4096 cores, 1 million neurons and 256 million synapses) and the ATIS sensor with the native link used for this paper.} \label{fig::TNcore} \end{figure} \section{Implementation} \label{sec::implementation} The TrueNorth implementation of the model was achieved by carefully configuring a single tileable TrueNorth core, using the tools presented in \cite{amir2013cognitive}, to handle a small local region of the input space, and then tiling enough copies of these cores to cover the full input image. Within each core, neurons are divided into three main modules, shown in different colors in Fig.~\ref{fig::core}. The first module (red) receives input spikes from ATIS and generates a copy of these spikes which can be used for further processing by the other two modules. The second module (blue) implements the delay required to generate the delayed inhibition for the DS neurons. The third module (green) implements the DS neurons. Each module is described further below. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{core} \caption{Arrangement of a TrueNorth core implementing the motion model. Refractory (red), Delay (blue), and DS (green) neurons are shown horizontally along the bottom of the core.
Input axons are shown vertically on the left of the core. Text labels indicate the direction inputs are arriving from and outputs are routed to. The number of neurons (horizontal axis) and axons (vertical axis) are explicitly indicated for each part of the model. Shaded regions indicate areas where connections (synapses) exist between axons on the left and neurons at the bottom.} \label{fig::core} \end{figure} \subsection{Input Module} The input stage provides a copy of the input spikes which serve as inputs to the DS neuron module and delay module. The input stage is also used to enforce a refractory period which limits the minimum allowable time between two input spikes from the same pixel by blocking the second spike if it falls within $\tau_{r}$ of the first. A refractory period of $\tau_{r}$ (ms) is implemented using a \textit{refractory} neuron model (see Table~\ref{tab:neuron_params}). When in the rest state (membrane potential of zero), the neuron should fire an output spike as soon as an input spike is received. This is achieved by enforcing \begin{equation}\label{eq:thresh} \begin{array}{l l} w_e + l \geq \Theta \\ \end{array} \end{equation} where $w_e$ is the excitatory synaptic weight, $l$ is the leak, and $\Theta$ is the neuron threshold. After spiking, the neuron resets to a large negative voltage and slowly leaks back to zero, thus implementing a refractory period. The desired refractory period is achieved by setting the reset and leak values such that \begin{equation} \begin{array}{l l l} \tau_r &\approx \frac{V_r}{l} \\ V_r &= -(2^n -1), & n\in \mathbb{N}\\ \end{array} \end{equation} where $\tau_r$ is the desired refractory period, and $V_r$ is the reset potential, which can only take on limited values due to constraints of the chip. Input spikes received while the neuron potential is leaking towards zero will not cause an output spike, but they will shorten the refractory period. The actual refractory period achieved, $\tau'_{r}$, is given by: \begin{equation} \begin{array}{l l} \tau'_{r} &= \frac{V_r + w_es}{l}\\ \\ &= \tau_r + \frac{w_es}{l}\\ \end{array} \end{equation} where $s$ is the number of spikes received during the refractory period $\tau'_{r}$. To ensure that the achieved refractory period $\tau'_{r}$ is as close as possible to the desired refractory period, $\tau_{r}$, the magnitude of the term $w_es/l$ must be minimized. The minimum magnitude which still satisfies constraint \eqref{eq:thresh} occurs when $l=-254$ and $w_e=255$. Connectivity on TrueNorth is subject to certain constraints, one of which is that a spike can only be routed to one core. This constraint can be overcome by implementing multiple identical neurons on the same core. Each neuron will then generate an identical output, thereby providing multiple copies, each of which can be routed to different cores. In the case of our TrueNorth model, there are four co-located DS neurons at every pixel location, and they each require input from one of the four neighbouring pixels. Thus, the spikes from some pixels serve as input to neighbouring DS neurons which reside on different cores. The input stage takes care of generating copies of the spikes from these pixels and routing them to the neighbouring cores. Each core only sends spikes to its neighbouring cores lying to the South and East, and each core receives input spikes from its neighbouring cores in the North and West.
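To make the refractory mechanism concrete, the following hedged Python sketch reproduces the dynamics described above with the Table~\ref{tab:neuron_params} values ($w_e=255$, $\Theta=1$, $|l|=254$); the reset value $V_r=-(2^{15}-1)$ and the exact update order within a tick are our assumptions.
\begin{verbatim}
# Sketch of the refractory neuron: fire immediately when at rest, then reset
# to a large negative value and leak back to zero at |l| = 254 mV per tick,
# so tau_r ~ |V_r| / |l|. Update order within a tick is an assumption.
def refractory_neuron(in_spikes, n_steps, w_e=255, theta=1, leak=254,
                      v_r=-(2**15 - 1)):
    v, out = 0, []
    for t in range(n_steps):
        if t in in_spikes:
            v += w_e                  # excitatory input (shortens refraction)
        if v >= theta:
            out.append(t)             # output spike
            v = v_r                   # large negative reset: refractory start
        elif v < 0:
            v = min(v + leak, 0)      # leak back toward rest
    return out

# Two input spikes 50 ticks apart: the second falls inside
# tau_r ~ 32767/254 ~ 129 ticks and is blocked.
print(refractory_neuron({10, 60}, 300))   # -> [10]
\end{verbatim}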
Counting the refractory copies described above, the number of neurons required by the input module is \begin{equation} N_{in}(\Delta x, \Delta y) = \Delta x \Delta y + \Delta x + \Delta y, \end{equation} where $\Delta x$ and $\Delta y$ specify the pixel size of the image region in the $x$ and $y$ directions processed by a single core. The input module implementing refraction is shown in red in Fig.~\ref{fig::core}. \subsection{Delay Module} The delay module implements the delay required by the delayed DS neuron inhibitory input. Delays of $\tau_d$ milliseconds are implemented using the \textit{Delay} neuron model of Table~\ref{tab:neuron_params}. When at rest (zero membrane potential), the neuron will remain at rest until an input spike is received. An input spike will push the membrane potential above zero, and once above zero the neuron membrane potential will leak towards a positive threshold. The delay is implemented by the time taken to leak to the positive threshold. Once the threshold is reached, an output spike is emitted, and the neuron will be reset back to the rest state. A disadvantage of this approach is that if two input spikes arrive from the same pixel at times $t_1$ and $t_2$ such that $t_2-t_1 < \tau_d$, only a single delayed output spike will be generated, with delay $\tau_d-1$ milliseconds after $t_1$. To avoid such complications, we use the refractory period from the input stage to limit the probability of having two spikes occur close together. More specifically, we choose $\tau_r$ such that $\tau_r>\tau_d$. All input spikes from the region covered by the core must be delayed, and copies of the delayed spikes at the South and East boundaries must be generated for neighbouring cores. Thus the total number of neurons required for the delay module, $N_{delay}(\Delta x, \Delta y)$, is \begin{equation}\label{eq:delay_count} \begin{array}{l l} N_{delay}(\Delta x, \Delta y) &= N_{in}(\Delta x, \Delta y) \\ &= \Delta x \Delta y + \Delta x + \Delta y. \end{array} \end{equation} The delay module (blue in Fig.~\ref{fig::core}) receives inputs from the refractory input module (red), as well as input from the refractory input modules of the cores to the North and West. The delay module generates outputs which feed back to the same core, as well as output for use in processing of the cores to the South and East. \subsection{DS Module} The DS module holds the DS neurons. The parameters for each neuron are shown in Table~\ref{tab:neuron_params}. There are four DS neurons per image location, so a core covering an image region of size $\Delta x$ by $\Delta y$ pixels will require \begin{equation} N_{DS}(\Delta x, \Delta y) = 4\Delta x\Delta y \end{equation} neurons. The DS neurons are shown in green in Fig.~\ref{fig::core} and receive input from both the input refractory module and the delay module of the current core, as well as from the cores to the North and West. \subsection{Parameters} TrueNorth neurons implement a variant of the linear leaky integrate-and-fire model with 23 tunable parameters \cite{cassidy2013cognitive}. Table~\ref{tab:neuron_params} contains the neuron parameters for each population; the resulting per-core neuron budget is summarized in the sketch below.
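As a bookkeeping aid, the per-core neuron counts derived above can be checked numerically; this is a sketch only, and the totals anticipate the budget equation given after Table~\ref{tab:neuron_params}.
\begin{verbatim}
# Per-core neuron budget as a function of the tile size (Delta x, Delta y),
# following the counting equations above.
def n_in(dx, dy):
    return dx * dy + dx + dy    # refractory neurons incl. S/E boundary copies

def n_delay(dx, dy):
    return dx * dy + dx + dy    # delay module: same count as the input module

def n_ds(dx, dy):
    return 4 * dx * dy          # four DS neurons per pixel

def n_total(dx, dy):
    return n_in(dx, dy) + n_delay(dx, dy) + n_ds(dx, dy)

assert n_total(6, 6) == 240     # the 6x6 tiling used here fits within 256
assert n_total(18, 2) == 256    # the alternative tiling discussed below
\end{verbatim}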
\begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Neuron parameters} \centering \begin{tabular}{|c|c|c|c|c|} \hline Parameter & Symbol & Refractory & DS neuron & Delay\\ \hline excitatory weight (mV) & $w_e$ & $255$ & $150$ & $1$\\ \hline inhibitory weight (mV) & $w_i$ & $0$ & $50$ & $0$\\ \hline threshold (mV) & $\Theta$ & $1$ & $125$ & $\tau_d$\\ \hline leak (mV/ms) & $l$ & $-254$ & $-1$ & $1$\\ \hline reset potential (mV) & $V_r$ & $-\tau_{r}$ & $127$ & $0$\\ \hline negative floor (mV) & $\beta$ & $-\tau_{r}$ & $-50$ & $0$\\ \hline \end{tabular} \label{tab:neuron_params} \end{table} The total number of neurons $N_{\sum}$ required by a core is \begin{equation} \begin{array}{l l l} N_{\sum}(\Delta x, \Delta y) &= &N_{in}(\Delta x, \Delta y) + N_{delay}(\Delta x, \Delta y) \\ & &+ N_{DS}(\Delta x, \Delta y) \\ \\ &= &6 \Delta x \Delta y + 2 \Delta x + 2 \Delta y\\ \\ N_{\sum}(\Delta x, \Delta y) &\leq &256.\\ \end{array} \end{equation} The values $\Delta X = 6$ and $\Delta Y = 6$ maximize the image area processed by the core, subject to the constraint that the core only has 256 neurons. The number of axons used is less than the number of neurons, so neurons are the limiting factor. 285 identity cores are used to relay the ATIS input to the motion model (see Section~\ref{sec::atisTNlink}). For the motion model to cover the 304$\times$240 pixel input image requires another $304/6\times240/6\approx 51\times40 = 2040$ cores. In total the TrueNorth model uses $2040+285=2325$ cores to compute motion at all locations in the input image space. 240 neurons are used per core (a usage of 256 neurons per core is achievable if $\Delta X = 18$ and $\Delta Y = 2$, but each core would still only be processing 36 pixels, and thus such an arrangement is less efficient). \subsection{Interpreting the Result} To interpret the output spikes from the network at a particular pixel location, the outputs of all four of the DS neurons located at the pixel are combined. The same input spike will excite all four DS neurons, so they should all start bursting simultaneously (unless they are already inhibited). We use the notation $t_{x^+}$ to denote the length of the burst from the neuron sensitive to motion in the positive $x$-direction. The velocity can then be calculated using \begin{equation} \begin{array}{l l l} t_x &= &t_{x^+} - t_{x^-}\\ t_y &= &t_{y^+} - t_{y^-}\\ v_x &=& t_x/(t_x^2+t_y^2)\\ v_y &=& t_y/(t_x^2+t_y^2)\\ \end{array} \end{equation} where $v_x$ and $v_y$ indicate the component of velocity in the $x$ and $y$ directions respectively, in units of pixels per millisecond. \subsection{Self-excitatory state} The output neuron of the DS unit should be self-excitatory until it receives a strong inhibitory spike. This behavior was achieved by setting the reset potential higher than the threshold ($V_r > \Theta$). Then, once the membrane potential crosses the threshold, the cell enters a self-excitatory state until a strong inhibitory input is received. Sometimes, due to noise, the inhibitory spike can be missing. To prevent a cell from spiking indefinitely, a second strong inhibitory spike, delayed by typically 100~ms, is generated whenever the excitatory spike is generated. \subsection{ATIS-TrueNorth link} \label{sec::atisTNlink} The ATIS and TrueNorth chips both use variants of the Address-Event Representation (AER) protocol \cite{Boahen2000}.
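Before describing the sensor interface, the DS-neuron dynamics specified in Table~\ref{tab:neuron_params} can be illustrated with a hedged Python sketch; the leak is modelled here as a 1~mV/ms decay toward rest, which is our simplifying assumption about the chip's leak mode.
\begin{verbatim}
# Sketch of the DS neuron (w_e = 150, w_i = 50, Theta = 125, V_r = 127,
# beta = -50, from the table above). Resetting *above* threshold
# (V_r > Theta) produces the self-excitatory burst; inhibition ends it.
def ds_neuron(exc_ticks, inh_ticks, n_steps,
              w_e=150, w_i=50, theta=125, v_reset=127, floor=-50):
    v, spikes = 0, []
    for t in range(n_steps):
        v += w_e * (t in exc_ticks) - w_i * (t in inh_ticks)
        v = max(v, floor)                    # negative floor beta
        if v >= theta:
            spikes.append(t)
            v = v_reset                      # reset above threshold -> burst
        v += -1 if v > 0 else (1 if v < 0 else 0)  # assumed leak toward rest
    return spikes

# Preferred direction: excitation (t=10) precedes inhibition (t=25).
print(len(ds_neuron({10}, {25}, 100)))   # 15-tick burst = edge transit time
# Null direction: inhibition first; later excitation stays subthreshold.
print(len(ds_neuron({25}, {10}, 100)))   # 0
\end{verbatim}
The burst length directly encodes the per-pixel transit time used by the read-out of the previous subsection.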
The two AER variants are, however, not directly compatible, so the Opal Kelly XEM6010 board with a Xilinx Spartan~6 FPGA, which powers and configures ATIS, is also used to convert the ATIS AER signals to TrueNorth AER signals. The ATIS AER data consists of an absolute pixel address and a polarity, both of which are communicated in parallel when the AER request line is active. The TrueNorth AER data consists of a relative core address, an axon address, and a 4~bit target time. The data is communicated in two phases, with half the bits being communicated in parallel during each phase. The TrueNorth reset and 1~kHz clock tick signals are shared with the Opal Kelly board so that it can keep track of the state of TrueNorth's internal 4~bit time. All events are communicated to TrueNorth with a target time 2~ms later than the current time. A one-to-one mapping from ATIS pixels to TrueNorth neurons is generated such that each TrueNorth core accepts events from a 16$\times$16 pixel region, with polarity being ignored. An array of 19$\times$15 = 285 identity cores is instantiated at the physical core locations targeted by the ATIS interface. These identity cores generate a copy of the spikes they receive, which can then be routed to the rest of the TrueNorth model. This interface arrangement uses an extra 285 cores, but allows the model to be rapidly changed by only reconfiguring TrueNorth, instead of having to reprogram the ATIS FPGA to target different cores and neurons for different TrueNorth models. \section{Testing}\label{sec:testing} \subsection{Sources of Visual Data} \label{sec:model_data} \begin{figure} \centering \includegraphics[width=0.97\columnwidth]{inputs.pdf} \caption{The two recordings used in this paper. (a) and (b) show images of a rotating pipe and spiral respectively from the sensor's viewpoint. The direction of rotation is shown by the green arrows inset. (c) and (d) show 10~ms each of the pipe and spiral recordings. Red points indicate OFF-events (decreases in intensity) while blue points indicate ON-events (increases in intensity). Axes directions used in \eqref{eq:pipe} and \eqref{eq:spiral} are shown in the top left.}\label{fig:VisualSources} \end{figure} Two sources of visual data are used in this work: a recording of a black pipe rotating in front of a white background, originally presented in \cite{orchard2013spiking}, and a recording of a rotating spiral \cite{orchard2014bioinspired}. The pipe and spiral sequences are both ATIS recordings in which ground truth for motion can be predicted to quantify the accuracy of the output. The pipe and spiral recordings are shown in Fig.~\ref{fig:VisualSources}. This motion is modelled as described below. \subsection{Modelling Motion for the Rotating Pipe} We use a recording of just over half a rotation of the pipe due to symmetry (the second half of the rotation will look almost identical to the first). A full rotation takes roughly 2.85 seconds, so we use a 1.5 second recording. The rotating pipe was modelled as having two parallel edges spaced a width of $w$ apart from each other and rotating about a point centered between them.
Motion of the pipe could then be modelled using \begin{equation} \begin{array}{l l} R(t) &= \left[\begin{matrix}\cos{(t\omega_t)} & -\sin{(t\omega_t)}\\ \sin{(t\omega_t)} & \cos{(t\omega_t)}\end{matrix}\right] \\ \\ Location(l,t) &= \left[\begin{matrix}x_c\\ y_c\end{matrix}\right] + R(t)\left[\begin{matrix} \pm \frac{w}{2} \\ l\end{matrix}\right]\\ \\ Direction(t,l) &= \left\{\begin{matrix} t\omega_t & l \leq 0 \\ t\omega_t+\pi & l > 0 \end{matrix}\right. \\ \\ Speed(l) &= |l|\omega_t\\ \\ l &\in [-150,150]\\ t &\in [0,\frac{\pi}{|\omega_t|}]\\ \omega_t &= 2.21~\mathrm{rad/s}\\ \end{array}\label{eq:pipe} \end{equation} where $R(t)$ is a rotation matrix which varies with time $t$ to model rotation of the pipe, $\omega_t$ is the angular velocity of the pipe, and $Location(l,t)$ gives the pixel location, $[x,y]^T$, of points on the pipe as a function of time, their position on the edge of the pipe $l$, and the location of the center of the pipe $[x_c,y_c]^T$. The speed of each point is dependent only on its position on the pipe $l$, while the direction depends on $t$ and $l$ because one half of the pipe is moving in the opposite direction to the other. The pipe has a finite length of 300 pixels, thereby limiting $l$. The recording is long enough for half a rotation of the pipe, thereby limiting $t$. The speed and direction given by \eqref{eq:pipe} are the components perpendicular to the edges of the pipe. The directions of the $x$ and $y$ axes are shown top left of Fig.~\ref{fig:VisualSources}. \subsection{Modelling Motion for the Rotating Spiral} The spiral stimulus does not exhibit the same symmetry as the pipe, so we use a recording of a full rotation of the spiral, which takes 0.5 seconds. The shape of the spiral stimulus was parameterized by angle with a variable $\theta_0$ from which the speed, direction, and location of motion can be modelled using \begin{equation} \begin{array}{l l} r(\theta_0) &= 2^{\frac{\theta_0}{\pi}}\\ \\ Location(\theta_0,t) &= \left[\begin{matrix}x_c\\ y_c\end{matrix}\right] + \left[\begin{matrix}r(\theta_0)\cos{(-\theta_0+t\omega_t)}\\ r(\theta_0)\sin{(-\theta_0+t\omega_t)}\end{matrix}\right] \\ \\ Speed(\theta_0,t) &= 2^{\frac{\theta_0}{\pi}}\frac{\ln{2}}{\pi}\omega_t\frac{1}{\cos{(\frac{\ln{2}}{\pi\omega_t})}}\\ \\ Direction(\theta_0,t) &= -\theta_0 + t\omega_t + \sin{(\frac{\ln{2}}{\pi\omega_t})}\\ \\ \theta_0 &\in [0,20] \\ t &\in [0,\frac{2\pi}{|\omega_t|}]\\ \omega_t &= -12.57~\mathrm{rad/s}\\ \end{array}\label{eq:spiral} \end{equation} where $r(\theta_0)$ is the radial distance to the spiral from the center, $[x_c, y_c]^T$. $Location(\theta_0,t)$ describes the $[x,y]^T$ location of the spiral edges. $Speed(\theta_0,t)$ and $Direction(\theta_0,t)$ give the speed and direction of motion respectively. The cosine and sine terms in the speed and direction equations account for the fact that the spiral is not perpendicular to the radial vector (second term in $Location(\theta_0,t)$). \section{Results} \label{sec:results} Output spikes are sent from the board as UDP packets. A simple UDP receiver was developed that captures the packets, decodes them, and plots the resulting flow (direction and velocity). \begin{figure} \centering \includegraphics[width=0.49\textwidth]{spirale_data3d.pdf} \caption{3D representation of the network result on the spiral data. The direction of movement is color-coded in the HSV space. Some individual isolated spikes are flow estimates generated in response to sensor noise.
Video available online \cite{spiral_video}.} \label{fig::flow_out1} \end{figure} The implemented network, as presented in Section~\ref{sec::implementation}, uses $\Delta X=6$ and $\Delta Y=6$, leading to the use of $2325$ cores and $558\,000$ neurons. Feeding this network with input from a rotating spiral, as shown in Fig.~\ref{fig:VisualSources}, gives the results shown in Fig.~\ref{fig::flow_out1}. The estimated direction of movement is color-coded in the HSV space. Fig.~\ref{fig::flow_outerror} shows the estimated error for this network. Here, the proposed approach is able to extract the direction of movement with a mean absolute error of $\bar \epsilon \simeq 8.5\deg$. This error is mainly located at the ends of the edges, where no neighborhood is available, thus giving wrong measurements. Fig.~\ref{fig::AAE} shows the average endpoint error for the spiral data; a video of the spiral result is available online at \cite{spiral_video}. Fig.~\ref{fig::flow_bar_outerror} shows the corresponding angular error for the pipe data; a video of the pipe results can be found online at \cite{pipe_video}. Fig.~\ref{fig::lab_scene} shows a snapshot of the output from data captured with the sensor hand-held in the lab. A video of this data is available online at \cite{lab_scene_video}. \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{spirale_error.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{spirale_data_HSV.pdf} \caption{} \end{subfigure} \caption{Rotating spiral. a) Absolute error. One can notice that the error is mainly located at the ends of edges, due to the lack of adjacent active pixels; to some extent, it is impossible to compute a flow there. b) Top view of the output of the network.} \label{fig::flow_outerror} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{speed-estimation4.pdf} \caption{Average Endpoint Error (AEE) for the spiral input. Our network is able to reconstruct the velocity vector with an average error of 11\%, an error mainly located at the ends of the spiral's edges, where it is impossible for our method to know the speed.} \label{fig::AAE} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{bar_error.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{bar_data_HSV.pdf} \caption{} \end{subfigure} \caption{Rotating pipe. a) Absolute error. One can notice that the error is mainly located at the ends of edges, due to the lack of adjacent active pixels; to some extent, it is impossible to compute a flow there. b) Top view of the output of the network. Video available online \cite{pipe_video}.} \label{fig::flow_bar_outerror} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{lab_scene.pdf} \caption{Snapshot from a video showing optical flow output from TrueNorth for data captured while moving the ATIS sensor by hand in the lab. The leftmost image shows the scene from a similar angle captured using a cellphone camera. The middle shows the direction of motion detected, and the right shows the speed.
Strong vertical and horizontal edges come out clearly, but the direction of motion differs because the network estimates the normal component of optical flow.} \label{fig::lab_scene} \end{figure*} \section{Discussion}\label{sec:discussion} The model presented here is capable of estimating visual motion in real-time with good accuracy, but there are some limitations. The method described uses 57\% of the cores available on TrueNorth to estimate optical flow at every pixel over the full ATIS resolution of 304$\times$240. To extend the method to operate at higher resolutions, such as the prototype 640$\times$480 pixel DAVIS sensor, would require either using multiple TrueNorth chips, or only computing optical flow at every second pixel in the $x$ and $y$ directions. Of the 256 neurons available on each TrueNorth core, 16 are unused. Of the 240 remaining neurons, 24 (10\%) are used to copy spike signals across the core boundaries to prevent motion estimation artifacts at the core boundary. Since copying the signal across the core boundary introduces a 1~tick delay, the inputs to each core must also be delayed 1~tick before processing, which uses another 6$\times$6=36 of the 240 used neurons. The block connectivity of TrueNorth has many advantages for parallelizing local computations in each core, but at the cost of requiring spikes to be copied between neighbouring blocks, which ends up using $24+36=60$ neurons per core. Another constraint is imposed by the time resolution of TrueNorth, which is operating at a 1~ms tick interval. This time quantization results in each DS unit's output speeds being quantized to $1/n, n\in \mathbb{N}$ pixels/ms (see the sketch below). Such quantization can result in large error for large speeds (since the quantized bins are further apart at high speed), and speeds faster than 1 pixel/ms will be detected as 0. The maximum detectable speed could be increased by placing DS input pixels further apart. A spacing of $k$ pixels between the DS unit input pixels would result in a maximum possible speed of $k$ pixels/ms. However, this approach would aggravate the edge correspondence problem, because it becomes less likely that the two pixels are seeing the same edge. Another possible method for increasing the maximum detectable speed is to reduce the tick period for TrueNorth. While this is certainly possible, a maximum speed of 1 pixel/ms was deemed sufficient, and reducing the tick period of TrueNorth was not explored further. The slowest speed which can be detected is $1/\tau_r$, set by the length of the delay, $\tau_r$. Increasing the spacing of DS pixels to $k$ would also increase this minimum detectable speed to $k/\tau_r$. There is a tradeoff between the slowest detectable stimulus and the minimum allowable time between two subsequent stimuli. For two nearby, fast-moving stimuli to each be uniquely detected and measured, their onsets must be spaced at least $\tau_r$~ms apart. Thus for fast-moving stimuli it is desirable to have $\tau_r$ be small, but for slow stimuli we require $\tau_r$ to be large. Spike IO bandwidth of the TrueNorth chip also poses a constraint. The TrueNorth chip has four spike IO ports (North, South, East, West), each capable of 40k spikes per millisecond. On the NS1e development platform used in this work, all output spikes are communicated to a Xilinx Zynq through one port. Although 40k spikes/ms is a lot, the model presented in this paper does generate a lot of output spikes. The worst case scenario happens in complex scenes with many slowly moving edges.
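To make the read-out and its quantization concrete, here is a small Python sketch of the velocity combination from the four burst lengths; the variable names are ours, and burst lengths are integer numbers of 1~ms ticks.
\begin{verbatim}
import math

# Velocity from the four DS burst lengths (see 'Interpreting the Result').
def velocity(t_xp, t_xm, t_yp, t_ym):
    tx, ty = t_xp - t_xm, t_yp - t_ym     # signed burst-length differences
    d2 = tx * tx + ty * ty
    return (0.0, 0.0) if d2 == 0 else (tx / d2, ty / d2)

vx, vy = velocity(t_xp=15, t_xm=0, t_yp=0, t_ym=0)
print(vx, math.degrees(math.atan2(vy, vx)))   # 1/15 px/ms, 0 degrees

# Because bursts are integer tick counts, representable speeds for a
# one-pixel baseline are 1/n px/ms: bins are sparse near the 1 px/ms ceiling.
print([round(1.0 / n, 3) for n in range(1, 8)])
\end{verbatim}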
In such scenes, edge density increases the number of DS units which respond, while slowly moving edges generate far more spikes than fast-moving edges. An isolated noise event at a single pixel is a particularly bad culprit for generating spikes, since it will simultaneously activate all four DS units located at that pixel, and each of these DS units will generate a train of $\tau_r$ spikes before self-inhibiting. If too many output spikes are generated, the TrueNorth chip tick period will extend to allow time for the spikes to be communicated, which will distort the estimated flow (which assumes a 1~ms tick). In the case where pre-recorded data is being run through the model (instead of live ATIS data), the TrueNorth Neurosynaptic System will input spikes in accordance with the slowed-down tick, thereby still allowing the model to estimate speeds accurately, but the system will run slower than real-time. On-chip spike communication is much faster than off-chip spike communication, so in the future a method which interprets velocity on-chip and more efficiently communicates the results off-chip can help to mitigate the IO constraint. The model described in this paper can be classified as a token-based method, where a token is generated whenever an edge passes over a pixel. The actual gradient of the edge is never estimated; the gradient just has to be large enough to trigger a spike from the ATIS pixel. Large gradients may result in multiple spikes in response to a single edge, but the refractory period of the first stage will eliminate these secondary spikes. Much like many other event-based optical flow methods \cite{giulioni2016event, benosman2012asynchronous, barranco2014contour}, this method only estimates flow at edges, and it estimates normal flow (optical flow normal to the edge direction). However, some event-based methods do estimate gradients for use in optical flow computation \cite{barranco2014contour}, or use global constraints to estimate optical flow for image regions where little or no gradient is present \cite{bardow2016simultaneous}. Currently the output spike signals must be interpreted off-chip to extract the velocity information. Since 43\% of the TrueNorth cores are unused, there remains room to develop on-chip interpretation of the spikes to extract velocity. However, the output of the TrueNorth chip is always in spiking format, so such development work would simplify, but not eliminate, the off-chip velocity extraction. \section{Conclusion} An SNN method for normal optical flow computation from silicon retina data has been proposed. The method is capable of measuring the speed of edges in the range of 1/50 pixels/ms to 1 pixel/ms in real time. The light signal coming into the vision sensor is transduced into spikes by a silicon retina. These spikes are then fed into the SNN for processing, which provides an output estimate also encoded as spikes. This network, consuming $80$~mW ($70$~mW for TrueNorth, $10$~mW for the ATIS; omitting the FPGA used for communication, which can be removed for targeted applications), running in real time and using 2325 cores and 558\,000 neurons, is able to extract optical flow with both a low angular error, below $10$ degrees, and an Average Endpoint Error (AEE \cite{baker2011database}) of 11\%, at an estimation density of 51\%. \section*{Acknowledgments} This work was partially achieved during the $22^{nd}$ edition of the Telluride neuromorphic workshop in Telluride, Colorado, 2016.
The authors would like to thank the organizers and all the staff of this workshop for the fruitful discussions and exchanges that took place there.
\section{Introduction} Inspired by some aspects of string theory and loop quantum gravity, \emph{fuzziness} of spacetime can be expressed using the following relation for the non-commutativity of coordinate operators [1,2] \begin{equation} [\hat{x}^i,\hat{x}^j]=i\theta^{ij} \end{equation} where $\theta^{ij}$ is a real, antisymmetric matrix with the dimension of length squared, which determines the fundamental cell discretization of the spacetime manifold. As a consequence of the above relation, the notion of a point in the spacetime manifold becomes obscure, as there is a fundamental uncertainty in measuring the coordinates \begin{equation} \Delta x^{i}\Delta x^{j}\geq \frac{1}{2}|\theta^{ij}|. \end{equation} This finite resolution of the spacetime points especially affects the cosmological dynamics in the early stages of the universe's evolution. On the other hand, inflation has been identified as a great opportunity to test theories of Planck scale physics, including noncommutative geometry. Essentially, effects of trans-Planckian physics should be observable in the cosmic microwave background radiation [3-9]. For this reason, various attempts to construct noncommutative inflationary models have been made by adopting different approaches. These approaches include using relation (1) for space-space [10] and space-time [11] coordinates and constructing a noncommutative field theory on the spacetime manifold by replacing the ordinary product of fields by the Weyl-Wigner-Moyal $*$-product. Another way to incorporate effects of high energy physics in inflationary models is using the generalized uncertainty principle (GUP), which is a manifestation of the existence of a fundamental length scale in the system [12].\\ Recently a new approach to noncommutative inflation has been proposed by Rinaldi [13] using the coherent state picture of noncommutativity introduced in [14]. This model is free from some of the problems that plagued models based on the $*$-product, such as unexpected divergences and UV/IR mixing (see [15] for a full review). The key idea in this model is that noncommutativity \emph{smears} the initial singularity and, as a result, there is a smooth transition between the pre and post big bang eras via an accelerated expansion. It has been shown that noncommutativity eliminates point-like structures in favor of smeared objects in flat spacetime. As Nicolini {\it et al.}\, have shown [16] (see also [17] for some other extensions), the effect of smearing is mathematically implemented as a substitution rule: the position Dirac delta function is replaced everywhere with a Gaussian distribution of minimal width $\sqrt{\theta}$. In this framework, they have chosen the mass density of a static, spherically symmetric, smeared, particle-like gravitational source as follows \begin{equation} \rho_\theta(r)=\frac{M}{(2\pi\theta)^{\frac{3}{2}}}\exp(-\frac{r^2}{4\theta}). \end{equation} As they have indicated, the particle mass $M$, instead of being perfectly localized at a point, is diffused throughout a region of linear size $\sqrt{\theta}$. This is due to the intrinsic uncertainty encoded in the coordinate commutators (1). Recently we have constructed a noncommutative braneworld inflation scenario [18] based on the idea that the initial singularity is smeared in a noncommutative background. Along the same lines, the purpose of this letter is to study the time evolution of cosmological perturbations in a braneworld inflation scenario in the context of spacetime noncommutativity.
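As a numerical illustration of the smearing in equation (3), one can check that the Gaussian profile carries the full mass $M$, with the bulk of it contained within a few $\sqrt{\theta}$; the units below ($\theta=M=1$) are illustrative.
\begin{verbatim}
import numpy as np

# Enclosed mass of the smeared source of Eq. (3), integrated numerically.
theta, M = 1.0, 1.0
r = np.linspace(0.0, 10.0 * np.sqrt(theta), 200001)
rho = M / (2.0 * np.pi * theta) ** 1.5 * np.exp(-r**2 / (4.0 * theta))
m_enc = np.cumsum(4.0 * np.pi * r**2 * rho) * (r[1] - r[0])

print(m_enc[-1])                                  # ~1.0: all of M recovered
print(np.interp(3.0 * np.sqrt(theta), r, m_enc))  # ~0.8 within 3 sqrt(theta)
\end{verbatim}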
\section{Cosmological dynamics in the noncommutative RS II model} The 5D field equations in the Randall-Sundrum (RS) II [19] setup are \begin{equation} ^{(5)}\!G_{AB}=-\Lambda_5\, ^{(5)}\!g_{AB}+\delta(y)\,\frac{8\pi}{M_5^3}\left[ -\lambda g_{AB}+T_{AB}\right], \end{equation} where $y$ is a Gaussian normal coordinate orthogonal to the brane (the brane is localized at $y=0$), $\lambda$ is the brane tension, and $T_{AB}$ is the energy-momentum tensor of particles and fields confined to the brane. The effective field equations on the brane are derived from the Gauss-Codazzi equations and junction conditions (using $Z_2$-symmetry) [20,21] \begin{equation} G_{ab} = - \Lambda g_{ab} + \kappa^2 T_{ab} + 6\frac{\kappa^2}{\lambda} {\cal S}_{ab} - {\cal E}_{ab}\;, \end{equation} where ${\cal S}_{ab}\sim (T_{ab})^2$ is the high-energy correction term, which is negligible for $\rho\ll\lambda$, while ${\cal E}_{ab}$ is the projection of the bulk Weyl tensor on the brane. The general form of the brane energy-momentum tensor for any matter fields (scalar fields, perfect fluids, kinetic gases, dissipative fluids, etc.), including a combination of different fields, can be covariantly given in terms of a chosen 4-velocity $u^\mu$ as \begin{equation} T_{\mu\nu}=\rho u_\mu u_\nu +ph_{\mu\nu}+\pi_{\mu\nu}+q_{\mu}u_{\nu}+q_\nu u_\mu \,. \label{3''} \end{equation} Here $\rho$ and $p$ are the energy density and isotropic pressure, and $q_\mu$ and $\pi_{\mu\nu}$ are the momentum density and anisotropic stress, respectively. The tensor $h_{\mu\nu}$, defined as \begin{equation} h_{\mu\nu}=g_{\mu\nu}+u_\mu u_\nu = \, ^{(5)}\!g_{\mu\nu}-n_\mu n_\nu +u_\mu u_\nu, \end{equation} projects into the comoving rest space at each event, where $n_{\nu}$ is the spacelike unit normal to the brane. The modified Friedmann and Raychaudhuri equations in the background are [20] \begin{equation} H^2 = \frac{\kappa^2}{3} \rho\left(1+{\rho\over 2\lambda}\right) +\frac{C}{a^{4}}+ \frac{1}{3} \Lambda - \frac{K}{a^2} \,, \end{equation} and \begin{equation} \dot H= - {\kappa^2\over 2}(\rho+p)\left(1+ {\rho\over \lambda}\right)-2{C\over a^4}+{K\over a^2}\,, \end{equation} respectively. By definition, $C=\frac{\kappa^{2}}{3}\rho_{\varepsilon0}a^{4}_{0}$\,, where $\rho_{\varepsilon0}$ is the dark radiation energy density. For a matter content consisting of a perfect fluid or a minimally coupled scalar field, the total effective energy density, pressure, momentum density and anisotropic stress can be written as [21] \begin{eqnarray} \rho^{{\rm eff}} &=& \rho\left(1 +\frac{\rho}{2\lambda} + \frac{\rho^{\varepsilon}}{\rho} \right)\;, \\ p^{\rm eff } &=& p + \frac{\rho}{2\lambda} (2p+\rho)+\frac{\rho^{\varepsilon}}{3}\;, \\ q^{\rm eff }_a &=& q_{a}^{\varepsilon}\;, \\ \pi^{\rm eff}_{ab} &=& \pi^{\varepsilon}_{ab}\,, \end{eqnarray} where the superscript $\varepsilon$ denotes the contribution of the bulk Weyl tensor, which enters the modified Friedmann equation as a non-local dark radiation term. Using these definitions, the modified Friedmann and Raychaudhuri equations can be rewritten as \begin{eqnarray} H^2 &=& \frac{\kappa^2}{3} \rho^{\rm eff} + \frac{1}{3} \Lambda - \frac{K}{a^2} \,, \\ \dot H &=& -\frac{\kappa^2}{2}(\rho^{\rm eff} +p^{\rm eff})+\frac{K}{a^2}\,. \end{eqnarray} The tracefree property of ${\cal E}^{\mu}{}_{\nu}$ in equation (5) implies that the pressure obeys $p^{\varepsilon}={1\over3}\rho^{\varepsilon}$.
\\ The local conservation equations on the brane are [21] \begin{eqnarray} &&\dot{\rho}+\Theta(\rho+p)=0\,,\\ && D_a p+(\rho+p)A_a =0, \end{eqnarray} where $\Theta$ is the volume expansion rate, which reduces to $3H$ in the FRW background ($H$ is the background Hubble rate), $A_a$ is the 4-acceleration, and $D_a$ is the covariant derivative in the rest space. The non-local conservation equations for the dark radiation matter can be expressed as [21] \begin{eqnarray} &&\dot{\rho}^\varepsilon+{{4\over3}}\Theta{\rho^{\varepsilon}}+D^a{q^{\varepsilon}}=0 \\&& \dot{q}^{\varepsilon}_{a}+4H{q^{\varepsilon}_{a}} +{{1\over3}}D_a{\rho^{\varepsilon}}+{{4\over3}}{\rho^{\varepsilon}}A_a +D^b{\pi_{ab}^{\varepsilon}} = -{(\rho+p)\over\lambda} D_a \rho\,. \end{eqnarray} We now suppose that the initial singularity, which subsequently leads to the RS II geometry, is smeared due to spacetime noncommutativity. A newly proposed model for a similar scenario in the usual 4D universe suggests that one could write the energy density as [13,18] \begin{equation} \rho(t)=\frac{1}{32\pi^{2}\theta^{2}}e^{-t^{2}/4\theta}\,. \end{equation} Note that we suppose that the universe enters the RS II geometry immediately after the initial smeared singularity, which is a reasonable assumption (for instance, from an M-theory perspective of the cyclic universe this assumption seems to be reliable, see Ref. [22]). Using equation (20), and setting $\Lambda=0=K$, the Friedmann equation (14) in noncommutative space can be rewritten as \begin{equation} H^{2}=\frac{\kappa^{2}}{3}\rho^{\rm eff}(t) \end{equation} where $\rho^{\rm eff}$ is given by equation (10). From the conservation equation (16), using equation (20), one finds the noncommutative pressure \begin{equation} p=-\rho+\frac{t}{6\theta}e^{-t^{2}/8\theta}\,. \end{equation} The equation of state parameter is then \begin{equation} \omega=-1+\frac{16}{3}{\pi }^{2} \theta\,{t}e^{-t^{2}/8\theta} \end{equation} and the speed of sound is \begin{equation} c^{2}_{s}=\frac{\dot{p}}{\dot{\rho}}=\frac{-3t-64\theta^{2} \pi^{2}e^{-t^{2}/8\theta}+32\theta\pi^{2}t^{2}e^{-t^{2}/8\theta}}{3t}\,. \end{equation} Using equations (10) and (11) we can find the \emph{effective} equation of state and speed of sound. To this end, we note that there are constraints from nucleosynthesis on the value of $\rho^{\varepsilon}$, such that $\frac{\rho^{\varepsilon}}{\rho}\leq0.03$ at the time of nucleosynthesis [23,24]. In this respect, we can neglect this contribution to find $$\omega^{\rm eff}={\frac {1}{192}}\,{{\rm e}^{-\frac{1}{8}\,{\frac {{t}^{2}}{\theta}}}} \bigg[ -192\,{\pi }^{2}{\theta}^{2}\lambda+1024\,t{{\rm e}^{{ \frac {{-t}^{2}}{8\theta}}}}{\pi }^{4}{\theta}^{3}\lambda-3\,{{\rm e}^{{\frac {{-t}^{2}}{8\theta}}}}+32\,t{{\rm e}^{{\frac {{-t}^{2}} {4\theta}}}}{\pi }^{2}\theta \bigg]\times $$ $$\bigg[ \theta \left( {\frac {1}{ 64}}\, \left( 64\,{\pi }^{2}{\theta}^{2}\lambda+{{\rm e}^{{ \frac {{-t}^{2}}{8\theta}}}} \right) {\pi }^{-2}{\theta}^{-2}{\lambda}^{ -1} \right) \bigg] ^{2}\times$$ \begin{equation} {\pi }^{-2}{\theta}^{-4}{\lambda}^{-1} \bigg[ {{\rm e}^{{\frac {{-t}^{2}}{8\theta}}}} \left( {\frac {1}{ 64}}\, \left( 64\,{\pi }^{2}{\theta}^{2}\lambda+{{\rm e}^{{ \frac {{-t}^{2}}{8\theta}}}} \right) {\pi }^{-2}{\theta}^{-2}{\lambda}^{ -1} \right) \bigg]^{-1} \end{equation} which simplifies to the following equation for the high energy regime ($\rho\gg\lambda$) \begin{equation} \omega^{\rm eff}\approx-1+\frac{32}{3}{\pi }^{2} \theta\,{t}e^{-t^{2}/8\theta}\,.
\end{equation} Similarly, the effective speed of sound in the high energy regime will be \begin{equation} (c^{2}_{s})^{\rm eff}\approx \frac{16}{3}{\pi }^{2} \theta\,{t}e^{-t^{2}/8\theta}+\frac{-3t-64\theta^{2} \pi^{2}e^{-t^{2}/8\theta}+32\theta\pi^{2}t^{2}e^{-t^{2}/8\theta}}{3t}\,. \end{equation} Figure $1a$ shows the evolution of the equation of state parameter and the effective equation of state parameter as given by equations (23) and (25) respectively. As one can see from this figure, there is a small variation in $\omega$ and $\omega^{\rm eff}$ around the smeared singularity. Figure $1b$ shows the evolution of the effective speed of sound. It is obvious from this figure that for $t>0$ and in the high energy noncommutative regime, $c_{s}$ is imaginary. In this respect, the evolution of the universe in the early, inflationary stage is a phantom evolution.\\ \begin{figure}[htp] \begin{center} \special{psfile=omega.eps hscale=35 vscale=35 hoffset= 0 voffset=-250} \vspace{5cm}\special{psfile=cs.eps angle =0 hscale=35 vscale=35 hoffset=220 voffset=-100}\vspace{2 cm} \end{center} \vspace{0cm} \caption{\small {a) Evolution of the noncommutative equation of state parameter (solid line) and the noncommutative effective equation of state parameter (dashed line) versus the cosmic time. b) Evolution of the noncommutative effective speed of sound versus the cosmic time. For $t>0$ and in the high energy noncommutative regime, $c_{s}$ is imaginary (a phantom evolution).}} \end{figure}\newpage We use these results in the next section to determine the time evolution of the cosmological perturbations. \section{Evolution of large scale scalar perturbations} The evolution of cosmological perturbations in the Randall-Sundrum braneworld scenario has been studied extensively (see for instance [25] and references therein). To analyze the scalar perturbations in our noncommutative setup, following the covariant 3+1 analysis developed in [27], we define the energy density and expansion perturbations as [26] \begin{equation} \Delta={a^2\over\rho}D^2\rho\,,\quad\quad Z=a^2D^2\Theta\,. \end{equation} Similarly, the perturbations in the nonlocal quantities associated with the dark radiation matter are defined as \begin{equation} {U}={a^2\over\rho}D^2{\rho^\varepsilon}\,,\quad\quad {Q}={a\over\rho} D^2 {q^{\varepsilon}}\,,\quad\quad {\Pi }={1\over \rho}D^2{\pi^{\varepsilon}}\,. \end{equation} With these definitions, the equations governing the evolution of perturbations (equations (16)-(19)) take the following forms \begin{eqnarray} \dot{\Delta} &=&3wH\Delta-(1+w)Z\,, \\ \dot{Z} &=&-2HZ-\left({c_{\rm s}^2\over 1+w}\right) D^2\Delta-\kappa^2\rho { U}-{{1\over2}}\kappa^2 \rho\left[1+ (4+3w){ {\rho\over\lambda}}- \left({4c_{\rm s}^2\over1+w}\right){\rho^\varepsilon\over\rho}\right] \Delta \,,\\ { \dot{U}} &=& (3w-1)H{ U} + \left({4c_{\rm s}^2\over 1+w}\right)\left({{\rho^\varepsilon }\over\rho}\right) H\Delta -\left({4{\rho^\varepsilon }\over3\rho}\right) Z-aD^2{ Q}\,,\\ { \dot{Q}} &=&(3w-1)H{ Q}-{1\over3a}{ U}-{{2\over3}} a{ D^2\Pi}+{1\over3 a}\left[ \left({4c_{\rm s}^2\over 1+w}\right){{\rho^\varepsilon}\over\rho}-3(1+w) { {\rho\over\lambda}}\right]\Delta\,, \end{eqnarray} where $\rho$\,, $\omega$ and $c_{s}$ are given by equations (20)\,, (23) and (24) respectively. \\ In general, scalar perturbations on the brane cannot be predicted by brane observers without additional information from the bulk, because there is no equation for $\dot{\Pi}$ in the above set of equations.
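Since the coefficients $w$ and $c_{\rm s}^{2}$ entering this system are the background quantities of section 2, it is useful to evaluate them numerically; the sketch below sets $\theta=1$, an illustrative choice under which the amplitudes simply scale with $\theta$.
\begin{verbatim}
import numpy as np

# Background coefficients of Eqs. (23), (24) and (26), with theta = 1.
theta = 1.0
t = np.concatenate([np.linspace(-6.0, -0.25, 24),
                    np.linspace(0.25, 6.0, 24)])  # avoid t = 0 in Eq. (24)

g = np.exp(-t**2 / (8.0 * theta))
omega = -1.0 + (16.0 / 3.0) * np.pi**2 * theta * t * g
omega_eff = -1.0 + (32.0 / 3.0) * np.pi**2 * theta * t * g  # Eq. (26)
cs2 = (-3.0 * t - 64.0 * theta**2 * np.pi**2 * g
       + 32.0 * theta * np.pi**2 * t**2 * g) / (3.0 * t)

print(bool(omega.min() < -1.0))  # True: omega < -1 (phantom side, t < 0)
print(cs2[t > 0][:4])            # negative just after t = 0: imaginary c_s
\end{verbatim}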
It has been shown, however, that on large scales one can neglect the $D^{2}\Pi$ term in equation (33). So, on large scales, the system of equations closes on the brane, and brane observers can predict scalar perturbations from initial conditions intrinsic to the brane without the need to solve the bulk perturbation equations [26,27]. \\ To solve the above system of equations using the simplification just mentioned, we introduce two new variables; the first is a scalar covariant curvature perturbation variable \begin{equation} C\equiv a^4D^2R = -4a^2HZ+2\kappa^2a^2\rho\left( 1+ {\rho \over 2\lambda} \right)\Delta+ 2\kappa^2a^2 \rho U\,, \end{equation} where $R$ is the Ricci curvature of the surfaces orthogonal to $u^\mu$. The second variable is a covariant analog of the Bardeen metric potential $\Phi_H$, \begin{equation} \Phi=\kappa^2a^2\rho \Delta\,. \end{equation} Along each fundamental world-line, the covariant curvature perturbation $C$ is locally conserved \begin{equation} C=C_0\,,\quad \dot{C}_0=0\,. \end{equation} With these new variables, the system of equations reduces to \begin{eqnarray} \dot{\Phi}&=& -H\left[1+(1+w){\kappa^2\rho\over 2H^2}\left(1+ {\rho\over \lambda}\right)\right]\Phi - \left[(1+w){a^2\kappa^4\rho^2\over 2 H}\right]U +\left[(1+w) {\kappa^2 \rho\over 4H}\right]C_0\\ \dot{U} &=& -H\left[1-3w+{2\kappa^2{\rho^\varepsilon}\over 3H^2}\right]U -{2 {\rho^\varepsilon}\over 3 a^2 H\rho}\left[1+{\rho\over\lambda} - {6 c_{\rm s}^2H^2\over (1+w)\kappa^2\rho}\right]\Phi+ \left[{{\rho^\varepsilon}\over 3a^2H\rho}\right] C_0\,. \end{eqnarray} If there is no dark radiation in the background, $\rho^\varepsilon=0$, then \begin{equation} U=U_0\exp\Big\{\int(3w-1)dN\Big\}\,, \end{equation} where $N$ is the number of e-folds. In this case, the above system reduces to a single equation for $\Phi$, which is \begin{eqnarray} {d\Phi\over dN}+\left[1+{(1+w)\kappa^2\rho\over 2H^2}\left(1+ {\rho\over \lambda}\right)\right]\Phi =~{}\left[{(1+w)\kappa^2 \rho\over 4H^2}\right]C_0 - \left[{3(1+w)a_0^2\rho^2\over \lambda H^2}\right]e^{2N}U \end{eqnarray} where $U$ is given by (39). We use these results in the next section to study noncommutative modifications of the scalar perturbation dynamics. \section{Noncommutative modifications} Now we want to solve equation (40) using the explicit noncommutative forms of $\rho$, $H$, $\omega$ and $U$ given by (20), (21), (23) and (39) respectively. To this end, we need to specify the noncommutative form of $N$, which appears in equation (39). As we have shown in Ref. [18], the noncommutative number of e-folds is given by $$ N=\int^{t_{f}}_{t_{i}}H dt\simeq \frac{8}{3}\pi \kappa^{2} \,{\it \rho_{0}}\, \bigg[ \sqrt {\pi \theta}\,\,\,{\rm erf} \Big(\frac{1}{2}\,{\frac {t_{f}}{\sqrt {\theta}}} \Big) +\frac{1}{2}\,\sqrt {2\pi \theta}\,\,\, {\rm erf} \Big( \frac{1}{2}\,{\frac {\sqrt {2}t_{f}}{ \sqrt {\theta}}} \Big) {\lambda}^{-1} \bigg] $$ \begin{equation} - \frac{8}{3}\,\pi \kappa^{2}\,{\it \rho_{0}}\, \bigg[ \sqrt {\pi \theta}\,\,\, {\rm erf} \Big( \frac{1}{2}\,{\frac {t_{i}}{\sqrt {\theta}}} \Big) +\frac{1}{2}\,\sqrt {2\pi \theta}\,\,\, {\rm erf} \Big( \frac{1}{2}\,{\frac {\sqrt {2}t_{i}}{ \sqrt {\theta}}} \Big) {\lambda}^{-1} \bigg] \end{equation} where ${\rm erf}(x)$ denotes the error function.
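A short numerical sketch of equation (41) is given below; the prefactors $\rho_{0}$, $\kappa$, $\lambda$ and $\theta$ are placeholder values set to unity, so only the functional form is illustrated.
\begin{verbatim}
import math

# Noncommutative e-fold count of Eq. (41) between t_i and t_f.
def n_efolds(t_i, t_f, theta=1.0, rho0=1.0, kappa=1.0, lam=1.0):
    def f(t):
        s = math.sqrt(theta)
        return (math.sqrt(math.pi * theta) * math.erf(0.5 * t / s)
                + 0.5 * math.sqrt(2.0 * math.pi * theta)
                * math.erf(0.5 * math.sqrt(2.0) * t / s) / lam)
    return (8.0 / 3.0) * math.pi * kappa**2 * rho0 * (f(t_f) - f(t_i))

# The error functions saturate for t_f >> sqrt(theta), so the e-fold count
# approaches a ceiling set by rho0, kappa, lambda and theta.
print(n_efolds(0.0, 1.0), n_efolds(0.0, 10.0), n_efolds(0.0, 100.0))
\end{verbatim}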
By expanding the error functions in equation (41) in series, the number of e-folds (supposing that the universe enters the inflationary phase immediately after the big bang, that is, $t_{i}=0$ and $t_{f}=t$) will be given by \begin{equation} N\simeq \frac{8}{3}\,\pi \kappa^{2}\,{\it \rho_{0}}\, \bigg[ t-\frac{1}{12}\,{\frac {{t}^{3}}{\sqrt {\pi }{ \theta}^{\frac{3}{2}}}}+{\frac {1}{160}}\,{\frac {{t}^{5}}{\sqrt {\pi }{\theta }^{\frac{5}{2}}}}+\frac{1}{2}\, \Big( 2\,t-\frac{1}{6}\,{\frac {\sqrt {2}{t}^{3}}{\sqrt {\pi }{\theta}^{\frac{3}{2}}}}+\frac{1}{40}\,{\frac {\sqrt {2}{t}^{5}}{\sqrt {\pi }{\theta} ^{\frac{5}{2}}}} \Big) {\lambda}^{-1} \bigg]. \end{equation} Now we can integrate equation (40) to find $$\Phi=\frac{1}{2}(1+\omega)\frac {\rho\,\lambda\,{\kappa}^{2}C_{0}}{2H^{2}\lambda+(1+\omega)(\kappa^{2}\rho\lambda+\kappa^{2}\rho^{2})}$$ $$-6(1+\omega)\frac{{H}^{2}\rho\,\lambda\,{{ a_{0}}}^{2} U\exp \left( 3\,\omega\,a-3\,\omega\,{a_{0}}-a+{a_{0}} \right)}{6H^{2}\lambda+(1+\omega)(\kappa^{2}\rho\lambda+\kappa^{2}\rho^{2})}\exp{\Big(\frac{-t^2}{4\theta}\Big)}$$ \begin{equation} +\exp\Bigg[-{\frac{1}{2}\,{\frac { 2\,{H}^{2}\lambda+(1+\omega)({ \kappa}^{2}\rho\,\lambda+{\kappa}^{2}{\rho}^{2}) }{{H}^{2}\lambda }}}\frac{t^2}{8\theta}\Bigg]. \end{equation} Figure 2 shows the evolution of $\Phi$ for both the usual braneworld scenario and our noncommutative setup in the high energy inflation regime ($\rho\gg\lambda$). One should note that the subsequent evolution of the universe, after times greater than a few $\sqrt{\theta}$, should be governed by a matter content\footnote{See for instance [28] for particle creation in an expanding universe.} different from the one used in equation (21) (\textit{i.e.} the energy density of the initial singularity smeared by noncommutativity). So, the evolution of $\omega$,\, $c^{2}_{s}$ and $\Phi$ in the low energy regime should essentially be different. \begin{figure}[htp] \begin{center}\special{psfile=phi.eps angle =0 hscale=45 vscale=45 hoffset=100 voffset=-280} \vspace{1.5cm} \end{center} \vspace{6cm} \caption{\small{Evolution of the parameter $\Phi$, an analog of the Bardeen metric potential as defined in (35), for both the usual braneworld scenario (dashed line) and our noncommutative setup (solid line) when $\frac{\rho_{0}}{\lambda}=10^{10}$. We assume that no dark radiation is present in the background geometry.}} \end{figure} The solution (43) is valid when there is no dark radiation in the background. If $\rho^{\varepsilon}\neq0$\,, then one should solve the system of equations (37) and (38) using the explicit form of $\rho^{\varepsilon}$. Generally the time dependence of $\rho^{\varepsilon}$ for a brane observer is not determined. Here we introduce a possible candidate for this quantity: as we have mentioned previously, the constraint from nucleosynthesis on the value of $\rho^{\varepsilon}$ is such that $\frac{\rho^{\varepsilon}}{\rho}\leq0.03$ at the time of nucleosynthesis. Based on this constraint, we can assume for instance that $\rho^{\varepsilon}$ is a small fraction of $\rho$ at a given time. Since the time evolution of $\rho$ is determined by (20), the time evolution of $\rho^{\varepsilon}$ can be supposed to be $$\rho^{\varepsilon}(t)=\frac{\delta}{32\pi^{2}\theta^{2}}e^{-t^{2}/4\theta}\, $$ where $\delta$ is a small constant less than $0.03$. This form of $\rho^{\varepsilon}(t)$ can be used to solve the system of equations (37) and (38) explicitly.
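For orientation, equation (39) can also be integrated numerically on this background using $dN=H\,dt$; in the sketch below $\theta=\kappa=\lambda=U_{0}=1$ are placeholder values, so only the qualitative behaviour is meaningful.
\begin{verbatim}
import numpy as np

# U = U0 exp( int (3w - 1) dN ) of Eq. (39), with w(t) from Eq. (23) and
# H(t) from Eq. (21) using rho^eff of Eq. (10); dN = H dt.
theta = kappa = lam = U0 = 1.0
t = np.linspace(1e-3, 6.0, 4000)
dt = t[1] - t[0]

rho = np.exp(-t**2 / (4.0 * theta)) / (32.0 * np.pi**2 * theta**2)
H = kappa * np.sqrt(rho * (1.0 + rho / (2.0 * lam)) / 3.0)
w = -1.0 + (16.0 / 3.0) * np.pi**2 * theta * t * np.exp(-t**2 / (8.0 * theta))

U = U0 * np.exp(np.cumsum((3.0 * w - 1.0) * H) * dt)
# U evolves only while H is appreciable; the integrand dies off with the
# Gaussian factors for t >> sqrt(theta).
print(U[0], U[-1])
\end{verbatim}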
Solving this system in full, however, involves lengthy calculations whose solutions we do not present here.\\ The curvature perturbation defined in metric-based perturbation theory is \begin{equation} \xi={\cal R}+ {\delta\rho \over 3(\rho+p)}\,, \end{equation} which reduces to ${\cal R}$ on uniform density ($\delta\rho=0$) hypersurfaces. If there is no dark radiation in the background ($\rho^{\varepsilon}=0$), the total curvature perturbation on large scales is given by the following differential equation [24] \begin{equation} \dot{\xi}^{\rm \,eff}= \dot{\xi}^{\rm \,m}+H\left[c_{\rm s}^2-{1\over 3}+\left({\rho+p \over \rho+ \lambda}\right)\right] {\delta\rho^{\varepsilon} \over (\rho+p)(1+\rho/\lambda)}\,, \end{equation} where $\xi^{\rm m}$ is the matter perturbation, which vanishes for adiabatic perturbations. Since the time variation of $\rho$,\, $H$,\, $p$,\, $c_{s}$\, and $\delta\rho^{\varepsilon}$ is given by equations (20), (21), (22), (24) and (39) respectively, we can obtain the time evolution of the curvature perturbation explicitly as follows $$\xi^{\rm \,eff}={\frac {1}{96}}\frac{\,{\rm \,Ei} \left( 1,{\frac {3{t}^{2}}{8\theta}} \right)} {{\pi }^{2}{\theta}^{2}{\lambda}}-\frac{1}{48}\,\frac{{{\rm e}^{- {\frac {{3t}^{2}}{8\theta}}}}}{{\pi }^{2}{\theta}^{2}{\lambda}}+\frac{1}{3} \,{\rm \,Ei} \left( 1,{\frac {{t}^{2}}{4\theta}} \right) -\frac{2}{3}\,{ {\rm e}^{-{\frac {{t}^{2}}{4\theta}}}}$$ \begin{equation} -{\frac {1}{768}}\, {{\rm erf}\left({\frac {t}{2\sqrt {\theta}}}\right)}{\lambda}^{-1} {\pi }^{-3}{\theta}^{-3}{\frac {1}{\sqrt {\theta\,\pi }}}-\frac{1}{24}\, {{\rm erf}\left(\frac{1}{4}\,{\frac {\sqrt {2}t}{\sqrt {\theta}}}\right)} \sqrt {2}{\theta}^{-1}{\pi }^{-1}{\frac {1}{\sqrt {\theta\,\pi }}}\,, \end{equation} where ${\rm \,Ei}(a,z)$ is the exponential integral defined as ${\rm \,Ei}(a,z)=z^{a-1}\Gamma(1-a,z)$. Figure 3 shows the evolution of ${\xi}^{\rm \,eff}$ versus the cosmic time for both commutative and noncommutative brane inflation. For the commutative braneworld inflation, we have considered chaotic inflation with $V(\phi)=\frac{1}{2}m^{2}\phi^{2}$ and obtained the form of $\phi$ from its relation to $\rho$. The amplitude of perturbations in the commutative regime decays faster than in the noncommutative regime with the same parameter values. As a result, a smaller number of e-folds is needed in the noncommutative regime for a successful braneworld inflation. \begin{figure}[htp] \begin{center}\special{psfile=xi.eps angle =0 hscale=40 vscale=40 hoffset=100 voffset=-250} \vspace{1.5cm} \end{center} \vspace{6cm} \caption{\small{a) Evolution of the parameter $\xi$ in commutative brane inflation (dashed line). b) Evolution of the parameter $\xi$ defined in (44), with the same parameter values as in figure 2, when $\frac{\rho_{0}}{\lambda}=10^{10}$ and no dark radiation is present in the background geometry (solid lines).}} \end{figure} \section{Conclusion} Spacetime noncommutativity, as a trans-Planckian effect, could have observable effects on the cosmic microwave background radiation. In this respect, it is desirable to study an inflation scenario within a noncommutative background. Recently we have shown the possibility of realizing a non-singular, bouncing, early-time cosmology in a noncommutative braneworld scenario [18].
In that work, using the smeared, coherent state picture of spacetime noncommutativity, we constructed a braneworld inflation scenario that has the potential to support scale-invariant scalar perturbations. Here, following our previous work, we have studied the time evolution of the perturbations in this noncommutative braneworld setup. We have neglected the contribution of the dark radiation term (originating in the bulk Weyl tensor) in the background geometry in order to have a closed set of equations on the brane. However, the contribution of this term to the evolution of perturbations on the brane is taken into account. In this way, by studying the effective quantities (such as the effective equation of state and speed of sound), we have found the possibility of a phantom evolution in the early, inflationary stage of the universe's history. Our analysis of the perturbations on the brane shows that the amplitude of perturbations in the commutative regime decays faster than in the noncommutative regime with the same parameter values. As a result, a smaller number of e-folds is needed in the noncommutative regime for a successful braneworld inflation.\\ {\bf Acknowledgment}\\ This work has been partially supported by the Research Institute for Astronomy and Astrophysics of Maragha, IRAN.
\section{Introduction} Several types of Galactic VHE gamma-ray sources, such as pulsar wind nebulae (PWNe) and supernova remnants (SNRs), have been detected to date, but none so far in the vicinity of a globular cluster. Globular clusters are old stellar systems which exhibit very high stellar densities in their cores. This leads to numerous stellar encounters \cite{pooley2006} and to the prolific formation of millisecond pulsars (msPSRs) \cite{ransom2008}. Globular clusters are predicted to be VHE gamma-ray sources. In these models the gamma-ray emission is produced by inverse Compton (IC) up-scattering of stellar radiation fields and the cosmic microwave background by relativistic electrons originating either from the msPSRs themselves or from their PWNe \cite{bednarek2007,venter2009,cheng2010}. Here the discovery of VHE gamma-ray emission from the direction of the Galactic globular cluster Terzan~5 is reported. This globular cluster is located at a distance of 5.9~kpc \cite{ferraro2009} at RA(J2000)~17$^\mathrm{h}$48$^\mathrm{m}$04$^\mathrm{s}$.85 and Dec~$-24^{\circ}$46$^\prime$44$^{\prime\prime}$.6 (Galactic coordinates: $l = 3.84^{\circ}$, $b = 1.69^{\circ}$), and exhibits a core radius $r_\mathrm{c} = 0^\prime.15$, a half-mass radius $r_\mathrm{h} = 0^\prime.52$ and a tidal radius $r_\mathrm{t} = 4^\prime.6$ \cite{lanzoni2010}. Terzan~5 hosts the largest population of detected msPSRs (33) \cite{ransom2008} and has been detected by \emph{Fermi}-LAT in the GeV range \cite{kong2010,abdo2010}. So far only upper limits have been reported for globular clusters in the VHE gamma-ray range (e.g. \cite{aharonian2009,anderhub2009,mccutcheon2009}). \section{Observation and Analysis} \begin{figure*}[ht] \centering \includegraphics[width=13cm]{icrc0403_fig01.eps} \caption{Exposure-corrected excess image from the H.E.S.S. data, smoothed with a Gaussian function of width 0.1$^\circ$ and overlaid with significance contours (4 -- 6~$\sigma$) in RADec~J2000 coordinates. The circles show the half-mass radius (in black) and the larger tidal radius (in cyan) of the globular cluster. The cross indicates the best-fit source position, assuming a 2D Gaussian shape, with 1~$\sigma$ uncertainty on each axis. The rectangle represents the integration region used for the full-source spectral analysis. The upper-right corner circle illustrates the instrumental PSF.} \label{figure:excess_hess} \end{figure*} The observations presented here have been undertaken with H.E.S.S., an array of four Imaging Atmospheric Cherenkov Telescopes located in the Khomas highlands of Namibia. Stereoscopic trigger and analysis methods allow efficient cosmic-ray background rejection and accurate energy and arrival-direction reconstruction for gamma-rays in the energy range 100~GeV -- 100~TeV. Terzan~5 has been observed for 90 hours of good live-time with three and four telescopes, at an average zenith angle of 20.4$^\circ$ and a mean pointing-direction offset of 0.95$^\circ$. These observations resulted in the detection of a source of VHE gamma-rays in the vicinity of Terzan~5. Using \textit{hard cuts} \cite{aharonian2006}, a significance of 5.3~$\sigma$ is found at the position of Terzan~5, with a nearby peak significance of 7.5~$\sigma$. This is confirmed by an independent calibration and analysis chain \cite{denaurois2009}. The source appears to extend beyond the tidal radius of the globular cluster.
A 2-dimensional Gaussian fit results in a best-fit position of RA(J2000)~17$^\mathrm{h}$47$^\mathrm{m}$49$^\mathrm{s} \pm 1^\mathrm{s}.8_\mathrm{stat} \pm 1^\mathrm{s}.3_\mathrm{sys}$ and Dec~$-24^{\circ}$48$^\prime$30$^{\prime\prime} \pm 36^{\prime\prime}_\mathrm{stat} \pm 20^{\prime\prime}_\mathrm{sys}$, offset by 4$^\prime$.0 from the center of the globular cluster. Therefore the source is named HESS~J$1747-248$. The size of the source is given by the Gaussian widths 9$^\prime$.6$ \pm $2$^\prime$.4 and 1$^\prime$.8$ \pm $1$^\prime$.2 for the major and minor axes, respectively, oriented 92$^\circ \pm $6$^\circ$ westwards from North. For the spectral analysis a more restrictive data selection has been applied to improve the energy reconstruction, which resulted in 62 hours of data. For a power-law spectral model of the form $k \left( \frac{E}{E_0} \right)^{-\Gamma}$, the flux normalization $k$ at $E_0 = 1$~TeV is (5.2$\pm$1.1)$\times$10$^{-13}$~cm$^{-2}\,$s$^{-1}\,$TeV$^{-1}$ and the spectral index is $\Gamma = 2.5 \pm 0.3_\mathrm{stat} \pm 0.2_\mathrm{sys}$. This corresponds to an integral flux of (1.2$\pm$0.3)$\times$10$^{-12}$~cm$^{-2}\,$s$^{-1}$, or 1.5\% of the Crab flux, in the range from 440~GeV to 24~TeV. There are not enough excess events to discuss a more complex spectral model. \begin{figure*}[ht] \centering \includegraphics[width=9.0cm]{icrc0403_fig02.eps} \caption{VHE $\gamma$-ray spectrum of HESS~J1747-248 with 1~$\sigma$ error bars, fitted with a power-law model. The fit results are discussed in the text. } \label{figure:spectrum_hess} \end{figure*}
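As a simple numerical cross-check (our illustration, not part of the published analysis), integrating the fitted power law between 440~GeV and 24~TeV reproduces the quoted integral flux:

\begin{verbatim}
# Integral flux of dN/dE = k (E/E0)^(-Gamma) between E1 and E2.
k, E0, Gamma = 5.2e-13, 1.0, 2.5   # cm^-2 s^-1 TeV^-1, TeV, spectral index
E1, E2 = 0.44, 24.0                # integration limits in TeV
F_int = k * E0 / (Gamma - 1.0) * ((E1/E0)**(1.0 - Gamma)
                                  - (E2/E0)**(1.0 - Gamma))
print("Integral flux: %.1e cm^-2 s^-1" % F_int)   # -> 1.2e-12, as quoted
\end{verbatim}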
\section{Multi-wavelength environment} Several interesting structures have been found in the surroundings of HESS~J$1747-248$ in archival multi-wavelength data. In the X-ray regime, diffuse emission extending beyond $r_\mathrm{h}$ has been reported \cite{eger2010}. This diffuse emission is centered on the core of Terzan~5, exhibits an unabsorbed flux of $(5.5 \pm 0.8) \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ in the 1~-~7~keV band and is most likely of non-thermal origin, with a hard spectrum of photon index $0.9\pm0.5$. Diffuse radio emission has also been found, which extends to the north-west of Terzan~5 but does not show a telltale morphology like, e.g., a SNR shell \cite{clapson2011}. The origin of the diffuse X-ray emission as well as of the diffuse radio emission is ambiguous, but both could be connected to the large population of msPSRs in Terzan~5 \cite{eger2010,clapson2011}. Several scenarios for VHE gamma-ray emission would predict multi-wavelength counterparts; however, the relation between HESS~J$1747-248$ and the diffuse X-ray and radio sources remains unclear at the moment. \section{Discussion} On the one hand, the available multi-wavelength data do not show any typical VHE gamma-ray emitter (such as a PWN or a SNR) in the vicinity of the detected source. On the other hand, the properties of the source, namely its extension and offset ($2 \sigma$ level), are unexpected for a globular cluster. Therefore the results of the H.E.S.S. observations are difficult to interpret. Here, scenarios excluding and including the globular cluster as the origin of the VHE gamma-ray emission are briefly discussed. \subsection{Chance coincidence} The positional concurrence between the globular cluster and the VHE gamma-ray source could in principle be just a chance coincidence of physically unrelated objects. Notably, the source parameters (extension, photon spectrum) of HESS~J$1747-248$ are compatible with the properties of PWNe detected in VHE gamma-rays \cite{mattana2009}. The probability of a chance coincidence can be estimated from the distribution of VHE gamma-ray sources in the galactic disk. From the H.E.S.S. galactic plane scan the latitude distribution of sources in the longitude range from $-85^\circ$ to $+60^\circ$ has been obtained. It can be described by a Gaussian profile (containing 48 sources) centered at b~=~$-0.26^\circ$ with a width of $0.4^\circ$, plus four additional outliers with b~$<-2^\circ$ below the galactic disk \cite{chaves2009}. At a latitude of $1.7^\circ$, Terzan~5 is almost 5~$\sigma$ away from the center of the Gaussian distribution, and there are no other VHE sources detected in the latitude band of $1.5^\circ - 2^\circ$. Hence HESS~J$1747-248$ represents an outlier to the latitude source distribution. If it is assumed that one outlier is located in the latitude band of $1.5^\circ - 2^\circ$, then the chance probability that it is placed within $0.1^\circ$ of the center of the globular cluster is about 10$^{-4}$. Thus, it is quite unlikely that the proximity of Terzan~5 and HESS~J$1747-248$ is due to chance, but this possibility cannot be excluded.
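The order of magnitude of this probability can be reproduced by a simple geometric estimate, sketched below; the assumption that the single outlier is uniformly distributed over the quoted band is ours.

\begin{verbatim}
import math

# One source assumed uniform over the band l = -85..+60 deg, b = 1.5..2 deg;
# probability that it falls within 0.1 deg of the globular cluster center.
band_area = (60.0 + 85.0) * (2.0 - 1.5)       # deg^2
p_chance = math.pi * 0.1**2 / band_area
print("Chance probability: %.0e" % p_chance)  # -> 4e-04, i.e. of order 1e-4
\end{verbatim}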
\subsection{Leptonic scenario} Globular clusters are predicted VHE gamma-ray emitters \cite{bednarek2007,venter2009,cheng2010}. All these models rely on IC up-scattering of low-energy photons (stellar radiation, cosmic microwave background) by energetic electrons. The electrons are proposed to be accelerated either by the population of msPSRs themselves or by their colliding PWNe. The models of \cite{bednarek2007,venter2009} predict a VHE gamma-ray flux for Terzan~5 of about 1\% of the flux of the Crab nebula for reasonable input parameters. This is in broad agreement with the flux of HESS~J$1747-248$. In an IC scenario it is expected that the gamma-ray emission should follow the shape of the up-scattered radiation field, which is highly centrally peaked in the case of a globular cluster. Therefore, an extended source offset from the globular cluster center is challenging to interpret in such a scenario. IC emission in the VHE gamma-ray regime should be accompanied by synchrotron emission in the X-ray regime. However, since the detected diffuse X-ray emission \cite{eger2010} is also centered on the globular cluster core, a simple model where the X-ray and VHE gamma-ray emission originates from the same population of electrons cannot explain the morphology of the VHE source. To summarize, the population of msPSRs would energetically be capable of producing the VHE gamma-ray emission, but the source morphology is not self-evident in such a scenario. \subsection{Hadronic scenario} Globular clusters are believed to boost the rate of stellar mergers due to the extreme stellar densities in their cores (e.g. \cite{shara2002,grindlay2006}). These collisions lead in some cases to stellar explosions, such as SNe Ia powered by white~dwarf~-~white~dwarf mergers. Remnants of these SNe Ia could be the acceleration sites of hadronic cosmic rays (see \cite{acero2010} for the detection of SN 1006 in VHE gamma-rays). To explain the VHE emission, an energy in hadronic cosmic rays of 10$^{51}\,(n/0.1\,\mathrm{cm}^{-3})^{-1}$~erg ($n$: density of the ambient target material) would be needed if it is assumed that the cosmic-ray spectrum follows a power law with index $-2$ below the energy range which can be probed with H.E.S.S. This is somewhat high for a supernova. The source morphology could be explained by a hadronic scenario, but there is no multi-wavelength support for a SNR origin of the VHE gamma-ray emission \cite{clapson2011}. \section{Summary} A source of VHE gamma-ray emission, HESS~J$1747-248$, has been detected in the vicinity of the Galactic globular cluster Terzan~5. It exhibits a flux of about 1.5\% of the flux of the Crab nebula above 440~GeV and a spectrum which can be described by a power law with index $-2.5$. The source appears to be extended and offset from the core of Terzan~5, but it overlaps significantly with the globular cluster. The probability of a chance coincidence between HESS~J$1747-248$ and Terzan~5 is low, but this possibility cannot be ruled out completely. If the VHE gamma-ray source is indeed physically connected with the globular cluster, it would represent the first member of a new class of VHE gamma-ray emitters. The nature of the source is still uncertain. On the one hand, its properties (e.g. its morphology) are challenging to interpret in a scenario where energetic electrons produced by the population of msPSRs up-scatter ambient radiation fields to gamma-ray energies. On the other hand, there is also no support in other wavebands for a scenario where gamma-rays are produced in collisions of hadronic cosmic rays originating from a SNR with target nuclei from the ISM. Further multi-wavelength observations (X-rays, radio) may help to reveal the nature of HESS~J$1747-248$. In parallel, more sophisticated source models may narrow down the list of applicable emission scenarios. \section*{Acknowledgements} The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S. is gratefully acknowledged, as is the support by the German Ministry for Education and Research (BMBF), the Max Planck Society, the French Ministry for Research, the CNRS-IN2P3 and the Astroparticle Interdisciplinary Programme of the CNRS, the U.K. Science and Technology Facilities Council (STFC), the IPNP of the Charles University, the Polish Ministry of Science and Higher Education, the South African Department of Science and Technology and National Research Foundation, and by the University of Namibia. We appreciate the excellent work of the technical support staff in Berlin, Durham, Hamburg, Heidelberg, Palaiseau, Paris, Saclay, and in Namibia in the construction and operation of the equipment.
\section{Introduction} Smith-Purcell radiation (SPR), which arises when a charged particle moves in the vicinity of a periodically deformed surface, is widely used or planned for use both in new Free Electron Laser (FEL) schemes~\cite{Walsh_FEL, Andrews0, Andrews, 5}, including new terahertz sources~\cite{3,4,Prokop}, and in beam diagnostics~\cite{Doucas_NIMB_01,2,Doucas}. In these applications, metal gratings of different shapes are used as radiators. For all metal gratings, SPR has a convenient feature, the so-called Smith-Purcell dispersion relation~\cite{1}: \begin{equation} \fl \displaystyle \lambda_m = \frac{d}{m} \Big (\beta^{-1} - \cos \theta \Big ), \quad m = 1, 2, \ldots \label{Eq:1} \end{equation} Here $\lambda_m$ is the radiation wavelength of the $m$-th diffraction order, $d$ is the grating period, $\beta =v/c$ is the particle velocity in units of the speed of light, and $\theta$ is the radiation polar angle. This relation has been confirmed experimentally many times. In the case of coherent SPR from lamellar gratings it was demonstrated by Shibata et al. in~\cite{Shibata}. Unfortunately, the properties of SPR from dielectric gratings are still poorly understood. The first theoretical investigations were made by Lampel in~\cite{Lampel}, but we believe that the method used in that paper is ambiguous. Experimental investigations were made in~\cite{Yamamoto_PC, Horiuchi}, where the authors examined millimeter-wavelength radiation from 2D and 1D photonic crystals used as SPR targets. The 1D photonic crystal is just a periodic structure consisting of a number of teflon (PTFE) cylinders~\cite{Horiuchi}. Such a structure is very similar to a well-known grating, and one may consider the radiation from it as SPR from a dielectric grating. Nevertheless, in photonic crystals Cherenkov radiation (CR) plays an important role~\cite{Luo, Kremers}. The standard procedure in theoretical studies of the radiation from a photonic crystal is to assume that the crystal is infinite (the number of periods tends to infinity) and non-absorbing~\cite{Kremers}. This automatically leaves no chance to compare the radiation characteristics of dielectric and metal gratings within the same formalism. In the millimeter wavelength region the grating has just a few tens of periods, so the infinite-crystal assumption is not a good one. Hence, we need a model to simulate the characteristics of all kinds of radiation generated by an electron beam moving near the grating. Such a model may be useful for optimizing radiation schemes for practical applications. In this paper we present the results of theoretical and experimental studies of coherent SPR from a lamellar dielectric grating (or 1D photonic crystal) generated by a bunched electron beam with 6.1\,MeV energy in the millimeter wavelength region. To the best of our knowledge, this is only the second experimental investigation of such radiation; the first one was performed by Horiuchi et al. in~\cite{Horiuchi}. Our main goal is to develop a physically clear model suitable for the case of simultaneous generation of coherent SPR and CR in a periodic structure with an arbitrary permittivity, which makes it possible to compare the radiation characteristics of dielectric and metal gratings. In the experimental part we compare the characteristics of radiation from aluminium and teflon lamellar gratings.
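To make the dispersion relation~(\ref{Eq:1}) concrete, the short Python sketch below (our illustration, not part of the analysis) evaluates it for the parameters used later in this paper, a grating period $d=12$\,mm and a beam energy corresponding to $\gamma=12$:

\begin{verbatim}
import math

def sp_wavelength(theta_deg, m=1, d_mm=12.0, gamma=12.0):
    """Smith-Purcell wavelength (mm) of order m at polar angle theta."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return (d_mm / m) * (1.0 / beta - math.cos(math.radians(theta_deg)))

for theta, m in [(60, 1), (120, 3), (110, 1), (68, 1), (57, 1)]:
    print("theta = %3d deg, m = %d: lambda = %4.1f mm"
          % (theta, m, sp_wavelength(theta, m)))
# -> 6.0 mm at 60 deg (m = 1) and 6.0 mm at 120 deg (m = 3),
#    16.1 mm at 110 deg, 7.5 mm at 68 deg and 5.5 mm at 57 deg (m = 1),
#    the values referred to in the simulations and measurements below.
\end{verbatim}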
\section{Theoretical model} In our theoretical estimations we follow the recent papers of Karlovets and Potylitsyn~\cite{Karlovets_JETPLett_09, Karlovets_PRSTAB_10}, where the authors presented a simple and elegant method of solving Maxwell's equations that makes it possible to simulate the characteristics of any type of polarization radiation (including transition radiation, diffraction radiation, CR, and SPR) appearing simultaneously. The term ``polarization radiation'' clearly reflects the nature of the radiation, namely the polarization of the target (grating) material by the electromagnetic field of the traveling charged particle. In the cited articles~\cite{Karlovets_JETPLett_09, Karlovets_PRSTAB_10} it was shown that the method gives the well-known results for transition radiation, diffraction radiation, CR and SPR, which are just different ``kinematic'' cases of polarization radiation. Fig.~\ref{Teor_scheme} shows the geometry of the problem and some designations. The electron bunch moves along the $z$ axis with velocity $v$ at impact parameter $h$. The grating is made of a material with permittivity $\varepsilon(\omega)$ and has period $d$ and number of periods $N_d$. The groove width is $a$, the groove depth is $b$, and the substrate thickness is $g$. The grating size in the direction perpendicular to the figure plane is assumed to be infinite. The detector is situated in the wave (far-field) zone, and its position is determined by a polar angle $\theta$ and an azimuth angle $\phi$. This unusual grating orientation was chosen because the radiation wavelength must be smaller than the size of the grating surface through which the radiation is refracted, in order to use simple Fresnel coefficients (see below). In the cited paper~\cite{Karlovets_PRSTAB_10} this condition was satisfied in another way: it was assumed that $a \ll \lambda$. \begin{figure}[tb] \centering \includegraphics[width=85mm]{fig1} \parbox{80mm}{\caption{Theoretical simulation scheme and some designations.\label{Teor_scheme}}} \end{figure} According to the method used, the magnetic field of the polarization radiation ${\bf H^{pol}} ({\bf r}, \omega )$ in the general case may be written as~\cite{Karlovets_JETPLett_09,Karlovets_PRSTAB_10}: \begin{eqnarray} \fl \displaystyle {\bf H^{pol}} ({\bf r}, \omega ) = {\textrm {curl}} \frac{1}{c} \int \limits_{V_{T}} {\bf j_{pol}^{(0)}} ({\bf {r}}^{\prime}, \omega) \frac{e^{i \sqrt{\varepsilon (\omega )} \frac{\omega}{c} |{\bf r} - {\bf r}^{\prime}|}}{|{\bf r} - {\bf r}^{\prime}|} d^3 r^{\prime}. \label{4} \end{eqnarray} It should be mentioned that this formula is the exact solution of Maxwell's equations, with the only assumption being that the medium is non-magnetic. Here $c$ is the speed of light; ${\bf j_{pol}^{(0)}}({\bf {r}}^{\prime}, \omega) = \sigma (\omega) {\bf E_e} ({\bf {r}}^{\prime}, \omega)$ is the polarization current density, with $\sigma (\omega) = \frac{(\varepsilon(\omega)-1) \omega}{4 \pi i}$ the grating conductivity; ${\bf E_e} ({\bf {r}}^{\prime}, \omega)$ is the Fourier transform of the electron Coulomb field; $\frac{e^{i \sqrt{\varepsilon (\omega )} \omega |{\bf r} - {\bf r}^{\prime}|/c}}{|{\bf r} - {\bf r}^{\prime}|}$ is the Green function, where ${\bf r}^{\prime}$ is the coordinate of the radiation point and ${\bf r}$ is the coordinate of the detection point. The integration is performed over the whole grating volume $V_T$.
In our case (far-field radiation and infinite size in the $x$ direction) the expression may be significantly simplified by expanding the Green function: \begin{eqnarray} \begin{array}{l} \fl\displaystyle {\bf H^{pol}} ({\bf r}, \omega ) = \frac{2 \pi i}{c} \frac{e^{i \sqrt{\varepsilon (\omega)}\, \frac{\omega}{c} r}}{r} {\bf k} \times \\ \fl \qquad \qquad \quad \times \displaystyle \int dz^{\prime} dy^{\prime} {\bf j_{pol}^{(0)}} (k_x, y^{\prime}, z^{\prime}, \omega) e^{-i (k_y y^{\prime} + k_z z^{\prime})}. \end{array} \label{15} \end{eqnarray} Here $\bf k$ is the wave vector in the radiation direction and ${\bf j_{pol}^{(0)}} (k_x, y^{\prime}, z^{\prime}, \omega)$ is the special Fourier transform of the polarization current density. Physically, this expansion means that we replace the radiating region inside the grating by a single effective dipole situated at the coordinate origin. Let us now examine the properties of the bunched beam. Assume that the bunch has some longitudinal (along the $z$ axis) distribution of $N_e$ ($N_e \gg 1$) noninteracting electrons that move in the same direction with the same speed. In this case the bunch current density may be written as: \begin{equation} \fl {\bf j} ({\bf r},t) = e {\bf v} \sum_{n=1}^{N_e} \delta (x)\,\delta (y-h)\, \delta (z-z_n-vt). \end{equation} Here $e$ is the elementary charge, ${\bf r_n} = \{0, h, z_n \}$ is the position of the $n$-th electron in the bunch, and ${\bf v} = \{0,0,v \}$ is the bunch velocity vector. We do not take into account the transverse distribution of the electrons in the bunch because, under the real experimental conditions (the bunch transverse size is less than $\gamma \lambda$), the transverse form factor is close to unity, as was shown by Shibata et al. in~\cite{Shibata}. The complete Fourier transform of the bunch current density has the form: \begin{equation} \fl {\bf j} ({\bf k}, \omega) = \frac{e}{(2\pi)^3} \frac{{\bf v}}{v}\, e^{-i k_y h} \delta \left( \frac{\omega}{v} - k_z\right) \sum_n e^{-i k_z z_n} \end{equation} In this case the complete Fourier transform of the electron bunch field ${\bf E_e} ({\bf k},\omega)$, which is convenient for obtaining the special Fourier transform ${\bf E_e} (k_x, y, z, \omega)$ needed for the solution of the problem (see Eq.~(\ref{15})), may be written as: \begin{equation} \fl {\bf E_e}({\bf k}, \omega) = \frac{2 e i}{(2 \pi)^2 \omega} \frac{{\bf v} \omega^2 /c^2- {\bf k} ({\bf k}, {\bf v} )}{k^2-\omega^2 / c^2} e^{-i k_y h} \sum_n e^{-i k_z z_n } \end{equation} The special Fourier transform ${\bf E_e} (k_x, y, z, \omega)$ for the case $h<0$ may be found as: \begin{eqnarray} \displaystyle \fl {\bf E_e}(k_x,y,z, \omega) = -\frac{i e}{2\pi v } \frac{\exp \left[-(h+ y)\sqrt{k_x^2 + \frac{\omega^2}{v^2} \gamma^{-2}} \right]} {\sqrt{k_x^2 + \frac{\omega^2}{v^2} \gamma^{-2}}} \times \\ \fl \times \left\{ k_x, -i \sqrt{k_x^2 + \frac{\omega^2}{v^2} \gamma^{-2}}, \frac{\omega}{v} \gamma^{-2} \right\} \sum_n e^{i \frac{\omega}{v} (z-z_n)}.
\label{16} \end{eqnarray} Combining Eqs.~(\ref{15}) and~(\ref{16}), one may easily calculate the double integral and obtain the radiation magnetic field: \begin{eqnarray} \begin{array}{l} \displaystyle \fl {\bf H^{pol}} = -\frac{i}{2\pi c} \frac{\omega}{v}(\varepsilon -1)\frac{\exp \left[ i \sqrt{\varepsilon} \frac{\omega}{c} r \right]}{r} \frac{{\bf k} \times {\bf q}}{\sqrt{k_x^2 + \frac{\omega^2}{v^2}\gamma^{-2}}} \frac{\sin \left[ \frac{N_d d}{2} \left(\frac{\omega}{v}-k_z \right) \right] }{\frac{\omega}{v}-k_z} \\ \fl \displaystyle \left[ \frac{\sin \left[ \frac{d-a}{2} \left(\frac{\omega}{v}-k_z \right) \right] }{\sin \left[ \frac{d}{2} \left(\frac{\omega}{v}-k_z \right) \right]} \frac{\exp \left[-b \left(\sqrt{k^2_x+\frac{\omega^2}{v^2}\gamma^{-2}}+i k_y\right)\right] -1}{\sqrt{k^2_x+\frac{\omega^2}{v^2}\gamma^{-2}}+i k_y} + \right. \\ \fl \displaystyle \left. + \frac{\exp \left[-(g+b) \left(\sqrt{k^2_x+\frac{\omega^2}{v^2}\gamma^{-2}}+i k_y\right)\right] - \exp \left[-b \left(\sqrt{k^2_x+\frac{\omega^2}{v^2}\gamma^{-2}}+i k_y\right)\right]}{\sqrt{k^2_x+\frac{\omega^2}{v^2}\gamma^{-2}}+i k_y} \right] \times \\ \fl \displaystyle \times \exp \left[ -h \sqrt{k^2_x+\frac{\omega^2}{v^2}\gamma^{-2}} \right] \sum_n e^{-i \frac{\omega}{v} z_n}, \label{17} \end{array} \label{H_pol} \end{eqnarray} where we used the following designation: \begin{eqnarray} \fl \displaystyle {\bf q} = \left\{k_x, \sqrt{k_x^2+\frac{\omega^2}{v^2}\gamma^{-2}},\frac{\omega}{v}\gamma^{-2} \right\}. \label{18} \end{eqnarray} The spectral-angular density of the radiation in vacuum (after refraction at the grating surface) from a single electron may be found as follows: \begin{equation} \displaystyle \fl \frac{d^2W_{s}}{\hbar d\omega d\Omega} = \frac{c r^2}{\hbar}|{\bf E^{R}_{vac}}|^2, \label{dW} \end{equation} where, in order to find the squared absolute value of the radiation electric field $|{\bf E^{R}_{vac}}|^2$, we use the reciprocity theorem~\cite{Landau}, which was applied to polarization radiation in~\cite{Karlovets_JETPLett_09,Karlovets_PRSTAB_10}: \begin{equation} \displaystyle \fl |{\bf E^{R}_{vac}}|^2 = T_{\perp} |H^{pol}_{\perp}|^2 + T_{\parallel} \left( |H^{pol}_{\parallel}|^2 + |H^{pol}_{y}|^2 \right) \label{E2} \end{equation} where $|H^{pol}_{\perp}|^2 $ and $|H^{pol}_{\parallel}|^2$ are the components of the magnetic field perpendicular and parallel to the plane of incidence: \begin{eqnarray} \begin{array}{l} \displaystyle \fl H^{pol}_{\perp} = H^{pol}_x \frac{\sin \theta \sin \phi}{\sqrt{1-(\sin \theta \cos \phi)^2}} - H^{pol}_z \frac{\cos \theta}{\sqrt{1-(\sin \theta \cos \phi)^2}} \\ \displaystyle \fl H^{pol}_{\parallel} = H^{pol}_x \frac{\cos \theta}{\sqrt{1-(\sin \theta \cos \phi)^2}} + H^{pol}_z \frac{\sin \theta \sin \phi}{\sqrt{1-(\sin \theta \cos \phi)^2}} \end{array} \label{H_comp} \end{eqnarray} and \begin{eqnarray} \begin{array}{l} \displaystyle \fl T_{\perp} = \left| \frac{2 \sin \theta \cos \phi}{\varepsilon \sin \theta \cos \phi + \sqrt{\varepsilon - 1 + (\sin \theta \cos \phi)^2}} \right|^2 \\ \displaystyle \fl T_{\parallel} = \left| \frac{2 \sin \theta \cos \phi}{ \sqrt{\varepsilon} \left( \sin \theta \cos \phi + \sqrt{ \varepsilon - 1 + (\sin \theta \cos \phi)^2} \right) } \right|^2 \end{array} \label{T_comp} \end{eqnarray} are the refraction coefficients expressed through the Fresnel ones. This expression is correct if the radiating surface is larger than the wavelength (see the discussion in~\cite{Karlovets_PRSTAB_10}).
Otherwise, this assumption may introduce some errors. The components of the unit vector in the radiation direction may be written taking into account Snell's law: \begin{equation} \fl {\bf e} = \varepsilon^{-1/2}\left\{\sin \theta \, \sin \phi , -\sqrt{\varepsilon -1 +(\sin \theta \, \cos \phi)^2} , \cos \theta \right\} \label{unit_vec} \end{equation} Combining Eqs.~(\ref{H_pol}) --~(\ref{unit_vec}), one may obtain the solution of the problem. The resulting expression obviously contains the squared sum of the radiation fields of the individual electrons, which may be treated in the following way: \begin{eqnarray} \displaystyle \fl \left| \sum_{n=1}^{N_e} e^{-i \frac{\omega}{v} z_n} \right|^2 = \left\{ \begin{array}{l} \displaystyle N_e, \quad n=m; \\ \displaystyle \sum_{n=1}^{N_e-1} e^{-i \frac{\omega}{v} z_n} \sum_{m=1}^{N_e} e^{i \frac{\omega}{v} z_m}, \quad n \neq m. \end{array} \right. \end{eqnarray} In the case $n=m$ one obtains simple incoherent radiation that depends linearly on the bunch population. In the case $n \neq m$ one obtains coherent radiation: \begin{eqnarray} \begin{array}{l} \fl \displaystyle \sum_{n=1}^{N_e-1} e^{-i \frac{\omega}{v} z_n} = \sum_{n=1}^{N_e-1} \int_{-\infty}^{\infty} \delta (z-z_n) e^{-i \frac{\omega}{v} z} dz =\\[3ex] \fl \displaystyle (N_e-1)\int_{-\infty}^{\infty} \rho (z) e^{-i \frac{\omega}{v} z} dz, \end{array} \end{eqnarray} where for a Gaussian beam with rms length $\sigma_z$ one may obtain ($N_e \gg 1$): \begin{equation} \fl \rho(z)=\frac{1}{N_e-1} \sum_{n=1}^{N_e-1} \delta (z-z_n) \simeq \frac{1}{\sqrt{2\pi}\sigma_z} \exp\left[-\frac{z^2}{2 \sigma_z^2}\right] \end{equation} The total spectral-angular density of SPR from a bunch with population $N_e$ may be written as: \begin{equation} \fl \displaystyle \frac{d^2W_{tot}}{\hbar d\omega\, d\Omega} = \frac{d^2W_{s}}{\hbar d\omega\, d\Omega} N_e \left[ 1+(N_e-1) |f_z(\sigma_z)|^2 \right], \label{33} \end{equation} where $\displaystyle \left|f_z(\sigma_z)\right|^2= \left|\int_{-\infty}^{\infty} \rho (z) e^{-i \frac{\omega}{v} z} dz\right|^2$ is the longitudinal form factor of the electron bunch. For a Gaussian bunch with rms length $\sigma_z$ the form factor has the standard form: \begin{eqnarray} \fl \displaystyle |f_z(\sigma_z)|^2 = \exp \left[-\frac{\omega^2 \sigma_z^2}{\beta^2 c^2}\right]. \label{form} \end{eqnarray} In our theoretical estimations we made several assumptions that should be taken into account in the experiment. First, we assumed that the width of the grating is infinite. Second, we assumed that a point-like detector is situated far away from the grating. The third important assumption concerns the radiation output through only one surface of the grating with a single refraction, so we may lose some information about secondary refractions in the grooves.
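For orientation, the sketch below (our illustration) evaluates the form factor~(\ref{form}), with $\omega = 2\pi c/\lambda$, for the bunch length $\sigma_z=2$\,mm used in the comparison that follows:

\begin{verbatim}
import math

def form_factor_sq(lambda_mm, sigma_z_mm=2.0, gamma=12.0):
    """|f_z|^2 of Eq. (form) for a Gaussian bunch, omega = 2*pi*c/lambda."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return math.exp(-(2.0 * math.pi * sigma_z_mm / (beta * lambda_mm))**2)

for lam in (6.0, 12.0, 16.0):
    print("lambda = %4.1f mm: |f_z|^2 = %.3f" % (lam, form_factor_sq(lam)))
# -> 0.012 at 6 mm (the value discussed in the experimental section),
#    0.331 at 12 mm and 0.537 at 16 mm: coherence strongly favors the
#    longer wavelengths for a 2 mm bunch.
\end{verbatim}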
Now let us compare the characteristics of radiation from the teflon and metal gratings using the same expression~(\ref{33}). We assume that the teflon permittivity is $\varepsilon_t=2.1+0.001i$ and the metal permittivity is $\varepsilon_m=1+10^6 i$. The period of both gratings is $d=12$\,mm, the number of periods is $N_d=13$, and the groove width is $a=d/2=6$\,mm. For the teflon grating the groove depth is assumed to be $b_t=5$\,mm and the substrate thickness $g_t=1$\,mm. It is obvious that no SPR will be generated from the metal grating in the geometry shown in Fig.~\ref{Teor_scheme}, because the polarization currents would be induced only in a skin layer inside the substrate, which has no periodic deformation. That is why we take the substrate thickness of the metal grating equal to $g_m=0$. This assumption may introduce an error because of the previously used Fresnel coefficients, but it is the only way to carry out the comparison within the same formalism. Because of the skin layer, we take the metal grating groove depth equal to $b_m=0.1$\,mm. The impact parameter is $h=10$\,mm for approximately 6.1\,MeV ($\gamma=12$) electrons. The bunch length (rms) is $\sigma_z=2$\,mm. Both in the simulations and in the experiment, only one polarization component of the radiation is taken into account, namely the one in the $yz$ plane. Fig.~\ref{fig2} shows the polar dependence of the monochromatic ($\lambda=6$\,mm) coherent SPR from both gratings. One may clearly see in Fig.~\ref{fig2} the first and third orders of the SPR line ($\theta_1=60$\,deg and $\theta_3=120$\,deg). The second order is suppressed, and the reason for this is not clear; we believe this effect appears because of our single-refraction assumption~\cite{Karlovets_PRSTAB_10}. Nevertheless, one may see that the SPR from both gratings has almost the same intensity, but in the case of the teflon grating there is powerful radiation at smaller polar angles. It is not really correct to speculate about forward-directed radiation, because our theoretical simulations do not take into account the radiation from the faces of the grating. \begin{figure}[tb] \centering \includegraphics[width=80mm]{fig2} \parbox{80mm}{\caption{Monochromatic ($\lambda=6$\,mm) coherent SPR from the teflon grating (blue dashed line) and the metal grating (red solid line). \label{fig2}}} \end{figure} Figs.~\ref{fig3a} and~\ref{fig3b} show the spectral-angular distributions of the radiation from both gratings. One may clearly see that the red line, marking the Smith--Purcell relation for the first diffraction order, is not the only dispersion line in these figures. The additional lines are very similar to those measured by Horiuchi et al. in~\cite{Horiuchi}. As mentioned before, the authors of the cited paper measured SPR from a periodic teflon target and found some additional dispersion lines that were not observed for the metal grating. The model developed here shows almost the same situation: the additional radiation from the metal grating is very weak compared with that from the teflon grating. This effect may be explained by the CR contribution. We should also mention that the simulated polar distribution is monotonic. \begin{figure}[tb] \centering \includegraphics[width=80mm]{fig3} \parbox{80mm}{\caption{Spectral--angular dependence of the coherent SPR from the teflon grating. The red line shows the Smith--Purcell dispersion relation~(\ref{Eq:1}). \label{fig3a}}} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=80mm]{fig4} \parbox{80mm}{\caption{Spectral--angular dependence of the coherent SPR from the metal grating. The red line shows the Smith--Purcell dispersion relation~(\ref{Eq:1}). \label{fig3b}}} \end{figure} \section{Experiment} In order to check the theoretical predictions, we carried out an experimental investigation of the spectral-angular characteristics of the coherent SPR from the teflon lamellar grating and compared them with those of the aluminium grating. The experimental scheme is shown in Fig.~\ref{fig:1_5}. The impact parameter, the polar angle $\theta$ and the azimuth angle $\phi$ were changed during the experiment using stepper motors. The electron beam was extracted into air through a $40$\,$\mu$m Be foil.
The train of bunches with electron energy 6.1\,MeV ($\gamma\approx 12$), consisting of $n_b=10500$ bunches (the bunch population is about $N_e=10^8$ electrons) with $\tau=4$\,$\mu$s train duration, travels along the grating. The transverse sizes of the electron beam at the extraction point are about $4\times4$\,mm$^2$ (full width). The longitudinal distribution of the electron density in the bunch is believed to be Gaussian with rms $\sigma_z=2$\,mm. \begin{figure}[tb] \centering \includegraphics[width=80mm]{fig5} \parbox{80mm}{\caption{The scheme of the experiment. \label{fig:1_5}}} \end{figure} The detection system consisted of a so-called ``telescope'': a parabolic mirror (diameter $170$\,mm, focal distance $151$\,mm) with the detector set up at its focus. Such a telescope allows one to measure angular radiation characteristics equal to those in the wave (far-field) zone~\cite{Naum_JETPLett}. The distance from the grating to the parabolic mirror was $450$\,mm. The radiation from each train was detected using a DP-21M detector. The latter is based on a wide-band antenna, a high-frequency low-barrier Schottky diode and a preamplifier. The sensitivity of the detector was measured in the wavelength regions $3.8 \div 5.6$\,mm and $11 \div 17$\,mm and was equal to $300$\,mV/mW at the wavelengths $5.5$\,mm and $11.5$\,mm~\cite{17}. The registered waveband ($3 \div 25$\,mm) was limited by the coherence threshold at shorter wavelengths and by the beyond-cutoff waveguide (diameter $15$\,mm), used to decrease the accelerator RF background, at longer wavelengths. The angular acceptance of the detector was defined by the ratio of the beyond-cutoff waveguide diameter to the focal distance of the parabolic mirror and was equal to about $5$\,deg. Incoherent radiation cannot be measured by the detector. The measured radiation yield was averaged over 20 trains. The statistical error was less than 10\% during the experiments. The beam center was determined by scanning the grating across the beam and measuring the Faraday cup signal; in this case the grating operated as a ``narrow scraper''. The impact parameter was set to $h=12$\,mm. During the experiment a grid polarizer was used, and the radiation polarization component in the $yz$ plane was measured. We used teflon and aluminium lamellar gratings with the period $d=12$\,mm. The grating length was $150$\,mm, the width was $120$\,mm, the groove width was $a=d/2=6$\,mm, the groove depth was $b=5$\,mm, and the substrate thickness was $g=1.3$\,mm. The geometry shown in Fig.~\ref{Teor_scheme} is not suitable for the aluminium grating, which is why it was set up in the ``standard'' position with the grooves directed toward the detector. However, for the teflon grating both orientations are possible, and both were used during the experiment. Let us denote by G1 the orientation of the teflon grating with the substrate directed toward the detector (just as in Fig.~\ref{Teor_scheme}) and by G2 the orientation of the teflon grating with the grooves directed toward the detector (the opposite case). As the first step, we measured the polar distribution of the coherent SPR for both gratings in all possible geometries. The result is shown in Fig.~\ref{fig5} by the red dots for the aluminium grating, by the green diamonds for the teflon grating in the G1 orientation, and by the blue stars for the teflon grating in the G2 orientation. The statistical error was comparable with the point size and is not shown in the figure.
The blue line shows a ``control level'', which was chosen in order to take into account only the stronger effects. \begin{figure}[tb] \centering \includegraphics[width=80mm]{fig6} \parbox{80mm}{\caption{Coherent SPR intensity vs. polar angle. Red dots -- aluminium grating, green diamonds -- teflon grating in the $G1$ orientation, blue stars -- teflon grating in the $G2$ orientation. The blue horizontal line shows the ``control level''. \label{fig5}}} \end{figure} First, one may see that almost all the radiation from the aluminium grating is situated in the region of large polar angles and falls drastically when the polar angle $\theta$ exceeds $112$\,deg. We believe that this fact, which does not agree with our theoretical estimations, is caused by the finite transverse size of the grating. Indeed, the transverse radius of the Lorentz-boosted electron field is about $\gamma \lambda$, and for $\lambda=16$\,mm (taken from the Smith-Purcell relation for $\theta=110$\,deg) it is about $\gamma \lambda = 190$\,mm, while the grating width is just $120$\,mm. That seems to be the reason for the significant attenuation of the longer wavelengths. The radiation at polar angles less than $90$\,deg is very weak. Let us now analyze the SPR from the teflon grating. First of all, one may see that the radiation is not monotonic but has a clear peak structure. There is one pronounced peak for G2 situated near $\theta=68$\,deg. For G1 we see the same peak near $\theta=68$\,deg, but it is less intense, and there is an additional peak near $\theta=57$\,deg. The peak structure looks very strange, but it was also observed in the previously cited paper~\cite{Horiuchi}, where it was explained in terms of photonic-crystal band gaps. In our theoretical estimations we assumed a single refraction through one plane, but there seems to be an additional contribution to the radiation yield from the planes that were not taken into account (faces, grooves). It seems that this problem should be solved numerically. As the second step, we measured the radiation spectra in both registered peaks. Low-pass dichroic filters were used for this procedure~\cite{Doucas,Hanke}. The filters were placed in sequence before the detector instead of the beyond-cutoff waveguide. In this case the angular acceptance of the detector is rather large (about $16$\,deg), which affects the detected spectra. In Fig.~\ref{fig6} the spectrum of the coherent SPR is shown. The spectrum was measured from the teflon grating in the G2 orientation in the peak near $68$\,deg. From Fig.~\ref{fig6} one may see that the measured spectrum has two lines: one in the region from $8.5$ to $10.2$\,mm and another from $11.9$ to $13.6$\,mm. According to the Smith-Purcell dispersion relation (Eq.~(\ref{Eq:1})), we should have here the wavelength $\lambda=7.5$\,mm. Such a shift of the first registered line is not fully understood, but the reason may be the large angular acceptance of the detector during the spectral measurements. The second spectral line has nothing to do with the Smith-Purcell relation. In our theoretical simulations (see Fig.~\ref{fig3a}) we showed that there are rather powerful additional radiation lines. It seems that the second measured spectral line comes from these additional lines. \begin{figure}[tb] \centering \includegraphics{fig7} \parbox{80mm}{\caption{The spectrum of the coherent SPR from the teflon grating in the G2 orientation in the peak near $68$\,deg.
\label{fig6}}} \end{figure} In Fig.~\ref{fig7} the spectrum of the coherent SPR measured from the teflon grating in the G1 orientation in the peak near $57$\,deg is shown. This peak was measured only up to a $12$\,mm wavelength because of technical limitations. One may clearly see that the measured radiation has a maximum in the wavelength region from $5.1$\,mm to $6.8$\,mm. According to the Smith-Purcell relation, radiation with a wavelength of $5.5$\,mm should be generated at this polar angle. This fact is the most interesting one in our investigation, because this radiation wavelength is significantly below our coherence threshold. Substituting the wavelength ($\lambda=6$\,mm) and our bunch length ($\sigma_z=2$\,mm) into Eq.~(\ref{form}), one obtains a form-factor value equal to $|f_z(\sigma_z)|^2 = 0.012$. According to our theoretical simulations, at this wavelength the coherent SPR from the metal grating and from the teflon one should have almost the same intensity (see Fig.~\ref{fig2}). But the radiation from the teflon grating is strong enough to be measured, in contrast to the radiation from the metal grating, which was not detected (see Fig.~\ref{fig5}). This fact needs additional theoretical investigation but seems promising for the creation of new radiation sources in the sub-millimeter and terahertz regions based on coherent radiation of short electron bunches. One may also see in Fig.~\ref{fig7} that there is some amount of radiation in the region from $8.5$\,mm to $10.2$\,mm. The origin of this radiation is the large angular acceptance of our detection system: we detect some radiation from the $68$\,deg peak. \begin{figure}[tb] \centering \includegraphics{fig8} \parbox{80mm}{\caption{The spectrum of the coherent SPR from the teflon grating in the G1 orientation in the peak near $57$\,deg. \label{fig7}}} \end{figure} \section{Conclusion} In this paper we have presented a simple model for the estimation of coherent SPR characteristics that is suitable for both metal and dielectric gratings. The predicted radiation from the dielectric grating differs drastically from the SPR from the metal one. There are additional intense radiation lines due to the CR mechanism. This additional radiation may be useful for SPR-based FELs, because it may intensify the process of continuous beam bunching. This assumption certainly needs additional investigation. In the experimental part of our investigation we have compared the characteristics of the coherent SPR from the aluminium and teflon lamellar gratings under the same conditions. These characteristics differ significantly: the radiation from the aluminium grating is situated in the region of large polar angles, where it is expected because of the coherence threshold. The radiation from the teflon grating has a clear peak structure rather than a monotonic one, and these peaks are situated at smaller polar angles. The intensity of radiation from the teflon grating and the SPR intensity from the aluminium one are of the same order of magnitude. The radiation spectra in these peaks have a complicated structure, supporting our prediction of additional radiation lines. Also, the radiation spectrum in the peak at $\theta=57$\,deg shows the presence of the $\lambda \sim 6$\,mm wavelength, which should be significantly suppressed because of our coherence threshold.
This means that the radiation from the teflon grating at this wavelength is strong enough to be measured, in spite of the form-factor value of $0.012$, in contrast to the radiation from the metal grating, which was not detected. This fact is also promising for the creation of new radiation sources based on prebunched electron beams with small dimensions. In conclusion, we may say that polarization radiation from dielectric gratings is a new field of investigation and seems to be really promising for practical applications in various fields. \section{Acknowledgment} The authors would like to thank D.V. Karlovets for useful discussions during the theoretical estimations and the microtron staff for their help during the experiment. The work was partly supported by the Russian Federal Agency for Education within the program ``Scientific and educational specialists of innovative Russia'' under contracts No. $\Pi617$ and $\Pi1143$. \section*{References}
\section{Introduction} \label{intro} Micro-expression, a very brief and involuntary form of facial expression occurring when people try to conceal their true feelings, usually lasts between 0.04\,s and 0.2\,s \cite{Ekm2009}. Automatic micro-expression analysis involves micro-expression spotting and micro-expression recognition (\textbf{MER}) \cite{Oh2018a}. Micro-expression spotting aims to automatically detect the temporal interval (from onset to offset) of a micro-movement in a sequence of video frames, including apex frame spotting, while MER refers to the classification task of identifying the micro-expression involved in a well-segmented video from onset to offset \cite{Oh2018a, Patel2015}. Our focus in this work is micro-expression recognition. As an essential way of understanding human emotional behavior, MER has attracted increasing attention in human-centered computing in the past decades. Its potential applications,~\textit{e.g.}~in police case diagnosis and psychoanalysis \cite{Ekman1969,Ekm2009}, make it a core component of the next generation of computer systems, in which a natural human-machine interface enables the user to account for subtle appearance changes of human faces, to reveal the hidden emotions \cite{Michael2010,Ekman2009} of humans, and to help understand people's deceitful behaviors. Although advances have been made, due to complex factors,~\textit{e.g.}~the subtle changes of micro-expressions, it is extremely difficult for MER to achieve superior performance. Among these factors, one critical research issue is how to extract salient and discriminative features from micro-expressions. To date, a number of feature descriptor methods have been proposed. They are commonly categorized into handcrafted features and deep learning features. However, these existing methods characterize the discriminative properties of micro-expressions inefficiently. Moreover, it is time-consuming to design handcrafted features~\cite{Xu2017} and to manually adjust their optimal parameters~\cite{Lu2014}. To address these issues, a simple yet efficient method, termed Feature Refinement (\textbf{FR}), is proposed to extract expression-specific features. FR consists of three feature refinement stages: expression-shared feature learning, expression-specific feature distilling, and expression-specific feature fusion. In the first stage, a shallow two-stream network with Inception blocks is designed to capture the global and local information of optical flows for expression-shared feature learning. In the expression-specific feature distilling stage, based on the expression-shared features, a proposal module with an attention mechanism and a proposal loss is introduced to distill expression-specific features and thereby obtain salient and discriminative features. The constraint of the expression-specific objective function is designed to lead to separate and distinct feature mappings. In the last stage, the element-wise sum function is used to fuse the separate expression-specific features into expression-refined features, which the deep network uses to predict the expression categories. The fusion of expression-specific features can boost the feature learning. These three stages constitute the whole process of feature refinement. Across these three stages of feature learning, the deep network obtains salient and discriminative representations for MER.
Overall, our contributions can be summarized as follows. \begin{itemize} \item We propose a deep-learning-based three-stage feature refinement architecture for MER. This architecture can effectively and automatically learn salient and discriminative features for micro-expression recognition by distilling expression-specific features and fusing them into final expression-refined features for expression classification. \item We propose a constructive yet straightforward attention strategy and a simple proposal loss in the expression proposal module for expression-specific feature learning.~Specifically, the attention factors can capture the characteristics of the subtle changes of micro-expressions, while the penalty of the proposal loss optimizes the discriminability of the features. \item We extensively validate our FR on three benchmarks of MER. The experimental results clearly demonstrate the effectiveness of expression-specific feature learning for micro-expression recognition and provide up-to-date results across three commonly used experimental protocols. \end{itemize} The rest of the paper is organized as follows. Section \uppercase\expandafter{\romannumeral2} introduces the related work. Section \uppercase\expandafter{\romannumeral3} presents our Inception-based feature learning algorithm in detail. Section \uppercase\expandafter{\romannumeral4} reports our experimental analysis. Section \uppercase\expandafter{\romannumeral5} draws conclusions and discusses future research directions. \section{Related Work} \subsection{Handcrafted features} The success of existing traditional approaches in MER is attributed in good part to the quality of the handcrafted visual feature representations. Normally, handcrafted features are fed into a supervised classifier,~\textit{e.g.}~Support Vector Machines~\cite{Cortes1995}, to train a recognizer for the target expressions. The features are generally categorized into appearance-based and geometric-based features. \subsubsection{Appearance-based features} Local Binary Pattern from Three Orthogonal Planes (LBP-TOP) \cite{Zhao2007} is the most widely used appearance-based feature for micro-expression recognition. It combines temporal features with spatial features from three orthogonal planes of the image sequence. As one of the earlier works in MER, LBP-TOP has been widely used in micro-expression analysis. Due to its low computational complexity, many LBP-TOP variants have been proposed,~\textit{e.g.}~LBP from three Mean Orthogonal Planes (LBP-MOP) \cite{Wang2015}, Spatiotemporal Completed Local Quantized Patterns (STCLQP) \cite{Huang2016}, and hierarchical spatiotemporal descriptors \cite{Zong2018a}. Wang~\textit{et al.}~\cite{Wang2014} proposed a spatiotemporal descriptor utilizing six intersection points, namely LBP with Six Intersection Points (LBP-SIP), to suppress redundant information in LBP-TOP while keeping the computational complexity low. Wang~\textit{et al.}~\cite{Wang, Wang2015a} explored the influence of the color space on feature extraction and extracted tensor features from the Tensor Independent Color Space (TICS), validating that color information can improve the recognition performance of LBP-TOP. Huang~\textit{et al.}~\cite{Huang2015} proposed Spatiotemporal LBP with Integral Projection (STLBP-IP), which used facial shape information to improve recognition performance.
Huang~\textit{et al.}~\cite{Huang2019} further proposed Discriminative Spatiotemporal LBP with Revisited Integral Projection (DiSTLBP-RIP) to reveal discriminative information. Besides the LBP family, 3D Histograms of Oriented Gradients (3DHOG) \cite{Polikovsky2009, Polikovsky2013} is another appearance-based feature, counting occurrences of gradient orientations in localized portions of the image sequence. \subsubsection{Geometric-based features} Geometric-based features aim to represent micro-expression samples in terms of face geometry,~\textit{e.g.}~the shapes and locations of facial landmarks. They can be categorized into optical-flow-based and texture-variation-based features. The optical flow method, which estimates the apparent motion of objects, has mostly been used to extract motion features in MER. To suppress facial identity appearance information in micro-expressions, Lu~\textit{et al.}~\cite{Lu2014} proposed the Delaunay-based Temporal Coding Model (DTCM), which encodes the local temporal variation in each sub-region by Delaunay triangulation and standard deviation analysis. Liu~\textit{et al.}~\cite{Liu2016} proposed the Main Directional Mean Optical Flow (MDMO) to reduce the feature dimension by using the optical flow in the main direction. MDMO is little affected by the varying number of frames in the image sequence. Xu~\textit{et al.}~\cite{Xu2017} designed the Facial Dynamics Map (FDM) to suppress abnormal optical flow vectors resulting from noise or illumination changes. Liong~\textit{et al.}~\cite{Liong2018a} proposed Bi-Weighted Oriented Optical Flow (Bi-WOOF), which applies two schemes to weight the HOOF \cite{Chaudhry2009} descriptor locally and globally. Bi-WOOF obtains promising performance using only the onset and apex frames, which increases its effectiveness by a large margin. Overall, handcrafted feature extraction approaches mostly rely on manually designed extractors, which require professional knowledge and a complex parameter adjustment process. Meanwhile, each method suffers from limited generalization ability and robustness. Furthermore, due to their limited representation ability, engineered features can hardly handle the challenge of nonlinear feature warping caused by complicated situations,~\textit{e.g.}~different environments. \subsection{Deep learning features} Recently, deep learning has been considered an efficient way to learn feature representations. According to the different evaluation mechanisms, existing methods are evaluated on a single database, on the Composite Database Evaluation protocol, or on the Cross-database Micro-expression Recognition protocol. \subsubsection{Features evaluated on a single database} Evaluation on a single database means that the training and testing samples are from the same micro-expression database. The Leave-One-Subject-Out (LOSO), Leave-One-Video-Out (LOVO) or $K$-fold rule is commonly used to evaluate the deep learning models. For example, Kim~\textit{et al.}~\cite{Kim2016} proposed a feature representation for the spatial information at different temporal states, which was based on the Long Short-Term Memory (LSTM) \cite{Hochreiter1997} recurrent neural network and was evaluated only on the single CASME~\uppercase\expandafter{\romannumeral2} dataset \cite{Yan2014}. Peng~\textit{et al.}~\cite{Peng2017} proposed the Dual Temporal Scale Convolutional Neural Network (DTSCNN), which was evaluated on the CASME \cite{Yan2013} and CASME~\uppercase\expandafter{\romannumeral2} databases.
DTSCNN was the first work in MER that utilized a shallow two-stream neural network with optical-flow sequences as inputs. Nag~\textit{et al.}~\cite{Nag2019} proposed a unified architecture for micro-expression spotting and recognition, in which the spatial and temporal networks extract time-contrasted features from the feature maps to contrast out the subtle motions of micro-expressions. Wang~\textit{et al.}~proposed the Transferring Long-term Convolutional Neural Network (TLCNN) \cite{Wang2018}, which utilized transfer learning from macro-expression to micro-expression databases for MER. Many other deep learning features have been evaluated on a single database,~\textit{e.g.}~Spatiotemporal Recurrent Convolutional Networks (STRCN) \cite{Xia2020}, the three-stream 3D flow convolutional neural network \cite{Li2018}, the Lateral Accretive Hybrid Network (LEARNet) \cite{Verma2019}, and a 3D-Convolutional Neural Network method \cite{Reddy2019}. \subsubsection{Features evaluated on the Composite Database Evaluation protocol} Under the Composite Database Evaluation (CDE) protocol \cite{See2019}, all the micro-expression databases are combined into one database and the LOSO validation rule is used to evaluate the algorithm. As a deep learning work based on the CDE protocol, the Optical Flow Feature from Apex frame Network (OFF-ApexNet)~\cite{Gan2019} extracts optical flow features from the onset and apex frames of each video and then learns feature representations by feeding the horizontal and vertical components of the optical flow into a two-stream CNN. Since the CNN is shallow, it reduces the over-fitting caused by the scarcity of data in the micro-expression databases. Very recently, more deep learning features were proposed \cite{Liong2019,Zhou2019,Liu2019,Quang2019} in MEGC 2019 \cite{See2019}. More specifically, Expression Magnification and Reduction (EMR) with adversarial training \cite{Liu2019} is a part-based deep neural network approach with adversarial training and expression magnification. With its special data augmentation strategy of expression magnification and reduction, EMR won first place in MEGC 2019. The Shallow Triple Stream Three-dimensional CNN (STSTNet) \cite{Liong2019} is an extended version of OFF-ApexNet. Besides the horizontal and vertical components of the optical flow, STSTNet extracts the handcrafted feature named optical strain to learn more efficient features. By concatenating the three optical-flow features into a single 3D image and then feeding the concatenated images into a shallow three-dimensional CNN, STSTNet achieved state-of-the-art performance under the CDE protocol among the methods without data augmentation. Dual-Inception~\cite{Zhou2019} feeds the optical flow features extracted from the onset and mid-position frames into a designed two-stream Inception network. With data augmentation methods, Quang et al. \cite{Quang2019} applied Capsule Networks (CapsuleNet) based on the apex frames to MER. \subsubsection{Features evaluated on the Cross-database Micro-expression Recognition protocol} The Cross-database Micro-expression Recognition (CDMER) protocol means that the training and testing samples are selected from two different micro-expression databases~\cite{Zong2017,Yap2018,Zong2019}. Based on emotion classification, Zong~\textit{et al.}~proposed a domain-regeneration approach for cross-database micro-expression recognition~\cite{Zong2018}.
The Holdout-database Evaluation (HDE) protocol, which can be considered a specific type of CDMER, was advocated in MEGC 2018 \cite{Yap2018}. This protocol aims to tackle the recognition of micro-expressions based on AU-centric objective classes rather than emotion classes. The two earliest works that introduced Deep Neural Networks (DNNs) under the HDE protocol were proposed by Peng~\textit{et al.}~\cite{Peng2018} and Khor~\textit{et al.}~\cite{Khor2018}. Specifically, Peng~\textit{et al.}~used ResNet10 as a backbone and introduced transfer learning from macro-expression databases to learn micro-expression features~\cite{Peng2018}. Khor~\textit{et al.}~adopted the Enriched Long-term Recurrent Convolutional Network (ELRCN) to improve the recognition performance~\cite{Khor2018}; it contained a channel-wise module for spatial enrichment and a feature-wise module for temporal enrichment, and predicted the micro-expression by passing the feature vector through an LSTM. Although the aforementioned works have studied the problem of feature learning for MER, they primarily focused on learning expression-shared features from the input while ignoring how to obtain salient and discriminative features. Due to the low intensity of micro-expressions, generic feature learning fails to reveal the intrinsically different characteristics of different micro-expressions. Expression-shared feature learning aims to learn an identical feature space in the process of classification~\cite{Guo2019}. However, as noted in~\cite{Guo2019}, identical features for all categories can hardly lead to optimal performance. To solve these issues of expression-shared feature learning, this paper leverages expression-specific discriminant feature mappings for MER. Specifically, a straightforward feature refinement framework is proposed to learn salient and discriminative features for micro-expression recognition, which distills expression-specific features from expression-shared features by an expression proposal module with an attention mechanism. \begin{figure*}[!t] \centering \includegraphics[width=\textwidth]{framework} \caption{ The architecture of the proposed FR for MER. The ``Databases'' module shows data from different micro-expression databases. The ``expression-shared feature learning'' module contains apex frame selection, optical flow extraction, and expression-shared feature learning in the basic Inception network. The ``expression-specific feature learning'' module shows the expression-specific feature learning using separate attention factors in the proposal module. The ``feature fusion for classification'' module illustrates that the expression-specific features are fused by the element-wise sum function and the fused features are used to predict the category labels.} \label{fr} \end{figure*} \section{Proposed Method} \label{sec:ProposedMethod} Fig.~\ref{fr} describes our proposed FR architecture. It leverages a two-stream Inception network as the backbone for expression-shared feature learning, an expression proposal module with an attention mechanism for expression-specific feature learning, and a classification module that predicts labels from the fused expression-refined features. \subsection{Expression-shared feature learning} As shown in Fig.~\ref{fr}, the expression-shared feature learning module consists of three critical components: apex frame selection, optical flow extraction, and a two-stream Inception network.
Note that, since the SMIC database~\cite{Li2013} does not provide human-annotated apex frames, apex frame spotting is necessary. Several apex frame spotting algorithms have been proposed in recent years~\cite{Liong2015, Patel2015, Li2018b, Yan2014a,Peng,Zhou2019}. For example, the mid-position frame is straightforwardly chosen as the apex frame in~\cite{Peng,Zhou2019}. Moreover, Liu~\textit{et al.}~\cite{Liu2019} used motion differences to locate the apex frame. Quang~\textit{et al.}~\cite{Quang2019} divided the face image into ten regions and then computed the absolute pixel differences to find the apex frame. Considering the trade-off between efficiency and effectiveness, the inter-frame difference method (interframe-Diff) is presented to locate the apex frame on the SMIC database. The interframe-Diff defines the index of the apex frame $t_{apex}$ as follows: \begin{equation} t_{apex} = \arg \max_{t}\; mean\left(\left|I(x, y, t)-I(x, y, 0)\right|\right), \label{diff1} \end{equation} where $I(x,y,t)$ denotes the $t$-th frame of each sample, and $mean(\cdot)$ computes the mean over all positions $(x, y)$ of the absolute pixel difference between the onset (the $0$-th) frame and the $t$-th frame. As a motion information feature, optical flow is extensively used for micro-expression recognition~\cite{Liong2019, Gan2019, Liong}. More specifically, the optical flow of each sample is extracted from the onset and apex frames, which denote the frame with a neutral expression and the frame with the highest expression intensity, respectively. For FR, the TV-L1 optical flow method~\cite{Zach2007} is utilized to obtain the motion feature from the onset and apex frames of each micro-expression video. As shown in Fig.~\ref{fr}, to preserve more motion information, two optical flow components are extracted to represent the facial change along the horizontal and vertical directions. Furthermore, considering the horizontal and vertical components of optical flow, FR adopts a two-stream Inception network based on the Inception V1 block~\cite{Szegedy2015}, with a depth of two layers. Specifically, the Inception block is designed to capture both the global and local information of each optical-flow component for feature learning. Unlike traditional convolution with a fixed filter size, Inception blocks parallelize filters of multiple sizes at the same level. Additionally, motivated by LeNet~\cite{Lecun1998}, the number of filters in the first and second layers is set to 6 and 16, respectively. Finally, the two sets of flattened feature maps are concatenated into an expression-shared feature. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{attention_unit} \caption{ Attention unit in the proposal module for expression-specific feature learning.} \label{att_unit} \end{figure} \subsection{Expression-specific feature learning} As illustrated in Fig.~\ref{fr}, based on the expression-shared feature, an expression proposal module with an attention mechanism and a proposal loss is introduced to learn the expression-specific features. Given a micro-expression sample $x$ and an expression-shared feature $z$, a proposal module with $K$ sub-branches is designed, where each sub-branch learns a specific feature for one micro-expression category ($K$ categories in total). The proposal module is the core component of FR for obtaining salient and discriminative features. The $Softmax$ attention and proposal loss are described as follows.
\textbf{Softmax attention:} The attention mechanism gives our model the flexibility to learn $K$ expression-specific features from the same expression-shared feature. The $K$ attention units are shown in part (2) of Fig.~\ref{fr}, and the detail of each attention unit is depicted in Fig.~\ref{att_unit}. Specifically, the expression-shared feature $z$ is connected with $K$ fully connected layers separately. After activation by $Softmax$, we obtain the attention weight of $z$ for each specific expression, \begin{equation} a_k= {softmax}_k(z), \label{att} \end{equation} where $a_k$ is a vector of the same dimension as $z$, and $k\leq K$. Then, the representation $z_k^*$ for each specific expression is given by: \begin{equation} z_k^*= a_k\ast z. \label{specific} \end{equation} \textbf{Proposal loss:} Besides the attention strategy, expression-specific feature learning is also constrained by a proposal loss, which is the average of the $K$ expression-specific detection losses in the proposal module. Each expression-specific detector (`Detector$\_k$' in Fig.~\ref{fr}) is fed with a feature vector $z_k^*$ and contains two fully connected layers with a $Sigmoid$~layer as the output. Thus, each sub-proposal branch is trained by optimizing the following detection loss, \begin{equation} \footnotesize \mathcal{L}^{(k)}_{detc}(\theta;y_p)=-\sum_{i=1}^{N}[y_p^{k(i)}\log(p(y_p^{k(i)})) +(1-y_p^{k(i)})\log(1-p(y_p^{k(i)}))], \label{prop} \end{equation} where $y_p^{k(i)}$ and $p({y_p}^{k(i)})$ denote the ground truth (either 1 or 0) and the predicted probability of the $i$-th sample being a $k$-th category expression, respectively, and $N$ is the total number of training samples. $\mathcal{L}^{(k)}_{detc}$ allows the network to generate expression-specific features for every expression. Based on the losses of the individual sub-branches, the loss of the proposal module consisting of $K$ sub-branches is defined as the average of the $K$ detection losses: \begin{equation} \mathcal{L}_{prop}(\theta;y_p) = {\frac {1}{K}}\sum_{k=1}^K\mathcal{L}_{detc}^{(k)}. \label{prop1} \end{equation} Under the constraint of the proposal loss $\mathcal{L}_{prop}$ together with the attention mechanism, the proposal module obtains salient and discriminative expression-specific features for micro-expression recognition. \subsection{Fused expression-refined features for classification} A simple way to aggregate the expression-specific features is to concatenate them into one feature vector, which is directly fed into fully connected layers. However, this method yields a high-dimensional feature and more trainable parameters. Motivated by the feature fusion method in \cite{Wu2019}, this work utilizes a simple but efficient alternative, namely, the element-wise sum as the fusion function. The efficiency of element-wise sum fusion is evaluated in Section~\ref{sec:modelablation}. Consequently, the aggregated feature representation $z'$ is defined as follows: \begin{equation} z'= \sum_{k=1}^{K} z_k^*. \label{fusion} \end{equation} Then, the expression-refined feature $z'$ is fed into the final classification module, which consists of two fully connected layers. To avoid over-fitting, the first fully connected layer is followed by a Dropout layer (with a dropout probability of 0.5). Lastly, the output of the last fully connected layer $F_i$ is activated by a $softmax$ unit.
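For concreteness, the following is a minimal Keras sketch of the proposal, fusion, and classification modules described above, matching the Keras~2.x environment described in the experiment settings; the shared-feature dimension \texttt{D} and the hidden width of the detectors are hypothetical placeholders rather than the exact values of our implementation.
\begin{verbatim}
# Minimal sketch of the proposal, fusion, and classification modules.
# D (shared-feature dimension) and the detector width 128 are
# hypothetical placeholders.
from keras.layers import Input, Dense, Multiply, Add, Dropout
from keras.models import Model

D, K = 1024, 3   # hypothetical feature dimension; K = number of classes

z = Input(shape=(D,), name='expression_shared_feature')

specific, detections = [], []
for k in range(K):
    # Attention unit: a_k = softmax(W_k z), same dimension as z.
    a_k = Dense(D, activation='softmax', name='attention_%d' % k)(z)
    # Expression-specific feature: z_k* = a_k * z (element-wise).
    z_k = Multiply(name='specific_%d' % k)([a_k, z])
    specific.append(z_k)
    # Detector_k: two fully connected layers with a sigmoid output,
    # trained with the binary detection loss of each sub-branch.
    d = Dense(128, activation='relu')(z_k)
    d = Dense(1, activation='sigmoid', name='detector_%d' % k)(d)
    detections.append(d)

# Fusion: the element-wise sum gives the expression-refined feature z'.
z_refined = Add(name='expression_refined')(specific)

# Classification module: two fully connected layers with dropout 0.5.
h = Dense(128, activation='relu')(z_refined)
h = Dropout(0.5)(h)
y = Dense(K, activation='softmax', name='classification')(h)

model = Model(inputs=z, outputs=detections + [y])
# Joint loss: lambda * L_prop + L_cls, where L_prop is the average of
# the K detection losses (lambda = 0.85 in our experiments).
lam = 0.85
model.compile(optimizer='sgd',
              loss=['binary_crossentropy'] * K
                   + ['categorical_crossentropy'],
              loss_weights=[lam / K] * K + [1.0])
\end{verbatim}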
Formally, the classification loss $\mathcal{L}_{cls}$ is expressed as: \begin{equation} \footnotesize \mathcal{L}_{cls}(\phi;y_c)=-\sum_{i=1}^{N}\left[y_c^i\log\left(\frac{\exp(F_i)}{\sum_{j} \exp(F_j)}\right)\right], \label{cls} \end{equation} where $y_c^i$ is the class label of the $i$-th training instance. By combining the proposal loss $\mathcal{L}_{prop}$ in Eq.~(\ref{prop1}) and the classification loss $\mathcal{L}_{cls}$ in Eq.~(\ref{cls}), the overall FR loss is proposed as follows: \begin{equation} \mathcal{L}(\theta, \phi;y_p, y_c)= \lambda\mathcal{L}_{prop}(\theta;y_p) + \mathcal{L}_{cls}(\phi;y_c), \label{ldisent} \end{equation} where the hyper-parameter $\lambda$ balances the contributions of expression proposal and category classification. \section{Experiments} \subsection{Datasets} Experiments are conducted on three commonly used spontaneous micro-expression databases: SMIC~\cite{Li2013}, CASME II \cite{Yan2014}, and SAMM~\cite{Davison2018}. \textbf{SMIC~\cite{Li2013}:} The SMIC database contains SMIC-HS (recorded by a high-speed camera at 100 \textit{fps}), SMIC-VIS (by a normal visual camera at 25 \textit{fps}), and SMIC-NIR (by a near-infrared camera). SMIC-HS has 164 micro-expression clips from 16 subjects, while SMIC-VIS/SMIC-NIR consists of 71 samples from 8 participants. The samples in all three sub-databases are annotated as \textit{Negative}, \textit{Positive}, and \textit{Surprise}. The sample resolution is $640 \times 480$ \textit{pixels} and the facial area is around $190 \times 230$ \textit{pixels}. \textbf{CASME~II~\cite{Yan2014}:} CASME~II comes in two versions: the first includes 247 samples of 5 micro-expression classes (\textit{Happiness}, \textit{Surprise}, \textit{Disgust}, \textit{Repression}, and \textit{Others}), and the second has 256 samples of 7 classes (\textit{Happiness}, \textit{Surprise}, \textit{Disgust}, \textit{Sadness}, \textit{Fear}, \textit{Repression}, and \textit{Others}). All the samples are gathered from 26 subjects and were recorded by a camera at 200 \textit{fps}. The resolution of the samples is $640\times 480$ \textit{pixels} and the resolution of the facial area is around $280 \times 340$ \textit{pixels}. \textbf{SAMM~\cite{Davison2018}:} The SAMM database contains 159 micro-expression instances from 32 participants recorded at 200 \textit{fps}. The resolution of the samples is $2040 \times 1088$ \textit{pixels} and the resolution of the facial area is around $400 \times 400$ \textit{pixels}. Samples in SAMM are categorized into \textit{Happiness}, \textit{Surprise}, \textit{Disgust}, \textit{Repression}, \textit{Anger}, \textit{Fear}, \textit{Contempt}, and \textit{Others}. \subsection{Experiment settings} Since the apex frame annotation is not available in the SMIC database, the interframe-Diff method described in Section~\ref{sec:ProposedMethod} is applied to spot the apex frame. For the CASME~II and SAMM databases, the ground truth apex frames are used directly. Libfacedetection~\cite{Yu2016} is utilized to crop the facial area out of the onset and apex frames. TV-L1 optical flow is then extracted from the onset and apex frames. The two optical flow components are resized to $28 \times 28$ \textit{pixels} before being fed into the Inception network. All the experiments are conducted under Ubuntu 16.04, with Python 3.6.2, Keras 2.2.4, and TensorFlow 1.11.0 on one NVIDIA GTX Titan X GPU (12 GB).
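As a concrete illustration of this pre-processing pipeline, the snippet below gives a minimal sketch of interframe-Diff apex spotting followed by onset--apex TV-L1 flow extraction; the TV-L1 implementation from \texttt{opencv-contrib-python} stands in here for the method of \cite{Zach2007}, and the input is assumed to be a list of grayscale face crops from one video.
\begin{verbatim}
# Minimal sketch: interframe-Diff apex spotting plus onset-apex TV-L1
# optical flow. `frames` is assumed to be a list of grayscale (uint8)
# face crops of one micro-expression video.
import cv2
import numpy as np

def spot_apex(frames):
    """Index of the frame with the largest mean absolute pixel
    difference from the onset (0-th) frame."""
    onset = frames[0].astype(np.float32)
    diffs = [np.mean(np.abs(f.astype(np.float32) - onset))
             for f in frames]
    return int(np.argmax(diffs))

def onset_apex_flow(frames, size=(28, 28)):
    """TV-L1 flow between the onset and apex frames, resized to the
    network input size; returns horizontal/vertical components."""
    apex = spot_apex(frames)
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()  # opencv-contrib
    flow = tvl1.calc(frames[0], frames[apex], None)  # (H, W, 2)
    u = cv2.resize(flow[..., 0], size)   # horizontal component
    v = cv2.resize(flow[..., 1], size)   # vertical component
    return u, v
\end{verbatim}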
\textbf{Setup:} To evaluate the effect of each module of FR, an ablation study is first designed to investigate the backbone selection, the strategy selection, and the fusion mode selection. Second, three groups of experiments, namely the CDE experiment, the CDMER experiment, and the single-database experiment, are designed to validate the effectiveness of the proposed method. For a fair comparison, all the experiments on CDE and the single-database evaluation are conducted with Leave-One-Subject-Out (LOSO) cross-validation, where the samples of one subject are held out as the testing set while all remaining samples are used for training. For CDMER, the model is trained on the source database and tested on the target database. The detailed settings of the model ablation, CDE, CDMER, and single-database experiments are given in Section~\ref{sec:appendix}. \textbf{Performance metric:} Different performance metrics are applied to the three groups of experiments. \begin{itemize} \item For the CDE protocol, following MEGC 2019, the Unweighted F1-score (\textbf{UF1}) and the Unweighted Average Recall (\textbf{UAR}) are used to measure the performance of the various methods on the composite and individual databases. \item For the CDMER protocol, following~\cite{Zong2017}, Accuracy (\textbf{Acc}) and UF1 are reported. \item For the evaluation on a single database, Acc is used. \end{itemize} All results of each type of experiment are averaged over at least ten rounds. The evaluation metrics are detailed in Section~\ref{sec:appendix}. \begin{table*} \centering \footnotesize \caption{Ablation models. The best result is in bold.} \label{ablation} \vspace{0.2cm} \subtable[Backbone selection for the expression-shared feature learning.]{ \begin{tabular}{ccc} \hline Model &UF1 & UAR\\ \hline Dual-Inception \cite{Zhou2019}&0.7322& 0.7278\\ Basic Inception & \textbf{0.7360} & \textbf{0.7391} \\ \hline \end{tabular} \label{ablation:apex frame} } \hspace{1.2cm} \subtable[Strategy selection for the expression-specific feature learning.]{ \centering \begin{tabular}{ccc} \hline Model &UF1 & UAR\\ \hline FR-fc &0.7377&0.7443 \\ FR& \textbf{0.7838}&\textbf{0.7832} \\ \hline \end{tabular} \label{ablation:specific feature learning} } \hspace{1.2cm} \subtable[Fusion mode selection to obtain the expression-refined feature.]{ \centering \begin{tabular}{ccc} \hline Model &UF1 & UAR\\ \hline FR-concatenated &0.7632&0.7727 \\ FR&\textbf{0.7838}&\textbf{0.7832}\\ \hline \end{tabular} \label{ablation:fusion mode} } \end{table*} \subsection{Model ablation} \label{sec:modelablation} The ablation study is performed on the composite database of the CDE protocol. Table \ref{ablation} reports the results of the different models in terms of UAR and UF1. \begin{table*} \caption{Performance comparison among handcrafted feature methods, classical CNN methods, state-of-the-art methods, and our proposed method on the composite and individual databases under the CDE protocol. In the CDE protocol, the samples of CASME II and SAMM are regrouped into three classes,~\textit{i.e.}~\textbf{\textit{Negative}, \textit{Positive}, and \textit{Surprise}} (SMIC-HS already uses these three classes). The regrouped samples of the three databases are then combined into one dataset and LOSO validation is used to evaluate the performance.
The best results are in bold.} \label{cde_result} \footnotesize \begin{center} \setlength{\tabcolsep}{2.7mm}{ \begin{tabular}{clcccccccc} \hline \multirow{2}*{Groups}&\multirow{2}*{Approaches} & \multicolumn{2}{c}{Composite} & \multicolumn{2}{c}{SMIC-HS} & \multicolumn{2}{c}{CASME II} & \multicolumn{2}{c}{SAMM}\\ \cline{3-10} && UF1&UAR&UF1&UAR&UF1&UAR&UF1&UAR\\ \hline \multirow{2}*{Handcrafted features} &LBP-TOP \cite{Zhao2007}&0.5882& 0.5785 &0.2000&0.5280&0.7026&0.7429 &0.3954&0.4102\\ \cline{2-10} &Bi-WOOF \cite{Liong2018a}&0.6296&0.6227&0.5727&0.5829&0.7805&0.8026&0.5211&0.5139\\ \hline \multirow{9}*{Deep learning features} &AlexNet \cite{Krizhevsky2012}&0.6933&0.7154&0.6201&0.6373&0.7994&0.8312&0.6104&0.6642\\ \cline{2-10} &GoogLeNet \cite{Szegedy2015}&0.5573&0.6049&0.5123&0.5511&0.5989&0.6414&0.5124&0.5992\\ \cline{2-10} &VGG16 \cite{Simonyan2014}&0.6425&0.6516&0.5800&0.5964&0.8166&0.8202&0.4870&0.4793\\ \cline{2-10} &CapsuleNet \cite{Quang2019}&0.6520& 0.6506& 0.5820& 0.5877& 0.7068 &0.7018 &0.6209& 0.5989\\ \cline{2-10} &OFF-ApexNet \cite{Gan2019}&0.7196&0.7096&0.6817&0.6695&0.8764&0.8681&0.5409&0.5392\\ \cline{2-10} &Dual-Inception \cite{Zhou2019}&0.7322&0.7278&0.6645&0.6726&0.8621&0.8560&0.5868&0.5663\\ \cline{2-10} & STSTNet \cite{Liong2019}&0.7353&0.7605&0.6801&0.7013&0.8382&0.8686&0.6588&0.6810\\ \cline{2-10} &EMR \cite{Liu2019}&\textbf{0.7885}&0.7824&\textbf{0.7461}&\textbf{0.7530}&0.8293&0.8209&\textbf{0.7754}&0.7152\\ \cline{2-10} &\textbf{FR (Ours)}&0.7838&\textbf{0.7832}&0.7011&0.7083& \textbf{0.8915}&\textbf{0.8873}&0.7372&\textbf{0.7155}\\ \hline \end{tabular}} \end{center} \end{table*} \textbf{(1) Backbone selection for the expression-shared feature learning} To better capture the subtle motions of micro-expressions with our proposed model, we first select the backbone for expression-shared feature learning. \textit{\textbf{a. Dual-Inception \cite{Zhou2019}:}} This model straightforwardly uses the mid-position frame of each sample as the apex frame for MER. \textit{\textbf{b. Basic Inception:}} Different from~\cite{Zhou2019}, Basic Inception uses the interframe-Diff algorithm described in Section~\ref{sec:ProposedMethod} to approximately locate the apex frames for the SMIC-HS database, and uses the ground truth apex frames for CASME II and SAMM. Table \ref{ablation:apex frame} compares Basic Inception to Dual-Inception on the composite database. Basic Inception outperforms Dual-Inception in terms of UAR and UF1. Thus, Basic Inception is chosen as the backbone of our framework to learn the expression-shared feature. \textbf{(2) Strategy selection for the expression-specific feature learning} In the proposal module, two strategies for learning expression-specific features are compared: \textit{\textbf{a. FR-fc:}} FR-fc uses only fully-connected layers in the proposal module for expression-specific feature learning, and then uses the element-wise sum mode to aggregate the expression-specific features of each expression category for classification. \textit{\textbf{b. FR:}} FR differs from FR-fc by using the attention strategy to learn expression-specific features. As shown in Table \ref{ablation:specific feature learning}, FR with the attention mechanism boosts the performance from 0.7377 to 0.7838 in terms of UF1 and from 0.7443 to 0.7832 in terms of UAR. This suggests that the attention factors in the proposal module have the capability to highlight specific characteristics and generate salient features.
\textbf{(3) Fusion mode selection to obtain the expression-refined feature} To validate the fusion mode for fusing the expression-specific features, this part compares the element-wise sum mode used in this paper with the concatenation mode. \textit{\textbf{a. FR-concatenated:}} The FR-concatenated model directly concatenates the expression-specific features. \textit{\textbf{b. FR:}} The FR model uses the element-wise sum mode to obtain expression-refined features. According to Table \ref{ablation:fusion mode}, FR outperforms FR-concatenated. This may be attributed to the element-wise sum mode alleviating the over-fitting problem, as it keeps the dimension of the expression-refined feature lower than the concatenation method does. Consequently, the element-wise sum mode is chosen for expression-specific feature fusion to obtain the expression-refined features. \subsection{Performance evaluation on CDE protocol} Table~\ref{cde_result} compares our method to several state-of-the-art methods under the CDE protocol. These methods comprise two handcrafted features (LBP-TOP \cite{Zhao2007} and Bi-WOOF \cite{Liong2018a}), six deep learning features without data augmentation (AlexNet \cite{Krizhevsky2012}, GoogLeNet \cite{Szegedy2015}, VGG16 \cite{Simonyan2014}, OFF-ApexNet \cite{Gan2019}, Dual-Inception \cite{Zhou2019}, and STSTNet \cite{Liong2019}), and two deep learning features with data augmentation (CapsuleNet \cite{Quang2019} and EMR \cite{Liu2019}). Specifically, AlexNet, GoogLeNet, and VGG16 were reproduced by Liong~\textit{et al.}~\cite{Liong2019} with optical flow features as inputs. \subsubsection{Comparison with handcrafted features} Compared to LBP-TOP, our proposed FR improves the baseline consistently with gains of 19.56\%, 50.11\%, 18.89\%, and 34.18\% in terms of UF1 for the composite, SMIC-HS, CASME II, and SAMM databases, respectively. It increases the performance of the baseline by 20.47\%, 18.03\%, 14.44\%, and 30.53\% in terms of UAR for the composite, SMIC-HS, CASME II, and SAMM databases, respectively. FR also consistently improves on Bi-WOOF by a large margin. This indirectly suggests that FR learns discriminative and meaningful features on the micro-expression databases, outperforming the traditional handcrafted features. \subsubsection{Comparison with deep learning features} Table \ref{cde_result} indicates that our proposed FR outperforms most state-of-the-art algorithms on all the databases. This can be explained by the scarcity of data causing over-fitting in the existing deep networks. The promising results further suggest that shallow neural networks with fewer parameters alleviate the over-fitting problem in MER. \begin{itemize} \item FR achieves better performance than OFF-ApexNet, CapsuleNet, Dual-Inception, and STSTNet. The similarity between FR and these models is that they all learn features by feeding the extracted optical flows into designed shallow networks. The major difference is that our proposed FR distills more meaningful characteristics from the expression-shared features for MER by expression-specific feature learning and fusion, while they focus on expression-shared feature learning. \item STSTNet is an extension of OFF-ApexNet that learns deep features for MER from three pre-extracted optical-flow features. The considerable result of STSTNet indirectly suggests that the more pre-processed features are input, the better the performance obtained.
Although our proposed FR and STSTNet both belong to the multi-stream feature learning approach, FR, with expression-specific feature learning and fusion, gains improvements of 5.03\% and 2.07\% in terms of average UF1 and UAR across the four databases, respectively, even though the performance of STSTNet is already relatively high. The improvement indicates that exploring features with more salient and discriminative characteristics based on less pre-processing is a more promising approach. \item EMR used Eulerian Video Magnification (EVM) to magnify micro-expressions; EVM has been shown to be effective for micro-expression recognition~\cite{Li2018a}. Compared with EMR, FR is slightly lower by 0.64\% in terms of average UF1 across the four databases, while higher by 0.58\% in terms of average UAR. Thus, FR still obtains performance competitive with EMR. On the other hand, FR is built on simpler pre-processing (\textit{e.g.}~face cropping and apex frame selection) than EMR (\textit{e.g.}~relabeling macro-expression samples into three categories and magnifying micro-expressions). \end{itemize} \begin{table*} \caption{Experimental results (UF1 / Acc) of different micro-expression features for \textbf{Type-I} of CDMER tasks. The results of the feature descriptors in the benchmark are directly extracted from \cite{Zong2019}. The best results in each experiment are in bold.} \label{cdmer result_task1} \begin{center} \scriptsize \setlength{\tabcolsep}{1.0mm}{ \begin{tabular}{clccccccc} \hline Groups&Feature Descriptors & \textit{Exp}.1: H$\rightarrow$V &\textit{Exp}.2: V$\rightarrow$H&\textit{Exp}.3: H$\rightarrow$N & \textit{Exp}.4: N$\rightarrow$H & \textit{Exp}.5: V$\rightarrow$N & \textit{Exp}.6: N$\rightarrow$V & Average \\ \hline \multirow{12}*{Handcrafted features} &LBP-TOP\textit{(R3P8)} \cite{Zhao2007} &0.8002/0.8028& 0.5421/0.5427& 0.5455/0.5352 & 0.4878/0.5488 & 0.6186/ 0.6338 &0.6078/0.6338 &0.6003/0.6162\\ \cline{2-9} &LBP-TOP\textit{(R1P4)} \cite{Zhao2007}&0.7185/0.7183& 0.3366/0.4024& 0.4969/0.4930& 0.3457/0.4024& 0.5480/0.5775 & 0.5085/0.5915 &0.4924/0.5332\\ \cline{2-9} &LBP-TOP\textit{(R1P8)} \cite{Zhao2007} & 0.8561/0.8592& 0.5329/0.5366& 0.5164/0.5775& 0.3246/0.3537& 0.5124/0.5775& 0.4481/0.5070 & 0.5318/0.5686\\ \cline{2-9} &LBP-TOP\textit{(R3P4)} \cite{Zhao2007}&0.4656/0.4930 &0.4122/0.4512& 0.3682/0.4085& 0.3396/0.4085& 0.5069/0.5915 &0.5144/0.6056 &0.4345/0.4931\\ \cline{2-9} &LBP-SIP\textit{(R1)} \cite{Wang2014} & 0.6290/0.6338& 0.3447/0.4085& 0.3249/0.3380 &0.3490/0.4207 &0.5477/0.6056& 0.5509/0.6056 &0.4577/0.5020\\ \cline{2-9} &LBP-SIP\textit{(R3)} \cite{Wang2014}& 0.8574/0.8592& 0.4886/0.5000& 0.4977/0.5493& 0.4038/0.4268& 0.5444/0.5915& 0.3994/0.4648& 0.5319/0.5653\\ \cline{2-9} &LPQ-TOP\textit{(decorr=0.1)} \cite{Paeivaerinta2011} &\textbf{0.9455}/\textbf{0.9437}& 0.5523/0.5488& 0.5456/0.6197 &0.4729/0.4756 &0.5416/0.5775 &0.6365/0.6620 &0.6157/0.6379\\ \cline{2-9} &LPQ-TOP\textit{(decorr=0)} \cite{Paeivaerinta2011}& 0.7711/0.7746 &0.4726/0.4878& 0.6771/0.6761& 0.4701/0.4817& 0.7076/0.7183& 0.6963/0.7042& 0.6325/0.6405\\ \cline{2-9} &HOG-TOP\textit{(p=4)} \cite{Li2018a}&0.7068/0.7183 &0.5649/0.5732& \textbf{0.6977}/\textbf{0.7042} &0.2830/0.2927& 0.4569/0.4930& 0.3218/0.3662 & 0.4554/0.4847\\ \cline{2-9} &HOG-TOP\textit{(p=8)} \cite{Li2018a}& 0.7364/0.7465 &0.5526/0.5610 &0.3990/0.4648& 0.2941/0.3232& 0.4137/0.4648 &0.3245/0.3803 &0.4453/0.4901\\ \cline{2-9} &HIGO-TOP\textit{(p=4)} \cite{Li2018a}& 0.7933/0.8028& 0.4775/0.5061&
0.4023/0.4789 &0.3445/0.3598& 0.5000/0.5352& 0.3747/0.4085 &0.4821/0.5152\\ \cline{2-9} &HIGO-TOP\textit{(p=8)} \cite{Li2018a}& 0.8445/0.8451 &0.5186/0.5366& 0.4793/0.5493& 0.4322/0.4390& 0.5054/0.5493& 0.4056/0.4648 &0.5309/0.5640\\ \hline \multirow{4}*{Deep learning features} &C3D-FC1 \textit{(Sports1M)} \cite{Tran2015}& 0.1577/0.3099 &0.2188/0.2378 &0.1667/0.3099& 0.3119/ 0.3415 &0.3802/0.4930& 0.3032/0.3662 &0.2564/0.3431\\ \cline{2-9} &C3D-FC2 \textit{(Sports1M)} \cite{Tran2015}&0.2555/0.3662 &0.2974/0.2927 &0.2804/0.3380 &0.3239/0.3659 &0.4518/0.4789& 0.3620/0.3803& 0.3285/0.3703\\ \cline{2-9} &C3D-FC1 \textit{(UCF101)} \cite{Tran2015} &0.3803/0.4648 &0.3134/0.3476& 0.3697/0.4789& 0.3440/0.3476 &0.3916/0.4789 &0.2433/0.2958 &0.3404/0.4023\\ \cline{2-9} &C3D-FC2 \textit{(UCF101)} \cite{Tran2015} &0.4162/0.4648& 0.2842/0.3232& 0.3053/0.4225 &0.2531/0.2805& 0.3937/0.4789& 0.2489/0.3239 &0.3169/0.3823\\ \cline{2-9} &\textbf{FR (Ours)}&0.7065/0.7149 &\textbf{0.5971}/\textbf{0.5968}&0.5335/0.5673& \textbf{0.5137}/\textbf{0.5200}& \textbf{0.7934}/\textbf{0.7910}& \textbf{0.7921}/\textbf{0.7921}& \textbf{0.6561}/\textbf{0.6636}\\ \hline \end{tabular} } \end{center} \end{table*} \begin{table*} \caption{Experimental results (UF1 / Acc) of different micro-expression features for \textbf{Type-II} of CDMER tasks. Those results of feature descriptors in the benchmark are directly cited from \cite{Zong2019}. The best results in each experiment are in bold.} \label{cdmer result_task2} \begin{center} \scriptsize \setlength{\tabcolsep}{1mm}{ \begin{tabular}{clccccccc} \hline Groups&Feature Descriptors & \textit{Exp}.7: C$\rightarrow$ H &\textit{Exp}.8: H$\rightarrow$C&\textit{Exp}.9: C$\rightarrow$V & \textit{Exp}.10: V$\rightarrow$C & \textit{Exp}.11: C$\rightarrow$N & \textit{Exp}.12: N$\rightarrow$C & Average \\ \hline \multirow{12}*{Handcrafted features} &LBP-TOP\textit{(R3P8)} \cite{Zhao2007}&0.3697/0.4512 &0.3245/0.4846 &0.4701/0.5070&0.5367/0.5308&0.5295/0.5211&0.2368/0.2385&0.4112/0.4555\\ \cline{2-9} &LBP-TOP\textit{(R1P4)} \cite{Zhao2007} &0.3358/0.4451&0.3260/0.4769&0.2111/0.3521&0.1902/0.2692&0.3810/0.4366&0.2492/0.2692&0.2823/0.3749\\ \cline{2-9} &LBP-TOP\textit{(R1P8)} \cite{Zhao2007}&0.3680/0.4390&0.3339/0.5462&0.4624/0.4930&0.5880/0.5769&0.3000/0.3380&0.1927/0.2308&0.3742/0.4373\\ \cline{2-9} &LBP-TOP\textit{(R3P4)} \cite{Zhao2007}&0.3117/0.4390&0.3436/0.4462&0.2723/0.3944&0.2356/0.2846&0.3818/0.4.30&0.2332/0.2538&0.2964/0.3852\\ \cline{2-9} &LBP-SIP\textit{(R1) }\cite{Wang2014}&0.3580/0.4512&0.3039/0.4462 &0.2537/0.3803&0.1991/0.2692&0.3610/0.4648&0.2194/0.2692&0.2825/0.3802\\ \cline{2-9} &LBP-SIP\textit{(R3)} \cite{Wang2014}&0.3772/0.4268&0.3742/\textbf{0.5615}&\textbf{0.5846}/0.5915 &0.6065/0.6000&0.3469/0.3521&0.2790/0.2769&0.4279/0.4681\\ \cline{2-9} &LPQ-TOP\textit{(decorr=0.1)} \cite{Paeivaerinta2011}&0.3060/0.4207&0.3852/0.4846&0.2525/0.3380&0.4866/0.4769&0.3020/0.3521&0.2094/0.2385& 0.3236/0.3851\\ \cline{2-9} &LPQ-TOP\textit{(decorr=0)} \cite{Paeivaerinta2011}& 0.2368/0.4390& 0.2890/0.5154& 0.2531/0.3803 &0.3947/0.4077& 0.2369/0.3521& 0.4008/0.4154 &0.3019/0.4183\\ \cline{2-9} &HOG-TOP\textit{(p=4)} \cite{Li2018a}& 0.3156/0.3476& 0.3502/0.4769& 0.3266/0.3521 &0.4658/0.4692& 0.3219/0.3521 &0.2163/0.2746& 0.3327/0.3791\\ \cline{2-9} &HOG-TOP\textit{(p=8)} \cite{Li2018a}& 0.3992/0.4390& 0.4154/0.5231& 0.4403/0.4507 &0.4678/0.4769&0.4107/0.4085 &0.1390/0.2077 &0.3787/0.4177\\ \cline{2-9} &HIGO-TOP\textit{(p=4) }\cite{Li2018a}& 0.2945/0.3963 &0.3420/0.5385 &0.3236/0.4085& 
0.5590/0.5538 &0.2887/0.2958& 0.2668/0.3154 &0.3458/0.4181\\ \cline{2-9} &HIGO-TOP\textit{(p=8)} \cite{Li2018a}&0.2978/0.4146& 0.3609/0.5000& 0.3679/0.4366& 0.5699/0.5462&0.3395/0.3380 &0.1743/0.2231& 0.3517/0.4098\\ \hline \multirow{4}*{Deep learning features} &C3D-FC1 \textit{(Sports1M)} \cite{Tran2015}&0.1994/0.4268& 0.2394/\textbf{0.5615}& 0.1631/0.3239& 0.1075/0.1923& 0.1631/0.3239& 0.2397/0.5615 &0.1854/0.3983\\ \cline{2-9} &C3D-FC2 \textit{(Sports1M)} \cite{Tran2015}&0.1994/0.4268& 0.1317/0.2462& 0.1631/0.3239 &0.1075/0.1923& 0.1631/0.3239& 0.2397/0.5615& 0.1674/0.3458\\ \cline{2-9} &C3D-FC1 \textit{(UCF101)} \cite{Tran2015}& 0.1581/0.3110& 0.1075/0.1923& 0.1886/0.3944& 0.1075/0.1923 &0.1886/0.3944& 0.2397/0.5615 &0.1650/0.3410\\ \cline{2-9} &C3D-FC2 \textit{(UCF101)} \cite{Tran2015}& 0.1994/0.4268 &0.1705/0.1923& 0.1631/0.3239& 0.1075/0.1923& 0.1631/0.3239 &0.1075/0.1923& 0.1414/0.2753\\ \cline{2-9} &\textbf{FR (Ours)}&\textbf{0.4670}/\textbf{0.4905}&\textbf{0.4883}/0.5380&0.5678/\textbf{0.6101}&\textbf{0.5929}/\textbf{0.6019}&\textbf{0.4399}/\textbf{0.4823}&\textbf{0.5963}/\textbf{0.6081}&\textbf{0.5254}/\textbf{0.5552}\\ \hline \end{tabular} } \end{center} \end{table*} \subsection{Performance evaluation on CDMER benchmark} In this experiment, the proposed FR is further evaluated using the CDMER protocol~\cite{Zong2017}, which contains 12 sub-experiments from the TYPE-I and TYPE-II tasks. The detailed settings of CDMER are given in Section~\ref{sec:appendix} and Table \ref{cdmer two types}. Table \ref{cdmer result_task1} and Table \ref{cdmer result_task2} compare FR to the state-of-the-art algorithms reported in~\cite{Zong2019}. Their parameter settings are as follows: \begin{itemize} \item For LBP-TOP \cite{Zhao2007}, the uniform pattern is used. For the neighboring radius $R$ and the number of neighboring points $P$, the experiments consider four cases: $R=1, P=4$ (R1P4), $R=1, P=8$ (R1P8), $R=3, P=4$ (R3P4), and $R=3, P=8$ (R3P8). \item For LBP-SIP \cite{Wang2014}, the neighboring radius $R$ is set to 1 and 3, respectively. \item For LPQ-TOP \cite{Paeivaerinta2011}, the size of the local window in each dimension is set to $[5, 5, 5]$, and the factor for the correlation model $decorr$ is set to [0.1, 0.1] and [0, 0], respectively. \item The number of bins $p$ is set to 4 and 8 for both HOG-TOP and HIGO-TOP~\cite{Li2018a}. \item For the three-dimensional convolutional neural network (C3D) \cite{Tran2015}, the micro-expression features are extracted from the last two fully connected layers of the models pre-trained on Sports-1M \cite{Karpathy2014} and UCF101 \cite{Soomro2012}. \end{itemize} \begin{table*}[t] \caption{Experimental results of different micro-expression recognition approaches on the single databases in terms of Acc. CASME II (5 classes) means the samples of the original 5 classes of CASME II are used (\textit{Happiness}, \textit{Surprise}, \textit{Disgust}, \textit{Repression} and \textit{Others}). CASME II (4 classes) and SAMM mean the samples of both databases are regrouped into four classes,~\textit{i.e.}, \textit{Negative}, \textit{Positive}, \textit{Surprise} and \textit{Others}. The best results in each experiment are highlighted in bold.
All the results reported are based on the original samples without expression magnification.} \label{single result} \begin{center} \footnotesize \setlength{\tabcolsep}{1.6mm}{ \begin{tabular}{clccccc} \hline Groups &Approaches&SMIC-HS&CASME II &CASME II&SAMM&Average\\ &&&(5 classes)& (4 classes)&&\\ \hline \multirow{22}*{Handcrafted features} &LBP-TOP \cite{Zhao2007}&0.4878&-&0.4090$^*$&0.4150$^*$&0.4357\\ \cline{2-7} &OSF + OS weighted LBP-TOP \cite{Liong2016}&0.5244&-&-&-&0.5244\\ \cline{2-7} &OS \cite{Liong2014a}&0.5356&-&-&-&0.5356\\ \cline{2-7} &OS weighted LBP-TOP \cite{Liong2014}&0.5366&0.4200&-&-&0.4783\\ \cline{2-7} &STM \cite{Ngo2014}&0.4434&0.4378&-&-&0.4406\\ \cline{2-7} &LBP-MOP \cite{Wang2015}&0.5061&0.4413&-&-&0.4737\\ \cline{2-7} &LBP-TOP + ROIs \cite{Liong2018}&0.5400&0.4600&-&-&0.5000\\ \cline{2-7} &LBP-SIP \cite{Wang2014}&0.4451&0.4656&0.4570$^*$&0.4170$^*$&0.4462\\ \cline{2-7} &LBP-TOP + DMDSP \cite{Ngo2017}&0.5800&0.4900&-&-&0.5350\\ \cline{2-7} &LBP-TOP + Adaptive MM \cite{Park2015}&0.5191&-&-&-&0.5191\\ \cline{2-7} &HFOFO \cite{Happy2019}&0.5183&0.5664&-&-&0.5424\\ \cline{2-7} &STCLQP \cite{Huang2016}&0.6402&0.5839&-&-&0.6121\\ \cline{2-7} &STLBP-IP \cite{Huang2015}&0.5793&0.5951&0.5510$^*$&0.5680$^*$&0.5734\\ \cline{2-7} &MMFL \cite{He2017a}&0.6315&0.5981&-&-&0.6148\\ \cline{2-7} &Hierarchical STLBP-IP \cite{Zong2018a}&0.6078&0.6383&-& - &0.6231\\ \cline{2-7} &STRBP \cite{Huang2017}&0.6098&0.6437&-&-&0.6268\\ \cline{2-7} &DiSTLBP-RIP \cite{Huang2019}&0.6341&0.6478&-&-&0.6410\\ \cline{2-7} &MDMO \cite{Liu2016}&0.6150$^*$&-&0.5100$^*$& - &0.5630\\ \cline{2-7} &FDM \cite{Xu2017}&0.5488&0.4593&0.4170$^*$& - &0.4750\\ \cline{2-7} &Bi-WOOF \cite{Liong2018a}&0.5930$^*$&-&0.5890$^*$&0.5980$^*$&0.5930\\ \cline{2-7} &OF Maps \cite{Allaert2017}&-&\textbf{0.6535}&-&-&0.6535\\ \cline{2-7} &Bi-WOOF + Phase \cite{Liong2017}&\textbf{0.6829}&0.6255&-&-&\textbf{0.6542}\\ \cline{2-7} &HIGO \cite{Li2018a}&0.6524&0.5709&-&-&0.6117\\ \hline \multirow{8}*{Deep learning features} &Image-based CNN \cite{Takalkar2017}&0.3120$^*$&-&0.4440$^*$&0.4360$^*$&0.3973\\ \cline{2-7} &3D-FCNN \cite{Li2018}&0.5549&0.5911&-&-&0.5730\\ \cline{2-7} &CNN + LSTM \cite{Kim2016}&-&0.6098&-&-&0.6098\\ \cline{2-7} &STRCN-A \cite{Xia2020}&0.4810&-&0.4710&0.4880&0.4800\\ \cline{2-7} &STRCN-G \cite{Xia2020}&0.5760&-&0.6210&\textbf{0.6420}&0.6130\\ \cline{2-7} &\textbf{FR (Ours)}&0.5790&0.6285&\textbf{0.6838}&0.6013&0.6232\\ \hline \multicolumn{5}{l}{\scriptsize{$*$ means that we directly extracted the result from \cite{Xia2020} as the original papers did not report these relevant results.}}\\ \end{tabular} } \end{center} \end{table*} \subsubsection{Comparison with handcrafted features} As seen from Table \ref{cdmer result_task1} and Table \ref{cdmer result_task2}, our proposed FR outperforms all handcrafted features in terms of both unweighted F1-score and accuracy. Additionally, two important observations can be made: \begin{itemize} \item The performance of handcrafted features fluctuates greatly with the adjustment of parameters. For example, as the results of LBP-TOP in \textit{Exp}.1 of the Type-I task in Table \ref{cdmer result_task1} show, it attains considerable performance (0.8561 / 0.8592) under R1P8 but a much worse result (0.4656 / 0.4930) under R3P4. The same holds for LBP-SIP in \textit{Exp}.9 of Table \ref{cdmer result_task2}. \item Varied parameters cause the engineered features to perform unsteadily on all tasks, whereas our proposed FR achieves stable performance.
For example, LPQ-TOP\textit{(decorr=0.1)} obtains better results (a UF1 of 0.9455 and an Acc of 0.9437) than our proposed FR (a UF1 of 0.7065 and an Acc of 0.7149) in \textit{Exp}.1 of the TYPE-I task, but its performance degrades dramatically in the TYPE-II tasks (Table \ref{cdmer result_task2}). Therefore, both observations indicate that our proposed method is more stable and robust (database-invariant) across different situations than the engineered features. \end{itemize} \subsubsection{Comparison with deep learning features} FR outperforms the C3D features, which suffer from the small scale of micro-expression data and the low intensity of micro-expressions. This suggests that exploiting the available information,~\textit{e.g.}~optical flow, together with a shallow network benefits MER. Overall, the results of the experiments on the CDE and CDMER protocols demonstrate that FR performs robust recognition in complex situations,~\textit{e.g.}~cross-database MER. \subsection{Performance evaluation on the single database} Table \ref{single result} compares our proposed FR to the state-of-the-art algorithms on the single-database evaluations. \subsubsection{Comparison with handcrafted features} FR outperforms most of the handcrafted features listed in Table \ref{single result} in terms of average recognition accuracy, except STRBP \cite{Huang2017}, DiSTLBP-RIP \cite{Huang2019}, OF Maps \cite{Allaert2017}, and Bi-WOOF with Phase \cite{Liong2017}. This suggests that although much of computer vision research is now focused on deep learning, traditional handcrafted features still play an important role in MER, which can be explained by the scale of the data. OF Maps and Bi-WOOF with Phase require more processing to extract better features and rely on expert knowledge: they both need to compute direction and magnitude statistical profiles of the optical flow, and Bi-WOOF with Phase additionally uses the Riesz transform to extract phase information. On the other hand, the LBP-based features of STRBP and DiSTLBP-RIP operate on the entire sample sequence and also need the temporal interpolation method (TIM) \cite{Zhou2012} to normalize each video for performance improvement. In contrast, our proposed FR only needs simple pre-processing,~\textit{e.g.}~face detection, apex frame selection, and optical flow extraction. \subsubsection{Comparison with deep learning features} Furthermore, Table \ref{single result} compares our proposed FR to Image-based CNN~\cite{Takalkar2017}, 3D-FCNN \cite{Li2018}, CNN with LSTM \cite{Kim2016}, and Spatiotemporal Recurrent Convolutional Networks (STRCN) \cite{Xia2020}, whose experiments were conducted on the same databases and micro-expression categories as ours. Specifically, STRCN comprises STRCN-A and STRCN-G: the former vectorizes one channel of a frame into a matrix column for appearance features, while the latter uses optical flow images as input to train the model. First, Table \ref{single result} shows that all the handcrafted features outperform the Image-based CNN feature \cite{Takalkar2017}, because the Image-based CNN ignores temporal information. In contrast, other CNN-based works~\cite{Li2018,Kim2016} demonstrate that temporal information significantly boosts CNN performance. These results suggest that temporal information should be considered when designing CNNs for MER. Furthermore, FR gains a considerable improvement of 14.32\% compared with STRCN-A.
FR obtains performance competitive with STRCN-G. This comparison suggests that geometric-based features may provide complementary information to deep learning models, and it motivates us to consider how geometric-based features could be embedded into our FR model in the future. Finally, our FR surpasses all the compared deep learning features in terms of average accuracy. These comparison results demonstrate that both expression-specific features and feature fusion contribute discriminative information to MER among deep learning methods. \begin{table}[t!] \centering \caption{Performance comparison between Basic Inception and FR on the composite database under the CDE protocol.} \setlength{\tabcolsep}{3.0mm}{ \begin{tabular}{ccc} \hline Model &UF1 & UAR\\ \hline Basic Inception & 0.7360 & 0.7391\\ \textbf{FR}&\textbf{0.7838}&\textbf{0.7832}\\ \hline \end{tabular}} \label{result_cmp_on_Basic and FR} \end{table} \begin{figure}[t!] \centering \subfigure[The confusion matrix of Basic Inception on the composite database.] { \includegraphics[width=0.20\textwidth]{Basic_composite} \label{cm_basic} } \subfigure[The confusion matrix of FR on the composite database.] { \includegraphics[width=0.20\textwidth]{FR_composite} \label{cm_fr} } \caption{The confusion matrices of Basic Inception and FR on the composite database.} \label{cm} \end{figure} \begin{figure*}[t!] \centering \subfigure[Expression-shared feature distribution of Basic Inception.] { \includegraphics[width=0.45\textwidth]{Basic_CDE} \label{Basic_feature_distribution} } \hspace{1cm} \subfigure[Expression-refined feature distribution of FR.] { \includegraphics[width=0.45\textwidth]{FR_CDE} \label{FR_feature_distribution} } \caption{Feature distributions of Basic Inception and FR.} \label{feature_distribution} \end{figure*} \subsection{Analysis of feature salience and discrimination} As previously described in Section~\ref{sec:ProposedMethod}, expression-specific feature learning and fusion aim to learn salient and discriminative features. Here, to better reveal the effect of the expression-specific feature learning and fusion module on FR, the expression-refined features are compared to the expression-shared features obtained by Basic Inception on the composite database under the CDE protocol. Table \ref{result_cmp_on_Basic and FR} reports the comparison results in terms of UF1 and UAR. It is seen that, with the expression-specific feature learning and fusion module, FR obtains a significant improvement over Basic Inception, from 73.60\% to 78.38\% in terms of UF1 and from 73.91\% to 78.32\% in terms of UAR. This suggests that the expression-specific feature learning and fusion module is the module contributing most to FR. Furthermore, Fig. \ref{cm} shows the confusion matrices of Basic Inception and FR for each micro-expression category. As seen from Fig. \ref{cm}, FR obtains accuracies of 83.60\%, 70.64\%, and 80.72\% for negative, positive, and surprise, respectively. With the expression-specific feature learning and fusion module, FR improves Basic Inception consistently with gains of 3.2\%, 6.42\%, and 3.61\% for negative, positive, and surprise, respectively. This indicates that the expression-refined features can improve the \textbf{salience} within each class and highlight the specific characteristics of each type of expression, thus performing better than the expression-shared features in the individual categories.
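For reference, the per-class accuracies on the diagonals of Fig.~\ref{cm} can be obtained by row-normalizing a confusion matrix over the pooled LOSO predictions; a minimal sketch is given below, with hypothetical prediction arrays and \texttt{scikit-learn} assumed available.
\begin{verbatim}
# Sketch: row-normalized confusion matrix from pooled LOSO predictions.
# y_true / y_pred are hypothetical arrays of class indices
# (0 = negative, 1 = positive, 2 = surprise).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 2, 2, 1])   # placeholder ground truth
y_pred = np.array([0, 1, 1, 2, 2, 1])   # placeholder predictions

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2]).astype(float)
cm /= cm.sum(axis=1, keepdims=True)   # diagonal = per-class accuracy
print(np.round(cm, 2))
\end{verbatim}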
In order to obtain a more reasonable and stable feature distribution visualization, 34 subjects are randomly chosen, where 282 samples are used for training and the rest for testing. Fig. \ref{feature_distribution} shows the feature distributions of Basic Inception and FR on the testing samples, where all features are mapped to 2D using t-SNE \cite{LaurensvanderMaaten2008}. For the Basic Inception model, the features are taken from the concatenation layer before the classifier, while for FR, the expression-refined features are taken before the final classification module. It is observed that the feature representations learned by FR are better separated, making the intra-class distribution more compact and the inter-class distribution more dispersed. These visualization results indicate that the expression-refined features learned by FR are more \textbf{discriminative} than the expression-shared features learned by Basic Inception. \begin{table*} \caption{The Accuracy (Acc), learnable Parameters (Parameter), and Execution Time (ET) comparison between the Basic Inception backbone and the FR model, where the learnable parameters include the weights and biases of the network, and the Execution Time is the average training and testing time on the single-database evaluation.} \label{compexityofspecific} \begin{center} \footnotesize \setlength{\tabcolsep}{3mm}{ \begin{tabular}{l|ccc|ccc|ccc} \hline \multirow{3}*{Methods}&\multicolumn{3}{c|}{SMIC-HS} & \multicolumn{3}{c|}{CASME II (4 classes)} & \multicolumn{3}{c}{SAMM}\\ \cline{2-10} &Acc&Parameter &ET&Acc&Parameter &ET&Acc&Parameter &ET\\ &&($Million$)&($s$)&&($Million$)&($s$)&&($Million$)&($s$)\\ \hline Basic Inception&0.5612&6.4803&15.0610&0.6601&6.4813&32.9442&0.5783&6.4813&18.5792\\ FR&0.5790&10.2418&15.5559&0.6838&11.4231&36.7728&0.6013&11.4231&20.6564\\ \hline \end{tabular}} \end{center} \end{table*} \subsection{Analysis on the complexity of the expression-specific feature learning and fusion} As previously described in Section~\ref{sec:ProposedMethod}, FR with only the Basic Inception backbone learns the expression-shared feature, while FR with the expression-specific feature learning and fusion module learns and aggregates the expression-specific features. To analyze the complexity of the expression-specific feature learning and fusion of FR, Table \ref{compexityofspecific} compares FR to Basic Inception in terms of accuracy (Acc), learnable parameters (Parameter), and execution time (ET) on three micro-expression databases. According to Table \ref{compexityofspecific}, compared with Basic Inception, the number of learnable parameters indeed increases significantly in FR, which is caused by the expression-specific feature learning and fusion. For example, as SMIC-HS contains three categories, FR includes three expression-specific feature learning sub-branches; this leads to 3.7615 $million$ more parameters to learn, but only 0.4949 $s$ more execution time compared with Basic Inception. This reveals that although the number of parameters increases with the expression-specific feature learning and fusion module, the additional execution time is still acceptable. The comparison results and analysis validate that expression-specific feature learning and fusion remains efficient and effective for MER. \begin{table}[t!] \caption{Accuracy (Acc) of the SMIC-HS database under different protocols of FR. $Exp.i$ is the number of the experiment in the CDMER protocol, and $S$ and $T$ are the source and target micro-expression databases, respectively.
\textit{C}, \textit{H}, \textit{V}, and \textit{N} denote the CASME II, SMIC-HS, SMIC-VIS, and SMIC-NIR databases, respectively. The best results are highlighted in bold.} \label{smichs_3protocols} \begin{center} \footnotesize \setlength{\tabcolsep}{2.6mm}{ \begin{tabular}{cccc} \hline Protocols &Experiments &Acc & Validation rule\\ \hline CDE& -&\textbf{0.6951}&LOSO\\ \hline \multirow{3}*{CDMER} & \textit{Exp}.2: V$\rightarrow$H&0.5968& 5-fold\\ & \textit{Exp}.4: N$\rightarrow$H&0.5200& 5-fold\\ & \textit{Exp}.7: C$\rightarrow$H&0.4905& 5-fold\\ \hline Single database& -&0.5790& LOSO\\ \hline \end{tabular}} \end{center} \end{table} \subsection{Discussion on three protocols} The previous parts extensively discussed the results under the CDE, CDMER, and single-database evaluation protocols. From these results, two observations can be drawn. First, the algorithm on the single SMIC-HS database sometimes obtained worse results than under the CDMER protocol. Second, the algorithm did not achieve the same or similar performance under the CDE protocol and in the single-database experiment. These two observations are discussed below. \textbf{(1) Why does the algorithm under single-database evaluation not always outperform that under the CDMER protocol?} According to Table \ref{smichs_3protocols}, the performance on the SMIC-HS database under the single-database protocol is only better than that under the CDMER protocol in $Exp.4: N\rightarrow H$ and $Exp.7: C\rightarrow H$. In contrast, it is worse than CDMER in $Exp.2: V\rightarrow H$. In $Exp.4: N\rightarrow H$ and $Exp.7: C\rightarrow H$, the data between SMIC-NIR/CASME II and SMIC-HS are heterogeneous; in other words, the data were collected under different conditions. In contrast, in $Exp.2: V\rightarrow H$, the samples in SMIC-VIS, recorded by a normal visual camera, are more similar to the samples in the SMIC-HS database. Moreover, both databases contain the same participants. Recent works~\cite{Liu2019,Peng2018,Wang2018} have indicated that leveraging macro-expressions can boost a deep neural network. This may benefit micro-expression recognition: compared with collecting more micro-expressions of a person to reveal their true feelings, it is much easier to collect the person's macro-expressions. This motivates us to leverage macro-expression transfer learning to boost our proposed FR in the future. Additionally, leveraging subject information and a transfer learning strategy may allow us to obtain better recognition performance. \textbf{(2) Why does the algorithm under the CDE protocol outperform that under the single-database evaluation?} This is explained by the increased number of samples from other micro-expression databases and by the optical flow feature, both of which contribute to FR. As we know, more samples can partly alleviate the over-fitting problem. On the other hand, in our framework, optical flow is fed into FR; optical flow mainly extracts the motion features of the samples, which largely suppresses facial identity. Consequently, besides transfer learning and other domain adaptation mechanisms, adding samples from different micro-expression databases and utilizing proper motion features to train the model is a promising approach to obtaining better MER performance.
\section{Conclusion} In this paper, we propose a novel approach for micro-expression recognition that involves three feature refinement stages: expression-shared feature learning, expression-specific feature distilling, and expression-specific feature fusion. Different from existing deep learning methods in MER that focus on learning expression-shared features, our approach aims to learn a set of expression-refined features through expression-specific feature learning and fusion. To make the learned features more salient and discriminative, we propose a constructive but straightforward attention strategy and a simple proposal loss in the expression proposal module for expression-specific feature learning. Experiments on three publicly available micro-expression databases under three different evaluation scenarios verify the efficacy of our proposed approach. In the future, we will consider an end-to-end approach for MER, find more effective ways to enrich the micro-expression samples, and use transfer learning from large-scale databases to benefit MER. \section{Appendix: experiment settings and evaluation metrics for experiments } \label{sec:appendix} \subsection{Detailed experiment settings for four types of experiments} First, to ease understanding of the settings of the four types of experiments, we give their detailed descriptions as follows. \subsubsection{Settings of model ablation} The ablation study is conducted on the CDE protocol of MEGC 2019 \cite{See2019} and focuses on choosing the backbone for expression-shared feature learning, the strategy for expression-specific feature learning, and the fusion mode for aggregating the expression-specific features. According to the CDE protocol, LOSO validation is used to evaluate model performance. SMIC-HS, CASME~II, and SAMM are merged into one dataset. To make these databases share common expression types, the original emotion classes are regrouped into three main categories,~\textit{i.e.}~\textit{Positive}, \textit{Surprise}, and \textit{Negative}. Specifically, samples of \textit{Happiness} are given \textit{Positive} labels, while the labels of \textit{Surprise} samples are unchanged. Samples of \textit{Disgust}, \textit{Repression}, \textit{Anger}, \textit{Contempt}, \textit{Fear}, and \textit{Sadness} are grouped into \textit{Negative}. Table \ref{cde data} presents the detailed information about the three databases used in the CDE protocol. In the ablation experiment, no momentum is used; the batch size, learning rate, and loss weight factor $\lambda$ are set to 32, 0.001, and 0.85, respectively. \begin{table}[h!] \footnotesize \caption{The sample distribution over the three classes (\textit{Positive}, \textit{Negative}, and \textit{Surprise}) of the three databases and the composite database under the CDE protocol.
Since the LOSO validation rule is used in the CDE experiment, the number of subjects is also given in the table.} \label{cde data} \begin{center} \setlength{\tabcolsep}{0.5mm}{ \begin{tabular}{cccccc} \hline \multirow{2}*{Database} &\multicolumn{4}{c}{Micro-Expression Category} &\multirow{2}*{Subjects} \\\cline{2-5} &\textit{Negative} &\textit{Positive}& \textit{Surprise} & \textbf{Total} \\ \hline SMIC-HS~\cite{Li2013} & 70 &51&43&164&16\\ \hline CASME II~\cite{Yan2014} &88$^{\dag}$ &32&25&145&24\\ \hline SAMM ~\cite{Davison2018}& 92$^{\S}$&26&15&133&28 \\ \hline Composite&250&109&83&442&68\\ \hline \multicolumn{5}{l}{\scriptsize{${\dag}$ Negative class of CASME II: Disgust and Repression.}}\\ \multicolumn{5}{l}{\scriptsize{${\S}$ Negative class of SAMM: Anger, Contempt, Disgust, Fear and Sadness.}}\\ \end{tabular}} \end{center} \end{table} \begin{table}[h!] \footnotesize \caption{Two types of CDMER tasks, comprising 12 CDMER experiments. Each source-to-target experiment of CDMER is denoted by $Exp.i: S \rightarrow T$, where $Exp.i$ is the number of the experiment and $S$ and $T$ are the source and target micro-expression databases, respectively. \textit{C}, \textit{H}, \textit{V}, and \textit{N} denote the CASME II, SMIC-HS, SMIC-VIS, and SMIC-NIR databases, respectively.} \label{cdmer two types} \begin{center} \setlength{\tabcolsep}{2mm}{ \begin{tabular}{cccc} \hline Type & CDMER Task &Source Database&Target Database\\ \hline \multirow{6}*{Type-I} &\textit{Exp}.1: H$\rightarrow$V &SMIC-HS&SMIC-VIS\\ &\textit{Exp}.2: V$\rightarrow$H &SMIC-VIS&SMIC-HS\\ &\textit{Exp}.3: H$\rightarrow$N &SMIC-HS&SMIC-NIR\\ &\textit{Exp}.4: N$\rightarrow$H &SMIC-NIR&SMIC-HS\\ &\textit{Exp}.5: V$\rightarrow$N &SMIC-VIS&SMIC-NIR\\ &\textit{Exp}.6: N$\rightarrow$V &SMIC-NIR&SMIC-VIS\\ \hline \multirow{6}*{Type-II} &\textit{Exp}.7: C$\rightarrow$H &CASME II&SMIC-HS\\ &\textit{Exp}.8: H$\rightarrow$C &SMIC-HS&CASME II\\ &\textit{Exp}.9: C$\rightarrow$V &CASME II&SMIC-VIS\\ &\textit{Exp}.10: V$\rightarrow$C &SMIC-VIS&CASME II\\ &\textit{Exp}.11: C$\rightarrow$N &CASME II&SMIC-NIR\\ &\textit{Exp}.12: N$\rightarrow$C &SMIC-NIR&CASME II\\ \hline \end{tabular}} \end{center} \end{table} \begin{table}[h!] \footnotesize \caption{The sample distribution over the three classes (\textit{Positive}, \textit{Negative}, and \textit{Surprise}) of the CASME II and SMIC databases under the CDMER protocol. The five-fold cross-validation rule is used in the CDMER experiment.} \label{cdmer data} \begin{center} \setlength{\tabcolsep}{2.5mm}{ \begin{tabular}{lccccc} \hline \multirow{2}*{Database} &\multicolumn{4}{c}{Micro-Expression Category} \\\cline{2-5} &\textit{Negative} &\textit{Positive}& \textit{Surprise} & \textbf{Total} \\ \hline SMIC-HS~\cite{Li2013} & 70 &51&43&164 \\ \hline SMIC-NIR~\cite{Li2013} & 23 &28&20&71\\ \hline SMIC-VIS~\cite{Li2013} & 23 &28&20&71\\ \hline CASME II~\cite{Yan2014} &73$^{\dag}$ &32&25&130\\ \hline \multicolumn{5}{l}{\scriptsize{${\dag}$ Negative class of CASME II: Disgust, Sadness and Fear.}}\\ \end{tabular}} \end{center} \end{table} \subsubsection{Settings of CDE experiment} Our proposed FR is compared with the baseline of MEGC 2019 \cite{Pfister2011, Zhao2007}, three popular deep learning networks \cite{Krizhevsky2012, Szegedy2015, Simonyan2014}, and several state-of-the-art methods \cite{Liong2018a, Gan2019, Zhou2019,Liong2019, Liu2019,Quang2019}. In the experiments, the CDE evaluation uses the same expression regrouping rules and parameter settings as the model ablation experiment.
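As an illustration of the evaluation procedure shared by the CDE and single-database experiments, the following is a minimal sketch of the LOSO loop together with the UF1/UAR computation defined later in this appendix; the arrays \texttt{X}, \texttt{y}, \texttt{subjects}, the \texttt{build\_model} factory, and the epoch count are hypothetical placeholders.
\begin{verbatim}
# Sketch: LOSO cross-validation with UF1/UAR from pooled folds.
# X (features), y (one-hot labels), subjects (ids), and build_model
# are hypothetical; batch size 32 matches the paper's settings.
import numpy as np

def loso_evaluate(X, y, subjects, build_model, epochs=50):
    """Hold out every sample of one subject per fold."""
    truths, preds = [], []
    for s in np.unique(subjects):
        test = (subjects == s)
        model = build_model()            # re-initialize per fold
        model.fit(X[~test], y[~test], batch_size=32,
                  epochs=epochs, verbose=0)
        preds.append(np.argmax(model.predict(X[test]), axis=1))
        truths.append(np.argmax(y[test], axis=1))
    return np.concatenate(truths), np.concatenate(preds)

def uf1_uar(y_true, y_pred, K):
    """Unweighted F1 and unweighted average recall over K classes."""
    f1s, recalls = [], []
    for k in range(K):
        tp = np.sum((y_pred == k) & (y_true == k))
        fp = np.sum((y_pred == k) & (y_true != k))
        fn = np.sum((y_pred != k) & (y_true == k))
        f1s.append(2 * tp / max(2 * tp + fp + fn, 1))
        recalls.append(tp / max(tp + fn, 1))  # per-class accuracy
    return np.mean(f1s), np.mean(recalls)     # UF1, UAR
\end{verbatim}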
\subsubsection{Settings of CDMER experiment} This paper evaluates the proposed FR against~\cite{Zhao2007, Wang2014, Paeivaerinta2011, Li2018a, Tran2015} under the CDMER protocol\footnote{http://aip.seu.edu.cn/cdmer/} \cite{Zong2017}, using five-fold cross-validation. \textbf{CDMER protocol:} Four publicly available micro-expression databases are used in the CDMER benchmark: SMIC-HS, SMIC-VIS, SMIC-NIR, and CASME~II. Different from the CDE protocol, each CDMER experiment chooses two of the four databases, one as the source database and the other as the target database. Thus, there are 12 sub-experiments in CDMER. The detailed setup is depicted in Table \ref{cdmer two types}. Each source-to-target sub-experiment of CDMER is denoted by $Exp.i: S \rightarrow T$, where $Exp.i$ is the number of the sub-experiment, and $S$ and $T$ are the source and target databases, respectively. Table \ref{cdmer data} describes the sample distribution for the CDMER experiments. The regrouping rule for the \textit{Positive} and \textit{Surprise} classes is the same as in the CDE protocol, while the samples of the \textit{Disgust}, \textit{Sadness}, and \textit{Fear} classes are assigned to the \textit{Negative} class. In the CDMER experiment, the learning rate is set to 0.0005 with a momentum rate of 0.8; the batch size and $\lambda$ are set to 32 and 0.85, respectively. \begin{table*}[t!] \footnotesize \caption{The sample distribution over the four classes (\textit{Positive}, \textit{Negative}, \textit{Surprise}, and \textit{Others}) of the three databases in the single-database evaluation. Since the LOSO validation rule is used in the single-database experiment, the number of subjects is also given in the table.} \label{single data} \begin{center} \setlength{\tabcolsep}{1.0mm}{ \begin{tabular}{lcccccc} \hline \multirow{2}*{Database} &\multicolumn{5}{c}{Micro-Expression Category} &\multirow{2}*{Subjects}\\\cline{2-6} &\textit{Negative} &\textit{Positive}& \textit{Surprise} & \textit{Others}& \textbf{Total} \\ \hline SMIC-HS~\cite{Li2013}&70&51&43&-&164&16 \\ \hline CASME II~\cite{Yan2014}&\multirow{2}*{73$^{\dag}$}&\multirow{2}*{32}&\multirow{2}*{25}&\multirow{2}*{126$^{\S}$}&\multirow{2}*{256}&\multirow{2}*{26}\\ (4 classes)&&&&&&\\ \hline SAMM \cite{Davison2018}&92$^{\sharp}$&26&15&26&159&32\\ \hline \multicolumn{6}{l}{\scriptsize{${\dag}$ Negative class of CASME II: Disgust, Sadness and Fear.}} \\ \multicolumn{6}{l}{\scriptsize{${\S}$ Others class of CASME II: Repression and Others. }}\\ \multicolumn{6}{l}{\scriptsize{${\sharp}$ Negative class of SAMM: Disgust, Anger, Contempt, Fear and Sadness. }}\\ \end{tabular}} \end{center} \end{table*} \begin{table*}[t!] \footnotesize \caption{The sample distribution of CASME II (5 classes). Since the LOSO validation rule is used in the single-database experiment, the number of subjects is also given in the table.} \label{casme2_5classes} \begin{center} \setlength{\tabcolsep}{0.7mm}{ \begin{tabular}{lccccccc} \hline \multirow{2}*{Database} &\multicolumn{6}{c}{Micro-Expression Category} &\multirow{2}*{Subjects}\\ \cline{2-7} &\textit{Repression} &\textit{Happiness}& \textit{Surprise} & \textit{Disgust}& \textit{Others} &\textbf{Total} \\ \hline CASME II~\cite{Yan2014}&\multirow{2}*{27}&\multirow{2}*{32}&\multirow{2}*{25}&\multirow{2}*{64}&\multirow{2}*{99}&\multirow{2}*{247}&\multirow{2}*{26}\\ (5 classes)&&&&&&\\ \hline \end{tabular}} \end{center} \end{table*} \subsubsection{Settings of the single database experiment} The leave-one-subject-out (LOSO) protocol is used.
The parameter settings of the single database experiment are the same as in the CDMER evaluation. Table~\ref{single data} lists the detailed information about the ``CASME II (4 classes)'', SAMM, and SMIC-HS databases. For CASME II, there are two versions of the database. The first version contains 247 samples of 5 classes, while the second version involves 256 samples labeled with 7 classes. To make a fair comparison, we use both versions of CASME II for the single database experiment. For the first version, we directly use the 5 micro-expression categories. For convenience, we abbreviate the first version of CASME II with 5 classes as ``CASME II (5 classes)'' in our experiment. Table~\ref{casme2_5classes} describes the detailed information of ``CASME II (5 classes)''. Following~\cite{Liu2016, Wang, Zong2018a, Xia2020}, for the second version, the samples of the \textit{Happiness} class are assigned to the \textit{Positive} class, while the samples of the \textit{Surprise} class keep their original label. The samples of the \textit{Disgust}, \textit{Anger}, \textit{Contempt}, \textit{Fear}, and \textit{Sadness} classes are assigned to the \textit{Negative} label, and samples of \textit{Repression} are labeled as the \textit{Others} class. Lastly, the samples in the second version of CASME II are categorized into four classes; we denote it as ``CASME II (4 classes)'' in the experiment. The same categorization protocol is used for the SAMM database. \subsection{Detailed description of performance metrics} The Acc, UAR, and UF1 metrics are defined as follows: \begin{equation} \footnotesize Acc := \frac{\sum_{k=1}^{K}\sum_{m=1}^{M}TP_k^{(m)}}{\sum_{k=1}^{K}\sum_{m=1}^{M}FP_k^{(m)}+\sum_{k=1}^{K}\sum_{m=1}^{M}TP_k^{(m)}}, \label{acc} \end{equation} \begin{equation} \footnotesize UF1 := \frac{1}{K}\sum_{k=1}^{K}F1_k, \label{Uf1} \end{equation} and \begin{equation} \footnotesize UAR := \frac{1}{K}\sum_{k=1}^{K}Acc_k, \label{uar} \end{equation} where \begin{equation} \footnotesize F1_k := \frac{2\cdot\sum_{m=1}^{M}TP_k^{(m)}}{2\cdot\sum_{m=1}^{M}TP_k^{(m)}+\sum_{m=1}^{M}FP_k^{(m)}+\sum_{m=1}^{M}FN_k^{(m)}}, \label{f1} \end{equation} and \begin{equation} \footnotesize Acc_k := \frac{\sum_{m=1}^{M}TP_k^{(m)}}{n_k}, \label{acck} \end{equation} in which $K$ is the number of classes, $M$ is the number of folds of LOSO, $n_k$ is the total number of samples in the ground truth of the $k$-th class, and $Acc_k$ is the per-class accuracy score. For the $m$-th fold of LOSO and the $k$-th class, $TP_k^{(m)}$, $FP_k^{(m)}$, and $FN_k^{(m)}$ are the true positives, false positives, and false negatives, respectively. A minimal numerical sketch of these metric computations is given after the acknowledgements below. \section*{Acknowledgements} This work was supported in part by the National Natural Science Foundation of China under Grants 61672267 and U1836220, the Postgraduate Research \& Practice Innovation Program of Jiangsu Province under Grant KYCX19\_1616, the Jiangsu Specially-Appointed Professor Program (No. 3051107219003), the Jiangsu joint research project of the Sino-foreign cooperative education platform, the Talent Startup project of NJIT (No. YKJ201982), and the Central Fund of the Finnish Cultural Foundation.
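Referring back to the metric definitions in Eqs. (\ref{acc})--(\ref{acck}), the following is a minimal numerical sketch of the UF1 and UAR computations (the data layout is a hypothetical placeholder; the per-fold counts would come from the LOSO evaluation):
\begin{verbatim}
def uf1_uar(fold_counts, n):
    # fold_counts: list over the M LOSO folds of dicts {k: (TP, FP, FN)};
    # n: dict {k: total ground-truth samples of class k}.
    classes = list(n)
    tp = {k: sum(f[k][0] for f in fold_counts) for k in classes}
    fp = {k: sum(f[k][1] for f in fold_counts) for k in classes}
    fn = {k: sum(f[k][2] for f in fold_counts) for k in classes}
    # per-class F1 and accuracy, then unweighted averages over the K classes
    f1  = {k: 2 * tp[k] / (2 * tp[k] + fp[k] + fn[k]) for k in classes}
    acc = {k: tp[k] / n[k] for k in classes}
    K = len(classes)
    return sum(f1.values()) / K, sum(acc.values()) / K  # (UF1, UAR)
\end{verbatim}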
\section{Introduction} Transport effects driven by anomalies in chiral systems have attracted considerable interest recently. They can be induced by an external magnetic field, as in the Chiral Magnetic Effect (CME), where an electric current is generated along the magnetic field due to the presence of an axial chemical potential \cite{ref:CME}. In a magnetic field there also exists the Chiral Separation Effect (CSE), where, conversely, an axial current along the magnetic field is generated in the presence of a vector chemical potential \cite{ref:CSE1, ref:CSE2}. Another example is the Chiral Vortical Effect (CVE) \cite{ref:CVE1,ref:CVE2,ref:CVE3,ref:CVE4}, occurring in rotating chiral systems. These anomalous transport effects couple vector and axial charge densities and currents, which leads to the existence of gapless excitations such as the Chiral Magnetic Wave (CMW) \cite{ref:CMW} and the Chiral Vortical Wave (CVW) \cite{ref:CVW}. One may wonder what excitation could occur in a rotating chiral system placed in an external magnetic field. In the hydrodynamic framework this was recently investigated in \cite{ref:CHW}. In this paper we briefly discuss both the linear and the non-linear case in the hydrodynamic approach and then proceed to the analysis in chiral kinetic theory. \section{Hydrodynamic approach} In this section we, in general, follow \cite{ref:CHW}. We consider a rotating system of both right- and left-handed Weyl fermions placed in a constant homogeneous external magnetic field $\mathbf B$. We assume the angular velocity of the system $\bm \omega$ to be constant. The expressions for the chiral currents are easily obtained from the CME, CSE and CVE: \begin{equation} \mathbf j_{R/L} = \pm\frac{1}{4\pi^2}\mu_{R/L}\mathbf B \pm \left(\frac{1}{4\pi^2}\mu_{R/L}^2 + \frac{1}{12}T^2\right)\bm \omega , \end{equation} where $\mathbf j_{R/L} = \frac{1}{2}(\mathbf j_V\pm\mathbf j_A)$~($\mathbf j_V$ is the vector current and $\mathbf j_A$ is the axial current) and $\mu_{R/L} = \mu\pm\mu_5$. The corresponding charge densities are $n_{R/L} = \frac{1}{2}(j_V^0\pm j_A^0)$. The continuity equations read \begin{equation} \partial_t n_{R/L} + \bm\nabla\cdot\mathbf j_{R/L} = 0 . \end{equation} We consider small fluctuations of the densities $\delta n_{R/L}$ on top of a uniform equilibrium background, assuming the temperature to be constant. Let us choose the axes so that $\mathbf B = (B,0,0)$ and $\bm \omega = (\omega_1,\omega_2,0)$. Then \begin{multline}\label{hydro} \partial_t\delta n_{R/L} \pm\frac{1}{4\pi^2}(B\partial_x\delta\mu_{R/L}+\omega_1\partial_x\delta(\mu_{R/L})^2 \\ + \omega_2\partial_y\delta(\mu_{R/L})^2) = 0 . \end{multline} We introduce susceptibilities for the corresponding densities, $\chi_{R/L} = \frac{\partial n_{R/L}}{\partial \mu_{R/L}}$. Let us concentrate on the right-handed particles below. If in equilibrium $\mu_R = \mu_0$, then up to first order $\delta\mu_R^2 = \dfrac{2\mu_0}{\chi_R}\delta n$. So (\ref{hydro}) transforms into \begin{equation} \partial_t\delta n + \frac{1}{4\pi^2\chi_R}((B+2\omega_1\mu_0)\partial_x\delta n + 2\omega_2\mu_0\partial_y\delta n) = 0 . \end{equation} The corresponding velocity of the wave described by this linear equation is \begin{equation} v_R = \frac{|\mathbf B + 2\mu_0\bm \omega|}{4\pi^2\chi_R} . \end{equation} This shows that the velocities of the CMW and CVW simply sum up as vectors. Note that if $\mathbf B + 2\mu_0\bm \omega = 0$ the velocity is zero. Let us investigate this case more deeply. We need to take into account the non-linear terms that we have dropped before.
Note that \begin{equation} \delta{\mu_R^2} = 2\mu_0\delta\mu_R + (\delta\mu_R)^2 . \end{equation} Substituting into~(\ref{hydro}) (with $\omega_2=0$ and $B+2\mu_0\omega = 0$ now) we have \begin{eqnarray} \partial_t\delta n_R + \frac{1}{4\pi^2}(B+2\mu_0\omega)\partial_x\delta\mu_R +\frac{\omega\partial_x(\delta\mu_R)^2}{4\pi^2} \nonumber \\ =\partial_t\delta n_R + \frac{\omega\partial_x(\delta n_R)^2}{4\pi^2\chi_R^2} = 0 . \end{eqnarray} This is the Hopf equation; notably, the magnetic field has disappeared from the expression, so in this case the features of the solution are determined by the vorticity alone. The implicit solution of this equation is \begin{equation} \delta n(x,t) = F\left(x-\frac{\omega t\delta n}{2\pi^2\chi_R^2}\right) , \end{equation} where $F(x)=\delta n(x,t=0)$ is the initial density distribution. \section{Kinetic approach} Kinetic theory has recently been used to study transport processes such as the CME and CVE, as well as the corresponding gapless excitations, the CMW and CVW (\cite{ref:Kinetic1,ref:CVW}). In this section we study a single right-handed Weyl fermion field. Upon quantization of such a field, right-handed particles and left-handed antiparticles emerge; they will be denoted by the indices $+$ and $-$, respectively. We consider the temperature to be high (compared to $\sqrt B$, $\omega$ and the background chemical potential $\mu$, which is necessary for treating the fermions classically, with the only exception of the Berry connection; see \cite{ref:Kinetic2}) and constant. Taking a similar approach as in (\cite{ref:CVW, ref:Kinetic1}) we start from the kinetic equation \begin{equation}\label{kinetic} \frac{\partial f_{\pm}}{\partial t} + \dot{\mathbf x}\cdot\frac{\partial f_{\pm}}{\partial \mathbf x}+\dot{\mathbf p}\cdot\frac{\partial f_{\pm}}{\partial \mathbf p} = C_{\pm} [f_+,f_-] , \end{equation} where $f_{\pm}(t,\mathbf x,\mathbf p)$ are the distribution functions and $C_{\pm}[f_+,f_-]$ are the collision integrals. The equations of motion for particles $(+)$ in their local rest frame are \begin{equation} \dot{\mathbf x} = \hat{\mathbf p} + \dot{\mathbf p}\times\mathbf b , ~\dot{\mathbf p} = \dot{\mathbf x}\times(\mathbf B+2p~\bm \omega) , \end{equation} similarly to (\cite{ref:Kinetic2}), with an additional Coriolis force term. Here $p = |\mathbf p|$, $\hat{\mathbf p} = \dfrac{\mathbf p}{p}$ and $\mathbf b = \dfrac{\hat{\mathbf p}}{2p^2}$ is the curvature of Berry's connection in momentum space. From these equations (and similar equations for antiparticles) we obtain \begin{eqnarray} \sqrt{G_{\pm}}\dot{\mathbf x} = \hat{\mathbf p} + \mathbf B'_{\pm}(\hat{\mathbf p}\cdot\mathbf b) ,\\ \sqrt{G_{\pm}}\dot{\mathbf p} = \pm\hat{\mathbf p}\times\mathbf B'_{\pm} , \end{eqnarray} where $\sqrt{G_{\pm}} = 1+\mathbf B'_{\pm}\cdot\mathbf b$ modifies the phase-space volume due to the interplay between the effective magnetic field and the Berry connection, and $\mathbf B'_{\pm} = \mathbf B \pm 2 p ~\bm \omega$ is the effective magnetic field. We now want to linearise the kinetic equation. Consider small fluctuations above the equilibrium Fermi-Dirac distribution $f_{0\pm}(p)$: \begin{equation} f_{\pm}=f_{0\pm}(p)-\partial_p f_{0\pm}(p)\delta f_{\pm}(t,\mathbf x,\mathbf p) , \end{equation} and take the Fourier transform of $\delta f_{\pm}(t,\mathbf x,\mathbf p)$ to be $h_{\pm}(\nu,\mathbf k,\mathbf p)$. In equilibrium the collision term is zero, so we can write it as \begin{equation} C_{\pm}[f_+,f_-] = -\partial_p f_{0\pm}I_{\pm}[h_+,h_-]+O(h^2) .
\end{equation} The kinetic equation (\ref{kinetic}) now becomes \begin{equation}\label{reduced} -i\nu h_{\pm} + \dot{\mathbf x}\cdot\left(i\mathbf k \pm \mathbf B'_{\pm}\times\frac{\partial}{\partial\mathbf p}\right)h_{\pm} = I_{\pm}[h_+,h_-] . \end{equation} Now we want to average this in the momentum space, so we define the brackets $\langle...\rangle_{\pm}$ as \begin{equation} \langle...\rangle_{\pm} = \int_{\mathbf p}\sqrt{G_{\pm}}\partial_p f_{0\pm}(p)(...) , \end{equation} where $\int_{\mathbf p} = \int\dfrac{d^3p}{(2\pi)^3}$. From the charge conservation constraint one obtains \begin{equation} \int_{\mathbf p}\sqrt{G_+}C_+[f_+,f_-]-\int_{\mathbf p}\sqrt{G_-}C_-[f_+,f_-] = 0 , \end{equation} for arbitrary $f_{\pm}$, which implies \begin{equation} \int_{\mathbf p}\sqrt{G_+}\partial_p f_{0+} I_+[h_{\pm}] - \int_{\mathbf p}\sqrt{G_-}\partial_p f_{0-} I_-[h_{\pm}] = 0 , \end{equation} for arbitrary $h_{\pm}$. Also, the ``Lorentz force'' term vanishes after averaging and integrating by parts. So, averaging the equations (\ref{reduced}) with the corresponding bracket and taking the difference we obtain \begin{eqnarray}\label{empty} \nu(\langle h_+\rangle_+ - \langle h_-\rangle_-) - \mathbf k\cdot(\langle\dot{\mathbf x} h_+\rangle_+ - \langle\dot{\mathbf x} h_-\rangle_-) = 0 . \end{eqnarray} \subsection{Hydrodynamic regime} We now want to study the hydrodynamic regime, which, similarly to \cite{ref:CVW}, means that we consider $h_{\pm}$ to be independent of $p$ and $h_+ = -h_- = h$. This means $h_{\pm}$ can be taken out of the averaging brackets and the equation becomes \begin{equation}{\label{kinhydro}} \nu h(\langle1\rangle_+ + \langle1\rangle_-)-\mathbf k\cdot(\langle\dot{\mathbf x}\rangle_+ + \langle\dot{\mathbf x}\rangle_-)h = 0. \end{equation} For $\langle\dot{\mathbf x}\rangle_+$ we have \begin{multline} \langle\dot{\mathbf x}\rangle_+ = \int_{\mathbf p}(\hat{\mathbf p} + \mathbf B'_+(\hat{\mathbf p}\cdot\mathbf b))\frac{\partial f_{0+}(p)}{\partial p} = \int_{\mathbf p}\mathbf B'_+\left(\frac{\partial f_{0+}(p)}{\partial \mathbf p}\cdot\mathbf b\right) \\ = -\int_{\mathbf p}\mathbf B'_+\left(f_{0+}\frac{\partial b_i}{\partial p_i}\right) -\int_{\mathbf p}f_{0+}\left(b_i\frac{\partial\mathbf B'_+}{\partial p_i}\right) \\ = -\int_{\mathbf p}f_{0+}(p)2\pi\delta^{(3)}(\mathbf p)\mathbf B'_+(p) -\int_{\mathbf p}f_{0+}(b_i(2\hat p_i\bm \omega)) \\ = -\frac{\mathbf B}{8\pi^2} - \bm \omega\int_{\mathbf p}\frac{f_{0+}}{p^2} = -\frac{\mathbf B}{8\pi^2} - \frac{\bm \omega}{2\pi^2}\int_0^\infty f_{0+}dp . \end{multline} For the Fermi-Dirac distribution $f_{0\pm} = \dfrac{1}{e^{\beta(p\mp\mu_0)}+1}$ we get \begin{equation} \int_0^\infty f_{0+}dp = \frac{1}{\beta}\log(1+e^{\beta\mu_0}) . \end{equation} Analogously \begin{eqnarray} \langle\dot{\mathbf x}\rangle_- = -\frac{\mathbf B}{8\pi^2} + \frac{\bm \omega}{2\pi^2}\int_0^\infty f_{0-}dp , \\ \int_0^\infty f_{0-}dp = \frac{1}{\beta}\log(1+e^{-\beta\mu_0}) . \end{eqnarray} Finally \begin{multline}{\label{sum}} \langle\dot{\mathbf x}\rangle_+ + \langle\dot{\mathbf x}\rangle_- = - \frac{\mathbf B}{4\pi^2} - \frac{\bm \omega}{2\pi^2\beta}\log\left(\frac{1+e^{\beta\mu_0}}{1+e^{-\beta\mu_0}}\right) \\ \approx -\frac{\mathbf B}{4\pi^2} - \frac{\bm \omega\mu_0}{2\pi^2} , \end{multline} where in the last step we used our approximation $\beta\mu_0\ll1$. Furthermore, \begin{multline}{\label{susc}} \langle1\rangle_+ + \langle1\rangle_- = \int_{\mathbf p}\sqrt{G_+}\partial_pf_{0+} + \int_{\mathbf p}\sqrt{G_-}\partial_pf_{0-} \\ = - \left.\frac{\partial n}{\partial\mu}\right|_{\mu_0} = - \chi , \end{multline} where $\chi$ is the charge susceptibility.
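As a quick numerical sanity check of the last integrals (a minimal sketch with arbitrary test values of $\beta$ and $\mu_0$; it is not part of the derivation):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

beta, mu0 = 2.0, 0.3   # arbitrary test values

# Fermi-Dirac occupation for the right-handed particles
f0_plus = lambda p: 1.0 / (np.exp(beta * (p - mu0)) + 1.0)

numeric, _ = quad(f0_plus, 0.0, np.inf)
analytic = np.log(1.0 + np.exp(beta * mu0)) / beta
print(numeric, analytic)   # the two values agree to quadrature accuracy
\end{verbatim}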
So from Eqs. (\ref{kinhydro}), (\ref{sum}), and (\ref{susc}) we get the velocity of the Chiral Magnetic-Vortical Wave \begin{equation} \mathbf v = \frac{\mathbf B + 2\mu_0\bm \omega}{4\pi^2\chi} . \end{equation} Notably, this result coincides with the one obtained in \cite{ref:CHW} and rederived in the previous section. \subsection{Relaxation time approximation} We now want to study the effect beyond the hydrodynamic regime and use the relaxation time approximation (RTA) for the collision term as in \cite{ref:Kinetic1}. The collision term in this approximation is \begin{equation} I_{\pm}[h_+,h_-] = -\frac{1}{\tau}(h_{\pm}\mp\bar h) , \end{equation} where from the charge conservation \begin{equation} \bar h = \frac{1}{2}(\langle h_+\rangle_+ - \langle h_-\rangle_-) . \end{equation} We consider $\mathbf B$ to be parallel to $\bm \omega$ and, moreover, consider only $\mathbf k$ parallel to them to simplify our discussion. In this case the only preferred direction in space is along $\mathbf B$, so $h_{\pm}$ can depend only on the absolute value of the component of momentum orthogonal to $\mathbf B$; hence the Lorentz force term in~(\ref{reduced}) vanishes and we have \begin{equation} -i\nu h_{\pm} + i\dot{\mathbf x}\cdot\mathbf k h_{\pm} = -\frac{1}{\tau}(h_{\pm}\mp\bar h) . \end{equation} So, \begin{eqnarray} \langle h_+\rangle_+ = \left<\frac{\bar h}{-i\nu\tau + i\dot{\mathbf x}\cdot\mathbf k\tau +1}\right>_+ , \\ \langle h_-\rangle_- = \left<\frac{-\bar h}{-i\nu\tau + i\dot{\mathbf x}\cdot\mathbf k\tau +1}\right>_- . \end{eqnarray} The consistency condition of this system is \begin{multline}{\label{consist}} \left<\frac{1}{-i\nu\tau + i\dot{\mathbf x}\cdot\mathbf k\tau +1}\right>_+ + \left<\frac{1}{-i\nu\tau + i\dot{\mathbf x}\cdot\mathbf k\tau +1}\right>_- \\ = -\chi . \end{multline} We now expand $\nu$ in a series assuming that $B$ and $\omega$ are small: \begin{equation} \nu(\mathbf k) = \nu_{n}(k) + \nu_{a}(\mathbf k) + ... , \end{equation} where $\nu_{n} = O(B^0,\omega^0)$ (``normal'') is the term in the absence of any magnetic field and vorticity and $\nu_{a} = O(B^1,\omega^1)$ (``anomalous'') is the first term of the expansion, which is responsible for the phenomenon we are studying. If $\mathbf B = 0$, $\bm \omega = 0$: \begin{multline}{\label{relax+}} \left<\frac{\tau^{-1}}{-i\nu_{n} + i\dot{\mathbf x}\cdot\mathbf k + \tau^{-1}}\right>_+ \\ = \frac{\tau^{-1}}{2}\int_{-1}^1\frac{d\cos\theta}{-i\nu_{n} +ik\cos\theta+\tau^{-1}}\int_{\mathbf p}\partial_p f_{0+} \\ = \frac{1}{2ik\tau}\log\left(\frac{1+i(k-\nu_{n})\tau}{1-i(k+\nu_{n})\tau}\right)\int_{\mathbf p}\sqrt{G_+}\partial_p f_{0+} . \end{multline} Let us denote $\chi_+ = \int_{\mathbf p}\sqrt{G_+}\partial_p f_{0+}$ and, analogously, $\chi_- = \int_{\mathbf p}\sqrt{G_-}\partial_p f_{0-}$. From (\ref{susc}) we have \begin{equation} \chi_+ +\chi_- = -\chi . \end{equation} From (\ref{consist}), (\ref{relax+}) and the analogous equation for antifermions we have \begin{equation} \nu_{n}=\frac{1-k\tau\cot(k\tau)}{i\tau}\approx-\frac{i}{\tau}\left(\frac{1}{3}(k\tau)^2+...\right) , \end{equation} so the leading term of the expansion describes diffusion.
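For clarity, we spell out the small-$k\tau$ expansion behind the last approximation (an added intermediate step, using $\cot x=1/x-x/3-x^3/45-\cdots$):
\begin{equation}
1-k\tau\cot(k\tau)=\frac{(k\tau)^2}{3}+\frac{(k\tau)^4}{45}+\cdots
\qquad\Longrightarrow\qquad
\nu_{n}=-\frac{ik^2\tau}{3}\left(1+\frac{(k\tau)^2}{15}+\cdots\right),
\end{equation}
i.e., at leading order $\nu_{n}\approx-i(\tau/3)k^2$, which is indeed a purely diffusive mode.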
Let now $\mathbf B, \bm \omega\neq 0$: \begin{multline} \left<\frac{\tau^{-1}}{-i\nu + i\dot{\mathbf x}\cdot\mathbf k + \tau^{-1}}\right>_+ \\ = \frac{1}{(2\pi)^2}\int_{-1}^1d\cos\theta \int_0^{\infty}\frac{p^2\sqrt{G_+}\,\tau^{-1}\,(\partial f_{0+}/\partial p)\, dp}{-i\nu + \frac{ik}{\sqrt{G_+}}\left(\cos\theta +\frac{B_+'}{2p^2}\right)+\tau^{-1}} \\ \approx \frac{\chi_+}{2ik\tau}\log\left(\frac{1+i(k-\nu)\tau}{1-i(k+\nu)\tau}\right) +\frac{\tau^{-1}}{2(2\pi)^2}\int_0^{\infty}B_+'\frac{\partial f_{0+}}{\partial p} dp \\ \times\int_{-1}^1d\cos\theta\frac{\cos\theta(\tau^{-1}-i\nu)-ik+2(\tau^{-1}-i\nu)^2}{-i\nu + ik+\tau^{-1}} \\ = \chi_+\left(1+\frac{\nu_{a}}{2k}\frac{2ik\tau}{1-2i\nu_{n}\tau +(k^2-\nu_{n}^2)\tau^2}\right) \\ + \frac{\tau^{-1}}{2(2\pi)^2}\left(-\frac{B}{2}-\frac{2\omega}{\beta}\log(1+e^{\beta\mu_0})\right) \frac{6\nu_{n}\tau}{k} . \end{multline} The last computation requires some clarification. After the second equality sign we kept only terms up to first order in $B$ and $\omega$. Then in the zeroth-order term we kept both $\nu_{n}$ and $\nu_{a}$, while in the first-order term we kept only $\nu_{n}$, since $\nu_{a}$ is itself first order in $B$ and $\omega$. Analogously \begin{multline} \left<\frac{\tau^{-1}}{-i\nu + i\dot{\mathbf x}\cdot\mathbf k + \tau^{-1}}\right>_- \\ =\chi_-\left(1+\frac{\nu_{a}}{2k}\frac{2ik\tau}{1-2i\nu_{n}\tau +(k^2-\nu_{n}^2)\tau^2}\right) \\ + \frac{\tau^{-1}}{2(2\pi)^2}\left(-\frac{B}{2}+\frac{2\omega}{\beta}\log(1+e^{-\beta\mu_0})\right) \frac{6\nu_{n}\tau}{k} . \end{multline} So we get \begin{equation} \frac{i\tau\nu_{a}\chi}{1-2i\tau\nu_{n}+(k^2-\nu_{n}^2)\tau^2} - \frac{6\nu_{n}(B+2\omega\mu_0)}{8\pi^2k} = 0 , \end{equation} and inserting $\nu_{n}$ we get \begin{equation} \nu_{a} = k\frac{B + 2\mu_0\omega}{4\pi^2\chi}\frac{3(1-k\tau\cot(k\tau))}{\sin^2(k\tau)} = k\frac{B + 2\mu_0\omega}{4\pi^2\chi} , \end{equation} with the last equality holding for $k\tau\ll1$. Note that this anomalous (first order in $B$ and $\omega$) part of the dispersion relation turns out to be purely real and, hence, non-dissipative. \section{Discussion} In this paper we have taken the hydrodynamic and kinetic approaches to find the velocity of the gapless excitation in chiral media in the presence of magnetic field and vorticity, the Chiral Magnetic-Vortical Wave (CMVW). Using these approaches we have rederived the result of \cite{ref:CHW} that to first order in $B$ and $\omega$ the velocity in question is given simply by the vector sum of the velocities of the CMW and CVW (in the RTA we have proven this only for the case of parallel $\mathbf B$ and $\bm \omega$, with the wave vector $\mathbf k$ parallel to them as well). Another interesting result is that in the RTA the CMVW turns out to be non-dissipative at linear order in $B$ and $\omega$, as the dispersion relation for $\nu_{a}$ turns out to be purely real. \section{Acknowledgements} The author is grateful to A.~S.~Gorsky for suggesting this problem and for valuable discussions, and to A.~Aristova for useful communications.
\section{Introduction} In a landmark merging of general relativity and quantum field theory (known as quantum gravity theory), Hawking \cite{Haw1} predicted that a black hole (BH) could emit blackbody-like radiation, the so-called Hawking radiation (HR). In fact, HR asserts that any (not naked) BH, due to the vacuum fluctuations, can emit quantum particles generated from the virtual particle pairs which obey the annihilation-creation mechanism of quantum field theory. It is hypothesized that in each HR process, one of these particles, which has negative energy, falls into the BH to reduce its mass before being annihilated by its partner (the one with positive energy). Due to the difference of the vacua, the non-absorbed particle can thus escape to spatial infinity, and it could be detected by an observer as a HR particle. The corresponding temperature of the HR is in striking agreement with the first law of thermodynamics, which comprises the entropy of the BH \cite{Bek1}. Unfortunately, the theoretical computations reveal that with today's technology the detection of the HR of a cosmological BH (having a mass of a few solar masses or greater) is almost impossible, because such BHs have an extremely weak radiation. However, using a different spacetime configuration one can get arbitrarily strong HR. In particular, HR around a micro BH can tear it apart relatively quickly \cite{Adam}. Unruh \cite{un01} proposed a theoretical method which renders the HR detection possible by simulating an analogue BH (a quantum fluid originating from Bose-Einstein condensation) in a laboratory environment. His argument was based on the fact that a transition from subsonic flow to supersonic flow is analogous to a BH event horizon. Since then, numerous systems (ultracold fermions, electromagnetic waveguides, light in a nonlinear liquid, etc.) have been proposed for the analogue BHs (see \cite{Leonhardt,Nambu,Carbonaro,Iorio,Richartz} and references therein). For example, Corley and Jacobson \cite{Corley} came up with a new idea for the possible detection of the HR, which concerns a condensed-matter BH laser. According to their Gedanken experiment, when a BH possesses two horizons, HR reflects between those horizons, and in this way, in each round trip, an amplified radiation could disperse around the BH: a natural increase in the probability of HR detection. Since Unruh's seminal papers \cite{un1}, it has been understood that BH physics could be mimicked in the theory of supersonic acoustic flows. The obtained acoustic black holes (ABHs) govern the propagation of sound, which depends algebraically on the flow velocity and density \cite{PLB1,PLB2}. Recently remarkable progress has been achieved in modeling the HR within the laboratory \cite{Jeff}. These developments will not only help physicists gain the most profound insights into quantum gravity, but also increase the popularity of ABHs in the literature. The HR of the ABH was first studied by Li-Chun et al. \cite{Chun}, who employed two different methods in their computations: the reduced global embedding approach and the analytical continuation method of the wave function across the acoustic horizon. The main aim of this paper is to continue in the spirit of \cite{Chun} and give supplementary quantum gravity effects on the HR of the ABH by focusing on the GUP \cite{GUP0,gup01}. The remainder of the paper is organized as follows. In Sec. 2, we begin with the spacetime metric for a rotating ABH and discuss some of its basic features.
We also show that the associated spacetime metric can be cast into a static form by performing a simple dragging coordinate transformation. In Sec. 3, we study how a massive chargeless scalar field propagates under the effect of GUP in the background of the ABH. Writing out the associated Klein-Gordon equation (KGE) with GUP, we show that the KGE (within the framework of the Hamilton-Jacobi (HJ) method \cite{Vanz}) completely separates with a suitable ansatz and yields the GUP-corrected Hawking temperature. Section 4 is devoted to the derivation of the quantum gravity corrected (QGC) entropy, which we simply call the GUP entropy ($S_{GUP}$). In Sec. 5, we explore the quantum gravity effects on the HR of the ABH arising from that of $S_{GUP}$. To this end, we perform the quantum tunneling computations prescribed by Parikh and Wilczek \cite{PWT}. The paper ends with the conclusion in Sec. 6. \section{Rotating ABH} The spacetime of the rotating ABH was introduced in \cite{PLB1,PLB2}. Its line-element reads \begin{equation} ds^{2}=-Fdt^{2}+\frac{1}{G}dr^{2}-Hd\varphi dt+Kd\varphi ^{2}, \label{1} \end{equation} where the metric functions are given by \begin{eqnarray} F &=&\frac{1}{\lambda _{+}}\left( 1-\frac{\lambda _{-}(A^{2}+B^{2})}{c^{2}r^{2}}\right) ,\,\,\,\, \label{2} \\ G &=&\frac{\lambda _{-}}{\lambda _{+}}\left( 1-\frac{\lambda _{-}A^{2}}{c^{2}r^{2}}\right) ,\,\,\,\, \label{3} \\ H &=&\frac{2B}{c}, \label{4} \\ K &=&\frac{\lambda _{+}r^{2}}{\lambda _{-}}. \label{5} \end{eqnarray} Here $A$, $B$, and $\beta$ are real constants, $\lambda _{\pm }=1\pm \beta$, and $c=\sqrt{\frac{dh}{d\rho }}$ denotes the speed of sound. Throughout the paper, without loss of generality, we assume that the constant $A$ has a positive definite value. The event horizon ($r_{h}$) of the rotating ABH is determined by $g^{rr}(r_{h})=G(r_{h})=0$, so that one gets \begin{equation} r_{h}=\frac{\sqrt{\lambda _{-}}A}{c}. \label{6} \end{equation} On the other hand, the condition $g_{tt}(r_{e})=0$ gives the radius of the ergosphere as follows \begin{equation} r_{e}=\frac{\sqrt{\lambda _{-}}\sqrt{A^{2}+B^{2}}}{c}. \label{7} \end{equation} Here we are interested in the frame-dragging effect (a BH drags spacetime with it as it rotates), which is often termed dragging of inertial frames. This effect produces a detectable gyroscopic precession called the Lense-Thirring effect \cite{LTE}. When we use the dragging coordinate transformation \cite{Jcap} $d\phi =d\varphi -\Omega dt$, where $\Omega =\frac{H}{2K}$, metric \eqref{1} becomes \begin{equation} ds^{2}=-Zdt^{2}+\frac{1}{G}dr^{2}+Kd\phi ^{2}, \label{8} \end{equation} where \begin{equation} Z=\frac{4KF+H^{2}}{4K}=\frac{c^{2}r^{2}-A^{2}\lambda _{-}}{c^{2}r^{2}\lambda _{+}}. \label{9} \end{equation} Hereby, the Hawking temperature of the rotating ABH can be computed \cite{Waldbook,rot3} as \begin{equation} T_{H}=\frac{1}{4\pi }\sqrt{\frac{G}{Z}}\left. \frac{dZ}{dr}\right\vert _{r=r_{h}}=\frac{\sqrt{\lambda _{-}}}{4\pi }\left. \frac{dZ}{dr}\right\vert _{r=r_{h}}=\frac{c}{2\pi \lambda _{+}A}.
\label{10} \end{equation} \section{Effect of GUP on Quantum Tunneling of Scalar Particles} Employing the modified commutation relations \cite{gup4}, it is shown in \cite{GUP0,GUP1,gup05,peng} that the KGE with GUP for a scalar field $\Psi$ takes the following form \begin{equation} -(i\hslash )^{2}\partial ^{t}\partial _{t}\Psi =\left[ (i\hslash )^{2}\partial ^{i}\partial _{i}+m_{p}^{2}\right] \left[ 1-2\alpha _{GUP}\left( (i\hslash )^{2}\partial ^{i}\partial _{i}+m_{p}^{2}\right) \right] \Psi , \label{11} \end{equation} where $\alpha _{GUP}$ and $m_{p}$ are the GUP parameter and the mass of the scalar particle (phonon), respectively. The generalized KGE (11) can be solved by using the semiclassical WKB approximation \cite{gup05}. We therefore choose the following ansatz for the scalar field: \begin{equation} \Psi (t,r,\phi )=\exp {\left( \frac{i}{\hbar }\mathcal{S}\left( t,r,\phi \right) \right) }, \label{12} \end{equation} where $\mathcal{S}( t,r,\phi)$ is the classically forbidden action for the tunneling. Inserting the above scalar field $\Psi$ into Eq. \eqref{11} for the background \eqref{8}, we obtain the following expression (in leading order of $\hbar$) \begin{equation} \frac{1}{Z}(\partial _{t}\mathcal{S})^{2}=G\,(\partial _{r}\mathcal{S})^{2}+\frac{1}{K}(\partial _{\phi }\mathcal{S})^{2}+m_{p}^{2}\left( 1-2\,\alpha _{GUP}\,G(\partial _{r}\mathcal{S})^{2}-\frac{2\alpha _{GUP}}{K}(\partial _{\phi }\mathcal{S})^{2}-2\alpha _{GUP}m_{p}^{2}\right) . \label{13} \end{equation} Taking the symmetries of the metric \eqref{8} into account, one can choose the following HJ ansatz for the action \begin{equation} \mathcal{S}(t,r,\phi )=-E\,t+W(r)+j\phi +C, \label{14} \end{equation} where $C$ is a complex constant, $E$ is the energy, and $j$ denotes the angular momentum of the particle. Substituting the action \eqref{14} into Eq. \eqref{13}, one gets \begin{equation} \frac{1}{Z}E^{2}=G\,(W^{\prime })^{2}+\frac{j^{2}}{K}+m_{p}^{2}\left( 1-2\alpha _{GUP}G(W^{\prime })^{2}-\frac{2\alpha _{GUP}}{K}j^{2}-2\alpha _{GUP}m_{p}^{2}\right) . \label{15} \end{equation} We focus only on the radial trajectories. Therefore, the radial part (ignoring the higher-order terms in $\alpha _{GUP}$) results in the following integral \begin{equation} W(r)=\pm \int \frac{1}{\sqrt{\Delta (r)}}\frac{\sqrt{E^{2}-\frac{\Delta \left( r\right) }{G}\left( \frac{j^{2}}{K}+m_{p}^{2}\right) \left( 1-2m_{p}^{2}\alpha _{GUP}\right) }}{\sqrt{1-2m_{p}^{2}\alpha _{GUP}}}dr, \label{16} \end{equation} where \begin{equation} \Delta (r)=ZG=\frac{\lambda _{-}}{\lambda _{+}^{2}}\frac{\left( c^{2}r^{2}-A^{2}\lambda _{-}\right) ^{2}}{c^{4}r^{4}}, \label{17} \end{equation} which vanishes at the event horizon: $\Delta (r_{h})\rightarrow 0$. In order to work out the integral \eqref{16}, we first expand the function $\Delta (r)$ in a Taylor series near the horizon \begin{equation} \Delta (r)\approx \Delta (r_{h})+\Delta ^{\prime }(r_{h})(r-r_{h})+\frac{1}{2}\Delta ^{\prime \prime }(r_{h})(r-r_{h})^{2}. \label{18} \end{equation} Then, we evaluate the integral around the pole located at $r_{h}$ by deforming the contour. The result is given by \begin{equation} W(r_{h})=\pm \frac{i\pi E}{2\sqrt{\lambda _{-}}}\frac{\lambda _{+}r_{h}}{\sqrt{1-2m_{p}^{2}\alpha _{GUP}}}. \label{19} \end{equation} The positive (negative) sign indicates the outgoing (ingoing) phonon. At this point, we should note that the famous factor-two problem in the above expression, which yields the wrong tunneling rate, can be fixed by a procedure described in \cite{Akhmedova1}.
Another way to overcome this problem is to set the probability of the ingoing phonons to $100\%$. Namely, we have \begin{equation} P_{-}\simeq e^{-2ImW_{-}}=1, \label{20} \end{equation} which leads to \begin{equation} Im\mathcal{S}_{-}=ImW_{-}+ImC=0. \label{21} \end{equation} On the other hand, for the outgoing phonon we have \begin{equation} Im\mathcal{S}_{+}=ImW_{+}+ImC. \label{22} \end{equation} From Eq. \eqref{19} it is not difficult to see that $W_{+}=-W_{-}$. Hence, one reads the tunneling probability of the outgoing phonons as follows \begin{equation} P_{+}=e^{-2Im\mathcal{S}_{+}}\simeq e^{-4ImW_{+}}. \label{23} \end{equation} Finally, using Eqs. \eqref{20} and \eqref{23} the tunneling rate of the phonons becomes \begin{equation} \Gamma =\frac{P_{+}}{P_{-}}\simeq e^{(-4ImW_{+})}. \label{24} \end{equation} Ultimately, we can find the GUP temperature ($T_{GUP}$) of the ABH by comparing the latter result with the Boltzmann formula $\Gamma =e^{-\beta E}$, where $\beta$ is the inverse temperature \cite{rbook}. Thus, we have \begin{equation} T_{GUP}=\frac{\sqrt{\lambda _{-}}}{2\pi \lambda _{+}}\frac{\sqrt{1-2m_{p}^{2}\alpha _{GUP}}}{r_{h}}=T_{H}\sqrt{1-2m_{p}^{2}\alpha _{GUP}}. \label{25} \end{equation} As can be seen above, after switching off the GUP parameter, i.e., $\alpha _{GUP}=0$, one recovers the original Hawking temperature \eqref{10}. \section{GUP Entropy} In this section, we shall revisit the recent studies \cite{Pasos0,Pasos} to derive the $S_{GUP}$ for a BH. In general, the GUP is defined by \cite{Vagenas} \begin{equation} \Delta x\Delta p_{GUP}\geq \hbar \left( 1-\frac{y}{\hbar }\Delta p_{GUP}+\frac{y^{2}}{\hbar ^{2}}(\Delta p_{GUP})^{2}\right) , \label{26n} \end{equation} where $y=\alpha _{GUP}l_{p}$, in which $\alpha _{GUP}$ is a dimensionless positive constant and $l_{p}=\sqrt{\frac{\hbar G}{c^{3}}}$ is the Planck length. Equation (26) can be reorganized as \begin{equation} \Delta p_{GUP}\geq \frac{\hbar (\Delta x+y)}{2y^{2}}\left( 1-\sqrt{1-\frac{4y^{2}}{(\Delta x+y)^{2}}}\right) . \label{27n} \end{equation} In fact, $l_{p}/\Delta x$ is infinitesimally small compared with unity. Without loss of generality, using units $l_{p}=G=c=\hbar =k_{B}=1$ and subsequently expanding the above equation in a Taylor series, we find \begin{equation} \Delta p_{GUP}\geq \frac{1}{\Delta x}\left[ 1-\frac{\alpha _{GUP}}{2\Delta x}+\frac{\alpha _{GUP}^{2}}{2(\Delta x)^{2}}+\cdots \right] . \label{28n} \end{equation} As known from introductory quantum mechanics textbooks, in the absence of the GUP effect ($\alpha _{GUP}=0$) we get the ordinary (Heisenberg) uncertainty principle and its saturated form \cite{Pasos0,Pasos} as follows \begin{equation} \Delta x\Delta p\geq 1, \label{29n} \end{equation} \begin{equation} \Xi\Delta x\geq 1, \label{30n} \end{equation} where $\Xi$ denotes the energy of a quantum-scale particle. Hence, in analogy with Eqs. (28) and (29), one can also derive the QGC version of Eq. (30) as \cite{Pasos0} \begin{equation} \Xi_{QGC}\geq \Xi\left[ 1-\frac{\alpha _{GUP}}{2(\Delta x)}+\frac{\alpha _{GUP}^{2}}{2(\Delta x)^{2}}+\cdots \right] . \label{31n} \end{equation} The quantum tunneling rate for a quantum particle with $\Xi_{QGC}$ reads \cite{Pasos0} \begin{equation} \Gamma \simeq \exp [-2Im\mathcal{I}]=\exp \left( -\Xi_{QGC}/T_{QGC}\right), \label{32n} \end{equation} where $T_{QGC}$ denotes the QGC temperature. Now, if we compare Eq.
(32) with the Boltzmann factor, we obtain \begin{equation} T_{QGC}=T_{H}\left[ 1-\frac{\alpha _{GUP}}{2(\Delta x)}+\frac{\alpha _{GUP}^{2}}{2(\Delta x)^{2}}+\cdots \right] ^{-1}. \label{33n} \end{equation} Inspired by the recent studies \cite{Pasos0,Pasos}, we can assign $\Delta x$ to $A_{h}/\pi$. Thus, employing the first law of BH thermodynamics, one can derive the GUP entropy as follows \begin{eqnarray} S_{GUP} &=&\int \frac{\kappa dA_{h}}{8\pi T_{QGC}}=\int \frac{T_{H}dA_{h}}{4T_{QGC}} \notag \\ &=&\int \frac{dA_{h}}{4}\left[ 1-\frac{\pi \alpha _{GUP}}{2A_{h}}+\frac{\pi ^{2}\alpha _{GUP}^{2}}{2A_{h}^{2}}+\cdots \right] , \notag \\ &=&\frac{A_{h}}{4}-\frac{\pi \alpha _{GUP}}{8}\ln {\frac{A_{h}}{4}}-\frac{\pi ^{2}\alpha _{GUP}^{2}}{8A_{h}}+\cdots , \label{34n} \end{eqnarray} where $A_{h}$ is the perimeter length of the event horizon and $\kappa =2\pi T_{H}$ is the surface gravity. In Eq. (34), the presence of $\alpha _{GUP}$ brings correction terms to the BH entropy. Thus, whenever $\alpha _{GUP}=0$ one reproduces the well-known area law of BH mechanics: $\left. S_{GUP}\right\vert _{\alpha _{GUP}=0}\rightarrow S=A_{h}/4$. Meanwhile, the result obtained in Eq. (34) is in accordance with the earlier works that take into account the influence of loop quantum gravity and string theory on the quantum-corrected entropy (see, for instance, \cite{Rovel,QGC1,QGC2,QGC3,QGC4} and references therein). \section{Quantum Gravity Corrected Hawking Radiation of ABH} The Painlev\'{e}-Gullstrand coordinate (PGC) system \cite{Pain,Gull} is one of the coordinate transformations in general relativity that yield a metric regular at the horizon. The constant time surfaces in the PGCs traverse the event horizon to reach the singularity. In fact, a geometry described by the PGC can be seen as a flow whose current speed is equal to the Newtonian escape velocity at each point. Furthermore, the PGC time is the proper time of an observer who freely falls radially from rest \cite{Hamil,Kanai}. In this section, we shall define the PGC form of the ABH and compute its HR via the HJ method \cite{Vanz}. Subsequently, in the framework of the Parikh-Wilczek tunneling method (PWTM) \cite{PWT}, the QGC HR will be studied. According to Eqs. (3) and (9), one can easily see that \begin{equation} G=\lambda _{-}Z. \label{35n} \end{equation} Therefore metric (8) can be rewritten as \begin{equation} ds^{2}=-Zdt^{2}+\frac{1}{\lambda _{-}Z}dr^{2}+Kd\phi ^{2}. \label{36n} \end{equation} After rescaling the radial coordinate as \begin{equation} r\rightarrow \sqrt{\lambda _{-}}\widetilde{r}, \label{37n} \end{equation} the latter metric (36) becomes \begin{equation} ds^{2}=-\widetilde{Z}dt^{2}+\frac{1}{\widetilde{Z}}d\widetilde{r}^{2}+\widetilde{K}d\phi ^{2}, \label{38nn} \end{equation} where \begin{equation} \widetilde{Z}=\frac{c^{2}\widetilde{r}^{2}-A^{2}}{c^{2}\widetilde{r}^{2}\lambda _{+}}, \label{39n} \end{equation} \begin{equation} \widetilde{K}=\lambda _{+}\widetilde{r}^{2}. \label{40nn} \end{equation} In metric (38) the event horizon corresponds to \begin{equation} \widetilde{r}_{h}=\frac{A}{c}. \label{41nn} \end{equation} It is needless to say that the Hawking temperature (10) remains intact in this framework, as it should be.
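As a consistency check (an added remark; it follows directly from Eqs. (39) and (41)), the surface gravity of the static metric (38) indeed reproduces the Hawking temperature (10):
\begin{equation}
T=\frac{1}{4\pi }\left. \frac{d\widetilde{Z}}{d\widetilde{r}}\right\vert _{\widetilde{r}=\widetilde{r}_{h}}=\frac{1}{4\pi }\frac{2A^{2}}{c^{2}\widetilde{r}_{h}^{3}\lambda _{+}}=\frac{1}{2\pi \lambda _{+}\widetilde{r}_{h}}=\frac{c}{2\pi \lambda _{+}A}=T_{H}.
\end{equation}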
Now, one can pass to the PGCs by applying the following transformation \cite{SakCTP} to the metric (38) \begin{equation} d\widetilde{t}=dt+\frac{\sqrt{1-\widetilde{Z}}}{\widetilde{Z}}d\widetilde{r}, \label{42nn} \end{equation} where $\widetilde{t}$ is referred to as the PGC time. In fact, as can be deduced from many introductory textbooks on general relativity, $\widetilde{t}$ is equivalent to the proper time in this coordinate system \cite{Rob}. Inserting the transformation (42) into metric (38), we get the following line-element \begin{equation} ds^{2}=-\widetilde{Z}d\widetilde{t}^{2}+2\sqrt{1-\widetilde{Z}}d\widetilde{t}d\widetilde{r}+d\widetilde{r}^{2}+\widetilde{K}d\phi ^{2}. \label{43n} \end{equation} The relativistic HJ equation \cite{Angh} of the classical action $\mathcal{I}$ is given by \begin{equation} g^{\mu \nu }\partial _{\mu }\mathcal{I}\partial _{\nu }\mathcal{I}+m^{2}=0, \label{44nn} \end{equation} where $m$ is the mass of the phonon. For the metric (43), Eq. (44) results in \begin{equation} m^{2}-(\partial _{\widetilde{t}}\mathcal{I})^{2}+2\sqrt{1-\widetilde{Z}}(\partial _{\widetilde{t}}\mathcal{I})(\partial _{\widetilde{r}}\mathcal{I})+\widetilde{Z}(\partial _{\widetilde{r}}\mathcal{I})^{2}+\frac{1}{\widetilde{K}}(\partial _{\phi }\mathcal{I})^{2}=0. \label{45n} \end{equation} Letting \begin{equation} \mathcal{I}=\mathcal{W}(\widetilde{r})+\mathcal{J}(\phi )-\mathcal{E}\widetilde{t}, \label{46n} \end{equation} and subsequently substituting the above ansatz into Eq. (45), we obtain \begin{equation} \mathcal{W}_{\left( \pm \right) }=\int \frac{\mathcal{E}\sqrt{1-\widetilde{Z}}\pm \sqrt{\mathcal{E}^{2}-\widetilde{Z}\left( m^{2}+\frac{\left( \partial _{\phi }\mathcal{J}\right) ^{2}}{\widetilde{K}}\right) }}{\widetilde{Z}}d\widetilde{r}. \label{47n} \end{equation} Thus, one can find that near the horizon Eq. (47) reduces to \begin{equation} \mathcal{W}_{\left( \pm \right) }^{\mathcal{NH}}\approx \mathcal{E}\int \frac{1\pm 1}{\widetilde{Z}}d\widetilde{r}. \label{48n} \end{equation} It is easy to see that $\mathcal{W}_{(-)}^{\mathcal{NH}}=0$ (i.e., the probability of the ingoing phonons is $P_{-}=1$), which guarantees $100\%$ absorption of the ingoing phonons by the ABH. On the other hand, the integral of $\mathcal{W}_{(+)}^{\mathcal{NH}}$ has a pole at the event horizon. To evaluate the associated integral, we use a Taylor series to expand the metric function $\widetilde{Z}$ around the horizon $\widetilde{r}_{h}$: \begin{equation} \widetilde{Z}\cong \widetilde{Z}^{\prime }(\widetilde{r}_{h})(\widetilde{r}-\widetilde{r}_{h})+\Game (\widetilde{r}-\widetilde{r}_{h})^{2}. \label{49n} \end{equation} Then, after deforming the contour around the pole $\widetilde{r}_{h}$, we obtain \begin{eqnarray} \mathcal{W}_{(+)}^{\mathcal{NH}} &\approx &2\mathcal{E}\int \frac{d\widetilde{r}}{\widetilde{Z}}, \notag \\ &\approx &2\mathcal{E}\int \frac{d\widetilde{r}}{\widetilde{Z}^{\prime }(\widetilde{r}_{h})(\widetilde{r}-\widetilde{r}_{h})}, \notag \\ &\approx &i\pi \mathcal{E}\lambda _{+}\widetilde{r}_{h}.
\label{50n} \end{eqnarray} The above result yields the tunneling probability of the outgoing phonons as follows \begin{equation} P_{+}=\exp (-2Im\mathcal{I})=\exp \left[ -2Im\mathcal{W}_{(+)}^{\mathcal{NH}}\right] =\exp \left( -2\pi \mathcal{E}\lambda _{+}\widetilde{r}_{h}\right) , \label{51nn} \end{equation} which is also equal to the tunneling rate (since $P_{-}=1$): \begin{equation} \Gamma =\frac{P_{+}}{P_{-}}=\exp \left( -2\pi \mathcal{E}\lambda _{+}\widetilde{r}_{h}\right) . \label{52nn} \end{equation} After recalling the Boltzmann formula $\Gamma =\exp (-\mathcal{E}/T)$, we can read off the horizon temperature of the ABH within the PGCs: \begin{equation} T_{PGC}=\frac{1}{2\pi \lambda _{+}\widetilde{r}_{h}}. \label{53nn} \end{equation} The above result is in full agreement with the standard Hawking temperature (10). Meanwhile, from the first law of thermodynamics $dE=T_{H}dS_{BH}$ one can derive the thermodynamic energy as \begin{equation} E=\frac{2\ln \left( \widetilde{r}_{h}\right) }{\lambda _{+}}. \label{54nn} \end{equation} We also want to extend our computations to the tunneling method which considers the self-gravitation and back-reaction effects. To this end, we employ the PWTM \cite{PWT}. In the PGCs, the radial null geodesics of a test particle are defined by \begin{equation} \dot{\widetilde{r}}_{\left( \pm \right) }=\frac{d\widetilde{r}}{d\widetilde{t}}=\pm 1-\sqrt{1-\widetilde{Z}}, \label{55nn} \end{equation} where the positive (negative) sign stands for the outgoing (ingoing) geodesics. Hence, the near-horizon radial outgoing null geodesics can be derived as \begin{eqnarray} \dot{\widetilde{r}}_{\left( +\right) } &\cong &1-\sqrt{1-\widetilde{Z}\left( \widetilde{r}_{h}\right) }+\frac{1}{2}\frac{\widetilde{Z}^{\prime }(\widetilde{r}_{h})}{\sqrt{1-\widetilde{Z}\left( \widetilde{r}_{h}\right) }}(\widetilde{r}-\widetilde{r}_{h})+\Game (\widetilde{r}-\widetilde{r}_{h})^{2}, \notag \\ &\approx &\kappa (\widetilde{r}-\widetilde{r}_{h}), \label{56nn} \end{eqnarray} where $\kappa =\frac{\widetilde{Z}^{\prime }(\widetilde{r}_{h})}{2}$ is the surface gravity \cite{Waldbook}. According to the PWTM, while the particle tunnels through the event horizon from inside ($\widetilde{r}_{i}$) to outside ($\widetilde{r}_{f}$), the BH is supposed to emit a circular shell of energy $\omega$, which is very small compared with the total (fixed) energy $E$, i.e., $\omega \ll E$ \cite{PWT}. This event reduces the energy of the ABH from $E$ to $E-\omega$. Having regard to this self-gravitational effect \cite{KWa}, the imaginary part of the action becomes \cite{SakCTP,Zhan,Bane,SakIJTP} \begin{align} Im\mathcal{I}& =Im\int_{\widetilde{r}_{i}}^{\widetilde{r}_{f}}\int_{E}^{E-\omega }\frac{dH}{\dot{\widetilde{r}}_{\left( +\right) }}d\widetilde{r}, \notag \\ & =-Im\int_{\widetilde{r}_{i}}^{\widetilde{r}_{f}}\int_{0}^{\omega }\frac{d\widetilde{\omega }}{\dot{\widetilde{r}}_{\left( +\right) }}d\widetilde{r}, \label{57nn} \end{align} where the Hamiltonian is $H=E-\widetilde{\omega }$ ($dH=-d\widetilde{\omega }$). Using Eq. (56), one can evaluate the integral (57) by deforming the contour. Thus, one obtains \begin{eqnarray} Im\mathcal{I} &=&\pi \int_{0}^{\omega }\frac{d\tilde{\omega}}{\kappa }=\frac{1}{2}\int_{0}^{\omega }\frac{d\tilde{\omega}}{T_{H}}, \notag \\ &=&-\frac{1}{2}\int_{S(E)}^{S(E-\omega )}dS, \notag \\ &=&-\frac{1}{2}\left[ S(E-\omega )-S(E)\right] , \notag \\ &=&-\frac{1}{2}\Delta S.
\label{58nn} \end{eqnarray} Herewith, the tunneling rate becomes \cite{PWT} \begin{equation} \Gamma \sim e^{-2Im\mathcal{I}}=e^{\Delta S}. \label{59n} \end{equation} If we re-set the universal gravitational constant to $G(=l_{p}^{2})=1/8$ (keeping $c=\hbar =1$), the entropy results in $S=\frac{A_{h}}{4G}=2A_{h}=4\pi \widetilde{r}_{h}$: twice the perimeter length of the horizon (see for example \cite{3bh,Akbar}). Then, ignoring the higher-order GUP effects, Eq. (34) becomes (see also \cite{QGC5}) \begin{equation} S_{GUP}=2A_{h}-\frac{\pi \alpha _{GUP}}{16\sqrt{2}}\ln \left( 2A_{h}\right) +\Game (\alpha _{GUP})^{2}. \label{60nn} \end{equation} With the aid of Eq. (54) one can re-express the event horizon as $\widetilde{r}_{h}=\exp \left( \frac{E}{2}\lambda _{+}\right)$. Whence, we can obtain the change in the $S_{GUP}$ as \begin{eqnarray} \Delta S_{GUP} &=&S_{GUP}(E-\omega )-S_{GUP}(E), \notag \\ &=&4\pi \left[ \exp \left( \frac{E-\omega }{2}\lambda _{+}\right) -\exp \left( \frac{E}{2}\lambda _{+}\right) \right] +\frac{\pi }{32\sqrt{2}}\alpha _{GUP}\lambda _{+}\omega . \label{61} \end{eqnarray} When we expand Eq. (61) in a Taylor series with respect to $\omega$ and keep the terms up to leading order in $\omega$, we get \begin{equation} \Delta S_{GUP}\cong -\left( \frac{1}{T_{H}}-\frac{\pi }{32\sqrt{2}}\alpha _{GUP}\lambda _{+}\right) \omega +O(\omega ^{2}). \label{62} \end{equation} Using Eq. (59), we can define the QGC tunneling rate as \begin{equation} \Gamma ^{QGC}\sim e^{\Delta S_{GUP}}=e^{-\frac{\omega }{T_{H}^{QGC}}}, \label{63} \end{equation} which yields the QGC Hawking temperature: \begin{eqnarray} T_{H}^{QGC} &=&\left( \frac{1}{T_{H}}-\frac{\pi }{32\sqrt{2}}\alpha _{GUP}\lambda _{+}\right) ^{-1}, \notag \\ &=&T_{H}\left( 1-\frac{\pi }{32\sqrt{2}}\alpha _{GUP}\lambda _{+}T_{H}\right) ^{-1}, \notag \\ &=&T_{H}\left( 1-\frac{\alpha _{GUP}}{64\sqrt{2}\widetilde{r}_{h}}\right) ^{-1}. \label{64n} \end{eqnarray} One can compare Eqs. (64) and (33) (with $\Delta x=\frac{A_{h}}{\pi G}=\frac{8A_{h}}{\pi }=16\widetilde{r}_{h}$, $\alpha _{GUP}\equiv \alpha _{GUP}l_{p}=\frac{\alpha _{GUP}}{2\sqrt{2}}$, and taking cognizance of the leading order of $\alpha _{GUP}$) to verify that the two temperatures obtained are exactly the same. Furthermore, it is clear from Eq. (64) that ignoring the back-reaction effects ($\alpha _{GUP}=0$) regains the standard Hawking temperature (10). \section{Conclusion} The HR of the rotating ABH in (2+1)-dimensional spacetime was thoroughly investigated in \cite{Chun}. However, it appears that the GUP effects on that HR have not been thoroughly studied in the literature. In this paper, we have filled this gap by employing the KGE with GUP (11) and the GUP entropy (34). For simplicity, we have considered the rotating ABH in the dragging coordinate system. Next, we have demonstrated that the KGE with GUP for a massive scalar field propagating in the background of an ABH completely separates with the HJ ansatz. Then, focusing on the quantum tunneling formalism, we have managed to find the GUP-modified Hawking temperature (25) of the ABH. Utilizing the GUP entropy derived in Sec. 4, we have also obtained the QGC Hawking temperature (64). Both temperatures reduce to the standard Hawking temperature when the GUP effect is switched off ($\alpha _{GUP}=0$).
\section{Introduction} \indent Several years ago, much attention was paid to the study of $q$-deformation of the Virasoro algebra \cite{CZ}-\cite{disc}. The $q$-deformed Virasoro algebra was first introduced by Curtright and Zachos (CZ) as a deformation for both commutators and structure constants \cite{CZ}. Some other versions \cite{CILPP,DS} which can be transformed from the CZ deformation, conformal field theoretical analogues \cite{AS}, matrix representations \cite{NQ}, and some approaches to discretized systems \cite{disc} have also been discussed. From the quantum group theoretical viewpoint \cite{DJ}, these developments of the CZ deformation are nothing more than analogies because a Hopf algebra structure has not been established for them. Another type of deformed Virasoro algebra has been investigated lately \cite{saito1}: \begin{equation} [L_n^{(i)},L_m^{(j)}]=\sum_{\epsilon=\pm 1}C^{n \hskip 8pt i} _{m \ \epsilon j}L^{(i+\epsilon j)}_{n+m}, \label{EBg} \end{equation} where the structure constants are \begin{equation} C^{n \ i}_{m \ j} ={[{nj-mi \over 2}]_q[(i+j)]_q \over [i]_q[j]_q} \label{EEk} \end{equation} with \begin{equation} [x]_q={q^x-q^{-x} \over q-q^{-1}}. \end{equation} This algebra is similar to the trigonometric algebra presented in \cite{sine}, but is different from it, as is pointed out in reference \cite{CP}. In this paper, we present some formulae based on this algebra from various points of view. We refer to this algebra as a (fermionic) $q$-Virasoro algebra. It possesses a Hopf algebra structure, which is a cocommutative one. However, its operator representation satisfies the quantum algebra ${\cal U}_q (sl(2))$ incorporating the Virasoro zero mode operator. The Hopf algebra structure was found by introducing an additional set of generator indices \cite{saito1} into the differential operator representation of the CZ deformation. In this sense, the differential operator representation plays an important role in deriving this structure. After this discovery, the oscillator representation \cite{CP}, the operator product representations \cite{hsato,OS}, and the relevance to a discretized Liouville model \cite{BC} were intensively developed. Further extensions of the $q$-Virasoro algebra have been considered: to the supersymmetric case \cite{CP} and to more general structure constants \cite{KS}. In spite of these successful developments, various unsolved questions remain about this algebra: what is the origin of this $q$-deformation, and what physical situation embodies this algebra? As for its central extension, the general solution of the Jacobi identity constraint equation for the central extension has yet to be found. We still do not know a unique supersymmetric algebra which includes ghost and superghost sectors. Moreover, is there any further nontrivial generalization which might lead us to a quantum Hopf algebra structure? These problems are important in dealing with physical situations utilizing the $q$-Virasoro algebra. In this paper, we would like to focus our attention on the transformation properties of the $q$-Virasoro algebra from the point of view of the classical Noether current. In sect.2, starting from an analogy with the classical Noether current, we define a $q$-analogue of the canonical energy-momentum (EM) tensor. (Throughout this paper, we often refer to an analogous object deformed by the parameter $q$ as a $q$-object for short, e.g., $q$-EM tensor).
In sect.3, applying the result to a two-dimensional chiral fermion theory, we define the Fourier mode operators of the $q$-EM tensor. It becomes clear that the mode operators coincide with the $q$-Virasoro generators. In this case, the $q$-EM tensor can be shown to be related to an analogue of conformal transformations. Next, in sect.4, from the point of view of a transformation law for a field, we discuss the relation between magnetic translations and $q$-conformal transformations in a nonrelativistic classical theory under a constant magnetic field. In sects.5 and 6, some brief remarks on the differential operator associated with the $q$-conformal transformations are added. In the appendix, we describe a method for obtaining the central extension of the $q$-Virasoro algebra through the Jacobi identities. \setcounter{equation}{0} \section{$q$-Noether current} \indent In this section, we discuss an analogue of the classical Noether current, constructed in conformity with the standard Noether current, in order to give a hint about the origin of the particular form taken by the $q$-Virasoro generators in \cite{BC,hsato}. Although we often refer to the commutator algebra \eq{EBg} (including central extensions), no confusion arises if we keep in mind the correspondence between the Poisson bracket and the commutator. In field theories (for any field $\phi_i$), a conserved current comes from the invariance of the action \begin{equation} \delta S=\int\partial_\mu({{\delta{\cal L}}\over{\delta(\partial_\mu\phi_i)}}\delta\phi_i -{\cal L} \epsilon^{\mu} )d^Dx, \end{equation} under an infinitesimal transformation \begin{equation} x'^{\mu} = x^{\mu} - \epsilon^{\mu}(x), \label{eq202} \end{equation} where $\delta\phi_i$ is the Lie derivative. If we require the invariance of $\phi_i$ under the transformation, i.e. $\phi'_i(x')=\phi_i(x)$, the Lie derivative can be written in the following form \begin{equation} \delta\phi_i(x)=\phi'_i(x)-\phi_i(x)=\epsilon^{\mu}\partial_\mu\phi_i(x) + O(\epsilon^2), \end{equation} and we get the conserved current \begin{equation} J^{\mu}=({\delta{\cal L} \over \delta(\partial_\mu\phi_i)}\partial_\nu\phi_i -\delta^{\mu}_{\nu}{\cal L} )\epsilon^{\nu} =T^{\mu}_{\nu}\epsilon^{\nu}. \label{eq204} \end{equation} The canonical EM tensor $T^{\mu}_{\nu}$ reflects the translational invariance of the system. In particular, the EM tensor of conformal field theories corresponds to the generators of the Virasoro algebra and satisfies the conservation law \begin{equation} \partial_{\bar z} T(z)=\partial_z {\bar T}(\bar z)=0, \hskip 30pt T_{z\bar z}=T_{\bar z z}=0, \label{eq222} \end{equation} where $T(z)=T_{zz}$ and ${\bar T}(\bar z)=T_{\bar z\zb}$ \cite{cft}. Namely we obtain the generator \begin{equation} L_n={1\over 2\pi i}\oint dz z^{n+1}T(z), \label{eq224} \end{equation} which satisfies \begin{equation} [L_n,L_m]=(n-m)L_{n+m}+{c\over 12}(n^3-n)\delta_{n+m,0}. \label{eq223} \end{equation} When constructing a $q$-analogue of some quantity, the invariance under the inversion $q\rightarrow q^{-1}$ may be useful (hereafter in this section, we denote $q$ by $Q$). We then insert $Q$ into the Lagrangian so that the action is invariant under the replacement $Q\rightarrow Q^{-1}$. Furthermore we assume that $Q$ is not related to the dynamics, i.e., the action should not depend on $Q$: \begin{equation} S=S(Q)=S(Q^{-1}), \hskip 30pt {d S\over d Q}=0.
\label{add1} \end{equation} One of the possible ways to introduce the parameter $Q$ is the following change of variables in the integrand, \begin{equation} S(Q)=Q^D\int{\cal L} (\phi_i,\partial\phi_i;xQ)d^Dx, \end{equation} and \begin{equation} S(Q^{-1})=Q^{-D}\int{\cal L} (\phi_i,\partial\phi_i;xQ^{-1})d^Dx. \label{add2} \end{equation} The above requirements \eq{add1}-\eq{add2} may be reinterpreted as follows. Hereafter we suppress the index $i$ of the fields. The invariance of $S$ under $Q\rightarrow Q^{-1}$ can be understood as the invariance under the dilatation $xQ\rightarrow xQ^{-1}$. Namely, this dilatation determines the $\epsilon^{\mu}(x)$ defined in \eq{eq202} under the transformation $x\rightarrow xQ^{-2}$: \begin{equation} \epsilon^{\mu}(x) = x^{\mu}(1-Q^{-2}). \label{eq208} \end{equation} The invariance of $\phi$ under this transformation reads $\phi'(x)=Q^{2\Delta}\phi(xQ^2)$, where $\Delta$ means the canonical dimension of the field. If $\phi(x)$ is a regular function of $x$, $\phi(xQ)$ can be expressed as an infinite series in the derivatives of $\phi(x)$: \begin{equation} \phi(xQ)=Q^{x\partial}\phi(x). \end{equation} The Lie derivative of $\phi(x)$ is thus exactly written in the following form \begin{equation} \delta\phi(x) =Q^2 D^{(\Delta)}_{\mu}\phi(xQ)\epsilon^{\mu}(x), \label{eq209} \end{equation} where $D^{(\Delta)}_{\mu}$ is the $Q$-derivative defined by \begin{equation} (D^{(\Delta)}f)(ax)={1\over ax}{Q^{\Delta+x\partial}-Q^{-\Delta-x\partial} \over Q-Q^{-1}}f(ax). \end{equation} If $Q$ deviates infinitesimally from unity, \eq{eq208} becomes an infinitesimal quantity and we get the following analogue of the conserved current, \begin{equation} J_Q^{\mu}=({\delta{\cal L} \over \delta(\partial_\mu\phi)}Q^2 D^{(\Delta)}_{\nu}\phi(xQ) -\delta^{\mu}_{\nu}{\cal L} )\epsilon^{\nu}. \label{eq211} \end{equation} It should be noted here that when $q=1$, \eq{eq211} coincides with the dilatation current \[ j^{\mu}={\delta{\cal L} \over \delta(\partial_\mu\phi)}(\Delta+x^\nu\partial_\nu)\phi -x^{\mu}{\cal L} \] and can be written in the form $x^{\nu}T^{\mu}_{\nu}+\Delta\,\phi B^{\mu}\phi$, where $B^\mu$ stands for $\partial^\mu$ (or $\gamma^\mu$) for a boson (fermion) field. Thus, the canonical EM tensor can be obtained by putting $\Delta=0$ and dropping $x^{\nu}$ out of $j^{\mu}$. In the same way, we define an analogous ``EM tensor'' from \eq{eq211} by putting $\Delta=0$ and dropping $\epsilon^{\nu}$, \begin{equation} J^{\mu}_{\nu}(x)=Q^2{\delta{\cal L} \over \delta(\partial_\mu\phi(x))} D^{(0)}_{\nu}\phi(xQ)-\delta^{\mu}_{\nu}{\cal L}. \label{eq216} \end{equation} This may be interpreted as a $q$-analogue of the EM tensor because it becomes the canonical EM tensor in the limit $Q\rightarrow 1$. In general, \eq{eq211} is no longer a conserved current when $Q$ deviates finitely from unity. On the other hand, in conformal field theories this current is trivially conserved because of the chiral decomposition of the theories. In spite of the triviality of conservation, the $q$-EM tensor plays an important role in generating deformed Virasoro algebras.
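As a small symbolic check of the $Q$-derivative defined above (an added illustration, not part of the original computation), one can verify its action on a monomial, $(D^{(\Delta)}x^m)=[\Delta+m]_Q\,x^{m-1}$, and its $Q\rightarrow1$ limit:
\begin{verbatim}
import sympy as sp

Q = sp.symbols('Q', positive=True)
Delta, m = sp.symbols('Delta m', positive=True)

# On f = x**m one has Q**(x d/dx) x**m = Q**m * x**m, so the Q-derivative
# produces the symmetric Q-number [Delta + m]_Q times x**(m-1):
qnum = (Q**(Delta + m) - Q**(-(Delta + m))) / (Q - 1/Q)

# Q -> 1 reproduces the classical dilatation eigenvalue (Delta + m)
print(sp.limit(qnum, Q, 1))   # -> Delta + m
\end{verbatim}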
For example, in the case of a massless fermion with the Lagrangian density \begin{equation} {\cal L} = {1\over 2}\psi\partial_{\bar z}\psi +{1\over 2}{\bar\psi}\partial_z{\bar\psi}, \end{equation} the components of $J_{\mu\nu}$ are as follows: \begin{eqnarray} &\hskip 20pt J_{z\bar z}=J_{\bar z z}=0, \\ &J(z)=J_{zz}=-{1\over 2}Q^2\psi(z)D_z\psi(zQ), \label{eq225}\\ &{\bar J}(\bar z)=J_{\bar z\zb}=-{1\over 2}Q^2{\bar\psi} (\bar z)D_{\bar z}{\bar\psi}(\bar z Q). \end{eqnarray} The first of these equations shows that $J_{\mu\nu}$ is traceless, $J^{\mu}_{\mu}=0$. The others indicate that $J_{zz}$ ($J_{\bar z\zb}$) is a function depending only on $z$ ($\bar z$), and the conservation law of $J_{\mu\nu}$ is \begin{equation} \partial_{\bar z}J(z)=\partial_z {\bar J}(\bar z)=0. \end{equation} $J(z)$ becomes $T(z)$ defined in \eq{eq222} in the limit $q\rightarrow1$, and the Fourier mode of $J(z)$ satisfies the $q$-Virasoro algebra \eq{EBg} as well as its classical Poisson bracket algebra. This will become clear at the beginning of the next section. \setcounter{equation}{0} \section{2-dimensional fermion current} \indent Now let us define the Fourier mode expansion of the $q$-EM tensor \eq{eq225} \begin{equation} L_n^{(k)}={1\over 2\pi i}\oint dz z^{n+1}J^{(k)}(z) \label{eq301} \end{equation} in which \begin{equation} J^{(k)}(z)={2Q^{-2} \over Q+Q^{-1}}J(zQ^{-1}) \hskip25pt{\rm with}\hskip15pt Q=q^{k/2} \hskip 15pt (k\in{\bf Z}). \label{eq3491} \end{equation} Taking into account normal ordering in the quantum situation, the operator \eq{eq301} satisfies the following centrally extended algebra \cite{BC,hsato} \begin{equation} [L_n^{(i)},L_m^{(j)}]=\sum_{\epsilon=\pm 1}C^{n \hskip 8pt i} _{m \ \epsilon j}L^{(i+\epsilon j)}_{n+m} +{1\over2}C_{ij}(n)\delta_{n+m},\label{EBgg} \end{equation} with \begin{equation} C_{ij}(n)={1\over [i]_q[j]_q} \sum_{k=1}^n[{(n+1-2k)i\over 2}]_q[{(n+1-2k)j\over 2}]_q\,. \label{1111} \end{equation} It is obvious that we can easily reduce the above statement to the classical situation by omitting the central term at any time. Next, we show that our $q$-EM tensor $J(z)$ is related to an analogue of conformal transformations. The explicit expression for \eq{eq301} is given in \cite{hsato}: \begin{equation} L^{(k)}_n={-1\over2\pi i}\oint dzz^n :\psi(zq^{-k/2}) {q^{{k\over2}z\partial}-q^{-{k\over2}z\partial}\over q^k-q^{-k}} \psi(z):\,\,. \label{EBb}\end{equation} This can be rewritten as \begin{equation} L_n^{(k)}={-1\over 2\pi i}\oint dzz^n {q^{k(n+1)/2}\over q^k-q^{-k}} :\psi(z)q^{kz\partial}\psi(z):. \label{qv2} \end{equation} The factor $q^{k(n+1)/2}$ comes from the scaling of $z$. Furthermore, $L^{(k)}$ has the following property \begin{equation} L_n^{(k)}=L_n^{(-k)}\enskip, \end{equation} and thus \eq{qv2} can be symmetrized in the upper index $k$: \begin{equation} L_n^{(k)}={1\over 2\pi i}\oint dz {1\over2}:\psi(z)z^n {[kz\partial + k(n+1)/2]_q \over [k]_q}\psi(z):. \label{qv12} \end{equation} Going back to the Noether current argument, the conserved charge for this chiral fermion theory should be \begin{equation} Q_{q-Vir}={1\over 2\pi i}\oint dz :{{\delta{\cal L}}\over {\delta({\bar\partial}\psi)}}\delta\psi(z):, \label{qv10} \end{equation} where $\delta\psi$ means the Lie derivative under a certain particular transformation, which might be called a $q$-conformal transformation.
Comparing the RHSs of \eq{qv12} and \eq{qv10}, we obtain the variation \begin{equation} \delta\psi(z)=z^n{[kz\partial + k(n+1)/2]_q \over [k]_q}\psi(z).\label{qv5} \end{equation} This is nothing but a $q$-analogue of the conformal transformation $\delta\psi=z^n(z\partial+{1\over2}(n+1))\psi$. \setcounter{equation}{0} \section{3-dimensional fermion current} \indent A situation similar to that of the previous section exists in the following nonrelativistic fermion field theory in two-dimensional space under a constant magnetic field \begin{equation} {\cal L}_3=i\Psi^{\dagger}{\dot\Psi}-{1\over2}(D\Psi)^{\dagger} (D\Psi) \enskip\label{EEX} \end{equation} where $D_i=\partial_i-iA_i$. The equation of motion leads to the Schr{\"o}dinger equation for the Landau motion. In this system, it is known that the usual translational invariance is modified into the so-called magnetic translational one \cite{mag} defined by \begin{equation} \Psi'(x,y)=exp(\epsilon b-\bar \epsilon b^\dagger)\Psi(x,y) =T_{(\epsilon,\bar \epsilon)}\Psi(x,y) \enskip, \label{magtrans} \end{equation} and thus \begin{equation} \delta \Psi = ( T_{(\epsilon,\bar \epsilon)} -1)\Psi(x,y) \label{mmm} \end{equation} where $b$ and $b^\dagger$ are the harmonic oscillators which commute with the Hamiltonian. In the gauge $A=(-y/2,x/2)$, \begin{equation} b={1\over2}{\bar w}+\partial_w \enskip,\hskip 25pt b^{\dagger}={1\over2}w-\bar \partial_w \enskip,\hskip 25pt w=x+iy. \end{equation} Instead of \eq{mmm}, let us define a new transformation which is composed of the difference between two magnetic translations \begin{equation} \delta\Psi={\hat {\cal L}}_n^{(k)}(w,{\bar w})\Psi ={{T_{(k\epsilon,n\bar \epsilon)}-T_{(-k\epsilon,n\bar \epsilon)}} \over{q^k-q^{-k}}}\Psi. \label{qv11} \end{equation} The classical conserved current and charges for this transformation are given by \begin{equation} J_\mu={{\delta{\cal L}_3}\over {\delta(\partial_\mu\Psi)}}\delta\Psi-\delta^{\mu}_0{\cal L}_3, \end{equation} \begin{equation} {\cal L}_n^{(k)}=\int d^2w\Psi^{\dagger}(w,t) {T_{(k\epsilon,n\bar \epsilon)}-T_{(-k\epsilon,n\bar \epsilon)}\over q^k-q^{-k}}\Psi(w,t)\,. \label{EEi} \end{equation} Eq.\eq{EEi} satisfies the $q$-Virasoro algebra \eq{EBg} \cite{sato}. In order to see the similarity between \eq{qv12} and \eq{EEi}, let us compare the transformation law \eq{qv11} with the two-dimensional relativistic case \eq{qv5} using dimensional reduction. We follow the method of ref.\cite{GJ} to extract holomorphic parts from ${\hat {\cal L}}_n^{(k)}(w,{\bar w})$. After moving all ${\bar w}$ parts to the left of the $w$ parts, we replace ${\bar w}\rightarrow 2\partial$ and $\bar \partial\ra0$. For example, \[ T_{(k\epsilon,n\bar \epsilon)}= exp({k\epsilon\over2}{\bar w})exp(n\bar \epsilon\bar \partial)exp(-{n\bar \epsilon\over2}w) exp(k\epsilon\partial)\hskip 10pt\rightarrow \hskip 10pt exp(-n\bar \epsilon w/2)exp(2k\epsilon\partial-\epsilon\bar \epsilon kn/2). \] A further transformation is needed to see the coincidence with the form \eq{qv5}. Taking into account the coordinate transformation from a cylinder to the $z$-plane $ w=-{2\over\bar \epsilon}\ln z $ and the parametrization $ q=e^{-\epsilon\bar \epsilon}$, we get a dimensionally reduced operator for ${\hat {\cal L}}_n^{(k)}(w,{\bar w})$ and so \begin{equation} \delta\Psi \rightarrow z^n{[kz\partial_z+nk/2]_q\over[k]_q}\Psi. \label{5533} \end{equation} This expression is very similar to \eq{qv5}.
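On monomials the reduced operator in \eq{5533} acts diagonally up to a shift: on $\Psi=z^{m}$ it produces $z^{n+m}$ times the coefficient $[k(m+n/2)]_q/[k]_q$, which tends to the undeformed value $m+n/2$ as $q\rightarrow 1$. A small numerical sketch of this (plain Python, ad hoc values):
\begin{verbatim}
# Sketch: coefficient produced by z^n [k z d/dz + nk/2]_q / [k]_q on z**m.
def qnum(a, q):                       # q-number [a]_q
    return (q**a - q**(-a)) / (q - 1.0 / q)

def coeff(m, n, k, q):
    return qnum(k * (m + n / 2.0), q) / qnum(k, q)

print(coeff(2, 3, 2, 1.3))            # deformed coefficient
print(coeff(2, 3, 2, 1.0 + 1e-9))     # -> m + n/2 = 3.5 as q -> 1
\end{verbatim}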
That the two transformation laws nearly coincide is a natural result, judging from the fact that the theory \eq{EEX} can be effectively described by a two-dimensional massless fermion theory \cite{drop}. In closing the section, we make a few remarks. First, the relations among the ${\hat{\cal L}}_0^{(j)}$ can be easily found in this representation. All the ${\hat{\cal L}}_0^{(j)}$s are written as \begin{equation} {\hat{\cal L}}_0^{(j)}={k^j - k^{-j} \over q^j-q^{-j}}. \end{equation} Using this relation, we obtain \begin{equation} {\hat{\cal L}}_0^{(2)}={1\over[2]_q}{\hat{\cal L}}_0^{(1)} \sqrt{4+(q-q^{-1})^2\bigl({\hat{\cal L}}_0^{(1)}\bigr)^2 } \end{equation} \begin{equation} {\hat{\cal L}}_0^{(3)}={1\over[3]_q}{\hat{\cal L}}_0^{(1)} \{3+(q-q^{-1})^2\bigl({\hat{\cal L}}_0^{(1)}\bigr)^2 \}, \end{equation} and so on. Second, one may consider the dimensional reduction of other differential operator algebras for the operators $T_{(k,n)}$ or $V_n^k$ \begin{equation} T_{(k,n)} \rightarrow M_n^k \equiv z^n q^{k(z\partial +n/2)} \label{qv901} \end{equation} \[ V_n^k=2(T_{(k,n)}+T_{(-k,n)}){T_{(1,0)}-T_{(-1,0)} \over q-q^{-1}} \] \begin{equation} \hskip 10pt \rightarrow 2( M_n^k + M_n^{-k} ) { M_0^1-M_0^{-1} \over q-q^{-1}} \label{qv902} \end{equation} where $T_{(k,l)}$ means $T_{(k\epsilon,l\bar \epsilon)}$. The dimensionally reduced differential operators satisfy the same algebras as those before reduction \begin{equation} [T_{(k,n)},T_{(l,m)}]= (q^{mk-nl \over2}-q^{nl-mk \over2})T_{(k+l,n+m)} \label{qv903} \end{equation} \begin{equation} [V_m^j,V_n^k]= \sum_{\epsilon,\eta=\pm1}C^{m \hskip 8pt j}_{n \ \epsilon k}(\eta) V_{m+n}^{j+\epsilon k+\eta}, \hskip 20pt C^{m \ j}_{n \ k}(r)=r[{n(j+r) -m(k+r) \over 2}]_q. \label{qv904} \end{equation} The former is called the Moyal-sine algebra \cite{sine} and the latter the bosonic $q$-Virasoro algebra \cite{CP}. However, this situation changes when they are put in a fermion bilinear form. Once they are put into the fermion bilinear form as in \eq{qv2}, any operator $O(b,b^\dagger)$ is symmetrized as $O(b,b^\dagger)-O(-b,b^\dagger)$ because of the Grassmann property of the fermion field. For example, the insertion of the above reduced operator \eq{qv901} into the bilinear form is equivalent to \eq{qv2} up to some normalizations, and \eq{qv2} satisfies not the original Moyal-sine algebra \eq{qv903} but the $q$-Virasoro algebra \eq{EBgg}. Similarly, any other algebraic relation composed of $T_{(k,n)}$ operators becomes different from the original algebra after being inserted in the bilinear integral. Only the (fermionic) $q$-Virasoro algebra \eq{EBgg} is preserved in this reduction procedure in the bilinear form. In this sense, the appearance of the fermionic $q$-Virasoro algebra in the 3-d system is nontrivial. \setcounter{equation}{0} \section{Differential operator algebra} \indent Let us consider further the differential operator in \eq{5533} (although it is slightly different from the one in \eq{qv5}, the results after eq.\eq{eq11} do not change): \begin{equation} \L_n^{(k)}=-{1\over [k]}z^n[k(z\partial+{n\over 2})]\enskip.\label{eq3} \end{equation} This is known as a realization of the centerless $q$-Virasoro algebra \eq{EBg}\cite{saito1}. It may be convenient to rewrite the operator \eq{eq3} as \begin{equation} \L_n^{(k)}=-{1\over [k]} z^n \sum_{j=1}^k q^{(k-2j+1)(z\partial+{n\over 2})}[z\partial+{n\over 2}].
\label{eq4} \end{equation} Operating with it on the basis \begin{equation} {\hat \phi}(z)=z^{-h},\hskip 30pt (h\geq0) \label{eq5} \end{equation} it is obvious that \begin{equation} \L_n^{(k)} {\hat \phi}(z)=-z^n {[k({n\over2}-h)]\over[k]} {\hat \phi}(z). \label{eq6} \end{equation} Using this relation, we obtain the following formulae \begin{equation} \L_0^{(k)}{\hat \phi}(z)={ [kh] \over [k]}{\hat \phi}(z) \label{qv15} \end{equation} and \begin{equation} \L_n^{(k)}{\hat \phi}(z)= {[k({n\over2}-h)]\over[k][{n\over2}-h]}\L_n^{(1)}{\hat \phi}(z),\label{eq8} \end{equation} \begin{equation} [\L_n^{(i)},\L_m^{(j)}]{\hat \phi}(z)=\sum_{\epsilon=\pm1} {[{{nj-\epsilon mi}\over2}][(i+\epsilon j)(h-{{n+m}\over2})] \over [i][j][h-{n+m \over2}] } \L_{n+m}^{(1)}{\hat \phi}(z). \label{eq9} \end{equation} In order to exhibit the analogy with conformal primary state vectors \cite{cft}, let us introduce the following rule for the action of an arbitrary polynomial $D$ in the $\L_n^{(k)}$ on the `state' \begin{equation} D\ket{{\hat \phi}}\equiv\lim_{z\ra0}{\hat \phi}(z)^{-1}D{\hat \phi}(z). \label{qv16} \end{equation} As a result of \eq{eq6} and \eq{qv15}, we obtain \begin{eqnarray} &\L_0^{(k)}\ket{{\hat \phi}}={ [kh] \over [k]}\ket{{\hat \phi}} \label{eq11}\\ &\L_n^{(k)}\ket{{\hat \phi}}=0 \hskip 30pt(n>0), \label{eq12} \end{eqnarray} which are similar to the definition of the Virasoro primary vectors \cite{cft}. Some other formulae for the above `primary vectors' can be derived using \eq{EBg}, \eq{eq8} and \eq{eq9}: \noindent (i) \begin{equation} \L_n^{(k)}\ket{{\hat \phi}}={[k({n\over2}-h)] \over [k][{n\over2}-h]}\L_n^{(1)}\ket{{\hat \phi}}, \label{qv6} \end{equation} (ii) $n>0$ and $n+m\not=0$ \begin{equation} \L_n^{(i)}\L_m^{(j)}\ket{{\hat \phi}}=\sum_{\epsilon=\pm1} {[{{nj-\epsilon mi}\over2}][(i+\epsilon j)(h-{{n+m}\over2})] \over [i][j][h-{n+m\over2}] } \L_{n+m}^{(1)}\ket{{\hat \phi}}, \label{eq14a} \end{equation} (iii) $n>0$, $m,l<0$, $n+m>0$ and $n+l>0$ \begin{equation} \L_n^{(i)}\L_m^{(j)}\L_l^{(k)}\ket{{\hat \phi}} \\ =\sum_{\epsilon,\eta=\pm1} { [{{nj-\epsilon mi}\over2}] [{{(n+m)k-\eta(i+\epsilon j)k}\over2}] [(i+\epsilon j+\eta k)(h-{{n+m+l}\over2})] \over [i][j][k][h-{n+m+l\over2}] } \L_{n+m+l}^{(1)}\ket{{\hat \phi}}\enskip . \label{eq14b} \end{equation} Although we do not write down further formulae for higher orders of $L_n^{(k)}$ \begin{equation} \ket{{\matrix{k_1&k_2&\cdots&k_m\cr n_1&n_2&\cdots&n_m}};{\hat\phi}} =\prod_{j=1}^m \L_{-n_j}^{ (k_j)}\ket{\hat\phi}, \label{3976} \end{equation} it is clear that they can also be obtained straightforwardly. For example, the eigenvalue of \eq{3976} for $L_0^{(k)}$ is given by \begin{equation} {1\over[k]}[k(h+\sum_{j=1}^m n_j)]. \label{eq16} \end{equation} \setcounter{equation}{0} \section{Primary fields} \indent We rederive the formulae \eq{eq11}-\eq{eq16} presented in the previous section from the point of view of a field representation. Similarly to the Virasoro primary vectors, let us define the following primary vector \begin{equation} \ket{h}=\lim_{z\rightarrow 0}\Phi(z)\ket{0} \end{equation} in which we introduce the vacuum vector defined by \begin{equation} L_n^{(k)}\ket{0}=0 \hskip 20pt (n\geq -1).
\end{equation} We assume that our primary field $\Phi$ should satisfy the following commutator \begin{equation} [L_n^{(k)},\Phi(z)]= {1\over [k]}z^n[k(z\partial+{n\over 2}+h)]\Phi(z) + A(n,h,k)z^n\Phi(z), \label{eq19} \end{equation} where $A(n,h,k)$ is an operator which satisfies the following conditions \begin{eqnarray} & \lim_{q\rightarrow 1} A(n,h,k)=n(h-{1\over2}), \\ & A(n=0,h,k)=A(n,h={1\over2},k)=0. \end{eqnarray} It is obvious that the RHS of \eq{eq19} becomes the usual commutator for the Virasoro primary field $z^n ( z\partial+h(n+1) )\Phi(z)$ in the limit $q\ra1$. If $h=1/2$, \eq{eq19} coincides with the case of a massless free fermion \cite{hsato} \begin{equation} [L_n^{(k)},\psi(z)]={1\over [k]}z^n[kz\partial+{k\over 2}(n+1)]\psi(z). \end{equation} For other values of $h$, however, we have at present not found any realization of the same $q$-Virasoro generators, even in the bosonic field case. Under these assumptions, the second term on the RHS in \eq{eq19} is not necessary for the derivation of the formulae \eq{eq11}-\eq{eq16}, as will be shown below. Now let us derive the formulae \eq{eq11}-\eq{eq16} from \eq{eq19}. First, the primarity conditions \eq{eq11} and \eq{eq12} can be verified from \eq{eq19} as \begin{eqnarray} &L_0^{(k)}\ket{h}=\lim_{z\ra0}[L_0^{(k)},\Phi(z)] ={ [kh] \over [k]}\ket{h} \\ &L_n^{(k)}\ket{h}=\lim_{z\ra0}[L_n^{(k)},\Phi(z)] =0 \hskip 30pt(n>0). \label{eq23} \end{eqnarray} Second, the formula \eq{qv6} is obtained as follows. Rewriting \eq{eq19} as \begin{eqnarray} [L_n^{(k)},\Phi(z)]=&{1\over [k]}\sum_{j=1}^k q^{(k-2j+1)(z\partial+h-{n\over 2})} \left( [L_n^{(1)},\Phi(z)] -A(n,h,1)z^n\Phi(z)\right) \nonumber \\ & + z^nA(n,h,k)\Phi(z), \nonumber \end{eqnarray} and considering \[ \lim_{z\ra0}[L_n^{(k)},\Phi(z)]\ket{0}={1\over [k]}\sum_{j=1}^k q^{(k-2j+1)(h-{n\over 2})} \lim_{z\ra0}[L_n^{(1)},\Phi(z)]\ket{0}, \] we hence obtain \begin{equation} L_n^{(k)}\ket{h} ={[k({n\over2}-h)]\over[k][{n\over2}-h]}L_n^{(1)}\ket{h}. \label{eq25} \end{equation} Finally, the other formulae, eqs.\eq{eq14a} and \eq{eq14b}, follow from \eq{EBg}, \eq{eq23} and \eq{eq25} \begin{equation} L_n^{(i)}L_m^{(j)}\ket{h}=\sum_{\epsilon=\pm1} {[{{nj-\epsilon mi}\over2}][(i+\epsilon j)(h-{{n+m}\over2})] \over [i][j][h-{n+m\over2}] } L_{n+m}^{(1)}\ket{h} \hskip 30pt (n>0, n+m\not=0), \label{eq26} \end{equation} and so on. As a simple application of the formula \eq{eq25}, we write a similar formula for the following commutation relation \cite{hsato} \begin{equation} [L_n^{(j)},T^{(k)}(z)]=z^n\sum_{\epsilon=\pm1}{[k+\epsilon j]\over[j][k]} [{j\over2}(z\partial+2)+{n\over2}(j+\epsilon k)]T^{(j+\epsilon k)}(z) + C_{jk}(n)z^{n-2} \label{eq27} \end{equation} where \begin{equation} T^{(k)}(z)=\sum_{n=-\infty}^{\infty}{L_n^{(k)}\over z^{n+2}}. \end{equation} Eq.\eq{eq27} can be rewritten on the primary state using \eq{eq25} \[ [L_n^{(j)},T^{(k)}(z)] \ket{h} =z^n\sum_{\epsilon=\pm1} { [{j\over2}(z\partial+2)+{n\over2}(j+\epsilon k)] \over [j] } { [(k+\epsilon j)(h+1+{1\over2}z\partial)] \over [k(h+1+{1\over2}z\partial)] } T^{(k)}(z)\ket{h} \] \begin{equation} + C_{jk}(n)z^{n-2}\ket{h}. \label{qv100} \end{equation} Integrating this formula, we can verify the algebra \eq{EBgg} on the state \[ \hskip -80pt [L_n^{(i)},L_m^{(j)}]\ket{h} ={1\over2\pi i} \oint dz z^{m+1}[L_n^{(i)},T^{(j)}(z)]\ket{h} \] \[ \hskip 50pt=\sum_{\epsilon=\pm 1} {[{\epsilon nj-mi \over 2}][i+\epsilon j] \over [i][\epsilon j]} L^{(i+\epsilon j)}_{n+m}\ket{h}+C_{ij}(n)\ket{h}.
\] All the formulae in this section have been explicitly verified in the case of the massless fermion with $h=1/2$. \setcounter{equation}{0} \section{Conclusions} \indent The $q$-Virasoro algebra may be understood as a natural extension of the quantum algebra ${\cal U}_q (sl(2))$ \cite{add1} in spite of possessing an ordinary Hopf algebra structure, and thus it differs from the $W_{1+\infty}$ algebra \cite{add2} on this point. We have discussed various features of it in fermion systems and in the differential operator realization, starting from deforming the canonical EM tensor. We have shown that the $q$-EM tensor defined in sect.2 generates the $q$-Virasoro algebra in two-dimensional chiral fermion theory. However, we would like to mention here that there remain a few points which should be investigated as further problems. First, the $q$-EM tensor for higher dimensions is not conserved in the sense either of the ordinary divergence equation or of a $q$-derivative equation. Second, we have not found any relevance of the $q$-EM tensor to the bosonic $q$-Virasoro algebra \eq{qv904} \cite{BC,KS,CP}, in contrast with the success for the fermionic case \begin{eqnarray} &J(z)=Q^2\partial_z\phi(z)D_z\phi(zQ), \\ &{\bar J}(\bar z)=Q^2\partial_{\bar z}{\bar\phi}(\bar z)D_{\bar z}{\bar\phi}(\bar z Q). \end{eqnarray} This situation might be related to the fact that an isomorphism between the fermionic $q$-Virasoro algebra and the bosonic one has not been found. From this point of view, an improvement of the $q$-EM tensor of this paper must be an interesting topic for further work. We have attempted to formulate the $q$-Virasoro algebra as an expression of invariance under a deformed transformation, which is called the $q$-conformal transformation. We have considered the deformation of a conserved current in an undeformed field theory. In contrast to this case, there may exist a more fundamental approach as well, analyzing a conserved current after constructing a deformed field theory. Finally, we expect that our analysis may point the way to a more suitable formulation of the $q$-conformal transformation law of a fermion/boson field and to a clarification of the meaning of the $q$-Virasoro algebra. \vspace{1cm} \noindent {\em Acknowledgements} This work was done while visiting the theory group of M{\"u}nchen University. The author would like to thank Prof. J. Wess for his kind hospitality, and Prof. R. Sasaki and Dr. N. Aizawa for stimulating communications at an early stage of this work. Thanks are also due to Prof. A. Solomon for reading the manuscript. \newpage \setcounter{equation}{0}
\section{Introduction} Consider the following linear system, \begin{equation} \label{I1.1} Ax=b, \end{equation} where $A\in \mathbb{R}^{n\times n}$ is a nonsingular matrix, $b\in \mathbb{R}^{n\times 1}$ is a given vector and $x\in \mathbb{R}^{n\times 1}$ is an unknown vector. In order to solve (\ref{I1.1}), iterative methods of the form \begin{align} \label{I1.2}x^{i+1}=Hx^{i}+c, ~~~~~~~~i=1,2,3... \end{align} are often employed. The iterative formula (\ref{I1.2}) is obtained by splitting $A$ into the form $A=U-V,$ where $U$ is nonsingular, and then setting $H=U^{-1}V$ and $c=U^{-1}b$. Such a splitting is called a single splitting (see \cite{wn}) of $A$ and the matrix $H$ is called an iteration matrix \cite{vg}. It is well known (see Chapter 7, \cite{bp4}) that the iterative method (\ref{I1.2}) converges to the unique solution of (\ref{I1.1}) (irrespective of the choice of the initial vector $x^{\circ}$) if and only if $\rho(H)<1$, where $\rho(H)$ denotes the spectral radius of $H$, viz., the maximum of the moduli of the eigenvalues of $H$. Note that standard iterative methods like the Jacobi, Gauss-Seidel and successive over-relaxation methods arise from different choices of real square matrices $U$ and $V$. A decomposition $A=U-V$ of $A\in \mathbb{R}^{n\times n}$ is called a regular splitting if $U^{-1}$ exists, $U^{-1} \geq 0$ and $V \geq 0$, where $B \geq 0$ means that all the entries of the matrix $B$ are nonnegative. The notion of regular splitting was proposed by Varga \cite{vg} and it was shown that $\rho(H)<1$ if and only if $A$ is monotone. Here, $A$ being monotone \cite{cz} means that $A^{-1}$ exists and $A^{-1}\geq 0.$ A decomposition $A=U-V$ of $A\in \mathbb{R}^{n\times n}$ is called a weak regular splitting if $U^{-1} \geq 0$ and $U^{-1}V \geq 0.$ This was proposed by Ortega and Rheinboldt \cite{or} and again it was shown that $\rho(H)<1$ if and only if $A$ is monotone. These results show the importance of monotone matrices and of the spectral radius $\rho(H)$ of an iteration matrix in the study of convergence of iterative methods of the form (\ref{I1.2}). It is well known that the convergence of the iterative method (\ref{I1.2}) is faster the smaller $\rho(H)$ is, provided $\rho(H)< 1.$ This leads to the problem of comparison between the spectral radii of the iteration matrices of the corresponding iterative methods derived from two different splittings $A=U_{1}-V_{1}$ and $A=U_{2}-V_{2}$ of the same matrix $A.$ Results related to this problem are called comparison results for splittings of matrices. So far, various comparison theorems for different kinds of single splittings of matrices have been derived by several authors. For details of these results one could refer to (\cite{besz} to \cite{bp4}, \cite{le}, \cite{ys2}, \cite{vg}, \cite{wn1} and \cite{wn2}). Berman and Plemmons \cite{bp2} then extended the notion of splitting to rectangular matrices and called it a proper splitting. A decomposition $A=U-V$ of $A\in \mathbb{R}^{m\times n}$ is called a proper splitting if $\mathcal{R}(A)=\mathcal{R}(U)$ and $\mathcal{N}(A)=\mathcal{N}(U)$, where $\mathcal{R}(A)$ and $\mathcal{N}(A)$ denote the range space of $A$ and the null space of $A$, respectively. Analogous to the invertible case, with such a splitting one associates an iterative sequence $x^{i+1}=Hx^{i}+c,$ where (this time) $H=U^{\dagger}V$ is (again) called the iteration matrix, $c=U^{\dagger}b$ and $U^{\dagger}$ denotes the Moore-Penrose inverse of $U$ (see the next section for the definition).
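As a toy numerical sketch (ad hoc matrices, not taken from the references; \texttt{numpy.linalg.pinv} computes the Moore-Penrose inverse), one can watch this iteration converge for a proper splitting with $\rho(U^{\dagger}V)<1$:
\begin{verbatim}
import numpy as np

# Toy proper splitting A = U - V with U = 2A, V = A, so that
# R(U) = R(A), N(U) = N(A) and H = U^+ V = (1/2) A^+ A with rho(H) = 1/2.
A = np.array([[3., -2., 0.], [-1., 1., 0.]])
b = np.array([1., 2.])
U, V = 2.0 * A, A
Up = np.linalg.pinv(U)
H, c = Up @ V, Up @ b
print(max(abs(np.linalg.eigvals(H))))       # 0.5 < 1

x = np.ones(3)                              # a generic initial vector
for _ in range(60):
    x = H @ x + c
print(x, np.linalg.pinv(A) @ b)             # iterate agrees with A^+ b
\end{verbatim}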
The initial vector $x^{\circ},$ however, cannot be chosen arbitrarily; it must not belong to $\mathcal{N}(V).$ Once again it is well known that this sequence converges to $A^{\dagger}b,$ the least squares solution of minimum norm of the system $Ax=b$ (irrespective of the initial vector $x^{\circ}$), if and only if $\rho(H)< 1.$ For details, refer to \cite{bp4}. Recently, Jena et al. \cite{ljdmsp} extended the notions of regular and weak regular splittings to rectangular matrices and the respective definitions are given next. A decomposition $A=U-V$ of $A\in \mathbb{R}^{m\times n}$ is called a proper regular splitting if it is a proper splitting such that $U^{\dagger} \geq 0$ and $V \geq 0.$ It is called a proper weak regular splitting if it is a proper splitting such that $U^{\dagger} \geq 0$ and $U^{\dagger}V \geq 0.$ Note that Berman and Plemmons \cite{bp2} proved a convergence theorem for these splittings without specifying the types of matrix decomposition. A matrix $A\in \mathbb{R}^{m\times n}$ is called semi-monotone if $A^{\dagger}\geq 0.$ The authors of \cite{ljdmsp} have considered proper regular splittings of a semi-monotone matrix $A$ and obtained some comparison results. Now we turn our focus to the comparison results for double splittings that are available in the literature. A decomposition $A=P-R+S,$ where $P$ is nonsingular, is called a double splitting of $A\in \mathbb{R}^{n\times n}.$ This notion was introduced by $Wo\acute{z}nicki$ \cite{wn}. With such a splitting, the following iterative scheme was formulated for solving (\ref{I1.1}): \begin{align} \label{I1.3} x ^{i+1} = P^{-1}Rx^{i}- P^{-1}Sx^{i-1} + P^{-1}b, ~~~~~~~~~~~~i = 1 , 2 , 3. . . \end{align} Following the idea of Golub and Varga \cite{gb}, $Wo\acute{z}nicki$ wrote equation (\ref{I1.3}) in the following equivalent form: \[ \begin{pmatrix} x^{i+1} \\ x^{i} \end{pmatrix} =\begin{pmatrix} P^{-1}R & -P^{-1}S\\ I & 0 \end{pmatrix} \begin{pmatrix} x^{i}\\ x^{i-1} \end{pmatrix} +\begin{pmatrix} P^{-1}b\\ 0 \end{pmatrix}, \] where $I$ is the identity matrix. Then, it was shown that the iterative method (\ref{I1.3}) converges to the unique solution of (\ref{I1.1}) for all initial vectors $x^{0}$, $x^{1}$ if and only if the spectral radius of the iteration matrix \begin{align*} W&=\begin{pmatrix} P^{-1}R & -P^{-1}S\\ I & 0 \end{pmatrix} \end{align*} is less than one, that is, $\rho(W) < 1$. Based on this idea, in recent years, several comparison theorems have been proved for double splittings of matrices. We briefly review a few of them here. First, let us recall the definitions of regular and weak regular double splittings. A decomposition $A=P-R+S$ is called a regular double splitting if $P^{-1}\geq 0$, $R\geq 0$ and $-S\geq 0$; it is called a weak regular double splitting if $P^{-1}\geq 0$, $P^{-1}R\geq 0$ and $-P^{-1}S\geq 0$. Shen and Huang \cite{ss} have considered regular and weak regular double splittings of a monotone matrix or a Hermitian positive definite matrix and obtained some comparison theorems. Miao and Zheng \cite{mizh} have obtained a comparison theorem for the spectral radii of matrices arising from double splittings of different monotone matrices. Song and Song \cite{sjsy} have studied convergence and comparison theorems for nonnegative double splittings of a real square nonsingular matrix. Li and Wu \cite{li} have obtained some comparison theorems for double splittings of a matrix. Jena et al.
\cite{ljdmsp} and Mishra \cite{deb} have introduced the notions of double proper regular splittings and double proper weak regular splittings and derived some comparison theorems. Recently, Alekha Kumar and Mishra \cite{bm} have considered proper nonnegative double splittings of a nonnegative matrix and derived certain comparison theorems. In this article we generalize the comparison results of Shen and Huang \cite{ss} from square nonsingular matrices to rectangular matrices and from classical inverses to Moore-Penrose inverses. In fact, we consider two double splittings $A=P_{1}-R_{1}+S_{1}$ and $A=P_{2}-R_{2}+S_{2}$ of a semi-monotone matrix $A\in \mathbb{R}^{m\times n}$ and derive two comparison theorems for the spectral radii of the corresponding iteration matrices. In section 2, we introduce notations and preliminary results. We present the main results in section 3. \section{Notations, Definitions and Preliminaries} In this section, we fix notations and collect basic definitions and preliminary results which will be used in the sequel. Let $\mathbb{R}^{m\times n}$ denote the set of all real matrices with $m$ rows and $n$ columns. For $A\in \mathbb{R}^{m\times n},$ the transpose of $A$ is denoted by $A^{t}$; and the matrix $X\in \mathbb{R}^{n\times m}$ satisfying $AXA=A$, $XAX=X$, $(AX)^{t}=AX$ and $(XA)^{t}=XA$ is called the Moore-Penrose inverse of $A.$ It always exists, is unique, and is denoted by $A^{\dagger}$. If $A$ is invertible then $A^{\dagger}=A^{-1}$. Let $L$ and $M$ be complementary subspaces of a real Euclidean space $\mathbb{R}^{n}$. Then the projection of $\mathbb{R}^{n}$ on $L$ along $M$ is denoted by $P_{L,M}$. If, in addition, $L$ and $M$ are orthogonal then it is called an orthogonal projection and it is denoted simply by $P_{L}$. The following well known properties (see \cite{bg}) of $A^{\dagger}$ will be used in this manuscript: $\mathcal{R}(A^{t})=\mathcal{R}(A^{\dagger})$, $\mathcal{N}(A^{t})=\mathcal{N}(A^{\dagger})$, $AA^{\dagger}=P_{\mathcal{R}(A)}$, $A^{\dagger}A=P_{\mathcal{R}(A^{t})}$. In particular, if $x\in \mathcal{R}(A^{t})$ then $x=A^{\dagger}Ax$. A matrix $A\in \mathbb{R}^{m\times n}$ is nonnegative if all the entries of $A$ are nonnegative; this is denoted by $A\geq 0.$ The same notation and nomenclature are also used for vectors. For $A, B\in \mathbb{R}^{m\times n}$, we write $B\geq A$ if $B-A\geq 0.$ We now present some results connecting nonnegativity of a matrix and its spectral radius. \begin{lem} (Theorem 2.1.11, \cite{bp4})\label{com} Let $A\in\mathbb {R}^{n\times n}$ and $A\geq 0$. Then $\alpha x\leq Ax$, $x\geq 0\Rightarrow \alpha \leq \rho(A)$ and $Ax\leq \beta x$, $x> 0 \Rightarrow \rho(A)\leq \beta$. \end{lem} \begin{thm}(Theorem 3.16, \cite{vg})\label{Theorem1} Let $B\in\mathbb {R}^{n\times n}$ and $B\geq 0$. Then $\rho(B)< 1$ if and only if $(I-B)^{-1}$ exists and $(I-B)^{-1}=\sum_{k=0}^{\infty} B^{k}\geq 0$. \end{thm} The next theorem is a part of the Perron-Frobenius theorem. \begin{thm}(Theorem 2.20, \cite{vg})\label{Theorem2} Let $A\in\mathbb {R}^{n\times n}$ and $A\geq 0$. Then \newline $(i)$ $A$ has a nonnegative real eigenvalue equal to the spectral radius. \newline $(ii)$ There exists a nonnegative real eigenvector for its spectral radius. \end{thm} \begin{lem}(Lemma 2.2, \cite{ss})\label {spectral} Let $A=\begin{pmatrix} B & C\\ I & 0 \end{pmatrix} \geq 0$ and $\rho(B+C)< 1$. Then, $\rho(A)< 1$.
\end{lem} As we mentioned in the introduction, a decomposition $A=U-V$ of $A\in \mathbb{R}^{m\times n}$ is called a proper splitting if $\mathcal{R}(A)=\mathcal{R}(U)$ and $\mathcal{N}(A)=\mathcal{N}(U).$ The next two results are on proper splittings. \begin{thm}(Theorem 1, \cite{bp2}) If $A=U-V$ is a proper splitting of $A\in \mathbb{R}^{m\times n}$, then $AA^{\dagger}=UU^{\dagger}$ and $A^{\dagger}A=U^{\dagger}U.$ \end{thm} \begin{thm} (Theorem 3, \cite{bp2}) \label{eq} Let $A=U-V$ be a proper splitting of $A\in \mathbb{R}^{m\times n}$ such that $U^{\dagger}\geq 0$ and $U^{\dagger}V\geq 0$. Then the following are equivalent:\\ (i) $A^{\dagger}\geq 0$.\\ (ii) $A^{\dagger}V\geq 0$.\\ (iii) $\rho(U^{\dagger}V)<1.$ \end{thm} Note that a proper splitting $A=U-V$ of $A\in \mathbb{R}^{m\times n}$ satisfying the conditions $U^{\dagger}\geq 0$ and $U^{\dagger}V\geq 0$ is called a proper weak regular splitting by Jena et al. \cite{ljdmsp}. We now turn to results on double splittings. For $A\in \mathbb{R}^{m\times n}$, a decomposition $A=P-R+S$ is called a $double~ splitting$ of $A$. A double splitting $A=P-R+S$ of $A\in \mathbb{R}^{m\times n}$ is called a $ proper~ double ~splitting$ if $\mathcal{R}(A)=\mathcal{R}(P)$ and $\mathcal{N}(A)=\mathcal{N}(P)$. Again, consider the following rectangular linear system \begin{equation} \label{I1.4} Ax=b, \end{equation} where $A\in \mathbb{R}^{m\times n}$ (this time $A$ need not be square), $b\in \mathbb{R}^{m\times 1}$ is a given vector and $x\in \mathbb{R}^{n\times 1}$ is an unknown vector. Similar to the nonsingular case, if we use a proper double splitting $A=P-R+S$ to solve (\ref{I1.4}), it leads to the following iterative scheme: \begin{align} \label{I1.5} x^{k+1}=P^{\dagger}Rx^{k}-P^{\dagger}Sx^{k-1}+P^{\dagger}b, ~\text{where} ~k= 1, 2, ... \end{align} Motivated by $Wo\acute{z}nicki's$ \cite{wn} idea, equation (\ref{I1.5}) can be written as \[ \begin{pmatrix} x^{k+1}\\ x^{k} \end{pmatrix} =\begin{pmatrix} P^{\dagger}R & -P^{\dagger}S\\ I & 0 \end{pmatrix} \begin{pmatrix} x^{k}\\ x^{k-1} \end{pmatrix} +\begin{pmatrix} P^{\dagger}b\\ 0 \end{pmatrix}. \] If we denote $X^{k+1}=\begin{pmatrix} x^{k+1}\\ x^{k} \end{pmatrix},$ $W= \begin{pmatrix} P^{\dagger}R & -P^{\dagger}S\\ I & 0 \end{pmatrix},$ $X^{k}=\begin{pmatrix} x^{k}\\ x^{k-1} \end{pmatrix}$\\ and $B=\begin{pmatrix} P^{\dagger}b\\ 0 \end{pmatrix},$ then we get \begin{equation} \label{I1.6} X^{k+1}=WX^{k}+B, k=1,2... \end{equation} Then, it can be shown that the iterative method (\ref{I1.6}) converges to the unique least squares solution of minimum norm of (\ref{I1.4}) if and only if $\rho(W)<1.$ Next, we introduce some subclasses of proper double splittings. \begin{defn} Let $A\in \mathbb{R}^{m\times n}$. A proper double splitting $A=P-R+S$ is called\\ (i) a regular proper double splitting if $P^{\dagger}\geq 0$, $R\geq 0$ and $-S\geq 0.$\\ (ii) a weak regular proper double splitting if $P^{\dagger}\geq 0$, $P^{\dagger}R\geq 0$ and $-P^{\dagger}S\geq 0.$ \end{defn} Note that the authors of \cite{ljdmsp} called regular proper double splittings and weak regular proper double splittings double proper regular splittings and double proper weak regular splittings, respectively. However, we feel that the present usage is more appropriate and hence we keep this nomenclature throughout this manuscript.
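Before relating single and double splittings, here is a small numerical sketch of scheme (\ref{I1.5}) (ad hoc data; \texttt{numpy.linalg.pinv} for the Moore-Penrose inverse), in which the iterates of a weak regular proper double splitting approach $A^{\dagger}b$:
\begin{verbatim}
import numpy as np

# Sketch of x^{k+1} = P^+ R x^k - P^+ S x^{k-1} + P^+ b for a proper
# double splitting A = P - R + S with P = 3A (so R(P)=R(A), N(P)=N(A)).
A = np.array([[1., 0., 1.], [0., 1., 0.]])
P = np.array([[3., 0., 3.], [0., 3., 0.]])
R = np.array([[2., 0., 2.], [0., 1., 0.]])
S = np.array([[0., 0., 0.], [0., -1., 0.]])     # indeed A = P - R + S
b = np.array([2., 1.])
Pp = np.linalg.pinv(P)

x_old, x = np.ones(3), np.zeros(3)              # arbitrary starting vectors
for _ in range(200):
    x_old, x = x, Pp @ R @ x - Pp @ S @ x_old + Pp @ b
print(x)                                        # ~ [1., 1., 1.]
print(np.linalg.pinv(A) @ b)                    # = A^+ b, the least squares
                                                # solution of minimum norm
\end{verbatim}
The next result gives the relation between the spectral radii of the iteration matrices associated with a single splitting and with a double splitting.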
\begin{thm}(Theorem 4.3, \cite{deb}) Let $A=P-R+S$ be a weak regular proper double splitting of $A\in \mathbb{R}^{m\times n}.$ Then $\rho(W)< 1$ if and only if $\rho(U^{\dagger}V)< 1,$ where $U=P$ and $V=R-S.$ \end{thm} We conclude this section with a convergence theorem for a proper double splitting of a semi-monotone matrix. \begin{thm} (Theorem 3.6, \cite{ljdmsp})\label{con} Let $A\in \mathbb{R}^{m\times n}$ be such that $A^{\dagger}\geq 0$. Let $A=P-R+S$ be a weak regular proper double splitting. Then, $\rho(W)< 1$. \end{thm} \section{Main Results} In this section, the main results of this article are presented. These results extend the results of Shen and Huang \cite{ss} from square nonsingular matrices to rectangular matrices and from classical inverses to Moore-Penrose inverses. Let $A\in \mathbb{R}^{m\times n}$ and let $A=P_{1}-R_{1}+S_{1}=P_{2}-R_{2}+S_{2}$ be two proper double splittings of $A$. Set $W_{1}=\begin{pmatrix} P_{1}^{\dagger}R_{1} & -P_{1}^{\dagger}S_{1}\\ I & 0 \end{pmatrix}$ and $W_{2}=\begin{pmatrix} P_{2}^{\dagger}R_{2} & -P_{2}^{\dagger}S_{2}\\ I & 0 \end{pmatrix}$. The next result gives the comparison between $\rho(W_{1})$ and $\rho(W_{2})$. As mentioned earlier, this comparison is useful to analyse the rate of convergence of the iterative methods formulated from these double splittings for solving the linear system $Ax=b$. \begin{thm}\label{dcr1} Let $A\in \mathbb{R}^{m\times n}$ be such that $A^{\dagger}\geq 0$. Let $A=P_{1}-R_{1}+S_{1}$ be a regular proper double splitting such that $P_{1}P_{1}^{\dagger}\geq 0$ and let $A=P_{2}-R_{2}+S_{2}$ be a weak regular proper double splitting. If $P_{1}^{\dagger}\geq P_{2}^{\dagger}$ and any one of the following conditions,\\ (i) $P_{1}^{\dagger}R_{1}\geq P_{2}^{\dagger}R_{2}$\\ (ii) $P_{1}^{\dagger}S_{1}\geq P_{2}^{\dagger}S_{2}$ \\ holds, then $\rho(W_{1})\leq \rho(W_{2})< 1 $. \end{thm} \begin{proof} Since $A=P_{1}-R_{1}+S_{1}$ is a regular proper double splitting of $A,$ by Theorem \ref{con} we get $\rho(W_{1})< 1.$ Similarly, $\rho(W_{2})< 1.$ It remains to show that $\rho(W_{1})\leq \rho(W_{2}).$ If $\rho(W_{1})=0$, the conclusion follows trivially. So, without loss of generality, assume that $\rho(W_{1})\neq 0$. Since $A=P_{1}-R_{1}+S_{1}$ is a regular proper double splitting, we have $W_{1}=\begin{pmatrix} P_{1}^{\dagger}R_{1} & -P_{1}^{\dagger}S_{1}\\ I & 0 \end{pmatrix}\geq 0.$ Then, by the Perron-Frobenius theorem, there exists a vector $x=\begin{pmatrix} x_{1}\\ x_{2} \end{pmatrix}\in \mathbb{R}^{2n},$ $x\geq 0$ and $x\neq 0$, such that $W_{1}x=\rho(W_{1})x.$ This implies that \begin{align} \label{dcr1.1} P_{1}^{\dagger}R_{1}x_{1}-P_{1}^{\dagger}S_{1}x_{2}&=\rho(W_{1})x_{1}. \\ \label{dcr1.2} x_{1}&=\rho(W_{1})x_{2}. \end{align} Upon premultiplying equation (\ref{dcr1.1}) by $P_{1}$ and using equation (\ref{dcr1.2}), we get \begin{align} \label{dcr1.3} [\rho(W_{1})]^{2}P_{1}x_{1}=\rho(W_{1})P_{1}P_{1}^{\dagger}R_{1}x_{1}-P_{1}P_{1}^{\dagger}S_{1}x_{1}.
\end{align} We have $P_{1}P_{1}^{\dagger}\geq 0,$ $R_{1}\geq 0,$ $-S_{1}\geq 0$ and $x_{1}\geq 0.$ Therefore, by (\ref{dcr1.3}), $[\rho(W_{1})]^{2}P_{1}x_{1}\geq 0.$\\ Now, again from (\ref{dcr1.3}), \begin{align*} 0&=[\rho(W_{1})]^{2}P_{1}x_{1}-\rho(W_{1})P_{1}P_{1}^{\dagger}R_{1}x_{1}+P_{1}P_{1}^{\dagger}S_{1}x_{1}\\&\leq \rho(W_{1})P_{1}x_{1}-\rho(W_{1})P_{1}P_{1}^{\dagger}R_{1}x_{1}+\rho(W_{1})P_{1}P_{1}^{\dagger}S_{1}x_{1}\nonumber\\ &=\rho(W_{1})[P_{1}x_{1}-P_{1}P_{1}^{\dagger}(R_{1}-S_{1})x_{1}]\\ &=\rho(W_{1})[P_{1}x_{1}-R_{1}x_{1}+S_{1}x_{1}]\\ &=\rho(W_{1})Ax_{1}, \end{align*} where we have used the facts that $0< \rho(W_{1})< 1$ and $\mathcal{R}(R_{1}-S_{1})\subseteq \mathcal{R}(P_{1}).$ This proves that $Ax_{1}\geq 0.$\\ Also, by using equations (\ref{dcr1.1}) and (\ref{dcr1.2}), we get \begin{align*} W_{2}x-&\rho(W_{1})x= \begin{pmatrix}P_{2}^{\dagger}R_{2}x_{1}-P_{2}^{\dagger}S_{2}x_{2}-\rho(W_{1})x_{1} \\ x_{1}-\rho(W_{1})x_{2} \end{pmatrix}\\ &=\begin{pmatrix}(P_{2}^{\dagger}R_{2}-P_{1}^{\dagger}R_{1})x_{1}+\frac{1}{\rho(W_{1})}(P_{1}^{\dagger}S_{1}-P_{2}^{\dagger}S_{2})x_{1} \\ 0 \end{pmatrix}\\ &=\begin{pmatrix} \nabla\\ 0 \end{pmatrix}, \end{align*} where $\nabla = (P_{2}^{\dagger}R_{2}-P_{1}^{\dagger}R_{1})x_{1}+\frac{1}{\rho(W_{1})}(P_{1}^{\dagger}S_{1}-P_{2}^{\dagger}S_{2})x_{1}.$\\ {\bf Case(i)} Let us assume that $P_{1}^{\dagger}R_{1}\geq P_{2}^{\dagger}R_{2}.$ Since $0<\rho(W_{1})< 1,$ we get $(P_{2}^{\dagger}R_{2}-P_{1}^{\dagger}R_{1})x_{1}\geq \frac{1}{\rho(W_{1})}(P_{2}^{\dagger}R_{2}-P_{1}^{\dagger}R_{1})x_{1}$. Then \begin{align} \nabla=&(P_{2}^{\dagger}R_{2}-P_{1}^{\dagger}R_{1})x_{1}+ \frac{1}{\rho(W_{1})}(P_{1}^{\dagger}S_{1}-P_{2}^{\dagger}S_{2})x_{1}\nonumber\\&\geq \frac{1}{\rho(W_{1})}(P_{2}^{\dagger}R_{2}-P_{1}^{\dagger}R_{1})x_{1}+ \frac{1}{\rho(W_{1})}(P_{1}^{\dagger}S_{1}-P_{2}^{\dagger}S_{2})x_{1}\nonumber\\ &=\frac{1}{\rho(W_{1})}[P_{2}^{\dagger}(R_{2}-S_{2})x_{1}- P_{1}^{\dagger}(R_{1}-S_{1})x_{1}]\nonumber\\ &=\frac{1}{\rho(W_{1})}[P_{2}^{\dagger}P_{2}- P_{2}^{\dagger}A-P_{1}^{\dagger}P_{1}+ P_{1}^{\dagger}A]x_{1}\nonumber\\ &=\frac{1}{\rho(W_{1})}(P_{1}^{\dagger}-P_{2}^{\dagger})Ax_{1}, \end{align} where we have used the fact that $P_{1}^{\dagger}P_{1}=P_{2}^{\dagger}P_{2}.$ Since $Ax_{1}\geq 0$ and $P_{1}^{\dagger}\geq P_{2}^{\dagger},$ from the above inequality we get $\nabla\geq 0.$ Then, $W_{2}x-\rho(W_{1})x=\begin{pmatrix} \nabla\\ 0 \end{pmatrix} \geq 0.$ This implies that $\rho(W_{1})x\leq W_{2}x$. So, by Lemma \ref{com}, $\rho(W_{1})\leq \rho(W_{2})$. This proves that $\rho(W_{1})\leq \rho(W_{2})< 1.$\\ {\bf Case(ii)} Assume that $P_{1}^{\dagger}S_{1}\geq P_{2}^{\dagger}S_{2}$. Since $0<\rho(W_{1})< 1$ and $Ax_{1}\geq 0,$ again we get \begin{align*} \nabla= &(P_{2}^{\dagger}R_{2}-P_{1}^{\dagger}R_{1})x_{1}+ \frac{1}{\rho(W_{1})}(P_{1}^{\dagger}S_{1}-P_{2}^{\dagger}S_{2})x_{1}\\&\geq (P_{2}^{\dagger}R_{2}-P_{1}^{\dagger}R_{1})x_{1}+ (P_{1}^{\dagger}S_{1}-P_{2}^{\dagger}S_{2})x_{1}\\ &=(P_{1}^{\dagger}-P_{2}^{\dagger})Ax_{1}\geq 0. \end{align*} This implies that $W_{2}x-\rho(W_{1})x=\begin{pmatrix} \nabla\\ 0 \end{pmatrix} \geq 0.$\\ So, again by Lemma \ref{com}, we get $\rho(W_{1})\leq \rho(W_{2})$. This proves that $\rho(W_{1})\leq \rho(W_{2})< 1.$ \end{proof} The following example shows that the converse of Theorem \ref{dcr1} is not true. \begin{ex} Let $A=\begin{pmatrix} 3 & -2 & 0\\ -1 & 1 & 0 \end{pmatrix}$.
Let $P_{1}=\begin{pmatrix} 5 & -1 & 0\\ 0 & 1 & 0 \end{pmatrix},$ \newline $R_{1}= \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},$ $S_{1}= \begin{pmatrix} -1 & -1 & 0\\ -1 & 0 & 0 \end{pmatrix},$ $P_{2}=\begin{pmatrix} 3 & 0 & 0\\ 0 & 2& 0 \end{pmatrix},$ $R_{2}= \begin{pmatrix} 0 & 1 & 0\\ 0 & 1 & 0 \end{pmatrix}$ and $S_{2}= \begin{pmatrix} 0 & -1 & 0\\ -1 & 0 & 0 \end{pmatrix}$. Then $P_{1}^{\dagger}=\frac{1}{5}\begin{pmatrix} 1 & 1\\ 0 & 5\\ 0 & 0 \end{pmatrix},$ $P_{1}^{\dagger}R_{1}=\frac{1}{5}\begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},$ $P_{1}^{\dagger}S_{1}=\frac{1}{5}\begin{pmatrix} -2 & -1 & 0\\ -5 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},$ $P_{2}^{\dagger}=\frac{1}{6}\begin{pmatrix} 2 & 0\\ 0 & 3\\ 0 & 0 \end{pmatrix},$ $P_{2}^{\dagger}R_{2}=\frac{1}{6}\begin{pmatrix} 0 & 2 & 0\\ 0 & 3 & 0\\ 0 & 0 & 0 \end{pmatrix}$ and $P_{2}^{\dagger}S_{2}=\frac{1}{6}\begin{pmatrix} 0 & -2 & 0\\ -3 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}$. It is easy to verify that $A=P_{1}-R_{1}+S_{1}$ is a regular proper double splitting and $A=P_{2}-R_{2}+S_{2}$ is a weak regular proper double splitting. Also, $0.9079=\rho(W_{1})\leq \rho(W_{2})=0.9158 < 1$. However, the conditions $P_{1}^{\dagger}\geq P_{2}^{\dagger}$, $P_{1}^{\dagger}R_{1}\geq P_{2}^{\dagger}R_{2}$ and $P_{1}^{\dagger}S_{1}\geq P_{2}^{\dagger}S_{2}$ do not hold. \end{ex} \begin{cor} (Theorem 3.1, \cite{ss}) \label{cor1} Let $A^{-1}\geq 0$. Let $A=P_{1}-R_{1}+S_{1}$ be a regular double splitting and $A=P_{2}-R_{2}+S_{2}$ be a weak regular double splitting. If $P_{1}^{-1}\geq P_{2}^{-1}$ and any one of the following conditions,\\ (i) $P_{1}^{-1}R_{1}\geq P_{2}^{-1}R_{2}$\\ (ii) $P_{1}^{-1}S_{1}\geq P_{2}^{-1}S_{2}$ \newline holds, then $\rho(W_{1})\leq \rho(W_{2})< 1, $ where $W_{1}=\begin{pmatrix} P_{1}^{-1}R_{1} & -P_{1}^{-1}S_{1}\\ I & 0 \end{pmatrix}$ and \newline$W_{2}=\begin{pmatrix} P_{2}^{-1}R_{2} & -P_{2}^{-1}S_{2}\\ I & 0 \end{pmatrix}$. \end{cor} \begin{cor}\label{cor2} Let $A^{-1}\geq 0$. Let $A=P_{1}-R_{1}+S_{1}$ be a regular double splitting and $A=P_{2}-R_{2}+S_{2}$ be a weak regular double splitting. If $P_{1}^{-1}\geq P_{2}^{-1}$ and $R_{1}\geq R_{2}$ hold, then $\rho(W_{1})\leq \rho(W_{2})< 1 $. \end{cor} The conclusion of Theorem \ref{dcr1} can also be achieved by interchanging the roles of the two splittings, that is, by taking $A=P_{1}-R_{1}+S_{1}$ to be a weak regular proper double splitting and $A=P_{2}-R_{2}+S_{2}$ a regular proper double splitting. The following is the exact statement of this result. \begin{thm} \label{mr1} Let $A\in \mathbb{R}^{m\times n}$ be such that $e=(1,1,...,1)^{t}\in \mathcal{R}(A)$ and $A^{\dagger}\geq 0$. Let $A=P_{1}-R_{1}+S_{1}$ be a weak regular proper double splitting and let $A=P_{2}-R_{2}+S_{2}$ be a regular proper double splitting such that $P_{2}^{\dagger}$ has no zero row and $P_{2}P_{2}^{\dagger}\geq 0$. If $P_{1}^{\dagger}\geq P_{2}^{\dagger}$ and any one of the following conditions,\\ (i) $P_{1}^{\dagger}R_{1}\geq P_{2}^{\dagger}R_{2}$\\ (ii) $P_{1}^{\dagger}S_{1}\geq P_{2}^{\dagger}S_{2}$ \\ holds, then $\rho(W_{1})\leq \rho(W_{2})< 1 $. \end{thm} \begin{proof} Since $A=P_{1}-R_{1}+S_{1}$ is a weak regular proper double splitting of $A,$ by Theorem \ref{con} we get $\rho(W_{1})< 1.$ Similarly, $\rho(W_{2})< 1.$ It remains to show that $\rho(W_{1})\leq \rho(W_{2}).$ Let $J$ be the $m\times n$ matrix in which each entry is equal to 1.
For given $\epsilon >0,$ set $A_{\epsilon}=A-\epsilon J,$ $R_{1}(\epsilon)=R_{1}+\frac{1}{2}\epsilon J$, $S_{1}(\epsilon)=S_{1}-\frac{1}{2}\epsilon J$,\linebreak $R_{2}(\epsilon)=R_{2}+\frac{1}{2}\epsilon J,$ $S_{2}(\epsilon)=S_{2}-\frac{1}{2}\epsilon J,$ $W_{1}(\epsilon)=\begin{pmatrix} P_{1}^{\dagger}R_{1}(\epsilon) & -P_{1}^{\dagger}S_{1}(\epsilon)\\ I & 0 \end{pmatrix}$ and $W_{2}(\epsilon)=\begin{pmatrix} P_{2}^{\dagger}R_{2}(\epsilon) & -P_{2}^{\dagger}S_{2}(\epsilon)\\ I & 0 \end{pmatrix}$. We have $e=(1,1,...,1)^{t}\in \mathcal{R}(A)$. So, there exists a matrix $B\in \mathbb{R}^{n\times n}$ such that $J=AB$. Then $A_{\epsilon}=A-\epsilon J=(A-\epsilon AB)=(A-\epsilon AA^{\dagger}AB)=(A-\epsilon AA^{\dagger} J)=A(I-\epsilon A^{\dagger}J).$ Now, choose the above $\epsilon $ such that $\rho(\epsilon A^{\dagger}J)< 1$ and $\mathcal{N}(A_{\epsilon})=\mathcal{N}(A)$. Since $\rho(\epsilon A^{\dagger}J)< 1,$ $I-\epsilon A^{\dagger}J$ is invertible and hence $\mathcal{R}(A_{\epsilon})=\mathcal{R}(A).$ Thus $A_{\epsilon}$ has the same range and null space as $A$, and we can conclude that $A_{\epsilon}=P_{1}-R_{1}(\epsilon)+S_{1}(\epsilon)$ is a weak regular proper double splitting and $A_{\epsilon}=P_{2}-R_{2}(\epsilon)+S_{2}(\epsilon)$ is a regular proper double splitting. For the same $\epsilon,$ define $X=(I-\epsilon A^{\dagger}J)^{-1}A^{\dagger}$; we shall prove that $X$ is the Moore-Penrose inverse of $A_{\epsilon}.$ Let $x\in \mathcal{R}(A_{\epsilon}^{t}).$ Then \begin{align*} XA_{\epsilon}x&=(I-\epsilon A^{\dagger}J)^{-1}A^{\dagger}(A-\epsilon AA^{\dagger} J)x\\ &=(I-\epsilon A^{\dagger}J)^{-1}(A^{\dagger}Ax-\epsilon A^{\dagger}AA^{\dagger}Jx)\\ &=(I-\epsilon A^{\dagger}J)^{-1}(x-\epsilon A^{\dagger}Jx)\\ &=x, \end{align*} and for $y\in \mathcal{N}(A_{\epsilon}^{t})$ we get \begin{align*} Xy=(I-\epsilon A^{\dagger}J)^{-1}A^{\dagger}y=0. \end{align*} Hence, by the definition, $A^{\dagger}_{\epsilon}=X=(I-\epsilon A^{\dagger}J)^{-1}A^{\dagger}.$ Also, \newline $A^{\dagger}_{\epsilon}=(I+\epsilon A^{\dagger}J+\epsilon^{2} (A^{\dagger}J)^{2}+\cdots)A^{\dagger}\geq 0.$ Then, by Theorem \ref{eq}, $\rho(P_{2}^{\dagger}(R_{2}(\epsilon)-S_{2}(\epsilon)))< 1.$ So, by Lemma \ref{spectral}, $\rho(W_{2}(\epsilon))< 1.$ Clearly, $P_{2}^{\dagger}R_{2}(\epsilon)> 0$ and $-P_{2}^{\dagger}S_{2}(\epsilon)> 0.$ So, $W_{2}(\epsilon)\geq 0$. Then, by the Perron-Frobenius theorem, there exists a vector $x(\epsilon)=\begin{pmatrix} x_{1}(\epsilon)\\ x_{2}(\epsilon) \end{pmatrix}\in \mathbb{R}^{2n},$ $x(\epsilon)\geq 0$ and $x(\epsilon)\neq 0$, such that $W_{2}(\epsilon)x(\epsilon)=\rho(W_{2}(\epsilon))x(\epsilon).$ This implies \begin{align} \label{2.1}P_{2}^{\dagger}R_{2}(\epsilon)x_{1}(\epsilon)-P_{2}^{\dagger}S_{2}(\epsilon)x_{2}(\epsilon)&=\rho(W_{2}(\epsilon))x_{1}(\epsilon)\\ \label{2.2} x_{1}(\epsilon)&=\rho(W_{2}(\epsilon))x_{2}(\epsilon). \end{align} If $\rho(W_{2}(\epsilon))=0$ then, from equations (\ref{2.1}) and (\ref{2.2}), $x(\epsilon)=0$, a contradiction.
So, $0< \rho(W_{2}(\epsilon))< 1.$ Then, by using equations (\ref{2.1}) and (\ref{2.2}) as in the proof of Theorem \ref{dcr1}, we can show that $\rho(W_{2}(\epsilon))A_{\epsilon}x_{1}(\epsilon)\geq 0.$ This implies that $A_{\epsilon}x_{1}(\epsilon)\geq0.$ Also, from equations (\ref{2.1}) and (\ref{2.2}), we get \begin{align*} &W_{1}(\epsilon)x(\epsilon)-\rho(W_{2}(\epsilon))x(\epsilon)\\ &=\begin{pmatrix}P_{1}^{\dagger}R_{1}(\epsilon)x_{1}(\epsilon)-P_{1}^{\dagger}S_{1}(\epsilon)x_{2}(\epsilon)-\rho(W_{2}(\epsilon))x_{1}(\epsilon) \\ x_{1}(\epsilon)-\rho(W_{2}(\epsilon))x_{2}(\epsilon) \end{pmatrix}\\ &=\begin{pmatrix} (P_{1}^{\dagger}R_{1}(\epsilon)-P_{2}^{\dagger}R_{2}(\epsilon))x_{1}(\epsilon)+ \frac{1}{\rho(W_{2}(\epsilon))}(P_{2}^{\dagger}S_{2}(\epsilon)-P_{1}^{\dagger}S_{1}(\epsilon))x_{1}(\epsilon)\\ 0 \end{pmatrix}\\ &=\begin{pmatrix} \nabla\\ 0 \end{pmatrix}, \end{align*} where $\nabla= (P_{1}^{\dagger}R_{1}(\epsilon)-P_{2}^{\dagger}R_{2}(\epsilon))x_{1}(\epsilon)+ \frac{1}{\rho(W_{2}(\epsilon))}(P_{2}^{\dagger}S_{2}(\epsilon)-P_{1}^{\dagger}S_{1}(\epsilon))x_{1}(\epsilon).$\\ {\bf Case$(i)$} Assume that $P_{1}^{\dagger}R_{1}\geq P_{2}^{\dagger}R_{2}$. Since $0< \rho(W_{2}(\epsilon))< 1,$ we get $(P_{1}^{\dagger}R_{1}(\epsilon)-P_{2}^{\dagger}R_{2}(\epsilon))x_{1}(\epsilon)\leq \frac{1}{\rho(W_{2}(\epsilon))}(P_{1}^{\dagger}R_{1}(\epsilon)-P_{2}^{\dagger}R_{2}(\epsilon))x_{1}(\epsilon).$ Therefore, \begin{align*} \nabla &\leq \frac{1}{\rho(W_{2}(\epsilon))}(P_{1}^{\dagger}R_{1}(\epsilon)-P_{2}^{\dagger}R_{2}(\epsilon))x_{1}(\epsilon)+ \frac{1}{\rho(W_{2}(\epsilon))}(P_{2}^{\dagger}S_{2}(\epsilon)-P_{1}^{\dagger}S_{1}(\epsilon))x_{1}(\epsilon)\\ &=\frac{1}{\rho(W_{2}(\epsilon))}[P_{1}^{\dagger}(R_{1}(\epsilon)-S_{1}(\epsilon))x_{1}(\epsilon)- P_{2}^{\dagger}(R_{2}(\epsilon)-S_{2}(\epsilon))x_{1}(\epsilon)]\\ &=\frac{1}{\rho(W_{2}(\epsilon))}[P_{1}^{\dagger}P_{1}- P_{1}^{\dagger}A_{\epsilon}-P_{2}^{\dagger}P_{2}+ P_{2}^{\dagger}A_{\epsilon}]x_{1}(\epsilon)\\ &=\frac{1}{\rho(W_{2}(\epsilon))}(P_{2}^{\dagger}-P_{1}^{\dagger})A_{\epsilon}x_{1}(\epsilon), \end{align*} where we have used the fact that $P_{1}^{\dagger}P_{1}=P_{2}^{\dagger}P_{2}.$ Since $A_{\epsilon}x_{1}(\epsilon)\geq 0$ and $P_{1}^{\dagger}\geq P_{2}^{\dagger},$ we get that $\nabla\leq 0.$ Thus, $ W_{1}(\epsilon)x(\epsilon)-\rho(W_{2}(\epsilon))x(\epsilon)=\begin{pmatrix} \nabla\\ 0 \end{pmatrix}\leq 0.$ This implies $ W_{1}(\epsilon)x(\epsilon)\leq \rho(W_{2}(\epsilon))x(\epsilon).$ So, by Lemma \ref{com}, $\rho(W_{1}(\epsilon))\leq \rho(W_{2}(\epsilon)).$\\ Now, from the continuity of eigenvalues, we have $$\rho(W_{1})=\lim_{\epsilon\to 0} \rho(W_{1}(\epsilon))\leq \lim_{\epsilon\to 0} \rho(W_{2}(\epsilon))=\rho(W_{2}).$$ {\bf Case$(ii)$} Assume that $P_{1}^{\dagger}S_{1}\geq P_{2}^{\dagger}S_{2}.$ We have $\rho(\epsilon A^{\dagger}J)< 1.$ Choose the above $\epsilon$ small enough such that $$P_{1}^{\dagger}S_{1} -P_{2}^{\dagger}S_{2}\geq \frac{\epsilon}{2}(P_{1}^{\dagger} -P_{2}^{\dagger})J.$$ Since $P_{1}^{\dagger}S_{1}(\epsilon)\geq P_{2}^{\dagger}S_{2}(\epsilon),$ $A^{\dagger}_{\epsilon}\geq 0$ and $0<\rho(W_{2}(\epsilon))< 1,$ we get \begin{align*} \nabla & \leq (P_{1}^{\dagger}R_{1}(\epsilon)-P_{2}^{\dagger}R_{2}(\epsilon))x_{1}(\epsilon)+ (P_{2}^{\dagger}S_{2}(\epsilon)-P_{1}^{\dagger}S_{1}(\epsilon))x_{1}(\epsilon)\\ &=(P_{2}^{\dagger}-P_{1}^{\dagger})A_{\epsilon}x_{1}(\epsilon)\leq 0.
\end{align*} This implies that $ W_{1}(\epsilon)x(\epsilon)-\rho(W_{2}(\epsilon))x(\epsilon)=\begin{pmatrix} \nabla\\ 0 \end{pmatrix} \leq 0.$\\ So, $ W_{1}(\epsilon)x(\epsilon)\leq \rho(W_{2}(\epsilon))x(\epsilon).$ Then, by Lemma \ref{com}, $\rho(W_{1}(\epsilon))\leq \rho(W_{2}(\epsilon)).$ As in case $(i)$, this implies that $\rho(W_{1})\leq \rho(W_{2})$. \end{proof} The following example illustrates Theorem \ref{mr1}. \begin{ex} Let $A=\begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 0\\ \end{pmatrix}$. Then $A^{\dagger}=\frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 2 \\ 1 & 0 \end{pmatrix}\geq 0.$\\ Set $P_{1}=\begin{pmatrix} 3 & 0 & 3\\ 0 & 3 & 0\\ \end{pmatrix},$ $R_{1}= \begin{pmatrix} 2 & 0 & 2\\ 0 & 1 & 0\\ \end{pmatrix}$ and $S_{1}= \begin{pmatrix} 0 & 0 & 0\\ 0 & -1 & 0\\ \end{pmatrix}$.\\ $P_{2}=\begin{pmatrix} 4 & 0 & 4\\ 0 & 4 & 0\\ \end{pmatrix},$ $R_{2}= \begin{pmatrix} 2 & 0 & 2\\ 0 & 0 & 0\\ \end{pmatrix}$ and $S_{2}= \begin{pmatrix} -1 & 0 & -1\\ 0 & -3 & 0\\ \end{pmatrix}$.\\ Then $P_{1}^{\dagger}=\frac{1}{6}\begin{pmatrix} 1 & 0 \\ 0 & 2 \\ 1 & 0 \end{pmatrix},$ $P_{1}^{\dagger}R_{1}=\frac{1}{6}\begin{pmatrix} 2 & 0 & 2\\ 0 & 2 & 0\\ 2 & 0 & 2 \end{pmatrix}$ and $P_{1}^{\dagger}S_{1}=\frac{1}{6}\begin{pmatrix} 0 & 0 & 0\\ 0 & -2 & 0\\ 0 & 0 & 0 \end{pmatrix}$.\\ $P_{2}^{\dagger}=\frac{1}{8}\begin{pmatrix} 1 & 0 \\ 0 & 2 \\ 1 & 0 \end{pmatrix},$ $P_{2}^{\dagger}R_{2}=\frac{1}{8}\begin{pmatrix} 2 & 0 & 2\\ 0 & 0 & 0\\ 2 & 0 & 2 \end{pmatrix}$ and $P_{2}^{\dagger}S_{2}=\frac{1}{8}\begin{pmatrix} -1 & 0 & -1\\ 0 & -6 & 0\\ -1 & 0 & -1 \end{pmatrix}$.\\ Note that $A=P_{1}-R_{1}+S_{1}$ is a weak regular proper double splitting and $A=P_{2}-R_{2}+S_{2}$ is a regular proper double splitting. Also, $e\in \mathcal{R}(A)$, $P_{2}^{\dagger}$ has no zero row and $P_{2}P_{2}^{\dagger}\geq 0$. We can verify that $P_{1}^{\dagger}\geq P_{2}^{\dagger}$ and $P_{1}^{\dagger}R_{1}\geq P_{2}^{\dagger}R_{2}.$ Hence $0.7676=\rho(W_{1})\leq \rho(W_{2})=0.8660 < 1$. \end{ex} The following result is an obvious consequence of Theorem 3.5. \begin{cor} (Theorem 3.2, \cite{ss}) \label{cor4} Let $A^{-1}\geq 0$. Let $A=P_{1}-R_{1}+S_{1}$ be a weak regular double splitting and $A=P_{2}-R_{2}+S_{2}$ be a regular double splitting. If $P_{1}^{-1}\geq P_{2}^{-1}$ and any one of the following conditions,\\ (i) $P_{1}^{-1}R_{1}\geq P_{2}^{-1}R_{2}$\\ (ii) $P_{1}^{-1}S_{1}\geq P_{2}^{-1}S_{2}$ \\ holds, then $\rho(W_{1})\leq \rho(W_{2})< 1 $. \end{cor} The proof of the next result is similar to that of Theorem 3.1; thus we skip it. \begin{thm}\label{mr3} Let $A\in \mathbb{R}^{m\times n}$ be such that $A^{\dagger}\geq 0$. Let $A=P_{1}-R_{1}+S_{1}$ and $A=P_{2}-R_{2}+S_{2}$ be weak regular proper double splittings. If $P_{1}^{\dagger}A\geq P_{2}^{\dagger}A$ and any one of the following conditions,\\ (i) $P_{1}^{\dagger}R_{1}\geq P_{2}^{\dagger}R_{2}$\\ (ii) $P_{1}^{\dagger}S_{1}\geq P_{2}^{\dagger}S_{2}$ \\ holds, then $\rho(W_{1})\leq \rho(W_{2})< 1 $. \end{thm}
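As a closing numerical remark (a sketch using \texttt{numpy}, not part of the original derivations), the spectral radii quoted in the two examples above can be reproduced directly from the block form of $W$:
\begin{verbatim}
import numpy as np

def rho_W(P, R, S):
    # Spectral radius of W = [[P^+ R, -P^+ S], [I, 0]].
    Pp = np.linalg.pinv(P)
    n = P.shape[1]
    W = np.block([[Pp @ R, -Pp @ S],
                  [np.eye(n), np.zeros((n, n))]])
    return max(abs(np.linalg.eigvals(W)))

# First example (the converse of Theorem 3.1 fails):
P1 = np.array([[5., -1., 0.], [0., 1., 0.]])
R1 = np.array([[1., 0., 0.], [0., 0., 0.]])
S1 = np.array([[-1., -1., 0.], [-1., 0., 0.]])
P2 = np.array([[3., 0., 0.], [0., 2., 0.]])
R2 = np.array([[0., 1., 0.], [0., 1., 0.]])
S2 = np.array([[0., -1., 0.], [-1., 0., 0.]])
print(rho_W(P1, R1, S1), rho_W(P2, R2, S2))   # ~0.9079, ~0.9158

# Second example (illustrating Theorem 3.5):
P1 = np.array([[3., 0., 3.], [0., 3., 0.]])
R1 = np.array([[2., 0., 2.], [0., 1., 0.]])
S1 = np.array([[0., 0., 0.], [0., -1., 0.]])
P2 = np.array([[4., 0., 4.], [0., 4., 0.]])
R2 = np.array([[2., 0., 2.], [0., 0., 0.]])
S2 = np.array([[-1., 0., -1.], [0., -3., 0.]])
print(rho_W(P1, R1, S1), rho_W(P2, R2, S2))   # ~0.7676, ~0.8660
\end{verbatim}
\newpage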
\section{Introduction} As is known, given an element $f\in\mc H$ and a sequence of elements $\{f_n\}_{n\in\mathbb N}$ in a Hilbert space $\mc H$ endowed with the inner product $\ip{\cdot}{\cdot}$, the sequence $\{a_n\}_{n\in\mathbb N}\subset\mathbb C$, $a_n:=\ip{f}{f_n}$, is called the {\it moment sequence} (briefly, the \textit{moments}) \textit{of} $f\in \mc H$. The problem of finding a solution $f\in\mc H$ of the equations $$ \ip{f}{f_n}=a_n,\quad n\in\mathbb N $$ for given $\{f_n\}_{n\in\mathbb N}$ and $\{a_n\}_{n\in\mathbb N}\subset\mathbb C$, is known as the \textit{moment problem}. In particular, the sequence $\{f_n\}_{n\in\mathbb N}$ is called a \textit{Riesz-Fischer sequence} if, for every $\{a_n\}_{n\in\mathbb N}\in l^2$ (i.e. such that $\sum_1^\infty|a_n|^2<\infty$), there exists a solution $f$ of the moment problem. On the other hand, the sequence $\{f_n\}_{n\in\mathbb N}$ is called a \textit{Bessel sequence} if, for all $f\in\mc H$, one has $\{\ip{f}{f_n}\}_{n\in\mathbb N}\in l^2$. We have the well-known characterization results \cite[Th. 3, Sec. 2]{young}: \begin{itemize} \item $\{f_n\}_{n\in\mathbb N}$ is a Riesz-Fischer sequence if, and only if, there exists $A>0$ such that: \begin{equation} \label{cararf} A\sum_{n=1}^k |c_n|^2\leq \left\|\sum_{n=1}^k c_n f_n\right\|^2 \end{equation} for all finite scalar sequences $\{c_n\}\subset\mathbb C$; \item $\{f_n\}_{n\in\mathbb N}$ is a Bessel sequence if, and only if, there exists $B>0$ such that: \begin{equation} \label{carabess} \left\|\sum_{n=1}^k c_n f_n\right\|^2\leq B\sum_{n=1}^k|c_n|^2 \end{equation} for all finite scalar sequences $\{c_n\}\subset\mathbb C$. \end{itemize} Bessel and Riesz-Fischer sequences play an important role in the theory of frames and, in particular, in the study of Riesz bases \cite{young, christensen2}. Roughly speaking, a frame is an extension of a basis in a Hilbert space, in the sense that every vector of $\mc H$ can be decomposed in terms of elements of a frame, but this decomposition is not unique. This ``loss of constraints'', or ``extra leeway'', allows several applications in many branches of mathematical sciences and technology. More precisely, a sequence $\{f_n\}_{n\in\mathbb N}$ in $\mc H$ is a \textit{frame} if there exist $A,B>0$ such that: $$ A\|f\|^2\leq \sum_{n\in\mathbb N}|\ip{f}{f_n}|^2\leq B\|f\|^2, \quad\forall f\in\mc H . $$ A frame $\{f_n\}_{n\in\mathbb N}$ that is also a basis for $\mc H$ is called a \textit{Riesz basis}. Furthermore, Bessel sequences, Riesz-Fischer sequences and Riesz bases are related via linear operators to orthonormal bases (see \cite{young, christensen2, balastoeva}). Moreover, if $\{f_n\}_{n\in\mathbb N}$ is complete or total (i.e. its linear span is dense in $\mc H$), then it is a Riesz basis if, and only if, it is both a Bessel and a Riesz-Fischer sequence \cite{young}. Unless otherwise specified, a frame is understood to be discrete. However, a notion of \textit{continuous frame} has been introduced in \cite{keiser} and \cite{AAG_book, AAG_paper}, the latter in order to study coherent states. Instead of a sequence, one considers a map $F$ from a measure space $(X,\mu)$ ($\mu$ a positive measure) to a Hilbert space $\mc H$, i.e. $F: X\rightarrow \mc H$, $F: x\mapsto F_x$. This map is called a continuous frame with respect to $(X,\mu)$ if: \begin{itemize} \item $F$ is weakly measurable, i.e. $x\mapsto \ip{f}{F_x}$ is $\mu$-measurable for every $f\in \mc H$; \item there exist $A, B>0$ such that: $$ A\|f\|^2\leq \int_X|\ip{f}{F_x}|^2d\mu\leq B\|f\|^2, \quad\forall f\in\mc H .
$$ \end{itemize} In \cite{TTT}, C. Trapani, S. Triolo and the author extended the notion of frames, and related notions such as Bessel sequences, bases, Riesz bases, etc., to spaces of distributions; in \cite{FT} and \cite{corsotsch} a further study has been carried out for Riesz-Fischer sequences and multipliers, respectively. An appropriate framework for spaces of distributions is given by the \textit{rigged Hilbert space}, i.e. a triple $\mc D\subset\mc H\subset \mc D^\times$, where $\mc D$ is a locally convex space and $\mc D^\times$ its conjugate dual, and where the inclusions are continuous and dense embeddings. Rigged Hilbert spaces were introduced by Gel'fand in \cite{gelf3, gelf} with the aim of defining the \textit{generalized eigenvectors} of an essentially self-adjoint operator on $\mc D$ and of proving the theorem known as the Gel'fand-Maurin theorem, on the existence of a complete system of generalized eigenvectors (see also \cite{gould}). For this reason it is also called a \textit{Gel'fand triple}, and is denoted by $(\mc D,\mc H,\mc D^\times)$. However, Gel'fand triples play a relevant role also in other branches of mathematics, such as Gabor analysis: see for example \cite{cordero,feich2,feich3}. More exhaustive references, papers and talks can be found at the site www.nuhag.eu/talks. Reconsidering the Riesz-Fischer maps introduced in \cite{FT}, the aim of this paper is to continue their study, proving characterizing conditions analogous to those for Riesz-Fischer sequences, which are characterized by the inequality (\ref{cararf}). The paper is organized as follows. In Section 2 we recall some preliminaries, definitions and previous results. In Section 3 we prove some characterizations of Riesz-Fischer maps in terms of lower bound properties. \section{Preliminary definitions and facts} As usual, we denote by $\mc H$ a Hilbert space, by $\ip{\cdot}{\cdot}$ its inner product and by $\|\cdot\|$ the Hilbert norm. Let $\mc D$ be a dense subspace of $\mc H$ endowed with a locally convex topology $t$ stronger than the topology induced by the Hilbert norm. The embedding of $\mc D$ in $\mc H$ is then continuous and dense, and it is denoted by $\mc D\hookrightarrow\mc H$. The space of continuous conjugate linear forms on $\mc D$ is called the \textit{conjugate dual of} $\mc D$ and it is denoted by $\mc D^\times$. Unless otherwise stated, the value of $F\in\mc D^\times$ on $f\in\mc D$ is denoted by $\ip{f}{F}$. The space $\mc D^\times$ is endowed with the \textit{strong dual topology} $t^\times=\beta(\mc D^\times,\mc D)$ defined by the set of seminorms: \begin{equation}\label{semin_Dtimes} p_\mc M(F)=\sup_{g\in \mc M}|\ip{g}{F}|, \quad F\in \mc D^\times, \end{equation} where $\mc M$ is a bounded subset of $\mc D[t]$. In this way, the Hilbert space $\mc H$ can be continuously embedded as a subspace of $\mc D^\times$ (see \cite{horvath}). If $\mc D$ is reflexive, i.e. $\mc D^{\times\times}=\mc D$, the embedding is dense. We obtain the Gel'fand triple: \begin{equation}\label{eq_one_intr} \mc D[t] \hookrightarrow \mc H \hookrightarrow\mc D^\times[t^\times], \end{equation} where $\hookrightarrow$ denotes a continuous and dense embedding. The sesquilinear form $\ip{\cdot}{\cdot}$ that puts $\mc D$ and $\mc D^\times$ in duality is an extension of the inner product of $\mc H$, and the notation is the same. We put $\ip{F}{f}:=\overline{\ip{f}{F}}$. Throughout the paper, $(X,\mu)$ is a measure space, where $\mu$ is a $\sigma$-finite positive measure. We write $L^1(X,\mu), L^2(X,\mu),\dots$ for the usual spaces of measurable functions.
In the case where $X=\mathbb R$ and $\mu$ is the Lebesgue measure, we denote them by $L^p(\mathbb R)$. Furthermore, $\mathcal S$ stands for the \textit{Schwartz space}, i.e. the space of infinitely differentiable and rapidly decreasing functions on $\mathbb R$. The conjugate dual of $\mathcal S$, denoted by $\mathcal S^\times$, is known as the space of \textit{tempered distributions} (see \cite{reed1} for more accurate definitions). A usual example of a rigged Hilbert space is given by: $$ \mathcal S\hookrightarrow L^2(\mathbb R)\hookrightarrow\mathcal S^\times. $$ The vector space of all continuous linear maps from $\mc D[t]$ into $\mc D^\times[t^\times]$ is denoted by ${\mc L}(\D,\D^\times)$. If $\mc D[t]$ is barreled (e.g.~reflexive), an involution $X \mapsto X^\dag$ can be introduced in ${\mc L}(\D,\D^\times)$ by: \begin{equation} \label{eq: X dag}\ip{X^\dag \eta}{ \xi} = \overline{\ip{X\xi}{\eta}}, \quad \forall \xi, \eta \in \mc D. \end{equation} Hence, in this case, ${\mc L}(\D,\D^\times)$ is a $^\dagger$-invariant vector space.\\ In this paper we consider maps with values in a distribution space, defined in \cite{TTT} as \textit{weakly measurable maps}, and here denoted by $\omega$. The definition extends the notion, previously recalled in the introduction, of weakly measurable functions considered for continuous frames (see \cite{AAG_paper}). \begin{defn}$\!\!${\bf }$\;$\rm The correspondence $\omega:X\rightarrow \mc D^\times$, $x\mapsto \omega_x$ is called a \textit{weakly measurable map} if the complex valued function $x\mapsto\ip{f}{\omega_x}\in\mathbb C$ is $\mu$-measurable for all $f\in\mc D$. \end{defn} In particular, the notions of completeness and independence of sequences in a Hilbert space are extended to spaces of distributions by the following: \begin{defn}$\!\!${\bf }$\;$\rm \label{tandg} Let $\omega: x\in X\to \omega_x \in \mc D^\times$ be a weakly measurable map. Then: \begin{itemize} \item[i)] $\omega$ is \textit{total} or {\it complete} if $f \in \mc D $ and $\ip{f}{\omega_x}=0$ $\mu$-a.e. $x \in X$ imply $f=0$; \item[ii)] $\omega$ is \textit{$\mu$-independent} if the unique measurable function $\xi:X\rightarrow \mathbb C$ such that $\int_X \xi(x)\ip{g}{\omega_x} d\mu=0$ for every $g \in \mc D$ is $\xi(x)=0$ $\mu$-a.e. \end{itemize} \end{defn} Let us recall the notion of \textit{Bessel map}: \begin{defn}$\!\!${\bf }$\;$\rm A weakly measurable map $\omega$ is a {\em Bessel distribution map} (briefly: Bessel map) if for every $f \in \mc D$, $ \int_X |\ip{f}{\omega_x}|^2d\mu<\infty$. \end{defn} As a consequence of the closed graph theorem, if $\mc D$ is a Fr\'echet space one has the following characterization result for Bessel maps: \begin{prop}[{\cite[Proposition 3.1]{TTT}}]\label{prop2} Let $\mc D[t]$ be a Fr\'echet space, and $\omega: x\in X \to \omega_x\in \mc D^\times$ a weakly measurable map. The following statements are equivalent. \begin{itemize} \item[(i)] $\omega$ is a Bessel map; \item[(ii)] there exists a continuous seminorm $p$ on $\mc D[t]$ such that: \begin{equation*}\label{eqn_bessel1}\left( \int_X |\ip{f}{\omega_x}|^2d\mu\right)^{1/2}\leq p(f), \quad \forall f \in \mc D;\end{equation*} \item[(iii)] for every bounded subset $\mathcal M$ of $\mc D$ there exists $C_{\mathcal M}>0$ such that: \begin{equation*}\label{eqn_bessel2} \sup_{f\in\mathcal M}{\Bigl |}\int_X\xi(x)\ip{\omega_x}{f}d\mu{\Bigr |}\leq C_{\mathcal M}\|\xi\|_2, \quad \forall \xi\in L^2(X,\mu).
\end{equation*} \end{itemize} \end{prop} The previous proposition has the following consequences \cite{TTT}: \begin{itemize} \item If $\xi\in L^2(X,\mu)$, then the conjugate linear functional $ {\Lambda^\xi_\omega}$ on $\mc D$ defined by: \begin{equation} \label{functlambda} \ip{f} {\Lambda^\xi_\omega}:=\int_X\xi(x)\ip{f}{\omega_x}d\mu,\quad\forall f\in\mc D \end{equation} is well defined and continuous, i.e. $\Lambda_\omega^\xi\in \mc D^\times[t^\times]$; \item the {\em synthesis operator} $D_\omega:L^2(X, \mu)\to \mc D^\times[t^\times]$ defined by $ D_\omega: \xi \mapsto {\Lambda^\xi_\omega}$ is continuous; \item the {\em analysis operator} $C_\omega: \mc D[t]\to L^2(X, \mu)$ defined by $(C_\omega f)(x) =\ip{f}{\omega_x}$ is continuous; \item the {\it frame operator} $S_\omega:\mc D\rightarrow\mc D^\times$, $S_\omega:=D_\omega C_\omega$, is continuous, i.e. $S_\omega\in{\mc L}(\D,\D^\times)$. \end{itemize} \begin{rem} If $\mc D$ is a Fr\'echet space, a Bessel map $\omega$ is upper bounded by a continuous seminorm on $\mc D$ but, in general, it is not upper bounded by the Hilbert norm. An example, considered in \cite{FT}, is the system of derivatives of Dirac deltas on $\mathcal S\hookrightarrow L^2(\mathbb R)\hookrightarrow \mathcal S^\times$, denoted by $\{\delta'_x\}_{x\in\mathbb R}$ and defined by $\ip{f}{\delta'_x}:=-f'(x)$. Then $\{\delta'_x\}_{x\in\mathbb R}$ is a Bessel map, but it is not upper bounded by the Hilbert norm. In \cite{TTT} the notion of \textit{bounded Bessel map} is defined, i.e. a Bessel map $\omega$ for which there exists $B>0$ such that $\int_X|\ip{f}{\omega_x}|^2d\mu\leq B\|f\|^2$ for all $f\in\mc D$. In particular, if $\omega$ is also total and there exists $B>0$ such that $0<\int_X|\ip{f}{\omega_x}|^2d\mu\leq B\|f\|^2$ for all $f\neq 0$, then $\omega$ is called a \textit{distribution upper semiframe} \cite{FT}, as an extension to spaces of distributions of the corresponding notion of \textit{continuous upper semiframe} introduced in \cite{semifr1}. \end{rem} If a bounded Bessel map $\omega$ is also bounded from below by the Hilbert norm, we have the definition of a \textit{distribution frame}: \begin{defn}$\!\!${\bf }$\;$\rm \cite[Definition 3.6]{TTT} \label{defn_distribframe} Let $\mc D[t] \hookrightarrow\mc H\hookrightarrow\mc D^\times[t^\times]$ be a rigged Hilbert space, with $\mc D[t]$ a reflexive space, and let $\omega$ be a Bessel map. We say that $\omega$ is a {\em distribution frame} if there exist $A,B>0$ such that: \begin{equation*} \label{eqn_frame_main1} A\|f\|^2 \leq \int_X|\ip{f}{\omega_x}|^2d\mu \leq B \|f\|^2, \quad \forall f\in \mc D. \end{equation*} \end{defn} For a distribution frame $\omega$ we have that (see \cite{TTT} for details): $\Lambda_\omega^\xi$ is bounded on $(\mc D,\|\cdot\|)$ and its bounded extension to $\mc H$ is denoted by $\widetilde{\Lambda}_\omega^\xi$; the synthesis operator $D_\omega$ has range in $\mc H$ and it is bounded; the Hilbert adjoint $D^*_\omega$ extends $C_\omega$ to $\mc H$; the operator $\tilde{S}_\omega=D_\omega D_\omega^*$ is bounded and extends the frame operator $S_\omega$. Moreover, the extended frame operator $\tilde{S}_\omega$ enjoys the inequality $$ A\|f\| \leq \| \tilde{S}_\omega f\| \leq B\|f\|,\quad \forall f\in \mc H. $$ Since $\tilde{S}_\omega$ is symmetric, this implies that $\tilde{S}_\omega$ has a bounded inverse $\tilde{S}_\omega^{-1}$ everywhere defined in $\mc H$.
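A simple illustration, in the spirit of the examples of \cite{TTT}, is given by the Dirac deltas $\{\delta_x\}_{x\in\mathbb R}$ on $\mathcal S\hookrightarrow L^2(\mathbb R)\hookrightarrow\mathcal S^\times$, defined by $\ip{f}{\delta_x}:=f(x)$. For every $f\in\mathcal S$ one has $$ \int_{\mathbb R}|\ip{f}{\delta_x}|^2dx=\int_{\mathbb R}|f(x)|^2dx=\|f\|^2, $$ so that $\{\delta_x\}_{x\in\mathbb R}$ is a distribution frame with bounds $A=B=1$, whereas, as recalled above, the derivatives $\{\delta'_x\}_{x\in\mathbb R}$ admit no upper bound of this kind.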
In \cite{FT} Riesz-Fischer maps in spaces of distributions are defined. They are the analogue of the corresponding sequences in Hilbert spaces; an extension to the continuous case is given in \cite{rahimi}. \begin{defn}$\!\!${\bf }$\;$\rm \cite[Definition 3.4]{FT} Let $\mc D[t]$ be a locally convex space. A weakly measurable map $\omega: x\in X \mapsto \omega_x\in \mc D^\times$ is called a {\em Riesz-Fischer distribution map} (briefly: Riesz-Fischer map) if, for every $h\in L^2({X,\mu})$, there exists $f \in \mc D$ such that: \begin{equation} \label{rf} \ip{f}{\omega_x}=h(x)\quad \mbox{$\mu$-a.e.} \end{equation} In this case, we say that $f$ is a solution of the equation $\ip{f}{\omega_x}=h(x)$. \end{defn} Clearly, if $f_1$ and $f_2$ are solutions of (\ref{rf}), then $f_1-f_2\in\omega^\bot:=\{g\in \mc D: \ip{g}{\omega_x}=0\ \mu\mbox{-a.e.}\}$. If $\omega$ is total, the solution is unique. The analysis operator $C_\omega$ is defined on $dom(C_\omega):=\{f\in\mc D: \ip{f}{\omega_x}\in L^2(X,\mu)\}$ as $C_\omega: f\in dom(C_\omega)\mapsto\ip{f}{\omega_x}\in L^2(X,\mu)$. Clearly, $\omega$ is a Riesz-Fischer map if and only if $C_\omega: dom(C_\omega)\rightarrow L^2(X,\mu)$ is surjective. If $\omega$ is total, $C_\omega$ is injective too; hence, in this case, $C_\omega$ is invertible. To define the synthesis operator $D_\omega$ we consider the following subset of $L^2(X,\mu)$: $$dom(D_\omega):=\{\xi\in L^2(X,\mu)\ \mbox{s.t.} \, \int_X\xi(x){\omega_x}d\mu\, \mbox{is convergent in}\,\, \mc D^\times\}.$$ By \textit{convergent in} $\mc D^\times$ we mean that $\int_X\xi(x)\ip{f}{\omega_x}d\mu$ converges for all $f\in\mc D$ and that the conjugate linear functional $\Lambda_\omega^\xi$ on $\mc D$ defined in (\ref{functlambda}) is continuous, so that $\Lambda_\omega^\xi\in\mc D^\times$. Then the synthesis operator $D_\omega: dom(D_\omega)\rightarrow \mc D^\times$ is defined by: $$D_\omega: \xi\mapsto \Lambda_\omega^\xi:=\int_X\xi(x)\omega_xd\mu.$$ The range of $D_\omega$ is denoted by $Ran (D_\omega)$: $$ Ran (D_\omega):=\Bigl{\{ }F\in\mc D^\times: \exists\, \xi\in dom(D_\omega):\, \forall f\in\mc D,\, \ip{f}{F}:=\int_X\xi(x)\ip{f}{\omega_x}d\mu\Bigr{\}}. $$ If $\mc D$ is a Fr\'echet space, as a consequence of the closed graph theorem the following inequality holds for a total Riesz-Fischer map: \begin{cor}\cite[Corollary 3.7]{FT} \label{lbound} Assume that $\mc D[t]$ is a Fr\'echet space. If the map $\omega: x\in X \to \omega_x\in \mc D^\times$ is a total Riesz-Fischer map, then for every continuous seminorm $p$ on $\mc D$ there exists a constant $C>0$ such that, for the solution $f$ of \eqref{rf}: $$ {p}({f})\leq C\|\ip{f}{\omega_x}\|_2. $$ \end{cor} It follows that, if $\omega$ is a total Riesz-Fischer map, the inverse of the analysis operator $C_\omega^{-1}:L^2(X,\mu)\rightarrow\, dom(C_\omega)$ is continuous.
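The Dirac deltas considered above also show that boundedness and totality do not imply the Riesz-Fischer property; the following elementary verification is reported here only as an illustration. For $\omega_x=\delta_x$ on $\mathcal S\hookrightarrow L^2(\mathbb R)\hookrightarrow\mathcal S^\times$ one has $C_\omega f=f$, so $Ran(C_\omega)=\mathcal S\subsetneq L^2(\mathbb R)$ and $C_\omega$ is not surjective: for $h=\chi_{[0,1]}$ there is no $f\in\mathcal S$ with $f(x)=h(x)$ a.e., since a continuous function coinciding a.e. with $\chi_{[0,1]}$ should have both one-sided limits $0$ and $1$ at $x=0$. Hence $\{\delta_x\}_{x\in\mathbb R}$ is a total bounded Bessel map (indeed a distribution frame) which is not a Riesz-Fischer map.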
\section{Main results} In this section we prove some characterization properties of Riesz-Fischer maps. We have the following: \begin{prop} \label{propdisrf} Let $(X,\mu)$ be a measure space, $h\in L^2(X,\mu)$ and $\omega: X\ni x\mapsto\omega_x\in\mc D^\times$ a weakly measurable map. Then $\omega$ is a Riesz-Fischer map if, and only if, there exists a bounded subset $\mathcal M\subset\mc D$ such that: \begin{equation} \label{disrf} {\Bigl |}\int_X\xi(x)\overline{h(x)}d\mu{\Bigl |}\leq\sup_{f\in\mathcal M}{\Bigl |}\int_X\xi(x)\ip{\omega_x}{f}d\mu{\Bigl |} \end{equation} for all $\xi\in L^2(X,\mu)$ such that $\int_X\xi(x){\omega_x}d\mu$ is convergent in $\mc D^\times$. \end{prop} \begin{proof} Necessity is obvious: let $\bar{f}$ be a solution of (\ref{rf}); then, for all such $\xi$, one has: $$ {\Bigl |}\int_{X}\xi(x)\overline{h(x)}d\mu{\Bigl |}={\Bigl |}\int_{X}\xi(x)\ip{\omega_x}{\bar{f}}d\mu{\Bigl |}\leq\sup_{f\in\mathcal M}{\Bigl |}\int_{X}\xi(x)\ip{\omega_x}{f}d\mu{\Bigl |}. $$ Sufficiency: let us consider the subspace $\mathcal E\subset\mc D^\times$ consisting of the $F\in\mc D^\times$ for which there exists $\xi\in L^2(X,\mu)$ with $F=\int_X\xi(x)\omega_xd\mu$, and let us define the linear functional $\nu$ on $\mathcal E$ by: $$ \nu(F)=\nu{\Bigl (}\int_{X}\xi(x)\omega_xd\mu{\Bigr )}:=\int_{X}\xi(x)\overline{h(x)}d\mu. $$ It follows immediately from the hypothesis that $\nu$ is defined unambiguously. Furthermore, from (\ref{disrf}) one has: $$ {\bigl |}\nu(F){\bigl |}={\Bigl |}\int_{X}\xi(x)\overline{h(x)}d\mu{\Bigl |}\leq\sup_{f\in\mathcal M}{\Bigl |}\int_{X}\xi(x)\ip{\omega_x}{f}d\mu{\Bigl |}=\sup_{f\in\mathcal M}|\ip{f}{F}|, \quad \forall F\in \mathcal E, $$ i.e. $\nu$ is bounded by a seminorm of $\mc D^\times[t^\times]$. By the Hahn-Banach theorem, there exists a continuous extension $\widetilde{\nu}$ of $\nu$ to $\mc D^\times$. Since $\mc D$ is reflexive, there exists $\bar f\in\mc D$ such that $\widetilde{\nu}(F)=\ip{F}{\bar f}$. Since: $$ \int_X\xi(x)[\ip{\omega_x}{\bar f}-\overline{h(x)}]d\mu=\nu(F)-\int_X \xi(x)\overline{h(x)}d\mu= 0,\quad \forall\, \xi\in dom(D_\omega), $$ then $\ip{\bar f}{\omega_x}= h(x)$ $\mu$-a.e. \end{proof} As a consequence, we have the following: \begin{cor} \label{corodisRF} $\omega$ is a Riesz-Fischer map if, and only if, there exists a bounded subset ${\mathcal M}\subset\mc D$ such that: \begin{equation} \|\xi\|_2\leq \sup_{f\in\mathcal M}{\Bigl |}\int_X\xi(x)\ip{f}{\omega_x}d\mu{\Bigl |}, \label{disRF} \end{equation} for all $\xi\in L^2(X,\mu)$ such that $\int_X\xi(x){\omega_x}d\mu$ is convergent in $\mc D^\times$. \end{cor} \begin{proof} Sufficiency: assume that condition (\ref{disRF}) holds and let $h\in L^2(X,\mu)$ with $\|h\|_2\leq 1$. Then: $$ {\Bigl |}\int_X\xi(x)\overline{h(x)}d\mu{\Bigl |}\leq\|\xi\|_2\leq \sup_{f\in\mathcal M}{\Bigl |}\int_X\xi(x)\ip{f}{\omega_x}d\mu{\Bigl |}. $$ Hence, by the previous proposition, $\omega$ is a Riesz-Fischer map.\\ Necessity: since $\omega$ is a Riesz-Fischer map, putting $h(x)=\frac{\xi(x)}{\|\xi\|_2}$, by the previous proposition there exists a bounded subset $\mathcal{M}\subset \mc D$ such that: $$ \|\xi\|_2=\int_X\xi(x)\frac{\overline{\xi(x)}}{\|\xi\|_2}d\mu\leq \sup_{f\in\mathcal M}{\Bigl |}\int_X\xi(x)\ip{f}{\omega_x}d\mu{\Bigl |}. $$ \end{proof} The previous corollary can be rephrased as: \begin{cor} $\omega$ is a Riesz-Fischer map if, and only if, the synthesis operator $D_\omega$ is invertible and the inverse $D_\omega^{-1}: Ran ({D_\omega})\rightarrow L^2(X,\mu)$ is continuous. \end{cor} \section{Conclusions} The inequalities in Proposition \ref{prop2}(iii) and in Corollary \ref{corodisRF} are extensions to rigged Hilbert spaces of the inequalities (\ref{carabess}) and (\ref{cararf}), respectively, for sequences in Hilbert spaces (see \cite[Th.~2, Th.~3, Sec.~2]{young}).
In the case of sequences, it follows immediately that: if $\{e_n\}_{n\in\mathbb N}$ is an orthonormal basis of $\mc H$, then $\{f_n\}_{n\in\mathbb N}$ is a Bessel sequence if, and only if, there exists a bounded operator $T:\mc H\rightarrow \mc H$ such that $f_n=T e_n$; $\{f_n\}_{n\in\mathbb N}$ is a Riesz-Fischer sequence if, and only if, there exists a bounded operator $V:\mc H\rightarrow\mc H$ such that $V f_n = e_n$ (for frames and Riesz bases see also \cite[Proposition 4.6]{balastoeva}). Since orthonormality is not defined in spaces of distributions, the role of an ``orthonormal basis'' is played by the \textit{Gel'fand basis}: see \cite{TTT} and \cite[Definition 5.3]{FT}. So, it would be appropriate to carry on the further study, started in \cite{TTT}, of the transformations between Gel'fand bases, Bessel maps, Riesz-Fischer maps, distribution frames, and Riesz distribution bases. \section*{Acknowledgments} This work has been realized within the activities of the Gruppo UMI Teoria dell'Approssimazione e Applicazioni and of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). \bibliographystyle{amsplain}
\section{Introduction} The task of video object matting is to compute temporally coherent alpha mattes for a foreground video object at each frame. It is a fundamental task for many video editing applications, \textit{e.g.}~compositing the foreground object into new background videos. The resulting alpha mattes represent the fractional opacity (between 0 and 1) of pixels. Such opacity mainly comes from the transparency or the partial coverage of background pixels around the foreground object boundaries. Specifically, the matting problem tries to solve for three types of unknowns at each pixel, i.e., the foreground color $F$, the background color $B$, and the alpha value $\alpha$, based on the measured pixel color $C$, where $C = \alpha F+(1-\alpha)B$. Moreover, to facilitate image and video matting, a trimap~\cite{wang2008image} is usually required to separate an image into the foreground region (FR), the background region (BR), and the unknown region (UR). Here UR covers partial or transparent foreground object boundaries. \begin{figure}[t] \begin{center} \setlength{\tabcolsep}{1pt} \resizebox{0.9\linewidth}{!}{ \begin{tabular}{cccc} \rotatebox{90}{\scriptsize{Frames}} & \includegraphics[width=0.3\columnwidth]{Teaser/00093_im.jpg} & \includegraphics[width=0.3\columnwidth]{Teaser/00094_im.jpg} & \includegraphics[width=0.3\columnwidth]{Teaser/00095_im.jpg} \\ \rotatebox{90}{\scriptsize{Trimap}} & \includegraphics[width=0.3\columnwidth]{Teaser/00093_tri.jpg} & \includegraphics[width=0.3\columnwidth]{Teaser/00094_tri.jpg} & \includegraphics[width=0.3\columnwidth]{Teaser/00095_tri.jpg} \\ \rotatebox{90}{\scriptsize{GCA~\cite{li2020natural}}} & \includegraphics[width=0.3\columnwidth]{Teaser/00093_single.jpg} & \includegraphics[width=0.3\columnwidth]{Teaser/00094_single.jpg} & \includegraphics[width=0.3\columnwidth]{Teaser/00095_single.jpg} \\ \rotatebox{90}{\scriptsize{Ours}} & \includegraphics[width=0.3\columnwidth]{Teaser/00093_ours.jpg} & \includegraphics[width=0.3\columnwidth]{Teaser/00094_ours.jpg} & \includegraphics[width=0.3\columnwidth]{Teaser/00095_ours.jpg} \\ & \#94 & \#95 & \#96 \end{tabular} } \end{center} \vspace{-10pt} \caption{A video matting result comparison using an Internet video clip ``plant''. Trimaps are generated using our trimap generation method. Red, blue, and green correspond to FR, BR, and UR, respectively. ``\#'' denotes the frame number. Our method is capable of generating a more temporally coherent result compared to GCA\protect~\cite{li2020natural}, an image matting network. Please see the supplementary video for the complete result.} \label{fig:teaser} \vspace{-10pt} \end{figure} Video object matting is related to image matting in the sense that each frame of the matting output essentially solves the corresponding image matting problem. The matting problem is challenging since the number of unknowns exceeds the number of measured colors. Thus, it is critical to build priors to constrain the solution space~\cite{aksoy2017designing,chen2013knn,Chuang2001ABA,levin2006closed}. State-of-the-art (SOTA) image matting algorithms typically build on convolutional neural networks (CNNs). They improve the image matting results significantly by learning multi-scale features to predict alpha values for pixels in the UR~\cite{Cai_2019_ICCV,chen2018tomnet,cho2016natural,Hou_2019_ICCV,DBLP:conf/bmvc/LutzAS18,Tang_2019_CVPR,xu2017deep}.
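To make the compositing model $C = \alpha F+(1-\alpha)B$ concrete, the following minimal NumPy sketch composites a matted foreground onto a new background; it is given for illustration only, and the function name, array layouts and value ranges are our own assumptions rather than part of any released code: \begin{verbatim}
import numpy as np

def composite(fg, bg, alpha):
    """Composite a foreground onto a new background with an alpha matte.

    fg, bg: float arrays of shape (H, W, 3) in [0, 1];
    alpha:  float array of shape (H, W) in [0, 1].
    """
    a = alpha[..., None]            # broadcast alpha over the color channels
    return a * fg + (1.0 - a) * bg  # C = alpha * F + (1 - alpha) * B
\end{verbatim}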
Given an input video clip and its corresponding trimap for each frame, one can perform video matting with any image matting method by processing each video frame independently. However, this approach may lead to temporal incoherence in the obtained alpha mattes (\textit{e.g.}~flickering, shown in the third row of Figure~\ref{fig:teaser}). To improve temporal coherence, existing video matting methods exploit temporal correspondence between video frames, such as optical flow, to construct multi-frame alpha or color priors, or compute temporal affinities to incorporate motion cues~\cite{Apostoloff2004,Choi2012,Chuang:2002,Li_2013_ICCV,Dongqing2020}. However, they rely on local color distributions as main features and may suffer from motion ambiguities at transparent pixels, resulting in flickering or blocky artifacts in the matting results. This paper proposes a novel CNN-based video object matting method that achieves temporally coherent results. Its essential component is a simple yet effective attention-based temporal aggregation module (TAM) that can be seamlessly combined with SOTA image matting networks, such as guided contextual attention (GCA)~\cite{li2020natural}, index matting (Index)~\cite{Lu_2019_ICCV} and deep image matting (DIM)~\cite{xu2017deep}, extending them into video matting networks. This simple design maximizes the strength of image matting networks and yields a substantial performance boost for video matting, especially on temporal-related metrics. We leverage the widely used attention mechanism to compute the temporal attention weights~\cite{vaswani2017attention,wang2018non} for pairs of pixels adjacent to each other along the time axis. Conceptually, these weights are analogous to the non-local, temporal affinity values used in traditional affinity-based video matting methods~\cite{Choi2012,eisemann2009spectral}. However, the attention weights are computed using high-dimensional features rather than local color and motion features. Moreover, we design a novel target affinity term to supervise the learning of attention weights. The ground-truth values of this term are automatically derived from the alpha values and used in a binary cross-entropy loss to guide the training. Such a design significantly improves our method's robustness against noise due to video compression, appearance change and motion. As shown in Figure~\ref{fig:teaser}, our method (the last row) generates a much more temporally coherent result. Another challenge is generating trimaps for an input video clip to fulfill the task of video object matting. To this end, we propose to train the space-time memory network (STM)~\cite{oh2019video}, a semi-supervised video object segmentation (VOS) network, to segment each frame into FR, BR and UR. It only requires the user to annotate trimaps of a target object at several keyframes, usually three to five frames for a video clip of around 200 frames in our experiments, which enhances the efficiency of video matting significantly. To handle the large variations of user-annotated keyframe trimaps, we perform online fine-tuning on the STM network. We then ensemble the bidirectional prediction results to improve the quality of the generated trimaps. In summary, the main contributions of this work are: \begin{itemize} \setlength{\parskip}{0pt} \setlength{\itemsep}{3pt} \item We propose a temporal aggregation module that can be integrated into image matting networks to achieve temporally coherent video matting results.
It leverages the attention mechanism to compute temporal affinity values in the feature space, resulting in a matting method robust to challenging videos featuring appearance change, occlusion, and fast motion. \item We propose an STM-based trimap generation method that greatly enhances the efficiency of video matting. The user only needs to annotate trimaps at several keyframes to generate trimaps for every video frame. \item To enable the training of the video object matting and trimap generation networks, we construct a video object matting dataset, termed VideoMatting108, that covers various objects and different types of motions. In total, our dataset has 108 foreground video clips with ground-truth alpha mattes, all in 1080p resolution, averaging 821 frames per clip. The dataset will be made publicly available. \end{itemize} \section{Related Works} \noindent \textbf{Image matting.} The sampling-based image matting methods~\cite{Chuang2001ABA,Feng2016ACS,gastal2010shared, he2011global,Ruzon2000Alpha} build the FR and BR color priors using the sampled pixels to infer the alpha values, while the affinity-based methods~\cite{ aksoy2017designing,aksoy2018semantic,Xue2009AGF,chen2013knn,grady2005random,levin2006closed,levin2008spectral,Sun2004Poisson} propagate the alpha values from the known FR and BR pixels to the UR pixels based on affinity scores and have proven to be robust when dealing with complex images~\cite{Chuang2001ABA,gastal2010shared,Ruzon2000Alpha}. The deep learning-based matting methods usually train a convolutional encoder-decoder neural network to predict alpha values or foreground/background colors with user-specified trimaps~\cite{Cai_2019_ICCV,chen2018tomnet,cho2016natural,Hou_2019_ICCV,li2020natural,Lu_2019_ICCV,DBLP:conf/bmvc/LutzAS18,Tang_2019_CVPR,zhou2020attention}. Recently, ``trimap-free'' image matting methods have also received much attention as they do not require user annotation. Some of these methods use other forms of prior information instead of trimaps, \textit{e.g.}~a background image~\cite{sengupta2020background}, or a rough segmentation map or coarse alpha matte~\cite{yu2020mask}. Others do not use any prior at all~\cite{Zhang_2019_CVPR,liu2020boosting,Qiao_2020_CVPR}. The most widely used image matting dataset in this line of research is provided by Xu \textit{et al.}~\cite{xu2017deep}, and a larger dataset was proposed recently by Qiao \textit{et al.}~\cite{Qiao_2020_CVPR}. \noindent \textbf{Video matting.} The central problem of video matting is how to obtain temporally coherent alpha mattes. Chuang \textit{et al.}~\cite{Chuang:2002} proposed to interpolate manually specified trimaps at keyframes using optical flow, estimate the background pixels, and then perform Bayesian matting at each frame with the estimated background. Motion cues and prior distributions for alpha values and multi-frame colors are widely used in video matting~\cite{Apostoloff2004,Shum:2004:PLF,Alphaflow2013,Xiao2005Layer}. In~\cite{Choi2012,eisemann2009spectral,LEE201025,Li_2013_ICCV}, spatio-temporal edges between pixels are constructed to compute the alpha mattes for all video frames simultaneously. These methods extend affinity-based methods to video matting, which is time-consuming due to the fast-growing size of the Laplacian matrix. Zou \textit{et al.}~\cite{Dongqing2020} proposed to select non-local neighbors through sparse coding to constrain pixels having similar features in different frames to obtain similar alpha values.
Besides, depth information can help to construct trimaps and differentiate between pixels of similar colors~\cite{ZhuJointDepthandMatte2009}. Hardware-assisted methods in~\cite{Joshi:2006:NVM,McGuire:2005:DVM} automatically generate and propagate trimaps in all video frames and optimize for high-quality alpha mattes. Recently, CNN-based video matting methods~\cite{lin2020realtime,ke2020green} have gained much attention. However, these methods do not explicitly enforce temporal consistency between video frames during training, which may lead to temporally incoherent results. Moreover, they are restricted to a single type of foreground object (\textit{e.g.}~portrait) or require near static backgrounds. Our method has neither of these limitations. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{Figs/Fig1.pdf} \caption{The flowchart of our method during training. ``OS'' denotes output stride. We do not show encoder-decoder skip connections for clarity. All networks and modules share the same weights across different frames.} \label{fig:flowchart} \end{figure*} \noindent \textbf{Attention mechanisms in segmentation and matting.} The attention mechanism provides an effective way for neural networks to reinforce correlated features and suppress feature noise, leading to a performance boost in segmentation. There are two main variations of this mechanism. One is the channel-wise self-attention pioneered by Hu \textit{et al.}~\cite{hu2018squeeze}. Given an input feature tensor, it leverages global average pooling and a fully-connected layer to infer a channel-wise weight vector to modulate feature maps. The other is the non-local block proposed by Wang \textit{et al.}~\cite{wang2018non}. It computes the spatiotemporal correlation as the attention, effectively reinforcing the consistency of feature maps. The channel-wise attention approach is widely adopted in image segmentation~\cite{yu2018bisenet,yu2018learning,zhang2018context}. Many methods~\cite{fu2019dual,Yu-ECCV-RepGraph-2020,yu2020context,DBLP:conf/eccv/YuanCW20,zhao2018psanet} exploit variants of non-local attention modules to capture spatial long-range dependencies. For image matting tasks, the attention mechanism is mostly used for fusing high- and low-level image features. Qiao \textit{et al.}~\cite{Qiao_2020_CVPR} adopted both channel-wise and spatial attention for trimap-free matting, since high-level image features are the key to recognizing a foreground object. GCA~\cite{li2020natural} also utilizes high-level image features as the attention map to guide the low-level alpha features, achieving SOTA performance in image matting. We thus employ GCA as one of the base matting network structures. Several recent VOS methods also utilize the attention mechanism to fuse features from different video frames to improve temporal consistency~\cite{oh2019fast,Voigtlaender_2019_CVPR}. Oh \textit{et al.}~\cite{oh2019video} extended the memory network approach used in NLP to VOS, which is also a variation of the spatiotemporal attention mechanism. Yang \textit{et al.}~\cite{yang2020collaborative} extended this idea by matching both the foreground and background with multi-scale features in those frames, achieving SOTA performance. Our method also leverages the attention mechanism for temporally coherent matting. Nevertheless, our attention module is bi-directional, and we use additional temporal loss terms to supervise the network training.
\noindent \textbf{Temporal coherence.} One standard solution to temporal coherence is the temporal smoothing filter, which considers spatially and temporally adjacent pixels simultaneously~\cite{chang2007example,Chen_2017_ICCV,Lang:2012:PTC,Paris2008}. Another solution is to impose temporal coherence in a post-processing step that is agnostic to the underlying image filters~\cite{Bonneel:2015:BVT,Lai_2018_ECCV}. In contrast, our method does not rely on a temporal smoothing filter but on feature-space affinities to produce temporally coherent alpha mattes. \section{Our method} \label{sec:our_method} Given an input video, our method first runs trimap generation to propagate the user-annotated trimaps to the other frames. We then run a video matting network, formed by integrating the temporal aggregation module (TAM) into an image-based matting network, to obtain a temporally coherent alpha matte at each frame (see Figure~\ref{fig:flowchart}). In the following, we denote the video matting network as GCA+TAM or DIM+TAM, which correspond to the base networks GCA~\cite{li2020natural} and DIM~\cite{xu2017deep}, respectively. When computing an alpha matte for frame $\mathbf{I}^t$ in the testing stage, TAM only needs to aggregate the CNN features from three consecutive frames, \textit{i.e.}~$\mathbf{I}^{t-1},\mathbf{I}^t,\mathbf{I}^{t+1}$. The choice of three consecutive frames offers great flexibility in network design while ensuring computational efficiency. However, during training, our network takes five consecutive frames simultaneously as inputs, \textit{i.e.}~$\mathbf{I}^{t-2},...,\mathbf{I}^{t+2}$, and predicts $\alpha^{t-1},\alpha^{t},\alpha^{t+1}$ to facilitate the computation of the loss functions. Note that we choose to integrate TAM into the base network at the decoder stage with output stride (OS) 8, which means that the resolution of the feature map is $H/8 \times W/8$, where $H \times W$ is the input image resolution. This choice balances computational cost and feature level; we empirically found that OS=8 is a good trade-off (see the supplementary material for the OS experiment). In the following, we first describe the design of TAM in Sec.~\ref{sec:tam}, and proceed to describe the training loss (Sec.~\ref{sec:loss}) and the training strategy of TAM (Sec.~\ref{sec:tam_training}). Finally, we describe the details of trimap generation using STM~\cite{oh2019video} (Sec.~\ref{sec:trimap_generation}) and our video object matting dataset (Sec.~\ref{sec:dataset}). \subsection{Temporal Aggregation Module} \label{sec:tam} Figure~\ref{fig:tam} illustrates the structure of TAM. It leverages the attention mechanism to aggregate features from $\mathbf{I}^{t-1}$ and $\mathbf{I}^{t+1}$ with features from $\mathbf{I}^t$ for pixels inside the UR of $\mathbf{I}^t$. This design benefits temporal coherence by encoding temporal correlation in the aggregated features at frame $\mathbf{I}^t$. For a video frame $\mathbf{I}^t$, we denote its input feature map as $\mathbf{F}^t \in \mathbb{R}^{N \times C}$. Here $C$ is the total number of channels, and $N$ is the total number of pixels in this feature map. TAM takes three feature maps $\mathbf{F}^{t-1},\mathbf{F}^t,\mathbf{F}^{t+1}$ as inputs. First, we compute key and query features with separate convolution layers. Specifically, we feed $\mathbf{F}^{t-1},\mathbf{F}^{t+1}$ into a $3 \times 3$ query convolution layer. We denote the output query feature maps by $\mathbf{Q}^{t-1}$ and $\mathbf{Q}^{t+1}$.
In contrast, we input $\mathbf{F}^t$ into a $3 \times 3$ key convolution layer and denote the output feature map as $\mathbf{K}$. Since we only focus on the UR of $\mathbf{I}^t$ in TAM, we extract the UR in $\mathbf{K}$ using a mask generated by down-sampling the original trimap to the same resolution. For each pixel $i$ within the UR of $\mathbf{K}$, we extract two $W \times W$ feature patches centered at the same position as pixel $i$ from features $\mathbf{Q}^{t-1}$ and $\mathbf{Q}^{t+1}$, respectively. We denote this window as $\mathbb{W}$ and the extracted patch features as $\mathbf{U}^{t-1},\mathbf{U}^{t+1} \in \mathbb{R}^{M \times W^2 \times C}$, where $M$ is the number of pixels in the UR. Next, we compute the attention weights between $\mathbf{K}$ and $\mathbf{U}^{t-1}$, and similarly for $\mathbf{K}$ and $\mathbf{U}^{t+1}$, and then use the attention weights as temporal affinity values to modulate the corresponding adjacent frame features. Taking $\mathbf{K}$ and the query features $\mathbf{U}^{t-1}$ of $\mathbf{I}^{t-1}$ as an example, we compute the set of affinity values $\mathbf{A}^{t-1} \in \mathbb{R}^{M \times W^2}$ as follows: \begin{equation} \mathbf{A}^{t-1}(i,j) = \frac { {\rm e}^{\mathbf{K}(i) \cdot \mathbf{U}^{t-1}(i,j)} } { \sum_{k=1}^{W^2} {\rm e}^{\mathbf{K}(i) \cdot \mathbf{U}^{t-1}(i,k)} }, i \in UR, j \in \mathbb{W}, \label{eq:TAM_aff} \end{equation} \noindent where $\mathbf{A}^{t-1}(i,j)$ denotes the affinity value computed for a pixel $i$ of $\mathbf{I}^t$ and a pixel $j$ within the patch at $\mathbf{I}^{t-1}$; note that $\cdot$ denotes the dot product along the channel dimension. This operation effectively matches the key feature $\mathbf{K}(i)$ of pixel $i$ with features from $\mathbf{U}^{t-1}$ inside the corresponding $W \times W$ local patch. While our module only uses a local patch instead of the full feature map in the computation of attention weights, this is empirically sufficient to capture the temporal correlation since the motion between adjacent frames in a video sequence is relatively small. The patch size is set to $7\times 7$ throughout our experiments. Note that we use feature maps of OS=8, which can cover a broad range of motion. \begin{figure}[t] \begin{center} \includegraphics[width=0.95\linewidth]{Figs/0413Fig2.pdf} \caption{The architecture of our temporal aggregation module.} \label{fig:tam} \end{center} \end{figure} Finally, we formulate feature aggregation for $\mathbf{I}^t$ as follows: \begin{equation} \begin{array}{rl} \mathbf{\hat{F}}^{t}(i)&=\Big\{ \begin{array}{cl} \mathbf{S}(i)+ \mathbf{\hat{U}}^{t-1}(i)+ \mathbf{\hat{U}}^{t+1}(i), & i \in UR\\ \mathbf{S}(i), & \mbox{otherwise} \end{array} \\ \mathbf{\hat{U}}^{f}(i)&=\sum^{W^2}_{j=1} {\mathbf{A}^{f}(i,j)\mathbf{U}^{f}(i,j)}, \quad f\in\{t-1,t+1\}, \end{array} \label{eq:TAM_final} \end{equation} where the modulated feature $\mathbf{\hat{U}}^{t-1} \in \mathbb{R}^{M \times C}$ is a weighted sum of all features inside the local patch using $\mathbf{A}^{t-1}$, and $\mathbf{S}$ denotes the features output by a $3\times 3$ convolution layer at frame $\mathbf{I}^t$. The features $\mathbf{\hat{F}}^t$ are fed into the next convolution layer to continue the workflow of the original network. Note that the TAM design can be seen as sharing the weights between the ``query'' and ``value'' convolutions in the conventional ``KQV'' structure. We choose this design because it achieves the best performance among the weight sharing configurations we tested. The influence of weight sharing is investigated in the ablation study of Sec.~\ref{sec:experiment}.
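The computation in Eq.~\ref{eq:TAM_aff} and Eq.~\ref{eq:TAM_final} can be summarized by the following PyTorch-style sketch. It is a simplified illustration rather than our exact implementation; in particular, the tensor layout, the \texttt{ur\_mask} argument (we actually gather only UR pixels), and the use of \texttt{unfold} for patch extraction are assumptions made for readability: \begin{verbatim}
import torch
import torch.nn.functional as F

def tam_aggregate(K, Q_prev, Q_next, S, ur_mask, window=7):
    """Aggregate adjacent-frame features into frame t.

    K, Q_prev, Q_next, S: (B, C, H, W) feature maps at output stride 8.
    ur_mask: (B, 1, H, W) binary mask of the unknown region of frame t.
    """
    B, C, H, W = K.shape
    pad = window // 2
    # F.unfold extracts a (window x window) patch around every location:
    # (B, C*window*window, H*W), reshaped to (B, C, window*window, H*W).
    U_prev = F.unfold(Q_prev, window, padding=pad).view(
        B, C, window * window, H * W)
    U_next = F.unfold(Q_next, window, padding=pad).view(
        B, C, window * window, H * W)
    k = K.view(B, C, 1, H * W)

    def modulate(U):
        logits = (k * U).sum(dim=1)             # K(i) . U(i, j) for all j
        A = torch.softmax(logits, dim=1)        # temporal affinities
        return (A.unsqueeze(1) * U).sum(dim=2)  # weighted sum over the patch

    agg = (modulate(U_prev) + modulate(U_next)).view(B, C, H, W)
    # Aggregated features are added only inside the unknown region.
    return S + ur_mask * agg
\end{verbatim}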
\subsection{Loss function} \label{sec:loss} The overall loss function is a composition of three different terms: the image matting term $L_{im}$, the temporal coherence term $L_{tc}$, and the target affinity term $L_{af}$. In the following, we describe these three terms in detail. \noindent \textbf{Image matting term.} The image matting term $L_{im}$ is directly inherited from the deep learning-based image matting methods. This term only considers each single-frame prediction w.r.t. its corresponding ground truth. Common choices of this term are combinations of the $L_1$ alpha matte loss, the $L_1$ alpha gradient loss, the composition loss~\cite{xu2017deep} and the Laplacian loss~\cite{yu2020context}. In all our experiments, we set $L_{im}$ the same as in the image matting method we chose as the base network. Hence, for DIM~\cite{xu2017deep}, we use the alpha matte loss, the alpha gradient loss, and the composition loss; for GCA~\cite{li2020natural}, we only use the alpha matte loss. \noindent \textbf{Temporal coherence term.} We leverage the temporal coherence metric proposed by Erofeev \textit{et al.}~\cite{Erofeev2015} as the temporal coherence term $L_{tc}$, which can be expressed as follows: \begin{equation} L_{tc}(i) = |(\alpha^{t-1}_{i} - \alpha^{t}_{i}) - (\hat{\alpha}^{t-1}_{i} - \hat{\alpha}^{t}_{i})|_1, \label{eq:L_tg} \end{equation} \noindent where $\alpha,\hat{\alpha}$ are the predicted and ground-truth alpha mattes, and $i$ denotes a pixel in the UR of frame $t$. This term corresponds to ``dtSSD'' in~\cite{Erofeev2015}; it penalizes deviations of the temporal gradient of the predicted alpha mattes from that of the ground truth. There is another temporal coherence metric, termed ``MESSDdt'' in~\cite{Erofeev2015}, which augments Eq.~\ref{eq:L_tg} with motion vectors for better correspondence between frame $t-1$ and frame $t$. However, we found that, even when computed with SOTA optical flow algorithms~\cite{teed2020raft}, there is severe motion noise at translucent pixels, which hurts the performance of the network when ``MESSDdt'' is used as the temporal coherence loss. Thus, we do not use ``MESSDdt'' in our implementation. \noindent \textbf{Target affinity term.} While self-attention works well in many cases, Yu \textit{et al.}~\cite{yu2020context} demonstrate that providing direct supervision for the attention weights can further boost the performance. Inspired by this work, we design the target affinity term $L_{af}$. For a pixel $i \in \text{UR}$ at $\mathbf{I}^t$, the target probability of having a large attention weight between $i$ and a pixel $j \in \mathbb{W}$ inside its local patch at a neighboring frame $f$ is formulated as: \begin{equation} G^{f}(i,j)=\Big\{ \begin{array}{cl} 1-s, & |\hat{\alpha}^{t}_i-\hat{\alpha}^{f}_{j}|<\theta \\ 0, & \mbox{otherwise} \end{array} \end{equation} \noindent where $f\in\{t-1,t+1\}$ and $\theta$ is set to $0.3$. In addition, we follow the label smoothing technique~\cite{szegedy2016rethinking} to avoid over-confident decisions by introducing the parameter $s=0.2$. The target probability is computed according to the ground-truth alpha mattes. The goal of this loss term is to make the network learn to assign small affinity values to pixels with large alpha value differences.
Therefore, we model the target affinity term between pixels $i$ and $j$ as \begin{equation} L^{f}_{af}(i,j)=\mathbf{BCE}(\Phi(\mathbf{K}(i) \cdot \mathbf{U}^{f}(i,j)),G^{f}(i,j)), \end{equation} where $\mathbf{BCE}$ denotes the binary cross-entropy function, and $\Phi$ denotes the sigmoid function. During training, the term $L_{af}$ is calculated as $ L_{af}=\frac{1}{2}(L^{t-1}_{af}+L^{t+1}_{af})$. In summary, our network is trained with the weighted average of these three terms: \begin{equation} L = w_{im}L_{im} + w_{tc}L_{tc} + w_{af}L_{af}, \label{eq:full_loss} \end{equation} \noindent where we set $w_{im} = 0.1$, $w_{tc} =0.5$, and $w_{af} = 0.25$ in our experiments.
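For clarity, the two temporal terms can be sketched as follows in PyTorch-style pseudocode. This is a simplified illustration; the masking, normalization, and variable names are our own assumptions and not our exact implementation: \begin{verbatim}
import torch
import torch.nn.functional as F

def temporal_coherence_loss(a_prev, a_cur, gt_prev, gt_cur, ur_mask):
    # dtSSD-style term L_tc: penalize deviations of the predicted temporal
    # gradient of the alpha matte from the ground-truth temporal gradient.
    d_pred = a_prev - a_cur
    d_gt = gt_prev - gt_cur
    diff = (d_pred - d_gt).abs() * ur_mask
    return diff.sum() / ur_mask.sum().clamp(min=1.0)

def target_affinity_loss(logits, gt_alpha_ur, gt_alpha_patch,
                         theta=0.3, s=0.2):
    # L_af: logits are the raw K(i) . U^f(i, j) scores, shape (M, W*W);
    # gt_alpha_ur is the ground-truth alpha at the M unknown pixels of
    # frame t, shape (M, 1); gt_alpha_patch is the ground-truth alpha
    # inside each W x W patch of the neighboring frame f, shape (M, W*W).
    target = ((gt_alpha_ur - gt_alpha_patch).abs() < theta).float() \
        * (1.0 - s)  # label-smoothed target probability G^f(i, j)
    return F.binary_cross_entropy_with_logits(logits, target)
\end{verbatim}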
\begin{table*}[t] \caption{Ablation study using VideoMatting108 validation set. GCA\protect~\cite{li2020natural} and DIM\protect~\cite{xu2017deep} are used as the base image matting network structures to verify the effectiveness of the TAM. The best result is in bold, the second best is underlined. ``+F'' indicates the single image matting method is fine-tuned on our training dataset. ``+TAM'' denotes we add TAM for video matting. ``+TAM$_{share}$'' and ``+TAM$_{sep}$'' denote we share / separate all convolutions in TAM, respectively. ``MSDdt'' denotes ``MESSDdt''.} \label{tab:vm108_val} \small \centering \begin{tabularx}{\linewidth}{l|l|YYYYY|YYYYY|YYYYY} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Loss} & \multicolumn{5}{c|}{Narrow} & \multicolumn{5}{c|}{Medium} & \multicolumn{5}{c}{Wide} \\ & & SSDA & dtSSD & MSDdt & MSE & SAD & SSDA & dtSSD & MSDdt & MSE & SAD & SSDA & dtSSD & MSDdt & MSE & SAD \\ \hline GCA+F & $L_{im}$ & 49.99 & 27.91 & 1.80 & 8.32 & 46.86 & 55.82 & 31.64 & 2.15 & 8.20 & 40.85 & 60.69 & 34.83 & 2.50 & 8.41 & 38.59 \\ +TAM & $L_{im}$ & \underline{46.86} & 26.21 & 1.48 & \underline{7.68} & \underline{44.82} & 54.01 & 29.49 & 1.78 & 7.90 & 39.51 & 59.09 & 32.55 & 2.07 & 8.18 & 37.41 \\ +TAM$_{share}$ & $L_{im}$ & 49.71 & 27.49 & 1.68 & 8.34 & 46.45 & 57.20 & 29.90 & 1.91 & 8.88 & 41.15 & 62.90 & 33.13 & 2.22 & 9.35 & 39.31 \\ +TAM$_{sep}$ & $L_{im}$ & 54.06 & 27.69 & 1.78 & 10.37 & 48.03 & 59.13 & 30.75 & 2.00 & 9.84 & 41.56 & 64.89 & 33.90 & 2.30 & 10.37 & 39.78 \\ +TAM & $L_{im}$+$L_{tc}$ & 48.35 & \underline{25.04} & \underline{1.43} & 8.00 & 45.47 & \underline{52.83} & \underline{27.81} & \underline{1.60} & \underline{7.55} & \underline{38.84} & \underline{57.51} & \underline{30.34} & \underline{1.84} & \underline{7.73} & \underline{36.57} \\ +TAM & $L_{im}$+$L_{af}$ & 46.87 & 25.70 & 1.47 & 7.70 & 45.22 & 53.00 & 28.97 & 1.72 & 7.73 & 39.47 & 58.08 & 31.97 & 2.00 & 8.05 & 37.47 \\ +TAM & $L_{im}$+$L_{tc}$+$L_{af}$ & \textbf{45.39} & \textbf{24.37} & \textbf{1.28} & \textbf{7.30} & \textbf{44.01} & \textbf{50.41} & \textbf{27.28} & \textbf{1.48} & \textbf{7.07} & \textbf{37.65} & \textbf{54.35} & \textbf{29.60} & \textbf{1.69} & \textbf{6.98} & \textbf{34.81} \\ \hline Index+F & $L_{im}$ & 52.75 & 29.49 & 1.97 & 9.78 & 50.90 & 58.53 & 33.03 & 2.33 & 9.37 & 43.53 & 64.49 & 36.39 & 2.73 & 9.73 & 41.22\\ +TAM & $L_{im}$+$L_{tc}$+$L_{af}$ & 51.18 & 26.31 & 1.52 & 8.87 & 50.02 & 57.91 & 29.36 & 1.81 & 8.78 & 43.17 & 63.56 & 32.09 & 2.10 & 9.21 & 40.97\\ \hline DIM+F & $L_{im}$ & 56.40 & 31.77 & 2.56 & 10.46 & 51.76 & 61.85 & 34.55 & 2.82 & 9.99 & 44.38 & 67.15 & 37.64 & 3.21 & 10.25 & 41.88\\ +TAM & $L_{im}$+$L_{tc}$+$L_{af}$ & 53.61 & 27.77 & 1.90 & 9.48 & 50.12 & 58.94 & 29.89 & 2.06 & 9.02 & 43.28 & 63.27 & 32.15 & 2.31 & 8.88 & 40.45\\ \hline \end{tabularx} \end{table*} \subsection{Training strategy} \label{sec:tam_training} The training of our video matting network consists of a pre-training stage and a main stage. For pre-training, we input three frames $\mathbf{I}^{t-1},\mathbf{I}^{t},\mathbf{I}^{t+1}$ along with trimaps and predict the center frame alpha matte $\alpha^t$ using the supervision from $L_{im}$ only. During pre-training, all layers before the TAM in the network are initialized and fixed using the pre-trained weights of an off-the-shelf image matting network. TAM and the rest of the decoder layers are randomly initialized and trained on the DIM dataset~\cite{xu2017deep}. The dataset is augmented with random affine transformations (rotation, translation, and scaling) to generate sequences of three frames. Random flipping and cropping are also conducted for further augmentation. We pre-trained the network for 20 epochs using an input resolution of $512 \times 512$ with the Adam optimizer~\cite{KingmaB14}. For different image matting methods, we used different batch sizes and learning rates. The batch size is set to $40$ for both DIM+TAM and GCA+TAM. The learning rate is set to $10^{-5}$ for DIM+TAM, and to $4\times10^{-4}$ with the ``poly'' decay strategy for GCA+TAM. In the main stage, our network takes five consecutive frames $\{\mathbf{I}^{t-2},...,\mathbf{I}^{t+2}\}$ along with their corresponding trimaps as inputs. The motivation comes from the fact that the temporal coherence term requires alpha mattes of three consecutive frames, and each frame needs the features of its two neighboring frames for alpha matte prediction. The predicted $\{\alpha^{t-1},\alpha^{t},\alpha^{t+1}\}$ are used to compute the loss function in Eq.~\ref{eq:full_loss}. We use Adam as the optimizer to train the network on VideoMatting108 for 30 epochs with the same input resolution of $512 \times 512$. Our data augmentation strategies include random shape augmentation, such as cropping, flipping and scaling, and random color augmentation, such as hue, saturation, gamma, and JPEG compression~\cite{yu2020context}. We again use the ``poly'' decay strategy with a base learning rate of $10^{-4}$ during main training. The batch size is set to 16 and 24 for DIM+TAM and GCA+TAM, respectively. \subsection{Video Trimap Generation} \label{sec:trimap_generation} We leverage the SOTA VOS network STM~\cite{oh2019video} to segment each frame into FR, BR, and UR. That is, we let STM track FR and UR as two different objects in a video and classify the remaining pixels that belong to neither FR nor UR as BR. To obtain the ground-truth labels, we label the translucent pixels obtained in the construction of VideoMatting108, without any dilation, as UR, and the pixels with alpha value equal to one as FR. Additionally, we give UR a higher weight (4.0 in training) to achieve a class-balanced cross-entropy loss, since UR generally has fewer pixels than FR and BR. \noindent \textbf{Training parameters.} As in STM, we utilize a two-stage training strategy. First, the network is initialized from weights pre-trained on ImageNet~\cite{deng2009imagenet}. We then use the DIM dataset augmented with random affine transformations (rotation, translation, scaling, and shearing) to pre-train the network. The network is pre-trained for 25 epochs. We then proceed to the main training stage.
The only difference is that we use a larger maximum frame skip, 75 frames in our implementation, since the videos in VideoMatting108 are much longer than those in VOS datasets like DAVIS~\cite{Perazzi2016DAVIS}. We train the network for 150 epochs, where every epoch consists of 850 iterations. The maximum frame skip is gradually increased by one every two epochs. We also utilize the ``stair'' learning rate strategy: the base learning rate of $10^{-5}$ is reduced to $5 \times 10^{-6}$, $10^{-6}$ and $5 \times 10^{-7}$ at epochs 40, 80 and 120, respectively. We use a batch size of 4, an input resolution of $512 \times 512$, and the Adam optimizer for all of our experiments. \noindent \textbf{Inference strategy.} When generating trimaps for a new video that is not present in the training and validation sets, we found that online fine-tuning with the user-annotated trimaps at keyframes drastically improves the performance of the network in our case. During online fine-tuning, we treat the user-annotated trimaps as the ground truth and use the same random affine transform technique to generate ``fake'' video sequences. Subsequently, we fine-tune the network on these sequences for 500-800 iterations with a constant learning rate of $10^{-6}$. When there is more than one frame of user-defined trimaps, a bidirectional inference strategy is used to ensemble the prediction results. Please refer to our supplementary material for more details. \subsection{A New Video Matting Dataset} \label{sec:dataset} The lack of training data is a major barrier for deep learning-based video matting methods. For instance, the most commonly used video matting dataset from videomatting.com~\cite{Erofeev2015} has only ten test clips and three clips with ground-truth mattes, which is not enough for network training. To this end, we propose our video matting dataset, \textbf{VideoMatting108}. We rely on green screen video footage to extract ground-truth alpha mattes. First, we collect 68 high-quality (1080p and 4K) green screen clips from the Internet~\cite{Storyblocks}. While these clips contain diverse objects, we found that they generally lack several types of objects, such as fur, hair, and semi-transparent objects. Thus, we capture 40 green screen clips of these types of objects ourselves as a supplement. Next, we carefully extract the foreground object's alpha matte and color from the green screen clips using After Effects and BorisFX Primatte Studio~\cite{Primatte_Studio}. In total, our dataset consists of 108 video clips, all in 1080p resolution. The average clip length is 821 frames, significantly longer than in other datasets. The foreground objects cover a wide range of categories, such as humans, fluffy toys, cloth (net, lace, and chiffon), smoke and plants. The background footage usually has 3D camera motion or complex scenery, adding more challenge to our dataset. We split the dataset with 80 clips in the training set and 28 clips in the validation set. Trimaps are generated and dilated on the fly with a random-sized kernel from $1\times1$ to $51\times51$ during training. In the validation set, trimaps are generated by dilating transparent pixels with three different kernel sizes: $11\times11$ for narrow trimaps, $25\times25$ for medium trimaps, and $41\times41$ for wide trimaps.
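The trimap synthesis just described can be sketched as follows with OpenCV. This is an illustrative re-implementation under our own naming and value conventions (trimap values 0/128/255 for BR/UR/FR), not our released code: \begin{verbatim}
import cv2
import numpy as np

def alpha_to_trimap(alpha, kernel_size=25):
    """Synthesize a trimap from a ground-truth alpha matte by dilation.

    alpha: float array in [0, 1]; kernel_size: dilation width, e.g.
    11/25/41 for narrow/medium/wide trimaps.
    """
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    unknown = ((alpha > 0) & (alpha < 1)).astype(np.uint8)
    unknown = cv2.dilate(unknown, kernel)   # grow the translucent band
    trimap = np.zeros(alpha.shape, np.uint8)
    trimap[alpha == 1.0] = 255              # foreground region (FR)
    trimap[unknown == 1] = 128              # unknown region (UR)
    return trimap                           # remaining zeros are BR
\end{verbatim}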
\section{Experimental Results} \label{sec:experiment} In this section, we present the evaluation of our approach on the VideoMatting108 dataset. We also evaluate our trimap generation algorithm. Our computing platform for video-matting-related experiments was a server with 4 V100 GPUs and 2 Intel 6148 CPUs. Trimap-generation-related experiments were conducted on a server with 4 1080Ti GPUs and 2 E5-2678v3 CPUs. \begin{table}[t] \caption{Quantitative comparison between GCA+TAM and GCA~\cite{li2020natural} on the 10 test clips from videomatting.com~\cite{Erofeev2015} with different trimaps. The best result is in bold. Please see our supplementary material for quantitative results of each test video clip.} \label{tab:videomatting.com} \small \centering \begin{tabularx}{\linewidth}{l|c|YYY} \hline Method & Trimap & SSDA & dtSSD & MESSDdt \\ \hline GCA+F & \multirow{2}{*}{Narrow} & 39.40 & 30.83 & 1.43 \\ GCA+TAM & & \textbf{36.95} & \textbf{26.37} & \textbf{1.12} \\ \hline GCA+F & \multirow{2}{*}{Medium} & 44.74 & 33.42 & 1.74 \\ GCA+TAM & & \textbf{42.17} & \textbf{28.81} & \textbf{1.35} \\ \hline GCA+F & \multirow{2}{*}{Wide} & 50.45 & 36.71 & 2.14 \\ GCA+TAM & & \textbf{49.23} & \textbf{32.94} & \textbf{1.76} \\ \hline \end{tabularx} \end{table} \noindent \textbf{Evaluation metrics.} We employ SSDA (average sum of squared difference) and two temporal coherence metrics, namely dtSSD (mean squared difference of direct temporal gradients) and MESSDdt (mean squared difference between warped temporal gradients), from~\cite{Erofeev2015} to evaluate the accuracy of the predicted video alpha mattes and their temporal coherence. Besides, we also report MSE (mean squared error) and SAD (sum of absolute differences) to verify the pixel-wise accuracy of alpha values at each frame. Lower evaluation metrics correspond to better video matting results. \begin{figure*}[ht] \begin{center} \setlength{\tabcolsep}{1pt} \resizebox{0.95\linewidth}{!}{ \begin{tabular}{ccccccccc} \rotatebox{90}{\scriptsize{Frames}} & \includegraphics[width=0.15\textwidth]{MattingImgs/lion_5174/00148_im.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/lion_5174/00333_im.jpg} & \rotatebox{90}{\scriptsize{Frames}} & \includegraphics[width=0.15\textwidth]{MattingImgs/dancing_woman_half_3/00256_im.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/dancing_woman_half_3/00381_im.jpg} & \rotatebox{90}{\scriptsize{Frames}} & \includegraphics[width=0.15\textwidth]{MattingImgs/standing_man_half/268352_im.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/standing_man_half/268354_im.jpg} \\ \rotatebox{90}{\scriptsize{GT}} & \includegraphics[width=0.15\textwidth]{MattingImgs/lion_5174/00148_gt.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/lion_5174/00333_gt.jpg} & \rotatebox{90}{\scriptsize{GT}} & \includegraphics[width=0.15\textwidth]{MattingImgs/dancing_woman_half_3/00256_gt.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/dancing_woman_half_3/00381_gt.jpg} & \rotatebox{90}{\scriptsize{GT}} & \includegraphics[width=0.15\textwidth]{MattingImgs/standing_man_half/268352_gt.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/standing_man_half/268354_gt.jpg} \\ \rotatebox{90}{\scriptsize{GCA~\cite{li2020natural}}} & \includegraphics[width=0.15\textwidth]{MattingImgs/lion_5174/00148_gca_single.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/lion_5174/00333_gca_single.jpg} & \rotatebox{90}{\scriptsize{Index~\cite{Lu_2019_ICCV}}} & \includegraphics[width=0.15\textwidth]{MattingImgs/dancing_woman_half_3/00256_index_single.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/dancing_woman_half_3/00381_index_single.jpg} & \rotatebox{90}{\scriptsize{DIM~\cite{xu2017deep}}} &
\includegraphics[width=0.15\textwidth]{MattingImgs/standing_man_half/268352_dim_single.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/standing_man_half/268354_dim_single.jpg} \\ \rotatebox{90}{\scriptsize{GCA+TAM}} & \includegraphics[width=0.15\textwidth]{MattingImgs/lion_5174/00148_gca_ours.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/lion_5174/00333_gca_ours.jpg} & \rotatebox{90}{\scriptsize{Index+TAM}} & \includegraphics[width=0.15\textwidth]{MattingImgs/dancing_woman_half_3/00256_index_ours.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/dancing_woman_half_3/00381_index_ours.jpg} & \rotatebox{90}{\scriptsize{DIM+TAM}} & \includegraphics[width=0.15\textwidth]{MattingImgs/standing_man_half/268352_dim_ours.jpg} & \includegraphics[width=0.15\textwidth]{MattingImgs/standing_man_half/268354_dim_ours.jpg} \\ & \#149 & \#334 & & \#257 & \#382 & & \#380 & \#382 \\ & \multicolumn{2}{c}{(a) lion} & & \multicolumn{2}{c}{(b) dancing woman} & & \multicolumn{2}{c}{(c) standing man} \\ \end{tabular} } \end{center} \caption{Qualitative evaluations that illustrate the effectiveness of our TAM. Blowups are used to show the details of the alpha matte. These three video clips are from the VideoMatting108 validation set, and we use the ``medium'' ground-truth trimaps to obtain the results. Please see the supplementary video for the complete results.} \label{fig:val_matting} \end{figure*} \begin{figure*}[ht] \begin{center} \setlength{\tabcolsep}{1pt} \resizebox{0.95\linewidth}{!}{ \begin{tabular}{ccccccc} \rotatebox{90}{\scriptsize{Frames}} & \includegraphics[width=0.15\textwidth]{TestImgs/fashion/00076_img.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/fashion/00078_img.jpg} & \includegraphics[width=0.122\textwidth]{TestImgs/xmas-dog/00008_im.jpg} & \includegraphics[width=0.122\textwidth]{TestImgs/xmas-dog/00021_im.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/sofa/00016_img.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/sofa/00022_img.jpg} \\ \rotatebox{90}{\scriptsize{Trimap}} & \includegraphics[width=0.15\textwidth]{TestImgs/fashion/00076_tri.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/fashion/00078_tri.jpg} & \includegraphics[width=0.121\textwidth]{TestImgs/xmas-dog/00008_tri.jpg} & \includegraphics[width=0.121\textwidth]{TestImgs/xmas-dog/00021_tri.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/sofa/00016_tri.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/sofa/00022_tri.jpg} \\ \rotatebox{90}{\scriptsize{GCA~\cite{li2020natural}}} & \includegraphics[width=0.15\textwidth]{TestImgs/fashion/00076_single.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/fashion/00078_single.jpg} & \includegraphics[width=0.121\textwidth]{TestImgs/xmas-dog/00008_single.jpg} & \includegraphics[width=0.121\textwidth]{TestImgs/xmas-dog/00021_single.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/sofa/00016_single.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/sofa/00022_single.jpg} \\ \rotatebox{90}{\scriptsize{GCA+TAM}} & \includegraphics[width=0.15\textwidth]{TestImgs/fashion/00076_ours.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/fashion/00078_ours.jpg} & \includegraphics[width=0.121\textwidth]{TestImgs/xmas-dog/00008_ours.jpg} & \includegraphics[width=0.121\textwidth]{TestImgs/xmas-dog/00021_ours.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/sofa/00016_ours.jpg} & \includegraphics[width=0.15\textwidth]{TestImgs/sofa/00022_ours.jpg} \\ & \#77 & \#79 & \#9 & \#22 & \#23 & \#92 \\ & \multicolumn{2}{c}{(a) fashion (k=4)} & \multicolumn{2}{c}{(b) xmas-dog (k=5)} & \multicolumn{2}{c}{(c) sofa (k=3)} \end{tabular} } \end{center} \caption{Comparing our GCA+TAM with GCA~\cite{li2020natural} on Internet videos. ``k'' indicates the number of the annotated keyframe trimaps. Please see the supplementary video for the complete results.} \label{fig:test_matting} \end{figure*} \noindent \textbf{Quantitative comparisons}. Table~\ref{tab:vm108_val} shows the quantitative comparisons between our video matting networks and single image matting networks on the VideoMatting108 validation dataset. For fair comparisons, we fine-tune the single image matting networks on VideoMatting108 using each video frame as image matting training data. The learning rate and input resolution are kept the same as when we train our video matting network. The results are averaged over all 28 test video clips. It can be seen that our method (denoted ``+TAM'' with ``$L_{im}+L_{tc}+L_{af}$'' in the table) consistently outperforms the baseline image matting networks (denoted ``GCA+F'', ``Index+F'' and ``DIM+F'') on all metrics. Furthermore, in Table~\ref{tab:videomatting.com}, the GCA+TAM network also outperforms GCA on the testing dataset from videomatting.com~\cite{Erofeev2015}. This verifies the ability of the proposed TAM to aggregate temporal information for better video object matting results. Since GCA+TAM achieves much lower metric numbers compared to DIM+TAM and Index+TAM, we use GCA+TAM as the default for evaluating video matting, unless mentioned otherwise. \begin{table}[t] \caption{Ablation study on the temporal window size $W$ and the number of aggregated frames used for GCA+TAM. ``$n$F'' denotes that $n$ frames are aggregated in TAM. 2F: only uses $\mathbf{I}^{t-1}$; 3F: uses $\mathbf{I}^{t-1}$ and $\mathbf{I}^{t+1}$; 5F: uses $\mathbf{I}^{t-2}$, $\mathbf{I}^{t-1}$, $\mathbf{I}^{t+1}$ and $\mathbf{I}^{t+2}$.} \label{tab:abl_window} \small \centering \begin{tabularx}{\linewidth}{l|l|YYY} \hline $W$ & F & SSDA & dtSSD & MESSDdt \\ \hline $W=5$ & 3F & 49.75 & 24.08 & 1.30 \\ $W=7$ & 3F & 47.59 & 23.53 & 1.19 \\ $W=9$ & 3F & 52.35 & 24.29 & 1.38 \\ $W=7$ & 2F & 48.51 & 23.81 & 1.30 \\ $W=7$ & 5F & 51.25 & 24.79 & 1.32 \\ \hline \end{tabularx} \end{table} \begin{table}[t] \caption{Ablation study for trimap generation on the VideoMatting108 validation set using the mIoU metric. All metrics are averaged on a per-video basis. ``$n$FT'' denotes how many keyframe trimaps are used in fine-tuning. 1FT: first frame as keyframe. 2FT: first+last frame as keyframes. 3FT: first+last+100-th frame as keyframes.} \label{tab:abl_trimap} \small \centering \begin{tabularx}{\linewidth}{l|YYYY} \hline Method & FR & BR & UR & Average \\ \hline STM~\cite{oh2019video} & 81.43 & 95.58 & 81.63 & 86.21 \\ STM+1FT & 85.92 & 96.62 & 82.75 & 88.43 \\ STM+2FT & 87.73 & 97.91 & 84.90 & 90.18 \\ STM+3FT & 87.72 & 97.93 & 85.04 & 90.23 \\ \hline \end{tabularx} \end{table} \noindent \textbf{Ablation studies}. We first investigate the influence of weight sharing in TAM (see the second to the fourth row in Table~\ref{tab:vm108_val}). Different from the widely utilized ``KQV'' structure without any weight sharing, we empirically found that sharing the ``query'' and ``value'' convolutions achieves the best result compared with other weight sharing configurations. Note that the conventional ``KQV'' structure without weight sharing performs worse than the baseline method without TAM on the validation set.
We speculate that separating all convolutions causes over-fitting during training, since it does achieve a lower training loss compared with our design. We proceed to assess the influence of different loss terms (see the fifth to the seventh row in Table~\ref{tab:vm108_val}). By only adding the temporal coherence term $L_{tc}$, we obtain a performance boost across all metrics, except the pixel-wise alpha value accuracy with ``narrow'' trimaps. Since the UR of a narrow trimap mainly consists of transparent pixels, we speculate that the motion ambiguities at these pixels are the main reason for the drop in alpha value accuracy. By only adding the target affinity term $L_{af}$, the temporal metrics ``dtSSD'' and ``MESSDdt'' are slightly improved while the alpha value accuracy metrics are comparable to the baseline. By combining both terms, the network achieves much improved results. For the ``narrow'' trimaps in particular, the direct supervision in $L_{af}$ suppresses erroneous affinity values, thus improving performance. In Table~\ref{tab:abl_window}, we analyze the influence of the temporal window size parameter $W$ (Sec.~\ref{sec:tam}) and the number of frames used in TAM. To reduce the computational cost, we conduct experiments on half of the VideoMatting108 training and validation set with ``medium'' ground-truth trimaps. We can see that the network performance achieves the best balance when $W=7$. In contrast, the network performance degrades when $W=9$. The reason may come from the difficulty of suppressing a large number of unrelated features from adjacent frames through attention. Thus, we choose $W=7$ in all of our experiments. To verify the bi-directional design, we conduct experiments on the number of aggregated frames (fourth and fifth rows). As seen in Table~\ref{tab:abl_window}, our bi-directional design (second row, 3F) outperforms all other configurations. \noindent \textbf{STM-based trimap generation}. We use the medium trimaps from VideoMatting108 as the ground truth to evaluate the performance of STM~\cite{oh2019video} on trimap generation using the mIoU metric. As the video sequences in VideoMatting108 are long, we only use the first 200 frames in this experiment. Specifically, we choose the first, the last, and the 100-th frame as keyframes during online fine-tuning. We also gradually add their corresponding ground-truth trimaps to show their impact on online fine-tuning. The input resolution is set to $768 \times 768$, and we fine-tune the network for 500 iterations. The average time for fine-tuning STM is 7.5 minutes on one GPU. As shown in Table~\ref{tab:abl_trimap}, online fine-tuning with one or more keyframes improves the results by a large margin, especially in FR. Adding more ground-truth trimaps improves the results further. In conclusion, fine-tuning is necessary to adapt the network to user-specified keyframe trimaps, and it improves the quality of the generated trimaps. Table~\ref{tab:abl_trimap} also shows that adding more keyframes improves the mIoU score only marginally, which indicates that a sparse set of annotated keyframe trimaps is enough for video trimap generation. We use around three keyframes in all our experiments. On the other hand, inaccurate FR/BR heavily affects the matting result. As shown in the third row of Figure~\ref{fig:abl_trimap}(a), the pixels inside the gap between the two legs have erroneously high alpha values, despite the UR being roughly the same in the trimaps.
In (b) and (c), the excessive URs lead to artifacts in the final alpha matte. \begin{figure}[t] \begin{center} \setlength{\tabcolsep}{1pt} \resizebox{0.95\linewidth}{!}{ \begin{tabular}{cccc} \rotatebox{90}{\scriptsize{Frames}} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/man/1205274_img.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/origami/33531_img.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/woman/00011_img.jpg} \\ \rotatebox{90}{\scriptsize{STM~\cite{oh2019video}}} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/man/1205274_tri_noft.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/origami/33531_tri_noft.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/woman/00011_tri_noft.jpg} \\ \rotatebox{90}{\scriptsize{STM+3FT}} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/man/1205274_tri_ft.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/origami/33531_tri_ft.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/woman/00011_tri_ft.jpg} \\ \rotatebox{90}{\scriptsize{Matte STM}} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/man/1205274_a_noft.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/origami/33531_a_noft.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/woman/00011_a_noft.jpg} \\ \rotatebox{90}{\scriptsize{\makecell[l]{Matte \\ STM+3FT}}} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/man/1205274_a_ft.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/origami/33531_a_ft.jpg} & \includegraphics[width=0.3\columnwidth]{TrimapFigs/woman/00011_a_ft.jpg} \\ & (a) & (b) & (c) \end{tabular} } \end{center} \caption{Qualitative evaluation of trimap generation. Red, blue and green correspond to FR, BR and UR, respectively. ``3FT'' denotes that we use three keyframes during fine-tuning.} \label{fig:abl_trimap} \end{figure} \noindent \textbf{Qualitative evaluations}. In Figure~\ref{fig:teaser} and Figure~\ref{fig:val_matting}, we show that our method improves temporal coherence compared with single image matting networks. In the ``plant'' clip, GCA~\cite{li2020natural} produces flickering in the alpha matte although the foreground object is nearly static. As seen in the ``lion'' clip, GCA produces an erroneous white blob between the fur in the blowup. In contrast, our method mitigates this discontinuity by aggregating temporal features. The same effect can be seen in the ``dancing woman'' clip and ``standing man'' clip when using Index~\cite{Lu_2019_ICCV} and DIM~\cite{xu2017deep} as the base network, respectively. All examples validate the effectiveness of our TAM. In Figure~\ref{fig:test_matting}, we qualitatively compare GCA+TAM with GCA on three Internet video clips. The trimaps are generated by the fine-tuned STM~\cite{oh2019video}. In the ``fashion'' clip, our method alleviates the artifacts in the gap between the model's arm and body. In the ``xmas-dog'' and ``sofa'' clips, not only can our method produce more detailed results (\#9 of ``xmas-dog'' and \#92 of ``sofa''), it is also more robust to inaccurate URs in the trimap (\#22 of ``xmas-dog'' and \#23 of ``sofa''). The lengths of these three clips are 89, 100 and 93 frames, respectively. More results can be found in the supplementary materials. \section{Conclusion} We have developed a deep video object matting method to achieve temporally coherent video matting results.
Its key feature is an attention-based temporal aggregation module to compute the temporal affinity values in feature space, which are robust to appearance changes, fast motions, and occlusions. The temporal aggregation module can be easily integrated into image matting networks to enhance video object matting performance. We constructed a video matting dataset to enable the training of video object matting and trimap generation networks. This dataset has 80 training and 28 validation foreground video sequences with ground truth alpha mattes. In the future, we plan to investigate weakly supervised video object matting methods to reduce the workload of creating high-quality video matting training data.
\section{Introduction} \label{sec:1} Supersymmetric models with conserved $R$-parity contain one new stable particle which is a candidate for cold dark matter (CDM) \cite{EHNOS}. There are very strong constraints, however, forbidding the existence of stable or long-lived particles which are not color neutral and electrically neutral. The sneutrino \cite{snu} is one possible candidate, but in the MSSM, it has been excluded as a dark matter candidate by direct \cite{dir} and indirect \cite{indir} searches. Another possibility is the gravitino, which is probably the most difficult to exclude. This possibility has been discussed recently in the CMSSM context \cite{gdm}. I will concentrate on the remaining possibility in the MSSM, namely the neutralinos. There are four neutralinos, each of which is a linear combination of the $R=-1$ neutral fermions \cite{EHNOS}: the wino $\tilde W^3$, the partner of the 3rd component of the $SU(2)_L$ gauge boson; the bino, $\tilde B$, the partner of the $U(1)_Y$ gauge boson; and the two neutral Higgsinos, $\tilde H_1$ and $\tilde H_2$. In general, the neutralino mass eigenstates can be expressed as a linear combination \begin{equation} \chi = \alpha \tilde B + \beta \tilde W^3 + \gamma \tilde H_1 + \delta \tilde H_2 \end{equation} The solution for the coefficients $\alpha, \beta, \gamma$ and $\delta$ for the neutralino that makes up the LSP can be found by diagonalizing the mass matrix, which depends on $M_1$ ($M_2$), the soft supersymmetry-breaking U(1) (SU(2)) gaugino mass terms; $\mu$, the supersymmetric Higgs mixing mass parameter; and the two Higgs vacuum expectation values, $v_1$ and $v_2$. One combination of these is related to the $Z$ mass, and therefore is not a free parameter, while the other combination, the ratio of the two vevs, $\tan \beta$, is free. The most general version of the MSSM, despite its minimality in particles and interactions, contains well over a hundred new parameters. The study of such a model would be untenable were it not for some (well motivated) assumptions. These have to do with the parameters associated with supersymmetry breaking. It is often assumed that, at some unification scale, all of the gaugino masses receive a common mass, $m_{1/2}$. The gaugino masses at the weak scale are determined by running a set of renormalization group equations. Similarly, one often assumes that all scalars receive a common mass, $m_0$, at the GUT scale. These too are run down to the weak scale. The remaining supersymmetry breaking parameters are the trilinear mass terms, $A_0$, which I will also assume are unified at the GUT scale, and the bilinear mass term $B$. There are, in addition, two physical CP violating phases which will not be considered here. The natural boundary conditions at the GUT scale for the MSSM would include $\mu$ and $B$ in addition to $m_{1/2}$, $m_0$, and $A_0$. In this case, upon running the RGEs down to a low energy scale and minimizing the Higgs potential, one would predict the values of $M_Z$ and $\tan \beta$ (in addition to all of the sparticle masses). Since $M_Z$ is known, it is more useful to analyze supersymmetric models where $M_Z$ is input rather than output. It is also common to treat $\tan \beta$ as an input parameter. This can be done at the expense of shifting $\mu$ (up to a sign) and $B$ from inputs to outputs. This model is often referred to as the constrained MSSM or CMSSM. Once these parameters are set, the entire spectrum of sparticle masses at the weak scale can be calculated.
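To make the diagonalization just described concrete, the following is a minimal numpy sketch that builds the tree-level neutralino mass matrix in the $(\tilde B, \tilde W^3, \tilde H_1, \tilde H_2)$ basis, in one common convention (signs differ between references), and reads off the composition of the lightest eigenstate. All numerical inputs are purely illustrative.

\begin{verbatim}
import numpy as np

# Illustrative weak-scale inputs in GeV; sw = sin(theta_W).
M1, M2, mu, tanb = 200.0, 400.0, 800.0, 10.0
MZ, sw = 91.2, np.sqrt(0.23)
cw = np.sqrt(1.0 - sw**2)
b = np.arctan(tanb)
sb, cb = np.sin(b), np.cos(b)

# Tree-level neutralino mass matrix (one common sign convention).
M = np.array([[M1,         0.0,       -MZ*cb*sw,  MZ*sb*sw],
              [0.0,        M2,         MZ*cb*cw, -MZ*sb*cw],
              [-MZ*cb*sw,  MZ*cb*cw,   0.0,      -mu      ],
              [ MZ*sb*sw, -MZ*sb*cw,  -mu,        0.0     ]])

vals, vecs = np.linalg.eigh(M)   # real symmetric matrix
i = np.argmin(np.abs(vals))      # lightest mass eigenstate = the LSP
c = vecs[:, i]                   # coefficients (alpha, beta, gamma, delta)
print("m_chi ~ %.1f GeV, bino fraction alpha^2 = %.3f"
      % (abs(vals[i]), c[0]**2))
\end{verbatim}

For inputs with $|\mu|$ well above $M_1$, as here, the lightest eigenstate comes out almost purely bino.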
In the CMSSM, the solutions for $\mu$ generally lead to a neutralino which is very nearly a pure $\tilde B$. \section{The CMSSM after WMAP} \label{sec:2} For a given value of $\tan \beta$, $A_0$, and $sgn(\mu)$, the resulting regions of acceptable relic density which also satisfy the phenomenological constraints can be displayed on the $m_{1/2} - m_0$ plane. In Fig. \ref{fig:UHM}a, the light shaded region corresponds to that portion of the CMSSM plane with $\tan \beta = 10$, $A_0 = 0$, and $\mu > 0$ such that the computed relic density yields \mbox{$0.1<\Omega_{\chi} h^2<0.3$}. At relatively low values of $m_{1/2}$ and $m_0$, there is a large `bulk' region which tapers off as $m_{1\!/2}$ is increased. At higher values of $m_0$, annihilation cross sections are too small to maintain an acceptable relic density and $\Omega_{\chi} h^2 > 0.3$. Although sfermion masses are also enhanced at large $m_{1\!/2}$ (due to RGE running), co-annihilation processes between the LSP and the next lightest sparticle (in this case the ${\widetilde \tau}_{\scriptscriptstyle\rm 1}$) enhance the annihilation cross section and reduce the relic density. This occurs when the LSP and NLSP are nearly degenerate in mass. The dark shaded region has $m_{{\widetilde \tau}_{\scriptscriptstyle\rm 1}}< m_\chi$ and is excluded. Neglecting coannihilations, one would find an upper bound of $\sim450{\rm \, Ge\kern-0.125em V}$ on $m_{1\!/2}$, corresponding to an upper bound of roughly $200{\rm \, Ge\kern-0.125em V}$ on $m_{\tilde B}$. The effect of coannihilations is to create an allowed band about 25-50 ${\rm \, Ge\kern-0.125em V}$ wide in $m_0$ for $m_{1\!/2} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 1400{\rm \, Ge\kern-0.125em V}$, which tracks above the $m_{\tilde\tau_1}=m_\chi$ contour \cite{efo}. \begin{figure}[h] \includegraphics[height=2.3in]{newer10p.eps} \includegraphics[height=2.3in]{10pmap.eps} \caption{\label{fig:UHM} {\it The $(m_{1/2}, m_0)$ planes for (a) $\tan \beta = 10$ and $\mu > 0$, assuming $A_0 = 0, m_t = 175$~GeV and $m_b(m_b)^{\overline {MS}}_{SM} = 4.25$~GeV. The near-vertical (red) dot-dashed lines are the contours $m_h = 114$~GeV, and the near-vertical (black) dashed line is the contour $m_{\chi^\pm} = 104$~GeV. Also shown by the dot-dashed curve in the lower left is the corner excluded by the LEP bound of $m_{\tilde e} > 99$ GeV. The medium (dark green) shaded region is excluded by $b \to s \gamma$, and the light (turquoise) shaded area is the cosmologically preferred region with \protect\mbox{$0.1\leq\Omega_{\chi} h^2\leq 0.3$}. In the dark (brick red) shaded region, the LSP is the charged ${\tilde \tau}_1$. The region allowed by the E821 measurement of $a_\mu$ at the 2-$\sigma$ level is shaded (pink) and bounded by solid black lines, with dashed lines indicating the 1-$\sigma$ ranges. In (b), the relic density is restricted to the range $0.094 < \Omega_{\chi} h^2 < 0.129$. }} \end{figure} Also shown in Fig. \ref{fig:UHM}a are the relevant phenomenological constraints. These include the limit on the chargino mass: $m_{\chi^\pm} > 104$~GeV \cite{LEPsusy}, on the selectron mass: $m_{\tilde e} > 99$~GeV \cite{LEPSUSYWG_0101} and on the Higgs mass: $m_h > 114$~GeV \cite{LEPHiggs}. The former two constrain $m_{1/2}$ and $m_0$ directly via the sparticle masses, and the latter indirectly via the sensitivity of radiative corrections to the Higgs mass to the sparticle masses, principally $m_{\tilde t, \tilde b}$. {\tt FeynHiggs}~\cite{FeynHiggs} is used for the calculation of $m_h$.
The Higgs limit imposes important constraints principally on $m_{1/2}$, particularly at low $\tan \beta$. Another constraint is the requirement that the branching ratio for $b \rightarrow s \gamma$ is consistent with the experimental measurements \cite{bsgex}. These measurements agree with the Standard Model, and therefore provide bounds on MSSM particles \cite{gam,bsgth}, such as the chargino and charged Higgs masses, in particular. Typically, the $b\rightarrow s\gamma$ constraint is more important for $\mu < 0$, but it is also relevant for $\mu > 0$, particularly when $\tan\beta$ is large. The constraint imposed by measurements of $b\rightarrow s\gamma$ also excludes small values of $m_{1/2}$. Finally, there are regions of the $(m_{1/2}, m_0)$ plane that are favoured by the BNL measurement \cite{newBNL} of $g_\mu - 2$ at the 2-$\sigma$ level, corresponding to a deviation from the Standard Model calculation \cite{Davier} using $e^+ e^-$ data. One should be aware, however, that this constraint is still under active discussion. The preferred range of the relic LSP density has been altered significantly by the recent improved determination of the allowable range of the cold dark matter density obtained by combining WMAP and other cosmological data: $0.094 < \Omega_{CDM} h^2 < 0.129$ at the 2-$\sigma$ level \cite{WMAP}. In the second panel of Fig. \ref{fig:UHM}, we see the effect of imposing the WMAP range on the neutralino density \cite{eoss,Baer,morewmap}. We see immediately that (i) the cosmological regions are generally much narrower, and (ii) the `bulk' regions at small $m_{1/2}$ and $m_0$ have almost disappeared, in particular when the laboratory constraints are imposed. Looking more closely at the coannihilation regions, we see that (iii) they are significantly truncated as well as becoming much narrower, since the reduced upper bound on $\Omega_\chi h^2$ moves the tip where $m_\chi = m_{\tilde \tau}$ to smaller $m_{1/2}$, so that the upper limit is now $m_{1/2} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 950$ GeV or $m_\chi \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 400$ GeV. \begin{figure}[h] \includegraphics[height=2.3in]{50p.eps} \includegraphics[height=2.3in]{50pmap.eps} \caption{\label{fig:UHM50} {\it As in Fig. \protect{\ref{fig:UHM}} for $\tan \beta = 50$.}} \end{figure} Another mechanism for extending the allowed CMSSM region to large $m_\chi$ is rapid annihilation via a direct-channel pole when $m_\chi \sim {1\over 2} m_{A}$~\cite{funnel,EFGOSi}. Since the heavy scalar and pseudoscalar Higgs masses decrease as $\tan \beta$ increases, eventually $2 m_\chi \simeq m_A$, yielding a `funnel' extending to large $m_{1/2}$ and $m_0$ at large $\tan\beta$, as seen in the high $\tan \beta$ strips of Fig.~\ref{fig:UHM50}. As one can see, the impact of the Higgs mass constraint is reduced (relative to the case with $\tan \beta = 10$) while that of $b \to s \gamma$ is enhanced. Shown in Fig.~\ref{fig:strips} are the WMAP lines \cite{eoss} of the $(m_{1/2}, m_0)$ plane allowed by the new cosmological constraint $0.094 < \Omega_\chi h^2 < 0.129$ and the laboratory constraints listed above, for $\mu > 0$ and values of $\tan \beta$ from 5 to 55, in steps $\Delta ( \tan \beta ) = 5$. We notice immediately that the strips are considerably narrower than the spacing between them, though any intermediate point in the $(m_{1/2}, m_0)$ plane would be compatible with some intermediate value of $\tan \beta$.
The right (left) ends of the strips correspond to the maximal (minimal) allowed values of $m_{1/2}$ and hence $m_\chi$. The lower bounds on $m_{1/2}$ are due to the Higgs mass constraint for $\tan \beta \le 23$, but are determined by the $b \to s \gamma$ constraint for higher values of $\tan \beta$. \begin{figure} \begin{center} \includegraphics[height=3.2in]{g-2sum.eps} \end{center} \caption{\label{fig:strips}\it The strips display the regions of the $(m_{1/2}, m_0)$ plane that are compatible with $0.094 < \Omega_\chi h^2 < 0.129$ and the laboratory constraints for $\mu > 0$ and $\tan \beta = 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55$. The parts of the strips compatible with $g_\mu - 2$ at the 2-$\sigma$ level have darker shading. } \end{figure} Finally, there is one additional region of acceptable relic density known as the focus-point region \cite{fp}, which is found at very high values of $m_0$. An example showing this region is found in Fig. \ref{figfp}, plotted for $\tan \beta = 10$, $\mu > 0$, and $m_t = 175$ GeV. As $m_0$ is increased, the solution for $\mu$ at low energies, as determined by the electroweak symmetry breaking conditions, eventually begins to drop. When $\mu \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_{1/2}$, the LSP gains a strong Higgsino component and, as such, the relic density begins to drop precipitously. These effects are both shown in Fig. \ref{fignofp}, where the values of $\mu$ and $\Omega h^2$ are plotted as a function of $m_0$ for fixed $m_{1/2} = 300$ GeV and $\tan \beta = 10$. As $m_0$ is increased further, there are no longer any solutions for $\mu$. This occurs in the shaded region in the upper left corner of Fig. \ref{figfp}. \begin{figure} \begin{center} \includegraphics[height=2.6in]{fp10.eps} \end{center} \caption{\label{figfp}\it As in Fig. \protect\ref{fig:UHM}a, where the range in $m_0$ is extended to 5 TeV. In the shaded region at very high $m_0$, there are no solutions for $\mu$ which respect the low energy electroweak symmetry breaking conditions. } \end{figure} Fig. \ref{fignofp} also exemplifies the degree of fine tuning associated with the focus-point region. While the position of the focus-point region in the $m_0, m_{1/2}$ plane is not overly sensitive to supersymmetric parameters, it is highly sensitive to the top quark Yukawa coupling, which contributes to the evolution of $\mu$ \cite{rs,ftuning}. As one can see in the figure, a change in $m_t$ of 3 GeV produces a shift of about 2.5 TeV in $m_0$. Note that the position of the focus-point region is also highly sensitive to the value of $A_0/m_0$. In Fig. \ref{fignofp}, $A_0 = 0$ was chosen. For $A_0/m_0 = 0.5$, the focus point shifts from 2.5 to 4.5 TeV, and it moves to larger $m_0$ as $A_0/m_0$ is increased. \begin{figure} \begin{center} \includegraphics[height=2.6in]{nofpmtnoA.eps} \end{center} \caption{\label{fignofp}\it The value of $\mu$ as a function of $m_0$ for fixed $m_{1/2} = 300$ GeV and $\tan \beta = 10$ for two choices of $m_t$ as indicated. The scale on the right gives the value of $\Omega h^2$. The curves corresponding to this scale rise sharply at low $m_0$ to values much larger than 1. For $m_t = 175$ GeV and $m_0 \approx 2500$ GeV, the value of $\Omega h^2$ drops to acceptable values when $\mu$ becomes small.
When $m_t = 178$ GeV, $\Omega h^2$ drops at $m_0 \approx 5000$ GeV.} \end{figure} \section{A likelihood analysis of the CMSSM} Up to now, in displaying acceptable regions of cosmological density in the $m_0, m_{1/2}$ plane, it has been assumed that the input parameters are known with perfect accuracy so that the relic density can be calculated precisely. While all of the beyond-the-Standard-Model parameters are completely unknown and therefore carry no formal uncertainties, Standard Model parameters such as the top and bottom Yukawa couplings are known but do carry significant uncertainties. Indeed, we saw that in the case of the focus-point region, there is an intense sensitivity of the relic density to the top quark Yukawa coupling. Other regions in the $m_0, m_{1/2}$ plane, such as those corresponding to the rapid annihilation funnels, are also very sensitive to the third-generation Yukawas. The optimal way to combine the various constraints (both phenomenological and cosmological) is via a likelihood analysis, as has been done by some authors both before~\cite{DeBoer} and after~\cite{Baer} the WMAP data was released. When performing such an analysis, in addition to the formal experimental errors, it is also essential to take into account theoretical errors, which introduce systematic uncertainties that are frequently non-negligible. Recently, we have performed an extensive likelihood analysis of the CMSSM \cite{eoss4}. Included is the full likelihood function for the LEP Higgs search, as released by the LEP Higgs Working Group. This includes the small enhancement in the likelihood just beyond the formal limit due to the LEP Higgs signal reported late in 2000. This was re-evaluated most recently in~\cite{LEPHiggs}, and cannot be regarded as significant evidence for a light Higgs boson. We have also taken into account the indirect information on $m_h$ provided by a global fit to the precision electroweak data. The likelihood function from this indirect source does not vary rapidly over the range of Higgs masses found in the CMSSM, but we included this contribution for completeness. The interpretation of the combined Higgs likelihood, ${\cal L}_{exp}$, in the $(m_{1/2}, m_0)$ plane depends on uncertainties in the theoretical calculation of $m_h$. These include the experimental error in $m_t$ and (particularly at large $\tan \beta$) $m_b$, and theoretical uncertainties associated with higher-order corrections to $m_h$. Our default assumptions are that $m_t = 175 \pm 5$~GeV for the pole mass, and $m_b = 4.25 \pm 0.25$~GeV for the running $\overline {MS}$ mass evaluated at $m_b$ itself. The theoretical uncertainty in $m_h$, $\sigma_{th}$, is dominated by the experimental uncertainties in $m_{t,b}$, which are treated as uncorrelated Gaussian errors: \beq \sigma_{th}^2 = \left( \frac{\partial m_h}{\partial m_t} \right)^2 \Delta m_t^2 + \left( \frac{\partial m_h}{\partial m_b} \right)^2 \Delta m_b^2 \,. \label{eq:sigmath} \end{equation} Typically, we find that $(\partial m_h/\partial m_t) \sim 0.5$, so that $\sigma_{th}$ is roughly 2-3 GeV. The combined experimental likelihood, ${\cal L}_{exp}$, from direct searches at LEP~2 and a global electroweak fit is then convolved with a theoretical likelihood (taken as a Gaussian) with uncertainty given by $\sigma_{th}$ from (\ref{eq:sigmath}) above.
Thus, we define the total Higgs likelihood function, ${\cal L}_h$, as \beq {\cal L}_h(m_h) = { {\cal N} \over {\sqrt{2 \pi}\, \sigma_{th} }} \int d m^{\prime}_h \,\, {\cal L}_{exp}(m^{\prime}_h) \,\, e^{-(m^{\prime}_h-m_h)^2/2 \sigma_{th}^2 }\, , \label{eq:higlik} \end{equation} where ${\cal N}$ is a factor that normalizes the experimental likelihood distribution. In addition to the Higgs likelihood function, we have included the likelihood function based on $b \to s \gamma$. The branching ratio for these decays has been measured by the CLEO, BELLE and BaBar collaborations~\cite{bsgex}, and we took as the combined value ${\cal{B}}(b \to s \gamma)=(3.54 \pm 0.41 \pm 0.26)\times 10^{-4}$. The theoretical prediction \cite{gam,bsgth} contains uncertainties which stem from the uncertainties in $m_b$, $\alpha_s$, the measurement of the semileptonic branching ratio of the $B$ meson, as well as the effect of the scale dependence. While the likelihood function based on the measurements of the anomalous magnetic moment of the muon was considered in \cite{eoss4}, it will not be discussed here. Finally, in calculating the likelihood of the CDM density, we take into account the contribution of the uncertainties in $m_{t,b}$. We will see that the theoretical uncertainty plays a very significant role in this analysis. The likelihood for $\Omega h^2$ is therefore \beq {\cal L}_{\Omega h^2}=\frac{1}{\sqrt{2 \pi} \sigma} e^{-({\Omega h^2}^{th}-{\Omega h^2}^{exp})^2/2 \sigma^2} \,, \label{eq:likamu} \end{equation} where $\sigma^2=\sigma_{exp}^2+\sigma_{th}^2$, with $\sigma_{exp}$ taken from the WMAP \cite{WMAP} result and $\sigma_{th}^2$ from (\ref{eq:sigmath}), replacing $m_h$ by $\Omega h^2$. The total likelihood function is computed by combining all the components described above: \beq {\cal L}_{tot} = {\cal L}_h \times {\cal L}_{bs\gamma} \times {\cal L}_{\Omega_\chi h^2} (\times {\cal L}_{a_\mu}) \end{equation} The likelihood function in the CMSSM can be considered a function of two variables, ${\cal L}_{tot}(m_{1/2},m_0)$, where $m_{1/2}$ and $m_0$ are the unified GUT-scale gaugino and scalar masses, respectively. Results are based on a Bayesian analysis, in which a prior range for $m_{1/2}$ is introduced in order to normalize the conditional probability obtained from the likelihood function using Bayes' theorem. Although it is possible to motivate some upper limit on $m_{1/2}$, e.g., on the basis of naturalness~\cite{nat,ftuning,eos2}, one cannot quantify any such limit rigorously. Within the selected range, we adopt a flat prior distribution for $m_{1/2}$, and normalize the volume integral: \beq \int {\cal L}_{tot} \, dm_0 \, dm_{1/2} \; = \; 1 \end{equation} for each value of $\tan \beta$, combining where appropriate both signs of $\mu$. We note that no such prior need be specified for $m_0$. For any given value of $m_{1/2}$, the theory is well defined only up to some maximum value of $m_0$, above which radiative electroweak symmetry breaking is no longer possible. We always integrate up to that point, adopting also a flat prior distribution for $m_0$. In Fig.~\ref{fig:WMAPFP}, the likelihood along slices through the CMSSM parameter space for $\tan \beta = 10, A_0 = 0, \mu > 0$, and $m_{1/2} = 300$ and 800~GeV is plotted as a function of $m_0$ in the left and right panels, respectively. The solid red curves show the total likelihood function calculated including the uncertainties which stem from the experimental errors in $m_t$ and $m_b$.
The peak at low $m_0$ is due to the coannihilation region. The peak at $m_0 \simeq 2500 (4500)$ GeV for $m_{1/2} = 300 (800)$ GeV is due to the focus-point region. Also shown in Fig. \ref{fig:WMAPFP} are the 68\%, 90\%, and 95\% CL (horizontal) lines, indicating the iso-likelihood values of the fully integrated likelihood function for the solid (red) curve. \begin{figure}[h] \includegraphics[height=1.9in]{cutFPg_300_p.eps} \includegraphics[height=1.9in]{cutFPg_800_p.eps} \caption{\label{fig:WMAPFP} {\it The likelihood function along slices in $m_0$ through the CMSSM parameter space for $\tan \beta = 10, A_0 = 0, \mu> 0$ and $m_{1/2} = 300, 800$~GeV in the left and right panels, respectively. The red (solid) curves are calculated using the current errors in $m_t$ and $m_b$, the green dashed curve with no error in $m_t$, the violet dotted lines with $\Delta m_t = 0.5$~GeV, and the blue dashed-dotted lines with $\Delta m_t = 1$~GeV. }} \end{figure} The focus-point peak is suppressed relative to the coannihilation peak at low $m_0$ because of the theoretical sensitivity to the experimental uncertainty in the top mass. We recall that the likelihood function is proportional to $\sigma^{-1}$, and that $\sigma$, which scales with $\partial (\Omega_\chi h^2 )/ \partial m_t$, is very large at large $m_0$~\cite{ftuning}. The error due to the uncertainty in $m_t$ is far greater in the focus-point region than in the coannihilation region. Thus, even though the exponential in ${\cal L}_{\Omega_\chi h^2}$ is of order unity near the focus-point region when $\Omega_\chi h^2 \simeq 0.1$, the prefactor is very small due to the large uncertainty in the top mass. This accounts for the factor of $\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 1000$ suppression seen in Fig.~\ref{fig:WMAPFP} when comparing the two peaks of the solid red curves. We note also that there is another broad, low-lying peak at intermediate values of $m_0$. This is due to a combination of the effects of $\sigma$ in the prefactor and the exponential. We expect a bump to occur when the Gaussian exponential is of order unity, i.e., $\Omega_\chi h^2 \sim \sqrt{2}\Delta m_t \, \partial \Omega_\chi h^2/\partial m_t$. We find $\Omega_\chi h^2 \sim 10$ at large $m_0$ for our nominal value $m_t = 175$ GeV, but it varies significantly as one samples the favoured range of $m_t$ within its present uncertainty. The competition between the exponential and the prefactor would require a large theoretical uncertainty in $\Omega_\chi h^2$: $\partial \Omega_\chi h^2/\partial m_t \sim 2$ for $\Delta m_t = 5$ GeV. This occurs when $m_0 \sim 1000$ GeV, which is the position of the broad secondary peak in Fig.~\ref{fig:WMAPFP}a. At higher $m_0$, $\sigma$ continues to grow, and the prefactor suppresses the likelihood function until $\Omega_\chi h^2$ drops to $\sim 0.1$ in the focus-point region. As is clear from the above discussion, the impact of the present experimental error in $m_t$ is particularly important in this region. This point is further demonstrated by the differences between the curves in each panel, where we decrease the experimental uncertainty in $m_t$ {\it ad hoc}. As $\Delta m_t$ is decreased, the intermediate bump blends into the broad focus-point peak. When the uncertainties in $m_t$ and $m_b$ are set to 0, we obtain a narrow peak in the focus-point region.
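To make the structure of the combined likelihood explicit, the following is a minimal numpy sketch of the components defined in (\ref{eq:sigmath})--(\ref{eq:likamu}). All grids, derivatives and central values below are illustrative placeholders, not the numbers used in the actual analysis.

\begin{verbatim}
import numpy as np

def sigma_th(dO_dmt, dO_dmb, dmt=5.0, dmb=0.25):
    # Eq. (eq:sigmath), here with m_h replaced by Omega h^2: propagate
    # the uncorrelated Gaussian errors on m_t and m_b in quadrature.
    return np.hypot(dO_dmt * dmt, dO_dmb * dmb)

def L_higgs(mh, mh_grid, L_exp, s_th):
    # Eq. (eq:higlik): convolve the experimental likelihood with a
    # Gaussian theoretical likelihood of width s_th (prefactor N omitted).
    kern = np.exp(-(mh_grid - mh)**2 / (2.0 * s_th**2))
    return np.trapz(L_exp * kern, mh_grid) / (np.sqrt(2.0*np.pi) * s_th)

def L_omega(O_th, s_th, O_exp=0.1115, s_exp=0.01):
    # Eq. (eq:likamu): note the 1/sigma prefactor, which suppresses the
    # likelihood wherever the theoretical sensitivity (and hence s_th)
    # is large, as in the focus-point region discussed above.
    s2 = s_exp**2 + s_th**2
    return np.exp(-(O_th - O_exp)**2 / (2.0*s2)) / np.sqrt(2.0*np.pi*s2)

# At each (m_1/2, m_0) point, the total likelihood is then the product
# L_tot = L_higgs(...) * L_bsgamma(...) * L_omega(...).
\end{verbatim}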
Using the fully normalized likelihood function ${\cal L}_{tot}$ obtained by combining both signs of $\mu$ for each value of $\tan \beta$, we can determine the regions in the $(m_{1/2}, m_0)$ planes which correspond to specific CLs. Fig.~\ref{fig:contours} extends the previous analysis to the entire $(m_{1/2}, m_0)$ plane for $\tan \beta = 10$ and $A_0 = 0$, including both signs of $\mu$. The darkest (blue), intermediate (red) and lightest (green) shaded regions are, respectively, those where the likelihood is above 68\%, above 90\%, and above 95\%. Overall, the likelihood for $\mu < 0$ is less than that for $\mu > 0$ due to the Higgs and $b \to s \gamma$ constraints. Only the bulk and coannihilation-tail regions appear above the 68\% level, but the focus-point region appears above the 90\% level, and so cannot be excluded. \begin{figure}[h] \includegraphics[height=2.3in]{tanx10g_n.eps} \hspace {-.17in} \includegraphics[height=2.25in]{tanx10g_p.eps} \caption{\label{fig:contours} {\it Contours of the likelihood at the 68\%, 90\% and 95\% levels for $\tan \beta = 10$, $A_0 = 0$ and $\mu < 0$ (left panel) or $\mu > 0$ (right panel), calculated using information on $m_h$, $b \to s \gamma$ and $\Omega_{CDM} h^2$ and the current uncertainties in $m_t$ and $m_b$. }} \end{figure} The bulk region is more apparent in the right panel of Fig.~\ref{fig:contours} for $\mu > 0$ than it would be if the experimental error in $m_t$ and the theoretical error in $m_h$ were neglected. Fig.~\ref{fig:contourswithoutmt} complements the previous figures by showing the likelihood functions as they would appear if there were no uncertainty in $m_t$, keeping the other inputs the same. We see that, in this case, both the coannihilation and focus-point strips rise above the 68\% CL. \begin{figure}[h] \includegraphics[height=2.3in]{tan10g_dmt0_n.eps} \hspace {-.17in} \includegraphics[height=2.25in]{tan10g_dmt0_p.eps} \caption{\label{fig:contourswithoutmt} {\it As in Fig.~\protect\ref{fig:contours} but assuming zero uncertainty in $m_t$.}} \end{figure} Fig.~\ref{fig:contours50} shows the likelihood projection for $\tan \beta = 50$, $A_0 = 0$ and $\mu >0$. In this case, regions at small $m_{1/2}$ and $m_0$ are disfavoured by the $b \to s \gamma$ constraint. The coannihilation region is broadened by a merger with the rapid-annihilation funnel. Both the coannihilation and the focus-point regions feature strips allowed at the 68\% CL, and these are linked by a bridge at the 95\% CL. \begin{figure}[h] \centering \includegraphics[height=2.3in]{tanx50g_p.eps} \caption{\label{fig:contours50} {\it Likelihood contours as in Fig.~\ref{fig:contours}, but for $\tan \beta = 50$, $A_0 = 0$ and $\mu> 0$.}} \end{figure} \section{Beyond the CMSSM} The results of the CMSSM described in the previous sections are based heavily on the assumptions of universality of the supersymmetry breaking parameters. One of the simplest generalizations of this model relaxes the assumption of universality of the Higgs soft masses and is known as the NUHM \cite{eos3}. In this case, the input parameters include $\mu$ and $m_A$, in addition to the standard CMSSM inputs. In order to switch $\mu$ and $m_A$ from outputs to inputs, the two soft Higgs masses, $m_1$ and $m_2$, can no longer be set equal to $m_0$ and instead are calculated from the electroweak symmetry breaking conditions. The NUHM parameter space was recently analyzed \cite{eos3} and a sample of the results is shown in Fig. \ref{muma}.
\begin{figure}[hbtp] \includegraphics[height=2.3in]{yudi1.eps} \includegraphics[height=2.3in]{yudi2.eps} \caption{\it (a) The NUHM $(m_{1/2}, m_0)$ plane for $\tan \beta = 35$, $\mu = 400$~GeV and $m_{A} = 700$~GeV; (b) the NUHM $(\mu, m_A)$ plane for $\tan \beta = 10$, $m_0 = 100$~GeV and $m_{1/2} = 300$~GeV, with $A_0 = 0$. The (red) dot-dashed lines are the contours $m_h = 114$~GeV, and the near-vertical (black) dashed lines are the contours $m_{\chi^\pm} = 103.5$~GeV. The dark (black) dot-dashed lines indicate the GUT stability constraint. Only the areas inside these curves (small $\mu$) are allowed by this constraint. The light (turquoise) shaded areas are the cosmologically preferred regions with \protect\mbox{$0.1\leq\Omega_{\chi} h^2\leq 0.3$}. The darker (blue) portion of this region corresponds to the newer WMAP densities. The dark (brick red) shaded region is excluded because a charged particle is lighter than the neutralino, and the lighter (yellow) shaded region is excluded because the LSP is a sneutrino. The medium (green) shaded region is excluded by $b \to s \gamma$. The regions allowed by the $g-2$ constraint are shaded (pink) and bounded by solid black lines. The solid (blue) curves correspond to $m_\chi = m_A/2$. } \label{muma} \end{figure} In the left panel of Fig. \ref{muma}, we see an $m_{1/2}, m_0$ plane with a relatively low value of $\mu$. In this case, an allowed region is found when the LSP contains a non-negligible Higgsino component, which moderates the relic density independently of $m_0$. To the right of this region, the relic density is too small. In the right panel, we see an example of the $m_A, \mu$ plane. The crosses correspond to CMSSM points. In this single panel, we see examples of acceptable cosmological regions corresponding to the bulk region, the co-annihilation region and s-channel annihilation through the Higgs pseudoscalar. Rather than relax the CMSSM, it is in fact possible to further constrain the model. While the CMSSM models described above are certainly mSUGRA inspired, minimal supergravity models can be argued to be still more predictive. Let us assume that supersymmetry is broken in a hidden sector so that the superpotential can be written as a sum of two terms, $W = F(\phi) + g(\zeta)$, where $\phi$ represents all observable fields and $\zeta$ all hidden sector fields. We must furthermore choose $g(\zeta)$ such that when $\zeta$ picks up a vacuum expectation value, supersymmetry is broken. When the potential is expanded and terms inversely proportional to the Planck mass are dropped, one finds \cite{BIM} 1) scalar mass universality with $m_0 = \langle g \rangle$; 2) trilinear mass universality with $A_0 = \langle dg/d\zeta \rangle \langle \zeta \rangle + \langle g \rangle \langle \zeta \rangle^2$; and 3) $B_0 = A_0 - m_0$. In the simplest version of the theory \cite{pol}, the universal trilinear soft supersymmetry-breaking term is $A = (3 - \sqrt{3}) m_{0}$ and the bilinear soft supersymmetry-breaking term is $B = (2 - \sqrt{3}) m_{0}$, i.e., a special case of the general relation above between $B$ and $A$. Given a relation between $B_0$ and $A_0$, we can no longer use the standard CMSSM boundary conditions, in which $m_{1/2}$, $m_0$, $A_0$, $\tan \beta$, and $sgn(\mu)$ are input at the GUT scale with $\mu$ and $B$ determined by the electroweak symmetry breaking condition. Now, one is forced to input $B_0$, and instead $\tan \beta$ is calculated from the minimization of the Higgs potential \cite{eoss2}.
In Fig.~\ref{fig:Polonyi}, the contours of $\tan \beta$ (solid blue lines) in the $(m_{1/2}, m_0)$ planes for two values of ${\hat A} = A_0/m_0$, with ${\hat B} = B_0/m_0 = {\hat A} - 1$ and a fixed sign of $\mu$, are displayed \cite{eoss2}. Also shown are the contours where $m_{\chi^\pm} > 104$~GeV (near-vertical black dashed lines) and $m_h > 114$~GeV (diagonal red dash-dotted lines). The excluded regions where $m_\chi > m_{\tilde \tau_1}$ have dark (red) shading, those excluded by $b \to s \gamma$ have medium (green) shading, and those where the relic density of neutralinos lies within the WMAP range $0.094 \le \Omega_\chi h^2 \le 0.129$ have light (turquoise) shading. Finally, the regions favoured by $g_\mu - 2$ at the 2-$\sigma$ level are medium (pink) shaded. \begin{figure} \includegraphics[height=2.3in]{Pp.eps} \includegraphics[height=2.3in]{2p.eps} \caption{\it Examples of $(m_{1/2}, m_0)$ planes with contours of $\tan \beta$ superposed, for $\mu > 0$ and (a) the simplest Polonyi model with ${\hat A} = 3 - \sqrt{3}, {\hat B} = {\hat A} -1$ and (b) ${\hat A} = 2.0, {\hat B} = {\hat A} -1$. In each panel, we show the regions excluded by the LEP lower limits on MSSM particles, those ruled out by $b \to s \gamma$ decay (medium green shading), and those excluded because the LSP would be charged (dark red shading). The region favoured by the WMAP range has light turquoise shading. The region suggested by $g_\mu - 2$ is medium (pink) shaded.} \label{fig:Polonyi} \end{figure} In panel (a) of Fig.~\ref{fig:Polonyi}, we see that the Higgs constraint combined with the relic density requires $\tan \beta \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 11$, whilst the relic density also enforces $\tan \beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 20$. For a given point in the $m_{1/2} - m_0$ plane, the calculated value of $\tan \beta$ increases as ${\hat A}$ increases. This is seen in panel (b) of Fig.~\ref{fig:Polonyi}: when ${\hat A} = 2.0$, close to its maximal value for $\mu > 0$, the $\tan \beta$ contours turn over towards smaller $m_{1/2}$, and only relatively large values $25 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \tan \beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 35$ are allowed by the $b \to s \gamma$ and $\Omega_{CDM} h^2$ constraints, respectively. For any given value of ${\hat A}$, there is only a relatively narrow range allowed for $\tan \beta$.
In Fig.~\ref{fig:newM}, we show a comparison of the numbers of different MSSM particles that should be observable at different accelerators in the various benchmark scenarios~\cite{newbench}, ordered by their consistency with $g_\mu -2$. The qualities of the prospective sparticle observations at hadron colliders and linear $e^+ e^-$ colliders are often very different, with the latter's clean experimental environments providing prospects for measurements with better precision. Nevertheless, Fig.~\ref{fig:newM} already restates the clear message that hadron colliders and linear $e^+ e^-$ colliders are largely complementary in the classes of particles that they can see, with the former offering good prospects for strongly-interacting sparticles such as squarks and gluinos, and the latter excelling for weakly-interacting sparticles such as charginos, neutralinos and sleptons. \begin{figure}[h] \centering \includegraphics[height=3.2in]{fig3.eps} \caption{\label{fig:newM} {\it Summary of the numbers of MSSM particles that may be detectable at various accelerators in the updated benchmark scenarios. We see that the capabilities of the LHC and of linear $e^+ e^-$ colliders are largely complementary. We re-emphasize that mass and coupling measurements at $e^+ e^-$ colliders are usually much cleaner and more precise than at hadron-hadron colliders such as the LHC, where, for example, it is not known how to distinguish the light squark flavours. }} \end{figure} Clearly, the centre-of-mass energy of any future linear collider is crucial to the supersymmetry discovery potential of the machine. This is seen in Fig. \ref{fig:newM} for the benchmark points, as more sparticles become observable at higher CM energy. We can emphasize this point in general models by plotting the masses of the two lightest (observable) sparticles in supersymmetric models. For example, in Fig. \ref{fig:VSP10p} \cite{eoss7}, a scatter plot of the masses of the lightest visible supersymmetric particle (LVSP) and the next-to-lightest visible supersymmetric particle (NLVSP) is shown for the CMSSM. Once again, the points selected satisfy all phenomenological constraints. We do not consider the LSP itself to be visible, nor any heavier neutral sparticle that decays invisibly inside the detector, such as ${\tilde \nu} \to \nu \chi$ when ${\tilde \nu}$ is the next-to-lightest sparticle in a neutralino LSP scenario. The LVSP and the NLVSP are the lightest sparticles likely to be observable in collider experiments. \begin{figure}[h] \centering \includegraphics[height=3.2in]{VSPp_CMSSM.eps} \caption{ {\it Scatter plots of the masses of the lightest visible supersymmetric particle (LVSP) and the next-to-lightest visible supersymmetric particle (NLVSP) in the CMSSM for $\mu > 0$. The darker (blue) triangles satisfy all the laboratory, astrophysical and cosmological constraints. For comparison, the dark (red) squares and medium-shaded (green) crosses respect the laboratory constraints, but not those imposed by astrophysics and cosmology. In addition, the (green) crosses represent models which are expected to be visible at the LHC. The very light (yellow) points are those for which direct detection of supersymmetric dark matter might be possible.} \label{fig:VSP10p} } \end{figure} All points shown in Fig. \ref{fig:VSP10p} satisfy the phenomenological constraints discussed above.
The dark (red) squares represent those points for which the relic density is outside the WMAP range, and for which all coloured sparticles (squarks and gluinos) are heavier than 2 TeV. The CMSSM parameter reach at the LHC has been analyzed in~\cite{Baer2}. To within a few percent accuracy, the CMSSM reach contours presented in~\cite{Baer2} coincide with the 2-TeV contour for the lightest squark (generally the stop) or gluino, so we regard the dark (red) points as unobservable at the LHC. Most of these points have $m_{NLVSP} \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.2$~TeV. Conversely, the medium-shaded (green) crosses represent points where at least one squark or gluino has a mass less than 2 TeV and should be observable at the LHC. The spread of the dark (red) squares and medium-shaded (green) crosses, by as much as 500~GeV or more in some cases, reflects the maximum mass splitting between the LVSP and the NLVSP that is induced in the CMSSM via renormalization effects on the input mass parameters. The amount of this spread also reflects our cutoff $|A_0| < 1$ TeV, which controls the mass splitting of the third generation sfermions. The darker (blue) triangles are those points respecting the cosmological cold dark matter constraint. Comparing with the regions populated by dark (red) squares and medium-shaded (green) crosses, one can see which of these models would be detectable at the LHC, according to the criterion in the previous paragraph. We see immediately that the dark matter constraint restricts the LVSP masses to be less than about 1250~GeV and NLVSP masses to be less than about 1500~GeV. In most cases, the identity of the LVSP is the lighter $\tilde \tau$. While pair-production of the LVSP would sometimes require a CM energy of about 2.5~TeV, in some cases there is a lower supersymmetric threshold due to the associated production of the LSP $\chi$ with the next lightest neutralino $\chi_2$~\cite{djou}. Examining the masses and identities of the sparticle spectrum at these points, we find that $E_{CM} \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 2.2$ TeV would be sufficient to see at least one sparticle, as shown in Table~1. Similarly, only a LC with $E_{CM} \ge 2.5$~TeV would be `guaranteed' to see two visible sparticles (in addition to the $\chi$ LSP), somewhat lower than the 3.0 TeV one might obtain by requiring the pair production of the NLVSP. Points with $m_{LVSP} \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 700$ GeV are predominantly due to rapid annihilation via direct-channel $H,A$ poles, while points with 200 GeV $\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_{LVSP} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}$ 700 GeV are largely due to $\chi$-slepton coannihilation. \begin{table}[htb] \begin{center} \caption{\it Centre-of-mass energy (in TeV) required to observe one or two sparticles at a future LC in the CMSSM and NUHM.} \label{tab:alpha} \vskip .3cm \begin{tabular}{|c|c|c|c|} \hline {\it Model} & $sgn(\mu)$ & {\it one sparticle} & {\it two sparticles} \\ \hline CMSSM & $\mu > 0 $ & 2.2 & 2.6 \\ &$\mu < 0$& 2.2 & 2.5 \\ \hline NUHM & $\mu > 0 $ & 2.4 & 2.8 \\ &$\mu < 0$& 2.6& 2.9 \\ \hline \end{tabular} \end{center} \end{table} An $E_{CM} = 500$~GeV LC would be able to explore the `bulk' region at low $(m_{1/2}, m_0) $, which is represented by the small cluster of points around $m_{LVSP} \sim 200$ GeV. 
It should also be noted that there are a few points with $m_{LVSP} \sim 100$ GeV which are due to rapid annihilation via the light Higgs pole. These points all have very large values of $m_0$, which relax the Higgs mass and chargino mass constraints, particularly when $m_t = 178$ GeV. An LC with $E_{CM} = 1000$~GeV would be able to reach some way into the coannihilation `tail', but would not cover all the WMAP-compatible dark (blue) triangles. Indeed, about a third of these points are even beyond the reach of the LHC in this model. Finally, the light (yellow) filled circles are points for which the elastic $\chi$-$p$ scattering cross section is larger than $10^{-8}$~pb. Because the LSP as dark matter is present locally, there are many avenues for pursuing dark matter detection. Direct detection techniques rely on an ample neutralino-nucleon scattering cross-section. The prospects for direct detection for the benchmark points discussed above \cite{EFFMO} are shown in Fig.~\ref{fig:DM}. This figure shows the elastic spin-independent and spin-dependent scattering cross sections of supersymmetric relics on protons. Indirect searches for supersymmetric dark matter via the products of annihilations in the galactic halo or inside the Sun also have prospects in some of the benchmark scenarios \cite{EFFMO}. \begin{figure}[h] \includegraphics[height=1.65in]{sigmaP_scalar.eps} \includegraphics[height=1.65in]{sigmaP_spin.eps} \caption{\label{fig:DM} {\it Elastic cross sections for (a) spin-independent scattering and (b) spin-dependent scattering on protons. Our predictions (blue crosses) are compared with those of {\tt Neutdriver} \cite{Neutdriver} (red circles) for neutralino-nucleon scattering. Projected sensitivities (a) for CDMS II~\cite{Schnee:1998gf} and CRESST~\cite{Bravin:1999fc} (solid) and GENIUS~\cite{GENIUS} (dashed) and (b) for a 100 kg NAIAD array~\cite{Spooner:2000kt} are also shown. }} \end{figure} In Fig.~\ref{fig:Andyall}, we display the allowed ranges of the spin-independent cross sections in the NUHM when we randomly sample $\tan \beta$ as well as the other NUHM parameters \cite{efloso}. The raggedness of the boundaries of the shaded regions reflects the finite sample size. The dark shaded region includes all sample points after the constraints discussed above (including the relic density constraint) have been applied. In a random sample, one often hits points which are perfectly acceptable at low energy scales, but when the parameters are run to high energies approaching the GUT scale, one or more of the sparticle squared masses run negative \cite{fors}. Here, this is referred to as the GUT constraint. The medium shaded region embodies those points after the GUT constraint has been applied. After incorporating all the cuts, including that motivated by $g_\mu - 2$, we find the light shaded region, where the scalar cross section lies in the range $10^{-6}$~pb $\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} \sigma_{SI} \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 10^{-10}$~pb, with somewhat larger (smaller) values being possible in exceptional cases. If the $g_\mu - 2$ cut is removed, the upper limits on the cross sections are unchanged, but much lower values become possible: $\sigma_{SI} \ll 10^{-13}$~pb. The effect of the GUT constraint on more general supersymmetric models was discussed in \cite{eoss3}.
\begin{figure}[h] \centering \includegraphics[height=3.2in]{scalarrandtbcdms.eps} \caption{\label{fig:Andyall} {\it Ranges of the spin-independent cross section in the NUHM. The ranges allowed by the cuts on $\Omega_{\chi} h^2$, $m_h$ and $b \to s \gamma$ have dark shading, those still allowed by the GUT stability cut have medium shading, and those still allowed after applying all the cuts including $g_\mu - 2$ have light shading. The pale shaded region corresponds to the extra area of points with low relic densities, whose cross sections have been rescaled appropriately. Also shown are the limits from the CDMS\protect\cite{cdms} and Edelweiss\protect\cite{edel} experiments as well as the recent CDMSII result \protect\cite{cdms2} on the neutralino-proton elastic scattering cross section as a function of the neutralino mass. The CDMSII limit is stronger than the Edelweiss limit which is stronger than the previous CDMS limit at higher $m_\chi$. The result reported by DAMA \protect\cite{dama} is found in the upper left.}} \end{figure} The results from this analysis \cite{efloso} for the scattering cross section in the NUHM (which by definition includes all CMSSM results) are compared with the previous CDMS \cite{cdms} and Edelweiss \cite{edel} bounds as well as the recent CDMSII results \cite{cdms2} in Fig.~\ref{fig:Andyall}. While previous experimental sensitivities were not strong enough to probe predictions of the NUHM, the current CDMSII bound has begun to exclude realistic models and it is expected that these bounds improve by a factor of about 20. This work was partially supported by DOE grant DE-FG02-94ER-40823. \input{referenc} \printindex \end{document}
\section{Introduction} Learning a task-relevant metric among samples is a common application of machine learning, with use in retrieval, clustering, and ranking. A classic example of retrieval is in visual recognition where, given an object image, the system tries to identify the class based on an existing labeled dataset. To do this, the model can learn a measure of similarity between pairs of images, assigning small distances between images of the same object type. Given the broad successes of deep learning, there has been a recent surge of interest in deep metric learning---using neural networks to automatically learn these similarities~\citep{hoffer2015deeptriplet,huang2016local,zhang2020deepspherical}. The traditional approach to deep metric learning learns an embedding function over the input space so that a simple distance measure between pairs of embeddings corresponds to task-relevant spatial relations between the inputs. The embedding function $f$ is computed by a neural network, which is trained to encode those spatial relations. For example, we can use the basic Euclidean distance metric to measure the distance between two samples $x$ and $y$ as $\|f(x) - f(y)\|_2$. This distance is critical in two ways. First, it is used to define the loss function, such as the triplet or contrastive loss, which dictates \textit{how} this distance should be used to capture task-relevant properties of the input space. Second, since $f$ is trained to optimize the loss function, the distance influences the embedding function that is learned.\footnote{ Outside of deep learning, the classic approach to metric learning aims to fit a Mahalanobis distance, represented as $\|W(x - y)\|_2$. This can be viewed as a special case where the embedding is a linear layer.} This approach has limitations. When the underlying reference distance is asymmetric or does not follow the triangle inequality, a standard metric cannot accurately capture the data. An important example is clustering over probability distributions, where the standard k-means approach with Euclidean distance is sub-optimal, leading to alternatives like the KL-divergence \cite{banerjee2005clustering}. Other cases include textual entailment and learning graph distances which disobey the triangle inequality. Can we generalize the distance measure in a manner that can be learned? A natural class of distances that includes common measures such as the squared Euclidean distance is that of the Bregman divergences~\cite{Bregman1967}. They are parametrized by a strictly convex function and measure the distance between two points $x$ and $y$ as the first-order Taylor approximation error of the function originating from $y$ at $x$. The goal is that by learning a Bregman divergence we can infer an appropriate metric from data, instead of fixing the Euclidean distance as the final metric between embeddings. In this work we describe Neural Bregman Divergences (NBD). Our core contributions are: 1) we present a novel approach to accurately learn Bregman measures using input convex neural networks (\cref{sec:nbd}); 2) we show that our method is superior on existing Bregman divergence tasks, including regression, ranking, and clustering (\cref{sec:experiments}); 3) we further study the performance of our method on asymmetric tasks where the underlying metric is not known to be Bregman, and our method performs reasonably on such tasks (\cref{sec:experiments_non_breg}). We also show that the previous deep divergence learning approach fails to learn effectively on many tasks.
We thus obtain the first successful method for learning neural Bregman divergences, providing a foundation and tooling for better developing and studying asymmetric distance learning. \section{Neural Bregman divergence learning} \label{sec:nbd} A Bregman divergence computes the divergence between two points $x$ and $y$ that live in a space $\mathcal{X}$ by taking first-order Taylor approximations of a generating function $\phi$. This generating function is defined over $\mathcal{X}$ and can be thought of as (re-)encoding points from $\mathcal{X}$. A proper and informative $\phi$ is incredibly important: different $\phi$ can capture different properties of the spaces over which they are defined. Our aim in this paper is to learn Bregman divergences by providing a neural method for learning informative functions $\phi$. More formally, let $x, y \in \mathcal{X}$, where $\mathcal{X} \subseteq \mathbb{R}^d$. The generating function $\phi$ must be continuously differentiable and strictly convex, $\phi:\mathcal{X} \to \mathbb{R}$. We refer to the \textbf{Bregman divergence} parametrized by $\phi$ as $D_\phi(x, y)$, defined as: \begin{equation} D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y), x - y \rangle, \label{eqn:bregman} \end{equation} where $\langle \cdot, \cdot\rangle$ represents the dot product and $\nabla \phi(y)$ is the gradient of $\phi$ evaluated at $y$. For example, if $\mathcal{X} = \mathbb{R}^d$ and $\phi$ is the squared $L_2$ norm ($\phi(y) = \|y\|_2^2$), then $\nabla \phi(y) = 2y$. This yields the divergence $D_\phi(x,y) = \|x-y\|^2_2$. A properly defined $\phi$ can capture critical, inherent properties of the underlying space. By learning $\phi$ via \cref{eqn:bregman}, we aim to automatically learn these properties. For example, Bregman divergences can capture asymmetrical relations: if $\mathcal{X}$ is the $D$-dimensional simplex representing $D$-dimensional discrete probability distributions, then $\phi(x) = \langle x, \log x\rangle$ yields the KL divergence, $D_\phi(x, y) = \sum_d x_d \log\frac{x_d}{y_d}$. Perhaps surprisingly, as we show in this paper, the central requirement of a Bregman divergence---that $\phi$ is strictly convex and continuously differentiable---is not a big limitation. Focusing on the hypothesis space of Bregman divergences is valuable because many core machine learning measures are special cases of Bregman divergences, including the squared Euclidean, Kullback-Leibler, and Itakura-Saito divergences. While special cases of the Bregman divergence are used today, and many general results have been proven over the space of Bregman measures, less progress has been made in \textit{learning} Bregman divergences. Prior works~\cite{cilingir2020deep,siahkamari2020learning} have examined max-affine representations of $\phi$ for mathematical convenience, as they allow terms on the right-hand side of \cref{eqn:bregman} to cancel, so that one can work directly with the representation $D_\phi(x,y)$. By showing that their representation results in a valid $D_\phi(x,y)$ under the correct constraints, they are able to design their learning approach to maintain those constraints. However, this comes at significant cost to run-time and representational capacity~\cite{cilingir2020deep,siahkamari2020learning}. Furthermore, a max-affine representation, being piecewise linear, does not yield a smooth (continuously differentiable) divergence. Instead, we advocate for learning the convex function $\phi$ directly, as we describe below.
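Before turning to our representation of $\phi$, note that \cref{eqn:bregman} can be evaluated directly with automatic differentiation. The following PyTorch sketch, which is an illustration rather than our training code, recovers the two closed-form examples above: the squared Euclidean distance, and the KL divergence for points on the simplex.

\begin{verbatim}
import torch

def bregman(phi, x, y):
    # D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>, Eq. (1).
    y = y.detach().requires_grad_(True)
    (grad_y,) = torch.autograd.grad(phi(y), y)
    return phi(x) - phi(y) - torch.dot(grad_y, x - y)

x = torch.tensor([0.2, 0.3, 0.5])  # both points lie on the simplex
y = torch.tensor([0.3, 0.3, 0.4])

sq_l2 = lambda v: v.pow(2).sum()           # phi(v) = ||v||_2^2
print(bregman(sq_l2, x, y))                # equals ||x - y||_2^2
print((x - y).pow(2).sum())

neg_ent = lambda v: (v * v.log()).sum()    # phi(v) = <v, log v>
print(bregman(neg_ent, x, y))              # equals KL(x || y) on the simplex
print((x * (x / y).log()).sum())
\end{verbatim}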
\subsection{Representing $D_\phi$ via $\phi$ directly} To obtain an appropriate convex function $\phi$ to represent $D_\phi$, we must resolve two constraints: 1) how to compute $D_\phi$ from $\phi$ alone in an efficient manner; and 2) how to learn $\phi$ itself while guaranteeing that it is convex. We find that with some strategic design choices both of these constraints can be satisfied easily in a fully differentiable manner over the input space. This essentially reduces to learning the best $\phi$ among the class of strictly convex functions. \textbf{Efficient Computation.} The first constraint of efficient computation can be tackled using a technique known as double backpropagation~\cite{Drucker} to handle the $\nabla \phi(y)$ term. Naively, backpropagating through $\nabla \phi(y)$ would involve constructing the Hessian of $\phi$, with a resulting quadratic increase in computation and memory use. Double backpropagation allows us to use automatic differentiation to compute gradients with respect to the inputs efficiently; in particular, the dot product between a gradient and another vector has a specialized ``Jacobian-vector product''~\cite{Frostig2021} operation that ensures $\langle \nabla \phi(y), x - y \rangle$ can be computed at the cost of evaluating $\phi(y)$ one additional time. Since there are already three calls to $\phi$, this is only a 25\% increase in computational overhead to backpropagate through \cref{eqn:bregman}, provided we have a learnable representation of $\phi$. This functionality is implemented in PyTorch in the \texttt{torch.autograd.functional} API \cite{paszke2017automatic}. \textbf{Convexity of $\phi$.} To represent $\phi$, we adopt the Input Convex Neural Network (ICNN) developed by \citet{amos2017input}. The ICNN composes linear layers with non-negative weights $W^{+}$, affine functions of the input with unconstrained weights $U$, and convex, non-decreasing activation functions $g(\cdot)$. The composition of these three components for the $i$th layer of an ICNN is given by \cref{eq:icnn}, with $z_i$ the $i$th layer's input and $z_{i+1}$ its output, \begin{equation} \label{eq:icnn} z_{i+1}=g\left(W_{i}^{+} z_{i}+ U_{i} z_0 +b_{i}\right). \end{equation} By construction, the resulting neural network is convex in its input. \Citet{chen2018optimal} and \citet{pitis2020inductive} have shown, under specific conditions, that ICNNs universally approximate convex functions. Prior works on the ICNN have only tried piecewise linear activation functions such as the ReLU, $g(x) = \max(x,0)$; we instead use the Softplus activation $g(x) = \log(1 + \exp(x))$, which lends the network smoothness and strict convexity. This is an important design choice, as backpropagating through $\nabla \phi(y)$ involves second derivatives, which for any piecewise linear activation are zero almost everywhere. This causes vanishing gradients in the computation of $\langle \nabla \phi(y), x - y \rangle$ and restricts the model's capacity to learn; in our tests, ReLU activation functions prevented effective learning entirely. Thus our choice of $g(x)$, combined with an appropriate parametrization of the non-negative layers in the ICNN, outperforms the default approach in divergence learning tasks. \begin{wrapfigure}[20]{r}{0.6\textwidth} \vspace{-40pt} \begin{minipage}{0.6\textwidth} \begin{algorithm}[H] \caption{\textbf{Neural Bregman Divergence (NBD) Learning}.
Given data tuples $(a_i, b_i)$, our approach (1) learns $f_\theta$ to compute effective ways of featurizing $a_i$ and $b_i$; and (2) learns a function $\phi$ that can be used to compute a Bregman divergence value $\hat{y}$ between the featurized data points. The computed Bregman divergence is trained via a task-specific loss function $\ell$ to be close to a \textit{target} divergence value $y_i$. If a target divergence value isn't available, an implicit loss function can be used.} \label{algo:nbd} \begin{algorithmic}[1] \Require Dataset of pairs and target distances, Loss function $\ell(\cdot, \cdot) : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ \State $f_\theta \gets$ an arbitrary neural network used as a feature extractor \State $\phi \gets$ an ICNN parameterized as in \cref{eq:icnn} \For{each data tuple $(\boldsymbol{a}_i, \boldsymbol{b}_i)$ with label $y_i$ in dataset} \State $\boldsymbol{x} \gets f_\theta(\boldsymbol{a}_i)$ \Comment{\textcolor{blue}{Perform feature extraction}} \State $\boldsymbol{y} \gets f_\theta(\boldsymbol{b}_i)$ \State $\mathit{rhs} \gets \langle \nabla \phi(\boldsymbol{y}), \boldsymbol{x} - \boldsymbol{y} \rangle$ \Comment{\textcolor{blue}{Using double backprop}} \State $\hat{y} \gets \phi(\boldsymbol{x}) - \phi(\boldsymbol{y}) - \mathit{rhs}$ \Comment{\textcolor{blue}{Empirical Bregman }} \State $\ell\left(\hat{y}, y_i\right).\operatorname{backward}()$ \Comment{\textcolor{blue}{Compute gradients}} \State update parameters of $\phi$ and $\theta$ \EndFor \State \Return Jointly trained feature extractor $f_\theta$ and learned Bregman Divergence $\phi$ \end{algorithmic} \end{algorithm} \end{minipage} \end{wrapfigure} \subsection{Joint Training} The original feature space is rarely ideal for computing distances between samples. Classical metric learning generally attempts to apply a linear transformation to the feature space in order to apply a fixed distance function $D(\cdot, \cdot)$ such as the Euclidean distance \cite{jain2012metric, kulis2012metric}. In deep metric learning, a neural network $f_\theta$ is used to embed the samples into a latent space where the distance function is more useful \cite{musgrave2020metric}. In our approach, instead of fixing the distance function, we also learn a Bregman divergence as the measure: \begin{multline} D_\phi(f_\theta(x), f_\theta(y)) =\\ \phi(f_\theta(x)) - \phi(f_\theta(y)) - \\ \langle \nabla \phi(\tilde y), f_\theta(x) - f_\theta(y) \rangle \end{multline} where $\tilde y = f_\theta(y)$. Note that we now have two sets of parameters to learn: those associated with $\phi$ and those associated with the encoder ($\theta$). During training, they are learned simultaneously through gradient descent, which involves back-propagating through the gradient function $\nabla \phi(\cdot)$ to update $\theta$ via double backpropagation \cite{Drucker}. We summarize this process in \cref{algo:nbd}. The metric model accepts two samples as input and estimates the divergence between them. When a target divergence value is available, the metric can be trained using a regression loss function such as mean squared error. Otherwise, an implicit comparison such as a triplet or contrastive loss can be used.
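To make \cref{algo:nbd} concrete, the following PyTorch sketch implements one supervised training pass. The softplus reparametrization used here to keep the $W^{+}$ weights non-negative, the layer sizes, and the random batch are illustrative assumptions on our part; detailed architectures and training protocols are given in the Appendix.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """phi(z0) built from Eq. (2): z_{i+1} = g(W_i^+ z_i + U_i z_0 + b_i),
    with g = softplus. Non-negativity of W_i^+ is enforced here by a
    softplus reparametrization (one of several valid choices)."""
    def __init__(self, dim, hidden=64, depth=3):
        super().__init__()
        self.U = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(depth)]
                               + [nn.Linear(dim, 1)])
        self.W = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(hidden, hidden))
             for _ in range(depth - 1)]
            + [nn.Parameter(0.1 * torch.randn(1, hidden))])

    def forward(self, z0):
        z = F.softplus(self.U[0](z0))
        for W, U in zip(self.W[:-1], self.U[1:-1]):
            z = F.softplus(F.linear(z, F.softplus(W)) + U(z0))
        return (F.linear(z, F.softplus(self.W[-1])) + self.U[-1](z0)).squeeze(-1)

def bregman_batch(phi, x, y):
    # Row-wise D_phi(x, y); y must require grad (it is produced by f_theta).
    # create_graph=True keeps the graph so the loss can backpropagate
    # through grad phi(y) itself ("double backpropagation").
    phi_y = phi(y)
    (grad_y,) = torch.autograd.grad(phi_y.sum(), y, create_graph=True)
    return phi(x) - phi_y - ((x - y) * grad_y).sum(-1)

f_theta = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))
phi = ICNN(dim=16)
opt = torch.optim.Adam(list(f_theta.parameters()) + list(phi.parameters()),
                       lr=1e-3)

a, b = torch.randn(32, 20), torch.randn(32, 20)  # one batch of tuples (a_i, b_i)
target = torch.rand(32)                          # target divergences y_i

d_hat = bregman_batch(phi, f_theta(a), f_theta(b))
loss = F.mse_loss(d_hat, target)                 # regression loss l(y_hat, y_i)
opt.zero_grad(); loss.backward(); opt.step()     # updates both theta and phi
\end{verbatim}

For the implicit case, \texttt{d\_hat} values for positive and negative pairs would instead feed a triplet or contrastive loss.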
\section{Comparison to related work} \label{sec:related} In classic metric learning methods, a linear or kernel transform on the ambient feature space is used, combined with a standard distance function such as the Euclidean or cosine distance. The linear case is equivalent to Mahalanobis distance learning. Examples of such approaches include \cite{xing2002distance, kulis2012metric, jain2012metric, kulis2009low}. Bregman divergences generalize many standard distance measures and can further introduce useful properties such as asymmetry. They have classically been used in machine learning for clustering, by modifying the distance metric used in common algorithms such as k-means \cite{banerjee2005clustering, wu2009learning}. One of the first methods to learn a Bregman divergence fits a non-parametric kernel that gives a local Mahalanobis metric; the coefficients for the data points are fitted using subgradient descent \cite{wu2009learning}. More recently, \citet{siahkamari2020learning} directly learn a Bregman divergence by approximating the convex function $\phi$ using piecewise affine functions, formulated and solved as a convex optimization problem \cite{siahkamari2020learning}. While theoretical approximation error bounds are available in this setting, such approaches do not scale well to large datasets or the non-convex optimization typical of deep learning. We include this benchmark, denoted \textit{PBDL}, in our ranking and clustering tasks. In the same vein as PBDL, \citet{cilingir2020deep} aim to learn a deep Bregman divergence using a neural network embedding followed by separate dense subnetworks that represent piecewise affine functions. Because the convex function $\phi$ is approximated from below by affine functions, the tangent at input $y$ coincides with the affine piece active at $\phi(y)$, denoted $f_y$. Letting $f_x(x)$ denote the value of the affine piece active at $x$, the Bregman divergence $D(x, y)$ simplifies to $f_x(x) - f_y(x)$. The network, consisting of the embedding layers and the affine subnetworks, is trained using gradient descent. We include this method, denoted \textit{Deep-div}, as our primary Bregman benchmark. While the method performs reasonably on Bregman ranking and clustering tasks, we observe issues when training the network as described in their paper on more complex tasks such as deep metric learning or regression. Our method directly learns a continuously differentiable input convex neural network as $\phi$ rather than a piecewise approximation, and in doing so converges consistently to lower-error solutions. \Citet{pitis2020inductive} approach asymmetric distance learning by fitting a norm $N$ with modified neural networks that satisfy the norm axioms, and using the induced distance $N(x-y)$. They introduce two versions that we include as baselines: the first (Deepnorm) parametrizes $N$ with a modified ICNN that satisfies properties such as non-negativity and subadditivity; the second (Widenorm) computes a nonlinear transformation of a set of Mahalanobis norms. By construction, these metrics allow for asymmetry but still satisfy the triangle inequality. The Bregman divergence, on the other hand, does not necessarily obey the triangle inequality. This is appealing in many situations, like image recognition, where the triangle inequality may be too restrictive. As \citet{pitis2020inductive} discuss, whether to impose the triangle inequality in other applications, such as language processing, is not obvious and needs further study. We note that prior works are also computationally expensive, with Deepnorm requiring an $O(n^2)$ loop to compute pairwise distances, and Deep-div requiring a further $O(k)$ loop over max-affine components. In contrast, our approach can be vectorized as operations over tensors, resulting in the same computational complexity as the Euclidean distance. We discuss this further in \cref{sec:discussion}.
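To illustrate this point, the sketch below (assuming, as with our ICNN, that $\phi$ is applied independently to each row of a batch; the generator and shapes are illustrative) computes all $n \times m$ pairwise divergences with two batched evaluations of $\phi$, one gradient call, and a matrix product, with no explicit Python loops:

\begin{verbatim}
import torch

def pairwise_bregman(phi, X, Y):
    # D[i, j] = phi(x_i) - phi(y_j) - <grad phi(y_j), x_i - y_j>.
    Y = Y.detach().requires_grad_(True)
    phi_Y = phi(Y)
    # Summing before grad returns the stacked per-row gradients,
    # because phi acts on each row of Y independently.
    (G,) = torch.autograd.grad(phi_Y.sum(), Y)
    inner = X @ G.T - (Y * G).sum(-1)   # <grad phi(y_j), x_i - y_j>
    return phi(X)[:, None] - phi_Y[None, :] - inner

phi = lambda Z: (Z ** 2).sum(-1)        # row-wise squared-L2 generator
X, Y = torch.rand(128, 16), torch.rand(64, 16)
D = pairwise_bregman(phi, X, Y)         # shape (128, 64)
assert torch.allclose(D, torch.cdist(X, Y) ** 2, atol=1e-4)
\end{verbatim}

The computation reduces to dense tensor algebra, matching the asymptotics of a batched Euclidean distance.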
\textbf{Information Retrieval and Bregman Divergences:} As part of our goal in developing learnable Bregman divergences, we note that there is a long history of Bregman measures in information retrieval. Early work by Cayton showed that Bregman divergences have properties similar to the triangle inequality that allow for performing database searches in sub-linear time~\cite{Cayton:2008:FNN:1390156.1390171,Cayton:2009:EBR:2984093.2984121}. Since then, a number of works have explored improving nearest-neighbor retrieval speed under divergences, so that more domain-specific asymmetric measures like the KL or Itakura–Saito divergences can be used. This includes methods for accelerating exact neighbor retrieval~\cite{Nielsen2009,Song2020b,10.1145/2746539.2746595,Zhang:2009:SSB:1687627.1687630} and faster approximate retrieval~\cite{10.1145/2261250.2261255,Naidan2012,Mu2010NonMetricLH}. Our hope is that in future work these carefully chosen divergences may instead be learned from data, in the same manner that differentiable learning has improved image, speech, and signal classification in recent years. \section{Divergence learning experiments} \label{sec:experiments} \begin{table}[t] \centering \begin{tabular}{lcccccc} \toprule & \multicolumn{2}{c}{Exponential} & \multicolumn{2}{c}{Gaussian} & \multicolumn{2}{c}{Multinomial} \\ Model & Purity & Rand Index & Purity & Rand Index & Purity & Rand Index \\ \cmidrule{2-3} \cmidrule{4-5} \cmidrule{6-7} \midrule Deep-div & $0.665_{\ 0.12}$ & $0.788_{\ 0.08}$ & $0.867_{\ 0.12}$ & $0.910_{\ 0.07}$ & $0.876_{\ 0.08}$ & $0.919_{\ 0.04}$ \\ Euclidean & $0.365_{\ 0.02}$ & $0.615_{\ 0.02}$ & $0.782_{\ 0.11}$ & $0.869_{\ 0.05}$ & $0.846_{\ 0.09}$ & $0.900_{\ 0.05}$ \\ Mahalanobis & $0.452_{\ 0.05}$ & $0.697_{\ 0.02}$ & $0.908_{\ 0.06}$ & $0.935_{\ 0.03}$ & $0.894_{\ 0.06}$ & $0.926_{\ 0.03}$ \\ NBD & $\mathbf{0.735}_{\ 0.08}$ & $\mathbf{0.830}_{\ 0.03}$ & $\mathbf{0.913}_{\ 0.05}$ & $\mathbf{0.938}_{\ 0.03}$ & $\mathbf{0.921}_{\ 0.02}$ & $\mathbf{0.939}_{\ 0.01}$ \\ PBDL & $0.718_{\ 0.08}$ & $\mathbf{0.830}_{\ 0.04}$ & $0.806_{\ 0.14}$ & $0.874_{\ 0.09}$ & $0.833_{\ 0.08}$ & $0.895_{\ 0.04}$ \\ \bottomrule \end{tabular} \caption{We cluster data generated from a mixture of exponential, Gaussian, and multinomial distributions. Learning the metric from data is superior to using a standard metric such as the Euclidean distance. Our approach NBD\xspace furthermore outperforms all other divergence learning methods. Means and standard deviations are reported over 10 runs.} \label{tab:distr} \vspace{-5mm} \end{table} We conduct several experiments that validate our approach as an effective means of learning divergences across a number of tasks. First, in \cref{sec:clustering}, we demonstrate that NBD\xspace \textit{effectively learns standard Bregman retrieval and clustering benchmarks}, comparing favorably with the previous Bregman methods PBDL \cite{siahkamari2020learning} and Deep-div \cite{cilingir2020deep}. In addition, we construct a Bregman regression task in \cref{sec:sim} where the labels are known divergences over raw feature vectors, so that the \textit{only learning task is that of the divergence itself}.
Finally, in \cref{sec:bregMNIST}, we investigate the ability of our method to \textit{learn the ground truth divergence while simultaneously learning to extract a needed representation}, training a sub-network's parameters $\theta$ and our divergence $\phi$ jointly. This is typified by the ``BregMNIST'' benchmark, which combines learning the MNIST digits with the only supervisory signal being the ground-truth divergence between the digit values. Refer to the Appendix for detailed training protocols and data generation procedures. \subsection{Bregman ranking and clustering} \label{sec:clustering} Our first task expands the distributional clustering experiments in \cite{banerjee2005clustering,cilingir2020deep}. The datasets consist of mixtures of $N=1000$ points in $\mathbb{R}^{10}$ from five clusters, where the multivariate distribution given cluster identity is non-isotropic Gaussian, exponential, or multinomial. Given a distance measure, a generalized k-means algorithm can be used to cluster the data points. While standard metrics, such as the L2 distance and KL-divergence, may be ideal for specific forms of data (e.g., isotropic Gaussian and simplex data, respectively), our goal is to learn an appropriate measure directly from a separate labeled training set. In particular, because a Bregman divergence is uniquely associated with each member of the exponential family \cite{banerjee2005clustering}, our method is especially suited for clustering data from a wide range of distributions which may not be known ahead of time. To learn the metric from data, we apply triplet mining, including all triplets with non-zero loss \cite{hoffer2015deeptriplet}. We use the same method to train all models except for the Euclidean baseline, which requires no training, and PBDL, for which we directly use the authors' Python code. As shown in Table \ref{tab:distr}, our method NBD\xspace gives improved clustering over all baselines for every distribution. In particular, standard k-means with Euclidean distance is clearly inadequate. While the Mahalanobis baseline shows significant improvement, it is only comparable to NBD\xspace in the Gaussian case, where a matrix can be learned to scale the clusters to be isotropic. This task indicates the importance of learning flexible divergences from data. After demonstrating success in distributional clustering, we now apply our method to ranking and clustering real data (Table \ref{tab:ranking}). For the ranking tasks, the test set is treated as queries for which the learned model retrieves items from the training set in order of increasing divergence. The ranking is scored using mean average precision (MAP) and area under the ROC curve (AUC). Our method again outperforms the other Bregman learning methods on the large majority of datasets and metrics. \subsection{Divergence regression} \label{sec:sim} As a confirmation that our method can faithfully represent Bregman divergences, we use simulated data to demonstrate that it efficiently learns divergences between pairs of inputs. We generate pairs of 20-dimensional vectors from a standard Normal distribution, with 10 informative features used to compute the target divergence and 10 distractors. To make the task more challenging and realistic, we add various levels of correlation among all features, making the informative features harder to separate.
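As one concrete (hypothetical) instantiation of this setup, the sketch below draws correlated pairs via an equicorrelated Gaussian covariance and computes a squared-Euclidean target from the informative half of the features; the covariance structure and target shown are illustrative choices, and our exact generation procedure is described in the Appendix.

\begin{verbatim}
import torch

def make_batch(n=1000, d=20, d_informative=10, rho=0.5):
    # Equicorrelated features: Sigma = (1 - rho) I + rho 11^T, so the
    # marginals stay standard normal while every feature pair has
    # correlation rho (illustrative choice).
    cov = (1 - rho) * torch.eye(d) + rho * torch.ones(d, d)
    L = torch.linalg.cholesky(cov)
    x = torch.randn(n, d) @ L.T
    y = torch.randn(n, d) @ L.T
    # Only the informative features enter the target; the remaining
    # d - d_informative features act as distractors.
    xi, yi = x[:, :d_informative], y[:, :d_informative]
    target = ((xi - yi) ** 2).sum(-1)   # e.g. a squared Euclidean target
    return x, y, target
\end{verbatim}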
The following target divergences are used: 1) squared Euclidean distance (symmetric); 2) squared Mahalanobis distance (symmetric); 3) the divergence generated by $\phi(x) = x \log x$ (asymmetric); 4) KL-divergence (asymmetric). In this task we compare our NBD\xspace with Deep-div and Mahalanobis; the PBDL method does not scale to data of this size. We instead add the Deepnorm and Widenorm metrics from \citet{pitis2020inductive} as alternative baselines, which do not learn Bregman divergences. \begin{wraptable}[35]{r}{.63\textwidth} \vspace{-12pt} \centering \resizebox{.85\width}{!}{ \begin{tabular}{@{}llcccc@{}} \toprule Dataset & Model & MAP & AUC & Purity & Rand Index \\ \midrule \multirow{5}{*}{abalone} & Deep-div & $0.281$ & $0.645$ & $0.377$ & $0.660$ \\ & Euclidean & $0.301$ & $0.666$ & $0.422$ & $\mathbf{0.750}$ \\ & Mahalanobis & $0.310$ & $0.677$ & $0.419$ & $\mathbf{0.750}$ \\ & NBD & $\mathbf{0.316}$ & $\mathbf{0.682}$ & $\mathbf{0.432}$ & $\mathbf{0.750}$ \\ & PBDL & $0.307$ & $0.659$ & $0.386$ & $0.735$ \\ \midrule \multirow{5}{*}{\begin{tabular}[c]{@{}l@{}}balance\\ scale\end{tabular}} & Deep-div & $0.804$ & $0.859$ & $0.869$ & $0.828$ \\ & Euclidean & $0.611$ & $0.666$ & $0.633$ & $0.568$ \\ & Mahalanobis & $0.822$ & $0.854$ & $0.851$ & $0.761$ \\ & NBD & $\mathbf{0.887}$ & $\mathbf{0.915}$ & $\mathbf{0.898}$ & $\mathbf{0.872}$ \\ & PBDL & $0.836$ & $0.855$ & $0.872$ & $0.814$ \\ \midrule \multirow{5}{*}{car} & Deep-div & $0.787$ & $0.757$ & $0.852$ & $0.750$ \\ & Euclidean & $0.681$ & $0.589$ & $0.704$ & $0.523$ \\ & Mahalanobis & $0.787$ & $0.752$ & $0.778$ & $0.654$ \\ & NBD & $\mathbf{0.820}$ & $\mathbf{0.803}$ & $\mathbf{0.860}$ & $\mathbf{0.758}$ \\ & PBDL & $0.798$ & $0.775$ & $0.854$ & $0.750$ \\ \midrule \multirow{5}{*}{iris} & Deep-div & $0.945$ & $0.967$ & $0.811$ & $0.820$ \\ & Euclidean & $0.827$ & $0.897$ & $0.820$ & $0.828$ \\ & Mahalanobis & $0.946$ & $0.973$ & $0.884$ & $0.879$ \\ & NBD & $\mathbf{0.957}$ & $\mathbf{0.977}$ & $\mathbf{0.909}$ & $\mathbf{0.902}$ \\ & PBDL & $0.943$ & $0.967$ & $0.889$ & $0.888$ \\ \midrule \multirow{5}{*}{transfusion} & Deep-div & $0.648$ & $0.525$ & $\mathbf{0.756}$ & $0.621$ \\ & Euclidean & $0.666$ & $0.536$ & $0.748$ & $0.563$ \\ & Mahalanobis & $0.680$ & $0.570$ & $0.750$ & $0.543$ \\ & NBD & $\mathbf{0.695}$ & $\mathbf{0.603}$ & $\mathbf{0.756}$ & $0.600$ \\ & PBDL & $0.637$ & $0.504$ & $0.748$ & $\mathbf{0.622}$ \\ \midrule \multirow{5}{*}{wine} & Deep-div & $\mathbf{0.983}$ & $\mathbf{0.987}$ & $0.953$ & $0.947$ \\ & Euclidean & $0.844$ & $0.884$ & $0.902$ & $0.887$ \\ & Mahalanobis & $0.949$ & $0.970$ & $0.944$ & $0.940$ \\ & NBD & $0.969$ & $0.980$ & $\mathbf{0.960}$ & $\mathbf{0.948}$ \\ & PBDL & $0.978$ & $0.982$ & $0.820$ & $0.823$ \\ \bottomrule \end{tabular} } \caption{Across several real datasets, a learned Bregman divergence is superior to Euclidean or Mahalanobis metrics for downstream ranking (MAP, AUC) and clustering (Purity, Rand Index) tasks. Our approach NBD\xspace consistently outperforms the prior Bregman learning approaches, Deep-div and PBDL, on most datasets. See Appendix \cref{tab:ranking-full} for standard deviation results.} \label{tab:ranking} \end{wraptable} The results of these experiments are in \cref{fig:sim_plots}, with errors in Table \ref{tbl:sim_errors_correlation}. In the symmetric cases of the Euclidean and Mahalanobis ground truths, our NBD\xspace method performs almost equivalently to learning a Mahalanobis distance itself.
This shows that our method loses no representational capacity in representing these standard measures. The same is notably not true for the prior approaches to asymmetric learning: Deepnorm, Widenorm, and Deep-div. Unlike in the clustering tasks, Deep-div is unable to accurately represent even the most common Bregman regression targets, due to its piecewise representation of $\phi$. In \cref{fig:sim_xlogx} and \cref{fig:sim_kl} two asymmetric divergences are used, and our NBD\xspace approach performs better than all existing options. Because these experiments isolate purely the issue of learning the divergence itself, we have strong evidence that our approach is the most faithful to learning a known divergence from a supervisory signal. Note that the Mahalanobis distance performed second best under all noise levels, meaning the prior asymmetric methods were less accurate at learning asymmetric measures than choosing a purely symmetric parameterization.
\begin{figure*}[t] \centering
\begin{subfigure}[t]{0.24\textwidth} \centering
% pgfplots data omitted for readability: curves of test MAE (with shaded
% error bands) for NBD, Deepnorm, Widenorm, Deep-div, and Mahalanobis.
\caption{Euclidean target} \label{fig:sim_euclidean} \end{subfigure} \hfill \begin{subfigure}[t]{0.24\textwidth} \centering \adjustbox{max width=0.99\textwidth}{ \begin{tikzpicture} \definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177} \definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137} \definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843} \definecolor{color3}{rgb}{0.83921568627451,0.152941176470588,0.156862745098039} \definecolor{color4}{rgb}{0.580392156862745,0.403921568627451,0.741176470588235} \begin{axis}[ legend cell align={left}, legend style={fill opacity=0.8, draw opacity=1, text opacity=1, draw=white!80!black}, tick align=outside, tick pos=left, x grid style={white!69.0196078431373!black}, xmin=-4.95, xmax=103.95, xtick style={color=black}, y grid style={white!69.0196078431373!black}, ylabel={test MAE},
ymin=-1.34627705751982, ymax=29.0338225211576, ytick style={color=black} ] \path [fill=color0, fill opacity=0.2] (axis cs:0,16.2916672449944) --(axis cs:0,13.6669933893644) --(axis cs:1,11.2257556426807) --(axis cs:2,8.50766289442502) --(axis cs:3,5.51903843205281) --(axis cs:4,3.78790115659743) --(axis cs:5,3.07401164119071) --(axis cs:6,2.6232027862696) --(axis cs:7,2.27872244698997) --(axis cs:8,2.00780415487863) --(axis cs:9,1.83609327954843) --(axis cs:10,1.71167890101635) --(axis cs:11,1.61743739034575) --(axis cs:12,1.54267168079104) --(axis cs:13,1.48520900585559) --(axis cs:14,1.43670964498618) --(axis cs:15,1.37864080686454) --(axis cs:16,1.34170595052508) --(axis cs:17,1.31242365645302) --(axis cs:18,1.25636918972569) --(axis cs:19,1.24581671639863) --(axis cs:20,1.20497110918304) --(axis cs:21,1.17288895806313) --(axis cs:22,1.14216551036983) --(axis cs:23,1.10980212884314) --(axis cs:24,1.08522206017267) --(axis cs:25,1.07715950557776) --(axis cs:26,1.06735585738583) --(axis cs:27,1.03800570614398) --(axis cs:28,1.00156448495201) --(axis cs:29,1.00824955296755) --(axis cs:30,0.983377583088806) --(axis cs:31,0.951476014470994) --(axis cs:32,0.938481035326284) --(axis cs:33,0.917206402489746) --(axis cs:34,0.89219726449861) --(axis cs:35,0.884875173716436) --(axis cs:36,0.855611265213951) --(axis cs:37,0.837142380533368) --(axis cs:38,0.815875075596322) --(axis cs:39,0.802984794863559) --(axis cs:40,0.779996592696012) --(axis cs:41,0.779777501203086) --(axis cs:42,0.75802529556743) --(axis cs:43,0.745590951037332) --(axis cs:44,0.725661625459555) --(axis cs:45,0.707530965762208) --(axis cs:46,0.705045069631514) --(axis cs:47,0.695089154428426) --(axis cs:48,0.680573106666188) --(axis cs:49,0.659928286606408) --(axis cs:50,0.643025891508361) --(axis cs:51,0.621119391239746) --(axis cs:52,0.629829183515668) --(axis cs:53,0.588923004508602) --(axis cs:54,0.594202168781709) --(axis cs:55,0.565778492380594) --(axis cs:56,0.559285429608597) --(axis cs:57,0.533622261318221) --(axis cs:58,0.529884746464699) --(axis cs:59,0.502921070887011) --(axis cs:60,0.494007216200785) --(axis cs:61,0.505398507711121) --(axis cs:62,0.473718624428886) --(axis cs:63,0.449160677781205) --(axis cs:64,0.432295615767345) --(axis cs:65,0.410948125405857) --(axis cs:66,0.40613491629148) --(axis cs:67,0.3846094603735) --(axis cs:68,0.382513126436744) --(axis cs:69,0.368379922517775) --(axis cs:70,0.350806163034142) --(axis cs:71,0.340262825057497) --(axis cs:72,0.339854853998195) --(axis cs:73,0.318940099268245) --(axis cs:74,0.304424934469108) --(axis cs:75,0.301612516352082) --(axis cs:76,0.290801902359802) --(axis cs:77,0.282512702848954) --(axis cs:78,0.27329599818227) --(axis cs:79,0.257644668407307) --(axis cs:80,0.251993023378329) --(axis cs:81,0.24340718232797) --(axis cs:82,0.233488518544351) --(axis cs:83,0.226105127377979) --(axis cs:84,0.224088256397551) --(axis cs:85,0.216643660933533) --(axis cs:86,0.209487868111173) --(axis cs:87,0.207436568793438) --(axis cs:88,0.204998019597841) --(axis cs:89,0.199081838662448) --(axis cs:90,0.19073586462414) --(axis cs:91,0.189149956041755) --(axis cs:92,0.185783818595032) --(axis cs:93,0.18619297452068) --(axis cs:94,0.182039137415629) --(axis cs:95,0.180933022738382) --(axis cs:96,0.174528743441554) --(axis cs:97,0.173678209215883) --(axis cs:98,0.173086376775615) --(axis cs:99,0.171479453924923) --(axis cs:99,0.191453208244215) --(axis cs:99,0.191453208244215) --(axis cs:98,0.191629584839629) --(axis cs:97,0.20499828077308) --(axis 
cs:96,0.213815055672673) --(axis cs:95,0.204531832296764) --(axis cs:94,0.205384520955343) --(axis cs:93,0.223923817204479) --(axis cs:92,0.211636579799235) --(axis cs:91,0.218689481284041) --(axis cs:90,0.221529118235154) --(axis cs:89,0.223468837603904) --(axis cs:88,0.240061842141635) --(axis cs:87,0.238742596649347) --(axis cs:86,0.233236494897962) --(axis cs:85,0.240459764330825) --(axis cs:84,0.249809319934541) --(axis cs:83,0.253841100490419) --(axis cs:82,0.258380104553387) --(axis cs:81,0.277104526328977) --(axis cs:80,0.281541326619509) --(axis cs:79,0.292760146471793) --(axis cs:78,0.289296973142651) --(axis cs:77,0.314892466558572) --(axis cs:76,0.31769597754272) --(axis cs:75,0.341083800684865) --(axis cs:74,0.333812025306181) --(axis cs:73,0.362489362369888) --(axis cs:72,0.370446886966058) --(axis cs:71,0.392895071938047) --(axis cs:70,0.392436525939603) --(axis cs:69,0.417683597595852) --(axis cs:68,0.437387704150007) --(axis cs:67,0.441956464906753) --(axis cs:66,0.468425539189731) --(axis cs:65,0.469406680381547) --(axis cs:64,0.495551551088785) --(axis cs:63,0.517870602418482) --(axis cs:62,0.544820410891396) --(axis cs:61,0.569588581922821) --(axis cs:60,0.557380817030314) --(axis cs:59,0.569708460337558) --(axis cs:58,0.595270487872154) --(axis cs:57,0.602678223021175) --(axis cs:56,0.627672660855677) --(axis cs:55,0.621328045438315) --(axis cs:54,0.651687677702157) --(axis cs:53,0.659991507489892) --(axis cs:52,0.69477394491756) --(axis cs:51,0.692998541079895) --(axis cs:50,0.706022071316142) --(axis cs:49,0.721277915900611) --(axis cs:48,0.744301914314647) --(axis cs:47,0.775639219099101) --(axis cs:46,0.780536765479468) --(axis cs:45,0.794395345094929) --(axis cs:44,0.800272005801634) --(axis cs:43,0.837235766657268) --(axis cs:42,0.844630639720046) --(axis cs:41,0.873068604690367) --(axis cs:40,0.867901318375765) --(axis cs:39,0.900986241411668) --(axis cs:38,0.906534109177441) --(axis cs:37,0.950506090663125) --(axis cs:36,0.939268711694415) --(axis cs:35,0.97987236643961) --(axis cs:34,0.985200209154548) --(axis cs:33,1.02590868883411) --(axis cs:32,1.04040050656163) --(axis cs:31,1.05632620968825) --(axis cs:30,1.10441255642739) --(axis cs:29,1.12433048097054) --(axis cs:28,1.11629045196561) --(axis cs:27,1.15970557086408) --(axis cs:26,1.20840746353702) --(axis cs:25,1.21564634095442) --(axis cs:24,1.22723552516211) --(axis cs:23,1.25728469493819) --(axis cs:22,1.28194975007227) --(axis cs:21,1.3222780589199) --(axis cs:20,1.3666271027412) --(axis cs:19,1.40492638662872) --(axis cs:18,1.42401218781235) --(axis cs:17,1.48755734953352) --(axis cs:16,1.51634393490684) --(axis cs:15,1.57417045017993) --(axis cs:14,1.62600634635191) --(axis cs:13,1.66322097601188) --(axis cs:12,1.74920412665321) --(axis cs:11,1.80948710853019) --(axis cs:10,1.91919160018401) --(axis cs:9,2.02775082903311) --(axis cs:8,2.16808802333894) --(axis cs:7,2.38072283880715) --(axis cs:6,2.69435941236931) --(axis cs:5,3.27586266453438) --(axis cs:4,4.29556772564541) --(axis cs:3,6.04923199692261) --(axis cs:2,9.64695575028933) --(axis cs:1,14.1863284281608) --(axis cs:0,16.2916672449944) --cycle; \path [fill=color1, fill opacity=0.2] (axis cs:0,25.3671023891142) --(axis cs:0,21.1486757074028) --(axis cs:1,14.8875105726786) --(axis cs:2,11.7375969812992) --(axis cs:3,9.89052900577582) --(axis cs:4,8.29749374994297) --(axis cs:5,7.28923379247963) --(axis cs:6,7.10978274553307) --(axis cs:7,6.98496787199328) --(axis cs:8,6.82364990574068) --(axis cs:9,6.88144896028872) --(axis 
cs:10,6.83470534332174) --(axis cs:11,6.86398895666112) --(axis cs:12,6.82001932050752) --(axis cs:13,6.83437255554178) --(axis cs:14,6.83450538747946) --(axis cs:15,6.79800085968101) --(axis cs:16,6.78711933935785) --(axis cs:17,6.78460212599002) --(axis cs:18,6.74202018171681) --(axis cs:19,6.70691199814651) --(axis cs:20,6.75023437877545) --(axis cs:21,6.79544739540535) --(axis cs:22,6.77927454824006) --(axis cs:23,6.72027569091269) --(axis cs:24,6.7627551059101) --(axis cs:25,6.76551379943139) --(axis cs:26,6.70162093250591) --(axis cs:27,6.68956491976941) --(axis cs:28,6.74606954003877) --(axis cs:29,6.70194434769881) --(axis cs:30,6.78919632449734) --(axis cs:31,6.72412145064896) --(axis cs:32,6.67765067795016) --(axis cs:33,6.71081606671673) --(axis cs:34,6.76510742773169) --(axis cs:35,6.7864134027621) --(axis cs:36,6.71428132821386) --(axis cs:37,6.75713612031592) --(axis cs:38,6.70146129622543) --(axis cs:39,6.69588597459863) --(axis cs:40,6.68470594357917) --(axis cs:41,6.77458333810691) --(axis cs:42,6.74832225219649) --(axis cs:43,6.71109580338131) --(axis cs:44,6.6723018588534) --(axis cs:45,6.73012920581539) --(axis cs:46,6.70441404234326) --(axis cs:47,6.65021752332329) --(axis cs:48,6.69831276228239) --(axis cs:49,6.7005297013595) --(axis cs:50,6.63693437128364) --(axis cs:51,6.66269448503957) --(axis cs:52,6.68713368078971) --(axis cs:53,6.73579649489883) --(axis cs:54,6.7494406979015) --(axis cs:55,6.78372473849188) --(axis cs:56,6.70721923089713) --(axis cs:57,6.66260482859258) --(axis cs:58,6.68509513008469) --(axis cs:59,6.72843851417508) --(axis cs:60,6.7898715382885) --(axis cs:61,6.71025408532886) --(axis cs:62,6.67250289216265) --(axis cs:63,6.67191314897512) --(axis cs:64,6.61056207881158) --(axis cs:65,6.6210435592278) --(axis cs:66,6.70388450673568) --(axis cs:67,6.70810974660687) --(axis cs:68,6.73516283268739) --(axis cs:69,6.65552886248916) --(axis cs:70,6.75721083803005) --(axis cs:71,6.69731349022171) --(axis cs:72,6.67105592185943) --(axis cs:73,6.65479304479962) --(axis cs:74,6.69976848570138) --(axis cs:75,6.73523280142807) --(axis cs:76,6.70876617731357) --(axis cs:77,6.67345343363594) --(axis cs:78,6.70360728908185) --(axis cs:79,6.67236873569844) --(axis cs:80,6.65259552137769) --(axis cs:81,6.76655532733732) --(axis cs:82,6.63409478402172) --(axis cs:83,6.69976044304421) --(axis cs:84,6.68395509910919) --(axis cs:85,6.75506956447939) --(axis cs:86,6.6404986177228) --(axis cs:87,6.74115170920204) --(axis cs:88,6.65685579388376) --(axis cs:89,6.67954851045302) --(axis cs:90,6.69612895453866) --(axis cs:91,6.65631682123204) --(axis cs:92,6.7037284955841) --(axis cs:93,6.66346986890994) --(axis cs:94,6.71045104268464) --(axis cs:95,6.6873826178138) --(axis cs:96,6.76011227955325) --(axis cs:97,6.66512781866413) --(axis cs:98,6.69106868288597) --(axis cs:99,6.67815756387032) --(axis cs:99,7.54205588115735) --(axis cs:99,7.54205588115735) --(axis cs:98,7.57157729286273) --(axis cs:97,7.54120527815797) --(axis cs:96,7.5576982876668) --(axis cs:95,7.53293032723421) --(axis cs:94,7.592991149393) --(axis cs:93,7.55104772447385) --(axis cs:92,7.56438584869487) --(axis cs:91,7.48072280838628) --(axis cs:90,7.49594429527824) --(axis cs:89,7.48630047585476) --(axis cs:88,7.4936924416916) --(axis cs:87,7.60006634906619) --(axis cs:86,7.53574236687599) --(axis cs:85,7.62888308495502) --(axis cs:84,7.55102027066213) --(axis cs:83,7.53225631110618) --(axis cs:82,7.51105247600839) --(axis cs:81,7.64346639100578) --(axis cs:80,7.49842669033293) --(axis 
cs:79,7.57679374751689) --(axis cs:78,7.52511267653501) --(axis cs:77,7.52034234273125) --(axis cs:76,7.56112072962816) --(axis cs:75,7.60778089842137) --(axis cs:74,7.52883564027518) --(axis cs:73,7.47809549483254) --(axis cs:72,7.50902131304454) --(axis cs:71,7.56790168413539) --(axis cs:70,7.54887900826308) --(axis cs:69,7.53080854494086) --(axis cs:68,7.5221344288686) --(axis cs:67,7.62280360635944) --(axis cs:66,7.54080289153906) --(axis cs:65,7.44948656925121) --(axis cs:64,7.49502511753851) --(axis cs:63,7.49233493604685) --(axis cs:62,7.55191415851687) --(axis cs:61,7.57167028956941) --(axis cs:60,7.65239994862964) --(axis cs:59,7.61194717714979) --(axis cs:58,7.53755832247065) --(axis cs:57,7.5567338427738) --(axis cs:56,7.59142617010384) --(axis cs:55,7.64591100878188) --(axis cs:54,7.6425504723459) --(axis cs:53,7.64960682032741) --(axis cs:52,7.5283370147059) --(axis cs:51,7.50331281120473) --(axis cs:50,7.48961070826551) --(axis cs:49,7.55435368334509) --(axis cs:48,7.5777651440083) --(axis cs:47,7.5241872567467) --(axis cs:46,7.58043277213021) --(axis cs:45,7.62827984925867) --(axis cs:44,7.44901193560054) --(axis cs:43,7.55742436110843) --(axis cs:42,7.59912037475664) --(axis cs:41,7.62068713664488) --(axis cs:40,7.53543743817539) --(axis cs:39,7.56762123263607) --(axis cs:38,7.60246377294775) --(axis cs:37,7.61547238238997) --(axis cs:36,7.56207969219223) --(axis cs:35,7.65187929132969) --(axis cs:34,7.60921323826358) --(axis cs:33,7.56656532162963) --(axis cs:32,7.54226083378894) --(axis cs:31,7.53859624140833) --(axis cs:30,7.65259838884088) --(axis cs:29,7.51016222985653) --(axis cs:28,7.56933071071399) --(axis cs:27,7.60978785326118) --(axis cs:26,7.56762605896952) --(axis cs:25,7.6274051687693) --(axis cs:24,7.62918812630832) --(axis cs:23,7.58841883067341) --(axis cs:22,7.5883167374655) --(axis cs:21,7.63998726391675) --(axis cs:20,7.62485310494851) --(axis cs:19,7.53819139286504) --(axis cs:18,7.58488869279491) --(axis cs:17,7.6452914948189) --(axis cs:16,7.60400494093593) --(axis cs:15,7.6329509581494) --(axis cs:14,7.68181973662536) --(axis cs:13,7.76764974263531) --(axis cs:12,7.70952519192331) --(axis cs:11,7.76283853764227) --(axis cs:10,7.75543499939066) --(axis cs:9,7.71593307655617) --(axis cs:8,7.6794128511156) --(axis cs:7,7.8495142574216) --(axis cs:6,7.95690763583493) --(axis cs:5,7.96712231968264) --(axis cs:4,8.80392970434169) --(axis cs:3,11.7433900107444) --(axis cs:2,13.5146942530351) --(axis cs:1,18.5485151739848) --(axis cs:0,25.3671023891142) --cycle; \path [fill=color2, fill opacity=0.2] (axis cs:0,22.2557790851211) --(axis cs:0,18.3216417217637) --(axis cs:1,15.5236181260517) --(axis cs:2,14.1028532884361) --(axis cs:3,13.6689545738904) --(axis cs:4,13.5529384156156) --(axis cs:5,13.4428431460752) --(axis cs:6,13.2757293561155) --(axis cs:7,13.0826959114891) --(axis cs:8,12.8475292646585) --(axis cs:9,12.6279756156261) --(axis cs:10,12.385332403894) --(axis cs:11,12.1242348678097) --(axis cs:12,11.8658863180393) --(axis cs:13,11.6247493393178) --(axis cs:14,11.3731442848812) --(axis cs:15,11.12666331428) --(axis cs:16,10.8827307186105) --(axis cs:17,10.6459043836635) --(axis cs:18,10.4292930576951) --(axis cs:19,10.1821407003462) --(axis cs:20,9.94671591965626) --(axis cs:21,9.70990814750454) --(axis cs:22,9.46852212303711) --(axis cs:23,9.26083514750854) --(axis cs:24,9.03541369601277) --(axis cs:25,8.8371533726388) --(axis cs:26,8.66067470391567) --(axis cs:27,8.50557463807825) --(axis cs:28,8.35064424180495) --(axis cs:29,8.19364794169337) 
--(axis cs:30,8.04738947568456) --(axis cs:31,7.92218355383935) --(axis cs:32,7.77588393135554) --(axis cs:33,7.66240974669215) --(axis cs:34,7.56071581282287) --(axis cs:35,7.43852665276103) --(axis cs:36,7.35682463471102) --(axis cs:37,7.26937798981452) --(axis cs:38,7.18333222498023) --(axis cs:39,7.12252724451922) --(axis cs:40,7.05901766050247) --(axis cs:41,7.01862582550899) --(axis cs:42,6.98574269363678) --(axis cs:43,6.93851299505885) --(axis cs:44,6.9007693619701) --(axis cs:45,6.88313158237329) --(axis cs:46,6.85758018527585) --(axis cs:47,6.84231293118308) --(axis cs:48,6.82904333448618) --(axis cs:49,6.82415379773011) --(axis cs:50,6.79369040076156) --(axis cs:51,6.8066501169093) --(axis cs:52,6.80244985273356) --(axis cs:53,6.79521745338805) --(axis cs:54,6.78809120498337) --(axis cs:55,6.7738084686433) --(axis cs:56,6.78078591105798) --(axis cs:57,6.77053494506325) --(axis cs:58,6.76613156530041) --(axis cs:59,6.77137827127611) --(axis cs:60,6.76561593017002) --(axis cs:61,6.76538217336877) --(axis cs:62,6.76025502117511) --(axis cs:63,6.74939265053886) --(axis cs:64,6.73810934865496) --(axis cs:65,6.73074667350988) --(axis cs:66,6.73173185823099) --(axis cs:67,6.73974531249886) --(axis cs:68,6.73835318388126) --(axis cs:69,6.73975899717114) --(axis cs:70,6.72771684833447) --(axis cs:71,6.73974296310275) --(axis cs:72,6.73036676324296) --(axis cs:73,6.7246616371488) --(axis cs:74,6.73248130370184) --(axis cs:75,6.73432652632103) --(axis cs:76,6.72797329623156) --(axis cs:77,6.72168050924251) --(axis cs:78,6.74388978244117) --(axis cs:79,6.73432379777153) --(axis cs:80,6.71974737716635) --(axis cs:81,6.73844465191829) --(axis cs:82,6.71859644233846) --(axis cs:83,6.70739617177104) --(axis cs:84,6.70520399155229) --(axis cs:85,6.70252692002106) --(axis cs:86,6.71756351801218) --(axis cs:87,6.71291900315565) --(axis cs:88,6.72931811645231) --(axis cs:89,6.70339755926731) --(axis cs:90,6.71150756852078) --(axis cs:91,6.70177315178599) --(axis cs:92,6.70063156308344) --(axis cs:93,6.69927000592855) --(axis cs:94,6.70105324049428) --(axis cs:95,6.68955760258226) --(axis cs:96,6.71169946963625) --(axis cs:97,6.69332498474747) --(axis cs:98,6.70160847671125) --(axis cs:99,6.69868263469702) --(axis cs:99,7.52565691404972) --(axis cs:99,7.52565691404972) --(axis cs:98,7.52166543635434) --(axis cs:97,7.51961783802678) --(axis cs:96,7.53884412472411) --(axis cs:95,7.52555972478998) --(axis cs:94,7.53070296983287) --(axis cs:93,7.53365216661784) --(axis cs:92,7.53743994214524) --(axis cs:91,7.52860633429799) --(axis cs:90,7.53568017307671) --(axis cs:89,7.53157520379421) --(axis cs:88,7.55632234578727) --(axis cs:87,7.53482136091906) --(axis cs:86,7.54239888178844) --(axis cs:85,7.52235558730316) --(axis cs:84,7.52975726383915) --(axis cs:83,7.5331226524884) --(axis cs:82,7.54373272280232) --(axis cs:81,7.56926758512191) --(axis cs:80,7.54971222010335) --(axis cs:79,7.56223706190864) --(axis cs:78,7.56971711554873) --(axis cs:77,7.55055523396225) --(axis cs:76,7.54351826311496) --(axis cs:75,7.56100739956195) --(axis cs:74,7.54885192027684) --(axis cs:73,7.55486748933639) --(axis cs:72,7.56411025447758) --(axis cs:71,7.56132070165149) --(axis cs:70,7.55359895519336) --(axis cs:69,7.56094379722176) --(axis cs:68,7.56352803407528) --(axis cs:67,7.56483645680859) --(axis cs:66,7.55891596637431) --(axis cs:65,7.55371142331223) --(axis cs:64,7.56187119638898) --(axis cs:63,7.57812619406563) --(axis cs:62,7.58867031502687) --(axis cs:61,7.58700055012163) --(axis cs:60,7.5893145660458) --(axis 
cs:59,7.59295996775791) --(axis cs:58,7.58551943567616) --(axis cs:57,7.58570602681989) --(axis cs:56,7.60777590992591) --(axis cs:55,7.59105168455494) --(axis cs:54,7.61575820602737) --(axis cs:53,7.61021270459128) --(axis cs:52,7.61687050808594) --(axis cs:51,7.62989525509998) --(axis cs:50,7.62539675489843) --(axis cs:49,7.65956660021911) --(axis cs:48,7.67557461086701) --(axis cs:47,7.6869616914162) --(axis cs:46,7.70829512244307) --(axis cs:45,7.75362213568498) --(axis cs:44,7.77937138764334) --(axis cs:43,7.83329181451147) --(axis cs:42,7.8996157353183) --(axis cs:41,7.96867453866744) --(axis cs:40,8.06120608420782) --(axis cs:39,8.18270480351545) --(axis cs:38,8.34053824315941) --(axis cs:37,8.50885890797194) --(axis cs:36,8.70347930765144) --(axis cs:35,8.902765156321) --(axis cs:34,9.1309160765204) --(axis cs:33,9.3515948334877) --(axis cs:32,9.58286253369167) --(axis cs:31,9.82925070875424) --(axis cs:30,10.0627831775137) --(axis cs:29,10.3091470234149) --(axis cs:28,10.5550831446697) --(axis cs:27,10.7972482378983) --(axis cs:26,11.0382300964994) --(axis cs:25,11.2875351573295) --(axis cs:24,11.5386204862179) --(axis cs:23,11.8054117753023) --(axis cs:22,12.0547478481746) --(axis cs:21,12.313341224342) --(axis cs:20,12.571108380474) --(axis cs:19,12.8216542876502) --(axis cs:18,13.0658482895543) --(axis cs:17,13.2835162146845) --(axis cs:16,13.5002633609794) --(axis cs:15,13.7142182876894) --(axis cs:14,13.9244754711816) --(axis cs:13,14.1403084151988) --(axis cs:12,14.3530230091499) --(axis cs:11,14.604410838367) --(axis cs:10,14.8808562812053) --(axis cs:9,15.1789138547922) --(axis cs:8,15.5050288713278) --(axis cs:7,15.8841477571307) --(axis cs:6,16.2962223192996) --(axis cs:5,16.7754929592715) --(axis cs:4,17.3206899464043) --(axis cs:3,17.964827494998) --(axis cs:2,18.8259578484454) --(axis cs:1,20.1849004744121) --(axis cs:0,22.2557790851211) --cycle; \path [fill=color3, fill opacity=0.2] (axis cs:0,27.6529089039449) --(axis cs:0,24.3226025198018) --(axis cs:1,23.7936565110393) --(axis cs:2,23.2166460663988) --(axis cs:3,22.5612468867367) --(axis cs:4,21.8743564337697) --(axis cs:5,21.2559678502161) --(axis cs:6,20.7209467588086) --(axis cs:7,20.253569473226) --(axis cs:8,19.852645773328) --(axis cs:9,19.4973458371954) --(axis cs:10,19.1859086126844) --(axis cs:11,18.9051501371676) --(axis cs:12,18.6507408678406) --(axis cs:13,18.4198736850495) --(axis cs:14,18.2108982846966) --(axis cs:15,18.0204339923534) --(axis cs:16,17.8494263591806) --(axis cs:17,17.6828223365137) --(axis cs:18,17.5285199671886) --(axis cs:19,17.3925571115051) --(axis cs:20,17.2618945600743) --(axis cs:21,17.1427339139706) --(axis cs:22,17.0417378807569) --(axis cs:23,16.9410503199657) --(axis cs:24,16.8421736586057) --(axis cs:25,16.7586358096368) --(axis cs:26,16.6728865543622) --(axis cs:27,16.5883953579558) --(axis cs:28,16.514386187203) --(axis cs:29,16.4405139146941) --(axis cs:30,16.3616592731001) --(axis cs:31,16.2936377397736) --(axis cs:32,16.2198761706736) --(axis cs:33,16.1449783968217) --(axis cs:34,16.0753915310659) --(axis cs:35,16.0022207463258) --(axis cs:36,15.93414819909) --(axis cs:37,15.8653192782384) --(axis cs:38,15.7958885289331) --(axis cs:39,15.7329322333184) --(axis cs:40,15.6683815769007) --(axis cs:41,15.6126694420216) --(axis cs:42,15.5594304886354) --(axis cs:43,15.5009465440071) --(axis cs:44,15.4480729259974) --(axis cs:45,15.3930263893929) --(axis cs:46,15.3413789831909) --(axis cs:47,15.2926103687936) --(axis cs:48,15.2404912727736) --(axis cs:49,15.1937624772617) 
--(axis cs:50,15.1448847110056) --(axis cs:51,15.0976872407746) --(axis cs:52,15.050152065078) --(axis cs:53,15.0041879462313) --(axis cs:54,14.968995200699) --(axis cs:55,14.9268887446984) --(axis cs:56,14.8878840077227) --(axis cs:57,14.8480183899574) --(axis cs:58,14.8040523937853) --(axis cs:59,14.7622762915527) --(axis cs:60,14.7262853595708) --(axis cs:61,14.689083373183) --(axis cs:62,14.6548715385752) --(axis cs:63,14.6182423816729) --(axis cs:64,14.5870548032651) --(axis cs:65,14.5492127095125) --(axis cs:66,14.5112092305041) --(axis cs:67,14.4773773808107) --(axis cs:68,14.4456352353793) --(axis cs:69,14.4167712845456) --(axis cs:70,14.3855473303297) --(axis cs:71,14.3604086768679) --(axis cs:72,14.3304568869656) --(axis cs:73,14.2995165057245) --(axis cs:74,14.2699136411144) --(axis cs:75,14.2390346430356) --(axis cs:76,14.2081196787515) --(axis cs:77,14.1764844696493) --(axis cs:78,14.1493745124757) --(axis cs:79,14.1217927498206) --(axis cs:80,14.0861212990644) --(axis cs:81,14.057367397618) --(axis cs:82,14.0259743483524) --(axis cs:83,13.9935711780541) --(axis cs:84,13.9605175054026) --(axis cs:85,13.9262023775943) --(axis cs:86,13.8990637020453) --(axis cs:87,13.8673070667003) --(axis cs:88,13.842615183811) --(axis cs:89,13.8171110743348) --(axis cs:90,13.7930460594172) --(axis cs:91,13.7640264496659) --(axis cs:92,13.7399507358978) --(axis cs:93,13.7137283606542) --(axis cs:94,13.6820394979562) --(axis cs:95,13.6522318552188) --(axis cs:96,13.626178197103) --(axis cs:97,13.5950393435232) --(axis cs:98,13.5735816076328) --(axis cs:99,13.5397946966216) --(axis cs:99,16.0102790541922) --(axis cs:99,16.0102790541922) --(axis cs:98,16.0411656305264) --(axis cs:97,16.0683412157623) --(axis cs:96,16.1006008505451) --(axis cs:95,16.1294794688372) --(axis cs:94,16.1633746001476) --(axis cs:93,16.1959149397201) --(axis cs:92,16.2257244909495) --(axis cs:91,16.2545998269861) --(axis cs:90,16.2857381202703) --(axis cs:89,16.3145214126444) --(axis cs:88,16.3442864173446) --(axis cs:87,16.3714379233306) --(axis cs:86,16.4062468651747) --(axis cs:85,16.4376894782814) --(axis cs:84,16.4703978184906) --(axis cs:83,16.5069187562314) --(axis cs:82,16.5393907436072) --(axis cs:81,16.5741411117001) --(axis cs:80,16.6090615012286) --(axis cs:79,16.642386098446) --(axis cs:78,16.6744289441486) --(axis cs:77,16.7060831267863) --(axis cs:76,16.7390444753012) --(axis cs:75,16.7729918258771) --(axis cs:74,16.8070719088123) --(axis cs:73,16.839355005099) --(axis cs:72,16.8776715971245) --(axis cs:71,16.9132898755816) --(axis cs:70,16.9506869531176) --(axis cs:69,16.9895810764977) --(axis cs:68,17.0272106050748) --(axis cs:67,17.0656418503498) --(axis cs:66,17.1054465006971) --(axis cs:65,17.1505254750985) --(axis cs:64,17.1920869407127) --(axis cs:63,17.2345501038821) --(axis cs:62,17.2785734382315) --(axis cs:61,17.3237720294456) --(axis cs:60,17.3726406759924) --(axis cs:59,17.422080525057) --(axis cs:58,17.468791285007) --(axis cs:57,17.5220209777184) --(axis cs:56,17.5718991648847) --(axis cs:55,17.6216326786415) --(axis cs:54,17.6796554137216) --(axis cs:53,17.7297165744392) --(axis cs:52,17.7930614613299) --(axis cs:51,17.8537496285288) --(axis cs:50,17.9167700792369) --(axis cs:49,17.9814002831549) --(axis cs:48,18.046634378268) --(axis cs:47,18.1181589030569) --(axis cs:46,18.1868084506877) --(axis cs:45,18.2585881176464) --(axis cs:44,18.3371741137975) --(axis cs:43,18.4128187910759) --(axis cs:42,18.4941819025186) --(axis cs:41,18.5783894162141) --(axis cs:40,18.6625067897985) --(axis 
cs:39,18.7528505171292) --(axis cs:38,18.8488192143938) --(axis cs:37,18.9481651679693) --(axis cs:36,19.0479617608303) --(axis cs:35,19.1513672307656) --(axis cs:34,19.2598900953176) --(axis cs:33,19.3718083056523) --(axis cs:32,19.4863753552053) --(axis cs:31,19.6034414736867) --(axis cs:30,19.7175057405311) --(axis cs:29,19.8430962385042) --(axis cs:28,19.9658143895792) --(axis cs:27,20.0899723839469) --(axis cs:26,20.2243622859698) --(axis cs:25,20.3632802619371) --(axis cs:24,20.5026354285118) --(axis cs:23,20.6585615981658) --(axis cs:22,20.8144877051806) --(axis cs:21,20.9794218477481) --(axis cs:20,21.1592049119716) --(axis cs:19,21.3492032695577) --(axis cs:18,21.5465883702138) --(axis cs:17,21.7564494632097) --(axis cs:16,21.9790639616609) --(axis cs:15,22.205954938921) --(axis cs:14,22.4430656943887) --(axis cs:13,22.6927102382903) --(axis cs:12,22.9584901909159) --(axis cs:11,23.2368313691801) --(axis cs:10,23.5321450779081) --(axis cs:9,23.8446168499791) --(axis cs:8,24.1788318369422) --(axis cs:7,24.5352176056315) --(axis cs:6,24.924524941224) --(axis cs:5,25.345588037642) --(axis cs:4,25.8115510572784) --(axis cs:3,26.311053579165) --(axis cs:2,26.7902325957106) --(axis cs:1,27.2319881727985) --(axis cs:0,27.6529089039449) --cycle; \path [fill=color4, fill opacity=0.2] (axis cs:0,15.768754996024) --(axis cs:0,12.2580066311732) --(axis cs:1,10.6466635052382) --(axis cs:2,8.92612065013414) --(axis cs:3,7.25613695921554) --(axis cs:4,5.78796131816503) --(axis cs:5,4.53967039685519) --(axis cs:6,3.58330142526612) --(axis cs:7,2.91867979376993) --(axis cs:8,2.45432525791056) --(axis cs:9,2.1118857172249) --(axis cs:10,1.84613282113172) --(axis cs:11,1.63292150407301) --(axis cs:12,1.45387221425291) --(axis cs:13,1.3072217164097) --(axis cs:14,1.18196578255183) --(axis cs:15,1.07678405845905) --(axis cs:16,0.984912930270042) --(axis cs:17,0.905616727469114) --(axis cs:18,0.836064014735984) --(axis cs:19,0.775053630365276) --(axis cs:20,0.720296577763811) --(axis cs:21,0.671456445482367) --(axis cs:22,0.626452600750401) --(axis cs:23,0.587760803934545) --(axis cs:24,0.550588420134383) --(axis cs:25,0.517760204947953) --(axis cs:26,0.487283069850787) --(axis cs:27,0.459326726349928) --(axis cs:28,0.433583521742023) --(axis cs:29,0.410323262762803) --(axis cs:30,0.388294862358477) --(axis cs:31,0.368292189622829) --(axis cs:32,0.349397334959443) --(axis cs:33,0.331580647393514) --(axis cs:34,0.315510261594506) --(axis cs:35,0.300100227090246) --(axis cs:36,0.285736519670761) --(axis cs:37,0.272701472357679) --(axis cs:38,0.260308211309318) --(axis cs:39,0.248534446143413) --(axis cs:40,0.237186976194411) --(axis cs:41,0.227023261506033) --(axis cs:42,0.21711541963241) --(axis cs:43,0.207776468411336) --(axis cs:44,0.198818535783998) --(axis cs:45,0.190639557203072) --(axis cs:46,0.182329322306139) --(axis cs:47,0.17479618259697) --(axis cs:48,0.167550970950084) --(axis cs:49,0.160598330811) --(axis cs:50,0.154124992347214) --(axis cs:51,0.148152350608148) --(axis cs:52,0.142201030133806) --(axis cs:53,0.136731218438528) --(axis cs:54,0.131681365714485) --(axis cs:55,0.126767550104393) --(axis cs:56,0.122116061808903) --(axis cs:57,0.118049599753656) --(axis cs:58,0.113810629583492) --(axis cs:59,0.109938172084178) --(axis cs:60,0.106038035500375) --(axis cs:61,0.102591683445788) --(axis cs:62,0.0990877741254014) --(axis cs:63,0.0961513229924462) --(axis cs:64,0.0928195754665772) --(axis cs:65,0.0898216004400066) --(axis cs:66,0.0871179032283697) --(axis cs:67,0.0843307377760981) 
--(axis cs:68,0.081579827562974) --(axis cs:69,0.0791830042919489) --(axis cs:70,0.0768821803278844) --(axis cs:71,0.0745024871443157) --(axis cs:72,0.0723387020947537) --(axis cs:73,0.0701139162121765) --(axis cs:74,0.0683022013741163) --(axis cs:75,0.066185211200663) --(axis cs:76,0.0643112386043245) --(axis cs:77,0.062561080305561) --(axis cs:78,0.0607543200293394) --(axis cs:79,0.0590748632317426) --(axis cs:80,0.0574197561664875) --(axis cs:81,0.0560406806646491) --(axis cs:82,0.0542719275282029) --(axis cs:83,0.0529261322461) --(axis cs:84,0.0515761228535827) --(axis cs:85,0.0501815754849784) --(axis cs:86,0.048711730129656) --(axis cs:87,0.047418936872465) --(axis cs:88,0.0461142203184468) --(axis cs:89,0.0447774171603713) --(axis cs:90,0.043610235055674) --(axis cs:91,0.0424779048689301) --(axis cs:92,0.0414621276133382) --(axis cs:93,0.0404054930529064) --(axis cs:94,0.0393959142668106) --(axis cs:95,0.0383799195520181) --(axis cs:96,0.0373636924243674) --(axis cs:97,0.0365398560578592) --(axis cs:98,0.0355814443288604) --(axis cs:99,0.0346365596927903) --(axis cs:99,0.0654141492071051) --(axis cs:99,0.0654141492071051) --(axis cs:98,0.0669416699550192) --(axis cs:97,0.0686189559981259) --(axis cs:96,0.0701701444370841) --(axis cs:95,0.0717357600498102) --(axis cs:94,0.0733387139535887) --(axis cs:93,0.075036169765287) --(axis cs:92,0.0769824278878049) --(axis cs:91,0.078670632478784) --(axis cs:90,0.0806327631072314) --(axis cs:89,0.0825312684205181) --(axis cs:88,0.0845362928040323) --(axis cs:87,0.0870683292210275) --(axis cs:86,0.0889764315514398) --(axis cs:85,0.0913209566296068) --(axis cs:84,0.0936121983196084) --(axis cs:83,0.096071996625023) --(axis cs:82,0.0984013164514101) --(axis cs:81,0.100976399821172) --(axis cs:80,0.103686414547414) --(axis cs:79,0.106091643436189) --(axis cs:78,0.108922531203507) --(axis cs:77,0.111646923473691) --(axis cs:76,0.114495825789832) --(axis cs:75,0.117597677251549) --(axis cs:74,0.120890742397564) --(axis cs:73,0.124029225593949) --(axis cs:72,0.127463463135362) --(axis cs:71,0.131017116048491) --(axis cs:70,0.134495925650287) --(axis cs:69,0.138178677864773) --(axis cs:68,0.141754528089358) --(axis cs:67,0.145883870932919) --(axis cs:66,0.149959487422539) --(axis cs:65,0.154204415755609) --(axis cs:64,0.158521777890261) --(axis cs:63,0.163314018277524) --(axis cs:62,0.16769376757973) --(axis cs:61,0.172607758662684) --(axis cs:60,0.177721057784232) --(axis cs:59,0.183185064214383) --(axis cs:58,0.188897205455012) --(axis cs:57,0.194576948854489) --(axis cs:56,0.200284326312702) --(axis cs:55,0.206562762266861) --(axis cs:54,0.213288379762556) --(axis cs:53,0.220279186823783) --(axis cs:52,0.227508112908758) --(axis cs:51,0.235286185916624) --(axis cs:50,0.243062706533617) --(axis cs:49,0.251831426108543) --(axis cs:48,0.261048743289355) --(axis cs:47,0.271080403840255) --(axis cs:46,0.280630227518257) --(axis cs:45,0.291786547739885) --(axis cs:44,0.302954557360101) --(axis cs:43,0.315136208717774) --(axis cs:42,0.32786817001679) --(axis cs:41,0.342172777932215) --(axis cs:40,0.355414239644975) --(axis cs:39,0.371046418762898) --(axis cs:38,0.386427286006407) --(axis cs:37,0.403195188129178) --(axis cs:36,0.420493514760061) --(axis cs:35,0.438949910906428) --(axis cs:34,0.459070958158124) --(axis cs:33,0.480017554199249) --(axis cs:32,0.502420719955032) --(axis cs:31,0.525762266929041) --(axis cs:30,0.5508295072628) --(axis cs:29,0.577921131698511) --(axis cs:28,0.605892864964011) --(axis cs:27,0.637035852518938) --(axis 
cs:26,0.670157087880587) --(axis cs:25,0.706189720156824) --(axis cs:24,0.745095539985501) --(axis cs:23,0.788145055535822) --(axis cs:22,0.834329199996517) --(axis cs:21,0.886729871007806) --(axis cs:20,0.944375030048434) --(axis cs:19,1.00854530523882) --(axis cs:18,1.0805474103459) --(axis cs:17,1.16107105513638) --(axis cs:16,1.25303485161065) --(axis cs:15,1.35774120087679) --(axis cs:14,1.47653167018407) --(axis cs:13,1.61425913794262) --(axis cs:12,1.77432020257397) --(axis cs:11,1.9660526999904) --(axis cs:10,2.19581201643847) --(axis cs:9,2.48471662728804) --(axis cs:8,2.86404341541403) --(axis cs:7,3.36790258080282) --(axis cs:6,4.07323474060709) --(axis cs:5,5.11281421084134) --(axis cs:4,6.72779986652736) --(axis cs:3,9.0339839921191) --(axis cs:2,11.7562738862021) --(axis cs:1,14.1774139738064) --(axis cs:0,15.768754996024) --cycle; \addplot [semithick, color0] table { 0 14.9793303171794 1 12.7060420354207 2 9.07730932235718 3 5.78413521448771 4 4.04173444112142 5 3.17493715286255 6 2.65878109931946 7 2.32972264289856 8 2.08794608910878 9 1.93192205429077 10 1.81543525060018 11 1.71346224943797 12 1.64593790372213 13 1.57421499093374 14 1.53135799566905 15 1.47640562852224 16 1.42902494271596 17 1.39999050299327 18 1.34019068876902 19 1.32537155151367 20 1.28579910596212 21 1.24758350849152 22 1.21205763022105 23 1.18354341189067 24 1.15622879266739 25 1.14640292326609 26 1.13788166046143 27 1.09885563850403 28 1.05892746845881 29 1.06629001696905 30 1.0438950697581 31 1.00390111207962 32 0.98944077094396 33 0.971557545661926 34 0.938698736826579 35 0.932373770078023 36 0.897439988454183 37 0.893824235598246 38 0.861204592386882 39 0.851985518137614 40 0.823948955535889 41 0.826423052946726 42 0.801327967643738 43 0.7914133588473 44 0.762966815630595 45 0.750963155428569 46 0.742790917555491 47 0.735364186763763 48 0.712437510490417 49 0.69060310125351 50 0.674523981412252 51 0.657058966159821 52 0.662301564216614 53 0.624457255999247 54 0.622944923241933 55 0.593553268909454 56 0.593479045232137 57 0.568150242169698 58 0.562577617168426 59 0.536314765612284 60 0.52569401661555 61 0.537493544816971 62 0.509269517660141 63 0.483515640099843 64 0.463923583428065 65 0.440177402893702 66 0.437280227740606 67 0.413282962640127 68 0.409950415293376 69 0.393031760056814 70 0.371621344486872 71 0.366578948497772 72 0.355150870482127 73 0.340714730819066 74 0.319118479887644 75 0.321348158518473 76 0.304248939951261 77 0.298702584703763 78 0.28129648566246 79 0.27520240743955 80 0.266767174998919 81 0.260255854328473 82 0.245934311548869 83 0.239973113934199 84 0.236948788166046 85 0.228551712632179 86 0.221362181504567 87 0.223089582721392 88 0.222529930869738 89 0.211275338133176 90 0.206132491429647 91 0.203919718662898 92 0.198710199197133 93 0.205058395862579 94 0.193711829185486 95 0.192732427517573 96 0.194171899557114 97 0.189338244994481 98 0.182357980807622 99 0.181466331084569 }; \addplot [semithick, color1] table { 0 23.2578890482585 1 16.7180128733317 2 12.6261456171672 3 10.8169595082601 4 8.55071172714233 5 7.62817805608114 6 7.533345190684 7 7.41724106470744 8 7.25153137842814 9 7.29869101842244 10 7.2950701713562 11 7.31341374715169 12 7.26477225621541 13 7.30101114908854 14 7.25816256205241 15 7.2154759089152 16 7.19556214014689 17 7.21494681040446 18 7.16345443725586 19 7.12255169550578 20 7.18754374186198 21 7.21771732966105 22 7.18379564285278 23 7.15434726079305 24 7.19597161610921 25 7.19645948410034 26 7.13462349573771 27 7.1496763865153 28 7.15770012537638 29 
7.10605328877767 30 7.22089735666911 31 7.13135884602865 32 7.10995575586955 33 7.13869069417318 34 7.18716033299764 35 7.2191463470459 36 7.13818051020304 37 7.18630425135295 38 7.15196253458659 39 7.13175360361735 40 7.11007169087728 41 7.19763523737589 42 7.17372131347656 43 7.13426008224487 44 7.06065689722697 45 7.17920452753703 46 7.14242340723673 47 7.08720239003499 48 7.13803895314534 49 7.12744169235229 50 7.06327253977458 51 7.08300364812215 52 7.1077353477478 53 7.19270165761312 54 7.1959955851237 55 7.21481787363688 56 7.14932270050049 57 7.10966933568319 58 7.11132672627767 59 7.17019284566244 60 7.22113574345907 61 7.14096218744914 62 7.11220852533976 63 7.08212404251099 64 7.05279359817505 65 7.0352650642395 66 7.12234369913737 67 7.16545667648315 68 7.12864863077799 69 7.09316870371501 70 7.15304492314657 71 7.13260758717855 72 7.09003861745199 73 7.06644426981608 74 7.11430206298828 75 7.17150684992472 76 7.13494345347087 77 7.09689788818359 78 7.11435998280843 79 7.12458124160767 80 7.07551110585531 81 7.20501085917155 82 7.07257363001506 83 7.11600837707519 84 7.11748768488566 85 7.1919763247172 86 7.0881204922994 87 7.17060902913411 88 7.07527411778768 89 7.08292449315389 90 7.09603662490845 91 7.06851981480916 92 7.13405717213949 93 7.10725879669189 94 7.15172109603882 95 7.11015647252401 96 7.15890528361003 97 7.10316654841105 98 7.13132298787435 99 7.11010672251383 }; \addplot [semithick, color2] table { 0 20.2887104034424 1 17.8542593002319 2 16.4644055684408 3 15.8168910344442 4 15.4368141810099 5 15.1091680526733 6 14.7859758377075 7 14.4834218343099 8 14.1762790679932 9 13.9034447352091 10 13.6330943425496 11 13.3643228530884 12 13.1094546635946 13 12.8825288772583 14 12.6488098780314 15 12.4204408009847 16 12.1914970397949 17 11.964710299174 18 11.7475706736247 19 11.5018974939982 20 11.2589121500651 21 11.0116246859233 22 10.7616349856059 23 10.5331234614054 24 10.2870170911153 25 10.0623442649841 26 9.84945240020752 27 9.65141143798828 28 9.4528636932373 29 9.25139748255412 30 9.05508632659912 31 8.87571713129679 32 8.6793732325236 33 8.50700229008993 34 8.34581594467163 35 8.17064590454101 36 8.03015197118123 37 7.88911844889323 38 7.76193523406982 39 7.65261602401733 40 7.56011187235514 41 7.49365018208822 42 7.44267921447754 43 7.38590240478516 44 7.34007037480672 45 7.31837685902913 46 7.28293765385946 47 7.26463731129964 48 7.2523089726766 49 7.24186019897461 50 7.20954357783 51 7.21827268600464 52 7.20966018040975 53 7.20271507898967 54 7.20192470550537 55 7.18243007659912 56 7.19428091049194 57 7.17812048594157 58 7.17582550048828 59 7.18216911951701 60 7.17746524810791 61 7.1761913617452 62 7.17446266810099 63 7.16375942230225 64 7.14999027252197 65 7.14222904841105 66 7.14532391230265 67 7.15229088465373 68 7.15094060897827 69 7.15035139719645 70 7.14065790176392 71 7.15053183237712 72 7.14723850886027 73 7.13976456324259 74 7.14066661198934 75 7.14766696294149 76 7.13574577967326 77 7.13611787160238 78 7.15680344899495 79 7.14828042984009 80 7.13472979863485 81 7.1538561185201 82 7.13116458257039 83 7.12025941212972 84 7.11748062769572 85 7.11244125366211 86 7.12998119990031 87 7.12387018203735 88 7.14282023111979 89 7.11748638153076 90 7.12359387079875 91 7.11518974304199 92 7.11903575261434 93 7.11646108627319 94 7.11587810516357 95 7.10755866368612 96 7.12527179718018 97 7.10647141138713 98 7.1116369565328 99 7.11216977437337 }; \addplot [semithick, color3] table { 0 25.9877557118734 1 25.5128223419189 2 25.0034393310547 3 24.4361502329508 4 
23.8429537455241 5 23.300777943929 6 22.8227358500163 7 22.3943935394287 8 22.0157388051351 9 21.6709813435872 10 21.3590268452962 11 21.0709907531738 12 20.8046155293783 13 20.5562919616699 14 20.3269819895426 15 20.1131944656372 16 19.9142451604207 17 19.7196358998617 18 19.5375541687012 19 19.3708801905314 20 19.2105497360229 21 19.0610778808594 22 18.9281127929688 23 18.7998059590658 24 18.6724045435588 25 18.5609580357869 26 18.448624420166 27 18.3391838709513 28 18.2401002883911 29 18.1418050765991 30 18.0395825068156 31 17.9485396067301 32 17.8531257629395 33 17.758393351237 34 17.6676408131917 35 17.5767939885457 36 17.4910549799601 37 17.4067422231038 38 17.3223538716634 39 17.2428913752238 40 17.1654441833496 41 17.0955294291178 42 17.026806195577 43 16.9568826675415 44 16.8926235198975 45 16.8258072535197 46 16.7640937169393 47 16.7053846359253 48 16.6435628255208 49 16.5875813802083 50 16.5308273951213 51 16.4757184346517 52 16.4216067632039 53 16.3669522603353 54 16.3243253072103 55 16.2742607116699 56 16.2298915863037 57 16.1850196838379 58 16.1364218393962 59 16.0921784083049 60 16.0494630177816 61 16.0064277013143 62 15.9667224884033 63 15.9263962427775 64 15.8895708719889 65 15.8498690923055 66 15.8083278656006 67 15.7715096155802 68 15.7364229202271 69 15.7031761805216 70 15.6681171417236 71 15.6368492762248 72 15.6040642420451 73 15.5694357554118 74 15.5384927749634 75 15.5060132344564 76 15.4735820770264 77 15.4412837982178 78 15.4119017283122 79 15.3820894241333 80 15.3475914001465 81 15.315754254659 82 15.2826825459798 83 15.2502449671427 84 15.2154576619466 85 15.1819459279378 86 15.15265528361 87 15.1193724950155 88 15.0934508005778 89 15.0658162434896 90 15.0393920898437 91 15.009313138326 92 14.9828376134237 93 14.9548216501872 94 14.9227070490519 95 14.890855662028 96 14.8633895238241 97 14.8316902796427 98 14.8073736190796 99 14.7750368754069 }; \addplot [semithick, color4] table { 0 14.0133808135986 1 12.4120387395223 2 10.3411972681681 3 8.14506047566732 4 6.25788059234619 5 4.82624230384827 6 3.8282680829366 7 3.14329118728638 8 2.65918433666229 9 2.29830117225647 10 2.0209724187851 11 1.79948710203171 12 1.61409620841344 13 1.46074042717616 14 1.32924872636795 15 1.21726262966792 16 1.11897389094035 17 1.03334389130274 18 0.958305712540944 19 0.891799467802048 20 0.832335803906123 21 0.779093158245087 22 0.730390900373459 23 0.687952929735184 24 0.647841980059942 25 0.611974962552388 26 0.578720078865687 27 0.548181289434433 28 0.519738193353017 29 0.494122197230657 30 0.469562184810638 31 0.447027228275935 32 0.425909027457237 33 0.405799100796382 34 0.387290609876315 35 0.369525068998337 36 0.353115017215411 37 0.337948330243429 38 0.323367748657862 39 0.309790432453156 40 0.296300607919693 41 0.284598019719124 42 0.2724917948246 43 0.261456338564555 44 0.250886546572049 45 0.241213052471479 46 0.231479774912198 47 0.222938293218613 48 0.214299857119719 49 0.206214878459771 50 0.198593849440416 51 0.191719268262386 52 0.184854571521282 53 0.178505202631156 54 0.17248487273852 55 0.166665156185627 56 0.161200194060802 57 0.156313274304072 58 0.151353917519252 59 0.146561618149281 60 0.141879546642303 61 0.137599721054236 62 0.133390770852566 63 0.129732670634985 64 0.125670676678419 65 0.122013008097808 66 0.118538695325454 67 0.115107304354509 68 0.111667177826166 69 0.108680841078361 70 0.105689052989086 71 0.102759801596403 72 0.0999010826150576 73 0.0970715709030628 74 0.0945964718858401 75 0.091891444226106 76 0.0894035321970781 77 0.0871040018896262 
78 0.0848384256164233 79 0.0825832533339659 80 0.0805530853569508 81 0.0785085402429104 82 0.0763366219898065 83 0.0744990644355615 84 0.0725941605865955 85 0.0707512660572926 86 0.0688440808405479 87 0.0672436330467463 88 0.0653252565612396 89 0.0636543427904447 90 0.0621214990814527 91 0.0605742686738571 92 0.0592222777505716 93 0.0577208314090967 94 0.0563673141101996 95 0.0550578398009141 96 0.0537669184307257 97 0.0525794060279926 98 0.0512615571419398 99 0.0500253544499477 }; \end{axis} \end{tikzpicture} } \caption{Mahalanobis target} \label{fig:sim_mahalanobis} \end{subfigure} \hfill \begin{subfigure}[t]{0.24\textwidth} \centering \adjustbox{max width=0.99\textwidth}{ \begin{tikzpicture} \definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177} \definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137} \definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843} \definecolor{color3}{rgb}{0.83921568627451,0.152941176470588,0.156862745098039} \definecolor{color4}{rgb}{0.580392156862745,0.403921568627451,0.741176470588235} \begin{axis}[ legend cell align={left}, legend style={fill opacity=0.8, draw opacity=1, text opacity=1, draw=white!80!black}, tick align=outside, tick pos=left, x grid style={white!69.0196078431373!black}, xmin=-4.95, xmax=103.95, xtick style={color=black}, y grid style={white!69.0196078431373!black}, ylabel={test MAE}, ymin=0.290612192228222, ymax=5.59408272125985, ytick style={color=black} ] \path [fill=color0, fill opacity=0.2] (axis cs:0,1.99224862634117) --(axis cs:0,1.94492124339963) --(axis cs:1,1.67420688656776) --(axis cs:2,1.54430508192015) --(axis cs:3,1.45652922553216) --(axis cs:4,1.40028004951854) --(axis cs:5,1.35967134552822) --(axis cs:6,1.32457089512639) --(axis cs:7,1.29030191635361) --(axis cs:8,1.27051563107764) --(axis cs:9,1.24068348352406) --(axis cs:10,1.22412802990335) --(axis cs:11,1.20426601647934) --(axis cs:12,1.18965316660669) --(axis cs:13,1.1694498272512) --(axis cs:14,1.15663783787389) --(axis cs:15,1.14218019753937) --(axis cs:16,1.13002275463508) --(axis cs:17,1.1176452889127) --(axis cs:18,1.10608994705117) --(axis cs:19,1.0947940499084) --(axis cs:20,1.09134180315898) --(axis cs:21,1.08358247281695) --(axis cs:22,1.0701588132303) --(axis cs:23,1.06442596878958) --(axis cs:24,1.05381315548285) --(axis cs:25,1.05039662611214) --(axis cs:26,1.04479763685315) --(axis cs:27,1.03472999426312) --(axis cs:28,1.03200023144103) --(axis cs:29,1.02053989011753) --(axis cs:30,1.01625374528845) --(axis cs:31,1.01051750292196) --(axis cs:32,0.99993781960047) --(axis cs:33,0.996404392015092) --(axis cs:34,0.991090057344597) --(axis cs:35,0.981833990368969) --(axis cs:36,0.971592966455479) --(axis cs:37,0.966608814387748) --(axis cs:38,0.959393057568346) --(axis cs:39,0.948642689675831) --(axis cs:40,0.944465263112073) --(axis cs:41,0.929433298518168) --(axis cs:42,0.924065338023135) --(axis cs:43,0.914389150480843) --(axis cs:44,0.905667311633738) --(axis cs:45,0.892522544208199) --(axis cs:46,0.887195935526667) --(axis cs:47,0.880825538287997) --(axis cs:48,0.872233960035366) --(axis cs:49,0.857715680485014) --(axis cs:50,0.856198595075913) --(axis cs:51,0.834655426004831) --(axis cs:52,0.832570831887822) --(axis cs:53,0.822062737487252) --(axis cs:54,0.811866324220272) --(axis cs:55,0.799359386166413) --(axis cs:56,0.790549751993385) --(axis cs:57,0.777217216945762) --(axis cs:58,0.770251130432964) --(axis cs:59,0.751095095045208) --(axis cs:60,0.740410366782891) --(axis cs:61,0.735369157359695) 
% [pgfplots coordinate data omitted: five curves (color0--color4) with shaded bands, x = 0--99]
\end{axis}
\end{tikzpicture}
}
\caption{$x \log x$ target}
\label{fig:sim_xlogx}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.24\textwidth}
\centering
\adjustbox{max width=0.99\textwidth}{
\begin{tikzpicture}
% [pgfplots code omitted: axis with ylabel ``test MAE'', five curves (color0--color4) with shaded bands]
cs:12,0.637799633225611) --(axis cs:11,0.681731200161268) --(axis cs:10,0.733420020250606) --(axis cs:9,0.79527031390667) --(axis cs:8,0.870750163846194) --(axis cs:7,0.966054167479312) --(axis cs:6,1.09063116420673) --(axis cs:5,1.25908407178094) --(axis cs:4,1.49753810845818) --(axis cs:3,1.85010714332905) --(axis cs:2,2.40669923932587) --(axis cs:1,3.38823163593498) --(axis cs:0,5.50917292881053) --cycle; \addplot [semithick, color0] table { 0 0.326925973097483 1 0.294634689887365 2 0.274104603131612 3 0.258737710118294 4 0.247505288322767 5 0.239064494768778 6 0.233344890673955 7 0.22870160639286 8 0.224997796614965 9 0.222560113668442 10 0.219449312488238 11 0.217734928925832 12 0.216129558285077 13 0.214505664507548 14 0.213338167468707 15 0.211502039432526 16 0.210361697276433 17 0.20920064051946 18 0.208435583114624 19 0.207089974482854 20 0.205930248896281 21 0.205464551846186 22 0.204107407728831 23 0.203781975309054 24 0.201159195105235 25 0.201295613249143 26 0.200090034802755 27 0.199072060982386 28 0.198953817288081 29 0.198439792792002 30 0.197021812200546 31 0.196509122848511 32 0.197352321942647 33 0.196123553315798 34 0.195368794600169 35 0.19490905602773 36 0.19467134475708 37 0.19501790702343 38 0.194869856039683 39 0.193744439880053 40 0.194298913081487 41 0.194998970627785 42 0.19412540992101 43 0.193900022904078 44 0.193202624718348 45 0.192734926939011 46 0.193816778063774 47 0.192801293730736 48 0.192786574363708 49 0.193222839633624 50 0.192322255174319 51 0.192263540625572 52 0.192914986610413 53 0.192765168348948 54 0.193124212821325 55 0.192410353819529 56 0.192237361272176 57 0.192395674188932 58 0.192491819461187 59 0.192005058129629 60 0.192137032747269 61 0.191804114977519 62 0.19196988940239 63 0.192177731792132 64 0.190989186366399 65 0.192121902108192 66 0.192382936676343 67 0.191626888513565 68 0.191197282075882 69 0.191754087805748 70 0.192660230398178 71 0.191548694173495 72 0.191423924763997 73 0.190801896651586 74 0.191813638806343 75 0.191145458817482 76 0.190803829828898 77 0.192036115129789 78 0.191213907798131 79 0.191827780008316 80 0.191136823097865 81 0.191232998172442 82 0.191331049799919 83 0.191211238503456 84 0.190629536906878 85 0.191521880030632 86 0.190908023715019 87 0.191115049521128 88 0.191089628140132 89 0.190584099292755 90 0.190591130654017 91 0.191022974252701 92 0.190313325325648 93 0.191387340426445 94 0.191434144973755 95 0.191069560249646 96 0.190008464455605 97 0.190808077653249 98 0.190820660193761 99 0.191460753480593 }; \addplot [semithick, color1] table { 0 0.350010474522909 1 0.316584209601084 2 0.303433938821157 3 0.297157281637192 4 0.293368575970332 5 0.29230891863505 6 0.289919992287954 7 0.289652619759242 8 0.288962517182032 9 0.2883920053641 10 0.287958735227585 11 0.287732132275899 12 0.287959931294123 13 0.287001176675161 14 0.287642651796341 15 0.28692499200503 16 0.286608113845189 17 0.286822199821472 18 0.28602884610494 19 0.286428421735764 20 0.285852773984273 21 0.286445981264114 22 0.286751210689545 23 0.286126234134038 24 0.286022875706355 25 0.286433796087901 26 0.28669269879659 27 0.28554822007815 28 0.285765600204468 29 0.285797611872355 30 0.286343129475911 31 0.285707499583562 32 0.284846168756485 33 0.286242572466532 34 0.286322063207626 35 0.284948164224625 36 0.286626327037811 37 0.285768844683965 38 0.285529665152232 39 0.285570800304413 40 0.28526021639506 41 0.285944771766663 42 0.286589564879735 43 0.285141656796138 44 0.285204917192459 45 0.285566063721975 46 0.284942541519801 47 
0.285295508305232 48 0.285003928343455 49 0.284871794780095 50 0.284713594118754 51 0.284839802980423 52 0.284981258710225 53 0.285240723689397 54 0.286398075024287 55 0.285649027427038 56 0.284894245862961 57 0.284627149502436 58 0.285898000001907 59 0.285353509585063 60 0.285719535748164 61 0.284976106882095 62 0.284462400277456 63 0.284344341357549 64 0.285527181625366 65 0.284395505984624 66 0.286345779895782 67 0.284955664475759 68 0.285014222065608 69 0.284155418475469 70 0.285815000534058 71 0.285354747374853 72 0.285357375939687 73 0.284674072265625 74 0.285299297173818 75 0.28548127412796 76 0.284819241364797 77 0.284436877568563 78 0.285345979531606 79 0.284843496481578 80 0.284676071008046 81 0.284866390625636 82 0.285444633165995 83 0.284996970494588 84 0.28432828783989 85 0.285274724165599 86 0.284319458405177 87 0.285226285457611 88 0.285501237710317 89 0.285074633359909 90 0.285223011175791 91 0.284896026055018 92 0.285238319635391 93 0.284569865465164 94 0.284766711791356 95 0.284887446959813 96 0.284993728001912 97 0.284952739874522 98 0.284891041119893 99 0.28559405207634 }; \addplot [semithick, color2] table { 0 2.02223745981852 1 1.25434563159943 2 0.838263400395711 3 0.619481229782105 4 0.506532301505407 5 0.442078200976054 6 0.402411238352458 7 0.376106836398443 8 0.357347299655279 9 0.343558531999588 10 0.33404065767924 11 0.325378060340881 12 0.321178907155991 13 0.316071035464605 14 0.313133317232132 15 0.308938533067703 16 0.306089826424917 17 0.303399034341176 18 0.301061497131983 19 0.300092124938965 20 0.298832335074743 21 0.297224585215251 22 0.296231228113174 23 0.294353598356247 24 0.293964829047521 25 0.2929458796978 26 0.29278022646904 27 0.292227055629094 28 0.291745444138845 29 0.29075541694959 30 0.291704702377319 31 0.290324697891871 32 0.289316288630168 33 0.290547273556391 34 0.289808690547943 35 0.289292891820272 36 0.289585530757904 37 0.288785074154536 38 0.288485360145569 39 0.288593326012293 40 0.288269958893458 41 0.287586228052775 42 0.287594517072042 43 0.287034843365351 44 0.287909716367722 45 0.287246018648148 46 0.286983392635981 47 0.287711600462596 48 0.287013789017995 49 0.287304645776749 50 0.287028304735819 51 0.285664896170298 52 0.285899653037389 53 0.286106530825297 54 0.287374538183212 55 0.286520195007324 56 0.28572237888972 57 0.286313104629517 58 0.28659592072169 59 0.285698876778285 60 0.286553724606832 61 0.286278398831685 62 0.285383365551631 63 0.285185527801514 64 0.285722929239273 65 0.284462761878967 66 0.285889981190364 67 0.284995247920354 68 0.28580997188886 69 0.284796486298243 70 0.285827841361364 71 0.285540421803792 72 0.286616379022598 73 0.285575445493062 74 0.284375129143397 75 0.285129994153976 76 0.285762482881546 77 0.285531975825628 78 0.285115744670232 79 0.284960736831029 80 0.285998366276423 81 0.285511281092962 82 0.285261489947637 83 0.285554917653402 84 0.285562423865 85 0.28498105208079 86 0.284988526503245 87 0.284662973880768 88 0.285346724589666 89 0.28499124844869 90 0.284189810355504 91 0.285790332158407 92 0.285755813121796 93 0.285432648658752 94 0.28459828098615 95 0.284674471616745 96 0.284438596169154 97 0.285094503561656 98 0.284766354163488 99 0.284712020556132 }; \addplot [semithick, color3] table { 0 0.530678033828735 1 0.50444467663765 2 0.488547831773758 3 0.476704134543737 4 0.467674914995829 5 0.460033935308456 6 0.455381713310877 7 0.449892191092173 8 0.445926151672999 9 0.444088842471441 10 0.441331056753794 11 0.440481539567312 12 0.439339005947113 13 0.43900857369105 14 
0.440400759379069 15 0.439932467540105 16 0.440017823378245 17 0.442230335871379 18 0.443330578009287 19 0.445074331760407 20 0.445535778999329 21 0.447474092245102 22 0.450013454755147 23 0.452085707585017 24 0.454801785945892 25 0.455691820383072 26 0.458976630369822 27 0.461744008461634 28 0.460990705092748 29 0.462848391135534 30 0.463980740308762 31 0.464094835519791 32 0.466339576244354 33 0.469392748673757 34 0.469166155656179 35 0.471489443381627 36 0.471377384662628 37 0.472506322463354 38 0.473670738935471 39 0.472580319643021 40 0.475126618146896 41 0.476137808958689 42 0.475834228595098 43 0.477172819773356 44 0.476254296302795 45 0.476562782128652 46 0.477145644028982 47 0.477551958958308 48 0.477273764212926 49 0.477401061852773 50 0.47797446846962 51 0.477861368656158 52 0.478005039691925 53 0.479258435964584 54 0.478566777706146 55 0.479002984364827 56 0.480258212486903 57 0.480164565642675 58 0.47971336444219 59 0.480926420291265 60 0.480392851432165 61 0.480267630020777 62 0.481032860279083 63 0.4812764108181 64 0.48061269124349 65 0.480516137679418 66 0.481601681311925 67 0.480610718329747 68 0.481152093410492 69 0.482015881935755 70 0.481757682561874 71 0.481325171391169 72 0.481422219673793 73 0.481068827708562 74 0.481334720055262 75 0.482248373826345 76 0.481553544600805 77 0.48178271651268 78 0.481745151678721 79 0.482164778312047 80 0.481889488299688 81 0.482471134265264 82 0.482897655169169 83 0.483087545633316 84 0.481893660624822 85 0.482253855466843 86 0.482396666208903 87 0.482195826371511 88 0.482615371545156 89 0.482832964261373 90 0.483903203407923 91 0.484746919075648 92 0.483133405447006 93 0.483894368012746 94 0.484530357519786 95 0.484023425976435 96 0.483562282721202 97 0.483983459075292 98 0.483948268493017 99 0.484449422359467 }; \addplot [semithick, color4] table { 0 5.09792987505595 1 3.15079356829325 2 2.21005857785543 3 1.6672904809316 4 1.32574623823166 5 1.09987105528514 6 0.944974136352539 7 0.834024326006572 8 0.751707985003789 9 0.688051052888234 10 0.6368887702624 11 0.594672171274821 12 0.559120031197866 13 0.528696821133296 14 0.50238899787267 15 0.479332828521729 16 0.458867530028025 17 0.440736496448517 18 0.42458923459053 19 0.410064727067947 20 0.397089989980062 21 0.385239277283351 22 0.37450949549675 23 0.364802511533101 24 0.35585058927536 25 0.347737910350164 26 0.340361930926641 27 0.333410825332006 28 0.327178440491358 29 0.321347335974375 30 0.315852622191111 31 0.310649724801381 32 0.305961964527766 33 0.301825392246246 34 0.297681778669357 35 0.293765048185984 36 0.290344612797101 37 0.286936491727829 38 0.283800668517748 39 0.280952528119087 40 0.278092138965925 41 0.275559143225352 42 0.273052434126536 43 0.270536618431409 44 0.268265468875567 45 0.266267205278079 46 0.264392499128977 47 0.262161457538605 48 0.260384763280551 49 0.258797747890155 50 0.256871421138446 51 0.255203158656756 52 0.253900370001793 53 0.252420109510422 54 0.2508377323548 55 0.249772776166598 56 0.248390666643778 57 0.246934114893277 58 0.245802267392476 59 0.244897130131721 60 0.24376801153024 61 0.242475976546605 62 0.241869006554286 63 0.240516409277916 64 0.239521351456642 65 0.238697929183642 66 0.237892128030459 67 0.236893131335576 68 0.236293803652128 69 0.235548690954844 70 0.234652252991994 71 0.234033660093943 72 0.233370287219683 73 0.232324739297231 74 0.231686276197433 75 0.231394944588343 76 0.230539800723394 77 0.230350141723951 78 0.2294085085392 79 0.228783370057742 80 0.22872120141983 81 0.228289876381556 82 0.227311714490255 83 
\end{axis}
\end{tikzpicture}
}
\caption{KL-div. target}
\label{fig:sim_kl}
\end{subfigure}
\caption{Results when vectors with 10 real features and 10 distractor features are used to compute different specific Bregman divergences. Mean absolute error is on the y-axis and the number of training epochs on the x-axis; the shaded region around each curve shows the standard deviation of the results. Note that in all cases our NBD\xspace has very low variance while effectively learning the target divergence.}
\label{fig:sim_plots}
\end{figure*}
\begin{table}[t]
\centering
\adjustbox{max width=\columnwidth}{
\begin{tabular}{@{}lcccccccccccc@{}}
\toprule
& \multicolumn{3}{c}{Euclidean} & \multicolumn{3}{c}{Mahalanobis} & \multicolumn{3}{c}{$x\log x$} & \multicolumn{3}{c}{KL} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13}
Correlation & None & Med & High & None & Med & High & None & Med & High & None & Med & High \\
\midrule
NBD\xspace & \textit{0.17} & \textit{0.15} & \textit{0.16} & \textit{0.16} & \textit{0.18} & \textit{0.20} & \textbf{0.52} & \textbf{0.54} & \textbf{0.57} & \textbf{0.19} & \textbf{0.19} & \textbf{0.19} \\
Deepnorm & 3.56 & 3.97 & 4.15 & 7.70 & 5.97 & 7.66 & 1.59 & 1.74 & 1.79 & 0.30 & 0.28 & 0.28 \\
Mahalanobis & \textbf{0.00} & \textbf{0.03} & \textbf{0.05} & \textbf{0.02} & \textbf{0.04} & \textbf{0.09} & \textit{1.45} & 1.67 & 1.72 & \textit{0.23} & \textit{0.22} & \textit{0.22} \\
Deep-div & 7.78 & 7.81 & 7.84 & 17.92 & 12.26 & 14.15 & 2.59 & 2.67 & 2.70 & 0.44 & 0.50 & 0.51 \\
Widenorm & 3.56 & 3.99 & 4.12 & 7.73 & 6.01 & 7.60 & 1.49 & \textit{1.48} & \textit{1.48} & 0.30 & 0.28 & 0.28 \\
\bottomrule
\end{tabular}
}
\caption{Regression test MAE when the unused distractor features are uncorrelated (None) or moderately/highly (Med/High) correlated with the true, used features. Best results in \textbf{bold}, second best in \textit{italics}.
\label{tbl:sim_errors_correlation}
}
\vspace{-5mm}
\end{table}
\subsection{Co-learning an embedding with a divergence}
\label{sec:bregMNIST}
Finally, we introduce a prototype task, BregMNIST, in which a neural embedding must be learned jointly with the divergence. The dataset consists of paired MNIST images, and the target distance is a Bregman divergence between the digit values shown in the images. Example pairs are displayed in \cref{fig:bregMNIST_example} for the asymmetric Bregman divergence parametrized by $\phi(x) = (x+1) \log (x+1)$.
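For concreteness, the label for an image pair follows from the standard Bregman definition $D_{\phi}(a,b)=\phi(a)-\phi(b)-\phi'(b)\,(a-b)$ applied to the digit values; with this generator it simplifies (a short derivation of ours, using $\phi'(x)=\log(x+1)+1$) to
\begin{equation*}
D_{\phi}(a,b)=(a+1)\log\frac{a+1}{b+1}-(a-b),
\end{equation*}
so that, for example, the pair with digits $a=3$ and $b=5$ in \cref{fig:bregMNIST_example} receives the target $4\log(4/6)-(4-6)\approx 0.378$.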
\begin{wrapfigure}[16]{r}{0.5\columnwidth} \vspace{-15pt} \centering \adjustbox{max width=0.49\columnwidth}{ \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=0.5 ] (10,100) -- (19,70) -- (61,70) -- (70,100) -- cycle ; \draw (40,120) -- (40,103) ; \draw [shift={(40,100)}, rotate = 450] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (10.72,-5.15) -- (0,0) -- (10.72,5.15) -- (7.12,0) -- cycle ; \draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=0.5 ] (100,100) -- (109,70) -- (151,70) -- (160,100) -- cycle ; \draw (130,120) -- (130,103) ; \draw [shift={(130,100)}, rotate = 450] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (10.72,-5.15) -- (0,0) -- (10.72,5.15) -- (7.12,0) -- cycle ; \draw [fill={rgb, 255:red, 245; green, 166; blue, 35 } ,fill opacity=0.5 ] (60,10) -- (110,10) -- (110,40) -- (60,40) -- cycle ; \draw (40,70) -- (40,60) -- (70,60) -- (70,43) ; \draw [shift={(70,40)}, rotate = 450] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (10.72,-5.15) -- (0,0) -- (10.72,5.15) -- (7.12,0) -- cycle ; \draw (130,70) -- (130,60) -- (100,60) -- (100,43) ; \draw [shift={(100,40)}, rotate = 450] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (10.72,-5.15) -- (0,0) -- (10.72,5.15) -- (7.12,0) -- cycle ; \draw (160,150) -- (217,150) ; \draw [shift={(220,150)}, rotate = 180] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (10.72,-5.15) -- (0,0) -- (10.72,5.15) -- (7.12,0) -- cycle ; \draw (280,100) -- (280,30) -- (243,30) ; \draw [shift={(240,30)}, rotate = 360] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (10.72,-5.15) -- (0,0) -- (10.72,5.15) -- (7.12,0) -- cycle ; \draw (110,30) -- (157,30) ; \draw [shift={(160,30)}, rotate = 180] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (10.72,-5.15) -- (0,0) -- (10.72,5.15) -- (7.12,0) -- cycle ; \draw (40,150) node {\includegraphics[width=45pt,height=45pt]{imgs/pair1_2.png}}; \draw (130,150) node {\includegraphics[width=45pt,height=45pt]{imgs/pair1_1.png}}; \draw (40,85) node [align=left] {\begin{minipage}[lt]{29.14pt}\setlength\topsep{0pt} \begin{center} CNN \end{center} \end{minipage}}; \draw (130,85) node [align=left] {\begin{minipage}[lt]{29.14pt}\setlength\topsep{0pt} \begin{center} CNN \end{center} \end{minipage}}; \draw (85,25) node [align=left] {\begin{minipage}[lt]{34pt}\setlength\topsep{0pt} \begin{center} NBD \end{center} \end{minipage}}; \draw (195,25) node [align=left] {\begin{minipage}[lt]{47.6pt}\setlength\topsep{0pt} \begin{center} $\displaystyle {\displaystyle \ell (\hat{y}_{\mathit{NBD}} ,y)}$ \end{center} \end{minipage}}; \draw (40,195) node [align=left] {\begin{minipage}[lt]{27.2pt}\setlength\topsep{0pt} \begin{center} $\displaystyle a$ \end{center} \end{minipage}}; \draw (130,195) node [align=left] {\begin{minipage}[lt]{27.2pt}\setlength\topsep{0pt} \begin{center} $\displaystyle b$ \end{center} \end{minipage}}; \draw (190,135) node [align=left] {\begin{minipage}[lt]{40.8pt}\setlength\topsep{0pt} \begin{center} $\displaystyle D( a,b)$ \end{center} \end{minipage}}; \draw (300,150) node [font=\footnotesize] [align=left] {\begin{minipage}[lt]{108.8pt}\setlength\topsep{0pt} \begin{center} $\displaystyle \begin{array}{{>{\displaystyle}l}} = 4 \log\left(\frac{4}{6}\right) 
- (4-6) \ \\ \approx 0.378 \end{array}$
\end{center}
\end{minipage}};
\end{tikzpicture}
}
\caption{Demonstration of the BregMNIST task. Nodes with the same color indicate weight sharing. Each image is embedded by a CNN, and the ground-truth divergence is computed from the digit values of the input images. The embeddings of the two images are given to NBD\xspace, and the loss is computed between NBD\xspace's output and the true divergence. The CNN and NBD\xspace are learned jointly.
}
\label{fig:bregMNIST_example}
\end{wrapfigure}
We additionally create a harder version by substituting CIFAR10 for MNIST, with the same divergence labels. In both cases the relation of features to class label is arbitrary (that is, we impose an ordinal relation among labels that does not exist in the data), so the embedding function must effectively map image classes to the correct numbers used to compute the divergence; a minimal sketch of this joint training loop is given below. The results of the experiments (\cref{fig:bregmnist_all}) mirror our results in \cref{sec:sim}: for both BregMNIST and BregCIFAR, NBD\xspace performs best, while prior methods for learning asymmetric measures perform worse than the Mahalanobis distance.
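The pipeline in \cref{fig:bregMNIST_example} admits a compact implementation. The following is a minimal PyTorch sketch (not our released code): \texttt{SmallCNN} is a stand-in embedder, \texttt{nbd} stands in for the neural Bregman divergence module mapping an embedding pair to a scalar, and an $\ell_{1}$ loss is assumed to match the MAE metric we report.
\begin{verbatim}
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    # Stand-in embedder; the architectures used in our
    # experiments may differ.
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, dim))

    def forward(self, x):
        return self.net(x)

def train_step(cnn, nbd, opt, img_a, img_b, y):
    # One weight-shared CNN embeds both images.
    za, zb = cnn(img_a), cnn(img_b)
    # nbd maps the embedding pair to a predicted divergence.
    pred = nbd(za, zb)
    # y is the ground-truth divergence of the digit values;
    # opt optimizes the parameters of cnn and nbd jointly.
    loss = nn.functional.l1_loss(pred, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
\end{verbatim}
Calling the same \texttt{cnn} module on both images realizes the weight sharing indicated by the node colors in the figure.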
\begin{figure}[t]
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\begin{tikzpicture}
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137}
\definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843}
\definecolor{color3}{rgb}{0.83921568627451,0.152941176470588,0.156862745098039}
\definecolor{color4}{rgb}{0.580392156862745,0.403921568627451,0.741176470588235}
\begin{axis}[
legend cell align={left},
legend style={font=\tiny, fill opacity=0.8, draw opacity=1, text opacity=1, anchor=east, draw=white!80!black},
legend pos=north east,
tick align=outside,
tick pos=left,
x grid style={white!69.0196078431373!black},
xmin=-2, xmax=52,
xtick style={color=black},
y grid style={white!69.0196078431373!black},
ymin=-0.349281168060688, ymax=10.5681231294396,
ytick style={color=black},
width=1.0\columnwidth,
height=0.60\columnwidth
]
% (pgfplots coordinate data elided: shaded standard-deviation bands for the five methods;
% the constant-height band below continues on the following source lines)
\path [fill=color3, fill opacity=0.2] (axis cs:0,10.0718774795532) --(axis cs:25,10.0718774795532)
--(axis cs:26,10.0718774795532) --(axis cs:27,10.0718774795532) --(axis cs:28,10.0718774795532) --(axis cs:29,10.0718774795532) --(axis cs:30,10.0718774795532) --(axis cs:31,10.0718774795532) --(axis cs:32,10.0718774795532) --(axis cs:33,10.0718774795532) --(axis cs:34,10.0718774795532) --(axis cs:35,10.0718774795532) --(axis cs:36,10.0718774795532) --(axis cs:37,10.0718774795532) --(axis cs:38,10.0718774795532) --(axis cs:39,10.0718774795532) --(axis cs:40,10.0718774795532) --(axis cs:41,10.0718774795532) --(axis cs:42,10.0718774795532) --(axis cs:43,10.0718774795532) --(axis cs:44,10.0718774795532) --(axis cs:45,10.0718774795532) --(axis cs:46,10.0718774795532) --(axis cs:47,10.0718774795532) --(axis cs:48,10.0718774795532) --(axis cs:49,10.0718774795532) --(axis cs:50,10.0718774795532) --(axis cs:51,10.0718774795532) --(axis cs:52,10.0718774795532) --(axis cs:53,10.0718774795532) --(axis cs:54,10.0718774795532) --(axis cs:55,10.0718774795532) --(axis cs:56,10.0718774795532) --(axis cs:57,10.0718774795532) --(axis cs:58,10.0718774795532) --(axis cs:59,10.0718774795532) --(axis cs:60,10.0718774795532) --(axis cs:61,10.0718774795532) --(axis cs:62,10.0718774795532) --(axis cs:63,10.0718774795532) --(axis cs:64,10.0718774795532) --(axis cs:65,10.0718774795532) --(axis cs:66,10.0718774795532) --(axis cs:67,10.0718774795532) --(axis cs:68,10.0718774795532) --(axis cs:69,10.0718774795532) --(axis cs:70,10.0718774795532) --(axis cs:71,10.0718774795532) --(axis cs:72,10.0718774795532) --(axis cs:73,10.0718774795532) --(axis cs:74,10.0718774795532) --(axis cs:75,10.0718774795532) --(axis cs:76,10.0718774795532) --(axis cs:77,10.0718774795532) --(axis cs:78,10.0718774795532) --(axis cs:79,10.0718774795532) --(axis cs:80,10.0718774795532) --(axis cs:81,10.0718774795532) --(axis cs:82,10.0718774795532) --(axis cs:83,10.0718774795532) --(axis cs:84,10.0718774795532) --(axis cs:85,10.0718774795532) --(axis cs:86,10.0718774795532) --(axis cs:87,10.0718774795532) --(axis cs:88,10.0718774795532) --(axis cs:89,10.0718774795532) --(axis cs:90,10.0718774795532) --(axis cs:91,10.0718774795532) --(axis cs:92,10.0718774795532) --(axis cs:93,10.0718774795532) --(axis cs:94,10.0718774795532) --(axis cs:95,10.0718774795532) --(axis cs:96,10.0718774795532) --(axis cs:97,10.0718774795532) --(axis cs:98,10.0718774795532) --(axis cs:99,10.0718774795532) --(axis cs:100,10.0718774795532) --(axis cs:101,10.0718774795532) --(axis cs:102,10.0718774795532) --(axis cs:103,10.0718774795532) --(axis cs:104,10.0718774795532) --(axis cs:105,10.0718774795532) --(axis cs:106,10.0718774795532) --(axis cs:107,10.0718774795532) --(axis cs:108,10.0718774795532) --(axis cs:109,10.0718774795532) --(axis cs:110,10.0718774795532) --(axis cs:111,10.0718774795532) --(axis cs:112,10.0718774795532) --(axis cs:113,10.0718774795532) --(axis cs:114,10.0718774795532) --(axis cs:115,10.0718774795532) --(axis cs:116,10.0718774795532) --(axis cs:117,10.0718774795532) --(axis cs:118,10.0718774795532) --(axis cs:119,10.0718774795532) --(axis cs:120,10.0718774795532) --(axis cs:121,10.0718774795532) --(axis cs:122,10.0718774795532) --(axis cs:123,10.0718774795532) --(axis cs:124,10.0718774795532) --(axis cs:125,10.0718774795532) --(axis cs:126,10.0718774795532) --(axis cs:127,10.0718774795532) --(axis cs:128,10.0718774795532) --(axis cs:129,10.0718774795532) --(axis cs:130,10.0718774795532) --(axis cs:131,10.0718774795532) --(axis cs:132,10.0718774795532) --(axis cs:133,10.0718774795532) --(axis cs:134,10.0718774795532) --(axis 
cs:135,10.0718774795532) --(axis cs:136,10.0718774795532) --(axis cs:137,10.0718774795532) --(axis cs:138,10.0718774795532) --(axis cs:139,10.0718774795532) --(axis cs:140,10.0718774795532) --(axis cs:141,10.0718774795532) --(axis cs:142,10.0718774795532) --(axis cs:143,10.0718774795532) --(axis cs:144,10.0718774795532) --(axis cs:145,10.0718774795532) --(axis cs:146,10.0718774795532) --(axis cs:147,10.0718774795532) --(axis cs:148,10.0718774795532) --(axis cs:149,10.0718774795532) --(axis cs:150,10.0718774795532) --(axis cs:151,10.0718774795532) --(axis cs:152,10.0718774795532) --(axis cs:153,10.0718774795532) --(axis cs:154,10.0718774795532) --(axis cs:155,10.0718774795532) --(axis cs:156,10.0718774795532) --(axis cs:157,10.0718774795532) --(axis cs:158,10.0718774795532) --(axis cs:159,10.0718774795532) --(axis cs:160,10.0718774795532) --(axis cs:161,10.0718774795532) --(axis cs:162,10.0718774795532) --(axis cs:163,10.0718774795532) --(axis cs:164,10.0718774795532) --(axis cs:165,10.0718774795532) --(axis cs:166,10.0718774795532) --(axis cs:167,10.0718774795532) --(axis cs:168,10.0718774795532) --(axis cs:169,10.0718774795532) --(axis cs:170,10.0718774795532) --(axis cs:171,10.0718774795532) --(axis cs:172,10.0718774795532) --(axis cs:173,10.0718774795532) --(axis cs:174,10.0718774795532) --(axis cs:175,10.0718774795532) --(axis cs:176,10.0718774795532) --(axis cs:177,10.0718774795532) --(axis cs:178,10.0718774795532) --(axis cs:179,10.0718774795532) --(axis cs:180,10.0718774795532) --(axis cs:181,10.0718774795532) --(axis cs:182,10.0718774795532) --(axis cs:183,10.0718774795532) --(axis cs:184,10.0718774795532) --(axis cs:185,10.0718774795532) --(axis cs:186,10.0718774795532) --(axis cs:187,10.0718774795532) --(axis cs:188,10.0718774795532) --(axis cs:189,10.0718774795532) --(axis cs:190,10.0718774795532) --(axis cs:191,10.0718774795532) --(axis cs:192,10.0718774795532) --(axis cs:193,10.0718774795532) --(axis cs:194,10.0718774795532) --(axis cs:195,10.0718774795532) --(axis cs:196,10.0718774795532) --(axis cs:197,10.0718774795532) --(axis cs:198,10.0718774795532) --(axis cs:199,10.0718774795532) --(axis cs:199,10.0718774795532) --(axis cs:199,10.0718774795532) --(axis cs:198,10.0718774795532) --(axis cs:197,10.0718774795532) --(axis cs:196,10.0718774795532) --(axis cs:195,10.0718774795532) --(axis cs:194,10.0718774795532) --(axis cs:193,10.0718774795532) --(axis cs:192,10.0718774795532) --(axis cs:191,10.0718774795532) --(axis cs:190,10.0718774795532) --(axis cs:189,10.0718774795532) --(axis cs:188,10.0718774795532) --(axis cs:187,10.0718774795532) --(axis cs:186,10.0718774795532) --(axis cs:185,10.0718774795532) --(axis cs:184,10.0718774795532) --(axis cs:183,10.0718774795532) --(axis cs:182,10.0718774795532) --(axis cs:181,10.0718774795532) --(axis cs:180,10.0718774795532) --(axis cs:179,10.0718774795532) --(axis cs:178,10.0718774795532) --(axis cs:177,10.0718774795532) --(axis cs:176,10.0718774795532) --(axis cs:175,10.0718774795532) --(axis cs:174,10.0718774795532) --(axis cs:173,10.0718774795532) --(axis cs:172,10.0718774795532) --(axis cs:171,10.0718774795532) --(axis cs:170,10.0718774795532) --(axis cs:169,10.0718774795532) --(axis cs:168,10.0718774795532) --(axis cs:167,10.0718774795532) --(axis cs:166,10.0718774795532) --(axis cs:165,10.0718774795532) --(axis cs:164,10.0718774795532) --(axis cs:163,10.0718774795532) --(axis cs:162,10.0718774795532) --(axis cs:161,10.0718774795532) --(axis cs:160,10.0718774795532) --(axis cs:159,10.0718774795532) --(axis 
cs:158,10.0718774795532) --(axis cs:157,10.0718774795532) --(axis cs:156,10.0718774795532) --(axis cs:155,10.0718774795532) --(axis cs:154,10.0718774795532) --(axis cs:153,10.0718774795532) --(axis cs:152,10.0718774795532) --(axis cs:151,10.0718774795532) --(axis cs:150,10.0718774795532) --(axis cs:149,10.0718774795532) --(axis cs:148,10.0718774795532) --(axis cs:147,10.0718774795532) --(axis cs:146,10.0718774795532) --(axis cs:145,10.0718774795532) --(axis cs:144,10.0718774795532) --(axis cs:143,10.0718774795532) --(axis cs:142,10.0718774795532) --(axis cs:141,10.0718774795532) --(axis cs:140,10.0718774795532) --(axis cs:139,10.0718774795532) --(axis cs:138,10.0718774795532) --(axis cs:137,10.0718774795532) --(axis cs:136,10.0718774795532) --(axis cs:135,10.0718774795532) --(axis cs:134,10.0718774795532) --(axis cs:133,10.0718774795532) --(axis cs:132,10.0718774795532) --(axis cs:131,10.0718774795532) --(axis cs:130,10.0718774795532) --(axis cs:129,10.0718774795532) --(axis cs:128,10.0718774795532) --(axis cs:127,10.0718774795532) --(axis cs:126,10.0718774795532) --(axis cs:125,10.0718774795532) --(axis cs:124,10.0718774795532) --(axis cs:123,10.0718774795532) --(axis cs:122,10.0718774795532) --(axis cs:121,10.0718774795532) --(axis cs:120,10.0718774795532) --(axis cs:119,10.0718774795532) --(axis cs:118,10.0718774795532) --(axis cs:117,10.0718774795532) --(axis cs:116,10.0718774795532) --(axis cs:115,10.0718774795532) --(axis cs:114,10.0718774795532) --(axis cs:113,10.0718774795532) --(axis cs:112,10.0718774795532) --(axis cs:111,10.0718774795532) --(axis cs:110,10.0718774795532) --(axis cs:109,10.0718774795532) --(axis cs:108,10.0718774795532) --(axis cs:107,10.0718774795532) --(axis cs:106,10.0718774795532) --(axis cs:105,10.0718774795532) --(axis cs:104,10.0718774795532) --(axis cs:103,10.0718774795532) --(axis cs:102,10.0718774795532) --(axis cs:101,10.0718774795532) --(axis cs:100,10.0718774795532) --(axis cs:99,10.0718774795532) --(axis cs:98,10.0718774795532) --(axis cs:97,10.0718774795532) --(axis cs:96,10.0718774795532) --(axis cs:95,10.0718774795532) --(axis cs:94,10.0718774795532) --(axis cs:93,10.0718774795532) --(axis cs:92,10.0718774795532) --(axis cs:91,10.0718774795532) --(axis cs:90,10.0718774795532) --(axis cs:89,10.0718774795532) --(axis cs:88,10.0718774795532) --(axis cs:87,10.0718774795532) --(axis cs:86,10.0718774795532) --(axis cs:85,10.0718774795532) --(axis cs:84,10.0718774795532) --(axis cs:83,10.0718774795532) --(axis cs:82,10.0718774795532) --(axis cs:81,10.0718774795532) --(axis cs:80,10.0718774795532) --(axis cs:79,10.0718774795532) --(axis cs:78,10.0718774795532) --(axis cs:77,10.0718774795532) --(axis cs:76,10.0718774795532) --(axis cs:75,10.0718774795532) --(axis cs:74,10.0718774795532) --(axis cs:73,10.0718774795532) --(axis cs:72,10.0718774795532) --(axis cs:71,10.0718774795532) --(axis cs:70,10.0718774795532) --(axis cs:69,10.0718774795532) --(axis cs:68,10.0718774795532) --(axis cs:67,10.0718774795532) --(axis cs:66,10.0718774795532) --(axis cs:65,10.0718774795532) --(axis cs:64,10.0718774795532) --(axis cs:63,10.0718774795532) --(axis cs:62,10.0718774795532) --(axis cs:61,10.0718774795532) --(axis cs:60,10.0718774795532) --(axis cs:59,10.0718774795532) --(axis cs:58,10.0718774795532) --(axis cs:57,10.0718774795532) --(axis cs:56,10.0718774795532) --(axis cs:55,10.0718774795532) --(axis cs:54,10.0718774795532) --(axis cs:53,10.0718774795532) --(axis cs:52,10.0718774795532) --(axis cs:51,10.0718774795532) --(axis cs:50,10.0718774795532) --(axis 
cs:49,10.0718774795532) --(axis cs:48,10.0718774795532) --(axis cs:47,10.0718774795532) --(axis cs:46,10.0718774795532) --(axis cs:45,10.0718774795532) --(axis cs:44,10.0718774795532) --(axis cs:43,10.0718774795532) --(axis cs:42,10.0718774795532) --(axis cs:41,10.0718774795532) --(axis cs:40,10.0718774795532) --(axis cs:39,10.0718774795532) --(axis cs:38,10.0718774795532) --(axis cs:37,10.0718774795532) --(axis cs:36,10.0718774795532) --(axis cs:35,10.0718774795532) --(axis cs:34,10.0718774795532) --(axis cs:33,10.0718774795532) --(axis cs:32,10.0718774795532) --(axis cs:31,10.0718774795532) --(axis cs:30,10.0718774795532) --(axis cs:29,10.0718774795532) --(axis cs:28,10.0718774795532) --(axis cs:27,10.0718774795532) --(axis cs:26,10.0718774795532) --(axis cs:25,10.0718774795532) --(axis cs:24,10.0718774795532) --(axis cs:23,10.0718774795532) --(axis cs:22,10.0718774795532) --(axis cs:21,10.0718774795532) --(axis cs:20,10.0718774795532) --(axis cs:19,10.0718774795532) --(axis cs:18,10.0718774795532) --(axis cs:17,10.0718774795532) --(axis cs:16,10.0718774795532) --(axis cs:15,10.0718774795532) --(axis cs:14,10.0718774795532) --(axis cs:13,10.0718774795532) --(axis cs:12,10.0718774795532) --(axis cs:11,10.0718774795532) --(axis cs:10,10.0718774795532) --(axis cs:9,10.0718774795532) --(axis cs:8,10.0718774795532) --(axis cs:7,10.0718774795532) --(axis cs:6,10.0718774795532) --(axis cs:5,10.0718774795532) --(axis cs:4,10.0718774795532) --(axis cs:3,10.0718774795532) --(axis cs:2,10.0718774795532) --(axis cs:1,10.0718774795532) --(axis cs:0,10.0718774795532) --cycle; \path [fill=color4, fill opacity=0.2] (axis cs:0,2.34470746606768) --(axis cs:0,1.97450654417097) --(axis cs:1,1.67848497644468) --(axis cs:2,1.52453389346385) --(axis cs:3,1.50665299908785) --(axis cs:4,1.42995769548828) --(axis cs:5,1.37280804648207) --(axis cs:6,1.28598953138558) --(axis cs:7,1.35799296140345) --(axis cs:8,1.29017553976763) --(axis cs:9,1.31521879993941) --(axis cs:10,1.26481757389001) --(axis cs:11,1.23353531664371) --(axis cs:12,1.25092078793854) --(axis cs:13,1.24144970692337) --(axis cs:14,1.2382678482676) --(axis cs:15,1.1872253583302) --(axis cs:16,1.21926289555433) --(axis cs:17,1.19123847019127) --(axis cs:18,1.16530761940018) --(axis cs:19,1.14990297238454) --(axis cs:20,1.1665180501802) --(axis cs:21,1.15326357355614) --(axis cs:22,1.1254440657458) --(axis cs:23,1.12936866628784) --(axis cs:24,1.15633506277606) --(axis cs:25,1.11524254564007) --(axis cs:26,1.08658265223676) --(axis cs:27,1.07766082280913) --(axis cs:28,1.13709820309414) --(axis cs:29,1.11505711471202) --(axis cs:30,1.10386503352823) --(axis cs:31,1.10206639873388) --(axis cs:32,1.0829712615153) --(axis cs:33,1.09065117713452) --(axis cs:34,1.11333609594625) --(axis cs:35,1.10407311738601) --(axis cs:36,1.08044603909277) --(axis cs:37,1.08310626785935) --(axis cs:38,1.10675587142254) --(axis cs:39,1.07327639504701) --(axis cs:40,1.08014321319125) --(axis cs:41,1.06982318089232) --(axis cs:42,1.05950715267714) --(axis cs:43,1.06404417784716) --(axis cs:44,1.07510045760199) --(axis cs:45,1.04879169808308) --(axis cs:46,1.06662246270955) --(axis cs:47,1.04289922185531) --(axis cs:48,1.0562601688139) --(axis cs:49,1.06231489744035) --(axis cs:50,1.05525379736707) --(axis cs:51,1.04953626620665) --(axis cs:52,1.06948171744514) --(axis cs:53,1.0663922902972) --(axis cs:54,1.05938973172888) --(axis cs:55,1.07540039351244) --(axis cs:56,1.03918085682331) --(axis cs:57,1.06085227290728) --(axis cs:58,1.05329175755822) --(axis 
cs:59,1.06221291405215) --(axis cs:60,1.04436845522566) --(axis cs:61,1.06911410059601) --(axis cs:62,1.03468169056526) --(axis cs:63,1.05860410864435) --(axis cs:64,1.042205289067) --(axis cs:65,1.03248328984763) --(axis cs:66,1.0687883132096) --(axis cs:67,1.03351047744923) --(axis cs:68,1.02713583404614) --(axis cs:69,1.02215743398732) --(axis cs:70,0.992918141661901) --(axis cs:71,1.02741333666293) --(axis cs:72,1.03527701174802) --(axis cs:73,1.03074682929995) --(axis cs:74,1.03994158777047) --(axis cs:75,1.0241286410205) --(axis cs:76,1.03676619340453) --(axis cs:77,1.02370089964479) --(axis cs:78,1.0163086399773) --(axis cs:79,1.01422790411914) --(axis cs:80,1.0214511040282) --(axis cs:81,1.02185958302935) --(axis cs:82,1.00784234470692) --(axis cs:83,1.02286544685774) --(axis cs:84,1.02808176226764) --(axis cs:85,1.03157346526409) --(axis cs:86,1.04832066880261) --(axis cs:87,1.02672269653941) --(axis cs:88,1.02491724853982) --(axis cs:89,1.02588165869953) --(axis cs:90,1.01023025690016) --(axis cs:91,1.02875105145118) --(axis cs:92,1.01503392072725) --(axis cs:93,1.01233447077101) --(axis cs:94,1.02732110542397) --(axis cs:95,1.01447548637676) --(axis cs:96,1.00984659842715) --(axis cs:97,1.00549453098035) --(axis cs:98,1.02613308231717) --(axis cs:99,1.02483788081524) --(axis cs:100,1.02353285017504) --(axis cs:101,1.00171493249829) --(axis cs:102,1.02635684105902) --(axis cs:103,1.009981240584) --(axis cs:104,1.01571733642944) --(axis cs:105,1.02459685692184) --(axis cs:106,1.01077830937871) --(axis cs:107,1.00315775321063) --(axis cs:108,1.02298251838522) --(axis cs:109,1.02366783255305) --(axis cs:110,1.0041920676632) --(axis cs:111,1.02050297356143) --(axis cs:112,1.02637821769427) --(axis cs:113,1.01547518102054) --(axis cs:114,1.01107938026893) --(axis cs:115,1.02654962122229) --(axis cs:116,1.01965106954695) --(axis cs:117,1.02185393083107) --(axis cs:118,1.01731706958896) --(axis cs:119,1.01003294971694) --(axis cs:120,0.990751001525806) --(axis cs:121,1.00178152313979) --(axis cs:122,1.00695724791665) --(axis cs:123,1.01396418904549) --(axis cs:124,1.02447382096181) --(axis cs:125,1.01030058921407) --(axis cs:126,1.03463026481018) --(axis cs:127,1.01845870386969) --(axis cs:128,1.03455059051555) --(axis cs:129,0.994364347055138) --(axis cs:130,0.997980489038689) --(axis cs:131,1.0148216733256) --(axis cs:132,1.01749539470108) --(axis cs:133,1.02167867031159) --(axis cs:134,1.01540459186909) --(axis cs:135,1.01296309372652) --(axis cs:136,1.00545057552417) --(axis cs:137,1.02225491285545) --(axis cs:138,1.01131340959774) --(axis cs:139,1.00330798450298) --(axis cs:140,0.996755509165795) --(axis cs:141,1.02430573818655) --(axis cs:142,1.00358931752305) --(axis cs:143,1.00572189394855) --(axis cs:144,1.01422672471092) --(axis cs:145,0.997373526055508) --(axis cs:146,0.992909630982147) --(axis cs:147,1.01243909971312) --(axis cs:148,1.01246558027223) --(axis cs:149,1.01468940851654) --(axis cs:150,1.01682560395373) --(axis cs:151,1.01241744008798) --(axis cs:152,1.01384408174317) --(axis cs:153,1.00720351009124) --(axis cs:154,1.01771742510035) --(axis cs:155,1.04017939494198) --(axis cs:156,1.01214904971544) --(axis cs:157,1.02856905209406) --(axis cs:158,1.00141863158538) --(axis cs:159,1.01745801946387) --(axis cs:160,1.03222489318352) --(axis cs:161,1.02340203808214) --(axis cs:162,1.01013449721125) --(axis cs:163,1.01985010027126) --(axis cs:164,1.00305868766766) --(axis cs:165,1.01368836997257) --(axis cs:166,1.01059245127903) --(axis cs:167,1.02616020550247) --(axis 
cs:168,1.02231906451734) --(axis cs:169,1.00002698001641) --(axis cs:170,1.01128103697151) --(axis cs:171,1.00249172674462) --(axis cs:172,0.991280286211643) --(axis cs:173,1.00957783903426) --(axis cs:174,1.02011466297252) --(axis cs:175,0.990195731271286) --(axis cs:176,1.00160686827401) --(axis cs:177,1.01061785660658) --(axis cs:178,0.991904387849037) --(axis cs:179,0.996985988251596) --(axis cs:180,0.989370516670031) --(axis cs:181,1.00336990223117) --(axis cs:182,1.01339055106254) --(axis cs:183,1.01309096750501) --(axis cs:184,1.00363091579401) --(axis cs:185,1.00446879952175) --(axis cs:186,1.03804326237654) --(axis cs:187,1.00599282140827) --(axis cs:188,1.00818200776823) --(axis cs:189,1.00460572355989) --(axis cs:190,1.02459550234621) --(axis cs:191,0.989464938644376) --(axis cs:192,1.00989910992889) --(axis cs:193,1.01643087540962) --(axis cs:194,1.01631819402192) --(axis cs:195,1.00137344126029) --(axis cs:196,1.04239675556973) --(axis cs:197,0.994355986499033) --(axis cs:198,1.00146332776513) --(axis cs:199,1.00432471872449) --(axis cs:199,1.05550969003558) --(axis cs:199,1.05550969003558) --(axis cs:198,1.05013884508643) --(axis cs:197,1.0809259196289) --(axis cs:196,1.05900252307102) --(axis cs:195,1.05119394536691) --(axis cs:194,1.04753282869842) --(axis cs:193,1.06933977926872) --(axis cs:192,1.04711640444488) --(axis cs:191,1.11128453015903) --(axis cs:190,1.05946626332457) --(axis cs:189,1.0499620903134) --(axis cs:188,1.06153544714205) --(axis cs:187,1.04609419946576) --(axis cs:186,1.05816082774186) --(axis cs:185,1.05297152907628) --(axis cs:184,1.02762566455877) --(axis cs:183,1.0553742176604) --(axis cs:182,1.06225084259896) --(axis cs:181,1.0893448127728) --(axis cs:180,1.05188600646229) --(axis cs:179,1.04137911356267) --(axis cs:178,1.0814192911213) --(axis cs:177,1.05545565642442) --(axis cs:176,1.06270485066672) --(axis cs:175,1.08382305706642) --(axis cs:174,1.06407789913075) --(axis cs:173,1.03662615571672) --(axis cs:172,1.05722811947378) --(axis cs:171,1.06037200463966) --(axis cs:170,1.03911788499504) --(axis cs:169,1.05159779491645) --(axis cs:168,1.06094595394579) --(axis cs:167,1.06110185275558) --(axis cs:166,1.06569424610867) --(axis cs:165,1.04424281479611) --(axis cs:164,1.05068627216358) --(axis cs:163,1.05645357251927) --(axis cs:162,1.05489884324285) --(axis cs:161,1.07412799312208) --(axis cs:160,1.07646737137336) --(axis cs:159,1.05496314981713) --(axis cs:158,1.05120864578889) --(axis cs:157,1.07570135844366) --(axis cs:156,1.05008821301039) --(axis cs:155,1.07220897747929) --(axis cs:154,1.06465557409094) --(axis cs:153,1.03971607418305) --(axis cs:152,1.04675867857178) --(axis cs:151,1.05813204797965) --(axis cs:150,1.05119040060865) --(axis cs:149,1.05676034810577) --(axis cs:148,1.07077564401671) --(axis cs:147,1.04928322656557) --(axis cs:146,1.06903626039816) --(axis cs:145,1.04609876398355) --(axis cs:144,1.06485371390297) --(axis cs:143,1.05072775776959) --(axis cs:142,1.05236165789504) --(axis cs:141,1.05124863269358) --(axis cs:140,1.04782430669877) --(axis cs:139,1.05253796753101) --(axis cs:138,1.04850426217808) --(axis cs:137,1.05768559694069) --(axis cs:136,1.02878072959821) --(axis cs:135,1.04362894156706) --(axis cs:134,1.06006292312267) --(axis cs:133,1.0636701742166) --(axis cs:132,1.04303479099838) --(axis cs:131,1.0526886454305) --(axis cs:130,1.03726047108247) --(axis cs:129,1.09079547922355) --(axis cs:128,1.14788325309712) --(axis cs:127,1.03847007382501) --(axis cs:126,1.0718360762085) --(axis cs:125,1.09145111977031) 
--(axis cs:124,1.06766537543407) --(axis cs:123,1.07041615153068) --(axis cs:122,1.06513888531547) --(axis cs:121,1.04223378905503) --(axis cs:120,1.03850555689247) --(axis cs:119,1.05677238914261) --(axis cs:118,1.05498951572293) --(axis cs:117,1.05878147375572) --(axis cs:116,1.10378390321611) --(axis cs:115,1.05883465230677) --(axis cs:114,1.0626751449157) --(axis cs:113,1.10720758112546) --(axis cs:112,1.07953065300276) --(axis cs:111,1.05206100844845) --(axis cs:110,1.04709138723182) --(axis cs:109,1.04684026604925) --(axis cs:108,1.08375032692118) --(axis cs:107,1.04511666848126) --(axis cs:106,1.03917124124995) --(axis cs:105,1.04642350783952) --(axis cs:104,1.06512563537232) --(axis cs:103,1.04563962046946) --(axis cs:102,1.04850296881647) --(axis cs:101,1.03771778387133) --(axis cs:100,1.04225084123121) --(axis cs:99,1.05595631054523) --(axis cs:98,1.06528198917026) --(axis cs:97,1.07507120769763) --(axis cs:96,1.05275554009214) --(axis cs:95,1.06647258033466) --(axis cs:94,1.05316996055504) --(axis cs:93,1.03352133748705) --(axis cs:92,1.05261201051664) --(axis cs:91,1.05927704570153) --(axis cs:90,1.02786321462694) --(axis cs:89,1.06939774880169) --(axis cs:88,1.05719730491172) --(axis cs:87,1.09032051253652) --(axis cs:86,1.078433853521) --(axis cs:85,1.07210557183002) --(axis cs:84,1.05233377270551) --(axis cs:83,1.05588306541033) --(axis cs:82,1.05220432811413) --(axis cs:81,1.0471081885008) --(axis cs:80,1.10218690599682) --(axis cs:79,1.04221907731091) --(axis cs:78,1.06614251282422) --(axis cs:77,1.05184760613829) --(axis cs:76,1.07443647573915) --(axis cs:75,1.0434512959607) --(axis cs:74,1.08865071264457) --(axis cs:73,1.06813665649412) --(axis cs:72,1.05558125698978) --(axis cs:71,1.07749408063443) --(axis cs:70,1.16215674084542) --(axis cs:69,1.07807697915965) --(axis cs:68,1.08728276794361) --(axis cs:67,1.07097498664684) --(axis cs:66,1.11934516495508) --(axis cs:65,1.10294566332315) --(axis cs:64,1.0989565342057) --(axis cs:63,1.10437659089483) --(axis cs:62,1.09220095790276) --(axis cs:61,1.09988733563751) --(axis cs:60,1.07919144887285) --(axis cs:59,1.10140927451596) --(axis cs:58,1.10254148676551) --(axis cs:57,1.08439389904402) --(axis cs:56,1.08542026889385) --(axis cs:55,1.10597639748793) --(axis cs:54,1.09354987398401) --(axis cs:53,1.17168482571538) --(axis cs:52,1.12580179085564) --(axis cs:51,1.09968286526307) --(axis cs:50,1.07654957215502) --(axis cs:49,1.21519464883955) --(axis cs:48,1.12139691088275) --(axis cs:47,1.10520730547318) --(axis cs:46,1.16929582075297) --(axis cs:45,1.12064408911784) --(axis cs:44,1.15805059678033) --(axis cs:43,1.1343241807649) --(axis cs:42,1.26677577769701) --(axis cs:41,1.11657093837038) --(axis cs:40,1.12654190071561) --(axis cs:39,1.12951291159361) --(axis cs:38,1.13655047928547) --(axis cs:37,1.11527668196975) --(axis cs:36,1.1063226357577) --(axis cs:35,1.21132981954941) --(axis cs:34,1.26510633454997) --(axis cs:33,1.1584558022356) --(axis cs:32,1.20962107323201) --(axis cs:31,1.22578451526758) --(axis cs:30,1.14709283695517) --(axis cs:29,1.15704238022206) --(axis cs:28,1.21117974719273) --(axis cs:27,1.27324554926118) --(axis cs:26,1.19099398503131) --(axis cs:25,1.28366130110065) --(axis cs:24,1.27782311936811) --(axis cs:23,1.1799775327478) --(axis cs:22,1.26023327190069) --(axis cs:21,1.33875107297401) --(axis cs:20,1.22089144612717) --(axis cs:19,1.28342351038829) --(axis cs:18,1.23605665939315) --(axis cs:17,1.4909533308799) --(axis cs:16,1.42283944132921) --(axis cs:15,1.31263974443775) --(axis 
cs:14,1.33286400102134) --(axis cs:13,1.33089507304489) --(axis cs:12,1.29709180246978) --(axis cs:11,1.3794888704443) --(axis cs:10,1.33210840000221) --(axis cs:9,1.32924049533342) --(axis cs:8,1.52884563752424) --(axis cs:7,1.42666442156164) --(axis cs:6,1.66439384568961) --(axis cs:5,1.56630075440599) --(axis cs:4,1.55253236722534) --(axis cs:3,1.55558478815886) --(axis cs:2,1.93257231533742) --(axis cs:1,2.51166056379274) --(axis cs:0,2.34470746606768) --cycle; \addplot [semithick, color0] table { 0 1.52862341403961 1 1.05302863121033 2 0.78647358417511 3 0.686109495162964 4 0.572711968421936 5 0.522998070716858 6 0.551623117923737 7 0.466549342870712 8 0.430796164274216 9 0.422970408201218 10 0.373207706212997 11 0.39260225892067 12 0.368214666843414 13 0.357343488931656 14 0.338298565149307 15 0.320611304044723 16 0.306098341941833 17 0.294625896215439 18 0.29638689160347 19 0.295338281989098 20 0.301941913366318 21 0.260172808170319 22 0.272379666566849 23 0.293876093626022 24 0.260448750853539 25 0.260675871372223 26 0.257100573182106 27 0.255191135406494 28 0.271645346283913 29 0.253615838289261 30 0.245031267404556 31 0.261638510227203 32 0.237498295307159 33 0.232960012555122 34 0.240884163975716 35 0.241265904903412 36 0.217937791347504 37 0.219532063603401 38 0.240733709931374 39 0.230479699373245 40 0.210023134946823 41 0.221402192115784 42 0.226401802897453 43 0.207485178112984 44 0.215652909874916 45 0.207375690340996 46 0.218745741248131 47 0.213103845715523 48 0.20212182700634 49 0.212624773383141 50 0.208329838514328 51 0.224219876527786 52 0.217272999882698 53 0.207481253147125 54 0.202485051751137 55 0.204599997401237 56 0.224621167778969 57 0.19640619456768 58 0.192518556118011 59 0.218127653002739 60 0.198118817806244 61 0.200359463691711 62 0.19780877828598 63 0.206173938512802 64 0.196569359302521 65 0.221112111210823 66 0.197998061776161 67 0.192113506793976 68 0.194719207286835 69 0.196768173575401 70 0.192512941360474 71 0.187629777193069 72 0.188770428299904 73 0.202378636598587 74 0.198347002267838 75 0.197036638855934 76 0.199426338076591 77 0.238860553503037 78 0.187246045470238 79 0.205893510580063 80 0.188172215223312 81 0.197300186753273 82 0.190055102109909 83 0.187938749790192 84 0.1853112667799 85 0.200737830996513 86 0.198394387960434 87 0.196344190835953 88 0.191254326701164 89 0.190467751026154 90 0.187574392557144 91 0.180904188752174 92 0.184954178333282 93 0.18919438123703 94 0.176235374808311 95 0.184477666020393 96 0.180752137303352 97 0.189540952444077 98 0.19581992328167 99 0.189384436607361 100 0.182132703065872 101 0.195595952868462 102 0.188695731759071 103 0.192153435945511 104 0.182925575971603 105 0.188949298858643 106 0.187698462605476 107 0.180125123262405 108 0.186478158831596 109 0.179079839587212 110 0.181610199809074 111 0.177051618695259 112 0.179425606131554 113 0.197571417689323 114 0.17853564620018 115 0.176771438121796 116 0.183607229590416 117 0.175575771927834 118 0.198502558469772 119 0.173779591917992 120 0.18367226421833 121 0.175723075866699 122 0.175588640570641 123 0.177563714981079 124 0.177924197912216 125 0.18911999464035 126 0.18330283164978 127 0.178141495585442 128 0.18698742389679 129 0.183081436157227 130 0.176879188418388 131 0.176919049024582 132 0.18151253759861 133 0.173756012320518 134 0.169293972849846 135 0.179244694113731 136 0.186678591370583 137 0.172409161925316 138 0.18632595539093 139 0.179348596930504 140 0.181713408231735 141 0.175266647338867 142 0.170645040273666 143 0.17873643040657 144 
0.186547949910164 145 0.17839962542057 146 0.179719841480255 147 0.172801604866982 148 0.175332656502724 149 0.173917707800865 150 0.17897609770298 151 0.180710679292679 152 0.194173222780228 153 0.172392684221268 154 0.184832632541656 155 0.173669689893723 156 0.16989206969738 157 0.18436566889286 158 0.173353639245033 159 0.177148458361626 160 0.188516873121262 161 0.168342128396034 162 0.174507209658623 163 0.175811588764191 164 0.195446041226387 165 0.176687985658646 166 0.175495073199272 167 0.17472015619278 168 0.171505460143089 169 0.176536393165588 170 0.177832525968552 171 0.184609749913216 172 0.180923429131508 173 0.180553019046783 174 0.17156201004982 175 0.173562997579575 176 0.188018292188644 177 0.175519663095474 178 0.16821585893631 179 0.176112887263298 180 0.177432131767273 181 0.177423161268234 182 0.176121571660042 183 0.17736779153347 184 0.179107689857483 185 0.17585660815239 186 0.184903210401535 187 0.177498173713684 188 0.179360181093216 189 0.173208636045456 190 0.177676856517792 191 0.180387032032013 192 0.192163163423538 193 0.178925412893295 194 0.172062969207764 195 0.174393981695175 196 0.176242318749428 197 0.177987891435623 198 0.181019338965416 199 0.170371317863464 }; \addlegendentry{NBD\xspace} \addplot [semithick, color1] table { 0 2.60796970129013 1 2.35721164941788 2 2.24755436182022 3 2.16946405172348 4 2.14966630935669 5 2.13594847917557 6 2.07243168354034 7 2.12544471025467 8 2.08628863096237 9 2.0931881070137 10 2.0496654510498 11 2.02691060304642 12 2.03854525089264 13 2.02802118659019 14 2.00009247660637 15 2.00578352808952 16 1.97738954424858 17 1.98827680945396 18 1.97489756345749 19 2.03005033731461 20 1.97128373384476 21 1.96389684081078 22 1.9686570763588 23 1.95338156819344 24 1.96049210429192 25 2.07190400362015 26 1.9530194401741 27 1.96699044108391 28 1.96655809879303 29 1.9532273709774 30 1.95806092023849 31 1.93785688281059 32 1.96111252903938 33 1.93309783935547 34 1.93903276324272 35 1.96975719928741 36 1.9499853849411 37 1.94916099309921 38 2.00426787137985 39 1.95312377810478 40 1.92591542005539 41 1.95051980018616 42 1.93266287446022 43 1.91277372837067 44 1.92476764321327 45 1.93056523799896 46 1.93082970380783 47 1.93732610344887 48 1.91542065143585 49 1.92908078432083 50 1.92963737249374 51 1.93623131513596 52 1.92996749281883 53 1.9378787279129 54 1.92981031537056 55 1.91532522439957 56 1.90971055626869 57 1.92116716504097 58 1.93160453438759 59 1.9606591463089 60 1.90269809961319 61 1.90314480662346 62 1.91382431983948 63 1.91235354542732 64 1.91332638263702 65 1.90342977643013 66 1.91193398833275 67 1.9351678788662 68 1.91475665569305 69 1.90655362606049 70 1.92423564195633 71 1.9104046523571 72 1.91108506917953 73 1.92643323540688 74 1.9130831360817 75 1.89871537685394 76 1.89333593845367 77 1.90033948421478 78 1.91759288311005 79 1.90942925214767 80 1.90358534455299 81 1.911248087883 82 1.90987706184387 83 1.94101646542549 84 1.91182073950767 85 1.89553993940353 86 1.89309665560722 87 1.92173847556114 88 1.90011614561081 89 1.91141885519028 90 1.9069119989872 91 1.91221708059311 92 1.90173476934433 93 1.90253022313118 94 1.89792174100876 95 1.91135469079018 96 1.91777241230011 97 1.90275368094444 98 1.91149163246155 99 1.89585244655609 100 1.9059716463089 101 1.91741994023323 102 1.90837830305099 103 1.90527808666229 104 1.90929365158081 105 1.90841507911682 106 1.90516301989555 107 1.90158262848854 108 1.92047753930092 109 1.93734401464462 110 1.91505634784698 111 1.89953130483627 112 1.90704846382141 113 
1.89926621317863 114 1.91106131672859 115 1.90673246979713 116 1.9047926068306 117 1.89731502532959 118 1.89857265353203 119 1.90219843387604 120 1.90023681521416 121 1.89979764819145 122 1.89685559272766 123 1.91893392801285 124 1.89800730347633 125 1.90800240635872 126 1.89524695277214 127 1.89737489819527 128 1.91686126589775 129 1.90938898921013 130 1.9009111225605 131 1.89085057377815 132 1.90336492657661 133 1.9031035900116 134 1.89917927980423 135 1.9048143029213 136 1.88775199651718 137 1.89918532967567 138 1.89131197333336 139 1.89613589644432 140 1.91286754608154 141 1.88952749967575 142 1.90289729833603 143 1.89393815398216 144 1.89649996161461 145 1.89530661702156 146 1.89370957016945 147 1.90282183885574 148 1.89303353428841 149 1.89430099725723 150 1.89963960647583 151 1.90312594175339 152 1.89697378873825 153 1.90562385320663 154 1.8909053504467 155 1.89601412415504 156 1.89273229241371 157 1.89525410532951 158 1.90018212795258 159 1.90603309869766 160 1.90145054459572 161 1.8948580622673 162 1.90049639344215 163 1.90378129482269 164 1.90878143906593 165 1.89322564005852 166 1.90305402874947 167 1.88970866799355 168 1.89183866977692 169 1.90093746781349 170 1.89985018968582 171 1.89984330534935 172 1.89227718114853 173 1.88438659906387 174 1.90140813589096 175 1.89513197541237 176 1.89275854825974 177 1.89800372719765 178 1.89884263277054 179 1.90019348263741 180 1.89507061243057 181 1.90088328719139 182 1.89017078280449 183 1.9086389541626 184 1.89047566056252 185 1.89082360267639 186 1.89464345574379 187 1.89695692062378 188 1.89119896292686 189 1.89371326565742 190 1.90575680136681 191 1.89429467916489 192 1.9066935479641 193 1.90763172507286 194 1.88832846283913 195 1.89829206466675 196 1.89215564727783 197 1.88932421803474 198 1.89452201128006 199 1.88880488276482 }; \addlegendentry{Deepnorm} \addplot [semithick, color2] table { 0 2.41001069545746 1 2.11024659872055 2 2.00598794221878 3 2.07892429828644 4 1.95594057440758 5 1.90220189094543 6 1.85728347301483 7 1.89974388480186 8 1.86950302124023 9 1.82920664548874 10 1.81334981322289 11 1.81899416446686 12 1.80399790406227 13 1.82280901074409 14 1.79349389672279 15 1.81588193774223 16 1.7827630341053 17 1.76013225317001 18 1.76486113667488 19 1.76764270663261 20 1.76986441016197 21 1.75777414441109 22 1.77287611365318 23 1.77622947096825 24 1.78318092226982 25 1.77628070116043 26 1.76950401067734 27 1.76501423120499 28 1.74356383085251 29 1.74653127789497 30 1.76018780469894 31 1.75262615084648 32 1.74522021412849 33 1.75982043147087 34 1.75832310318947 35 1.75489220023155 36 1.77444195747375 37 1.73584371805191 38 1.73208257555962 39 1.75302320718765 40 1.73888775706291 41 1.74604493379593 42 1.75781720876694 43 1.77184748649597 44 1.73239296674728 45 1.72241932153702 46 1.7381135225296 47 1.73209357261658 48 1.72121155261993 49 1.72820529341698 50 1.76938128471375 51 1.72529295086861 52 1.72084933519363 53 1.72568789124489 54 1.72335001826286 55 1.7165874838829 56 1.72010824084282 57 1.71613276004791 58 1.70868334174156 59 1.72333642840385 60 1.70941978693008 61 1.71044537425041 62 1.71314358711243 63 1.7237956225872 64 1.7084187567234 65 1.72628119587898 66 1.75929546356201 67 1.71146032214165 68 1.71990594267845 69 1.71109291911125 70 1.71163007616997 71 1.70592346787453 72 1.71540370583534 73 1.70587405562401 74 1.70772430300713 75 1.70920503139496 76 1.70873767137527 77 1.7062474489212 78 1.70696181058884 79 1.70725113153458 80 1.71219825744629 81 1.6983140707016 82 1.70841631293297 83 1.71038356423378 84 
1.72150459885597 85 1.71223735809326 86 1.70062190294266 87 1.7116125524044 88 1.7069239616394 89 1.7052273452282 90 1.70754572749138 91 1.69979819655418 92 1.70929342508316 93 1.70929437875748 94 1.71310743689537 95 1.69843664765358 96 1.71891090273857 97 1.7115124464035 98 1.70250317454338 99 1.70427456498146 100 1.70010504126549 101 1.70833858847618 102 1.70607531070709 103 1.70345470309258 104 1.7007609307766 105 1.70754510164261 106 1.71863350272179 107 1.70891308784485 108 1.71413961052895 109 1.70460525155067 110 1.71785113215446 111 1.71118202805519 112 1.71871641278267 113 1.72011271119118 114 1.71089959144592 115 1.71146380901337 116 1.73735597729683 117 1.71991753578186 118 1.71086958050728 119 1.71863275766373 120 1.70061346888542 121 1.71566122770309 122 1.70793822407722 123 1.71478912234306 124 1.69689351320267 125 1.70076730847359 126 1.69828009605408 127 1.6971241235733 128 1.7172936797142 129 1.72042599320412 130 1.72281715273857 131 1.71084725856781 132 1.70439398288727 133 1.7087424993515 134 1.71217006444931 135 1.6993952691555 136 1.70604431629181 137 1.70089331269264 138 1.70055609941483 139 1.71158856153488 140 1.71047168970108 141 1.70340642333031 142 1.7050316631794 143 1.70466634631157 144 1.70184972882271 145 1.69399383664131 146 1.68944212794304 147 1.69958251714706 148 1.70284742116928 149 1.69864350557327 150 1.71885114908218 151 1.70565015077591 152 1.70002654194832 153 1.70637500286102 154 1.70234084129333 155 1.69821181893349 156 1.70474892854691 157 1.69376459717751 158 1.69620755314827 159 1.69564217329025 160 1.69607236981392 161 1.69984212517738 162 1.69715219736099 163 1.70028138160706 164 1.68731421232224 165 1.70387777686119 166 1.69971314072609 167 1.70203211903572 168 1.70632842183113 169 1.68773052096367 170 1.69615936279297 171 1.6926896572113 172 1.70339095592499 173 1.69548738002777 174 1.68970066308975 175 1.70147725939751 176 1.70558086037636 177 1.69612112641335 178 1.69491562247276 179 1.69823431968689 180 1.70043486356735 181 1.70097759366035 182 1.7030246257782 183 1.70529904961586 184 1.69770467281342 185 1.69945377111435 186 1.69643983244896 187 1.70043984055519 188 1.69968518614769 189 1.70168089866638 190 1.70832791924477 191 1.70389485359192 192 1.70445293188095 193 1.69968572258949 194 1.70510330796242 195 1.71096268296242 196 1.70458075404167 197 1.69590869545937 198 1.69664627313614 199 1.70464825630188 }; \addlegendentry{Widenorm} \addplot [semithick, color3] table { 0 10.0718774795532 1 10.0718774795532 2 10.0718774795532 3 10.0718774795532 4 10.0718774795532 5 10.0718774795532 6 10.0718774795532 7 10.0718774795532 8 10.0718774795532 9 10.0718774795532 10 10.0718774795532 11 10.0718774795532 12 10.0718774795532 13 10.0718774795532 14 10.0718774795532 15 10.0718774795532 16 10.0718774795532 17 10.0718774795532 18 10.0718774795532 19 10.0718774795532 20 10.0718774795532 21 10.0718774795532 22 10.0718774795532 23 10.0718774795532 24 10.0718774795532 25 10.0718774795532 26 10.0718774795532 27 10.0718774795532 28 10.0718774795532 29 10.0718774795532 30 10.0718774795532 31 10.0718774795532 32 10.0718774795532 33 10.0718774795532 34 10.0718774795532 35 10.0718774795532 36 10.0718774795532 37 10.0718774795532 38 10.0718774795532 39 10.0718774795532 40 10.0718774795532 41 10.0718774795532 42 10.0718774795532 43 10.0718774795532 44 10.0718774795532 45 10.0718774795532 46 10.0718774795532 47 10.0718774795532 48 10.0718774795532 49 10.0718774795532 50 10.0718774795532 51 10.0718774795532 52 10.0718774795532 53 10.0718774795532 54 
10.0718774795532 55 10.0718774795532 56 10.0718774795532 57 10.0718774795532 58 10.0718774795532 59 10.0718774795532 60 10.0718774795532 61 10.0718774795532 62 10.0718774795532 63 10.0718774795532 64 10.0718774795532 65 10.0718774795532 66 10.0718774795532 67 10.0718774795532 68 10.0718774795532 69 10.0718774795532 70 10.0718774795532 71 10.0718774795532 72 10.0718774795532 73 10.0718774795532 74 10.0718774795532 75 10.0718774795532 76 10.0718774795532 77 10.0718774795532 78 10.0718774795532 79 10.0718774795532 80 10.0718774795532 81 10.0718774795532 82 10.0718774795532 83 10.0718774795532 84 10.0718774795532 85 10.0718774795532 86 10.0718774795532 87 10.0718774795532 88 10.0718774795532 89 10.0718774795532 90 10.0718774795532 91 10.0718774795532 92 10.0718774795532 93 10.0718774795532 94 10.0718774795532 95 10.0718774795532 96 10.0718774795532 97 10.0718774795532 98 10.0718774795532 99 10.0718774795532 100 10.0718774795532 101 10.0718774795532 102 10.0718774795532 103 10.0718774795532 104 10.0718774795532 105 10.0718774795532 106 10.0718774795532 107 10.0718774795532 108 10.0718774795532 109 10.0718774795532 110 10.0718774795532 111 10.0718774795532 112 10.0718774795532 113 10.0718774795532 114 10.0718774795532 115 10.0718774795532 116 10.0718774795532 117 10.0718774795532 118 10.0718774795532 119 10.0718774795532 120 10.0718774795532 121 10.0718774795532 122 10.0718774795532 123 10.0718774795532 124 10.0718774795532 125 10.0718774795532 126 10.0718774795532 127 10.0718774795532 128 10.0718774795532 129 10.0718774795532 130 10.0718774795532 131 10.0718774795532 132 10.0718774795532 133 10.0718774795532 134 10.0718774795532 135 10.0718774795532 136 10.0718774795532 137 10.0718774795532 138 10.0718774795532 139 10.0718774795532 140 10.0718774795532 141 10.0718774795532 142 10.0718774795532 143 10.0718774795532 144 10.0718774795532 145 10.0718774795532 146 10.0718774795532 147 10.0718774795532 148 10.0718774795532 149 10.0718774795532 150 10.0718774795532 151 10.0718774795532 152 10.0718774795532 153 10.0718774795532 154 10.0718774795532 155 10.0718774795532 156 10.0718774795532 157 10.0718774795532 158 10.0718774795532 159 10.0718774795532 160 10.0718774795532 161 10.0718774795532 162 10.0718774795532 163 10.0718774795532 164 10.0718774795532 165 10.0718774795532 166 10.0718774795532 167 10.0718774795532 168 10.0718774795532 169 10.0718774795532 170 10.0718774795532 171 10.0718774795532 172 10.0718774795532 173 10.0718774795532 174 10.0718774795532 175 10.0718774795532 176 10.0718774795532 177 10.0718774795532 178 10.0718774795532 179 10.0718774795532 180 10.0718774795532 181 10.0718774795532 182 10.0718774795532 183 10.0718774795532 184 10.0718774795532 185 10.0718774795532 186 10.0718774795532 187 10.0718774795532 188 10.0718774795532 189 10.0718774795532 190 10.0718774795532 191 10.0718774795532 192 10.0718774795532 193 10.0718774795532 194 10.0718774795532 195 10.0718774795532 196 10.0718774795532 197 10.0718774795532 198 10.0718774795532 199 10.0718774795532 }; \addlegendentry{Deep-div} \addplot [semithick, color4] table { 0 2.15960700511932 1 2.09507277011871 2 1.72855310440063 3 1.53111889362335 4 1.49124503135681 5 1.46955440044403 6 1.4751916885376 7 1.39232869148254 8 1.40951058864594 9 1.32222964763641 10 1.29846298694611 11 1.30651209354401 12 1.27400629520416 13 1.28617238998413 14 1.28556592464447 15 1.24993255138397 16 1.32105116844177 17 1.34109590053558 18 1.20068213939667 19 1.21666324138641 20 1.19370474815369 21 1.24600732326508 22 1.19283866882324 23 1.15467309951782 24 
1.21707909107208 25 1.19945192337036 26 1.13878831863403 27 1.17545318603516 28 1.17413897514343 29 1.13604974746704 30 1.1254789352417 31 1.16392545700073 32 1.14629616737366 33 1.12455348968506 34 1.18922121524811 35 1.15770146846771 36 1.09338433742523 37 1.09919147491455 38 1.121653175354 39 1.10139465332031 40 1.10334255695343 41 1.09319705963135 42 1.16314146518707 43 1.09918417930603 44 1.11657552719116 45 1.08471789360046 46 1.11795914173126 47 1.07405326366425 48 1.08882853984833 49 1.13875477313995 50 1.06590168476105 51 1.07460956573486 52 1.09764175415039 53 1.11903855800629 54 1.07646980285645 55 1.09068839550018 56 1.06230056285858 57 1.07262308597565 58 1.07791662216187 59 1.08181109428406 60 1.06177995204926 61 1.08450071811676 62 1.06344132423401 63 1.08149034976959 64 1.07058091163635 65 1.06771447658539 66 1.09406673908234 67 1.05224273204803 68 1.05720930099487 69 1.05011720657349 70 1.07753744125366 71 1.05245370864868 72 1.0454291343689 73 1.04944174289703 74 1.06429615020752 75 1.0337899684906 76 1.05560133457184 77 1.03777425289154 78 1.04122557640076 79 1.02822349071503 80 1.06181900501251 81 1.03448388576508 82 1.03002333641052 83 1.03937425613403 84 1.04020776748657 85 1.05183951854706 86 1.0633772611618 87 1.05852160453796 88 1.04105727672577 89 1.04763970375061 90 1.01904673576355 91 1.04401404857635 92 1.03382296562195 93 1.02292790412903 94 1.0402455329895 95 1.04047403335571 96 1.03130106925964 97 1.04028286933899 98 1.04570753574371 99 1.04039709568024 100 1.03289184570313 101 1.01971635818481 102 1.03742990493774 103 1.02781043052673 104 1.04042148590088 105 1.03551018238068 106 1.02497477531433 107 1.02413721084595 108 1.0533664226532 109 1.03525404930115 110 1.02564172744751 111 1.03628199100494 112 1.05295443534851 113 1.061341381073 114 1.03687726259232 115 1.04269213676453 116 1.06171748638153 117 1.0403177022934 118 1.03615329265594 119 1.03340266942978 120 1.01462827920914 121 1.02200765609741 122 1.03604806661606 123 1.04219017028809 124 1.04606959819794 125 1.05087585449219 126 1.05323317050934 127 1.02846438884735 128 1.09121692180634 129 1.04257991313934 130 1.01762048006058 131 1.03375515937805 132 1.03026509284973 133 1.0426744222641 134 1.03773375749588 135 1.02829601764679 136 1.01711565256119 137 1.03997025489807 138 1.02990883588791 139 1.027922976017 140 1.02228990793228 141 1.03777718544006 142 1.02797548770905 143 1.02822482585907 144 1.03954021930695 145 1.02173614501953 146 1.03097294569016 147 1.03086116313934 148 1.04162061214447 149 1.03572487831116 150 1.03400800228119 151 1.03527474403381 152 1.03030138015747 153 1.02345979213715 154 1.04118649959564 155 1.05619418621063 156 1.03111863136292 157 1.05213520526886 158 1.02631363868713 159 1.0362105846405 160 1.05434613227844 161 1.04876501560211 162 1.03251667022705 163 1.03815183639526 164 1.02687247991562 165 1.02896559238434 166 1.03814334869385 167 1.04363102912903 168 1.04163250923157 169 1.02581238746643 170 1.02519946098328 171 1.03143186569214 172 1.02425420284271 173 1.02310199737549 174 1.04209628105164 175 1.03700939416885 176 1.03215585947037 177 1.0330367565155 178 1.03666183948517 179 1.01918255090714 180 1.02062826156616 181 1.04635735750198 182 1.03782069683075 183 1.0342325925827 184 1.01562829017639 185 1.02872016429901 186 1.0481020450592 187 1.02604351043701 188 1.03485872745514 189 1.02728390693665 190 1.04203088283539 191 1.0503747344017 192 1.02850775718689 193 1.04288532733917 194 1.03192551136017 195 1.0262836933136 196 1.05069963932037 197 1.03764095306396 
198 1.02580108642578 199 1.02991720438004 }; \addlegendentry{Mahalanobis} \end{axis} \end{tikzpicture}
\label{fig:bregMNIST_asym}
\end{subfigure}
\begin{subfigure}[b]{0.49\columnwidth}
\centering
% [pgfplots source omitted: asymmetric BregCIFAR training curves, plotting test MSE (y-axis) against training epoch (x-axis) as mean curves with shaded variability bands for the compared methods over 200 epochs]
4.64256448745728 89 4.65417881011963 90 4.61549940109253 91 4.65880737304687 92 4.6573148727417 93 4.6052098274231 94 4.65392770767212 95 4.61338186264038 96 4.64654159545898 97 4.6395884513855 98 4.66326723098755 99 4.68155832290649 100 4.62521514892578 101 4.61792020797729 102 4.61790904998779 103 4.61039686203003 104 4.61997661590576 105 4.60427331924438 106 4.593785572052 107 4.61135778427124 108 4.60731296539307 109 4.63235168457031 110 4.61491117477417 111 4.6442234992981 112 4.60113353729248 113 4.62548980712891 114 4.61443214416504 115 4.61105070114136 116 4.62700176239014 117 4.61696081161499 118 4.57141580581665 119 4.61168260574341 120 4.5914134979248 121 4.63226051330566 122 4.5890097618103 123 4.60071601867676 124 4.6528133392334 125 4.62047996520996 126 4.67233085632324 127 4.64587907791138 128 4.63440198898315 129 4.61071109771729 130 4.60993394851685 131 4.61423711776733 132 4.59870557785034 133 4.59251918792725 134 4.6080135345459 135 4.60416393280029 136 4.62942895889282 137 4.60658006668091 138 4.62545251846313 139 4.58328638076782 140 4.63276376724243 141 4.62618808746338 142 4.62785978317261 143 4.64211692810059 144 4.62185745239258 145 4.63524198532104 146 4.6358528137207 147 4.67622594833374 148 4.65707597732544 149 4.63833465576172 150 4.6226749420166 151 4.59269495010376 152 4.61576881408691 153 4.59364433288574 154 4.60589923858643 155 4.6409294128418 156 4.63019437789917 157 4.62065210342407 158 4.6354733467102 159 4.59121379852295 160 4.61764936447144 161 4.61556644439697 162 4.6379225730896 163 4.58135137557983 164 4.6362361907959 165 4.65501403808594 166 4.60238590240479 167 4.64940853118897 168 4.59768266677856 169 4.62281141281128 170 4.58719387054443 171 4.62363042831421 172 4.61919469833374 173 4.65809202194214 174 4.64177856445312 175 4.66320571899414 176 4.66847267150879 177 4.6410493850708 178 4.61489009857178 179 4.61974563598633 180 4.62750520706177 181 4.60481023788452 182 4.62867574691772 183 4.66735544204712 184 4.64452486038208 185 4.59888648986816 186 4.59989252090454 187 4.59608917236328 188 4.56191339492798 189 4.61767072677612 190 4.64201927185059 191 4.64216871261597 192 4.60158386230469 193 4.59909896850586 194 4.59934959411621 195 4.60479221343994 196 4.60677309036255 197 4.65556869506836 198 4.61230373382568 199 4.63667049407959 }; \end{axis} \end{tikzpicture} \label{fig:bregCIFAR_asym} \end{subfigure} \caption{MSE (y-axis) after epochs of training (x-axis) on asymmetric BregMNIST (left) and BregCIFAR (right). NBD\xspace performs best in both tasks. Symmetric case in Appendix.} \label{fig:bregmnist_all} \end{figure} \section{Non-Bregman Learning} \label{sec:experiments_non_breg} We have shown that our NBD\xspace method is the most effective among all available options when the underlying ground-truth is from the class of Bregman divergences. In this section we will now explore the effectiveness of our approach on tasks that are known to be non-Euclidean, but not necessarily representable by a Bregman divergence. The purpose of these experiments is to show that NBD\xspace does not depend on the underlying representation being a proper divergence in order to still be reasonably effective, and that it is still more effective then the prior Deep-div approach to Bregman learning. 
This is also of practical relevance: just as the Euclidean metric has long been used for its convenient properties and simplicity, without any belief that the underlying system is truly Euclidean, NBD\xspace may be valuable for building more flexible methods that inherit the mathematical convenience of Bregman divergences. These tasks probe the efficacy of the closest Bregman approximation to the underlying divergence, so we do not expect our method to surpass the state of the art when a task is sufficiently non-Bregman.

\subsection{Approximate Semantic Distance}

The first task evaluates learning symmetric distances that do not follow the triangle inequality. We group the CIFAR10 classes into two categories: man-made and natural. Within each category we select an arbitrary exemplar class (\textit{car} and \textit{deer} in our experiment). We then assign proxy distances between classes to reflect semantic similarity: 0.5 within the same class, 2 between any non-exemplar class and its category exemplar, and 8 between non-exemplar classes within a category. Pairs from different categories are not compared. Note that the triangle inequality is violated by construction: two non-exemplar classes are at distance 8, yet each is within distance 2 of the shared exemplar. Beyond this, the distance values do not reflect any known divergence and can be changed arbitrarily.

\begin{wraptable}[12]{r}{0.35\columnwidth}
\vspace{-12pt}
\centering
\begin{tabular}{ccc}
\toprule
Method & Seen & Unseen \\
\midrule
NBD\xspace & \textbf{0.04} & \textbf{3.52} \\
Deepnorm & 1.23 & 4.18 \\
Mahalanobis & 2.00 & 4.56 \\
Deep-div & \textit{0.10} & \textit{4.13} \\
Widenorm & 1.39 & 4.50 \\
\bottomrule
\end{tabular}
\caption{MSE for CIFAR10 category semantic distance after 200 epochs. Our NBD\xspace performs best on both training images and previously unseen images.}
\label{tab:semantic}
\end{wraptable}

As in BregCIFAR, we present pairs of images to the model, which simultaneously adjusts a neural embedding and learns a divergence function such that inter-class distances in the embedding space match the target values. This task is harder than the previous ones because it is not sufficient to learn a separable embedding for each class; the embeddings must additionally be arranged appropriately in a non-Euclidean space. The results in Table \ref{tab:semantic} indicate that our method effectively learns distances that violate the triangle inequality. Interestingly, the Deep-div approach performs second-best here, owing to the small space of valid outputs. The remaining approaches are constrained to obey the triangle inequality and do not perform as well.

\subsection{Overlap distance}

The overlap distance task presents pairs of crops taken either from the same image or from different images. A horizontal and a vertical cut are chosen uniformly at random from each image. When the crops come from the same image, the asymmetric divergence between images $X$ and $Y$ is the fractional intersection area, $D(X, Y) = 1 - \frac{|X \cap Y|}{|X|}$; otherwise the divergence is 1. We use the INRIA Holidays dataset (see \cref{sec:overlap_details}). The results can be found in \cref{fig:overlap_result}, where we see NBD\xspace performs second best of all options.
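To make the target concrete, the following is a minimal sketch of this overlap divergence for axis-aligned crops; the rectangle representation and helper names are illustrative choices of ours, not the experimental code.

\begin{verbatim}
# Illustrative sketch; crops are rectangles r = (left, top, right, bottom).

def area(r):
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def overlap_divergence(x, y, same_image=True):
    # D(x, y) = 1 - |x intersect y| / |x|; asymmetric because it
    # normalizes by the area of the first argument only.
    if not same_image:
        return 1.0
    inter = (max(x[0], y[0]), max(x[1], y[1]),
             min(x[2], y[2]), min(x[3], y[3]))
    return 1.0 - area(inter) / area(x)

x = (0, 0, 100, 100)   # full image
y = (25, 25, 75, 75)   # interior crop
print(overlap_divergence(x, y))  # 0.75: y covers 1/4 of x
print(overlap_divergence(y, x))  # 0.0:  x covers all of y
\end{verbatim}

The asymmetry mirrors the task definition: a small crop contained in a larger one is fully explained by it, but not vice versa.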
\begin{wrapfigure}[10]{r}{0.5\columnwidth} \vspace{-30pt} \centering \begin{tikzpicture} \definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177} \definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137} \definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843} \definecolor{color3}{rgb}{0.83921568627451,0.152941176470588,0.156862745098039} \definecolor{color4}{rgb}{0.580392156862745,0.403921568627451,0.741176470588235} \begin{axis}[ legend cell align={left}, legend style={ fill opacity=0.8, draw opacity=1, text opacity=1, anchor=east, draw=white!80!black, font=\tiny }, legend pos=north east, tick align=outside, tick pos=left, x grid style={white!69.0196078431373!black}, xmin=-4.95, xmax=103.95, xtick style={color=black}, y grid style={white!69.0196078431373!black}, ylabel={test MSE}, ymin=0.0111467210300196, ymax=0.33905141709415, ytick style={color=black}, width=0.5\columnwidth, height=0.3\columnwidth ] \path [fill=color0, fill opacity=0.2] (axis cs:0,0.150898320255907) --(axis cs:0,0.141706901015608) --(axis cs:1,0.102128486417063) --(axis cs:2,0.0919652577313698) --(axis cs:3,0.0849643446169995) --(axis cs:4,0.08038478293186) --(axis cs:5,0.0764351988058963) --(axis cs:6,0.0753827791807845) --(axis cs:7,0.0697698448913043) --(axis cs:8,0.0694571150232113) --(axis cs:9,0.068946543009314) --(axis cs:10,0.0664312240392604) --(axis cs:11,0.064659207026647) --(axis cs:12,0.0642879507646222) --(axis cs:13,0.0658392676026987) --(axis cs:14,0.0637617642647421) --(axis cs:15,0.0629840383808615) --(axis cs:16,0.0614544566989358) --(axis cs:17,0.0637546049591687) --(axis cs:18,0.0628498572766751) --(axis cs:19,0.067164845862317) --(axis cs:20,0.0584077751276454) --(axis cs:21,0.0568132138885669) --(axis cs:22,0.0565814113438763) --(axis cs:23,0.0546624587282398) --(axis cs:24,0.0551456792466251) --(axis cs:25,0.0549314797086247) --(axis cs:26,0.0528922190459373) --(axis cs:27,0.052899771682235) --(axis cs:28,0.0520032667670705) --(axis cs:29,0.052481928180369) --(axis cs:30,0.0507621992367222) --(axis cs:31,0.0522886775517149) --(axis cs:32,0.0536423821197827) --(axis cs:33,0.0507115314007318) --(axis cs:34,0.0521812752233475) --(axis cs:35,0.0543494385010308) --(axis cs:36,0.0509055953311587) --(axis cs:37,0.0508051852323719) --(axis cs:38,0.0509630634424079) --(axis cs:39,0.0497055490750213) --(axis cs:40,0.0491616599021456) --(axis cs:41,0.0499151228798911) --(axis cs:42,0.0482053105982854) --(axis cs:43,0.0483554423147415) --(axis cs:44,0.04713309627063) --(axis cs:45,0.0473447751662536) --(axis cs:46,0.0471373425330212) --(axis cs:47,0.0480761679162803) --(axis cs:48,0.0483688755759891) --(axis cs:49,0.047782622037202) --(axis cs:50,0.0476876471492682) --(axis cs:51,0.0478762006115475) --(axis cs:52,0.0486499248039578) --(axis cs:53,0.0477674975239106) --(axis cs:54,0.0485972961048862) --(axis cs:55,0.048497121877057) --(axis cs:56,0.0465829713166006) --(axis cs:57,0.0460921983058913) --(axis cs:58,0.048214104738156) --(axis cs:59,0.0450546120717396) --(axis cs:60,0.0449831306163299) --(axis cs:61,0.0469026614493883) --(axis cs:62,0.0440774670286426) --(axis cs:63,0.0454288547913342) --(axis cs:64,0.0460469901434154) --(axis cs:65,0.0451054404322276) --(axis cs:66,0.046010305936653) --(axis cs:67,0.0473586244867184) --(axis cs:68,0.0462869179346191) --(axis cs:69,0.0446964668895914) --(axis cs:70,0.0467920552740038) --(axis cs:71,0.0459879526871562) --(axis cs:72,0.0444993063401301) --(axis cs:73,0.0464375145368942) --(axis 
cs:74,0.0460412244019266) --(axis cs:75,0.0460643219331309) --(axis cs:76,0.0458512054634423) --(axis cs:77,0.0455874564036122) --(axis cs:78,0.0452927790213227) --(axis cs:79,0.0457644474201359) --(axis cs:80,0.0435026537491492) --(axis cs:81,0.0434480468523394) --(axis cs:82,0.0443764888017254) --(axis cs:83,0.0444154926974671) --(axis cs:84,0.04452263955819) --(axis cs:85,0.043582213831013) --(axis cs:86,0.0436515572469542) --(axis cs:87,0.0456454650171679) --(axis cs:88,0.0440201491390506) --(axis cs:89,0.0444117361372008) --(axis cs:90,0.0451561101930363) --(axis cs:91,0.0446944702030349) --(axis cs:92,0.0453056448292519) --(axis cs:93,0.0447175469484529) --(axis cs:94,0.0441550569812678) --(axis cs:95,0.043452348416834) --(axis cs:96,0.0450904818705429) --(axis cs:97,0.0444170721159671) --(axis cs:98,0.044445133198263) --(axis cs:99,0.043633379517251) --(axis cs:99,0.0474629144089881) --(axis cs:99,0.0474629144089881) --(axis cs:98,0.0467792302479727) --(axis cs:97,0.0481132410182264) --(axis cs:96,0.0499163893529068) --(axis cs:95,0.0460912385525416) --(axis cs:94,0.0474389953334905) --(axis cs:93,0.0468889850888052) --(axis cs:92,0.0495146102118705) --(axis cs:91,0.0482842963575196) --(axis cs:90,0.0500108889086025) --(axis cs:89,0.0477862363988863) --(axis cs:88,0.0493553012336453) --(axis cs:87,0.0481446965924817) --(axis cs:86,0.0476265189249208) --(axis cs:85,0.0481501853346174) --(axis cs:84,0.0481093993331255) --(axis cs:83,0.0473948186795814) --(axis cs:82,0.0485451929004116) --(axis cs:81,0.046649471412908) --(axis cs:80,0.047393125681717) --(axis cs:79,0.0500957537002406) --(axis cs:78,0.0478914298486113) --(axis cs:77,0.0474067865029583) --(axis cs:76,0.0500101266550689) --(axis cs:75,0.051317342879434) --(axis cs:74,0.0503129636230711) --(axis cs:73,0.0487644188470475) --(axis cs:72,0.0504032627154271) --(axis cs:71,0.0498062005263448) --(axis cs:70,0.049794254099375) --(axis cs:69,0.0523909521434591) --(axis cs:68,0.0491670358083619) --(axis cs:67,0.0514547811462543) --(axis cs:66,0.048670523409527) --(axis cs:65,0.0503905286248556) --(axis cs:64,0.0506791651376515) --(axis cs:63,0.0486235046704502) --(axis cs:62,0.0489669167952291) --(axis cs:61,0.0502517472439253) --(axis cs:60,0.0483858303006661) --(axis cs:59,0.0496220449493061) --(axis cs:58,0.053055588934501) --(axis cs:57,0.0510502551857965) --(axis cs:56,0.0522356109797709) --(axis cs:55,0.0511713362270767) --(axis cs:54,0.0544682467718342) --(axis cs:53,0.0514959499992065) --(axis cs:52,0.0517412619698192) --(axis cs:51,0.0511342564869365) --(axis cs:50,0.0513591747429933) --(axis cs:49,0.052870054783553) --(axis cs:48,0.0517279342880551) --(axis cs:47,0.0510570792207894) --(axis cs:46,0.0518267004285762) --(axis cs:45,0.0506093892791466) --(axis cs:44,0.0522712968658713) --(axis cs:43,0.0517372891014839) --(axis cs:42,0.0537026161995814) --(axis cs:41,0.0523829520808175) --(axis cs:40,0.0540962048115232) --(axis cs:39,0.0554407606583695) --(axis cs:38,0.056192240214647) --(axis cs:37,0.057015775646954) --(axis cs:36,0.0564742926669454) --(axis cs:35,0.0572887245291167) --(axis cs:34,0.0595358192576438) --(axis cs:33,0.0587520098089659) --(axis cs:32,0.0594478543652217) --(axis cs:31,0.0596019111490565) --(axis cs:30,0.0584360566002415) --(axis cs:29,0.056898826648084) --(axis cs:28,0.057452345534756) --(axis cs:27,0.0584369930110252) --(axis cs:26,0.0573780367581246) --(axis cs:25,0.0603544562893383) --(axis cs:24,0.0626515777595432) --(axis cs:23,0.0620420812264225) --(axis cs:22,0.0636279890119396) --(axis 
cs:21,0.0619225525222607) --(axis cs:20,0.0660487288596669) --(axis cs:19,0.071844182810855) --(axis cs:18,0.0702033873736889) --(axis cs:17,0.0652945620778415) --(axis cs:16,0.0656361479043548) --(axis cs:15,0.0687877391893861) --(axis cs:14,0.072339770697912) --(axis cs:13,0.070111521252854) --(axis cs:12,0.0678093620434146) --(axis cs:11,0.0710560123518243) --(axis cs:10,0.0714816036909185) --(axis cs:9,0.0734877381014495) --(axis cs:8,0.0758534597467624) --(axis cs:7,0.0792484278662259) --(axis cs:6,0.0866311896445558) --(axis cs:5,0.0931423729437909) --(axis cs:4,0.0971366067909641) --(axis cs:3,0.101848169449697) --(axis cs:2,0.109064433368751) --(axis cs:1,0.127131394960157) --(axis cs:0,0.150898320255907) --cycle; \path [fill=color1, fill opacity=0.2] (axis cs:0,0.102661433928556) --(axis cs:0,0.0911139218873311) --(axis cs:1,0.0862758113113036) --(axis cs:2,0.0852700701778423) --(axis cs:3,0.0803735808041757) --(axis cs:4,0.0777237287059984) --(axis cs:5,0.0726481565735923) --(axis cs:6,0.0721505813418634) --(axis cs:7,0.0718736865980132) --(axis cs:8,0.0723696925771633) --(axis cs:9,0.0702215519276458) --(axis cs:10,0.0701183931863569) --(axis cs:11,0.0697226908046122) --(axis cs:12,0.0685789307100616) --(axis cs:13,0.0665509258463306) --(axis cs:14,0.066724906697394) --(axis cs:15,0.0685884505295594) --(axis cs:16,0.067762526937609) --(axis cs:17,0.0660324026986444) --(axis cs:18,0.0662528983734325) --(axis cs:19,0.0644497028177395) --(axis cs:20,0.0640669573021265) --(axis cs:21,0.0641050751583249) --(axis cs:22,0.0626242902296623) --(axis cs:23,0.0641195890912011) --(axis cs:24,0.0651338902617783) --(axis cs:25,0.0654513529514878) --(axis cs:26,0.0626677677243215) --(axis cs:27,0.0633784546142309) --(axis cs:28,0.062397287045396) --(axis cs:29,0.0613687666535599) --(axis cs:30,0.0613525120718258) --(axis cs:31,0.0604335813112468) --(axis cs:32,0.0598206588122579) --(axis cs:33,0.0596796658169371) --(axis cs:34,0.0602314227546502) --(axis cs:35,0.0611752001799848) --(axis cs:36,0.0610939508486163) --(axis cs:37,0.0609504276587105) --(axis cs:38,0.0601110076994103) --(axis cs:39,0.0610017728728552) --(axis cs:40,0.0609113697407292) --(axis cs:41,0.0605652311654695) --(axis cs:42,0.0599258993284331) --(axis cs:43,0.059802895595807) --(axis cs:44,0.0593798371203181) --(axis cs:45,0.0582675389895842) --(axis cs:46,0.0585763061986744) --(axis cs:47,0.0592918939805865) --(axis cs:48,0.0606194163929973) --(axis cs:49,0.0588095847288852) --(axis cs:50,0.0602895144849658) --(axis cs:51,0.0588676647624653) --(axis cs:52,0.0585217899351908) --(axis cs:53,0.0597675501855988) --(axis cs:54,0.0590281623140378) --(axis cs:55,0.0594631958901927) --(axis cs:56,0.0587917589989266) --(axis cs:57,0.0595361215411162) --(axis cs:58,0.0582744719935513) --(axis cs:59,0.0591384882725145) --(axis cs:60,0.0581568455412296) --(axis cs:61,0.0582826619422759) --(axis cs:62,0.0581450964653573) --(axis cs:63,0.0590024105928342) --(axis cs:64,0.0585837071906155) --(axis cs:65,0.0579854432716185) --(axis cs:66,0.0577972523604818) --(axis cs:67,0.0578769564712617) --(axis cs:68,0.0585505719795676) --(axis cs:69,0.0583562482986875) --(axis cs:70,0.0588794262928534) --(axis cs:71,0.0584849648375729) --(axis cs:72,0.0576557676574277) --(axis cs:73,0.0586314075057862) --(axis cs:74,0.0583073116961865) --(axis cs:75,0.0585219368677606) --(axis cs:76,0.057834369556142) --(axis cs:77,0.0587736676406716) --(axis cs:78,0.0585079360345991) --(axis cs:79,0.0587762888900732) --(axis cs:80,0.05844061712198) --(axis 
cs:81,0.0578915486263411) --(axis cs:82,0.0586916949547591) --(axis cs:83,0.058062718714249) --(axis cs:84,0.0586311810598624) --(axis cs:85,0.0587508254693634) --(axis cs:86,0.0585137386659623) --(axis cs:87,0.0586754406301846) --(axis cs:88,0.0586501214292983) --(axis cs:89,0.0591283948248992) --(axis cs:90,0.0595306554187211) --(axis cs:91,0.0595136366630219) --(axis cs:92,0.0592434415876664) --(axis cs:93,0.0589142649037081) --(axis cs:94,0.0597239745091463) --(axis cs:95,0.059380966119571) --(axis cs:96,0.0586977292927) --(axis cs:97,0.0586406604508286) --(axis cs:98,0.0585424476439997) --(axis cs:99,0.0585098143968998) --(axis cs:99,0.0593596268143238) --(axis cs:99,0.0593596268143238) --(axis cs:98,0.0600108197753385) --(axis cs:97,0.0599218024750824) --(axis cs:96,0.0600494141546991) --(axis cs:95,0.0607699382378623) --(axis cs:94,0.061126324789617) --(axis cs:93,0.0599457801717085) --(axis cs:92,0.0610945200264655) --(axis cs:91,0.0607313566064216) --(axis cs:90,0.06028676244131) --(axis cs:89,0.0604300229721894) --(axis cs:88,0.0598649975272676) --(axis cs:87,0.0590976094396244) --(axis cs:86,0.060364888157463) --(axis cs:85,0.0603018022133224) --(axis cs:84,0.0606579384817827) --(axis cs:83,0.0599612222065807) --(axis cs:82,0.0594063092075524) --(axis cs:81,0.0601401274872644) --(axis cs:80,0.0614567558986689) --(axis cs:79,0.060403031445315) --(axis cs:78,0.0599971544404833) --(axis cs:77,0.0597919126081611) --(axis cs:76,0.0594658070643414) --(axis cs:75,0.0604584529656897) --(axis cs:74,0.0606183074291798) --(axis cs:73,0.0620274625355842) --(axis cs:72,0.0617401887396289) --(axis cs:71,0.0616176150541087) --(axis cs:70,0.0620574919658136) --(axis cs:69,0.0613521765079073) --(axis cs:68,0.0617398236379174) --(axis cs:67,0.0605308875357535) --(axis cs:66,0.060999900874) --(axis cs:65,0.0599336262569612) --(axis cs:64,0.0607857117284709) --(axis cs:63,0.0611139245607455) --(axis cs:62,0.0608751688754477) --(axis cs:61,0.0612957800352251) --(axis cs:60,0.0621330240652653) --(axis cs:59,0.0617956077062223) --(axis cs:58,0.0611318213390254) --(axis cs:57,0.0623011472286248) --(axis cs:56,0.0625645926746765) --(axis cs:55,0.0618698756992772) --(axis cs:54,0.0619859156474071) --(axis cs:53,0.0637836233225684) --(axis cs:52,0.0633361154526876) --(axis cs:51,0.0623758806505519) --(axis cs:50,0.0636890913814664) --(axis cs:49,0.0610165741523023) --(axis cs:48,0.0640493740070309) --(axis cs:47,0.0629860498093725) --(axis cs:46,0.0628144537462414) --(axis cs:45,0.0622531719555452) --(axis cs:44,0.0630412889473203) --(axis cs:43,0.0644563900329886) --(axis cs:42,0.0628897156103029) --(axis cs:41,0.0632515719322554) --(axis cs:40,0.0634887527229739) --(axis cs:39,0.0642186346727113) --(axis cs:38,0.0625524246602851) --(axis cs:37,0.0635433038996124) --(axis cs:36,0.0635911309909451) --(axis cs:35,0.0638235198102686) --(axis cs:34,0.0634074098144721) --(axis cs:33,0.0624936032760996) --(axis cs:32,0.0621546587851313) --(axis cs:31,0.0639561595134526) --(axis cs:30,0.065845386852525) --(axis cs:29,0.0667314139723556) --(axis cs:28,0.0691039137497777) --(axis cs:27,0.0657870636696131) --(axis cs:26,0.0663385480434435) --(axis cs:25,0.069063226225415) --(axis cs:24,0.0673596772049575) --(axis cs:23,0.0671859385958717) --(axis cs:22,0.0675090733748833) --(axis cs:21,0.0670102928502887) --(axis cs:20,0.0665512990283636) --(axis cs:19,0.0685770103151187) --(axis cs:18,0.071698055467777) --(axis cs:17,0.0718478987768805) --(axis cs:16,0.0713395475662896) --(axis cs:15,0.0747981369710128) --(axis 
cs:14,0.0758791761254057) --(axis cs:13,0.0712956155583936) --(axis cs:12,0.073699430610434) --(axis cs:11,0.0745629373949652) --(axis cs:10,0.0793389959299303) --(axis cs:9,0.0732147011431855) --(axis cs:8,0.074471848355492) --(axis cs:7,0.0765392741697327) --(axis cs:6,0.0757434017838233) --(axis cs:5,0.0754184088208093) --(axis cs:4,0.0810774782403746) --(axis cs:3,0.0827907397839362) --(axis cs:2,0.0907309689695348) --(axis cs:1,0.110206617597045) --(axis cs:0,0.102661433928556) --cycle; \path [fill=color2, fill opacity=0.2] (axis cs:0,0.0585087842300612) --(axis cs:0,0.0515653722926896) --(axis cs:1,0.0466682110548058) --(axis cs:2,0.0430808884335876) --(axis cs:3,0.0437764091485231) --(axis cs:4,0.0412078833859869) --(axis cs:5,0.0410993659153401) --(axis cs:6,0.0384947390308299) --(axis cs:7,0.0401029205183468) --(axis cs:8,0.0382434689511248) --(axis cs:9,0.0375393289318584) --(axis cs:10,0.0358470371379844) --(axis cs:11,0.0352496704898043) --(axis cs:12,0.0357952304601749) --(axis cs:13,0.0348115114037378) --(axis cs:14,0.0355850891408637) --(axis cs:15,0.0336862051698808) --(axis cs:16,0.0330827017881429) --(axis cs:17,0.0326963193880348) --(axis cs:18,0.031915347129904) --(axis cs:19,0.0310585123540572) --(axis cs:20,0.0314457742591055) --(axis cs:21,0.0324732887228175) --(axis cs:22,0.0326269749428323) --(axis cs:23,0.0327343087517939) --(axis cs:24,0.0317845190975932) --(axis cs:25,0.0327601236956299) --(axis cs:26,0.0307718956282985) --(axis cs:27,0.0320815222968037) --(axis cs:28,0.031530326346535) --(axis cs:29,0.0305550081338991) --(axis cs:30,0.0311998751315896) --(axis cs:31,0.031284313389761) --(axis cs:32,0.0303763882365634) --(axis cs:33,0.030079051444418) --(axis cs:34,0.0299503261455962) --(axis cs:35,0.0308834003061379) --(axis cs:36,0.0318846501338214) --(axis cs:37,0.031823901267209) --(axis cs:38,0.0310656578408877) --(axis cs:39,0.0298345782303648) --(axis cs:40,0.0307487442609278) --(axis cs:41,0.030824614261797) --(axis cs:42,0.0309272491871276) --(axis cs:43,0.0293647481645752) --(axis cs:44,0.0311878185754317) --(axis cs:45,0.0294995412415126) --(axis cs:46,0.0285663420255399) --(axis cs:47,0.0296223614833737) --(axis cs:48,0.0298993667607582) --(axis cs:49,0.0286335114474348) --(axis cs:50,0.0282687085732137) --(axis cs:51,0.0287820795550805) --(axis cs:52,0.0294360401164626) --(axis cs:53,0.0291936268221754) --(axis cs:54,0.0299496075285587) --(axis cs:55,0.0296421389803361) --(axis cs:56,0.0289255153826762) --(axis cs:57,0.0283517044307724) --(axis cs:58,0.0292211424466016) --(axis cs:59,0.0283350346984194) --(axis cs:60,0.0282568628431115) --(axis cs:61,0.0284055226515833) --(axis cs:62,0.0300883060835617) --(axis cs:63,0.027762336399392) --(axis cs:64,0.0283361444055936) --(axis cs:65,0.0287154307831492) --(axis cs:66,0.0291385834884591) --(axis cs:67,0.0282454509526884) --(axis cs:68,0.0287593504753087) --(axis cs:69,0.0280864391915818) --(axis cs:70,0.0286273849498937) --(axis cs:71,0.028613125425476) --(axis cs:72,0.0282666444275399) --(axis cs:73,0.0278010446700977) --(axis cs:74,0.0287420625914122) --(axis cs:75,0.0287497544318806) --(axis cs:76,0.0281511537297144) --(axis cs:77,0.0279519515571903) --(axis cs:78,0.0281428364025109) --(axis cs:79,0.0280714569246032) --(axis cs:80,0.0276203490734206) --(axis cs:81,0.0274124811345982) --(axis cs:82,0.0281310259732412) --(axis cs:83,0.028493251708564) --(axis cs:84,0.0281470429408793) --(axis cs:85,0.0276997738424225) --(axis cs:86,0.0279917763494525) --(axis cs:87,0.0272330544098878) --(axis 
cs:88,0.0268136604282683) --(axis cs:89,0.028542303778064) --(axis cs:90,0.0277928280282618) --(axis cs:91,0.0268894227528029) --(axis cs:92,0.0268976505728832) --(axis cs:93,0.0260514799420256) --(axis cs:94,0.0264313841849312) --(axis cs:95,0.0270668882879614) --(axis cs:96,0.0271976608096626) --(axis cs:97,0.0270553008187512) --(axis cs:98,0.0272181094651767) --(axis cs:99,0.0262373938872887) --(axis cs:99,0.0339725069985317) --(axis cs:99,0.0339725069985317) --(axis cs:98,0.0292703292304924) --(axis cs:97,0.0300476111006042) --(axis cs:96,0.0292419497610066) --(axis cs:95,0.0290553908792139) --(axis cs:94,0.0291117829591289) --(axis cs:93,0.0299538087185007) --(axis cs:92,0.0291613694397339) --(axis cs:91,0.0300331247664041) --(axis cs:90,0.0321745013248323) --(axis cs:89,0.0317509117351654) --(axis cs:88,0.0294414625909501) --(axis cs:87,0.0302749142840838) --(axis cs:86,0.0294547019577947) --(axis cs:85,0.0296268834826069) --(axis cs:84,0.0306654322635884) --(axis cs:83,0.0310149423207672) --(axis cs:82,0.0309371337858034) --(axis cs:81,0.029713094705875) --(axis cs:80,0.0284002684116258) --(axis cs:79,0.0296781221771977) --(axis cs:78,0.0297419983042724) --(axis cs:77,0.0300638144375969) --(axis cs:76,0.0301784977750406) --(axis cs:75,0.0296605466276039) --(axis cs:74,0.0305399534176801) --(axis cs:73,0.0296399060514046) --(axis cs:72,0.0292436555531005) --(axis cs:71,0.030637469995123) --(axis cs:70,0.0304545323062231) --(axis cs:69,0.0312754082984905) --(axis cs:68,0.0305144669148948) --(axis cs:67,0.0314687218278253) --(axis cs:66,0.0307528438198619) --(axis cs:65,0.0306454138587747) --(axis cs:64,0.0311083613157371) --(axis cs:63,0.0301871507205211) --(axis cs:62,0.0317979799069149) --(axis cs:61,0.0353708214093145) --(axis cs:60,0.0301525173246112) --(axis cs:59,0.0305359044669344) --(axis cs:58,0.031477516526377) --(axis cs:57,0.0306907612560257) --(axis cs:56,0.0308065082438897) --(axis cs:55,0.0317212660089064) --(axis cs:54,0.0309818895386544) --(axis cs:53,0.0315093765843492) --(axis cs:52,0.0317857531774903) --(axis cs:51,0.0318861743435401) --(axis cs:50,0.0314646323160018) --(axis cs:49,0.0325110148553797) --(axis cs:48,0.0332726957256519) --(axis cs:47,0.0322554949202155) --(axis cs:46,0.031208631217935) --(axis cs:45,0.0323099463993451) --(axis cs:44,0.0324480701202852) --(axis cs:43,0.0314524577413391) --(axis cs:42,0.0311496442617021) --(axis cs:41,0.0324321054466457) --(axis cs:40,0.0334416985991987) --(axis cs:39,0.0338085956430597) --(axis cs:38,0.0337397596669992) --(axis cs:37,0.0340243687914229) --(axis cs:36,0.0329594560039311) --(axis cs:35,0.034521409705249) --(axis cs:34,0.0335268242052129) --(axis cs:33,0.0335909944380455) --(axis cs:32,0.0334083541187832) --(axis cs:31,0.0330290491490536) --(axis cs:30,0.0346468660679038) --(axis cs:29,0.0336447360310446) --(axis cs:28,0.0347871872385557) --(axis cs:27,0.035871647049052) --(axis cs:26,0.0355541503616918) --(axis cs:25,0.0356748791677772) --(axis cs:24,0.0364479002442094) --(axis cs:23,0.036789738026885) --(axis cs:22,0.037811043498368) --(axis cs:21,0.0367824149648503) --(axis cs:20,0.0371924416999666) --(axis cs:19,0.0358908827362843) --(axis cs:18,0.034723010062374) --(axis cs:17,0.0378887548876973) --(axis cs:16,0.0366900125883067) --(axis cs:15,0.0381263634350653) --(axis cs:14,0.0388763411703393) --(axis cs:13,0.0388014259274618) --(axis cs:12,0.0396588809370915) --(axis cs:11,0.0398174145378904) --(axis cs:10,0.0407220819578179) --(axis cs:9,0.0410102155813672) --(axis cs:8,0.0415456808100752) 
--(axis cs:7,0.0416410052915134) --(axis cs:6,0.0413776262650571) --(axis cs:5,0.0475192344531596) --(axis cs:4,0.04480971035543) --(axis cs:3,0.0455343248254569) --(axis cs:2,0.0525954427884698) --(axis cs:1,0.0600028525471649) --(axis cs:0,0.0585087842300612) --cycle; \path [fill=color3, fill opacity=0.2] (axis cs:0,0.324146658182144) --(axis cs:0,0.324146658182144) --(axis cs:1,0.324146658182144) --(axis cs:2,0.324146658182144) --(axis cs:3,0.324146658182144) --(axis cs:4,0.324146658182144) --(axis cs:5,0.324146658182144) --(axis cs:6,0.324146658182144) --(axis cs:7,0.324146658182144) --(axis cs:8,0.324146658182144) --(axis cs:9,0.324146658182144) --(axis cs:10,0.324146658182144) --(axis cs:11,0.324146658182144) --(axis cs:12,0.324146658182144) --(axis cs:13,0.324146658182144) --(axis cs:14,0.324146658182144) --(axis cs:15,0.324146658182144) --(axis cs:16,0.324146658182144) --(axis cs:17,0.324146658182144) --(axis cs:18,0.324146658182144) --(axis cs:19,0.324146658182144) --(axis cs:20,0.324146658182144) --(axis cs:21,0.324146658182144) --(axis cs:22,0.324146658182144) --(axis cs:23,0.324146658182144) --(axis cs:24,0.324146658182144) --(axis cs:25,0.324146658182144) --(axis cs:26,0.324146658182144) --(axis cs:27,0.324146658182144) --(axis cs:28,0.324146658182144) --(axis cs:29,0.324146658182144) --(axis cs:30,0.324146658182144) --(axis cs:31,0.324146658182144) --(axis cs:32,0.324146658182144) --(axis cs:33,0.324146658182144) --(axis cs:34,0.324146658182144) --(axis cs:35,0.324146658182144) --(axis cs:36,0.324146658182144) --(axis cs:37,0.324146658182144) --(axis cs:38,0.324146658182144) --(axis cs:39,0.324146658182144) --(axis cs:40,0.324146658182144) --(axis cs:41,0.324146658182144) --(axis cs:42,0.324146658182144) --(axis cs:43,0.324146658182144) --(axis cs:44,0.324146658182144) --(axis cs:45,0.324146658182144) --(axis cs:46,0.324146658182144) --(axis cs:47,0.324146658182144) --(axis cs:48,0.324146658182144) --(axis cs:49,0.324146658182144) --(axis cs:50,0.324146658182144) --(axis cs:51,0.324146658182144) --(axis cs:52,0.324146658182144) --(axis cs:53,0.324146658182144) --(axis cs:54,0.324146658182144) --(axis cs:55,0.324146658182144) --(axis cs:56,0.324146658182144) --(axis cs:57,0.324146658182144) --(axis cs:58,0.324146658182144) --(axis cs:59,0.324146658182144) --(axis cs:60,0.324146658182144) --(axis cs:61,0.324146658182144) --(axis cs:62,0.324146658182144) --(axis cs:63,0.324146658182144) --(axis cs:64,0.324146658182144) --(axis cs:65,0.324146658182144) --(axis cs:66,0.324146658182144) --(axis cs:67,0.324146658182144) --(axis cs:68,0.324146658182144) --(axis cs:69,0.324146658182144) --(axis cs:70,0.324146658182144) --(axis cs:71,0.324146658182144) --(axis cs:72,0.324146658182144) --(axis cs:73,0.324146658182144) --(axis cs:74,0.324146658182144) --(axis cs:75,0.324146658182144) --(axis cs:76,0.324146658182144) --(axis cs:77,0.324146658182144) --(axis cs:78,0.324146658182144) --(axis cs:79,0.324146658182144) --(axis cs:80,0.324146658182144) --(axis cs:81,0.324146658182144) --(axis cs:82,0.324146658182144) --(axis cs:83,0.324146658182144) --(axis cs:84,0.324146658182144) --(axis cs:85,0.324146658182144) --(axis cs:86,0.324146658182144) --(axis cs:87,0.324146658182144) --(axis cs:88,0.324146658182144) --(axis cs:89,0.324146658182144) --(axis cs:90,0.324146658182144) --(axis cs:91,0.324146658182144) --(axis cs:92,0.324146658182144) --(axis cs:93,0.324146658182144) --(axis cs:94,0.324146658182144) --(axis cs:95,0.324146658182144) --(axis cs:96,0.324146658182144) --(axis 
cs:97,0.324146658182144) --(axis cs:98,0.324146658182144) --(axis cs:99,0.324146658182144) --(axis cs:99,0.324146658182144) --(axis cs:99,0.324146658182144) --(axis cs:98,0.324146658182144) --(axis cs:97,0.324146658182144) --(axis cs:96,0.324146658182144) --(axis cs:95,0.324146658182144) --(axis cs:94,0.324146658182144) --(axis cs:93,0.324146658182144) --(axis cs:92,0.324146658182144) --(axis cs:91,0.324146658182144) --(axis cs:90,0.324146658182144) --(axis cs:89,0.324146658182144) --(axis cs:88,0.324146658182144) --(axis cs:87,0.324146658182144) --(axis cs:86,0.324146658182144) --(axis cs:85,0.324146658182144) --(axis cs:84,0.324146658182144) --(axis cs:83,0.324146658182144) --(axis cs:82,0.324146658182144) --(axis cs:81,0.324146658182144) --(axis cs:80,0.324146658182144) --(axis cs:79,0.324146658182144) --(axis cs:78,0.324146658182144) --(axis cs:77,0.324146658182144) --(axis cs:76,0.324146658182144) --(axis cs:75,0.324146658182144) --(axis cs:74,0.324146658182144) --(axis cs:73,0.324146658182144) --(axis cs:72,0.324146658182144) --(axis cs:71,0.324146658182144) --(axis cs:70,0.324146658182144) --(axis cs:69,0.324146658182144) --(axis cs:68,0.324146658182144) --(axis cs:67,0.324146658182144) --(axis cs:66,0.324146658182144) --(axis cs:65,0.324146658182144) --(axis cs:64,0.324146658182144) --(axis cs:63,0.324146658182144) --(axis cs:62,0.324146658182144) --(axis cs:61,0.324146658182144) --(axis cs:60,0.324146658182144) --(axis cs:59,0.324146658182144) --(axis cs:58,0.324146658182144) --(axis cs:57,0.324146658182144) --(axis cs:56,0.324146658182144) --(axis cs:55,0.324146658182144) --(axis cs:54,0.324146658182144) --(axis cs:53,0.324146658182144) --(axis cs:52,0.324146658182144) --(axis cs:51,0.324146658182144) --(axis cs:50,0.324146658182144) --(axis cs:49,0.324146658182144) --(axis cs:48,0.324146658182144) --(axis cs:47,0.324146658182144) --(axis cs:46,0.324146658182144) --(axis cs:45,0.324146658182144) --(axis cs:44,0.324146658182144) --(axis cs:43,0.324146658182144) --(axis cs:42,0.324146658182144) --(axis cs:41,0.324146658182144) --(axis cs:40,0.324146658182144) --(axis cs:39,0.324146658182144) --(axis cs:38,0.324146658182144) --(axis cs:37,0.324146658182144) --(axis cs:36,0.324146658182144) --(axis cs:35,0.324146658182144) --(axis cs:34,0.324146658182144) --(axis cs:33,0.324146658182144) --(axis cs:32,0.324146658182144) --(axis cs:31,0.324146658182144) --(axis cs:30,0.324146658182144) --(axis cs:29,0.324146658182144) --(axis cs:28,0.324146658182144) --(axis cs:27,0.324146658182144) --(axis cs:26,0.324146658182144) --(axis cs:25,0.324146658182144) --(axis cs:24,0.324146658182144) --(axis cs:23,0.324146658182144) --(axis cs:22,0.324146658182144) --(axis cs:21,0.324146658182144) --(axis cs:20,0.324146658182144) --(axis cs:19,0.324146658182144) --(axis cs:18,0.324146658182144) --(axis cs:17,0.324146658182144) --(axis cs:16,0.324146658182144) --(axis cs:15,0.324146658182144) --(axis cs:14,0.324146658182144) --(axis cs:13,0.324146658182144) --(axis cs:12,0.324146658182144) --(axis cs:11,0.324146658182144) --(axis cs:10,0.324146658182144) --(axis cs:9,0.324146658182144) --(axis cs:8,0.324146658182144) --(axis cs:7,0.324146658182144) --(axis cs:6,0.324146658182144) --(axis cs:5,0.324146658182144) --(axis cs:4,0.324146658182144) --(axis cs:3,0.324146658182144) --(axis cs:2,0.324146658182144) --(axis cs:1,0.324146658182144) --(axis cs:0,0.324146658182144) --cycle; \path [fill=color4, fill opacity=0.2] (axis cs:0,0.152928605877602) --(axis cs:0,0.120977034962929) --(axis cs:1,0.106109880888228) 
--(axis cs:2,0.110963408568867) --(axis cs:3,0.0998164999237947) --(axis cs:4,0.0877671420964917) --(axis cs:5,0.090406124168235) --(axis cs:6,0.0836237280483165) --(axis cs:7,0.0859997045869903) --(axis cs:8,0.0818855261266928) --(axis cs:9,0.0915151389931138) --(axis cs:10,0.0833151508381635) --(axis cs:11,0.081374136994208) --(axis cs:12,0.0805480547820317) --(axis cs:13,0.0757691677757318) --(axis cs:14,0.0766261801712458) --(axis cs:15,0.0770711678860783) --(axis cs:16,0.0772444935680769) --(axis cs:17,0.0770079348641095) --(axis cs:18,0.0756136803039801) --(axis cs:19,0.0747557610103444) --(axis cs:20,0.076468450343598) --(axis cs:21,0.0755145476171619) --(axis cs:22,0.0768210199404373) --(axis cs:23,0.0745422978255426) --(axis cs:24,0.0741877673993859) --(axis cs:25,0.072864563934863) --(axis cs:26,0.0736355523161278) --(axis cs:27,0.0741490487238686) --(axis cs:28,0.0731240860030856) --(axis cs:29,0.0731931317419441) --(axis cs:30,0.0723704325260849) --(axis cs:31,0.074178892498367) --(axis cs:32,0.0702783820600176) --(axis cs:33,0.0711363709330813) --(axis cs:34,0.0725564693655431) --(axis cs:35,0.0693933813269633) --(axis cs:36,0.0721249672373602) --(axis cs:37,0.0713274533360154) --(axis cs:38,0.0723501156874726) --(axis cs:39,0.072322886957201) --(axis cs:40,0.0720808034384221) --(axis cs:41,0.072094962443218) --(axis cs:42,0.0725934828943096) --(axis cs:43,0.0716788468973466) --(axis cs:44,0.0723009214641341) --(axis cs:45,0.0707768837641866) --(axis cs:46,0.072063953107169) --(axis cs:47,0.0710236287111212) --(axis cs:48,0.0723334774140867) --(axis cs:49,0.0709115211367573) --(axis cs:50,0.0716055301830941) --(axis cs:51,0.0696501777659785) --(axis cs:52,0.0697738852560604) --(axis cs:53,0.0736208611555856) --(axis cs:54,0.0711764536881917) --(axis cs:55,0.0716773536695112) --(axis cs:56,0.0701359723884307) --(axis cs:57,0.070171387462988) --(axis cs:58,0.0700642968267711) --(axis cs:59,0.0687916640666759) --(axis cs:60,0.0697214609986544) --(axis cs:61,0.0710976680863368) --(axis cs:62,0.0690369415986814) --(axis cs:63,0.0701182018296727) --(axis cs:64,0.0703857469702261) --(axis cs:65,0.0689866459929366) --(axis cs:66,0.069959043088997) --(axis cs:67,0.0699356466401782) --(axis cs:68,0.0703128188275059) --(axis cs:69,0.069618445398307) --(axis cs:70,0.0702920445794382) --(axis cs:71,0.0691747556951408) --(axis cs:72,0.0704843542821222) --(axis cs:73,0.0692085121859306) --(axis cs:74,0.0694398446719097) --(axis cs:75,0.0692040405483902) --(axis cs:76,0.0697320026673107) --(axis cs:77,0.0680682576583425) --(axis cs:78,0.0699350642484927) --(axis cs:79,0.0697199778094923) --(axis cs:80,0.0691696700190699) --(axis cs:81,0.0703141622216316) --(axis cs:82,0.0708879090803099) --(axis cs:83,0.0710736813108345) --(axis cs:84,0.0716541563288543) --(axis cs:85,0.0705035148968246) --(axis cs:86,0.0687416890132915) --(axis cs:87,0.0691217044287322) --(axis cs:88,0.0685524098427121) --(axis cs:89,0.070733495196968) --(axis cs:90,0.0700002995269575) --(axis cs:91,0.0698926536027794) --(axis cs:92,0.0716384480094893) --(axis cs:93,0.0705812892640546) --(axis cs:94,0.070275058519698) --(axis cs:95,0.0690882254260136) --(axis cs:96,0.0685808785979013) --(axis cs:97,0.0687433166726168) --(axis cs:98,0.0699351809361827) --(axis cs:99,0.0696053749298623) --(axis cs:99,0.070938935282273) --(axis cs:99,0.070938935282273) --(axis cs:98,0.0717995502611745) --(axis cs:97,0.0706501440779631) --(axis cs:96,0.0702397010501166) --(axis cs:95,0.0715827774388241) --(axis cs:94,0.0727610357882015) --(axis 
cs:93,0.0732785620962664) --(axis cs:92,0.0754072716140764) --(axis cs:91,0.0739481391723747) --(axis cs:90,0.0717137667400561) --(axis cs:89,0.0725425107863846) --(axis cs:88,0.0721596308200534) --(axis cs:87,0.071274172336519) --(axis cs:86,0.0714860579502095) --(axis cs:85,0.0739136875758621) --(axis cs:84,0.0757668669207719) --(axis cs:83,0.0734153805215935) --(axis cs:82,0.072479014847951) --(axis cs:81,0.0719739325373559) --(axis cs:80,0.0716915163660848) --(axis cs:79,0.0719523532852495) --(axis cs:78,0.0713157547193265) --(axis cs:77,0.070638772161146) --(axis cs:76,0.0722816037617894) --(axis cs:75,0.0725838401107132) --(axis cs:74,0.0732873336632801) --(axis cs:73,0.0733086819659478) --(axis cs:72,0.0712212272637076) --(axis cs:71,0.073544122693264) --(axis cs:70,0.0730352482205115) --(axis cs:69,0.0728401037435768) --(axis cs:68,0.0729412675477306) --(axis cs:67,0.0718388170133862) --(axis cs:66,0.0738235040272826) --(axis cs:65,0.0716794245875459) --(axis cs:64,0.0725279462193949) --(axis cs:63,0.0726526011450282) --(axis cs:62,0.0723032366525851) --(axis cs:61,0.0721067884814275) --(axis cs:60,0.0735508167141676) --(axis cs:59,0.0733790453048908) --(axis cs:58,0.0748306994586674) --(axis cs:57,0.0744208738514041) --(axis cs:56,0.0756359274309434) --(axis cs:55,0.0745853307949434) --(axis cs:54,0.074681527159167) --(axis cs:53,0.0767771130971152) --(axis cs:52,0.0742797974288379) --(axis cs:51,0.0739019884575475) --(axis cs:50,0.0755986513688392) --(axis cs:49,0.0728561486601864) --(axis cs:48,0.0749881878729311) --(axis cs:47,0.0755665773159098) --(axis cs:46,0.0754728789870235) --(axis cs:45,0.074164607124409) --(axis cs:44,0.0758068068979493) --(axis cs:43,0.074896738110321) --(axis cs:42,0.0757794861370243) --(axis cs:41,0.0748263175832178) --(axis cs:40,0.0751657301938563) --(axis cs:39,0.0756595136985454) --(axis cs:38,0.0762505848101546) --(axis cs:37,0.0775800918729155) --(axis cs:36,0.0786420879164866) --(axis cs:35,0.0735416652266484) --(axis cs:34,0.0733494038138926) --(axis cs:33,0.0733713709950193) --(axis cs:32,0.0786378564863538) --(axis cs:31,0.0762805818268131) --(axis cs:30,0.0788294089732437) --(axis cs:29,0.0765159141450493) --(axis cs:28,0.0776118555261884) --(axis cs:27,0.0772981103280265) --(axis cs:26,0.0766051968045845) --(axis cs:25,0.0761751664947863) --(axis cs:24,0.0765961290468421) --(axis cs:23,0.0786227415945853) --(axis cs:22,0.0820365849208221) --(axis cs:21,0.0792954935720318) --(axis cs:20,0.0813819790721048) --(axis cs:19,0.0764749318531199) --(axis cs:18,0.0789016665807474) --(axis cs:17,0.0800239737672152) --(axis cs:16,0.0857787905095675) --(axis cs:15,0.0839147281052471) --(axis cs:14,0.0821160658605154) --(axis cs:13,0.0857416037372534) --(axis cs:12,0.10758528927219) --(axis cs:11,0.0912457809945692) --(axis cs:10,0.10738738288376) --(axis cs:9,0.108256643762738) --(axis cs:8,0.0911930525838633) --(axis cs:7,0.107169662726395) --(axis cs:6,0.0961008669023595) --(axis cs:5,0.0946178363029183) --(axis cs:4,0.101533821186093) --(axis cs:3,0.114688016200931) --(axis cs:2,0.123691009065143) --(axis cs:1,0.196129578864809) --(axis cs:0,0.152928605877602) --cycle; \addplot [semithick, color0] table { 0 0.146302610635757 1 0.11462994068861 2 0.10051484555006 3 0.0934062570333481 4 0.0887606948614121 5 0.0847887858748436 6 0.0810069844126701 7 0.0745091363787651 8 0.0726552873849869 9 0.0712171405553818 10 0.0689564138650894 11 0.0678576096892357 12 0.0660486564040184 13 0.0679753944277763 14 0.0680507674813271 15 0.0658858887851238 16 
0.0635453023016453 17 0.0645245835185051 18 0.066526622325182 19 0.069504514336586 20 0.0622282519936562 21 0.0593678832054138 22 0.0601047001779079 23 0.0583522699773312 24 0.0588986285030842 25 0.0576429679989815 26 0.0551351279020309 27 0.0556683823466301 28 0.0547278061509132 29 0.0546903774142265 30 0.0545991279184818 31 0.0559452943503857 32 0.0565451182425022 33 0.0547317706048489 34 0.0558585472404957 35 0.0558190815150738 36 0.053689943999052 37 0.0539104804396629 38 0.0535776518285275 39 0.0525731548666954 40 0.0516289323568344 41 0.0511490374803543 42 0.0509539633989334 43 0.0500463657081127 44 0.0497021965682507 45 0.0489770822227001 46 0.0494820214807987 47 0.0495666235685348 48 0.0500484049320221 49 0.0503263384103775 50 0.0495234109461308 51 0.049505228549242 52 0.0501955933868885 53 0.0496317237615585 54 0.0515327714383602 55 0.0498342290520668 56 0.0494092911481857 57 0.0485712267458439 58 0.0506348468363285 59 0.0473383285105228 60 0.046684480458498 61 0.0485772043466568 62 0.0465221919119358 63 0.0470261797308922 64 0.0483630776405335 65 0.0477479845285416 66 0.04734041467309 67 0.0494067028164864 68 0.0477269768714905 69 0.0485437095165253 70 0.0482931546866894 71 0.0478970766067505 72 0.0474512845277786 73 0.0476009666919708 74 0.0481770940124989 75 0.0486908324062824 76 0.0479306660592556 77 0.0464971214532852 78 0.046592104434967 79 0.0479301005601883 80 0.0454478897154331 81 0.0450487591326237 82 0.0464608408510685 83 0.0459051556885242 84 0.0463160194456577 85 0.0458661995828152 86 0.0456390380859375 87 0.0468950808048248 88 0.046687725186348 89 0.0460989862680435 90 0.0475834995508194 91 0.0464893832802773 92 0.0474101275205612 93 0.0458032660186291 94 0.0457970261573791 95 0.0447717934846878 96 0.0475034356117249 97 0.0462651565670967 98 0.0456121817231178 99 0.0455481469631195 }; \addlegendentry{NBD\xspace} \addplot [semithick, color1] table { 0 0.0968876779079437 1 0.098241214454174 2 0.0880005195736885 3 0.0815821602940559 4 0.0794006034731865 5 0.0740332826972008 6 0.0739469915628433 7 0.074206480383873 8 0.0734207704663277 9 0.0717181265354156 10 0.0747286945581436 11 0.0721428140997887 12 0.0711391806602478 13 0.0689232707023621 14 0.0713020414113998 15 0.0716932937502861 16 0.0695510372519493 17 0.0689401507377625 18 0.0689754769206047 19 0.0665133565664291 20 0.0653091281652451 21 0.0655576840043068 22 0.0650666818022728 23 0.0656527638435364 24 0.0662467837333679 25 0.0672572895884514 26 0.0645031578838825 27 0.064582759141922 28 0.0657506003975868 29 0.0640500903129578 30 0.0635989494621754 31 0.0621948704123497 32 0.0609876587986946 33 0.0610866345465183 34 0.0618194162845612 35 0.0624993599951267 36 0.0623425409197807 37 0.0622468657791615 38 0.0613317161798477 39 0.0626102037727833 40 0.0622000612318516 41 0.0619084015488625 42 0.061407807469368 43 0.0621296428143978 44 0.0612105630338192 45 0.0602603554725647 46 0.0606953799724579 47 0.0611389718949795 48 0.0623343952000141 49 0.0599130794405937 50 0.0619893029332161 51 0.0606217727065086 52 0.0609289526939392 53 0.0617755867540836 54 0.0605070389807224 55 0.060666535794735 56 0.0606781758368015 57 0.0609186343848705 58 0.0597031466662884 59 0.0604670479893684 60 0.0601449348032475 61 0.0597892209887505 62 0.0595101326704025 63 0.0600581675767899 64 0.0596847094595432 65 0.0589595347642899 66 0.0593985766172409 67 0.0592039220035076 68 0.0601451978087425 69 0.0598542124032974 70 0.0604684591293335 71 0.0600512899458408 72 0.0596979781985283 73 0.0603294350206852 74 0.0594628095626831 75 
0.0594901949167252 76 0.0586500883102417 77 0.0592827901244164 78 0.0592525452375412 79 0.0595896601676941 80 0.0599486865103245 81 0.0590158380568028 82 0.0590490020811558 83 0.0590119704604149 84 0.0596445597708225 85 0.0595263138413429 86 0.0594393134117126 87 0.0588865250349045 88 0.0592575594782829 89 0.0597792088985443 90 0.0599087089300156 91 0.0601224966347218 92 0.060168980807066 93 0.0594300225377083 94 0.0604251496493816 95 0.0600754521787167 96 0.0593735717236996 97 0.0592812314629555 98 0.0592766337096691 99 0.0589347206056118 }; \addlegendentry{Deepnorm} \addplot [semithick, color2] table { 0 0.0550370782613754 1 0.0533355318009853 2 0.0478381656110287 3 0.04465536698699 4 0.0430087968707085 5 0.0443093001842499 6 0.0399361826479435 7 0.0408719629049301 8 0.0398945748806 9 0.0392747722566128 10 0.0382845595479012 11 0.0375335425138473 12 0.0377270556986332 13 0.0368064686655998 14 0.0372307151556015 15 0.0359062843024731 16 0.0348863571882248 17 0.035292537137866 18 0.033319178596139 19 0.0334746975451708 20 0.0343191079795361 21 0.0346278518438339 22 0.0352190092206001 23 0.0347620233893394 24 0.0341162096709013 25 0.0342175014317036 26 0.0331630229949951 27 0.0339765846729279 28 0.0331587567925453 29 0.0320998720824719 30 0.0329233705997467 31 0.0321566812694073 32 0.0318923711776733 33 0.0318350229412317 34 0.0317385751754046 35 0.0327024050056934 36 0.0324220530688763 37 0.0329241350293159 38 0.0324027087539434 39 0.0318215869367123 40 0.0320952214300632 41 0.0316283598542213 42 0.0310384467244148 43 0.0304086029529572 44 0.0318179443478584 45 0.0309047438204288 46 0.0298874866217375 47 0.0309389282017946 48 0.0315860312432051 49 0.0305722631514072 50 0.0298666704446077 51 0.0303341269493103 52 0.0306108966469765 53 0.0303515017032623 54 0.0304657485336065 55 0.0306817024946213 56 0.029866011813283 57 0.029521232843399 58 0.0303493294864893 59 0.0294354695826769 60 0.0292046900838614 61 0.0318881720304489 62 0.0309431429952383 63 0.0289747435599566 64 0.0297222528606653 65 0.029680422320962 66 0.0299457136541605 67 0.0298570863902569 68 0.0296369086951017 69 0.0296809237450361 70 0.0295409586280584 71 0.0296252977102995 72 0.0287551499903202 73 0.0287204753607512 74 0.0296410080045462 75 0.0292051505297422 76 0.0291648257523775 77 0.0290078829973936 78 0.0289424173533916 79 0.0288747895509005 80 0.0280103087425232 81 0.0285627879202366 82 0.0295340798795223 83 0.0297540970146656 84 0.0294062376022339 85 0.0286633286625147 86 0.0287232391536236 87 0.0287539843469858 88 0.0281275615096092 89 0.0301466077566147 90 0.0299836646765471 91 0.0284612737596035 92 0.0280295100063086 93 0.0280026443302631 94 0.0277715835720301 95 0.0280611395835876 96 0.0282198052853346 97 0.0285514559596777 98 0.0282442193478346 99 0.0301049504429102 }; \addlegendentry{Widenorm} \addplot [semithick, color3] table { 0 0.324146658182144 1 0.324146658182144 2 0.324146658182144 3 0.324146658182144 4 0.324146658182144 5 0.324146658182144 6 0.324146658182144 7 0.324146658182144 8 0.324146658182144 9 0.324146658182144 10 0.324146658182144 11 0.324146658182144 12 0.324146658182144 13 0.324146658182144 14 0.324146658182144 15 0.324146658182144 16 0.324146658182144 17 0.324146658182144 18 0.324146658182144 19 0.324146658182144 20 0.324146658182144 21 0.324146658182144 22 0.324146658182144 23 0.324146658182144 24 0.324146658182144 25 0.324146658182144 26 0.324146658182144 27 0.324146658182144 28 0.324146658182144 29 0.324146658182144 30 0.324146658182144 31 0.324146658182144 32 0.324146658182144 33 
0.324146658182144 34 0.324146658182144 35 0.324146658182144 36 0.324146658182144 37 0.324146658182144 38 0.324146658182144 39 0.324146658182144 40 0.324146658182144 41 0.324146658182144 42 0.324146658182144 43 0.324146658182144 44 0.324146658182144 45 0.324146658182144 46 0.324146658182144 47 0.324146658182144 48 0.324146658182144 49 0.324146658182144 50 0.324146658182144 51 0.324146658182144 52 0.324146658182144 53 0.324146658182144 54 0.324146658182144 55 0.324146658182144 56 0.324146658182144 57 0.324146658182144 58 0.324146658182144 59 0.324146658182144 60 0.324146658182144 61 0.324146658182144 62 0.324146658182144 63 0.324146658182144 64 0.324146658182144 65 0.324146658182144 66 0.324146658182144 67 0.324146658182144 68 0.324146658182144 69 0.324146658182144 70 0.324146658182144 71 0.324146658182144 72 0.324146658182144 73 0.324146658182144 74 0.324146658182144 75 0.324146658182144 76 0.324146658182144 77 0.324146658182144 78 0.324146658182144 79 0.324146658182144 80 0.324146658182144 81 0.324146658182144 82 0.324146658182144 83 0.324146658182144 84 0.324146658182144 85 0.324146658182144 86 0.324146658182144 87 0.324146658182144 88 0.324146658182144 89 0.324146658182144 90 0.324146658182144 91 0.324146658182144 92 0.324146658182144 93 0.324146658182144 94 0.324146658182144 95 0.324146658182144 96 0.324146658182144 97 0.324146658182144 98 0.324146658182144 99 0.324146658182144 }; \addlegendentry{Deep-div} \addplot [semithick, color4] table { 0 0.136952820420265 1 0.151119729876518 2 0.117327208817005 3 0.107252258062363 4 0.0946504816412926 5 0.0925119802355766 6 0.089862297475338 7 0.0965846836566925 8 0.086539289355278 9 0.0998858913779259 10 0.0953512668609619 11 0.0863099589943886 12 0.0940666720271111 13 0.0807553857564926 14 0.0793711230158806 15 0.0804929479956627 16 0.0815116420388222 17 0.0785159543156624 18 0.0772576734423637 19 0.0756153464317322 20 0.0789252147078514 21 0.0774050205945969 22 0.0794288024306297 23 0.0765825197100639 24 0.075391948223114 25 0.0745198652148247 26 0.0751203745603561 27 0.0757235795259476 28 0.075367970764637 29 0.0748545229434967 30 0.0755999207496643 31 0.07522973716259 32 0.0744581192731857 33 0.0722538709640503 34 0.0729529365897179 35 0.0714675232768059 36 0.0753835275769234 37 0.0744537726044655 38 0.0743003502488136 39 0.0739912003278732 40 0.0736232668161392 41 0.0734606400132179 42 0.074186484515667 43 0.0732877925038338 44 0.0740538641810417 45 0.0724707454442978 46 0.0737684160470962 47 0.0732951030135155 48 0.0736608326435089 49 0.0718838348984718 50 0.0736020907759666 51 0.071776083111763 52 0.0720268413424492 53 0.0751989871263504 54 0.0729289904236794 55 0.0731313422322273 56 0.072885949909687 57 0.072296130657196 58 0.0724474981427193 59 0.0710853546857834 60 0.071636138856411 61 0.0716022282838821 62 0.0706700891256332 63 0.0713854014873505 64 0.0714568465948105 65 0.0703330352902412 66 0.0718912735581398 67 0.0708872318267822 68 0.0716270431876183 69 0.0712292745709419 70 0.0716636463999748 71 0.0713594391942024 72 0.0708527907729149 73 0.0712585970759392 74 0.0713635891675949 75 0.0708939403295517 76 0.07100680321455 77 0.0693535149097443 78 0.0706254094839096 79 0.0708361655473709 80 0.0704305931925774 81 0.0711440473794937 82 0.0716834619641304 83 0.072244530916214 84 0.0737105116248131 85 0.0722086012363434 86 0.0701138734817505 87 0.0701979383826256 88 0.0703560203313827 89 0.0716380029916763 90 0.0708570331335068 91 0.0719203963875771 92 0.0735228598117828 93 0.0719299256801605 94 0.0715180471539497 95 0.0703355014324188 
96 0.0694102898240089 97 0.0696967303752899 98 0.0708673655986786 99 0.0702721551060677 }; \addlegendentry{Mahalanobis} \end{axis} \end{tikzpicture} \caption{MSE (y-axis) during training for predicting the overlap between two images, using embeddings learned jointly with the underlying CNN.} \label{fig:overlap_result} \end{wrapfigure}

We observe that Widenorm performs better on this task, especially early in training, because it is permitted to violate the definiteness property of norms: it may output $D(x, x) > 0$. The method thus learns to map a zero difference between embeddings to some intermediate distance, lowering its MSE. This can be problematic in use cases where definiteness is an important property.

\subsection{Shortest path length}

Our final task inherently favors the Widenorm and Deepnorm methods because they maintain the triangle inequality (i.e., no shortcuts are allowed in a shortest path), and so they are expected to perform better than NBD\xspace. The task is included so that we may further compare our NBD\xspace with the Deep-div approach, the only other method for learning deep Bregman divergences. We closely reproduce the experimental setup of \citet{pitis2020inductive}, with details in \cref{sec:shortest_path_details}.

The results for each method are shown in \cref{tbl:short_path_results} and largely match our expectations. The triangle-inequality preserving measures usually perform best, which makes sense given the nature of the problem: any violation of the triangle inequality means the distance measure is ``taking a shortcut'' through the graph search space, and thus must be underestimating the true distance to the target node. NBD\xspace and Deep-div, being restricted to the space of Bregman divergences, have no constraint that prevents violating the triangle inequality, and thus often underestimate the true distances. We note that on the 3d and octagon datasets NBD\xspace attains the lowest training error of all methods, and across all datasets its test error is more than an order of magnitude below that of Deep-div. These results show that NBD\xspace is the most viable option for learning a useful Bregman divergence, and it performs considerably better than the prior Deep-div approach.

We include both the train and test losses in \cref{tbl:short_path_results}, as they allow us to further confirm the impact the triangle inequality has in avoiding overfitting the training sub-graph. The start/end pairs used in training are unaffected by the triangle inequality, because any over- or under-prediction is directly corrected by the loss calculation and gradient update. Thus the training loss, for this task, primarily demonstrates the ability of each method to learn the asymmetric relationships. The Mahalanobis method attains among the lowest training errors on several datasets, but degrades more significantly on the test set --- though not to an outrageous degree. This suggests that while the shortest-paths problem from \citet{pitis2020inductive} does stress the ability to correctly leverage the triangle inequality to avoid overfitting, it is not an ideal benchmark for general asymmetric learning.
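To illustrate the ``shortcut'' argument numerically, the following is a small toy example of ours using the squared Euclidean distance, the canonical Bregman divergence generated by $\phi(x)=\lVert x\rVert_2^2$; it is not taken from the benchmark code.

\begin{verbatim}
import numpy as np

def sq_euclidean(x, y):
    # Bregman divergence generated by phi(x) = ||x||^2.
    return float(np.sum((x - y) ** 2))

x, y, z = np.array([0.0]), np.array([1.0]), np.array([2.0])
direct = sq_euclidean(x, z)                      # 4.0
via_y = sq_euclidean(x, y) + sq_euclidean(y, z)  # 1.0 + 1.0 = 2.0
print(direct > via_y)  # True: the triangle inequality fails
\end{verbatim}

Any measure with this property cannot exactly represent shortest-path lengths, which are additive along paths and hence always satisfy the triangle inequality.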
\begin{table}[t]
\centering
\adjustbox{max width=\columnwidth}{
\begin{tabular}{@{}crrrrrrrrrr@{}}
\toprule
\multicolumn{1}{l}{} & \multicolumn{2}{c}{3d} & \multicolumn{2}{c}{3dd} & \multicolumn{2}{c}{octagon} & \multicolumn{2}{c}{taxi} & \multicolumn{2}{c}{traffic} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11}
Method & \multicolumn{1}{c}{Train} & \multicolumn{1}{c}{Test} & \multicolumn{1}{c}{Train} & \multicolumn{1}{c}{Test} & \multicolumn{1}{c}{Train} & \multicolumn{1}{c}{Test} & \multicolumn{1}{c}{Train} & \multicolumn{1}{c}{Test} & \multicolumn{1}{c}{Train} & \multicolumn{1}{c}{Test} \\
\midrule
NBD\xspace & 4.34 & 33.49 & 19.91 & 337.59 & 4.67 & 25.32 & 3.11 & 66.27 & 2.53 & 12.03 \\
Deepnorm & 4.97 & 22.44 & 34.40 & 275.99 & 4.81 & 15.19 & 1.52 & 20.31 & 1.76 & 5.27 \\
Mahalanobis & 4.45 & 30.90 & 24.99 & 267.18 & 6.82 & 44.30 & 1.31 & 18.32 & 1.47 & 5.60 \\
Deep-div & 695.97 & 930.57 & 589.13 & 806.14 & 879.94 & 1046.08 & 489.80 & 625.16 & 399.14 & 618.94 \\
Widenorm & 4.49 & 27.92 & 25.76 & 253.65 & 5.17 & 23.46 & 1.18 & 16.20 & 1.44 & 5.21 \\
Bregman-sqrt & 5.94 & 27.59 & 27.70 & 266.25 & 8.62 & 40.18 & 1.57 & 19.02 & 1.63 & 5.23 \\
Bregman-GS & 4.50 & 30.51 & 23.78 & 266.91 & 7.26 & 43.13 & 1.17 & 16.71 & 1.55 & 5.49 \\
\bottomrule
\end{tabular}
}
\caption{Results of learning measures on the shortest-path task. The triangle-inequality preserving Deepnorm and Widenorm are expected to perform best. Our NBD\xspace performs significantly better than the previous Bregman learning approach, Deep-div, and is competitive with the triangle-inequality preserving methods. The gap between train and test loss shows how the triangle inequality helps avoid over-fitting the observed sub-graph used for training. \label{tbl:short_path_results}}
\vspace{-3mm}
\end{table}

We conduct further analysis to explore the degree to which the properties underlying Bregman divergences affect shortest-path-length learning. We introduce two extensions to NBD\xspace: the first is a soft modification encouraging the triangle inequality to be obeyed (Bregman-sqrt); the second imposes a hard constraint guaranteeing the triangle inequality (Bregman-GS). Results are in \cref{tbl:short_path_results}.

For the first, we draw on the mathematical literature demonstrating that metrics can be induced from the square root of certain symmetrized Bregman divergences, depending on constraints on $\phi$ \cite{chen2008metrics}. We learn the square root of the Bregman divergence to provide a soft inductive bias (as an illustrative example, Euclidean distance is a metric but squared Euclidean distance is not). For the second, we adopt a modification of the Bregman divergence known as the Generalized Symmetrized Bregman (GSB) divergence. As shown by \citet{acharyya2013bregman}, the square root of such a divergence is guaranteed to satisfy the triangle inequality. This divergence is defined as
\begin{equation*}
D_\phi^{\textit{gsb}}(x,y) = D_\phi(x,y) + D_\phi(y,x) + \frac{1}{2} \lVert x-y\rVert_2^2 + \frac{1}{2}\lVert \nabla \phi(x) - \nabla \phi(y) \rVert_2^2.
\end{equation*}
There is an inherent tradeoff between the two extensions: Bregman-sqrt can remain asymmetric but is not guaranteed to satisfy the triangle inequality, while Bregman-GS always satisfies the triangle inequality but is necessarily symmetric. We see that these two modifications to NBD\xspace are highly competitive with Deepnorm and Widenorm.
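To make the two variants concrete, the following is a minimal numerical sketch; the generator $\phi(x)=\sum_i x_i\log x_i$ (negative entropy) is our own illustrative choice, standing in for the convex function learned by NBD\xspace.

\begin{verbatim}
import numpy as np

def phi(x):          # negative entropy, a strictly convex generator
    return float(np.sum(x * np.log(x)))

def grad_phi(x):
    return np.log(x) + 1.0

def bregman(x, y):   # D_phi(x,y) = phi(x) - phi(y) - <grad phi(y), x-y>
    return phi(x) - phi(y) - float(grad_phi(y) @ (x - y))

def gsb(x, y):
    # Generalized Symmetrized Bregman divergence (Acharyya et al.);
    # its square root satisfies the triangle inequality.
    return (bregman(x, y) + bregman(y, x)
            + 0.5 * float(np.sum((x - y) ** 2))
            + 0.5 * float(np.sum((grad_phi(x) - grad_phi(y)) ** 2)))

x, y = np.array([0.2, 0.8]), np.array([0.6, 0.4])
print(bregman(x, y), bregman(y, x))      # asymmetric in general
print(np.isclose(gsb(x, y), gsb(y, x)))  # True: GSB is symmetric
print(np.sqrt(bregman(x, y)))            # the quantity Bregman-sqrt regresses
\end{verbatim}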
Furthermore, the relative performance of these two extensions provides an indication of whether asymmetry or the triangle inequality is more crucial to modeling a given dataset. These methods highlight that even when a given task is highly non-Bregman, NBD\xspace can be readily extended to relax or strengthen various assumptions to better model the data. \section{Conclusion} \label{sec:conclusion} To enable future asymmetric modeling research, we have developed the Neural Bregman Divergence (NBD). NBD jointly learns a Bregman measure and a feature-extracting neural network. We show that NBD learns divergences directly or indirectly when trained jointly with a network, and that NBD still learns effectively when the underlying metric is not a divergence, allowing effective use of our tool across a wide spectrum of tasks while retaining the desirable properties of Bregman divergences. \bibliographystyle{IEEEtranN}
\section{Introduction} For an infinite sequence of real-valued random variables $(X_1,X_2,\ldots)$, let \begin{equation} K_n=K_n(X_1,\ldots,X_n):=\#\{X_i:1\leq i\leq n\}, \end{equation} the number of distinct values appearing in the first $n$ terms. This article focuses on the case in which the sequence $(X_1,X_2,\ldots)$ is \textit{exchangeable}, meaning that its distribution is invariant under finite permutations of the indices. It is a well-known and celebrated result of de Finetti that any infinite exchangeable sequence is a mixture of i.i.d. sequences. We explore ideas related to the following central question: \begin{quote}Given a probability distribution $(a_1,\ldots,a_n)$ on $[n]:=\{1,\ldots,n\}$, is there an infinite exchangeable sequence of random variables $(X_1,X_2,\ldots)$ such that $\PP(K_n=k)=a_k$ for $1\leq k\leq n$?\end{quote} The functional $K_n$ has been studied extensively in the context of the \textit{occupancy problem} as well as other closely related formulations including the birthday problem, the coupon collector's problem, and random partition structures \cite{MR0228020,MR0216548,MR2245368}. Much of the literature pertains to the asymptotic behavior of $K_n$ in the classical version in which the $X_i$ are i.i.d. discrete uniform random variables, as well as the general i.i.d. case. See \cite{MR2318403} for a recent survey with many references. Asymptotics of $K_n$ have also been studied for a random walk $(X_1,X_2,\ldots)$ with stationary increments \cite{MR0388547}, \cite[Section 7.3]{MR2722836}. \\ Let us first consider the problem for small values of $n$. For $n=1$, the random variable $K_1$ is just the constant $1$. Next, it is easy to see that any probability distribution on $\{1,2\}$ can be achieved as the law of $K_2$ for some exchangeable sequence; indeed, for any $a\in[0,1]$, i.i.d. sampling from a distribution with a single atom having weight $\sqrt{a}$ yields $\PP(K_2=1)=a$. However, the problem is not trivial for $n=3$, as evidenced by the following bound due to Jim Pitman (proof in Section~\ref{k3main}). \begin{proposition} \label{ofbound} For $K_3$, the number of distinct values in the first $3$ terms of an infinite exchangeable sequence of random variables $(X_1,X_2,\ldots)$, \begin{equation} \PP(K_3=2)\leq\frac{3}{4}. \end{equation} \end{proposition} Here we present the main open problem and result of this article. Let $\boldsymbol{v}_{n,m}$ denote the law of $K_{n,m}:=K_n(X_{m,1},\ldots,X_{m,n})$ where $X_{m,i}$ are i.i.d. with uniform distribution on $m$ elements, i.e. \begin{equation} \boldsymbol{v}_{n,m}=\big(\PP(K_{n,m}=k):1\leq k\leq n\big) \end{equation} and let $\boldsymbol{v}_{n,\infty}=(0,\ldots,0,1)$, corresponding to the limit case $m=\infty$. Let \begin{equation} V_n:=\{\boldsymbol{v}_{n,m}:m=1,2,\ldots,\infty\} \end{equation} and let $\conv(V_n)$ denote the convex hull of $V_n$. \begin{conjecture} \label{mainconj} For $n\geq 3$, \begin{enumerate} \item[(i)] The set of extreme points of $\conv(V_n)$ is $V_n$. \item[(ii)] The set of possible laws of $K_n$ for an infinite exchangeable sequence $(X_1,X_2,\ldots)$ is $\conv(V_n)$. \end{enumerate} \end{conjecture} \vspace{0.3cm} \begin{theorem} \label{mainthm} Assertions $(i)$ and $(ii)$ are true for $n=3$. \end{theorem} \vspace{0.3cm} The rest of this article is organized as follows. Section \ref{prelims} establishes notation and the fundamentals of our approach.
Section \ref{k3main} covers some properties of the law of $K_3$ leading to a proof of Theorem \ref{mainthm}, and Section \ref{highdim} aims to extend some of these results to $K_n$ for larger $n$. Section \ref{finitesection} considers a variant of the main problem for finite exchangeable sequences by appealing to the framework of exchangeable random partitions, and Section \ref{twoparamsection} explores a remarkable symmetry for $K_3$ in the Ewens-Pitman two-parameter partition model. \section{Preliminaries} \label{prelims} For an i.i.d. sequence $(X_1,X_2,\ldots)$, there is an associated \textit{ranked discrete distribution} $(p_1,p_2,\ldots)$ with $p_1\geq p_2\geq\ldots\geq 0$ and $\sum_{i=1}^\infty p_i\leq 1$, where the $p_i$ are the weights of the atoms for the law of $X_i$ in decreasing order, and $1-\sum_{i=1}^\infty p_i$ is the weight of the continuous component.\\ Consider the set \begin{equation} \nabla_\infty:=\Big\{(p_1,p_2,\ldots):p_1\geq p_2\geq\ldots\geq 0,\sum_{i=1}^\infty p_i\leq 1\Big\}, \end{equation} sometimes referred to as the infinite dimensional \textit{Kingman simplex} as in \cite{MR2596654}. The uniform distribution on $m$ elements corresponds to \begin{equation} \boldsymbol{u}_m:=\Big(\underbrace{\frac{1}{m},\ldots,\frac{1}{m}}_{m\text{ times}},0,0,\ldots\Big)\in\nabla_\infty, \end{equation} and any non-atomic law corresponds to $\boldsymbol{u}_\infty:=(0,0,\ldots)\in\nabla_\infty$. With Conjecture \ref{mainconj} and Theorem \ref{mainthm} in mind, note that \begin{equation} \big\{\boldsymbol{u}_m:m=1,2,\ldots,\infty\big\} \end{equation} is precisely the set of extreme points of $\nabla_\infty$ \cite[Theorem 4.1]{MR954608}. Any $(p_1,p_2,\ldots)\in\nabla_\infty$ has a unique representation as a convex combination of $\boldsymbol{u}_m$, $m=1,2,\ldots,\infty$ given by \begin{equation} (p_1,p_2,\ldots)=p_*\boldsymbol{u}_\infty+\sum_{i=1}^\infty i\,(p_i-p_{i+1})\boldsymbol{u}_i,\qquad p_*=1-\sum_{i=1}^\infty p_i. \end{equation} This is a discrete version of Khintchine's representation theorem for unimodal distributions \cite{khintchine1938unimodal}.\\ It is easy to see that the law of $K_n$ for an i.i.d.\ sequence depends only on the ranked frequencies of the atoms. For example, let \begin{equation} q_{n,i}(p_1,p_2,\ldots):=\PP\big(K_n=i\big) \end{equation} where $K_n=K_n(X_1,\ldots,X_n)$ for i.i.d. $X_i$ with ranked frequencies $(p_1,p_2,\ldots)$. Then for $n=3$, \begin{align} q_{3,1}(p_1,p_2,\ldots)&=\sum_{i=1}^\infty p_i^3 \label{q1}\\ q_{3,2}(p_1,p_2,\ldots)&=\sum_{i=1}^\infty 3p_i^2(1-p_i) \label{q2}\\ q_{3,3}(p_1,p_2,\ldots)&=1-\sum_{i=1}^\infty\big[3p_i^2-2p_i^3\big]. \end{align} For the general exchangeable case, de Finetti's theorem guarantees that the law of $K_n$ for an exchangeable sequence of random variables $(X_1,X_2,\ldots)$ is a convex combination of laws of $K_n$ for i.i.d. sequences. This property allows us to focus on the i.i.d. case and the simplification to ranked discrete distributions. \\ Note that there is an equivalent reformulation of the problem in the setting of exchangeable random partitions; see e.g. \cite{MR2245368} for relevant background on the subject. For an exchangeable random partition $\Pi=(\Pi_n)$ of $\N$, let $K_n$ denote the number of \textit{clusters} in the restriction $\Pi_n$ of $\Pi$ to $[n]$.
Through \textit{Kingman's representation theorem} \cite{MR509954} for exchangeable random partitions of $\N$ in terms of random ranked discrete distributions, the possible laws of $K_n$ in this setting are identical to the possible laws of $K_n$ as defined originally in this paper as the number of distinct values in the first $n$ terms of an exchangeable sequence $(X_1,X_2,\ldots)$. In Sections 5 -- 7, we explore some related problems in the framework of exchangeable random partitions.\\\\\\ \textbf{Notations and conventions.} If a ranked discrete distribution $(p_1,p_2,\ldots)$ has finitely many atoms, i.e. there exists $m$ such that $p_i=0$ for all $i>m$, we call it a \textit{finite} distribution and abbreviate it as $(p_1,\ldots,p_m)$ when convenient. Since all of the functionals that we work with on $\nabla_\infty$ are symmetric functions of the arguments, we understand an equivalence between an unordered discrete distribution $(p_1,p_2,\ldots)$ and its ranked version. Unless otherwise stated, it is implicit in the appearance of $(p_1,p_2,\ldots)$ or $(p_1,\ldots,p_m)$ that the conditions $p_i\geq 0$ and $\sum p_i\leq 1$ hold. \section{Laws of $K_3$} \label{k3main} To simplify notation in this section, let \begin{equation} q_i:=q_{3,i}=\PP(K_3=i) \end{equation} where $q_i$ may be treated as a functional on $\nabla_\infty$. \begin{lemma} \label{merge} For $(p_1,\ldots,p_m)$ with $m\geq 3$ and $p_1\leq\ldots\leq p_m$, \begin{equation} q_2(p_1+p_2,p_3,\ldots,p_m)\geq q_2(p_1,p_2,p_3,\ldots,p_m). \end{equation} \end{lemma} \begin{proof} Let $a=p_1$ and $b=p_2$. We have \begin{equation} q_2(a,b,p_3,\ldots,p_m)=3a^2(1-a)+3b^2(1-b)+\sum_{i=3}^m 3p_i^2(1-p_i) \end{equation} and \begin{equation} q_2(a+b,p_3,\ldots,p_m)=3(a+b)^2(1-a-b)+\sum_{i=3}^m 3p_i^2(1-p_i). \end{equation} Then \begin{align} q_2(a+b,p_3,\ldots,p_m)-q_2(a,b,p_3,\ldots,p_m)&=3(a+b)^2(1-a-b)-3a^2(1-a)-3b^2(1-b)\\ &=6ab(1-a-b)-3a^2b-3ab^2\\ &=3ab(2-3(a+b))\\ &\geq 0 \end{align} since $a$ and $b$ are the two smallest values among $\{a,b,p_3,\ldots,p_m\}$ so $a+b\leq\frac{2}{m}\leq\frac{2}{3}$ for $m\geq 3$.\\ \end{proof} This shows that for any $(p_1,\ldots,p_m)$ with $m\geq 3$, merging the two smallest values among $\{p_1,\ldots,p_m\}$ does not decrease $q_2$.\\ \begin{proof}[Proof of Proposition \ref{ofbound}] By convexity, it suffices to prove the inequality for i.i.d. sequences. Since \begin{equation} q_2(p_1,p_2,\ldots)=\sum_{i=1}^\infty 3p_i^2(1-p_i)=\lim_{m\rightarrow\infty}\sum_{i=1}^m 3p_i^2(1-p_i)=\lim_{m\rightarrow\infty}q_2(p_1,\ldots,p_m), \end{equation} it is enough to establish the inequality $q_2(p_1,\ldots,p_m)\leq\frac{3}{4}$ for finite discrete distributions $(p_1,\ldots,p_m)$. If $m=2$, then $q_2(p_1,p_2)=3p_1^2(1-p_1)+3p_2^2(1-p_2)$ which attains its maximum value of $\frac{3}{4}$ subject to $p_1,p_2\geq 0$ and $p_1+p_2\leq 1$ at $p_1=p_2=\frac{1}{2}$. For $m\geq 3$, repeatedly merging the two smallest values by Lemma \ref{merge} until no more than two nonzero values remain gives $q_2(p_1,\ldots,p_m)\leq q_2(p_1',p_2')$ for some pair $(p_1',p_2')$, and the $m=2$ case then gives $q_2(p_1',p_2')\leq\frac{3}{4}$.\\ \end{proof} Consider the law of $K_3$ for an i.i.d. sequence $(X_1,X_2,\ldots)$ where each $X_i$ has the uniform distribution $\boldsymbol{u}_N:=(\frac{1}{N},\ldots,\frac{1}{N})$. A probability distribution $(q_1,q_2,q_3)$ of $K_3$ (on $\{1,2,3\}$) can be represented by any pair of its coordinates; here we shall work with $(q_1,q_3):=\big(\PP(K_3=1),\PP(K_3=3)\big)$.
Then \begin{align} q_1(\boldsymbol{u}_N)&:=\PP(K_3(\boldsymbol{u}_N)=1)=\frac{1}{N^2}\\ q_3(\boldsymbol{u}_N)&:=\PP(K_3(\boldsymbol{u}_N)=3)=\frac{(N-1)(N-2)}{N^2}. \end{align} The set of points $\{\boldsymbol{v}_N:N\in\N\}=\{(1,0),(\tfrac{1}{4},0),(\tfrac{1}{9},\tfrac{2}{9}),(\tfrac{1}{16},\tfrac{6}{16}),(\tfrac{1}{25},\tfrac{12}{25}),(\tfrac{1}{36},\tfrac{20}{36}),\ldots\}$ where \begin{equation} \label{vndef} \boldsymbol{v}_N:=(q_1(\boldsymbol{u}_N),q_3(\boldsymbol{u}_N))=\Big(\frac{1}{N^2},\frac{(N-1)(N-2)}{N^2}\Big) \end{equation} are shown in Figures \ref{segments} and \ref{map345}, with line segments connecting consecutive points.\\\\ \begin{figure}[h!] \centering \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{segments} \captionsetup{width=0.8\linewidth} \vspace{-1.7cm} \captionof{figure}{Probability distributions of $K_3$ represented as points $(q_1,q_3)=\big(\PP(K_3=1),\PP(K_3=3)\big)$ with $q_1$ horizontal and $q_3$ vertical. Shaded in black is the restricted region specified by Proposition \ref{ofbound}. The gray region is the closed convex hull of $\{\boldsymbol{v}_N:N\in\N\}$ where $\boldsymbol{v}_N$ corresponds to the distribution of $K_3$ for i.i.d. sampling from a discrete uniform distribution on $N$ elements, as defined in \eqref{vndef}.} \label{segments} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{map345} \vspace{-1.7cm} \captionsetup{width=0.81\linewidth} \captionof{figure}{The shaded regions (nested) correspond to the images of $\{(p_1,\ldots,p_m):p_i\geq 0,\sum p_i=1\}$ under the map $(p_1,\ldots,p_m)\mapsto\big(q_1(p_1,\ldots,p_m),q_3(p_1,\ldots,p_m)\big)$ for $m=3$ (dark), $m=4$ (dark and medium), and $m=5$ (dark, medium, and light). The existence of the gap between the left boundary of the dark region and the line segment connecting $\boldsymbol{v}_2$ and $\boldsymbol{v}_3$ is a consequence of Lemma \ref{buniform}.} \label{map345} \end{minipage} \end{figure} The slope of the line connecting $\boldsymbol{v}_N=\big(\tfrac{1}{N^2},\tfrac{(N-1)(N-2)}{N^2}\big)$ and $\boldsymbol{v}_{N+1}=\big(\tfrac{1}{(N+1)^2},\tfrac{N(N-1)}{(N+1)^2}\big)$ is \begin{equation} \label{Nslope} \frac{\frac{N(N-1)}{(N+1)^2}-\frac{(N-1)(N-2)}{N^2}}{\frac{1}{(N+1)^2}-\frac{1}{N^2}}=-\frac{(N-1)(3N+2)}{2N+1}; \end{equation} this is strictly decreasing in $N$, so the points $\boldsymbol{v}_N$ are in convex position and each is a vertex of their convex hull, which proves Theorem \ref{mainthm}(i). The equation of the $N$th line is given by \begin{equation} q_3-\frac{(N-1)(N-2)}{N^2}=-\frac{(N-1)(3N+2)}{2N+1}\bigg(q_1-\frac{1}{N^2}\bigg) \end{equation} or after rearranging, \begin{equation} \label{lineN} q_3+\frac{(N-1)(3N+2)}{2N+1}q_1=\frac{2N-2}{2N+1}. \end{equation} For $\boldsymbol{p}=(p_1,\ldots,p_m)$, define according to the left-hand side of \eqref{lineN} the functional \begin{equation} L_N(\boldsymbol{p}):=q_3(\boldsymbol{p})+\frac{(N-1)(3N+2)}{2N+1}q_1(\boldsymbol{p}) \end{equation} which may be reexpressed as \begin{align} L_N(\boldsymbol{p})&=1-\big(1-L_N(\boldsymbol{p})\big)\\ &=1-\bigg(1-q_3(\boldsymbol{p})-q_1(\boldsymbol{p})-\bigg[\frac{(N-1)(3N+2)}{2N+1}-1\bigg]q_1(\boldsymbol{p})\bigg)\\ &=1-q_2(\boldsymbol{p})+\frac{3(N^2-N-1)}{2N+1}q_1(\boldsymbol{p})\\ &=1-\sum_{i=1}^m 3p_i^2(1-p_i)+\frac{3(N^2-N-1)}{2N+1}\sum_{i=1}^m p_i^3\\ &=1-3\sum_{i=1}^m p_i^2+\frac{3N(N+1)}{2N+1}\sum_{i=1}^m p_i^3. \end{align} Define \begin{equation} \label{fformula} f(N):=\frac{3N(N+1)}{2N+1} \end{equation} so \begin{equation} L_N(\boldsymbol{p})=1-3\sum_{i=1}^m p_i^2+f(N)\sum_{i=1}^m p_i^3.
\end{equation} To better understand the sequence of values $f(N)$, note that $f$ is increasing and \begin{equation} N<\frac{2N+2}{2N+1}(N)=\frac{2}{3}\cdot\underbrace{\frac{3N(N+1)}{2N+1}}_{f(N)}=\frac{2N}{2N+1}(N+1)<N+1. \end{equation} The first few values are $f(1)=2$, $f(2)=\frac{18}{5}$, $f(3)=\frac{36}{7}$, $f(4)=\frac{60}{9}$. \begin{lemma} \label{linebounds} For $N\geq 1$ and any $\boldsymbol{p}=(p_1,\ldots,p_m)$ with $p_1\geq\ldots\geq p_m\geq 0$ and $\sum p_i\leq 1$, \begin{equation} L_N(\boldsymbol{p})\geq\frac{2N-2}{2N+1}. \end{equation} \end{lemma} \vspace{.5cm} Geometrically, Lemma \ref{linebounds} asserts that for any $\boldsymbol{p}=(p_1,\ldots,p_m)$, the point $\big(q_1(\boldsymbol{p}),q_3(\boldsymbol{p})\big)$ lies on or above each of the lines connecting $\boldsymbol{v}_N$ and $\boldsymbol{v}_{N+1}$ for $N\in\N$. It will be shown in the proof that for $N\geq 2$, $L_N(\boldsymbol{p})=\frac{2N-2}{2N+1}$ if and only if $\boldsymbol{p}=\boldsymbol{u}_N$ or $\boldsymbol{p}=\boldsymbol{u}_{N+1}$; as for $N=1$, $L_1(\boldsymbol{p})=q_3(\boldsymbol{p})=0$ is attained if and only if $\boldsymbol{p}=(p_1,p_2)$ with $p_1+p_2=1$. \\ \indent The strategy for proving Lemma \ref{linebounds} is to show that $L_N$ is minimized at precisely $\boldsymbol{u}_N$ and $\boldsymbol{u}_{N+1}$ by reducing the domain of minimization in stages, first to $(p_1,\ldots,p_m)$ with $\sum p_i=1$, then to the uniform distributions, and finally to $\boldsymbol{u}_N$ and $\boldsymbol{u}_{N+1}$. The key to the proof is the following \textit{merging} lemma, which generalizes Lemma \ref{merge}. \begin{lemma} \label{genmerge} For $N\geq 1$ and $(p_1,\ldots,p_m)$ with $m\geq 2$, \begin{equation} L_N(p_1+p_2,p_3,\ldots,p_m)-L_N(p_1,p_2,p_3,\ldots,p_m)=3p_1p_2\big[(p_1+p_2)f(N)-2\big] \end{equation} which is positive, negative, or zero according to the sign of $p_1+p_2-\tfrac{2}{f(N)}$. \end{lemma} \begin{proof} Let $a=p_1$ and $b=p_2$. We have \begin{equation} L_N(a,b,p_3,\ldots,p_m)=1-3a^2-3b^2-3\sum_{i=3}^m p_i^2+f(N)(a^3+b^3)+f(N)\sum_{i=3}^m p_i^3 \end{equation} and \begin{equation} L_N(a+b,p_3,\ldots,p_m)=1-3(a+b)^2-3\sum_{i=3}^m p_i^2+f(N)(a+b)^3+f(N)\sum_{i=3}^m p_i^3. \end{equation} Then \begin{align} L_N(a+b,p_3,\ldots,p_m)-L_N(a,b,p_3,\ldots,p_m)&=-6ab+f(N)(3a^2b+3ab^2)\\ &=3ab\big[(a+b)f(N)-2\big]. \end{align} \end{proof} The proof of Lemma \ref{linebounds} is organized according to the following lemmas. \begin{lemma} \label{pdbest} Let $\mathcal{P}$ denote the set of all finite ranked discrete distributions, and let $\mathcal{P}^1$ denote the set of finite ranked discrete distributions $(p_1,\ldots,p_m)$ with $\sum p_i=1$. Then for any $N\geq 1$, we have the equality of sets \begin{equation} \argmin_{\boldsymbol{p}\in\mathcal{P}}L_N(\boldsymbol{p})=\argmin_{\boldsymbol{p}\in\mathcal{P}^1}L_N(\boldsymbol{p}). \end{equation} \end{lemma} \begin{proof} Let $\boldsymbol{p}_0=(p_1,\ldots,p_m)\in\mathcal{P}$ such that $\sum_{i=1}^m p_i<1$. Let $\varepsilon$ satisfy $0<\varepsilon<\min\{\frac{3}{f(N)},1-\sum_{i=1}^m p_i\}$. Then \begin{align} L_N(\varepsilon,p_1,\ldots,p_m)&=-3\varepsilon^2+f(N)\varepsilon^3+L_N(p_1,\ldots,p_m)\\ &=\varepsilon^2(f(N)\varepsilon-3)+L_N(p_1,\ldots,p_m)\\ &<L_N(p_1,\ldots,p_m).
\end{align} This shows that if $\boldsymbol{p}_0\notin\mathcal{P}^1$, then $\boldsymbol{p}_0\notin\argmin_{\boldsymbol{p}\in\mathcal{P}}L_N(\boldsymbol{p})$.\\ \end{proof} \begin{lemma} \label{onlyunif} Let $\mathcal{P}^1$ denote the set of finite ranked discrete distributions $(p_1,\ldots,p_m)$ with $\sum p_i=1$, and let $\mathcal{U}:=\big\{\boldsymbol{u}_m:m\in\N\}$. Then for $N\geq 2$, we have the equality of sets \begin{equation} \argmin_{\boldsymbol{p}\in\mathcal{P}^1}L_N(\boldsymbol{p})=\argmin_{\boldsymbol{p}\in\mathcal{U}} L_N(\boldsymbol{p}) \end{equation} \end{lemma} \begin{proof} Let $\boldsymbol{p}_0=(p_1,\ldots,p_m)$, not necessarily ranked, such that $\sum_{i=1}^m p_i=1$. Suppose $\boldsymbol{p}_0$ has a pair of distinct nonzero values, say $a=p_1$ and $b=p_2$ with $a,b>0$ and $a\neq b$. Consider the three cases as designated in Lemma \ref{genmerge}, noting that $\tfrac{2}{f(N)}<1$ for $N\geq 2$. \\\\ \indent (i)\ \ If $a+b<\frac{2}{f(N)}$, then $L_N(a+b,p_3,\ldots,p_m)<L_N(a,b,p_3,\ldots,p_m)$ by Lemma \ref{genmerge}.\\ \indent (ii) If $a+b>\frac{2}{f(N)}$, \begin{align} L_N(\tfrac{a+b}{2}&,\tfrac{a+b}{2},p_3,\ldots,p_m)-L_N(a,b,p_3,\ldots,p_m)\\ &=\big(L_N(a+b,p_3,\ldots,p_m)-L_N(a,b,p_3,\ldots,p_m)\big)\\ &\indent-\big(L_N(a+b,p_3,\ldots,p_m)-L_N(\tfrac{a+b}{2},\tfrac{a+b}{2},p_3,\ldots,p_m)\big)\nonumber\\ &=3ab\big((a+b)f(N)-2\big)-3(\tfrac{a+b}{2})^2\big((a+b)f(N)-2\big)\\ &=3\big(ab-(\tfrac{a+b}{2})^2\big)\big((a+b)f(N)-2\big) \end{align} \indent\indent which is negative since $ab-(\frac{a+b}{2})^2<0$ and $(a+b)f(N)-2>0$.\\ \indent (iii) If $a+b=\frac{2}{f(N)}<1$, then there must exist a third nonzero value, say $p_3=c>0$. If \newline\indent\indent $c=\frac{2}{f(N)}$, then $a\neq c$ and $a+c>\frac{2}{f(N)}$ so $L_N(\frac{a+c}{2},\frac{a+c}{2},b,p_4,\ldots,p_m)<L_N(a,b,c,p_4,\ldots,p_m)$ \newline\indent\indent by case (ii). If $c\neq\frac{2}{f(N)}$, then by merging $a$ and $b$, which does not change $L_N$, and then \newline\indent\indent subsequently averaging $a+b$ and $c$ gives $L_N(\frac{a+b+c}{2},\frac{a+b+c}{2},p_4,\ldots,p_m)<L_N(a,b,c,p_4,\ldots,p_m)$\newline\indent\indent by case (ii) again.\\\\ Since permuting values in any discrete distribution does not change $L_N$, the analysis above holds for all ranked discrete distributions and thus shows that among $\boldsymbol{p}\in\mathcal{P}^1$, $L_N$ cannot be minimized at any $\boldsymbol{p}$ with a pair of distinct nonzero values, i.e. any non-uniform distribution.\\ \end{proof} \textbf{Remark.} As mentioned previously, for $N=1$, $$\argmin_{\boldsymbol{p}\in\mathcal{P}}L_1(\boldsymbol{p})=\{(p_1,p_2):p_1\geq p_2\geq 0, p_1+p_2=1\}$$ which differs from the general case $N\geq 2$. The reason the proof of Lemma \ref{onlyunif} fails for $N=1$ is that $f(1)=2$, so $\tfrac{2}{f(1)}=1$ and case (iii) of the proof breaks down.\\ \begin{lemma} \label{buniform} Let $\mathcal{U}:=\{\boldsymbol{u}_m:m\in\N\}$. Then for $N\geq 1$, \begin{equation} \argmin_{\boldsymbol{p}\in\mathcal{U}}L_N(\boldsymbol{p})=\{\boldsymbol{u}_N,\boldsymbol{u}_{N+1}\} \end{equation} \end{lemma} \begin{proof} The claim is obvious based on Figure \ref{segments}, which shows that the slopes between $\boldsymbol{v}_N$ and $\boldsymbol{v}_{N+1}$ for $N\in\N$ are decreasing in $N$. 
Indeed, the slope of the $N$th line segment is computed in \eqref{Nslope} as \begin{equation} -\frac{(N-1)(3N+2)}{2N+1}=-\frac{3N^2-N-2}{2N+1}=2-\frac{3N(N+1)}{2N+1}=2-f(N) \end{equation} which is decreasing in $N$.\\ \end{proof} \begin{proof}[Proof of Lemma \ref{linebounds}] The claim holds trivially for $N=1$. For $N\geq 2$, applying Lemmas \ref{pdbest}, \ref{onlyunif}, and \ref{buniform} yields \begin{equation} \argmin_{\boldsymbol{p}\in\mathcal{P}}L_N(\boldsymbol{p})=\argmin_{\boldsymbol{p}\in\mathcal{P}^1}L_N(\boldsymbol{p})=\argmin_{\boldsymbol{p}\in\mathcal{U}} L_N(\boldsymbol{p})=\{\boldsymbol{u}_N,\boldsymbol{u}_{N+1}\} \end{equation} and therefore for any $\boldsymbol{p}=(p_1,\ldots,p_m)$ with $p_i\geq 0$ and $\sum p_i\leq 1$, \begin{equation} L_N(\boldsymbol{p})\geq L_N(\boldsymbol{u}_N)=L_N(\boldsymbol{u}_{N+1})=\frac{2N-2}{2N+1}. \end{equation} \end{proof} \begin{proof}[Proof of Theorem \ref{mainthm}] Part (i) was proven earlier by the slope computation \eqref{Nslope} and illustrated in Figure \ref{segments}. For part (ii), Lemma \ref{linebounds} asserts that $(q_1(\boldsymbol{p}),q_2(\boldsymbol{p}),q_3(\boldsymbol{p}))\in \conv(V_3)$ for any finite ranked discrete distribution $\boldsymbol{p}$. Extension to infinite discrete distributions $(p_1,p_2,\ldots)$ follows because $\lim_{m\rightarrow\infty}q_i(p_1,\ldots,p_m)=q_i(p_1,p_2,\ldots)$, and then extension to exchangeable sequences holds by convexity.\\ \end{proof} \section{Higher dimensions} \label{highdim} This section aims to extend some of the results in the previous section to $K_n$ for larger $n$. Here $q_{n,i}:=\PP(K_n=i)$. We begin by generalizing Lemma \ref{merge} and Proposition \ref{ofbound}. \begin{lemma} \label{nmerge} For $n\geq 3$ and $(p_1,\ldots,p_m)$ with $m\geq 3$, $\sum_{i=1}^m p_i=1$, $p_1\leq\ldots\leq p_m$, \begin{equation} q_{n,2}(p_1+p_2,p_3,\ldots,p_m)\geq q_{n,2}(p_1,p_2,p_3,\ldots,p_m). \end{equation} \end{lemma} \vspace{0.5cm} The proof requires the following inequality: \begin{lemma} \label{binomineq} For $a,b>0$ and $n\geq 2$, \begin{equation} 4\big(\tfrac{n-1}{n}\big)ab(a+b)^{n-2}\leq(a+b)^n-a^n-b^n\leq nab(a+b)^{n-2} \end{equation} \end{lemma} \begin{proof} We have \begin{equation} (a+b)^n-a^n-b^n=\sum_{k=1}^{n-1}\binom{n}{k}a^{k}b^{n-k}=ab\sum_{k=0}^{n-2}\binom{n}{k+1}a^kb^{n-2-k}. \label{shiftb} \end{equation} Observe that \begin{equation} \binom{n}{k+1}=\frac{n(n-1)(n-2)!}{(k+1)k!(n-k-1)(n-k-2)!}=\frac{n(n-1)}{(k+1)(n-k-1)}\binom{n-2}{k};\\ \end{equation} the denominator $(k+1)(n-k-1)$ is no greater than $(n/2)^2$, and is minimized at $k=0$ and $k=n-2$, so \begin{equation} \label{nlb} \binom{n}{k+1}\geq\frac{n(n-1)}{(n/2)^2}\binom{n-2}{k}=4\frac{n-1}{n}\binom{n-2}{k} \end{equation} and \begin{equation} \label{nub} \binom{n}{k+1}\leq n\binom{n-2}{k}. \end{equation} The result follows by substituting inequalities \eqref{nlb} and \eqref{nub} into \eqref{shiftb} and appealing to the binomial theorem.\\ \end{proof} \begin{proof}[Proof of Lemma \ref{nmerge}] Let $a=p_1$ and $b=p_2$. We can compute $$q_{n,2}(a,b,p_3,\ldots,p_m)=\PP\big(K_n(a,b,p_3,\ldots,p_m)=2\big)$$ by conditioning on the appearance of the first two values: \begin{equation} q_{n,2}(a,b,p_3,\ldots,p_m)=\sum_{k=1}^{n-1}\binom{n}{k}a^k b^{n-k}+\sum_{k=1}^{n-1}\binom{n}{k}a^k\sum_{i=3}^m p_i^{n-k}+\sum_{k=1}^{n-1}\binom{n}{k}b^k\sum_{i=3}^m p_i^{n-k}+\sum_{3\leq i<j\leq m}\sum_{k=1}^{n-1}\binom{n}{k}p_i^kp_j^{n-k}. 
\end{equation} Note that the first term, which is an expression for the probability that the first two values both appear and are the only ones to appear in the first $n$ observations, is also equal to $(a+b)^n-a^n-b^n$. Similarly, \begin{equation} q_{n,2}(a+b,p_3,\ldots,p_m)=\sum_{k=1}^{n-1}\binom{n}{k}(a+b)^k\sum_{i=3}^m p_i^{n-k}+\sum_{3\leq i<j\leq m}\sum_{k=1}^{n-1}\binom{n}{k}p_i^kp_j^{n-k}. \end{equation} For $m\geq 3$, the difference after appropriate cancellations and then applying Lemma \ref{binomineq} is \begin{align} q_{n,2}&(a+b,p_3,\ldots,p_m)-q_{n,2}(a,b,p_3,\ldots,p_m)=\sum_{k=1}^{n-1}\binom{n}{k}\big[(a+b)^k-a^k-b^k\big]\sum_{i=3}^m p_i^{n-k}-\sum_{k=1}^{n-1}\binom{n}{k}a^k b^{n-k}\\ &=\underbrace{\sum_{k=1}^{n-2}\binom{n}{k}\big[(a+b)^k-a^k-b^k\big]\sum_{i=3}^m p_i^{n-k}}_{\geq 0}+n\underbrace{\big[(a+b)^{n-1}-a^{n-1}-b^{n-1}\big]}_{\geq 4(\frac{n-2}{n-1})ab(a+b)^{n-3}\geq 2ab(a+b)^{n-3}}\sum_{i=3}^m p_i-\underbrace{\big[(a+b)^n-a^n-b^n\big]}_{\leq nab(a+b)^{n-2}}\\ &\geq nab(a+b)^{n-3}\Big[2\sum_{i=3}^m p_i-(a+b)\Big]. \end{align} Since $\sum_{i=1}^m p_i=1$ and $a\leq b\leq p_3\leq\ldots\leq p_m$, it follows that $\sum_{i=3}^m p_i\geq \frac{m-2}{m}$ and $a+b\leq\frac{2}{m}$, so \begin{equation} 2\sum_{i=3}^m p_i-(a+b)\geq 2\Big(\frac{m-2}{m}\Big)-\frac{2}{m}=\frac{2(m-3)}{m}\geq 0 \end{equation} and therefore merging the two smallest values among $\{p_1,\ldots,p_m\}$ does not decrease $q_{n,2}$ provided that there are at least 3 nonzero values. \end{proof} \begin{lemma} \label{fullbetter} For any $(p_1,\ldots,p_m)$ and $n\geq 3$, \begin{equation} q_{n,2}(p_1,\ldots,p_m,p_*)\geq q_{n,2}(p_1,\ldots,p_m) \end{equation} where $p_*:=1-\sum_{i=1}^m p_i$. \end{lemma} \begin{proof} We have \begin{equation} q_{n,2}(p_1,\ldots,p_m)=\sum_{1\leq i<j\leq m}\sum_{k=1}^{n-1}\binom{n}{k}p_i^kp_j^{n-k}+\sum_{i=1}^m np_i^{n-1}p_* \end{equation} and \begin{equation} q_{n,2}(p_1,\ldots,p_m,p_*)=\sum_{1\leq i<j\leq m}\sum_{k=1}^{n-1}\binom{n}{k}p_i^kp_j^{n-k}+\sum_{i=1}^m\sum_{k=1}^{n-1}\binom{n}{k}p_i^{k}p_*^{n-k}, \end{equation} so \begin{equation} q_{n,2}(p_1,\ldots,p_m,p_*)-q_{n,2}(p_1,\ldots,p_m)=\sum_{i=1}^m\sum_{k=1}^{n-2}\binom{n}{k}p_i^kp_*^{n-k}\geq 0. \end{equation} \end{proof} \begin{theorem} \label{n2bound} For any exchangeable sequence of random variables $(X_1,X_2,\ldots)$ and any $n\geq 3$, \begin{equation} \PP(K_n=2)\leq1-2^{-(n-1)}. \end{equation} \end{theorem} \begin{proof} As in the proof of Proposition \ref{ofbound}, it suffices to show that $q_{n,2}(p_1,\ldots,p_m)\leq 1-2^{-(n-1)}$ for any $(p_1,\ldots,p_m)$. If $m=2$ and $p_1+p_2=1$, then \begin{equation} q_{n,2}(p_1,p_2)=1-p_1^n-p_2^n \end{equation} which attains its maximum of $1-2^{-(n-1)}$ at $p_1=p_2=\tfrac{1}{2}$. For $m\geq 3$, adding the atom $p_*$ by Lemma \ref{fullbetter} and then repeatedly merging the two smallest values by Lemma \ref{nmerge} until only two nonzero values $p_1',p_2'$ remain gives \begin{equation} q_{n,2}(p_1,\ldots,p_m)\leq q_{n,2}(p_1,\ldots,p_m,p_*)\leq q_{n,2}(p_1',p_2')\leq q_{n,2}\big(\tfrac{1}{2},\tfrac{1}{2}\big)=1-2^{-(n-1)}. \end{equation} \end{proof} The difficulty in extending the proof of Theorem \ref{mainthm}(ii) to apply to Conjecture \ref{mainconj}(ii) is that there is no simple generalization of Lemma \ref{genmerge} to higher dimensions. Lemma \ref{genmerge} is essential because it asserts that whether merging two values in a discrete distribution increases, decreases, or preserves the functionals $L_N$ is determined only by the sum of the two values to be merged.
The corresponding functionals for the higher dimensional problem are more complicated and do not have the same convenient property.\\ Still, a first step would be to verify Conjecture \ref{mainconj}(i), that the set of extreme points of $\conv(V_n)$ is precisely $V_n$. Observe that \begin{equation} \boldsymbol{v}_{n,m}=\Big(\frac{S(n,k)(m)_{k\downarrow}}{m^n}:1\leq k\leq n\Big) \end{equation} where $S(n,k)$ denotes a Stirling number of the second kind, and $(m)_{k\downarrow}$ is the falling factorial \begin{equation} (m)_{k\downarrow}:=m(m-1)\cdots(m-k+1)=\frac{m!}{(m-k)!}. \end{equation} The claim that the $\boldsymbol{v}_{n,m}$, $m=1,2,\ldots,\infty$, are the extreme points of an $(n-1)$-dimensional convex body in $\R^n$ does not seem to be recorded anywhere in the vast literature on Stirling numbers. We have verified computationally using SciPy's spatial module that $\{\boldsymbol{v}_{n,m}:1\leq m\leq 30\}$ is the set of extreme points of its own convex hull for $n\leq 7$, but numerical precision becomes an issue for larger values of $m$ and $n$. \section{Finite exchangeable sequences} \label{finitesection} In this section, we consider the distribution of $K_n$ for a \textit{finite exchangeable sequence} $(X_1,\ldots,X_m)$ with $m\geq n$. Note the deviation from the original problem: the first $m$ terms of an infinite exchangeable sequence always form a finite exchangeable sequence, but a finite exchangeable sequence need not embed into an infinite one, nor even into a longer finite one. Therefore, the sets of possible laws of $K_n$ for finite exchangeable sequences $(X_1,\ldots,X_m)$, $m\geq n$, form a decreasing nested family, all of whose members contain the corresponding set for infinite exchangeable sequences. To analyze this problem, we shift to the framework of \textit{exchangeable random partitions}, for which we provide some background below. \\\\ A \textit{partition} of $[m]:=\{1,\ldots,m\}$ is an unordered collection of disjoint non-empty subsets $\{A_i\}$ of $[m]$ with $\bigcup_i A_i=[m]$. The $A_i$ are called the \textit{clusters} of the partition. The \textit{restriction} of a partition $\{A_i\}$ of $[m]$ to $[n]$ where $n<m$ is the partition of $[n]$ whose clusters are the nonempty members of $\{A_i\cap[n]\}$.\\\\ Any infinite sequence of random variables $(X_1,X_2,\ldots)$ induces a random partition of $\N$ according to the relation $i\sim j$ if and only if $X_i=X_j$. More precisely, a random partition $\Pi$ of $\N$ is a sequence $(\Pi_m)$ where for each $m$, $\Pi_m$ is a random partition of $[m]$, and for $n<m$, the restriction of $\Pi_m$ to $[n]$ is $\Pi_n$. For the random partition $\Pi$ of $\N$ induced by a sequence $(X_1,X_2,\ldots)$, the clusters of $\Pi_m$ are the indices associated to each distinct value among $\{X_1,\ldots,X_m\}$. For example, if $$(X_1(\omega),X_2(\omega),\ldots)=(7,6,7,8,8,7\ldots),$$ then $$\Pi_1(\omega)=\{\{1\}\},\qquad \Pi_2(\omega)=\{\{1\},\{2\}\},\qquad \Pi_3(\omega)=\{\{1,3\},\{2\}\},$$ $$\Pi_4(\omega)=\{\{1,3\},\{2\},\{4\}\},\qquad \Pi_5(\omega)=\{\{1,3\},\{2\},\{4,5\}\},\qquad \Pi_6(\omega)=\{\{1,3,6\},\{2\},\{4,5\}\}.$$\\ Observe that $K_n$ as previously defined for a sequence $(X_1,X_2,\ldots)$ counts the number of clusters of $\Pi_n$ for the associated partition $\Pi$. When $(X_1,X_2,\ldots)$ is exchangeable, it induces an \textit{exchangeable random partition} $\Pi$ of $\N$, meaning that for each $m$, the distribution of $\Pi_m$ is invariant under any deterministic permutation of $[m]$.
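As a small illustrative aside (ours, not taken from the references), the following sketch carries out this construction explicitly: it computes the partition of $[n]$ induced by the first $n$ terms of a sequence and reports $K_n$; applied to the example sequence above, it reproduces $\Pi_1,\ldots,\Pi_6$.
\begin{verbatim}
def induced_partition(xs):
    # Partition induced by i ~ j iff X_i = X_j (indices are 1-based).
    clusters = {}
    for i, x in enumerate(xs, start=1):
        clusters.setdefault(x, []).append(i)
    return sorted(clusters.values())

seq = [7, 6, 7, 8, 8, 7]
for n in range(1, len(seq) + 1):
    pi_n = induced_partition(seq[:n])
    # e.g. n = 6 prints [[1, 3, 6], [2], [4, 5]] with K_n = 3
    print(n, pi_n, "K_n =", len(pi_n))
\end{verbatim}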
In this scenario, associated to $\Pi$ is a function $p$ defined for all finite sequences of positive integers such that for any $m$ and any partition $\{A_1,\ldots,A_k\}$ of $[m]$, \begin{equation} \PP(\Pi_m=\{A_1,\ldots,A_k\})=p(\lvert A_1\rvert,\ldots,\lvert A_k\rvert). \end{equation} Here $p$ is called the \textit{exchangeable partition probability function (EPPF)} associated to $\Pi$. A consequence of exchangeability is that the EPPF is a symmetric function of its arguments. The probability mass function for $K_n$ can therefore be expressed in terms of the EPPF as \begin{equation} \label{cpcombo} \PP(K_n=k)=\sum_{\substack{n_1+\ldots+n_k=n\\ n_1\geq\ldots\geq n_k\geq 1}}C(n_1,\ldots,n_k)p(n_1,\ldots,n_k) \end{equation} where \begin{equation} \label{cformula} C(n_1,\ldots,n_k):=\frac{n!}{\prod_{j=1}^n(j!)^{s_j}s_j!},\qquad s_j=s_j(n_1,\ldots,n_k):=\#\{i:n_i=j\} \end{equation} counts the number of partitions of $[n]$ whose cluster sizes in descending order are given by $n_1,\ldots,n_k$. Furthermore, the EPPF $p$ must satisfy the following \textit{consistency} relation: \begin{equation} \label{eppfrec} p(n_1,\ldots,n_k)=p(n_1,\ldots,n_k,1)+\sum_{i=1}^k p(n_1,\ldots,\ n_i+1\ ,\ldots,n_k). \end{equation} Recast in this alternative framework, the goal of this section is to understand the possible distributions of $K_n=K_n(\Pi_m)$ for an exchangeable random partition $\Pi_m$ of $[m]$ for $m\geq n$, meaning the number of clusters of the restriction $\Pi_{m\downarrow n}$ of $\Pi_m$ to $[n]$. A consequence of the exchangeability of $\Pi_m$ is that $\Pi_{m\downarrow n}$ is an exchangeable random partition of $[n]$, whose EPPF is the unique extension of the EPPF for $\Pi_m$ to positive integer compositions of $n$ according to the consistency relations \eqref{eppfrec}. Note that for $m=n$, $K_n(\Pi_n)$ can have any general probability distribution on $[n]$: for example, given such a probability distribution $(a_1,\ldots,a_n)$, define an EPPF according to \begin{equation} p(n-k+1,\underbrace{1,\ldots,1}_{k-1\text{ singletons}})=\frac{a_k}{\binom{n}{k-1}},\qquad k=1,\ldots,n \end{equation} where the rest of the values are either 0 or specified by symmetry. By construction, $p$ corresponds to an exchangeable random partition of $[n]$ such that $\PP(K_n=k)=a_k$ for $1\leq k\leq n$. However, for $m>n$, the consistency relations \eqref{eppfrec} must be satisfied, so it is not immediately clear given $n$ and $m>n$ what restrictions there are on the distribution of $K_n$, if any. \begin{proposition} \label{sharpnnm1} For $n\geq 3$, we have the sharp bound \begin{equation} \PP(K_n(\Pi_{n+1})=n-1)\leq\frac{\max\{4,n-1\}}{n+1}. \end{equation} \end{proposition} \begin{proof} We have \begin{equation} \label{nnm1} \PP(K_n=n-1)=\binom{n}{2}p(2,1^{n-2})=\binom{n}{2}\big[p(3,1^{n-2})+(n-2)p(2,2,1^{n-3})+p(2,1^{n-1})\big]. \end{equation} We consider the appearance of each of the three terms $p(3,1^{n-2})$, $p(2,2,1^{n-3})$, and $p(2,1^{n-1})$ in the expansion \eqref{eppfrec} of $p(n_1,\ldots,n_k)$ for $(n_1,\ldots,n_k)$ with $\sum_{i=1}^k n_i=n$ and $n_1\geq\ldots\geq n_k\geq 1$. \begin{itemize} \item $p(3,1^{n-2})$ appears in the expansion of only $p(2,1^{n-2})$ with coefficient $1$ and $p(3,1^{n-3})$ with coefficient 1. $p(3,1^{n-3})$ appears in the expansion of $\PP(K_n=n-2)$ according to \eqref{cpcombo} with coefficient $C(3,1^{n-3})=\binom{n}{3}$. \item $p(2,2,1^{n-3})$ appears in the expansion of only $p(2,1^{n-2})$ with coefficient $n-2$ and $p(2,2,1^{n-4})$ with coefficient 1.
$p(2,2,1^{n-4})$ appears in the expansion of $\PP(K_n=n-2)$ according to \eqref{cpcombo} with coefficient $C(2,2,1^{n-4})=3\binom{n}{4}$. \item $p(2,1^{n-1})$ appears in the expansion of only $p(2,1^{n-2})$ with coefficient $1$ and $p(1^{n})$ with coefficient $n$. $p(1^n)$ appears in the expansion of $\PP(K_n=n)$ with coefficient $C(1^n)=1$. \end{itemize} Hence the problem reduces to maximizing \eqref{nnm1} subject to the linear constraint \begin{equation} \bigg[\binom{n}{2}+\binom{n}{3}\bigg]p(3,1^{n-2})+\bigg[\binom{n}{2}(n-2)+3\binom{n}{4}\bigg]p(2,2,1^{n-3})+\bigg[\binom{n}{2}+n\bigg]p(2,1^{n-1})\leq 1. \end{equation} The maximum value of \eqref{nnm1} is evidently equal to \begin{equation} \max\bigg\{\frac{\binom{n}{2}}{\binom{n}{2}+\binom{n}{3}},\frac{\binom{n}{2}(n-2)}{\binom{n}{2}(n-2)+3\binom{n}{4}},\frac{\binom{n}{2}}{\binom{n}{2}+n}\bigg\}, \end{equation} and simplifying each of the three expressions yields \begin{equation} \max\Big\{\frac{3}{n+1},\frac{4}{n+1},\frac{n-1}{n+1}\Big\}=\frac{\max\{4,n-1\}}{n+1}. \end{equation} \end{proof} It follows from Proposition \ref{sharpnnm1} that for $n=3$, there are no restrictions on the distribution of $K_3(\Pi_4)$ on $\{1,2,3\}$. The corresponding claim cannot be made for $n\geq 4$, as $\PP(K_4(\Pi_5)=3)\leq\frac{4}{5}$ and $\PP(K_n(\Pi_{n+1})=n-1)\leq\frac{n-1}{n+1}$ for $n\geq 5$.\\ The remainder of the section will focus on $K_3(\Pi_n)$ for $n\geq 3$. Intuitively, as $n\rightarrow\infty$, the set of probability distributions of $K_3(\Pi_n)$ should tend to the corresponding set for $K_3(\Pi)$ for exchangeable random partitions $\Pi$ of $\N$, which was explicitly characterized in Section~\ref{k3main}. We proceed by fixing $n\geq 3$, and as before, consider the parameterization $q_1=\PP(K_3(\Pi_n)=1)$ and $q_3=\PP(K_3(\Pi_n)=3)$. By repeated application of \eqref{eppfrec}, $q_1$ and $q_3$ may be written in terms of the EPPF as \begin{equation} q_1=p(3)=\sum_{\substack{1\leq k\leq n\\n_1+\ldots+n_k=n\\ n_1\geq\ldots\geq n_k\geq 1}}A(n_1,\ldots,n_k)p(n_1,\ldots,n_k) \end{equation} and \begin{equation} q_3=p(1,1,1)=\sum_{\substack{1\leq k\leq n\\ n_1+\ldots+n_k=n\\ n_1\geq\ldots\geq n_k\geq 1}}B(n_1,\ldots,n_k)p(n_1,\ldots,n_k) \end{equation} for uniquely defined nonnegative integer coefficients $A(n_1,\ldots,n_k)$ and $B(n_1,\ldots,n_k)$. The problem is to describe the set of points $(q_1,q_3)$ arising in this manner subject to \begin{equation} \sum_{\substack{1\leq k\leq n\\ n_1+\ldots+n_k=n\\ n_1\geq\ldots\geq n_k\geq 1}}C(n_1,\ldots,n_k)p(n_1,\ldots,n_k)=1 \end{equation} where $C(n_1,\ldots,n_k)$ is as defined in \eqref{cformula}. Observe that, in vector notation, \begin{align} (q_1,q_3)&=\Big(\sum A(n_1,\ldots,n_k)p(n_1,\ldots,n_k), \sum B(n_1,\ldots,n_k)p(n_1,\ldots,n_k)\Big)\\ &=\sum C(n_1,\ldots,n_k)p(n_1,\ldots,n_k)\Big(\tfrac{A(n_1,\ldots,n_k)}{C(n_1,\ldots,n_k)},\tfrac{B(n_1,\ldots,n_k)}{C(n_1,\ldots,n_k)}\Big). \end{align} This shows that any $(q_1,q_3)$ is a convex combination of points of the form $\big(\frac{A(\boldsymbol{n})}{C(\boldsymbol{n})},\frac{B(\boldsymbol{n})}{C(\boldsymbol{n})}\big)$, and thus the set of probability distributions of $K_3(\Pi_n)$ over all exchangeable random partitions $\Pi_n$ of $[n]$, expressed in the parametrization $(q_1,q_3)$, is the convex hull of the finite set of points \begin{equation} S_n:=\Big\{\Big(\tfrac{A(n_1,\ldots,n_k)}{C(n_1,\ldots,n_k)},\tfrac{B(n_1,\ldots,n_k)}{C(n_1,\ldots,n_k)}\Big):1\leq k\leq n, n_1+\ldots+n_k=n,n_1\geq\ldots\geq n_k\geq 1\Big\}.
\end{equation} \vspace{-1cm} \begin{figure}[h!] \centering \centering \includegraphics[width=0.7\linewidth]{4_5_7_12_19_41} \vspace{-1.7cm} \captionsetup{width=0.5\linewidth} \captionof{figure}{The nested regions are the possible probability distributions of $K_3(\Pi_n)$ for $\Pi_n$ an exchangeable random partition of $[n]$ for $n=4,5,7,12,19,41$, which tend to the region corresponding to $K_3$ for infinite exchangeable sequences, as described in Theorem \ref{mainthm} and shown in Figure \ref{segments}.} \label{457} \end{figure}\\\\ \newpage Listed below is the sequence $(s_n)$ for the number of extreme points of the convex hull of $S_n$, $n\geq 3$: \begin{table}[h] \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 \\ \hline $s_n$ & 3 & 3 & 4 & 4 & 5 & 5 & 6 & 6 & 7 & 6 & 8 & 7 & 8 & 8 & 9 & 8 & 10 & 9 & 10 & 10 & 11\\ \hline \end{tabular} \newline\newline\newline \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & 24 & 25 & 26 & 27 & 28 & 29 & 30 & 31 & 32 & 33 & 34 & 35 & 36 & 37 & 38 & 39 & 40 & 41 \\ \hline $s_n$ & 9 & 12 & 11 & 11 & 11 & 13 & 11 & 13 & 12 & 13 & 13 & 14 & 12 & 15 & 14 & 14 & 13 & 16 \\ \hline \end{tabular} \end{center} \end{table} \section{The two-parameter family} \label{twoparamsection} It was shown in \cite{MR1337249} that any pair of real parameters $(\alpha,\theta)$ satisfying either of the conditions \begin{align} &\text{(i)}\ \ 0\leq\alpha<1 \text{ and } \theta>-\alpha \text{; or}\\ &\text{(ii)}\ \ \alpha<0 \text{ and } \theta=-m\alpha \text{ for some } m\in\N \end{align} correspond to an exchangeable random partition $\Pi_{\alpha,\theta}=(\Pi_n)$ of $\N$ according to the following sequential construction known as the Chinese restaurant process: for each $n\in\N$, conditionally given $\Pi_n=\{C_1,\ldots,C_k\}$, $\Pi_{n+1}$ is formed by having $n+1$ \begin{equation} \begin{split} &\text{attach to cluster }C_i\text{ with probability }\frac{\lvert C_i\rvert-\alpha}{n+\theta},\ \ 1\leq i\leq k\ ;\\ &\text{form a new cluster with probability }\frac{\theta+k\alpha}{n+\theta}. \end{split} \end{equation} The corresponding EPPF is given by \begin{equation} \label{ateppf} p_{\alpha,\theta}(n_1,\ldots,n_k)=\frac{\prod_{i=0}^{k-1}(\theta+i\alpha)\prod_{j=1}^k(1-\alpha)_{n_j-1}}{(\theta)_n} \end{equation} where $n=n_1+\ldots+n_k$ and \begin{equation} (x)_m:=x(x+1)\cdots(x+m-1)=\frac{\Gamma(x+m)}{\Gamma(x)}. \end{equation} Let $\PP_{\alpha,\theta}$ denote the law of $\Pi_{\alpha,\theta}$. \begin{figure}[h!] \vspace{-1cm} \centering \centering \includegraphics[width=0.7\linewidth]{atpspace} \captionsetup{width=0.8\linewidth} \vspace{-1.2cm} \captionof{figure}{The $(\alpha,\theta)$ parameter space.} \label{atpspace} \end{figure} The distribution of $K_3$ for $\Pi_{\alpha,\theta}$ is given by \begin{align} q_1(\alpha,\theta)&=\frac{(1-\alpha)(2-\alpha)}{(1+\theta)(2+\theta)} \label{q1at}\\ q_2(\alpha,\theta)&=\frac{3(1-\alpha)(\theta+\alpha)}{(1+\theta)(2+\theta)} \label{q2at}\\ q_3(\alpha,\theta)&=\frac{(\theta+\alpha)(\theta+2\alpha)}{(1+\theta)(2+\theta)} \label{q3at} \end{align} where \begin{equation} q_i(\alpha,\theta):=\PP_{\alpha,\theta}(K_3=i). 
\end{equation} For $m>0$, let \begin{equation} A_m:=\big\{(m+m\theta,\theta):-\tfrac{m}{m+1}<\theta<\tfrac{1-m}{m}\big\}\subseteq\{(\alpha,\theta):0\leq\alpha<1,\theta>-\alpha\} \end{equation} and let $A_0:=\{(0,\theta):\theta>0\}$, the parameter subspace corresponding to the well-known one-parameter Ewens sampling formula \cite{MR325177}. The line segments and one ray $\{A_m\}_{m\geq 0}$ with inverse slope $m$ in the $(\alpha,\theta)$ plane, each of which would pass through the point $(\alpha,\theta)=(0,-1)$ if extended, partition the parameter subspace $\{(\alpha,\theta):0\leq \alpha<1,\theta>-\alpha\}$. Hence the distribution of $K_3$ can be reparametrized in $m$ and $\theta$ as \begin{align} q_1^{(m)}(\theta)&=\frac{(1-m-m\theta)(2-m-m\theta)}{(1+\theta)(2+\theta)} \label{q1m}\\ q_2^{(m)}(\theta)&=\frac{3(1-m-m\theta)[m+(m+1)\theta]}{(1+\theta)(2+\theta)} \label{q2m}\\ q_3^{(m)}(\theta)&=\frac{[m+(m+1)\theta][2m+(2m+1)\theta]}{(1+\theta)(2+\theta)} \label{q3m} \end{align} It can be checked by calculus that for each fixed $m>0$, \begin{itemize} \item the function $q_1^{(m)}(\theta)$ is strictly decreasing for $\theta\in(-\tfrac{m}{m+1},\tfrac{1-m}{m})$ with $\lim_{\theta\rightarrow -\frac{m}{m+1}}q_1^{(m)}(\theta)=1$ and $\lim_{\theta\rightarrow\frac{1-m}{m}}q_1^{(m)}(\theta)=0$ \item the function $q_3^{(m)}(\theta)$ is strictly increasing for $\theta\in(-\tfrac{m}{m+1},\tfrac{1-m}{m})$ with $\lim_{\theta\rightarrow -\frac{m}{m+1}}q_3^{(m)}(\theta)=0$ and $\lim_{\theta\rightarrow\frac{1-m}{m}}q_3^{(m)}(\theta)=1$ \item the function $q_2^{(m)}(\theta)$ is strictly increasing on $\big(-\frac{m}{m+1},\tau(m)\big]$ and strictly decreasing on $\big[\tau(m),\frac{1-m}{m}\big)$, with a unique maximum value of $9-6\big(\sqrt{(m+1)(m+2)}-m\big)$ at $\theta=\tau(m):=\frac{-m^2-3m+\sqrt{(m+1)(m+2)}}{1+3m+m^2}$, which is also the unique value of $\theta$ in the domain at which $q_1^{(m)}(\theta)=q_3^{(m)}(\theta)$. \end{itemize} The properties above also hold for $m=0$ after slight modification by replacing each instance of $\frac{1-m}{m}$ with $\lim_{m\rightarrow 0^+}\frac{1-m}{m}=\infty$, and this remark also applies to subsequent discussion. \begin{figure}[h!] \centering \includegraphics[width=0.9\linewidth]{ewensk3} \captionsetup{width=0.6\linewidth} \captionof{figure}{Graphs of $q_i^{(m)}(\theta)$ for $m=0$ and $\theta\in[0,5]$. Observe that $q_1$ and $q_3$ intersect at the same value of $\theta$ as where $q_2$ attains its maximum value. The corresponding graphs for every $m>0$ also share this property.} \label{ewensk3} \end{figure} \vspace{1cm}\\ \textbf{Duality.} The last observation implies that for $m\geq 0$ and any real number $p$ such that $0<p<9-6(\sqrt{(m+1)(m+2)}-m)$, there are exactly two values $\theta_{\pm}^{(m)}(p)$ with \begin{equation} \label{dualthetas} -\frac{m}{m+1}<\theta_-^{(m)}(p)<\tau(m)<\theta_+^{(m)}(p)<\frac{1-m}{m} \end{equation} satisfying \begin{equation} q_2^{(m)}(\theta_-^{(m)}(p))=q_2^{(m)}(\theta_+^{(m)}(p))=p. \end{equation} For $p=9-6\big(\sqrt{(m+1)(m+2)}-m\big)$, the maximum value, define $\theta_-^{(m)}(p)=\theta_+^{(m)}(p)=\tau(m)$.
As $\theta_{\pm}^{(m)}(p)$ are defined as the solutions to the equation \begin{equation} \frac{3(1-m-m\theta)[m+(m+1)\theta]}{(1+\theta)(2+\theta)}=p \end{equation} or equivalently the quadratic equation \begin{equation} \label{defdualq} p(1+\theta)(2+\theta)-3(1-m-m\theta)[m+(m+1)\theta]=0, \end{equation} we have the polynomial identity \begin{equation} (\theta-\theta_+^{(m)}(p))(\theta-\theta_-^{(m)}(p))=\theta^2+\frac{3p-3+6m^2}{p+3m+3m^2}\theta+\frac{2p-3m+3m^2}{p+3m+3m^2} \end{equation} after rearranging \eqref{defdualq}. It follows that \begin{equation} \label{dualmult} \theta_+^{(m)}\theta_-^{(m)}=\frac{2p-3m+3m^2}{p+3m+3m^2}. \end{equation} For $-\frac{m}{m+1}<\theta<\frac{1-m}{m}$, define the $m$-\textit{dual} $\theta_*^{(m)}$ of $\theta$ according to \eqref{dualthetas}. Rearranging \eqref{dualmult} and simplifying gives the explicit formula \begin{equation} \label{dualform} \theta_*^{(m)}=\frac{2-m(3+m)(1+\theta)}{\theta+m(3+m)(1+\theta)}. \end{equation} \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{q2contour} \captionsetup{width=0.8\linewidth} \vspace{-1.2cm} \captionof{figure}{Contour plot of $q_2(\alpha,\theta)$. The level curves for $q_2(\alpha,\theta)\in\{0.1,0.2,0.3,0.4,0.5\}$ are shown, along with their tangent lines where they meet the curve $q_1(\alpha,\theta)=q_3(\alpha,\theta)$. Observe that each tangent line passes through the point $(\alpha,\theta)=(0,-1)$. Note that here $\alpha$ is plotted on the vertical axis, for convenience of display.} \label{q2contour} \end{figure} \begin{theorem} \label{mduality} For $m\geq 0$ and $-\frac{m}{m+1}<\theta<\frac{1-m}{m}$, we have \begin{equation} q_1^{(m)}(\theta_*^{(m)})=q_3^{(m)}(\theta)\qquad\text{and}\qquad q_3^{(m)}(\theta_*^{(m)})=q_1^{(m)}(\theta). \end{equation} \end{theorem} \begin{proof} It suffices to verify the first of the two identities since \eqref{dualform} is constructed as an involution. Let $D(m,\theta)$ be the denominator in \eqref{dualform}. Substituting and simplifying yields \begin{align} 1+\theta^{(m)}_*&=\frac{2+\theta}{D(m,\theta)}\ ;\\ 2+\theta^{(m)}_*&=\frac{(1+\theta)(1+m)(2+m)}{D(m,\theta)}\ ;\\ 1-m-m\theta^{(m)}_*&=\frac{(1+m)[m+(m+1)\theta]}{D(m,\theta)}\ ;\\ 2-m-m\theta^{(m)}_*&=\frac{(2+m)[2m+(2m+1)\theta]}{D(m,\theta)}. \end{align} Hence we have \begin{equation} q_1^{(m)}(\theta_*^{(m)})=\frac{(1-m-m\theta_*^{(m)})(2-m-m\theta_*^{(m)})}{(1+\theta_*^{(m)})(2+\theta_*^{(m)})}=\frac{[m+(m+1)\theta][2m+(2m+1)\theta]}{(1+\theta)(2+\theta)}=q_3^{(m)}(\theta) \end{equation} as desired.\\ \end{proof} \textbf{Symmetry.} A consequence of Theorem \ref{mduality} is a surprising symmetry in the set of laws of $K_3$ arising from the two-parameter model. To make this observation explicit, for any $m\geq 0$ we solve for $q_3=q_3^{(m)}$ in terms of $q_1=q_1^{(m)}$ as defined in \eqref{q1m} and \eqref{q3m} to obtain the formula \begin{equation} \label{q1q3m} q_3=\varphi_m(q_1):=1+\frac{3}{4}m+\frac{5}{4}q_1-\frac{3}{4}\sqrt{m^2+6q_1m+q_1(8+q_1)}. \end{equation} Rearranging to eliminate the radical yields the relation \begin{equation} (4+3m)(q_1+q_3)+5q_1q_3-2(q_1^2+q_3^2)-2-3m=0 \end{equation} which verifies the symmetry. For $m=0$ the identity reduces to \begin{equation} \label{hqq} h(q_1,q_3):=4(q_1+q_3)+5q_1q_3-2(q_1^2+q_3^2)-2=0. 
\end{equation} \begin{theorem} \label{bijthm} The mapping $(\alpha,\theta)\mapsto(q_1,q_3)$ defined by \eqref{q1at} and \eqref{q3at} is a bijection between the regions \begin{equation} \{(\alpha,\theta):0\leq\alpha<1,\ \theta>-\alpha\}\qquad\text{and}\qquad\{(q_1,q_3):h(q_1,q_3)\geq 0,\ q_1+q_3<1\} \end{equation} where $h(q_1,q_3)$ is defined as in \eqref{hqq}. \end{theorem} \begin{proof} Consider $\varphi(m,q_1):=\varphi_m(q_1)$ as in \eqref{q1q3m}. To show the desired bijection, it suffices to show that for every fixed $0<q_1<1$ that (i) $\varphi(m,q_1)$ is increasing in $m$, and (ii) $\lim_{m\rightarrow\infty}\varphi(m,q_1)=1-q_1$.\\\\ (i) \begin{equation} \frac{\partial}{\partial m}\varphi(m,q_1)=\frac{3}{4}(1-\frac{2m+6q_1}{2\sqrt{m^2+6q_1m+q_1(8+q_1)}})>\frac{3}{4}(1-\frac{2m+6q_1}{2\sqrt{m^2+6q_1m+9q_1^2}})=0 \end{equation} (ii) \begin{align} \lim_{m\rightarrow\infty}\varphi(m,q_1)&=\lim_{m\rightarrow\infty}1+\frac{5}{4}q_1+\frac{3}{4}\bigg(\frac{m^2-(m^2+6q_1m+q_1(8+q_1))}{m+\sqrt{m^2+6q_1m+q_1(8+q_1)}}\bigg)\\ &=\lim_{m\rightarrow\infty}1+\frac{5}{4}q_1+\frac{3}{4}\bigg(\frac{-6q_1-\frac{q_1(8+q_1)}{m}}{1+\sqrt{1+\frac{6q_1}{m}+\frac{q_1(8+q_1)}{m^2}}}\bigg)\\ &=1-q_1 \end{align} \end{proof} \begin{figure}[h!] \vspace{-0.5cm} \centering \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=0.545\linewidth]{atcuts3trim} \label{atcuts} \end{minipage \begin{minipage}{0.7\textwidth} \centering \includegraphics[width=1\linewidth]{q1q3cuts2trim} \label{q1q3cuts} \end{minipage} \captionsetup{width=1\linewidth} \vspace{-1cm} \captionof{figure}{The bijection of Theorem \ref{bijthm}. The regions colored in different shades of gray reveal the geometry of the bijection.} \end{figure} \textbf{Explicit inverse.} Define the ratios \begin{equation} r(\alpha,\theta):=\frac{q_1(\alpha,\theta)}{q_2(\alpha,\theta)}=\frac{2-\alpha}{3(\theta+\alpha)},\qquad s(\alpha,\theta):=\frac{q_2(\alpha,\theta)}{q_3(\alpha,\theta)}=\frac{3(1-\alpha)}{(\theta+2\alpha)} \end{equation} These ratios uniquely define the law of $K_3$ for the corresponding $(\alpha,\theta)$. The map $(\theta,\alpha)\mapsto(r,s)$ can be explicitly inverted as \begin{equation} \alpha(r,s)=\frac{9r-2s}{9r-s+3rs},\qquad\theta(r,s)=\frac{3-9r+4s}{9r-s+3rs} \end{equation} Expressed in terms of $q_1$ and $q_3$, this gives the inversion formulas \begin{equation} \alpha(q_1,q_3)=\frac{4q_1+4q_3+5q_1q_3-2q_1^2-2q_3^2-2}{5q_1+2q_3+4q_1q_3-4q_1^2-q_3^2-1},\qquad \theta(q_1,q_3)=-\frac{8q_1+5q_3+4q_1q_3-4q_1^2-q_3^2-4}{5q_1+2q_3+4q_1q_3-4q_1^2-q_3^2-1} \end{equation} Note that the numerator in the formula for $\alpha(q_1,q_3)$ is equal to $h(q_1,q_3)$ as defined in \eqref{hqq}. It is easy to verify that these formulas give an algebraic inverse. Observe that the denominator which is the same in both formulas is nonvanishing on the region $\{(q_1,q_3):h(q_1,q_3)\geq0,\ q_1+q_3<1\}$, since \begin{align} 2(5q_1+2q_3+4q_1q_3-4q_1^2-q_3^2-1)&=h(q_1,q_3)+6q_1-6q_1^2+3q_1q_3>0. \end{align} \begin{corollary} For any parameters $(\alpha,\theta)$ with $0\leq\alpha<1$ and $\theta>-\alpha$, there exists a unique pair $(\alpha_*,\theta_*)$ with $0\leq\alpha_*<1$ and $\theta_*>-\alpha_*$ such that \begin{equation} q_{2\pm 1}(\alpha,\theta)=q_{2\mp 1}(\alpha_*,\theta_*). 
\end{equation} \end{corollary} Explicit formulas for $\alpha_*$ and $\theta_*$ in terms of $\alpha$ and $\theta$ can be computed as \begin{align} \alpha_*&=\frac{\alpha(2+\theta)}{(\theta+3\alpha)(1+\theta)+\alpha^2}\\ \theta_*&=\frac{(2-3\alpha)(1+\theta)-\alpha^2}{(\theta+3\alpha)(1+\theta)+\alpha^2}. \end{align} \textbf{Exceptional parameters:} $\alpha<0$ and $\theta=-m\alpha$ for some $m\in\N$.\\\\ It is well-known that in this case, the exchangeable random partition $(\Pi_n)$ of $\N$ generated according to the Chinese restaurant construction is distributed as if by sampling from a symmetric Dirichlet distribution with $m$ parameters equal to $-\alpha$ \cite{MR2245368}. Hence for fixed $m\in\N$, as $\alpha\rightarrow-\infty$ the exchangeable random partition of $\N$ corresponding to the parameter pair $(\alpha,\theta)=(\alpha,-m\alpha)$ converges in distribution to that obtained by sampling from the discrete uniform distribution on $m$ elements. For $K_3$, the $(\alpha,\theta)$ to $(q_1,q_3)$ correspondence can be seen in Figure \ref{dirichlet}.\\ \vspace{-0.6cm} \begin{figure}[h!] \centering \includegraphics[width=0.7\linewidth]{q1q3dirichlet} \captionsetup{width=0.6\linewidth} \vspace{-1.7cm} \captionof{figure}{The blue curves correspond to the images of $(\alpha,\theta)=(\alpha,-m\alpha)$ for $\alpha\in(-\infty,0)$ and fixed $m$ under the $(\alpha,\theta)\mapsto(q_1,q_3)$ map, for $m=2,3,4,5,6$. The curve defined by \eqref{hqq} is included in black.} \label{dirichlet} \end{figure} \newpage \section{Complements} \label{looseends} In this section, we point out an interesting convexity property for the law of $K_3$. With notation as in Section \ref{k3main}, for $\boldsymbol{p}\in\nabla_\infty$, let \begin{equation} \boldsymbol{Q}(\boldsymbol{p}):=\big(q_1(\boldsymbol{p}),q_3(\boldsymbol{p})\big) \end{equation} be the mapping from a ranked discrete distribution to its corresponding law of $K_3$ obtained by i.i.d. sampling. In Section \ref{k3main} we proved that the range of $\boldsymbol{Q}$ is a subset of the closed convex hull of the set of points $\{\boldsymbol{Q}(\boldsymbol{u}_N):N\in\N\}$. Here are some preliminary efforts to better understand the geometry of this mapping. \begin{proposition} For any $0\leq\lambda\leq 1$ and $N\geq 1$, \begin{equation} \boldsymbol{Q}(\lambda\boldsymbol{u}_N+(1-\lambda)\boldsymbol{u}_{2N})=\lambda^2\boldsymbol{Q}(\boldsymbol{u}_N)+(1-\lambda^2)\boldsymbol{Q}(\boldsymbol{u}_{2N}). \end{equation} \end{proposition} \begin{proof} We have \begin{equation} \lambda\boldsymbol{u}_N+(1-\lambda)\boldsymbol{u}_{2N}=\big(\underbrace{\tfrac{1+\lambda}{2N},\ldots,\tfrac{1+\lambda}{2N}}_{N\text{ times}},\underbrace{\tfrac{1-\lambda}{2N},\ldots,\tfrac{1-\lambda}{2N}}_{N\text{ times}}\big). \end{equation} Hence \begin{equation} q_1(\lambda\boldsymbol{u}_N+(1-\lambda)\boldsymbol{u}_{2N})=N\Big(\frac{1+\lambda}{2N}\Big)^3+N\Big(\frac{1-\lambda}{2N}\Big)^3=\frac{1+3\lambda^2}{4N^2} \end{equation} and, with the factor $3!$ accounting for the orderings of the three distinct values, \begin{align} q_3&(\lambda\boldsymbol{u}_N+(1-\lambda)\boldsymbol{u}_{2N})\\&=3!\,\bigg[\binom{N}{3}\Big(\frac{1+\lambda}{2N}\Big)^3+\binom{N}{2}N\Big(\frac{1+\lambda}{2N}\Big)^2\Big(\frac{1-\lambda}{2N}\Big)+N\binom{N}{2}\Big(\frac{1+\lambda}{2N}\Big)\Big(\frac{1-\lambda}{2N}\Big)^2+\binom{N}{3}\Big(\frac{1-\lambda}{2N}\Big)^3\bigg]\\ &=6\binom{N}{3}\frac{1+3\lambda^2}{4N^3}+6N\binom{N}{2}\frac{1-\lambda^2}{4N^3}\\ &=\frac{(N-1)(2N-1-3\lambda^2)}{2N^2}.
\end{align} On the other hand, \begin{equation} \lambda^2 q_1(\boldsymbol{u}_N)+(1-\lambda^2)q_1(\boldsymbol{u}_{2N})=\frac{\lambda^2}{N^2}+\frac{1-\lambda^2}{4N^2}=\frac{1+3\lambda^2}{4N^2} \end{equation} and \begin{align} \lambda^2 q_3(\boldsymbol{u}_N)+(1-\lambda^2)q_3(\boldsymbol{u}_{2N})&=\lambda^2\cdot 6\binom{N}{3}\frac{1}{N^3}+(1-\lambda^2)\cdot 6\binom{2N}{3}\frac{1}{8N^3}\\ &=N(N-1)(N-2)\cdot\frac{\lambda^2}{N^3}+2N(2N-1)(2N-2)\cdot\frac{1-\lambda^2}{8N^3}\\ &=\frac{(N-1)(2N-1-3\lambda^2)}{2N^2}. \end{align} \end{proof} \vspace{1cm} \textbf{Acknowledgement.} Many thanks to my advisor Jim Pitman for suggesting this problem and providing invaluable guidance. \newpage
\section{Introduction}\label{sec:intro} This paper addresses the Minimum Dominating Set (MDS) problem, an intensively studied graph-theoretic problem in computer science in general, as well as in distributed computing. A dominating set~$D$ in a graph~$G$ is a set of vertices such that every vertex of~$G$ either lies in~$D$ or is adjacent to a vertex in $D$. Finding a minimum dominating set is NP-complete~\cite{karp1972reducibility}, even on planar graphs of maximum degree~$3$ (cf.\ [GT2] in~\cite{michael1979computers}). Consequently, attention has shifted from computing exact solutions to approximating near-optimal dominating sets. The simple greedy algorithm on $n$-vertex graphs computes a $\ln n$ approximation of a minimum dominating set~\cite{johnson1974approximation,lovasz1975ratio}, and for general graphs this algorithm is near optimal -- it is NP-hard to approximate minimum dominating sets within factor $(1-\epsilon)\cdot \ln n$ for every~$\epsilon>0$~\cite{dinur2014analytical}. The approach of algorithmic graph structure theory is to exploit structural properties of restricted graph classes for the design of efficient algorithms. For the dominating set problem this has led to a PTAS on planar graphs~\cite{Baker:1994:AAN:174644.174650}, minor closed classes of graphs with locally bounded tree-width~\cite{eppstein2000diameter}, graphs with excluded minors~\cite{grohe2003local}, and most generally, on every graph class with subexponential expansion~\cite{har2015approximation}. The problem admits a constant factor approximation on classes of bounded arboricity~\cite{bansal2017tight} and an $\mathcal{O}(\ln k)$ approximation (where $k$ denotes the size of a minimum dominating set) on classes of bounded VC-dimension~\cite{bronnimann1995almost,even2005hitting}. On the other hand, it is unlikely that polynomial-time constant factor approximations exist even on $K_{3,3}$-free graphs~\cite{siebertz2019greedy}. The general goal of algorithmic graph structure theory is to identify the broadest graph classes on which certain algorithmic techniques can be applied and hence lead to efficient algorithms for problems that are hard on general graphs. These limits of tractability are often captured by abstract notions, such as expansion, arboricity or VC-dimension of graph classes. In this paper, we study the \emph{distributed} time complexity of finding dominating sets, in the classic \textit{LOCAL model} of distributed computing~\cite{Linial:1992:LDG:130563.130578}. It is known that finding small dominating sets locally is hard: Kuhn et al.~\cite{Kuhn:2016:LCL:2906142.2742012} show that in~$r$ rounds the MDS problem on $n$-vertex graphs of maximum degree $\Delta$ can only be approximated within factor~$\Omega(n^{c/r^2}/r)$ and~$\Omega(\Delta^{1/(r+1)}/r)$, where~$c$ is a constant. This implies that, in general, to achieve a constant approximation ratio, every distributed algorithm requires at least~$\Omega(\sqrt{\log n/\log \log n})$ and~$\Omega(\log \Delta/\log \log \Delta)$ communication rounds. The currently best results for general graphs are by Kuhn et al.~\cite{Kuhn:2016:LCL:2906142.2742012} who present a~$(1+\epsilon)\ln \Delta$-approximation in~$\mathcal{O}(\log(n)/\epsilon)$ rounds for any~$\epsilon>0$, and by Barenboim et al.~\cite{barenboim2014fast} who present a deterministic $\mathcal{O}((\log n)^{k-1})$-time algorithm that provides an $\mathcal{O}(n^{1/k})$-approximation, for any integer parameter $k \ge 2$.
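For concreteness, the greedy algorithm referred to above repeatedly adds a vertex whose closed neighborhood covers the most not-yet-dominated vertices; the following is a minimal sequential sketch (a centralized illustration only, not one of the distributed algorithms discussed in this paper).
\begin{verbatim}
def greedy_dominating_set(adj):
    # adj: dict mapping each vertex to the set of its neighbors.
    # Returns a dominating set via the classical ln(n)-approximation greedy.
    undominated = set(adj)
    dom = set()
    while undominated:
        # Pick a vertex whose closed neighborhood covers the most
        # still-undominated vertices.
        v = max(adj, key=lambda u: len((adj[u] | {u}) & undominated))
        dom.add(v)
        undominated -= adj[v] | {v}
    return dom

# Toy example: a 5-cycle, for which two vertices suffice.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(greedy_dominating_set(cycle5))  # e.g. {0, 2}
\end{verbatim}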
For sparse graphs, the situation is more promising (an inclusion diagram of the graph classes mentioned in the following paragraph is depicted in Figure~\ref{fig:classes}; for formal definitions we refer to the referenced papers). For graphs of arboricity~$a$, Lenzen and Wattenhofer~\cite{ds-arbor} present a forest decomposition algorithm achieving a factor~$\mathcal{O}(a^2)$ approximation in randomized time~$\mathcal{O}(\log n)$, and a deterministic~$\mathcal{O}(a \log \Delta)$ approximation algorithm requiring $\mathcal{O}(\log \Delta)$ rounds. Graphs of bounded arboricity include all graphs which exclude a fixed graph as a (topological) minor and in particular, all planar graphs and any class of bounded genus. Amiri et al.~\cite{amiri2017distributed} provide a deterministic $\mathcal{O}(\log n)$ time constant factor approximation algorithm on classes of bounded expansion (which extends also to connected dominating sets). The notion of bounded expansion offers an abstract definition of uniform sparseness in graphs, which is based on bounding the density of shallow minors (these notions will be defined formally in the next section). Czygrinow et al.~\cite{fast-planar} show that for any given~$\epsilon>0$,~$(1+\epsilon)$-approximations of a maximum independent set, a maximum matching, and a minimum dominating set, can be computed in $\mathcal{O}(\log^* n)$ rounds in planar graphs, which is asymptotically optimal~\cite{ds-alternative-lowerbound}. Lenzen et al.~\cite{ds-planar} proposed a constant factor approximation on planar graphs that can be computed locally in a constant number of communication rounds. A finer analysis of Wawrzyniak~\cite{better-upper-planar} showed that the algorithm of Lenzen et al.\ in fact computes a~$52$-approximation of a minimum dominating set. Wawrzyniak~\cite{wawrzyniak2013brief} also showed that message sizes of $\mathcal{O}(\log n)$ suffice to give a constant factor approximation on planar graphs in a constant number of rounds. In terms of lower bounds, Hilke et al.~\cite{ds-ba} show that there is no deterministic local algorithm (constant-time distributed graph algorithm) that finds a~$(7-\epsilon)$-approximation of a minimum dominating set on planar graphs, for any positive constant~$\epsilon$. \begin{figure}[ht] \input{DS-approx} \caption{Inclusion diagram of sparse graph classes.}\label{fig:classes} \end{figure} \subsection{Our Contributions} The first and main contribution of this paper is a deterministic and local constant factor approximation for MDS on graphs that we call \emph{locally embeddable graphs}. A locally embeddable graph~$G$ excludes the complete bipartite graph $K_{3,t}$, for some $t\geq 3$, as a \mbox{depth-1}~ minor, that is, as a minor obtained by star contractions, and furthermore satisfies that all \mbox{depth-1}~ minors of $G$ have constant edge density. The most prominent locally embeddable graph classes are classes of bounded genus. Concretely, our result implies that MDS can be~$\mathcal{O}(g)$-approximated locally and deterministically on graphs of (orientable or non-orientable) genus~$g$. However, classes of locally embeddable graphs may also contain graphs that do not embed into any fixed surface, or that do not even have bounded expansion: e.g.\ the class of all $3$-subdivided cliques is locally embeddable, yet it does not have bounded expansion. Yet, every locally embeddable class is of bounded degeneracy. 
Apart from generalizing the earlier result of Lenzen et al.~\cite{ds-planar} for planar graphs to a larger graph family, we introduce new techniques by arguing about densities and combinatorial properties of shallow minors only and show that all topological arguments used in~\cite{ds-planar} can be avoided. The abstract notion of local embeddability yields exactly the ingredients for these arguments to work and therefore offers valuable insights on the limits of algorithmic techniques. This is a contribution going beyond the mere presentation of efficient algorithms for concrete example graph classes. Our second main contribution is the presentation of a local and deterministic MDS approximation algorithm with the following properties. Given a graph $G$ from a fixed class $\mathscr{C}$ of graphs with sub-logarithmic expansion, a constant factor MDS approximation $D$, and any $\epsilon>0$, the algorithm uses $\mathcal{O}(\log^* n)$ rounds and computes from $D$ a $(1+\epsilon)$-approximate MDS of $G$ (here, the $\mathcal{O}$-notation hides constants depending on $\epsilon$). Graphs of sub-logarithmic expansion include all proper minor closed classes, and in particular all classes of bounded genus. Our methods are based on earlier work of Czygrinow et al.~\cite{fast-planar}. In combination with our constant-factor approximation on graphs of bounded genus, we obtain $(1+\epsilon)$-approximations in $\mathcal{O}(\log^*n)$ communication rounds on graphs of bounded genus. In combination with Amiri et al.'s result~\cite{amiri2017distributed} on graphs of bounded expansion, we obtain $(1+\epsilon)$-approximations in $\mathcal{O}(\log n)$ deterministic rounds on graphs of sub-logarithmic expansion. Again, the abstract notion of sub-logarithmic expansion constitutes the border of applicability of these algorithmic techniques. We observe that the methods of Czygrinow et al.~\cite{fast-planar} for maximum weighted independent set and maximum matching extend to graphs of sub-logarithmic expansion; however, we focus on the dominating set problem for the sake of a consistent presentation. \subsection{Novelty} Our main technical contribution is a new analysis of a slightly modified variant of the elegant algorithm by Lenzen et al.~\cite{ds-planar} for planar graphs. As we will show, with a slight modification, the algorithm also works on locally embeddable graphs; however, the analysis needs to be changed significantly. Prior works by Lenzen et al.~\cite{ds-planar} and Wawrzyniak~\cite{better-upper-planar} heavily depend on topological properties of planar graphs. For example, their analyses exploit the fact that each cycle in a planar graph defines an ``inside'' and an ``outside'' region, without any edges connecting the two; this facilitates a simplified accounting and comparison to the optimal solution. In the case of locally embeddable graphs, such global, topological properties do not exist. In contrast, in this paper we leverage the inherent local properties of our low-density graphs, which opens a new door to approach the problem. A second interesting technique developed in this paper is based on \emph{preprocessing}: we show that the constants involved in the approximation can be further improved by a local preprocessing step. Another feature of our modified algorithm is that it is \emph{first-order definable}. 
More precisely, there is a first-order formula $\varphi(x)$ with one free variable, such that in every planar graph~$G$ the set $D=\{v \in V(G) : G\models\varphi(v)\}$ corresponds exactly to the computed dominating set. In particular, the algorithm can be modified such that it does not rely on any \emph{maximum} operations, such as finding the neighbor of maximal degree. \subsection{Organization} \noindent The remainder of this paper is organized as follows. We introduce some preliminaries in \cref{sec:model}. The constant-factor constant-time approximation result is presented in \cref{sec:local-approx}, and the $\mathcal{O}(\log^* n)$-time approximation scheme is presented in \cref{sec:star-approx}. We conclude in \cref{sec:FO}. \section{Preliminaries}\label{sec:model} \vspace{2mm} \noindent \textbf{Graphs.} We consider finite, undirected, simple graphs. Given a graph~$G$, we write $V(G)$ for its vertices and~$E(G)$ for its edges. Two vertices~$u,v\in V(G)$ are adjacent or neighbors if~$\{u,v\}\in E(G)$. The degree~$d_G(v)$ of a vertex~$v\in V(G)$ is its number of neighbors in~$G$. We write~$N(v)$ for the set of neighbors and~$N[v]$ for the closed neighborhood~$N(v)\cup\{v\}$ of~$v$. For~$A\subseteq V(G)$, we write~$N[A]$ for~$\bigcup_{v\in A}N[v]$. We let~$N^1[v]:=N[v]$ and $N^{i+1}[v]:=N[N^i[v]]$ for~$i\geq 1$. If $E'\subseteq E$, we write $N_{E'}(v)$ for the set $\{u \in V(G) : \{u,v\}\in E'\}$. A graph $G$ has radius at most~$r$ if there is a vertex $v\in V(G)$ such that $N^r[v]=V(G)$. The \emph{arboricity} of $G$ is the minimum number of forests into which its edges can be partitioned. A graph~$H$ is a subgraph of a graph~$G$ if~$V(H)\subseteq V(G)$ and~$E(H)\subseteq E(G)$. The edge density of~$G$ is the ratio $|E(G)|/|V(G)|$. It is well known that the arboricity of a graph is within factor $2$ of its \emph{degeneracy}, that is, $\max_{H\subseteq G}|E(H)|/|V(H)|$. For~$A\subseteq V(G)$, the graph $G[A]$ induced by~$A$ is the graph with vertex set~$A$ and edge set $\{\{u,v\}\in E(G) : u,v\in A\}$. For~$B\subseteq V(G)$ we write~$G-B$ for the graph~$G[V(G)\setminus B]$. \smallskip \noindent\textbf{Bounded depth minors and locally embeddable graphs.} A graph~$H$ is a minor of a graph~$G$, written~$H\preceq G$, if there is a set~$\{G_v : v\in V(H)\}$ of pairwise vertex disjoint and connected subgraphs $G_v\subseteq G$ such that if~$\{u,v\}\in E(H)$, then there is an edge between a vertex of~$G_u$ and a vertex of~$G_v$. We say that~$G_v$ is \emph{contracted} to the vertex~$v$. If $G_1,\ldots, G_k\subseteq G$ are pairwise vertex disjoint and connected subgraphs of $G$, then we write $G/G_1/\ldots/G_k$ for the minor obtained by contracting the subgraphs~$G_i$ (observe that the order of contraction does not matter as the $G_i$'s are vertex disjoint). We call the set $\{G_v : v\in V(H)\}$ a \emph{minor model} of $H$ in~$G$. We say that two minor models $\{G^1_v : v\in V(H)\}$ and $\{G^2_v : v\in V(H)\}$ of $H$ in a graph $G$ are disjoint if the sets $\bigcup_{v\in V(H)} V(G^1_v)$ and $\bigcup_{v\in V(H)} V(G^2_v)$ are disjoint. A star is a connected graph~$G$ such that at most one vertex of~$G$, called the center of the star, has degree greater than one. A graph~$H$ is a \emph{depth-$\mathit{1}$ minor} of~$G$ if~$H$ is obtained from a subgraph of~$G$ by star contractions, that is, if there is a set~$\{G_v : v\in V(H)\}$ of pairwise vertex disjoint stars~$G_v\subseteq G$ such that if~$\{u,v\}\in E(H)$, then there is an edge between a vertex of~$G_u$ and a vertex of~$G_v$. 
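For intuition, the following minimal Python sketch (an illustration only; the adjacency-set graph representation and the format of the star list are assumptions of the example, and we do not verify that the given parts really induce disjoint stars) performs a star contraction and reports the edge density of the resulting \mbox{depth-1}~ minor. The generalization to depth-$r$ minors is defined next.

\begin{verbatim}
# Illustration only: contract vertex-disjoint stars in a graph given as
# {vertex: set_of_neighbors} and report the edge density |E(H)|/|V(H)|
# of the resulting depth-1 minor H.

def contract_stars(adj, stars):
    rep = {}
    for i, part in enumerate(stars):     # each part: vertex set of a star
        for v in part:
            rep[v] = ('star', i)
    for v in adj:
        rep.setdefault(v, ('single', v)) # uncovered vertices stay singletons
    vertices = set(rep.values())
    edges = set()
    for u in adj:
        for v in adj[u]:
            if rep[u] != rep[v]:         # edges inside a star disappear
                edges.add(frozenset((rep[u], rep[v])))
    return vertices, edges

def edge_density(adj, stars):
    vertices, edges = contract_stars(adj, stars)
    return len(edges) / len(vertices)

# A 4-cycle with the star {0, 1} contracted has 3 vertices and 3 edges.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(edge_density(c4, [{0, 1}]))  # 1.0
\end{verbatim}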
More generally, for a non-negative integer $r$, a graph $H$ is a \emph{depth-$r$ minor} of $G$, written $H\preceq_r G$, if there is a set~$\{G_v : v\in V(H)\}$ of pairwise vertex disjoint connected subgraphs $G_v\subseteq G$ of radius at most $r$ such that if~$\{u,v\}\in E(H)$, then there is an edge between a vertex of~$G_u$ and a vertex of~$G_v$. We write~$K_{3,t}$ for the complete bipartite graph with partitions of size~$3$ and~$t$, respectively. A graph~$G$ is a \emph{locally embeddable graph} if it excludes~$K_{3,t}$ as a \mbox{depth-1}~ minor for some~$t\ge 3$ and if $|E(H)|/|V(H)|\leq c$ for some constant $c$ and all \mbox{depth-1}~ minors $H$ of $G$. More generally, we write $\nabla_r(G)$ for $\max_{H\preceq_r G}|E(H)|/|V(H)|$. A class $\mathscr{C}$ of graphs has \emph{bounded expansion} if there is a function $f:\mathbb{N}\rightarrow\mathbb{N}$ such that $\nabla_r(G)\leq f(r)$ for all graphs $G\in \mathscr{C}$. This is equivalent to demanding that the arboricity of each depth-$r$ minor of $G$ is functionally bounded by~$r$. The class $\mathscr{C}$ has \emph{sub-logarithmic expansion} if the bounding function satisfies $f(r)\in o(\log r)$. Note that if every graph $G\in \mathscr{C}$ excludes a fixed minor, then $\mathscr{C}$ has constant expansion, hence classes of sub-logarithmic expansion generalize proper minor closed classes of graphs. We refer to Figure~\ref{fig:classes} for the inclusions between the classes defined above. \pagebreak \smallskip\noindent\textbf{Bounded genus graphs.} The (orientable, resp.\ non-orientable) genus of a graph is the minimal number~$\ell$ such that the graph can be embedded on an (orientable, resp.\ non-orientable) surface of genus~$\ell$. We write~$g(G)$ for the orientable genus of~$G$ and~$\tilde{g}(G)$ for the non-orientable genus of~$G$. Every connected planar graph has orientable genus~$0$ and non-orientable genus~$1$. In general, for connected~$G$, we have~$\tilde{g}(G)\leq 2g(G)+1$. On the other hand, there is no bound for~$g(G)$ in terms of~$\tilde{g}(G)$. As all our results apply to both variants, for ease of presentation, and as usual in the literature, we will simply speak of the genus of a graph in the following. We do not make explicit use of any topological arguments and hence refer to~\cite{graphsurface} for more background on graphs on surfaces. We will use the following facts about bounded genus graphs. \smallskip The first lemma states that the genus of a graph does not increase when taking minors. \begin{lemma}\label{lem:closureminor} If~$H\preceq G$, then~$g(H)\leq g(G)$ and~$\tilde{g}(H)\leq \tilde{g}(G)$. \end{lemma} One of the arguments we will use is based on the fact that bounded genus graphs exclude large bipartite graphs as minors. The lemma follows immediately from \cref{lem:closureminor} and from the fact that $g(K_{m,n})=\left\lceil \frac{(m-2)(n-2)}{4}\right\rceil$ and $\tilde{g}(K_{m,n})=\left\lceil \frac{(m-2)(n-2)}{2}\right\rceil$ (see e.g.\ Theorem~4.4.7 in~\cite{graphsurface}). \begin{lemma}\label{lem:exclude} If~$g(G)=g$, then~$G$ excludes~$K_{3,4g+3}$ as a minor and if~$\tilde{g}(G)=\tilde{g}$, then~$G$ excludes~$K_{3,2\tilde{g}+3}$ as a minor. 
\end{lemma} Graphs of bounded genus do not contain many pairwise disjoint minor models of~$K_{3,3}$: this is a simple consequence of the fact that the orientable genus of a connected graph is equal to the sum of the genera of its blocks (maximal connected subgraphs without a cut-vertex) and a similar statement holds for the non-orientable genus, see Theorem~4.4.2 and Theorem~4.4.3 in~\cite{graphsurface}. \begin{lemma} \label{lem:decreasegenus} A graph $G$ contains at most~$\max\{g(G), 2\tilde{g}(G)\}$ pairwise disjoint minor models of~$K_{3,3}$. \end{lemma} Finally, note that graphs of bounded genus have small edge density. It is straightforward to obtain the following from the generalized Euler formula $n-e+f=2-2g$ (resp.\ $n-e+f=2-\tilde{g}$ in the non-orientable case), see~\cite{graphsurface}; for a derivation see e.g.~\cite{saeedthesis}. \begin{lemma}\label{lem:dens} Every graph with at least $3$ vertices satisfies $|E(G)| \leq 3 \cdot |V(G)| + 6 g(G) - 6$ and~$|E(G)| \leq 3 \cdot |V(G)| + 3 \tilde{g}(G) - 6$. \end{lemma} \begin{lemma}\label{lem:degeneracy} Let $\mathcal{G}$ be a class of graphs of genus at most $g$. Then the degeneracy and edge density of every graph $G\in \mathcal{G}$ are bounded by $5\sqrt{g}$ for $g\geq 1$, and by $3$ for $g=0$. \end{lemma} \begin{proof} Recall that the degeneracy of a graph $G$ is defined as $\max_{H\subseteq G}|E(H)|/|V(H)|$, which in particular bounds the edge density $|E(G)|/|V(G)|$. It hence suffices to bound the degeneracy of $G$. If $g=0$, the claim holds as in this case $G$ is planar and hence $\max_{H\subseteq G}|E(H)|/|V(H)|\leq 3$ by \cref{lem:closureminor} and \cref{lem:dens}. Now assume $g\geq 1$ (we prove the lemma for graphs with orientable genus $g$, the proof for graphs of non-orientable genus $g$ is analogous). We fix any subgraph $H\subseteq G$. We may assume that $H$ has at least $5\sqrt{g}$ vertices, otherwise, the statement is trivially true (as in this case every vertex of $H$ has degree (in $H$) less than $5\sqrt{g}$). By \cref{lem:closureminor} and \cref{lem:dens}, we have $|E(H)|\leq 3\cdot |V(H)| + 6g(H) - 6\leq 3\cdot |V(H)| + 6g$. This implies $|E(H)|/|V(H)|\leq 3+6g/|V(H)|\leq 3+6g/(5\sqrt{g}) \leq 5\sqrt{g}$, as claimed. \end{proof} As an immediate corollary from \cref{lem:closureminor}, \cref{lem:exclude} and \cref{lem:degeneracy}, we get that if $\mathcal{G}$ is a class of graphs of bounded genus, then $\mathcal{G}$ is a class of locally embeddable graphs. \medskip \noindent\textbf{Dominating sets.} Let~$G$ be a graph. A set~$D\subseteq V(G)$ \emph{dominates}~$G$ if all vertices of $G$ lie either in~$D$ or are adjacent to a vertex of~$D$, that is, if~$N[D]=V(G)$. A minimum dominating set~$D$ is a dominating set of minimum cardinality (among all dominating sets). The size of a minimum dominating set of~$G$ is denoted~$\gamma(G)$. \noindent\textbf{$f$-Approximation.} Let~$f:\mathbb{N}\rightarrow\mathbb{R}^+$. Given an~$n$-vertex graph~$G$ and a set~$D\subseteq V(G)$, we say that~$D$ is an~$f$-approximation for the dominating set problem if~$D$ is a dominating set of~$G$ and~$|D| \leq f(n)\cdot \gamma(G)$. An algorithm computes an $f$-approximation for the dominating set problem on a class~$\mathscr{C}$ of graphs if for all~$G\in\mathscr{C}$ it computes a set~$D$ which is an~$f$-approximation for the dominating set problem. If~$f$ maps every number to a fixed constant~$c$, we speak of a constant factor approximation. 
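These definitions are easy to check computationally. The following small Python sketch (illustration only, using the same adjacency-set representation as before; the value of $\gamma(G)$ is assumed to be known rather than computed) verifies domination and witnesses an approximation factor.

\begin{verbatim}
# Illustration only: check that D dominates G, given as
# {vertex: set_of_neighbors}; gamma is an assumed known value of the
# minimum dominating set size.

def closed_neighborhood(adj, A):
    return set(A) | {u for v in A for u in adj[v]}

def is_dominating_set(adj, D):
    return closed_neighborhood(adj, D) == set(adj)

def approximation_factor(adj, D, gamma):
    assert is_dominating_set(adj, D)
    return len(D) / gamma

# On the 4-cycle, {0, 2} is dominating and gamma = 2: factor 1.0.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_dominating_set(c4, {0, 2}), approximation_factor(c4, {0, 2}, 2))
\end{verbatim}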
\smallskip\noindent\textbf{Distributed complexity.} We consider the standard \textit{LOCAL} model of distributed computing~\cite{Linial:1992:LDG:130563.130578}, see also~\cite{local-survey} for a recent survey. A distributed system is modeled as a graph~$G$. At each vertex~$v\in V(G)$ there is an independent agent/host/processor with a unique identifier~$\mathit{id}(v)$. Initially, each agent has no knowledge about the network, but only knows its own identifier. Information about other agents can be obtained through message passing, i.e., through repeated interactions with neighboring vertices, which happens in synchronous communication rounds. In each round the following operations are performed: \begin{enumerate} \item[(1)] Each vertex performs a local computation (based on information obtained in previous rounds). \item[(2)] Each vertex~$v$ sends one message to each of its neighbors. \item[(3)] Each vertex~$v$ receives one message from each of its neighbors. \end{enumerate} The \emph{distributed complexity} of the algorithm is defined as the number of communication rounds until all agents terminate. We call a distributed algorithm $r$-local if its output depends only on the~$r$-neighborhoods~$N^r[v]$ of its vertices. Observe that an $r$-local algorithm can (trivially) be implemented in $r$ rounds in the \textit{LOCAL} model. \section{A Constant Local MDS Approximation}\label{sec:local-approx} \noindent Let us start by revisiting the MDS approximation algorithm for planar graphs by Lenzen et al.~\cite{ds-planar}, see Algorithm~\ref{alg:lenzen}. The algorithm works in two phases. In the first phase, it adds all vertices whose (open) neighborhood cannot be dominated by a small number of vertices (to be precise, by at most~$6$ vertices) to a set~$D$. It has been shown in~\cite{ds-planar} that the set~$D$ is small (at most $4$ times larger than a minimum dominating set) in planar graphs. In the second phase, the algorithm defines a dominator function~$dom$ which maps every vertex~$v$ that is not yet dominated by~$D$ to its dominator. The dominator~$dom(v)$ of~$v$ is chosen arbitrarily among those vertices of~$N[v]$ which dominate the maximal number of vertices not yet dominated. \begin{algorithm}[ht] \caption{~~Dominating Set Approximation Algorithm for Planar Graphs~\cite{ds-planar}} \begin{algorithmic}[1] \vspace{2mm} \STATE Input: Planar graph~$G$ \medskip \STATE~$(*$ \emph{Phase 1} ~$*)$ \STATE~$D \gets \emptyset$ \STATE \textbf{for}~$v\in V$ (in parallel) \textbf{do} \STATE \qquad\textbf{if} there does not exist a set~$A\subseteq V(G)\setminus \{v\}$ such that~$N(v)\subseteq N[A]$ and~$|A|\leq 6$ \textbf{then} \STATE \qquad \qquad $D\gets D\cup \{v\}$ \STATE \qquad \textbf{end if} \STATE \textbf{end for} \medskip \STATE~$(*$ \emph{Phase 2} ~$*)$ \STATE~$D'\gets \emptyset$ \STATE \textbf{for}~$v\in V$ (in parallel) \textbf{do} \STATE \qquad $d_{G-D}(v)\gets |N[v]\setminus N[D]|$ \STATE \qquad \textbf{if}~$v\in V\setminus N[D]$ \textbf{then} \STATE \qquad \qquad $\Delta_{G-D}(v)\gets \max_{w\in N[v]}d_{G-D}(w)$ \STATE \qquad \qquad choose any~$dom(v)$ from $N[v]$ with $d_{G-D}(dom(v))=\Delta_{G-D}(v)$ \STATE \qquad \qquad $D'\gets D'\cup \{dom(v)\}$ \STATE \qquad \textbf{end if} \STATE \textbf{end for} \STATE \textbf{return}~$D\cup D'$ \end{algorithmic}\label{alg:lenzen} \end{algorithm} We now propose the following small change to the algorithm. 
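For concreteness, the following centralized Python simulation sketches both phases. It is an illustration only: the graph representation is an assumption of the example, the covering test is done by brute force (exponential in the threshold, which is unproblematic only because local computation is free in the \textit{LOCAL} model), and the threshold is kept as a parameter $c$ in anticipation of the modification described next.

\begin{verbatim}
from itertools import combinations

# Centralized simulation (sketch only) of the two-phase algorithm, with
# the Phase-1 threshold |A| <= 2c kept as a parameter.

def nbh(adj, A):
    # closed neighborhood N[A]
    return set(A) | {u for v in A for u in adj[v]}

def phase1(adj, c):
    D = set()
    for v in adj:
        # only vertices in N[N(v)] \ {v} can help to cover N(v)
        cand = sorted(nbh(adj, adj[v]) - {v})
        if not any(adj[v] <= nbh(adj, A)
                   for k in range(2 * c + 1)
                   for A in combinations(cand, k)):
            D.add(v)
    return D

def phase2(adj, D):
    dominated = nbh(adj, D)
    resid = {v: len((adj[v] | {v}) - dominated) for v in adj}
    Dprime = set()
    for v in set(adj) - dominated:
        # a vertex of N[v] of maximal residual degree; ties arbitrary
        Dprime.add(max(adj[v] | {v}, key=lambda w: resid[w]))
    return Dprime

def approximate_mds(adj, c):
    D = phase1(adj, c)
    return D | phase2(adj, D)
\end{verbatim}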
As additional input, we require an integer~$c$ which bounds the edge density of \mbox{depth-1}~ minors of $G$, and we replace the condition~$|A|\leq 6$ in Line~5 by the condition~$|A|\leq 2c$. In the rest of this section, we show that the modified algorithm computes a constant factor approximation on any locally embeddable class of graphs. Note that the algorithm does not have to compute the edge density of~$G$, which is not possible in a local manner. Rather, we leverage \cref{lem:degeneracy}, which upper bounds the edge density for any fixed class of bounded genus graphs: this upper bound can be used as an input to the local algorithm. We first show that the set~$D$ computed in Phase~1 of the algorithm is small. The following lemma is a straightforward generalization of Lemma~6.3 of~\cite{ds-planar}, which in fact does not use topological arguments at all. \begin{lemma}\label{thm:largeneighbourhood} Let~$G$ be a graph and let~$M$ be a minimum dominating set of~$G$. Assume that for some constant~$c$ all \mbox{depth-1}~ minors~$H$ of~$G$ satisfy~$|E(H)|/|V(H)|\leq c$. Let \begin{align*} D\coloneqq & \{v\in V(G)~:~\text{there is no set $A\subseteq V(G)\setminus\{v\}$ such that $N(v)\subseteq N[A]$ and $|A|\leq 2c$}\}.\end{align*} Then~$|D|\leq (c+1)\cdot |M|$. \end{lemma} \begin{proof} Let~$H$ be the induced subgraph of $G$ with~$V(H)=M\cup N[D\setminus M]$. Since $M$ is a dominating set, we can fix for each $v\in N[D\setminus M]\setminus (D\cup M)$ a vertex $m_v\in M$ that is adjacent to $v$. Then for each $m\in M$, the subgraph $G_m$ which consists of the central vertex $m$, all $v\in N[D\setminus M] \setminus (D\cup M)$ such that $m=m_v$, and all edges $\{m,v\}$ is a star. Furthermore, observe that for different $m_1,m_2\in M$ the stars $G_{m_1}$ and $G_{m_2}$ are vertex disjoint. \pagebreak We construct a \mbox{depth-1}~ minor~$\tilde{H}$ of~$H$ by contracting the star subgraphs~$G_m$ for~$m\in M$ into vertices $v_m$. Then (all non-trivial inequalities will be explained below) \begin{align} (c+1)\cdot |D\setminus M| & = (2c+1)\cdot |D\setminus M|-c\cdot |D\setminus M| \nonumber\\ & \leq \sum_{w\in D\setminus M} d_{\tilde{H}}(w)-|E(\tilde{H}[D\setminus M])|\\ & \leq |E(\tilde{H})|\\ & \leq c\cdot |V(\tilde{H})|\\ & = c\cdot (|D\setminus M|+ |M|), \end{align} and hence~$|D\setminus M|\leq c\cdot|M|$, which implies the claim. \bigskip \begin{enumerate} \item Let~$w\in D\setminus M$. As~$N_G(w)$ cannot be covered by fewer than~$(2c+1)$ elements from~$V(G)\setminus \{w\}$ (by definition of~$D$),~$w$ also has at least~$(2c+1)$ neighbors in~$\tilde{H}$. Hence $\sum_{w\in D\setminus M} d_{\tilde{H}}(w)\geq (2c+1)\cdot |D\setminus M|$. On the other hand, every subgraph $\tilde{H}'$ of $\tilde{H}$ has at most~$c\cdot |V(\tilde{H}')|$ edges (every subgraph of a \mbox{depth-1}~ minor is also a \mbox{depth-1}~ minor of $G$ and we assume that every \mbox{depth-1}~ minor of $G$ has edge density at most $c$). Hence $\tilde{H}[D\setminus M]$ has at most $c\cdot |D\setminus M|$ edges. \item Every edge $\{v,w\}\in \tilde{H}$ with $v,w\in D\setminus M$ is counted twice in the sum $\sum_{w\in D\setminus M} d_{\tilde{H}}(w)$, once when we count $d_{\tilde{H}}(v)$ and once when counting $d_{\tilde{H}}(w)$. By subtracting the number of edges that run between vertices of $D\setminus M$ we get the second inequality. \item The third inequality holds by assumption on the density of \mbox{depth-1}~ minors of $G$. 
\item By construction, all vertices of $N[D\setminus M]\setminus D$ disappear into some star $G_m$, hence $\tilde{H}$ has exactly $|D\setminus M|+|M|$ vertices. \end{enumerate} \end{proof} \begin{assumption} For the rest of this section, we fix a graph $G$ which is locally embeddable, that is, $G$ excludes $K_{3,t}$ for some $t$ as a \mbox{depth-1}~ minor and all \mbox{depth-1}~ minors~$H$ of~$G$ satisfy $|E(H)|/|V(H)|\leq c$ for some constant~$c$ (hence, \cref{thm:largeneighbourhood} can be applied). Furthermore, we fix~$M$ and~$D$ as in \cref{thm:largeneighbourhood}. \end{assumption} Let us write~$R$ for the set~$V(G)\setminus N[D]$ of vertices which are not dominated by~$D$. The algorithm defines a dominator function~$dom:R\rightarrow N[R]\subseteq V(G)\setminus D$. The set~$D'$ computed by the algorithm is the image~$dom(R)$, which dominates all vertices in~$R$. As~$R$ contains exactly the vertices which are not dominated by~$D$, the set~$D'\cup D$ is a dominating set of~$G$. This simple observation proves that the algorithm correctly computes a dominating set of~$G$. Our aim is to find a bound on~$|dom(R)|$. \smallskip We fix an ordering of~$M$ as~$m_1,\ldots, m_{|M|}$ such that the vertices of~$M\cap D$ are first (minimal) in the ordering and inductively define a minimal set~$E'\subseteq E(G)$ such that~$M$ is also a dominating set with respect to~$E'$ as follows. For $i=1$, we add all edges~$\{m_1,v\}\in E(G)$ with~$v\in N(m_1)\setminus M$ to~$E'$. If for some $i\geq 1$ we have defined the set of edges $E'$ which are incident with $m_1,\ldots, m_i$, we continue to add for $i+1$ all edges~$\{m_{i+1}, v\}\in E(G)$ with~$v\in N(m_{i+1}) \setminus(M\cup N_{E'}(\{m_1,\ldots, m_{i}\}))$. \smallskip For~$m\in M$, let~$G_m$ be the star subgraph of~$G$ with center~$m$ and all vertices~$v$ with~$\{m,v\}\in E'$. Let~$H$ be the \mbox{depth-1}~ minor of~$G$ which is obtained by contracting all stars~$G_m$ for~$m\in M$. This construction is visualized in Figure~\ref{fig:construction}. In the figure, solid (undirected) lines represent edges from~$E'$; edges incident with $m\in M$ which are not in~$E'$ are dashed. We want to count the \emph{endpoints} of directed edges, which represent the dominator function~$dom$. \begin{figure}[h!] 
\centering \begin{tikzpicture} \filldraw[draw=black,fill=black!10!white,rounded corners=10pt] (1,1) -- (0.15, -0.9) -- (1.85,-0.9) -- cycle; \draw[fill=black] (1, 0.5) circle (1mm); \node at (1.5, 0.8) {$m_1$}; \node at (0.1, 0.1) {$G_{m_1}$}; \foreach \x in {0.5, 1, 1.5} { \draw[fill=black] (\x, -0.7) circle (0.4mm); \draw (\x,-0.7) -- (1,0.5); } \filldraw[draw=black,fill=black!10!white,rounded corners=10pt] (3,1) -- (2.15, -0.9) -- (3.85,-0.9) -- cycle; \draw[fill=black] (3, 0.5) circle (1mm); \node at (3.5, 0.8) {$m_2$}; \foreach \x in {2.5, 3, 3.5} { \draw[fill=black] (\x, -0.7) circle (0.4mm); \draw (\x,-0.7) -- (3,0.5); } \draw[fill=black] (4.25, -0.1) circle (0.4mm); \draw[fill=black] (4.55, -0.1) circle (0.4mm); \draw[fill=black] (4.85, -0.1) circle (0.4mm); \draw[dashed] (3,0.5) -- (1.5, -0.7); \filldraw[draw=black,fill=black!10!white,rounded corners=10pt] (6,1) -- (5.15, -0.9) -- (6.85,-0.9) -- cycle; \draw[fill=black] (6, 0.5) circle (1mm); \node at (6.8, 0.7) {$m_{|M|}$}; \foreach \x in {5.5, 6, 6.5} { \draw[fill=black] (\x, -0.7) circle (0.4mm); \draw (\x,-0.7) -- (6,0.5); } \draw (2.5,-0.7) edge[out=210,in=330,->] (1.58, -0.74); \draw (5.45,-0.74) edge[out=210,in=330,<-] (3.5, -0.7); \draw (6.4,-0.74) edge[out=210,in=330,<-] (6, -0.7); \end{tikzpicture} \caption{The graphs $G_m$. Solid (undirected) lines represent edges from $E'$, directed edges represent the dominator function $dom$. Dashed lines represent edges incident with $m\in M$ which are not in $E'$.} \label{fig:construction} \end{figure} In the following, we call a directed edge which represents the function~$dom$ a \emph{$dom$-edge}. We did not draw~$dom$-edges that either start or end in~$M$. When counting~$|dom(R)|$, we may simply add a term~$2|M|$ to estimate the number of endpoints of those edges. We also did not draw a~$dom$-edge starting in~$G_{m_1}$. In the figure, we assume that the vertex~$m_1$ belongs to $M\cap D$. Hence every vertex~$v$ from~$N[m_1]$ is dominated by a vertex from~$D$ and the function is thus not defined on such~$v$. However, the vertices of $N(m_1)$ may still serve as dominators, as shown in the figure. \smallskip The graph $H$ has~$|M|$ vertices and by our assumption on the density of \mbox{depth-1}~ minors of~$G$, it has at most~$c\cdot |M|$ edges. \smallskip Our analysis proceeds as follows. We distinguish between two types of~$dom$-edges, namely those which go from one star to another star and those which start and end in the same star. By the star contraction, all edges which go from one star to another star are represented by a single edge in~$H$. We show in \cref{lem:edgerepresentative} that each edge in~$H$ does not represent many such~$dom$-edges with distinct endpoints. As~$H$ has at most~$c\cdot |M|$ edges, we will end up with a number of such edges that is linear in~$|M|$. On the other hand, all edges which start and end in the same star completely disappear in~$H$. In \cref{lem:insidestars} we show that these star contractions ``absorb'' only few such edges with distinct endpoints. \smallskip We first show that an edge in~$H$ represents only few~$dom$-edges with distinct endpoints. For each $m\in M\setminus D$, we fix a set~$C_m\subseteq V(G)\setminus \{m\}$ of size at most~$2c$ which dominates~$N_{E'}(m)$. The existence of such a set follows from the definition of the set $D$. Recall that we assume that~$G$ excludes~$K_{3,t}$ as \mbox{depth-1}~ minor. \begin{lemma}\label{lem:edgerepresentative} Let~$1\leq i<j\leq |M|$. Let ~$N_i:=N_{E'}(m_i)$ and~$N_j:=N_{E'}(m_j)$. 
\begin{enumerate} \item If $m_j\in M\setminus D$, then \[|\{u \in N_j : \text{there is } v\in N_i \text{ with } \{u,v\}\in E(G)\}|\leq 2ct.\] \item If~$m_i\in M\setminus D$ (and hence~$m_j\in M\setminus D$), then \[|\{u \in N_i : \text{there is } v\in N_j \text{ with } \{u,v\}\in E(G)\}|\leq 4ct.\] \end{enumerate} \end{lemma} \smallskip \begin{proof} By definition of~$E'$, we may assume that $m_i\not\in C_{m_j}$ ($m_i$ is not connected to $N_{E'}(m_j)$ and hence it can safely be removed if it appears in $C_{m_j}$). To avoid confusion with the density constant~$c$, we denote elements of the covering sets by~$w$ in this proof. Let~$w\in C_{m_j}$ be arbitrary. Then there are at most~$t-1$ distinct vertices~$u_1,\ldots, u_{t-1}\in N_j\cap N(w)$ such that there are~$v_1,\ldots, v_{t-1}\in N_i$ (possibly not distinct) with~$\{u_k, v_k\}\in E(G)$ for all~$k$, $1\leq k\leq t-1$. Otherwise, we could contract the star with center~$m_i$ and branch vertices~$N(m_i)\setminus \{w\}$ and thereby find~$K_{3,t}$ as a \mbox{depth-1}~ minor, a contradiction. See Figure~\ref{fig:contraction} for an illustration in the case of an excluded~$K_{3,3}$. Possibly~$w\in N_j$ and $w$ is itself connected to a vertex of~$N_i$, hence there are at most~$t$ vertices in~$N_j\cap N[w]$ with a connection to $N_i$. As every vertex of~$N_j$ is dominated by~$C_{m_j}$ and~$|C_{m_j}|\leq 2c$, the first item follows. Regarding the second item, let~$w\in C_{m_i}$ be arbitrary. If~$w\neq m_j$, we conclude just as above that there are at most~$t-1$ distinct vertices~$u_1,\ldots, u_{t-1}\in N_i\cap N(w)$ such that there are~$v_1,\ldots, v_{t-1}\in N_j$ (possibly not distinct) with $\{u_k, v_k\}\in E(G)$ for all~$k$, $1\leq k\leq t-1$, and hence at most~$t$ vertices in~$N_i\cap N[w]$ with a connection to~$N_j$. Now assume~$w=m_j$. Let~$w'\in C_{m_j}$. There are at most~$t-1$ distinct vertices~$u_1, \ldots, u_{t-1}\in N_i\cap N(m_j)$ such that there are vertices~$v_1,\ldots, v_{t-1}\in N_j\cap N(w')$ (possibly not distinct) with~$\{u_k, v_k\}\in E(G)$ for all~$k$, $1\leq k\leq t-1$. Again, considering the possibility that~$w'\in N_i$, there are at most~$t$ vertices in~$N_i\cap N(m_j)$ with a connection to~$N_j\cap N(w')$. As~$|C_{m_j}|\leq 2c$, we conclude that in total there are at most~$2ct$ vertices in~$N_i\cap N(m_j)$ with a connection to~$N_j$. In total, there are hence at most~$(2c-1)t + 2ct\leq 4ct$ vertices of the described form. \begin{figure}[h!] 
\centering \begin{tikzpicture} \begin{scope}[xshift=4cm] \begin{scope}[yscale=1,xscale=-1] \draw[fill=black] (1, 0.5) circle (1mm); \draw[fill=black] (1, -1.7) circle (1mm); \node at (1.7, -2) {$c_1\in C_1$}; \foreach \x in {0.5, 1, 1.5} { \draw[fill=black] (\x, -0.7) circle (0.4mm); \draw (\x,-0.7) -- (1,-1.7); \draw (\x,-0.7) -- (1,0.5); } \node at (0.3, -0.6) {u}; \node at (0.8, -0.6) {v}; \node at (1.7, -0.6) {w}; \filldraw[draw=black,fill=black!10!white,rounded corners=10pt] (2.4,1) rectangle (3.8,-1.2); \draw[fill=black] (3, 0.5) circle (1mm); \node at (1.3, 0.7) {$m_2$}; \node at (3.3, 0.7) {$m_1$}; \foreach \x in {2.7, 3.3} { \draw[fill=black] (\x, -0.7) circle (0.4mm); \draw (\x,-0.7) -- (3,0.5); } \draw (2.7,-0.7) edge[out=210,in=330,-] (0.5, -0.7); \draw (2.7,-0.7) edge[out=210,in=330,-] (1, -0.7); \draw (3.3,-0.7) edge[out=210,in=330,-] (1.5, -0.7); \end{scope} \end{scope} \node at (4.5, -0.7) {$\Longrightarrow$}; \begin{scope}[xshift=5.5cm] \draw[fill=black] (0, 0.5) circle (1mm); \node at (0.3, 0.7) {$m_2$}; \filldraw[draw=black,fill=black!10!white,rounded corners=5pt] (0.6,1) rectangle (1.6,0.2); \draw[fill=black] (1, 0.5) circle (1mm); \node at (1.3, 0.7) {$m_1$}; \draw[fill=black] (2, 0.5) circle (1mm); \node at (2.3, 0.7) {$c_1$}; \draw[fill=black] (0, -1.5) circle (1mm); \node at (0.3, -1.7) {$u$}; \draw[fill=black] (1, -1.5) circle (1mm); \node at (1.3, -1.7) {$v$}; \draw[fill=black] (2, -1.5) circle (1mm); \node at (2.3, -1.7) {$w$}; \foreach \x in {0, 1, 2} { \foreach \y in {0,1,2} { \draw (\x,0.5) -- (\y, -1.5); } } \end{scope} \end{tikzpicture} \caption{Visualization of the proof of \cref{lem:edgerepresentative} in the case of an excluded $K_{3,3}$.} \label{fig:contraction} \end{figure} \end{proof} We write~$Y$ for the set of all vertices~$\{u\in N_{E'}(m_i) :$ $m_i\not\in D$ and there is~$v\in N_{E'}(m_j)$,~$j\neq i$, with~$\{u,v\}\in E(G)\}$. \begin{corollary}\label{crl:numedgesbetweendiamonds} $|Y|\leq 6c^2t\cdot |M|$. \end{corollary} \begin{proof} Each of the at most~$c\cdot |M|$ edges in~$H$ represents edges between~$N_i$ and~$N_j$, where~$N_i$ and~$N_j$ are defined as above. By the previous lemma, if~$i<j$, there are at most~$4ct$ vertices in~$N_i\cap Y$ and at most~$2ct$ vertices in~$N_j\cap Y$. Hence in total, each edge accounts for at most~$6ct$ vertices in~$Y$. \end{proof} We continue to count the edges which are inside the stars. First, we show that every vertex has small degree inside its own star. \begin{lemma}\label{lem:edgestosamestar} Let~$m\in M\setminus D$ and let~$v\in N_{E'}(m)\setminus C_m$. Then \[|\{u \in N_{E'}(m) : \{u,v\}\in E(G)\}|\leq 2c(t-1).\] \end{lemma} \smallskip \begin{proof} Let~$w\in C_m$ be arbitrary. By the same argument as in \cref{lem:edgerepresentative}, there are at most~$t-1$ distinct vertices~$u_1,\ldots, u_{t-1}\in N_{E'}(m)\cap N(w)$ such that~$\{u_k, v\}\in E(G)$ for all~$k$, $1\leq k\leq t-1$. As every vertex of~$N_{E'}(m)$ is dominated by~$C_m$ and~$|C_m|\leq 2c$, the claim follows. \end{proof} Let~$C\coloneqq \bigcup_{m\in M\setminus D}C_m$. We show that there are only few vertices which are highly connected to~$M\cup C$. Let~$Z:=\{u \in N_{E'}(M\setminus D) : |N(u)\cap (M\cup C)|>4c\}$. \begin{lemma}\label{lem:Z} \[|Z|< |M\cup C|.\] \end{lemma} \smallskip \begin{proof} Assume that~$|Z|\geq |M\cup C|$. Then the subgraph induced by~$Z\cup M\cup C$ has more than $\frac{1}{2}\cdot 4c|Z|=2c|Z|$ edges and at most~$|Z\cup M\cup C|\leq 2|Z|$ vertices. Hence its edge density is larger than $2c|Z|/(2|Z|)= c$, contradicting our assumption on the edge density of \mbox{depth-1}~ minors of~$G$ (which include the subgraphs of~$G$). 
\end{proof} Finally, we consider the image of the~$dom$-function inside the stars. \begin{lemma}\label{lem:insidestars} \[\Big|\bigcup_{m\in M\setminus D}\{u\in N_{E'}(m) : dom(u)\in N_{E'}(m)\setminus (Y\cup Z)\}\Big| \leq (2(t-1)+4)c\cdot|M|.\] \end{lemma} \smallskip \begin{proof} Fix some~$m\in M\setminus D$ and some $u\in N_{E'}(m)$ with~$dom(u)\in N_{E'}(m)\setminus (Y\cup Z)$. Because~$dom(u)\not\in Y$,~$dom(u)$ is not connected to a vertex of a different star, except possibly for vertices from~$M$. Because~$dom(u)\not\in Z$, it is connected to at most~$4c$ vertices from~$M\cup C$. Hence it is connected to at most~$4c$ vertices from different stars. According to \cref{lem:edgestosamestar}, $dom(u)$ is connected to at most~$2c(t-1)$ vertices from the same star. Hence the degree of~$dom(u)$ is at most $4c+2c(t-1)$. Because~$u$ preferred to choose~$dom(u)\in N_{E'}(m)$ over~$m$ as its dominator, we conclude that~$m$ has at most~$4c+2c(t-1)$~$E'$-neighbors. Hence, in total there can be at most~$(2(t-1)+4)c\cdot |M|$ such vertices. \end{proof} We are now ready to put together the numbers. \begin{lemma}\label{lem:mainlemma} If all \mbox{depth-1}~ minors $H$ of $G$ have edge density at most~$c$ and $G$ excludes~$K_{3,t}$ as a \mbox{depth-1}~ minor, then the modified algorithm computes a~$6c^2t+(2t+5)c+4$ approximation for the minimum dominating set problem on~$G$. \end{lemma} \begin{proof} Since~$M$ is a dominating set also with respect to the edges~$E'$, it suffices to bound~$|\{ dom(u) : u\in (N_{E'} [M\setminus D]\setminus N[D])\}|$. This set is partitioned into the following (not necessarily disjoint) sets. First, all vertices $dom(u)$ for $dom$-edges that go from one star to another star are found in one of the sets $Y=\{u\in N_{E'}(m_i) :$ $m_i\notin D$ and there is~$v\in N_{E'}(m_j)$, $j\neq i$, with~$\{u,v\}\in E(G)\}$, $dom(R)\cap M$ and $dom(M)$. All other $dom$-edges connect vertices inside individual stars. Here, $dom(R)$ splits into those vertices which are highly connected to $M\cup C$, that is, the set $Z=\{u \in N_{E'}(M\setminus D) : |N(u)\cap (M\cup C)|>4c\}$, the set $C$, and the set $Y$ (which will not be counted twice though). All other $dom$-edges lead to vertices which lie neither in $Y$ nor in $Z$. In the previous lemmas we have bounded the sizes of each of the described sets. The set~$D$ has size at most~$(c+1)|M|$ according to \cref{thm:largeneighbourhood}. According to \cref{crl:numedgesbetweendiamonds}, the set~$Y$ has size at most $6c^2t|M|$. In particular, there are at most this many vertices~$dom(u)\in N_{E'}(m_i)$ with $u\in N_{E'}(m_j)$ for~$i\neq j$. Clearly, $|dom(R)\cap M|\leq |M|$ and $|dom(M)|\leq |M|$. According to \cref{lem:Z}, the set~$Z$ satisfies~$|Z|< |M\cup C|$. We have~$|C|\leq 2c|M|$, as each~$C_m$ has size at most~$2c$, and hence $|Z|\leq (2c+1)|M|$. It remains to count the image of~$dom$ inside the stars which does not point to~$Y$ or~$Z$. According to \cref{lem:insidestars}, this image has size at most~$(2(t-1)+4)c|M|$. In total, we can bound $|D\cup dom(R)|$ by \begin{align*} (c+1)|M| & +6c^2t|M|+2|M|+(2c+1)|M| + (2(t-1)+4)c|M| \leq \; (6c^2t+(2t+5)c+4)|M|. \end{align*} \end{proof} Our theorem for bounded genus graphs is now a corollary of \cref{lem:exclude}, \ref{lem:degeneracy} and \ref{lem:mainlemma}. \begin{theorem}\label{thm:main} Let~$\mathcal{C}$ be a class of graphs of orientable genus at most~$g$ (non-orientable genus at most~$\tilde{g}$, resp.). The modified algorithm computes an~$\mathcal{O}(g^2)$-approximation ($\mathcal{O}(\tilde{g}^2)$-approximation, resp.) 
for the dominating set problem in a constant number of communication rounds. \end{theorem} For the special case of planar graphs, our analysis shows that the algorithm computes a~$199$-approximation. This is not much worse than Lenzen et al.'s original analysis (a factor of~$130$); however, it is off by a factor of almost~$4$ from Wawrzyniak's~\cite{better-upper-planar} improved analysis (a factor of~$52$). \subsection{Improving the Approximation Factor with Preprocessing}\label{sec:improved} \noindent We now show that the approximation factor related to the genus~$g$, derived in the previous section, can be improved using a local preprocessing step. Given a graph~$G$ and a vertex~$v\in V(G)$, let $K=\{K_1,\ldots,K_j\}$ denote the set of minimal subgraphs of $G$ that contain $v$ and have $K_{3,3}$ as a \mbox{depth-1}~ minor. Let $K_h\in K$ be the subgraph with lexicographically smallest identifiers in $K$. We call $K_h$ the \emph{$v$-canonical subgraph} of $G$ and we denote it by $K_v$. If $K=\emptyset$, we set $K_v:=\emptyset$. \begin{lemma}\label{lem:findk33} Given a graph~$G$ and a vertex~$v\in V(G)$, the $v$-canonical subgraph~$K_v$ can be computed locally in at most~$6$ communication rounds. Furthermore, $K_v$ has at most $24$ vertices. \end{lemma} \begin{proof} The proof is constructive. As $K_{3,3}$ has diameter $2$, every minimal subgraph of $G$ containing $K_{3,3}$ as a \mbox{depth-1}~ minor has diameter at most $6$ (every edge may have to be replaced by a path of length $3$). Therefore, it suffices to consider the subgraph $H=G[N^6[v]]$ induced by the vertices at distance at most $6$ from $v$, and to find in~$H$ the lexicographically minimal subgraph which contains $K_{3,3}$ as a \mbox{depth-1}~ minor and includes~$v$ as a vertex. If such a subgraph exists, we output it as~$K_v$; otherwise we output the empty set. Furthermore,~$K_{3,3}$ has~$9$ edges and hence a minimal subgraph containing it as a \mbox{depth-1}~ minor has at most~$24$ vertices (again, every edge is subdivided at most twice and $2\cdot 9+6=24$). \end{proof} To improve the approximation factor, we propose the following modified algorithm, see Algorithm~\ref{alg:modified-approx}. We first carry out the first phase of Algorithm~\ref{alg:lenzen} with density parameter $10\sqrt{g}$ (the parameter is twice the edge density bound of the input graph). In the following preprocessing phase we eliminate all copies of depth-$1$ minor models of $K_{3,3}$ that $G$ possibly contains. By \cref{lem:decreasegenus} we know that there are at most $g$ (where $g$ is the genus of the graph) pairwise disjoint such models. As guaranteed by \cref{lem:findk33}, the vertices can make a canonical local choice of which model to delete. After $g$ elimination rounds we are left with a locally embeddable graph (with parameter $t=3$) and we call the second phase of Algorithm~\ref{alg:lenzen}. 
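The following Python sketch illustrates one concrete reading of this local computation. It is an illustration only: the predicate \texttt{has\_k33\_depth1\_minor} is a stand-in for the brute-force minor test (free local computation in the \textit{LOCAL} model, where only the $6$ rounds for learning the ball count), and we realize ``minimal with lexicographically smallest identifiers'' as ``smallest size, then lexicographic order of vertex sets''; both are assumptions of the sketch.

\begin{verbatim}
from itertools import combinations

# Illustration only: compute the ball N^6[v] and search for a smallest,
# lexicographically first vertex set containing v whose induced subgraph
# has K_{3,3} as a depth-1 minor. The minor test itself is assumed.

def ball(adj, v, r):
    seen = {v}
    frontier = {v}
    for _ in range(r):
        frontier = {u for w in frontier for u in adj[w]} - seen
        seen |= frontier
    return seen

def canonical_subgraph(adj, v, has_k33_depth1_minor):
    B = sorted(ball(adj, v, 6))
    for size in range(6, 25):            # a witness has 6 to 24 vertices
        for S in combinations(B, size):  # lexicographic by identifiers
            if v in S and has_k33_depth1_minor(adj, set(S)):
                return set(S)            # this is K_v
    return set()                         # no witness: K_v is empty
\end{verbatim}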
\begin{algorithm}[t] \caption{Dominating Set Approximation for Graphs of Genus~$\leq g$} \label{alg:modified-approx} \begin{algorithmic}[1] \vspace{2mm} \STATE Input: Graph~$G$ of genus at most $g$ \smallskip \STATE \textbf{Run Phase~1 of the modified Algorithm~\ref{alg:lenzen} with density parameter $10\sqrt{g}$ to obtain a set $D$} \smallskip \STATE~$(*$ \emph{Preprocessing Phase} ~$*)$ \STATE \textbf{for}~$v\in V-D$ (in parallel) \textbf{do} \STATE \qquad \textbf{compute~$K_v$ in~$G-D$ (see \cref{lem:findk33}}) \STATE \textbf{end for} \STATE \textbf{for}~$i=1..g$ \textbf{do} \STATE \qquad \textbf{for}~$v\in V-D$ (in parallel) \textbf{do} \STATE \qquad \qquad \textbf{if}~$K_v\neq\emptyset$ \textbf{then} \textbf{chosen := true} \STATE \qquad \qquad \textbf{for all~$u\in N^{12}(v)$ do} \STATE \qquad \qquad \qquad \textbf{if~$K_u\cap K_v \neq\emptyset$ and~$u < v$}\textbf{ then chosen := false} \STATE \qquad \qquad \textbf{end for} \STATE \qquad \qquad \textbf{if (chosen = true) then }$D := D\cup V(K_v)$ \STATE \qquad \textbf{end for} \STATE \textbf{end for} \smallskip \STATE \textbf{Run Phase~2 of Algorithm~\ref{alg:lenzen}~} \end{algorithmic} \end{algorithm} \begin{theorem}\label{thm:modified} Algorithm~\ref{alg:modified-approx} provides a~$24g+\mathcal{O}(1)$ MDS approximation for graphs of genus at most~$g$, and requires~$12g+\mathcal{O}(1)$ communication rounds. \end{theorem} \begin{proof} The resulting vertex set is clearly a dominating set. It remains to bound its size. As Phase~1 is unchanged, the computed set $D$ is at most $6\sqrt{g}$ times larger than an optimal dominating set: by \cref{thm:largeneighbourhood}, the algorithm, called with parameter $2c$, outputs a set at most $c+1$ times larger than an optimal dominating set, and here $c=5\sqrt{g}$ according to \cref{lem:degeneracy}. \smallskip In the following preprocessing phase, if for two vertices~$u\neq v$ we choose both~$K_u$ and $K_v$, then they must be disjoint: since every minimal subgraph containing $K_{3,3}$ as a \mbox{depth-1}~ minor has diameter at most~$6$, if two such canonical subgraphs $K_u$ and $K_v$ intersect, then the distance between $u$ and $v$ is at most~$12$. Hence, each vertex $v$ can decide in its $12$-neighborhood whether its canonical subgraph $K_v$ is the smallest among all choices. On the other hand, by \cref{lem:decreasegenus}, there are at most~$g$ pairwise disjoint such minor models. So in the \emph{preprocessing} phase, we can remove at most $g$ disjoint subgraphs~$K_v$ (and add their vertices to the dominating set) and thereby select at most~$24g$ extra vertices for the dominating set. Once the \emph{preprocessing} phase is finished, the remaining graph is locally embeddable. Observe that if the input graph $G$ is planar, no vertices will be added to $D$ in the preprocessing phase. \smallskip In order to bound the size of the set computed in the third phase, we can use the analysis of \cref{lem:mainlemma} for~$t=3$, which, together with the first phase and the preprocessing phase, results in a~$24g+\mathcal{O}(1)$-approximation guarantee. \smallskip To count the number of communication rounds, note that the only change happens in the preprocessing phase. There, in each of the $g$ iterations, we need~$12$ communication rounds to explore the~$12$-neighborhood. Therefore, the number of communication rounds is~$12g + \mathcal{O}(1)$. 
\end{proof} This significantly improves the approximation upper bound of \cref{thm:main}: namely from $4(6c^2+2c)g + \mathcal{O}(1)$, where $c=\mathcal{O}(\sqrt{g})$, hence from $\mathcal{O}(g^2)$, to~$24g + \mathcal{O}(1)$, at the price of~$12g$ extra communication rounds. \subsection{A Logical Perspective}\label{sec:stoneage} \noindent Interestingly, as we will elaborate in the following, a small modification of Algorithm~\ref{alg:lenzen} can be interpreted both from a distributed computing perspective, namely as a local algorithm of constant distributed time complexity, and from a logical perspective. First-order logic has atomic formulas of the form $x=y$, $x<y$ and~$E(x,y)$, where~$x$ and~$y$ are first-order variables and $E$ is a binary relation symbol. The set of first-order formulas is closed under Boolean combinations and existential and universal quantification over the vertices of a graph. To define the semantics, we inductively define a satisfaction relation~$\models$, where for a graph~$G$, a formula $\phi(x_1,\ldots, x_k)$ and vertices~$v_1,\ldots, v_k \in V(G)$,~$G\models\phi(v_1,\ldots, v_k)$ means that~$G$ satisfies~$\phi$ if the free variables~$x_1,\ldots, x_k$ are interpreted as~$v_1,\ldots, v_k$, respectively. The free variables of a formula are those that have an instance not in the scope of a quantifier, and we write~$\phi(x_1, \ldots , x_k)$ to indicate that the free variables of the formula~$\phi$ are among~$x_1,\ldots, x_k$. For~$\phi(x_1,x_2)=x_1<x_2$, we have~$G\models \phi(v_1,v_2)$ if~$v_1<v_2$ with respect to the ordering~$<$ of~$V(G)$, and for~$\phi(x_1,x_2)= E(x_1,x_2)$ we have~$G\models\phi(v_1,v_2)$ if~$\{v_1,v_2\}\in E(G)$. The meaning of the equality symbol, the Boolean connectives, and the quantifiers is as expected. A first-order formula~$\phi(x)$ with one free variable naturally defines the set~$\phi(G)=\{v\in V(G) : G\models\phi(v)\}$. We say that a formula~$\phi$ defines an~$f$-approximation to the dominating set problem on a class~$\mathscr{C}$ of graphs if~$\phi(G)$ is an~$f$-approximation of a minimum dominating set for every graph~$G\in\mathscr{C}$. Observe that first-order logic is not able to count; in particular, no fixed formula can determine a neighbor of maximum degree in Line~14 of the algorithm. Also note, however, that the only place in our analysis which refers to the dominator function~$dom$ explicitly is \cref{lem:insidestars}. The proof of the lemma in fact shows that we do not have to choose a vertex of maximal residual degree, but that it suffices to choose a neighbor of degree greater than~$4c+2c(t-1)$ if such a vertex exists, or any vertex otherwise. For every fixed class of bounded genus, this number is a constant. We use the binary predicate $<$ to make a unique choice of a dominator in this case. Then we define $D$ by the following formula (the disjunct $y=x_i$ accounts for the fact that $N[A]$ contains $A$ itself) \begin{align*} \varphi_D(x) = \neg \exists x_1\ldots \exists x_{2c}\,\forall y\,\Big(E(x,y)\rightarrow \bigvee_{1\leq i\leq 2c} \big(E(y,x_i)\vee y=x_i\big)\Big) \end{align*} and $D'$ by \begin{align*} \psi_{D'}(x) = \exists y \Big(E(x,y)\wedge \neg\varphi_D(y)\wedge \forall z\big(\varphi_D(z)\rightarrow \neg E(y,z)\big)\wedge \xi_{\max}(x,y)\Big), \end{align*} where $\xi_{\max}(x,y)$ states that $x$ is the maximum (residual) degree neighbor of $y$ up to threshold $4c+2c(t-1)$. We can express this cumbersome formula with $4c+2c(t-1)$ quantifiers. Note that the formulas $\varphi_D$ and $\psi_{D'}$ are different in spirit. 
While $\varphi_D$ directly describes a property of vertices which causes them to be included in the dominating set, in the formula $\psi_{D'}(x)$ we state the existence of an element which is not yet dominated by $D$ and which elects $x$ as a dominator. \section{$(1+\epsilon)$-Approximations}\label{sec:star-approx} In this section we show how to extend the techniques developed by Czygrinow et al.~\cite{fast-planar} for computing $(1+\epsilon)$-approximate dominating sets in planar graphs to graphs of sub-logarithmic expansion. These are very general classes of sparse graphs, including planar graphs and all classes that exclude a fixed minor. We focus on the dominating set problem; however, the approximations for the maximum weight independent set problem and the maximum matching problem proposed by Czygrinow et al.\ can be extended in a similar way. \smallskip Our notation in this section closely follows that of Czygrinow et al.~\cite{fast-planar}. In particular, we will work with vertex and edge weighted graphs, that is, every graph $G$ is additionally equipped with two weight functions $\omega:V(G)\rightarrow \mathbb{R}^+$ and $\bar{\omega}:E(G)\rightarrow \mathbb{R}^+$. If $H\subseteq G$ is a subgraph of $G$, then we write $\omega(H)$ for $\sum_{v\in V(H)}\omega(v)$ and $\bar{\omega}(H)$ for $\sum_{e\in E(H)}\bar{\omega}(e)$. If $\{G_v : v\in V(H)\}$ is a minor model of a graph $H$ in a weighted graph $G$, then $H$ naturally inherits weight functions $\omega_H$ and $\bar{\omega}_H$ from $G$ as follows. If $u\in V(H)$ is represented by the subgraph $G_u$ in the minor model, then $\omega_H(u)=\sum_{w\in V(G_u)}\omega(w)$, and if $\{u,v\}\in E(H)$, then $\bar{\omega}_H(\{u,v\})=\sum_{e\in E(G),\, e\cap V(G_u)\neq \emptyset,\, e\cap V(G_v)\neq \emptyset}\bar{\omega}(e)$. \subsection{Clustering Algorithm} We first generalize the partitioning algorithm provided by Czygrinow et al.\ to graphs with sub-logarithmic expansion. \begin{definition}[Pseudo-Forest] A \emph{pseudo-forest} is a directed graph $\vec{F}$ in which every vertex has out-degree at most~$1$. \end{definition} For a directed graph $\vec{F}$, we write $F$ for the underlying undirected graph of~$\vec{F}$. The following lemma is a straightforward generalization of Fact~1 of~\cite{fast-planar}. \begin{lemma} \label{lem:pseudoforest} Let $G$ be a graph of arboricity $a$ with an edge-weight function $\bar{\omega}$. There is a distributed procedure which in two rounds finds a pseudo-forest $\vec{F}$ such that $F$ is a subgraph of $G$ with $\bar{\omega}(F)\geq \bar{\omega}(G)/(2a)$. \end{lemma} \begin{proof} We run the following algorithm. For every vertex~$v$, we choose one edge $\{v,w\}$ of largest weight and direct it from $v$ to $w$. If we happen to choose the same edge $\{v,w\}$ for both of its endpoints, we direct it away from the endpoint with the larger identifier. Hence every vertex has out-degree at most one and the algorithm outputs a pseudo-forest $\vec{F}$. Let us show that $\bar{\omega}(F)\geq \bar\omega(G)/(2a)$. Without loss of generality we assume that $G$ has no isolated vertices (we make a statement about edge weights only). As $G$ has arboricity at most $a$, there exists a forest cover $\mathcal{F}$ into at most $a$ forests. So one of the forests $T\in \mathcal{F}$ collects weight $\bar{\omega}(T)\ge \bar{\omega}(G)/a$. Now associate with each vertex $v$ of $T$ the value $w_T(v)$, which is the weight of the edge connecting it to its parent (if it exists). 
Similarly, write $w_F(v)$ for the weight of the arc $(v,w)$ or $(w,v)$ in $\vec{F}$ that was chosen for $v$ in the algorithm. Observe that each edge of $F$ is counted at most twice in the sum $\sum_{v\in V(G)}w_F(v)$. Hence we have $\bar\omega(F)\geq \sum_{v\in V(G)}w_F(v)/2\geq \sum_{v\in V(T)}w_F(v)/2\geq \sum_{v\in V(T)}w_T(v)/2\geq \bar\omega(G)/(2a)$. \end{proof} It is straightforward to generalize also Lemma~2 of~\cite{fast-planar}. \begin{lemma}[\textsc{HeavyStar}]\label{lem:heavystar} There is a local algorithm which takes an edge weighted $n$-vertex graph $G$ of arboricity at most $a$ as input and in $\mathcal{O}(\log^*n)$ rounds computes a partition of $V(G)$ into vertex disjoint stars $H_1,\ldots, H_x\subseteq G$ such that $H=G/H_1/\ldots/H_x$ has total weight $\bar{\omega}_H(H)\leq (1-1/(8a))\cdot \bar{\omega}(G)$. \end{lemma} We refrain from presenting a proof of this lemma, as the proof is literally a copy of the proof given in \cite{fast-planar}. Czygrinow et al.~\cite{fast-planar} use only the fact that planar graphs have arboricity $3$, while we make the statement for graphs of arboricity $a$. Hence only numbers must be adapted in the proof (from $24$ in their work to $8a$ in our case). We refer the reader to the very accessible presentation in~\cite{fast-planar}. \smallskip We come to the final clustering algorithm. We fix a function $f(r)\in o(\log r)$ which bounds the expansion (density of depth-$r$ minors) of the input graphs $G$. Recall that arboricity is within factor~$2$ of the density of subgraphs, hence the depth-$r$ minors of $G$ have arboricity bounded by $2f(r)$. By iteratively taking \mbox{depth-1}~ minors, we obtain minors at exponential depth, as stated in the next lemma. \begin{lemma}[Proposition 4.1, statement (4.4) of \cite{nevsetvril2012sparsity}]\label{lem:iterative-density} Taking a depth-$1$ minor of a \mbox{depth-$1$} minor for $r$ times gives a depth-$((3^r-1)/2)$ minor of $G$. \end{lemma} In particular, when iterating the star contraction routine of Algorithm~\ref{alg:clustering}, in iteration $i$ we are dealing with a graph of arboricity at most $2f((3^i-1)/2)\eqqcolon g(i)$, which is sublinear in $i$. Hence, we may apply \cref{lem:heavystar} with arboricity parameter $g(i)$ in iteration $i$. Note that we do not require the arboricity as an input to the algorithm of \cref{lem:heavystar}. Note furthermore that we have \[\lim_{i\rightarrow \infty}\big(1-1/(8g(i))\big)^i\leq \lim_{i\rightarrow \infty}e^{-i/(8g(i))}=0,\] hence for every $\epsilon>0$ there is a constant $i_0$ depending only on $\epsilon$ and $g$ (and not on the graph size~$n$) such that $\big(1-1/(8g(i_0))\big)^{i_0}\leq \epsilon$ (we may assume that the function $g$ is monotone, as the density of depth-$r$ minors cannot be smaller than the density of depth-$r'$ minors for $r'\leq r$). \begin{algorithm}[H] \caption{Clustering} \begin{algorithmic}[1] \vspace{2mm} \STATE Input: $G$ with $\nabla_r(G)\leq f(r)$, $\epsilon>0$ and $i_0$ with $\big(1-1/(8g(i_0))\big)^{i_0}\leq \epsilon$ \smallskip \STATE \textbf{for} $i=1,\ldots, i_0$ \STATE \qquad Call the algorithm of \cref{lem:heavystar} to find vertex disjoint stars $H_1,\ldots, H_x$ in $G$ \STATE \qquad $H\leftarrow G/H_1/\ldots /H_x$ with weights modified accordingly \STATE \textbf{end for} \STATE \textbf{return} $\{C_i=V(H_i) : 1\leq i\leq x\}$. \end{algorithmic}\label{alg:clustering} \end{algorithm} \begin{lemma}[Clustering] \label{lem:cluster} Let $G$ be a graph with $\nabla_r(G)\leq f(r)$. 
If the clustering algorithm gets $G$ and $\epsilon>0$ as input, then it returns a set of clusters $C_1,\ldots,C_x$ partitioning $V(G)$ such that each cluster has radius at most $(3^{i_0}-1)/2$ (where $i_0$ is the number of iterations in the algorithm). Moreover, if we contract each $C_i$ to a single vertex to obtain a graph~$H$, then $\bar{\omega}_H(H) \le \epsilon \cdot \bar{\omega}(G)$. The algorithm uses $\mathcal{O}_{\epsilon}(\log^* n)$ communication rounds. \end{lemma} In the above lemma we use the notation $\mathcal{O}_{\epsilon}$ to express that we are treating all constants depending on $\epsilon$ as constants. \begin{proof} As described above, the graph $G_i$ we are dealing with in iteration $i$ has arboricity at most $2f((3^i-1)/2)=g(i)$, which is sublinear in $i$. By applying \cref{lem:heavystar} to~$G_i$, we compute in $\mathcal{O}(\log^*n)$ rounds a partition of $V(G_i)$ into vertex disjoint stars $H_1,\ldots, H_x\subseteq G_i$ such that $H=G_i/H_1/\ldots/H_x$ has total weight $\bar{\omega}_H(H)\leq (1-1/(8g(i)))\cdot \bar{\omega}(G_i)$. Note that by \cref{lem:iterative-density}, the graph $G_i$ obtained in iteration $i$ is a depth-$((3^i-1)/2)$ minor of $G$. Hence, by induction, after~$i$ iterations the edge weight of the graph $G_i$ is at most $\big(1-1/(8g(i))\big)^{i}\cdot\bar{\omega}(G)$. As argued above, there exists~$i_0$ such that $\big(1-1/(8g(i_0))\big)^{i_0}\leq \epsilon$, at which point we stop the algorithm. As each iteration takes at most $\mathcal{O}(\log^* n)$ rounds, in total we invest at most $\mathcal{O}(\log^*n\cdot i_0)=\mathcal{O}_{\epsilon}(\log^* n)$ rounds to compute the clustering. \end{proof} \subsection{Approximation for Minimum Dominating Set} We are ready to prove the main theorem of this section. \begin{theorem}\label{thm:ds-main} There exists a deterministic distributed algorithm which gets as input \begin{enumerate} \item an $n$-vertex graph $G$ of sub-logarithmic expansion, \item a $c$-approximation of a minimum dominating set $D$ of $G$ for some constant $c$, and \item a real parameter $\epsilon>0$. \end{enumerate} The algorithm runs in $\mathcal{O}_{\epsilon,c}(\log^*n)$ rounds and outputs a $(1+\epsilon)$-approximation of a minimum dominating set of $G$. \end{theorem} \begin{corollary}\label{crl:approx} Let $\mathscr{C}$ be a class of graphs of sub-logarithmic expansion. Assume there exists an algorithm which computes $c$-approximations of dominating sets on graphs from $\mathscr{C}$ in $t$ rounds. Then there exists an algorithm which for every $\epsilon>0$ computes a $(1+\epsilon)$-approximation of a minimum dominating set on every $n$-vertex graph $G\in\mathscr{C}$ in $\mathcal{O}_{\epsilon,c}(t+ \log^*n)$ rounds. \end{corollary} We have chosen to present this extension of Czygrinow et al.~\cite{fast-planar} because it connects very well to the results we obtained in the previous section. In particular, \cref{crl:approx} in combination with \cref{thm:main} gives a deterministic distributed $(1+\epsilon)$-approximation algorithm in $\mathcal{O}_{\epsilon,g}(\log^* n)$ rounds for dominating sets on graphs of genus at most $g$. We can similarly combine the corollary with the result of Amiri et al.~\cite{amiri2017distributed} to obtain \mbox{$(1+\epsilon)$}-approximations in $\mathcal{O}(\log n)$ rounds on graphs of sub-logarithmic expansion. 
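The proof below follows a simple pipeline, summarized in the following centralized Python sketch (illustration only: \texttt{cluster} stands in for the clustering algorithm of \cref{lem:cluster}, \texttt{exact\_mds} for the free local computation of optimal dominating sets on the parts, and the graph representation is an assumption of the example).

\begin{verbatim}
# Centralized sketch of the (1+eps)-approximation pipeline: start from a
# c-approximate dominating set D, build radius-1 clusters, coarsen them,
# and re-solve each part. 'cluster' and 'exact_mds' are stand-ins.

def one_plus_eps_mds(adj, D, eps, c, nabla1, cluster, exact_mds):
    # radius-1 clusters: every vertex joins its minimum-id dominator
    parts1 = {d: {d} for d in D}
    for v in adj:
        if v not in D:
            parts1[min(u for u in (adj[v] | {v}) if u in D)].add(v)
    # coarsen with weight threshold delta = eps / (2 c nabla_1(G))
    delta = eps / (2 * c * nabla1)
    parts = cluster(adj, list(parts1.values()), delta)
    # solve each part optimally and return the union
    S = set()
    for U in parts:
        S |= exact_mds({v: adj[v] & U for v in U})  # induced graph G[U]
    return S
\end{verbatim}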
\begin{proof}[Proof of \cref{thm:ds-main}]
Let $G$ be the given input graph and let $D$ be a dominating set of $G$ with $|D|\leq c\cdot \gamma(G)$, say $D=\{w_1,\ldots, w_k\}$ (recall that $\gamma(G)$ denotes the size of a minimum dominating set of $G$). Associate each vertex $v\in V(G)\setminus D$ with one of its dominators, say the one of minimum identifier, to obtain a partition $(W_1,\ldots, W_k)$ of $V(G)$ into clusters of radius $1$. This partition is obtained in a single communication round. The graph $H=G/W_1/\ldots /W_k$ is a depth-$1$ minor of $G$ with $k$ vertices and at most $\nabla_1(G)\cdot k$ edges. Define an edge weight function on $E(H)$ by assigning unit weight to each edge. Set $\delta=\epsilon/(2c\nabla_1(G))$. Apply the algorithm of \cref{lem:cluster} with parameter $\delta$ to find a partition $(V_1,\ldots, V_l)$ of $V(H)$ such that the weight between different clusters is at most $\delta\cdot |E(H)|$. The algorithm runs in $\mathcal{O}_{\delta}(\log^* n)=\mathcal{O}_{\epsilon,c}(\log^* n)$ communication rounds. By uncontracting the partitions $V_i$ and $W_i$, we obtain a partition $(U_1,\ldots,U_l)$ of $V(G)$, where each~$U_i$ has constant radius. Find an optimal dominating set $S_i$ of $G[U_i]$ in each subgraph $G[U_i]$ and return the union $S=\bigcup_{1\leq i\leq l} S_i$ of these dominating sets. As the algorithm has already learned the subgraphs $G[U_i]$, by the unbounded computational power of each processor in the \textit{LOCAL} model we can compute such a dominating set in one round. This completes the description of the algorithm. Note that instead of solving the dominating set problem optimally on each $G[U_i]$, which may be considered an abuse of the \textit{LOCAL} model by some, we can compute a sufficiently good approximation of an optimal dominating set. For this, we can use the PTAS~\cite{har2015approximation} for dominating sets on graphs of polynomial expansion.

Since the $U_i$ form a partition of $V(G)$, it is clear that $S$ is a dominating set of $G$. Denote by~$S^*$ a minimum dominating set, of cardinality $\gamma(G)$. Let $S'$ be obtained from $S^*$ by adding for each $U_i$ all vertices $w\in U_i$ which have a neighbor in a different cluster $U_j$. Then $S'\cap U_i$ is a dominating set of $G[U_i]$. Furthermore, we have
\[|S'|\leq |S^*|+2\delta |E(H)|\leq \gamma(G)+2c\delta \nabla_1(G)\cdot \gamma(G)=(1+\epsilon)\cdot \gamma(G).\]
Observe that the local solutions $S_i$ cannot be larger than the solutions $S'\cap U_i$, hence
\[|S|=\sum_{1\leq i\leq l}|S_i|\leq \sum_{1\leq i\leq l}|S'\cap U_i|=|S'|\leq (1+\epsilon)\cdot \gamma(G).\qedhere\]
\end{proof}

\section{Conclusion}\label{sec:FO}
\noindent This paper presented the first constant round, constant factor local MDS approximation algorithm for locally embeddable graphs, a class of graphs which is more general than planar graphs. Moreover, we have shown how our result can be used to derive an $\mathcal{O}(\log^*{n})$-time distributed approximation scheme for bounded genus graphs. Our proofs are purely combinatorial and avoid almost all topological arguments; for the family of bounded genus graphs, a topological argument helped to improve the obtained approximation ratio in a preprocessing step. We believe that this result constitutes a major step forward in the quest for understanding for which graph families such local approximations exist.
Besides the result itself, we believe that our analysis introduces several new techniques which may also be useful for the design and analysis of local algorithms on more general graph classes, and for problems beyond MDS. In particular, we believe that the notion of bounded depth minors, rather than the commonly used notion of excluded minors, will be the right notion in the setting of local, distributed computing. Moreover, this paper established an interesting connection between distributed computing and logic by presenting a local approximation algorithm which is first-order definable. This also provides an interesting new perspective on the recently introduced notion of stone-age distributed computing~\cite{stoneage}: distributed algorithms making minimal assumptions on the power of a node. Avoiding counting in the arising formulas allows, for example, an implementation of the algorithm in the circuit complexity class $\mathrm{AC}^0$, that is, an implementation by circuits of polynomial size and constant depth. It remains open whether the local constant factor approximation result can be generalized to sparse graphs beyond bounded genus graphs. It will also be interesting to extend our study of first-order definable approximations.

\bibliographystyle{abbrv}
\section{Distortions from Channel and Hardware}
\label{sec:improve}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Figures/Channel_mag.pdf}
\caption{Actual Channel (Mag)}
\label{fig:channel_m}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Figures/Channel_mag.pdf}
\caption{Estimated Channel (Mag)}
\label{fig:ch_training_m}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Figures/Channel_ph.pdf}
\caption{Actual Channel (Ph)}
\label{fig:channel_p}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Figures/Channel_ph.pdf}
\caption{Estimated Channel (Ph)}
\label{fig:ch_training_p}
\end{subfigure}
\caption{Channel estimation (magnitude \& phase) of a frequency selective channel for the training signal based estimation technique.}
\label{fig:channel_est}
\vspace{-10pt}
\end{figure*}
Imperfections at the receiver are induced by the wireless multipath channel and the underlying hardware. In this section, we describe the methodologies that we introduce to estimate and counteract these imperfections and thereby improve performance.
\subsection{Training signal based channel estimation}
\label{subsec:channel}
Accurate channel estimation is essential in highly frequency selective fading channels, especially at higher modulation orders, where even a small channel estimation error leads to significant demodulation errors. In \S~\ref{sec:system_rx}, we assumed complete knowledge of the channel $H_{F}$, which is an unrealistic assumption in practical systems. Estimating the channel on only the 52 subcarriers of the long preamble and extrapolating it to the 64 subcarriers required for {\texttt{Rand-OFDM}} yields poor performance in multipath channels. Hence, we design a training signal that spans all 64 subcarriers to better estimate the channel at the intended receiver. We take shared secret data and randomize it with the same shared key between \textit{Alice} and \textit{Bob} to create one BPSK modulated {\texttt{Rand-OFDM}} symbol, as described in \S~\ref{sec:system_tx}. This OFDM symbol is transmitted after the preamble, as shown in figure~\ref{fig:packet}. Channel estimation at the receiver requires knowledge of the transmitted waveform, which in this case is secured by the shared key. Hence, we add another layer of security: \textit{Eve} cannot estimate the channel accurately because the training signal depends on the shared secret key, and physical layer authentication can be initiated using the training signal~\cite{Auth1,Auth2}. If $X_{Tr}$ is the BPSK-modulated OFDM signal, then the {\texttt{Rand-OFDM}} training signal follows from equation~\ref{eq:randTx} as $x_{Tr} =P_{CP}RF^{-1}X_{Tr}$. The received training signal follows from equation~\ref{eq:rand_rx} as
\begin{equation}
y_{Tr} =H_{t}P_{CP}RF^{-1}X_{Tr}+N_{Tr}
\end{equation}
Since $X_{Tr}$ and the matrix $R$ are shared between the transmitter and the intended receiver only, the channel can be estimated only by \textit{Bob}. He first computes the randomized training signal in the frequency domain, $X_{R}=FRF^{-1}X_{Tr}$, and then, ignoring the noise, estimates the channel frequency response by the element-wise division
\begin{equation}
H_{F}= \textstyle \frac{FTy_{Tr}}{X_{R}}
\end{equation}
Figure~\ref{fig:channel_est} shows the actual and the estimated magnitude and phase of the channel in frequency selective fading conditions. The training signal based estimate is accurate over all 64 subcarriers, whereas preamble based estimation covers only 52 subcarriers.
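The estimation step can be illustrated in a few lines of NumPy. The sketch below is illustrative rather than our actual implementation: it assumes that $R$ is a key-seeded permutation of the $N$ time domain samples (consistent with the key space of $N!$ analyzed in \S~\ref{sec:cryptanalysis}) and uses an arbitrary three-tap channel.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=42)  # the seed plays the role of the key K
N, CP = 64, 16

perm = rng.permutation(N)             # randomization R: assumed a permutation
X_tr = 2.0 * rng.integers(0, 2, N) - 1.0   # known BPSK training symbol X_Tr

# Transmitter: x_Tr = P_CP R F^{-1} X_Tr
x = np.fft.ifft(X_tr)[perm]
x_cp = np.concatenate([x[-CP:], x])

# Multipath channel (memory < CP) plus a little noise
h = np.array([1.0, 0.4 + 0.3j, 0.2])
y = np.convolve(x_cp, h)[: N + CP]
y += 0.001 * (rng.normal(size=N + CP) + 1j * rng.normal(size=N + CP))

# Receiver (Bob): compute F T y_Tr, then divide by the randomized training
Y = np.fft.fft(y[CP:])
X_R = np.fft.fft(np.fft.ifft(X_tr)[perm])  # X_R = F R F^{-1} X_Tr
H_est = Y / X_R                            # element-wise, per subcarrier

# Subcarriers where |X_R| is small amplify noise; averaging several
# training symbols would reduce this in practice.
print(np.max(np.abs(H_est - np.fft.fft(h, N))))   # small residual error
\end{verbatim}
Note that the divisor $X_{R}$ already contains the key, so an eavesdropper without $R$ cannot form it and therefore cannot extract the channel from the training symbol.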
\begin{figure}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Figures/16QAM_IQ_Before.pdf}
\caption{Before Phase correction}
\label{fig:16qam_before}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Figures/16QAM_IQ_After.pdf}
\caption{After Phase correction}
\label{fig:16qam_after}
\end{subfigure}
\caption{Clustering based residual phase offset correction for over the air indoor experiments using 16QAM modulation.}
\label{fig:phase_corr}
\end{figure}
\subsection{Clustering based phase offset correction}
\label{subsec:cluster}
Device impairments introduce a difference between the carrier frequency of the transmitter and that of the receiver. Coarse and fine carrier frequency offset correction blocks at the receiver estimate and correct these offsets based on the short and long preambles, respectively, as shown in figure~\ref{fig:system}. In conventional OFDM, the residual carrier frequency offset of the received waveform is calculated from the pilot subcarriers. In {\texttt{Rand-OFDM}}, the pilots inserted in the frequency domain do not capture the channel frequency response at those subcarriers, as the pilot energy gets spread by the randomization process. Hence, we introduce a clustering based algorithm to track the residual phase offset caused by hardware impairments. It is to be noted that the purpose of this block is not to estimate the per-symbol channel variation, which could be estimated accurately by inserting pilot subcarriers in every OFDM symbol. This block estimates and corrects the offset between the transmitter and receiver pair, which does not change from one OFDM symbol to the next. We use the K-medoids clustering algorithm~\cite{kaufman1987clustering}, whose inputs are the in-phase and quadrature values of the constellation points of all subcarriers of all received OFDM symbols in a packet, together with the number of expected clusters given by the modulation order. The resultant cluster centers $C$ are then examined to estimate the residual phase offset. To determine the phase offset, we choose the farthest cluster center in each quadrant, i.e., the highest energy point, giving a total of 4 cluster centers for QAM or QPSK modulations and only 2 for BPSK. The rationale for choosing the highest energy point within a quadrant is that there exists only one such point in the transmitted constellation to which it can be mapped. This is a generic approach and scales to even higher order modulations, such as 256-QAM and beyond.
The phase offset is then calculated per quadrant as:
\begin{equation}
\theta_{estimated,i}=\arg(X_{Ti}/C_{max,i})
\end{equation}
where $X_{Ti}$ is the farthest transmitted constellation point within quadrant $i$ and $C_{max,i}$ is the maximum energy cluster center of the same quadrant. Averaging the 2 values for BPSK or 4 for higher modulation orders, we calculate the residual phase offset as:
\begin{equation}
\theta_{estimated}= \textstyle \frac{1}{M}\sum_{i=1}^{M} \theta_{estimated,i}
\end{equation}
where $M$ is 2 for BPSK and 4 for other modulations. In the last step, we correct the residual phase offset by rotating the received signal by the estimated phase, generating the corrected signal $X_{Fc}$, which can then be used for demodulation:
\begin{equation}
X_{Fc}=X_{F}e^{j\theta_{estimated}}
\end{equation}
where $X_{F}$ is the received OFDM signal before correction. Figure~\ref{fig:phase_corr} shows an example from over-the-air experiments at 15dB SNR, where the residual phase offset is corrected by the proposed clustering algorithm.
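A compact sketch of the correction chain follows, with a bare-bones K-medoids step (Voronoi-style iteration) standing in for a library implementation; the QPSK constellation, the 0.09 rad offset, and the noise level are illustrative assumptions, not measured values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

# Illustrative setup: QPSK constellation with a residual phase offset.
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
tx = rng.choice(qpsk, size=2000)
theta_true = 0.09                      # rad, in the range we observe OTA
rx = tx * np.exp(-1j * theta_true)
rx += 0.05 * (rng.normal(size=rx.size) + 1j * rng.normal(size=rx.size))

def kmedoids(points, medoids, iters=10):
    """Bare-bones K-medoids (Voronoi iteration) on complex IQ samples."""
    medoids = medoids.copy()
    for _ in range(iters):
        labels = np.argmin(np.abs(points[:, None] - medoids[None, :]), axis=1)
        for j in range(len(medoids)):
            cluster = points[labels == j]
            if len(cluster):
                d = np.abs(cluster[:, None] - cluster[None, :]).sum(axis=1)
                medoids[j] = cluster[np.argmin(d)]
    return medoids

# One initial medoid per quadrant (highest-energy point) for a stable start.
seeds = np.array([max(rx[(np.sign(rx.real) == sr) & (np.sign(rx.imag) == si)],
                      key=abs) for sr in (1, -1) for si in (1, -1)])
centers = kmedoids(rx, seeds)

# Match each center to the farthest transmitted point of its quadrant and
# average the per-quadrant angle offsets (M = 4 for QPSK).
quad = lambda z: (z.real >= 0, z.imag >= 0)
offsets = [np.angle(max((p for p in qpsk if quad(p) == quad(c)), key=abs) / c)
           for c in centers]
theta_est = np.mean(offsets)
corrected = rx * np.exp(1j * theta_est)   # X_Fc = X_F * exp(j*theta)
print(f"true {theta_true:.3f} rad, estimated {theta_est:.3f} rad")
\end{verbatim}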
\section{Conclusion}
\label{sec:conclude}
In this work, we present {\texttt{Rand-OFDM}}, a secure OFDM transmission scheme based on time domain scrambling with a shared secret key between the transmitter and the receiver. We perform channel estimation and equalization to retrieve the signal. Furthermore, we introduce a secured training signal to accurately estimate the channel, followed by clustering based residual phase error correction. Over-the-air experiments demonstrate the practicality of the system. In future work, key generation and management can be built on the channel state to enable lightweight, secure physical layer encryption.
\section{Background}
\label{sec:back}
Orthogonal Frequency Division Multiplexing (OFDM) is a digital multicarrier modulation technique in which the subcarriers are orthogonal to each other. This is achieved by an Inverse Fast Fourier Transform (IFFT) on the modulated data stream. For $N$ parallel data streams, an $N$-point IFFT is performed on the complex digitally modulated signal $X(n)$. The time domain signal $x(n)$ can be expressed as:
\begin{equation}\label{eq:ofdm_tx}
x[n] = IFFT(X[n]) = \sum_{i=0}^{N-1}{X(i)e^{\frac{j2\pi i n}{N}}}
\end{equation}
In order for the IFFT/FFT pair to create an intersymbol interference (ISI) free channel, the channel must appear to perform a circular convolution. Hence, a cyclic prefix of length $v$ is appended after the IFFT operation, resulting in $N+v$ samples, which are then sent serially through the wireless channel. The duality between circular convolution in the time domain and simple multiplication in the frequency domain is a property of the Discrete Fourier Transform (DFT), which allows the received OFDM signal to be represented as $Y(n) = H(n)X(n)$. At the receiver, the cyclic prefix is discarded and the $N$ received samples are demodulated using an FFT operation, which results in $N$ data symbols. The received frequency domain signal $Y(n)$ is given by:
\begin{equation}
Y[n] = FFT[y(n)] = \sum_{i=0}^{N-1}{y(i)e^{\frac{-j2\pi i n}{N}}}
\end{equation}
where $y$ is the received OFDM symbol in the time domain.
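For concreteness, a minimal NumPy sketch of this legacy chain follows; the QPSK mapping, channel taps, and noise level are arbitrary illustrative choices. Note that \texttt{np.fft.ifft} includes the $1/N$ scaling omitted in the sum above, which is a normalization convention only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, CP = 64, 16

# QPSK-modulate N data symbols.
bits = rng.integers(0, 2, (N, 2))
X = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT + cyclic prefix.
x = np.fft.ifft(X)
tx = np.concatenate([x[-CP:], x])

# Multipath channel (memory < CP) + AWGN.
h = np.array([0.9, 0.3 + 0.2j, 0.1])
y = np.convolve(tx, h)[: N + CP]
y += 0.01 * (rng.normal(size=N + CP) + 1j * rng.normal(size=N + CP))

# Receiver: drop CP, FFT, one-tap equalization per subcarrier.
Y = np.fft.fft(y[CP:])
H = np.fft.fft(h, N)          # known here; estimated in practice
X_hat = Y / H                 # Y(n) = H(n) X(n)  =>  one-tap equalizer

print(np.mean(np.abs(X_hat - X) ** 2))  # small residual error
\end{verbatim}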
\section{Cryptanalysis}
\label{sec:cryptanalysis}
In this section, we perform a security analysis of {\texttt{Rand-OFDM}} to evaluate its resiliency against various types of attacks. For simplicity, assume that the received OFDM symbol at \textit{Eve} is:
\begin{equation}
Y_{E} = FRF^{-1}X_{F}
\label{Rec_eq}
\end{equation}
According to Shannon secrecy~\cite{shannon1949communication}, the system can be perfectly secure if the key size equals the data size, i.e.,
\begin{equation}
H(R_{i}) \geq H(X_{F})
\end{equation}
where $H(X)$ denotes the entropy of the random variable $X$. This analysis is straightforward at the bit level; at the physical layer we view it from two different angles. \textit{First}, the system achieves perfect secrecy at the symbol level, as both the data symbol size and the key size equal the FFT size $N$; in other words, \textit{Eve} cannot deduce any information from a single symbol. \textit{Second}, if the system uses a set of keys $\textit{K} = [K_{1}, K_{2}, K_{3}, \ldots, K_{V}]$ over an OFDM frame $\textit{X}=[X_{1}, X_{2}, X_{3}, \ldots, X_{n}]$, the mutual information between the encrypted and original symbols satisfies
\begin{equation}
\lim_{V\to n} I(X_{E},X_{F})=0
\end{equation}
\subsection{Brute-force attack}
In this attack, the eavesdropper knows the encryption and decryption algorithms, but not the key. The eavesdropper performs an exhaustive search over all possible keys to decrypt the cipher data (i.e., the encrypted data). The strength of the algorithm is proportional to the length of the key, which for {\texttt{Rand-OFDM}} is the FFT size.
The number of possible keys $L$ is given by $L=N!$, where $N$ is the FFT size, so the probability of success $P_{s}$ of guessing the key uniformly at random is $P_{s} = \frac{1}{L}$. The FFT size is chosen to be 64, the minimum for 802.11a/g~\cite{802_11_spec}, and can be as large as 1024 or 2048 in newer wideband Wi-Fi and 5G standards.
\subsection{Cipher text attack}
In this attack, only $Y_{E}$ is available, together with the decryption function $D$. \textit{Eve} tries to predict $R_{E}$ by attempting different keys based on statistical properties of the cipher data. However, since the security layer is introduced at the physical layer, specifically in the time domain, the statistical properties of the received waveform, such as the received power or the peak to average power ratio (PAPR), do not change. Hence \textit{Eve} has to try all possible keys to decrypt the data, reducing this attack to the brute-force attack.
\subsection{Chosen plain text attack}
In this attack, both $Y_{E}$ and $X_{F}$ are known to the eavesdropper; in other words, \textit{Eve} knows some plain-text patterns and the corresponding cipher patterns. For a fixed key, \textit{Eve} can solve equation~\ref{Rec_eq} and deduce the corresponding $R_{E}$. However, the security of the algorithm can be increased by using a defined shared set of keys $K=[K_{1}, K_{2}, \ldots, K_{V}]$, with the selected key changed dynamically from one symbol to another according to a distribution $f_{K}(K)$. In this case, \textit{Eve} has to deduce the statistical properties of the key distribution over multiple attacks, even assuming $K$ is known.
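For a sense of scale, the permutation key space $L=N!$ translates into an equivalent key length of about $\log_2 N!$ bits, roughly 296 bits already for $N=64$. A purely illustrative computation:
\begin{verbatim}
import math

for n in (64, 1024, 2048):
    bits = math.factorial(n).bit_length() - 1   # floor(log2(N!))
    print(f"N={n}: |key space| = N! ~ 2^{bits}")
\end{verbatim}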
\section{Over the Air experiments}
\label{sec:exp}
We perform extensive over-the-air experiments in an indoor scenario to validate {\texttt{Rand-OFDM}}. Figure~\ref{fig:experiment} shows one of our transceiver nodes, equipped with a USRP X310~\cite{X310} and a 10 dBi antenna. It is connected to an Intel NUC (NUC7i7BNH) with an i7-7567U processor and 16GB of DDR4 memory for faster I/O processing. The experiments are performed at multiple locations in a multipath-rich indoor environment, in both line-of-sight and non-line-of-sight scenarios. We present results averaged over all those locations to eliminate any dependency on a particular channel.
Also, all the experiments were performed at 20MHz bandwidth and 2.484GHz center frequency to avoid any interference from the Wi-Fi access points operating in the same area. Each data point in our results is an average over 500 OFDM symbols, for both legacy OFDM and {\texttt{Rand-OFDM}}.
\begin{wrapfigure}{R}{0.2\textwidth}
\vspace{-15pt}
\begin{center}
\includegraphics[width=0.2\textwidth]{Figures/X310_Photo.jpg}
\end{center}
\caption{Experimental setup of one node.}
\label{fig:experiment}
\vspace{-10pt}
\end{wrapfigure}
Figure~\ref{BER_OTA} shows the BER performance of legacy and {\texttt{Rand-OFDM}} transmissions for different modulation orders. It is evident that there is an SNR gap between {\texttt{Rand-OFDM}} and the legacy OFDM transmission. This gap is due to the loss of the orthogonality property of the OFDM signal and to channel estimation imperfections; it is the SNR penalty that we incur to secure the waveform in the time domain. Moreover, the SNR gap decreases at higher modulation orders due to the higher operating SNR, which enables the receiver to reduce the channel estimation error obtained from the training signal. In other words, the error introduced by the time domain modification can be reconstructed more efficiently at a higher SNR. Figure~\ref{fig:phase} presents the distribution of the phase angle correction for different modulation orders, over all packets and at different SNRs. The residual phase error depends on the hardware impairments and not on the modulation order or channel characteristics. This is evident from the values, which vary between 0.05 and 0.12 radians. The phase correction results indicate that there exists a significant residual error, which needs to be corrected to improve performance.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{Figures/Phase_box_X310.pdf}
\caption{Residual phase offset correction for different modulation orders.}
\label{fig:phase}
\end{figure}
\section{Introduction}
As new spectrum (sub-6GHz, mmWave and TeraHertz) becomes available for communication and the coexistence of frequency-agile cognitive heterogeneous nodes becomes the norm, we need to rethink physical layer security to provide maximum secrecy in a broadcast channel. There has recently been growing interest among multiple federal agencies in having heterogeneous devices share the spectrum in a collegial way. This also means that wireless signals will be vulnerable to security attacks that were unforeseen in the earlier, tightly regulated framework. This motivates us to investigate physical layer secured communication that is hard for an eavesdropper to decipher in a hostile scenario, yet practical enough to be accepted for mass deployment. Prior work~\cite{Poor19, Survey, sdr_sec, sec_cogradio_13, sec_cogradio_15} utilizes imperfections of the communication channel to establish secrecy by physical layer methods without the need for a shared secret key. However, channel imperfections are not enough to provide high secrecy capacity when the eavesdropper has a high signal-to-noise ratio (SNR) or observes a quantized channel state similar to that of the intended receiver. Higher layer encryption~\cite{sec_survey_16} provides computationally hard secrecy, but suffers from key management and distribution issues. Also, since the waveform remains unchanged in all the scenarios mentioned above, the eavesdropper can restrict the cryptanalysis search space to that waveform. To address these issues for future wireless agile radios, we introduce a physical layer security scheme in which we modify the OFDM waveform, based on a shared secret key, so as to completely disrupt the orthogonality of its subcarriers. At the receiver, we perform extensive channel estimation and reconstruct the waveform. The secret key can be derived from the channel or stored in the radio hardware, which minimizes the key distribution issues. In this paper, we focus only on the modification of the time domain signal at the transmitter and its reconstruction at the receiver.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Figures/case.pdf}
\vspace{-10pt}
\caption{{\texttt{Rand-OFDM}}: A system overview.}
\label{fig:case}
\end{figure}
If $M$ is the message that \textit{Alice} needs to transmit to \textit{Bob}, she can encrypt it with the shared secret key ($K$) to produce $X$.
As it reaches \textit{Bob}, it passes through the channel $H_{AB}$, such that the received signal is $Y_B=XH_{AB}+N_B$, where $N_B$ is the noise. An eavesdropper, \textit{Eve}, experiences a different channel and receives $Y_E=XH_{AE}+N_E$. If channel imperfections are used to encrypt, the secrecy capacity is a function of the received SNR at \textit{Eve}. In this scenario, we propose {\texttt{Rand-OFDM}}, in which we modify the time domain signal such that the transmitted signal $X$ is no longer a known waveform (OFDM, CDMA, etc.). Figure~\ref{fig:case} shows an overview of {\texttt{Rand-OFDM}}: we randomize the time domain OFDM signal based on $K$, such that the resultant $X$ loses the orthogonality property. Owing to the spectral efficiency of OFDM and its use in most standards, such as Wi-Fi~\cite{802_11_spec} and 5G~\cite{lte_book}, we have chosen it as the base waveform. Our approach has two distinct benefits over prior work. {\bf \textit{Firstly}}, the secret key based modification of the signal implies that even at high SNR, and even if \textit{Eve} has full knowledge of the channel between \textit{Alice} and \textit{Bob}, $H_{AB}$, she will not be able to decode $X$. In other words, if \textit{Eve} gets access to $Y_{E}'=XH_{AB}+N_B$, she cannot decode $X$ without the key $K$ used to generate $X$. {\bf \textit{Secondly}}, by modifying the time domain signal based on a key, we ensure that it appears as noise or an unknown signal to \textit{Eve}. The combination of the key and the data creates a different signal every time it is transmitted, which enlarges the search space for \textit{Eve} to attack this waveform. It is to be noted that any higher layer encryption or bit level interleaving can be used in conjunction with {\texttt{Rand-OFDM}} to provide multiple layers of protection against different security threats. The major challenges of {\texttt{Rand-OFDM}} are: 1) to design a computationally light operation that modifies the OFDM waveform in the time domain, 2) to accurately estimate the channel effects at the receiver, where the signal loses its frequency domain properties due to the time domain modifications, 3) to hide the channel parameters in a way that makes it difficult for the eavesdropper to estimate the channel accurately, and 4) to design a mechanism that corrects the residual phase offsets in the absence of pilot subcarriers in the frequency domain. To the best of our knowledge, we are the first to \textit{modify the time domain wireless OFDM signal so that it loses its orthogonality, introduce a novel channel estimation technique to recover the signal at the receiver, and evaluate the system in practical over-the-air scenarios}. Hence, the key contributions of our work can be listed as follows:
\begin{figure*}[h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{Figures/signal_before_randomization64.pdf}
\caption{Time domain OFDM signal before randomization}
\label{fig:ofdm}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{Figures/signal_after_randomization64.pdf}
\caption{Time domain {\texttt{Rand-OFDM}} signal after randomization}
\label{fig:rand_ofdm}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{Figures/Transmitted_symbol_before_R.pdf}
\caption{Frequency response of {\texttt{Rand-OFDM}} signal}
\label{fig:rand_ofdm_fft}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{Figures/scatterplot_small.pdf}
\caption{Constellation diagram of BPSK-modulated {\texttt{Rand-OFDM}}}
\label{fig:rand_ofdm_const}
\end{subfigure}
\caption{Time domain and frequency domain representation of {\texttt{Rand-OFDM}} at the transmitter.}
\label{fig:randomized_out}
\vspace{-10pt}
\end{figure*}

\noindent \underline{Key-based Time Domain Security}: We design a secret key based time domain modification of the OFDM signal to generate a \textit{new waveform}, {\texttt{Rand-OFDM}}, which does not retain any of the OFDM properties that are essential to combat the multipath effects of the channel. At the receiver, we successfully reconstruct the signal by channel estimation and decryption of the {\texttt{Rand-OFDM}} waveform.

\noindent \underline{Secured Training Signal}: We design a novel training signal based on shared data or a shared phrase, randomized with the same key, such that only the intended receiver is able to decode it correctly and thereby estimate the channel accurately.
\noindent \underline{Clustering based Phase Offset Correction}: Since we scramble the signal in the time domain, the frequency domain pilots change in both phase and magnitude and cannot be used as known symbols for channel estimation or phase tracking. We introduced a \textit{unique} solution to track the residual phase offset by utilizing the K-medoids clustering algorithm underneath. \noindent \underline{Cryptanalysis}: We perform cryptographic analysis on the {\texttt{Rand-OFDM}} signal, and provide insights on the resiliency of the waveform, especially for higher FFT orders, as envisioned in the next generation of wireless systems. \noindent \underline{Practical Evaluation}: In the midst of mostly theoretical research in time domain physical layer security, we formulated the mathematical problem for {\texttt{Rand-OFDM}} and provided a receiver design that is essential for successfully decoding the signal in a multipath-rich environment. We also evaluated our system using extensive over-the-air experiments in an indoor environment, a prerequisite for the protocol to be embraced for practical deployment. \section{Evaluation} \label{sec:eval} In this section, we present the performance analysis of the {\texttt{Rand-OFDM}} system in comparison with legacy 802.11a/g in different channel models. Although we do not use any pilot symbols, they are still inserted as in the legacy system, such that the same number of data bits is transmitted in both cases. We used MATLAB to encrypt and decrypt the signals and used the channel models to perform extensive simulations for both legacy OFDM and {\texttt{Rand-OFDM}}. For the rest of the paper, the suffixes (L) and (R) are used for legacy OFDM~\cite{802_11_spec} and {\texttt{Rand-OFDM}} transmissions with full channel knowledge, respectively. In addition, we use (R-T) for {\texttt{Rand-OFDM}} with training signal based channel estimation. \begin{table} \centering \begin{tabular}{lccc} \hline Channel & AWGN & Flat & Frequency Selective \\ \hline Model & - & Rayleigh & Rayleigh \\ No. of taps & 0 & 1 & 6 \\ Path delays & 0 & 0 & [0,100,200,300,500,700] ns\\ Path gains & 0 & 0 & [0,-3.6,-7.2,-10.8,-18,-25.2] dB\\ Doppler & 0 & 0 & 3 Hz \\ \hline \end{tabular} \captionof{table}{Channel Models} \label{tbl:channel} \end{table} \subsection{Without Channel Estimation} We evaluate the performance of the {\texttt{Rand-OFDM}} receiver blocks, as described in \S~\ref{sec:system_rx}, in the basic scenario where no channel estimation is required.
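For reference, the frequency selective model of table~\ref{tbl:channel} can be reproduced with a few lines of code. The following is a minimal \texttt{numpy} sketch of ours (not the MATLAB code used in our simulations): it assumes a 20 MHz sampling rate, i.e.\ 50 ns per sample as in 802.11a/g, and draws a static Rayleigh snapshot, ignoring Doppler.
\begin{verbatim}
import numpy as np

# Tap profile of the frequency selective indoor channel model.
PATH_DELAYS_NS = np.array([0, 100, 200, 300, 500, 700])
PATH_GAINS_DB = np.array([0.0, -3.6, -7.2, -10.8, -18.0, -25.2])

def impulse_response(fs_hz=20e6):
    """Place Rayleigh-faded taps on the receiver sample grid."""
    rng = np.random.default_rng()
    idx = np.round(PATH_DELAYS_NS * 1e-9 * fs_hz).astype(int)
    amp = 10.0 ** (PATH_GAINS_DB / 20.0)
    h = np.zeros(idx.max() + 1, dtype=complex)
    h[idx] = amp * (rng.standard_normal(idx.size)
                    + 1j * rng.standard_normal(idx.size)) / np.sqrt(2)
    return h

def apply_channel(x, h, snr_db):
    """Convolve with the channel and add white Gaussian noise."""
    rng = np.random.default_rng()
    y = np.convolve(x, h)[:len(x)]
    sigma = np.sqrt(np.mean(np.abs(y)**2) / 10**(snr_db / 10) / 2)
    return y + sigma * (rng.standard_normal(len(y))
                        + 1j * rng.standard_normal(len(y)))
\end{verbatim}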
\begin{figure*} \centering \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\textwidth]{Figures/BER_AWGN.pdf} \caption{AWGN Channel.} \label{BER_AWGN_Trad} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\textwidth]{Figures/BER_Full_CSI.pdf} \caption{Indoor channel.} \label{BER_Full_CSI} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\textwidth]{Figures/BER_Flat_Fading.pdf} \caption{Flat fading channel.} \label{fig:train_flat} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\textwidth]{Figures/BER_Freq_sel_ch.pdf} \caption{Indoor channel.} \label{fig:train_sel} \end{subfigure} \caption{BER of {\texttt{Rand-OFDM}} compared to legacy OFDM. Figures (a,b) with complete channel knowledge and (c,d) with training signal channel estimation.} \label{fig:ber_preamble} \vspace{-15pt} \end{figure*} \subsubsection{AWGN Channel} The first case for evaluation is the Additive White Gaussian Noise (AWGN) channel, where the channel frequency response matrix is $H_{F}=I$, since both the time and frequency domain channel coefficients are unity. Figure~\ref{BER_AWGN_Trad} shows the BER performance of {\texttt{Rand-OFDM}} in the AWGN channel for different modulation orders. The performance of {\texttt{Rand-OFDM}} is close to that of the legacy OFDM signal. This is due to the fact that the OFDM structure, even when lost, can be reconstructed at the receiver in the absence of channel effects. From equation~\ref{eq:WF_A_CH_Est}, it is evident that if the eavesdropper does not have the key to generate the matrix $R$, she cannot decode the packet. This is confirmed by our results, where the BER curve for \textit{Eve} remains constant and never goes down, even at high SNRs. This is a major advantage of using key-based physical layer security over channel based encryption techniques. We choose not to show the BER curve for \textit{Eve}, as she was unable to decode even in this best possible channel.
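The key dependence can be illustrated in isolation with a toy \texttt{numpy} sketch of ours (noiseless, BPSK on all subcarriers, no cyclic prefix; this is not our full receiver chain):
\begin{verbatim}
import numpy as np

N = 64
KEY = np.random.default_rng(7).permutation(N)   # shared secret key

def tx(bits):
    """BPSK-modulate N subcarriers, IFFT, then permute in time."""
    return np.fft.ifft(2.0 * bits - 1.0)[KEY]

def rx(y, key):
    """Undo the permutation with a candidate key, FFT, hard-decide."""
    x = np.empty_like(y)
    x[key] = y                                  # inverse permutation
    return (np.fft.fft(x).real > 0).astype(int)

bits = np.random.randint(0, 2, N)
y = tx(bits)
print("Bob BER:", np.mean(rx(y, KEY) != bits))       # -> 0.0
eve_key = np.random.default_rng(8).permutation(N)
print("Eve BER:", np.mean(rx(y, eve_key) != bits))   # -> around 0.5
\end{verbatim}
With the correct key the permutation is undone exactly, while any other key leaves the symbol scrambled and the hard decisions are essentially random.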
\subsubsection{Complete channel knowledge} In this section, the performance of the proposed system is evaluated assuming full channel knowledge (i.e.\ $H_{F}$ is known). Based on the ITU-R recommendation, we assume a frequency selective indoor channel model with the parameters shown in table~\ref{tbl:channel}. We assume that the legacy receiver also knows the $H_{F}$ matrix. Figure~\ref{BER_Full_CSI} shows the performance of {\texttt{Rand-OFDM}} in a frequency selective channel when the channel is known at the receiver. When the channel is known, the SNR gap between legacy OFDM and {\texttt{Rand-OFDM}} is due to the modification of the waveform, which cannot be fully undone at the receiver. \begin{figure*} \centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{Figures/BER_OTA_BPSK.pdf} \caption{BPSK} \label{fig:ota_bpsk} \end{subfigure} \quad \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{Figures/BER_OTA_QPSK.pdf} \caption{QPSK} \label{fig:ota_qpsk} \end{subfigure} \quad \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{Figures/BER_OTA_16QAM.pdf} \caption{16QAM} \label{fig:ota_16qam} \end{subfigure} \quad \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{Figures/BER_OTA_64QAM.pdf} \caption{64QAM} \label{fig:ota_64qam} \end{subfigure} \caption{BER of {\texttt{Rand-OFDM}} compared with legacy OFDM for over-the-air experiments using different modulation orders.} \label{BER_OTA} \vspace{-10pt} \end{figure*} \subsection{With Channel Estimation} \label{subsec:perf_chEst} \vspace{-1pt} In this section, we analyze the performance of the channel estimation techniques, as elaborated in \S~\ref{subsec:channel}, in two different channel models, `Flat Fading' and `Frequency Selective Fading', as shown in table~\ref{tbl:channel}. The training signal is used to estimate the wireless channel for {\texttt{Rand-OFDM}} transmissions, while the legacy OFDM signal does not require this extra OFDM symbol.
Figure~\ref{fig:train_flat} shows the performance of the training signal based channel estimation in a flat fading channel, where it performs well owing to the accuracy of the estimate obtained from the training signal. There is a small SNR penalty of $\approx 2$ dB due to the loss of orthogonality caused by the randomization process at the transmitter. This is expected, as the channel effects are minor in a relatively flat channel. Figure~\ref{fig:train_sel} shows the performance in a multipath rich environment, where the channel is extremely frequency selective. The results indicate that the newly introduced training signal is not only secure, but also provides channel estimates accurate enough for the signal to be reconstructed with minimal SNR penalty. The SNR gap between legacy OFDM and {\texttt{Rand-OFDM}} is $\approx 4$ dB, indicating that the scheme can be embraced in various practical scenarios.
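The estimation step itself is conceptually simple. The sketch below (ours, in \texttt{numpy}) illustrates a plain per-subcarrier least-squares estimate; it assumes the receiver regenerates the randomized training symbol from the shared key, and that the training symbol has energy on all subcarriers, which the randomization guarantees (cf.\ \S~\ref{sec:system_tx}). The actual estimator of \S~\ref{subsec:channel} may differ in its details.
\begin{verbatim}
import numpy as np

def estimate_channel(y_train, x_train, n_fft=64):
    """Per-subcarrier least-squares estimate H = Y/X from one
    received training symbol (cyclic prefix already removed).
    x_train is the known, key-randomized training symbol."""
    Y = np.fft.fft(y_train, n_fft)
    X = np.fft.fft(x_train, n_fft)
    return Y / X          # diagonal of the matrix H_F

def equalize(y_data, H_f):
    """Channel-correct one received symbol in the frequency domain."""
    return np.fft.fft(y_data) / H_f
\end{verbatim}
The corrected symbol is then brought back to the time domain for derandomization, following the receiver chain of \S~\ref{sec:system_rx}.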
\section{Related Work} \label{sec:related} \noindent \textbf{Time domain security}: A considerable amount of research has been done in recent years on modifying the time domain signal to achieve physical layer security. However, most of it is either theoretical or depends on impractical assumptions. In~\cite{infocom}, the authors introduced an OFDM encryption scheme based on multiplying the real and imaginary parts of the generated OFDM signal by dynamic values derived from a shared secret key between the transmitter and the receiver. This changes the signal values, which changes the Peak-to-Average Power Ratio (PAPR) of the OFDM symbol. The results are evaluated only for a simulated AWGN channel, where there are no channel effects. The authors in~\cite{Post_IFFT} use random shuffling based on a secret key. The algorithm has been evaluated for a flat fading channel, where channel effects are negligible. These works do not attempt to introduce any channel or phase correction at the receiver, owing to their theoretical nature. In~\cite{cp}, the authors propose a time domain physical layer security scheme based on setting the length of the cyclic prefix (CP) equal to the length of the channel impulse response of the intended user, whereas the channel impulse response of \textit{Eve} may be longer than that of the intended user. This introduces inter symbol interference (ISI) in the received signal of \textit{Eve}. However, if \textit{Eve} has a better channel than \textit{Bob}, this technique fails.
In~\cite{129}, the authors propose adding artificial noise to the time domain OFDM signal such that, when the signal passes through the receiver's channel, the noise accumulates on the cyclic prefix. The receiver can decode the message after removing the CP, while the eavesdropper's signal cannot be recovered due to the presence of the noise. This is also a theoretical attempt, without any receiver modification for practical scenarios. \noindent \textbf{Frequency domain secrecy}: Modifying the frequency domain signal is a common technique to achieve secure~\cite{79, 83, ofdmind, Dummy} or covert communication~\cite{secret_radio_13}. In~\cite{79}, the transmitter uses the subcarriers that are non-fading for the intended user for data transmission. The assumption of this theoretical work is that \textit{Eve}'s channel exhibits completely different deep fades, which might not be a practical assumption. In~\cite{83}, the authors proposed an algorithm that provides pre-coder and post-coder matrices making the channel matrix of the legitimate user diagonal. The authors in~\cite{ofdmind} proposed an optimal selection of channel indices to maximize the SINR of the legitimate user and thereby achieve secure communication. In~\cite{Dummy}, the author proposed an encryption algorithm for OFDM systems based on inserting dummy data on a portion of the subcarriers to degrade performance at the eavesdropper. It is to be noted that frequency domain modification retains the OFDM properties and reveals the waveform characteristics to the eavesdropper. \noindent \textbf{Space domain secrecy}: The use of MIMO antennas~\cite{101,103,104} to beamform towards the intended receiver and/or steer nulls towards the eavesdropper is another way to enable physical layer security. The two major assumptions in this set of works are a) that \textit{Eve} has fewer antenna elements, which cannot be relied upon in the world of electronic warfare, and b) that \textit{Eve} is not in the path of the beam pattern radiation, which is inaccurate especially when devices are getting smaller and can be placed anywhere in plain sight.
\subsection{Receiver Design of {\texttt{Rand-OFDM}}} \label{sec:system_rx} The receiver design of {\texttt{Rand-OFDM}} is based on the legacy 802.11a/g~\cite{802_11_spec} receiver blocks, where the yellow colored blocks are the ones we added or modified, as shown in figure~\ref{fig:system}. The receiver starts with a packet detection block, followed by Carrier Frequency Offset (CFO) correction and removal of the cyclic prefix. An FFT follows right after that to convert the data to the frequency domain. Based on the modifications at the transmitter, it might seem that we need to insert the \textit{Derandomizer} block just before the data goes into the FFT block. However, one of the major operations that must precede derandomization is channel correction, as the received waveform can be represented as: \begin{equation}\label{eq:rand_rx} Y_{t} = H_{t}X_{t} + N_{t} = H_{t}P_{CP}RF^{-1}X_{F} + N_{t} \end{equation} where $H_{t}$ is the channel impulse response in the time domain and $N_{t}$ is the noise. At this point, we need to estimate and correct the channel impairments at the receiver. Time domain channel estimation and correction have been shown to be computationally more expensive~\cite{time_dom_1}, which is why we choose to perform them in the frequency domain. Derandomizing the signal before extracting the channel information would make the channel estimation problem intractable. Hence, we convert the signal to the frequency domain, where the channel response is extracted and corrected first. Once the cyclic prefix is removed, we perform an N-point FFT to convert the time domain signal $Y_{t}$ to the frequency domain. The received waveform in the frequency domain can then be expressed as: \begin{equation}\label{eq:rx_freq} X_{rF} = FTY_{t} = H_{F}FR F^{-1}X_{F} + N_{F} \end{equation} where $H_{F}$ is the channel frequency response. Channel estimation for {\texttt{Rand-OFDM}} is discussed in detail in \S~\ref{subsec:channel}. In the following discussion, we assume that we know the channel $H_{F}$ and invert it, yielding the received frequency domain waveform $\Tilde{X}_{rF}$ without channel impairments. \begin{equation} \Tilde{X}_{rF} = H_{F}^{-1} X_{rF} = F R F^{-1}X_{F} + \Tilde{N}_{F} \end{equation} This is the output of the channel correction block shown in figure~\ref{fig:system}, which is in the frequency domain. Since we introduced the randomization in the time domain, it is necessary to convert the signal back to the time domain as part of the decryption block. After the signal is passed through the IFFT block of the decryption process, it can be represented as \begin{equation} F^{-1}\Tilde{X}_{rF} = F^{-1}FR F^{-1}X_{F} + \Tilde{N}_{F} = R F^{-1}X_{F} + \Tilde{N}_{F} \label{eq:WF_A_CH_Est} \end{equation} Now, this time domain signal is passed through the \textit{Derandomization} block to yield the correct time domain data symbols.
\begin{equation}\label{eq:decrypted1} R^{-1}R F^{-1}X_{F} + \Tilde{N}_{F} = F^{-1}X_{F} + \Tilde{N}_{F} \end{equation} The final step is to perform an FFT on equation~\ref{eq:decrypted1} to demodulate the data subcarriers. \begin{equation}\label{eq:decrypted} FF^{-1}X_{F} + \Tilde{N}_{F}=X_{F} + \Tilde{N}_{F} \end{equation} We introduce an additional Phase Offset Correction stage, which is detailed in \S~\ref{subsec:cluster}. \section{System Design of {\texttt{Rand-OFDM}}} \label{sec:system} \begin{figure*} \centering \includegraphics[width=0.75\linewidth]{Figures/system.pdf} \caption{Transmitter and receiver blocks in {\texttt{Rand-OFDM}}. Blocks in yellow are either added or modified in the legacy system.} \label{fig:system} \end{figure*} In this section, we introduce the system design of {\texttt{Rand-OFDM}}, which predominantly consists of modifications at both the transmitter and the receiver side. The key idea of {\texttt{Rand-OFDM}} lies in randomizing the IFFT output, i.e.\ the time domain digital complex samples, so that the OFDM properties are lost and the signal appears as wideband noise to the eavesdropper. Figure~\ref{fig:randomized_out} shows such an example: a) the time domain signal at the IFFT output, followed by b) its randomized version, which is transmitted over the air.
The c) frequency response of the transmitted {\texttt{Rand-OFDM}} signal and d) its constellation plot show that we have successfully destroyed the OFDM properties, so that the signal appears as noise already at the transmitter side, even before the channel impairments are introduced. Figure~\ref{fig:system} shows the transmitter and receiver modules, which are discussed in \S~\ref{sec:system_tx} and \S~\ref{sec:system_rx} respectively. \subsection{Transmitter Design of {\texttt{Rand-OFDM}}} \label{sec:system_tx} The {\texttt{Rand-OFDM}} transmitter is based on Wi-Fi~\cite{802_11_spec}-like OFDM~\cite{ofdm} building blocks. We introduce a new block, termed \textit{Randomizer}, after the IFFT block in the OFDM transmitter chain, as shown in figure~\ref{fig:system}. The \textit{Randomizer} performs time domain scrambling of the complex samples produced by the IFFT block. The time domain scrambling is based on a shared secret key between the transmitter and the receiver pair. In other words, this is the symmetric key that is used in both the encryption and decryption processes. For example, if the original OFDM symbol is $[a_0,a_1,a_2,a_3,a_4,a_5]$, then the resulting randomized sequence is $[a_5,a_3,a_1,a_0,a_2,a_4]$, where the key is $[5,3,1,0,2,4]$. The cyclic prefix (CP) is added after that, making it possible to extract the frequency response of the channel at the receiver, which is detailed in \S~\ref{sec:system_rx}. As the \textit{Randomizer} block is added in the OFDM transmitter chain, the time domain {\texttt{Rand-OFDM}} signal, $x_{t}$, can be expressed as: \begin{equation}\label{eq:randTx} x_{t} = P_{CP}RF^{-1}X_{F} \end{equation} where $P_{CP}$ is the cyclic prefix matrix, $R$ is the randomizer matrix and $F^{-1}$ is the inverse Fourier transform matrix. By randomizing in the time domain, the secured transmitted waveform loses all the OFDM properties. In other words, the {\texttt{Rand-OFDM}} transmitted waveform $X_{t}$ is given by: \begin{equation} X_{t} = FT x_{t} = FR F^{-1}X_{F} \end{equation} where $T$ is the truncation matrix for cyclic prefix removal and $F$ is the N-point FFT matrix: \begin{equation} F= \begin{bmatrix} W^{0,0} & W^{0,1} & \cdots &W^{0,N-1}\\ W^{1,0} & W^{1,1} & \cdots &W^{1,N-1}\\ \vdots & \vdots & \ddots & \vdots\\ W^{N-1,0} & W^{N-1,1} & \cdots &W^{N-1,N-1} \end{bmatrix} \end{equation} where $W^{n,k}=e^{-2j\pi\frac{nk}{N}}$ and $N$ is the FFT size.
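For concreteness, the six-sample example above (ours, for illustration) with key $[5,3,1,0,2,4]$ corresponds to the permutation matrix \begin{equation*} R= \begin{bmatrix} 0&0&0&0&0&1\\ 0&0&0&1&0&0\\ 0&1&0&0&0&0\\ 1&0&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&0&1&0 \end{bmatrix}, \end{equation*} which, like every permutation matrix, is orthogonal, so $R^{-1}=R^{T}$. Conjugating $R$ by the Fourier matrix then gives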
\begin{equation}\label{eq:randTD} F R F^{-1}= \begin{cases} I\,,&R=I\,, \\ Q\,,&R \ne I\,, \end{cases} \qquad \text{since}\quad \left(F\,F^{-1}\right)_{km}=\frac{1}{N}\sum_{n=0}^{N-1} e^{2j\pi\frac{n(k-m)}{N}}=\delta_{km}\,, \end{equation} where $I$ is the identity matrix and $Q$ is the transformation matrix generated by the presence of $R$. From equation~\ref{eq:randTD}, we can conclude that if there is no randomization of the time domain samples (i.e.\ $R = I$), the OFDM symbol retains its orthogonality property. Otherwise, magnitude and phase noise is added to the received OFDM symbol. This distorts the original OFDM symbol by transforming the original constellation into a random constellation pattern which depends on the randomization matrix $R$. We would like to highlight that the changes introduced at this stage do not depend on the modulation order of the transmitted constellation. Moreover, the transformation matrix spreads the power of the OFDM symbol over the whole bandwidth, including the guard bands. Figure~\ref{fig:randomized_out} shows the transformation of an OFDM symbol into a {\texttt{Rand-OFDM}} symbol. In this case, 52 subcarriers were modulated with BPSK, generating the time domain OFDM signal of figure~\ref{fig:ofdm}. The resulting signal goes through the \textit{Randomizer} to generate the {\texttt{Rand-OFDM}} waveform shown in figure~\ref{fig:rand_ofdm}. If we perform an FFT on this waveform, we notice that the magnitudes of the subcarriers vary randomly and that the power spreads to all 64 subcarriers, no longer confined to the 52 we started with. This is evident in figure~\ref{fig:rand_ofdm_fft} and is due to the multiplication by $R$. Figure~\ref{fig:rand_ofdm_const} shows the phase noise that is induced by the randomization. These imperfections require significant modifications in the receiver design, which are explained in \S~\ref{sec:system_rx}. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{Figures/packet.pdf} \caption{Packet Structure of {\texttt{Rand-OFDM}}.} \label{fig:packet} \end{figure} The packet structure of {\texttt{Rand-OFDM}} is shown in figure~\ref{fig:packet}, where the short and long preambles of the legacy 802.11~\cite{802_11_spec} system are used to detect the start of the packet. The rest of the MAC layer packet structure remains the same, except that the symbols are converted to the {\texttt{Rand-OFDM}} waveform just before transmission. A training signal of one OFDM symbol duration is appended to improve the channel estimation at the intended receiver; its design is detailed in \S~\ref{subsec:channel}.
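To summarize the transmitter operation, the following \texttt{numpy} sketch of ours generates one {\texttt{Rand-OFDM}} symbol (the 52-subcarrier mapping and the derivation of the permutation from the key via a seeded generator are illustrative assumptions, not the exact mapping of our implementation):
\begin{verbatim}
import numpy as np

N_FFT, N_USED = 64, 52

def rand_ofdm_symbol(bpsk_bits, key):
    """Map 52 BPSK symbols onto the used subcarriers, IFFT,
    then permute the 64 time samples according to the key."""
    X = np.zeros(N_FFT, dtype=complex)
    used = np.r_[1:27, 38:64]        # 52 used bins; DC and guards empty
    X[used] = 2.0 * bpsk_bits - 1.0
    x = np.fft.ifft(X)
    perm = np.random.default_rng(key).permutation(N_FFT)
    return x[perm]

sym = rand_ofdm_symbol(np.random.randint(0, 2, N_USED), key=1234)
# After randomization the power spreads over all 64 bins:
print(np.count_nonzero(np.abs(np.fft.fft(sym)) > 1e-9))  # typically 64
\end{verbatim}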
\section{Introduction} \label{sec:1} A long-standing problem of high energy theoretical physics is the formulation of a fundamental theory unifying the four interactions. Superstring theory in ten dimensions and M-theory in eleven seem to provide a promising theoretical framework where this unification could be achieved. However, there are many shortcomings originating from this theoretical formulation.\par First of all, these kinds of theories are defined in dimensions $D>4$ and, since we live in a four-dimensional universe, a fundamental requirement for any predictive model is the presence of a mechanism of dimensional reduction from ten or eleven dimensions to four. Moreover, the non-perturbative dynamics of the theory is far from being understood, and there is no mechanism to select a vacuum state for our universe (i.e.\ it is not clear how to formulate a phenomenologically viable description for the model). Finally, there are more symmetries than those observed experimentally. These models, in fact, encode Supersymmetry (SUSY), but our universe is not supersymmetric and its gauge interactions are well described, at our energy scales, by the Standard Model (SM). Therefore, deriving a phenomenologically viable model from string/M-theory also requires the definition of suitable mechanisms of supersymmetry breaking.\par \bigskip \paragraph{Spontaneous compactification.} The simplest way of deriving a four-dimensional theory from a higher dimensional one is through \emph{spontaneous compactification}, which generalizes the original Kaluza-Klein (KK) compactification of five-dimensional general relativity on a circle. We consider the low-energy dynamics of superstring/M-theory on space-time solutions with geometry of the form \begin{equation} {\mathbf{M_4}}^{\scalebox{0.6}{(1,3)}}\times~\mathscr{M}_{\text{int}}\;, \end{equation} where ${\mathbf{M_4}}^{\scalebox{0.6}{(1,3)}}$ is the maximally symmetric four dimensional space-time with Lorentzian signature and $\mathscr{M}_{\text{int}}$ is a compact internal manifold. The $D=10$ or $D=11$ fields, excitations of the microscopic fundamental theory, are expanded in normal modes ($Y_{(n)}$) on the internal manifold \begin{equation} \Phi(x^\mu,\,y^\alpha)\=\sum_{(n)} \Phi_{(n)}(x^\mu)\;Y_{(n)}(y^\alpha)\;, \end{equation} the coefficients $\Phi_{(n)}$ of this expansion describing massive fields in ${\mathbf{M_4}}^{\scalebox{0.6}{(1,3)}}$ with masses of the order of $\frac1R$, where $R$ is the ``size'' of the internal manifold $\mathscr{M}_{\text{int}}$. These are the Kaluza-Klein states, forming an infinite tower.\par In many cases, a consistent truncation of the theory to the massless modes $\Phi_{(0)}$ is well described by a $D=4$ Supergravity theory (SUGRA), an effective field theory consistently describing superstring dynamics on the chosen background at energies $\Lambda$, where \begin{equation} \Lambda~\ll~\frac1R~\ll~\text{string scale}\;. \end{equation} The effective supergravity has ${\mathbf{M_4}}^{\scalebox{0.6}{(1,3)}}$ as a vacuum solution, and its general features depend on the original microscopic theory and on the chosen compactification.
In fact, the geometry of $\mathscr{M}_{\text{int}}$ affects the amount of supersymmetry of the low-energy SUGRA, as well as its internal symmetries.\par\medskip \subparagraph{Internal manifold, compactification and dualities.} According to the Kaluza-Klein procedure, the isometries of $\mathscr{M}_{\text{int}}$ induce gauge symmetries in the lower-dimensional theory, gauged by the vectors originating from the metric in the reduction mechanism (KK vectors). The internal manifold $\mathscr{M}_{\text{int}}$ also affects the field content of the $D=4$ theory, which is arranged in supermultiplets according to the residual (super)symmetry of the vacuum solution ${\mathbf{M_4}}^{\scalebox{0.6}{(1,3)}}$.\par The compactification of superstring/M-theory on a \emph{Ricci-flat} internal manifold (like a torus or a Calabi-Yau space), in the absence of fluxes of higher-order form field-strengths, yields, in the low-energy limit, an effective four-dimensional SUGRA which involves the massless modes on ${\mathbf{M_4}}^{\scalebox{0.6}{(1,3)}}$. The latter is an ungauged theory, namely one in which the vector fields are not minimally coupled to any other field of the theory. At the classical level, ungauged supergravity models feature an on-shell global symmetry group, which was conjectured to encode the known superstring/M-theory dualities \cite{Hull:1994ys}. The idea behind these dualities is that superstring/M-theory provides a redundant description of the same microscopic degrees of freedom: different compactifications of the theory turn out to define distinct descriptions of the same quantum physics. These descriptions are connected by dualities, which also map the corresponding low-energy descriptions into one another. The global symmetry group $G$ of the classical $D=4$ supergravity is in part a remnant of the symmetry of the original higher dimensional theory, i.e.\ invariance under reparametrizations of $\mathscr{M}_{\text{int}}$ \footnote{ in part the global symmetries originate from gauge symmetries associated with the higher dimensional antisymmetric tensor fields }.\par\medskip \paragraph{Ungauged vs Gauged models.} From a phenomenological point of view, extended supergravity models on four dimensional Minkowski vacua, obtained through ordinary Kaluza-Klein reduction on a Ricci-flat manifold, are not consistent with experimental observations. These models typically contain a certain number of massless scalar fields, associated with the geometry of the internal manifold $\mathscr{M}_{\text{int}}$, whose vacuum expectation values (vevs) define a continuum of degenerate vacua. In fact, there is no scalar potential encoding any scalar dynamics, so the degeneracy cannot be lifted. This results in an intrinsic lack of predictiveness for the model, in addition to a field content which comprises massless scalar fields coupled to gravity, whose large scale effects are not observed in our universe.\par Another feature of these models, as we said above, is the absence of an internal local symmetry gauged by the vector fields.
This means that no matter field is charged under a gauge group, hence the name \emph{ungauged supergravity}.\par \medskip Realistic quantum field theory models in four dimensions therefore require the presence of a non-trivial scalar potential, which could solve (in part or completely) the moduli-degeneracy problem and, on the other hand, select a vacuum state for our universe featuring desirable physical properties; for instance, the potential could \begin{enumerate}[-,itemsep=1ex] \item introduce mass terms for the scalars; \item support the presence of some effective cosmological constant; \item etc. \end{enumerate} \smallskip The phenomenologically uninteresting ungauged SUGRAs can provide a general framework for the construction of realistic models. In a $D=4$ extended supergravity model (i.e.\ having $\mathcal{N}>1$ SUSY), it is possible to introduce a scalar potential, without explicitly breaking supersymmetry, through the so-called \emph{gauging procedure} \cite{deWit:1982ig,deWit:2002vt,deWit:2003hq,deWit:2005hv,deWit:2005ub,Hull:1984vg,Hull:1984qz,Trigiante:2007ki,Samtleben:2008pe}. The latter can be seen as a \emph{deformation} of an ungauged theory and consists in promoting some suitable subgroup $G_g$ of the global symmetry group of the Lagrangian to a \emph{local} symmetry. This is done by introducing minimal couplings for the vector fields, mass deformation terms and the scalar potential itself. The coupling of the (formerly abelian) vector fields to the new local gauge group yields matter fields that are charged under this new local gauge symmetry.\par In particular, in the presence of fluxes of higher-order form field-strengths across cycles of the internal manifold \begin{equation} \langle \, \int_{\Sigma_p} F_{(p)} \;\, \rangle ~\neq~ 0\;, \end{equation} the non-linear dynamics of the low lying modes (or of a consistent truncation thereof) is, in most cases, captured by a $D=4$ theory which is gauged.\par The gauge group $G_g$ of the lower dimensional SUGRA depends on the geometry of the internal manifold and on the possible internal fluxes \begin{center} \begin{tikzpicture}[>=latex] \path (0,0) node [inner sep=3pt,outer sep=4pt] (G) {$G_g$}; \path (G)--++(0-45:2.2cm) node [inner sep=3pt,outer sep=4pt] (M) {geom.\ of $\mathscr{M}_{\text{int}}$}; \path (G)--++(180+45:2.2cm) node [inner sep=3pt,outer sep=4pt] (F) {int. fluxes}; \draw[->] (G)--++(F); \draw[->] (G)--++(M); \draw[<->] (M)--++(F); \end{tikzpicture} \end{center} The fluxes and the structure of the internal manifold, aside from the gauge symmetry, also induce masses and a scalar potential $V(\phi)$ (for reviews on flux-compactifications see \cite{Grana:2005jc,Blumenhagen:2006ci,Douglas:2006es}). These mass terms produce, in general, supersymmetry breaking already at the classical level (which is phenomenologically desirable), while the scalar potential lifts the moduli degeneracy (already at the tree level) and may produce an effective cosmological constant term \begin{center} \begin{tikzpicture}[>=latex] \path (0,0) node[draw,inner sep=8pt,outer sep=2.5pt,align=center] (MF) {geom.\ of $\mathscr{M}_{\text{int}}$\,, \\[1.2ex] int.
fluxes}; \path (MF)--++(30:3cm) node [draw,ellipse,inner sep=3pt,outer sep=3pt] (m) {masses}; \path (MF)--++(-30:3cm) node [draw,ellipse,inner sep=3pt,outer sep=3pt] (V) {$V(\phi)$}; \path (m.east)--++(0:2cm) node [inner sep=3pt,outer sep=3pt] (sb) {SUSY breaking}; \path (V.east)--++(30:2cm) node [inner sep=3pt,outer sep=3pt] (sm) {scalar masses}; \path (V.east)--++(-30:2cm) node [inner sep=3pt,outer sep=3pt] (L) {cosm. constant}; \draw[->] (MF) to [out=80, in=180] (m.180); \draw[->] (MF) to [out=-80, in=180] (V.180); \draw[->] (m) to [out=0, in=180] (sb.180); \draw[->] (V) to [out=60, in=180] (sm.180); \draw[->] (V) to [out=-60, in=180] (L.180); \end{tikzpicture} \end{center} Supergravity theories in $D$ dimensions are consistently defined independently of their higher-dimensional origin, and are completely specified by \begin{enumerate}[$\circ$,itemsep=1ex] \item the amount of supersymmetry; \item the field content; \item the local symmetry gauged by the vector fields (a feature of gauged SUGRAs). \end{enumerate} \medskip When originating from superstring/M-theory compactifications, gauged SUGRAs offer a unique window on the perturbative low-energy dynamics of these theories, since they describe the full non-linear dynamics of the low lying modes. In general, there is a correspondence between vacua of the microscopic fundamental theory and vacua of the low-energy supergravity. However, there are several gauged SUGRAs whose superstring/M-theory origin is not known.\par Gauged supergravities are obtained from ungauged ones, with the same field content and amount of SUSY, through the previously mentioned gauging procedure, which is well-defined and works provided the gauge group $G_g$ satisfies some stringent conditions originating from the requirement of gauge invariance and supersymmetry. \begin{center} \begin{tikzpicture}[>=latex] \path (0,0) node[draw,inner sep=4pt,outer sep=5pt,align=center] (SSM) {SS/M-theory \\[0.5ex] $D=10,\,11$}; \path (SSM.north east)--++(45:2.5cm) node [inner sep=2pt,outer sep=2pt,align=center] (f0) {\begin{minipage}{3cm} \begin{align*} \begin{sqcases} \;\;{\mathbf{M_4}}^{\scalebox{0.6}{(1,3)}}\times~\mathscr{M}_{\text{int}}\,,\\ \;\;\mbox{Ricci flat}\,,\\ \;\;{\rm flux}=0 \end{sqcases} \end{align*} \end{minipage}}; \path (SSM.south east)--++(-45:2.5cm) node [inner sep=2pt,outer sep=2pt] (fne0) {\begin{minipage}{3cm} \begin{align*} \begin{sqcases} \;\;{\mathbf{M_4}}^{\scalebox{0.6}{(1,3)}}\times~\mathscr{M}_{\text{int}}\,,\\[\jot] \;\;{\rm flux}\neq0 \end{sqcases} \end{align*} \end{minipage}}; \path (f0.east)--++(0:3.5cm) node [inner sep=2pt,outer sep=5pt,align=left] (US) {\begin{minipage}{3cm} \begin{align*} \begin{sqcases} \;\;\text{\underline{Ungauged SUGRA}}\,,\\[\jot] \quad\circ\;\;\parbox[t]{1.\textwidth}{\linespread{1.08}\selectfont{global symmetry group $G$ encoding SS/M-th.
dualities}} \end{sqcases} \end{align*} \end{minipage}}; \path (fne0.east)--++(0:3.5cm) node [inner sep=2pt,outer sep=4pt] (GS) {\begin{minipage}{3cm} \begin{align*} \begin{sqcases} \;\;\text{\underline{Gauged SUGRA}}\,,\\[\jot] \quad\circ\quad G_g\,,\\ \quad\circ\quad\text{masses}\,,\\ \quad\circ\quad V(\phi)\neq0 \end{sqcases} \end{align*} \end{minipage}}; \draw[->] (SSM.north) to [out=80, in=190] (f0.west); \draw[->] (SSM.south) to [out=-80, in=170] (fne0.west); \draw[->] (f0.east) to [out=0, in=180] node[pos=0.5,above]{\scriptsize $D$-dim} (US.west); \draw[->] (fne0.east) to [out=0, in=180] node[pos=0.5,above]{\scriptsize $D$-dim} (GS.west); \draw[->] (US.south) to node[pos=0.5,left,align=center]{{\scriptsize gauging} \\ {\scriptsize of $G_g\subset G$} } (GS.north); \end{tikzpicture} \end{center} As mentioned above, gauging is the only known way to introduce a scalar potential in extended supergravities without an explicit breaking of supersymmetry. However, this procedure will in general break the global symmetry group of the ungauged theory. The latter indeed acts as a generalized electric-magnetic duality and is thus broken by the minimal couplings, which only involve the electric vector fields. As a consequence, in a gauged supergravity we lose track of the string/M-theory dualities, which were described by global symmetries of the original ungauged theories. \par This drawback can be avoided using the {\em embedding tensor} formulation of the gauging procedure \cite{Cordaro:1998tx,Nicolai:2000sc,deWit:2002vt,deWit:2005ub,deWit:2007mt}, in which all the deformations involved in the gauging are encoded in a single object, the embedding tensor, which is itself covariant with respect to the global symmetries of the ungauged model. This allows one to formally restore such symmetries at the level of the gauged field equations and Bianchi identities, provided the embedding tensor is transformed together with all the other fields. The global symmetries of the ungauged theory now act as equivalences between gauged supergravities. Since the embedding tensor encodes all the background quantities of the compactification describing the fluxes and the structure of the internal manifold, the action of the global symmetry group on it allows one to systematically study the effect of dualities on flux compactifications.\par\bigskip \noindent These lectures are organized as follows.\par In Sect.\ \ref{sec:2} we briefly review the general structure of ungauged supergravity theories.\par In Sect.\ \ref{sec:3} we discuss the gauging procedure in the electric symplectic frame, and comment on the relation between the embedding tensor and the internal fluxes, as well as on the action of dualities on the latter. We end the section by discussing, as an example, the gauging of the maximal four dimensional theory.\par In Sect.\ \ref{sec:4} we review a manifestly covariant formulation of the gauging procedure and introduce the notion of tensor hierarchy in higher dimensions.\par A more complete and detailed recent review of gauged supergravities can be found in \cite{Trigiante:2016mnt}.
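Schematically (in our notation, which anticipates the structures discussed in Sect.\ \ref{sec:3} rather than reproducing any specific formula given there), the embedding tensor $\Theta$ selects the gauge generators inside the global symmetry algebra and enters the minimal couplings as \begin{equation*} X_\Lambda = \Theta_\Lambda{}^\alpha\,t_\alpha\,,\qquad D_\mu = \partial_\mu - g\,A^\Lambda_\mu\,\Theta_\Lambda{}^\alpha\,t_\alpha\,, \end{equation*} where the $t_\alpha$ are the generators of $G$ and $\Lambda$ labels the vector fields; the consistency conditions mentioned above then translate into linear and quadratic constraints on $\Theta$.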
\section{Review of ungauged supergravities}\label{sec:2} Let us recall some basic aspects of extended ungauged $D=4$ supergravities.\par \paragraph{Field content and bosonic action.} The bosonic sector consists of the graviton $g_{\mu\nu}(x)$, $n_v$ vector fields $A^\Lambda_\mu(x)$ and $n_s$ scalar fields $\phi^s(x)$, and is described by a bosonic Lagrangian of the following general form \footnote{ using the ``mostly minus'' convention and \;$8\pi\mathrm{G}_{\textsc{n}}=c=\hbar=1$ } \begin{equation} \frac{1}{e}\;\mathscr{L}_{\text{bos}}~= \-\frac{R}{2} \+\frac{1}{2}\,\mathcalboondox{G}_{st}(\phi)\,\partial_\mu\phi^s\,\partial^\mu\phi^t \+\frac{1}{4}\,\mathcal{I}_{\Lambda\Sigma}(\phi)\,F^\Lambda_{\mu\nu}\,F^{\Sigma\,\mu\nu} \+\frac{1}{8\,e}\,\mathcal{R}_{\Lambda\Sigma}(\phi)\,\epsilon^{\mu\nu\rho\sigma}\,F^\Lambda_{\mu\nu} \,F^{\Sigma}_{\rho\sigma}\,, \label{boslagr} \end{equation} where $e=\sqrt{|{\rm Det}(g_{\mu\nu})|}$ and the $n_v$ vector field strengths are defined as usual: \begin{equation} F^\Lambda_{\mu\nu}=\partial_\mu A^\Lambda_\nu-\partial_\nu A^\Lambda_\mu\;. \end{equation} \bigskip Let us comment on the general characteristics of the above action. \begin{enumerate}[$\circ$,itemsep=1.5ex] \item{The scalar fields $\phi^s$ are described by a non-linear $\sigma$-model, that is they are coordinates of a non-compact, \emph{Riemannian} $n_s$-dimensional differentiable manifold (target space), named \emph{scalar manifold} and denoted by $\mathscr{M}_{\rm scal}$. The positive definite metric on this manifold is $\mathcalboondox{G}_{st}(\phi)$. The corresponding kinetic part of the Lagrangian density reads: \begin{equation} \mathscr{L}_{\text{scal}}=\frac{e}{2}\,\mathcalboondox{G}_{st}(\phi)\,\partial_\mu \phi^s\partial^\mu \phi^t\,. \end{equation} The $\sigma$-model action is clearly invariant under the action of global (i.e.\ space-time independent) isometries of the scalar manifold. As we shall discuss below, the isometry group $G$ can be promoted to a global symmetry group of the field equations and Bianchi identities (i.e.\ an \emph{on-shell global symmetry group}), provided its (non-linear) action on the scalar fields is combined with an electric-magnetic duality transformation on the vector field strengths and their magnetic duals. } \item{The two terms containing the vector field strengths will be called vector kinetic terms. A general feature of supergravity theories is that the scalar fields are non-minimally coupled to the vector fields, as they enter these terms through the symmetric matrices $\mathcal{I}_{\Lambda\Sigma}(\phi),\,\mathcal{R}_{\Lambda\Sigma}(\phi)$ which contract the vector field strengths. The former, $\mathcal{I}_{\Lambda\Sigma}(\phi)$, is negative definite and generalizes the $-1/g^2$ factor in the Yang-Mills kinetic term. The latter, $\mathcal{R}_{\Lambda\Sigma}(\phi)$, generalizes the $\theta$-term.} \item{There is a ${\rm U}(1)^{n_v}$ gauge invariance associated with the vector fields: \begin{equation} A_\mu^\Lambda\rightarrow A_\mu^\Lambda+\partial_\mu\zeta^\Lambda\;, \end{equation} and all the fields are neutral with respect to this symmetry group.} \item{There is no scalar potential. In an ungauged supergravity a scalar potential is allowed only for $\mathcal{N}=1$ (called the \emph{F-term potential}).
In extended supergravities a non-trivial scalar potential can be introduced without explicitly breaking supersymmetry only through the \emph{gauging procedure}, which implies the introduction of a local symmetry group to be gauged by the vector fields of the theory, and which will be extensively dealt with in the following.} \end{enumerate} The fermion part of the action is completely determined by supersymmetry once the bosonic one is given. Let us discuss in some detail the scalar sector and its mathematical description. \subsection{Scalar sector and coset geometry}\label{ghsect} As mentioned above, the scalar fields $\phi^s$ are coordinates of a Riemannian scalar manifold $\mathscr{M}_{\rm scal}$, with metric $\mathcalboondox{G}_{st}(\phi)$. The isotropy group $H$ of $\mathscr{M}_{\rm scal}$ has the general form \begin{equation} H\=H_{\rm R}\times H_{\rm matt}\,,\label{Hgroup} \end{equation} where $H_{\rm R}$ is the R-symmetry group and $H_{\rm matt}$ is a compact group acting on the matter fields. The gravitino and spin-$\frac12$ fields will transform in representations of the $H$ group. The maximal theory $\mathcal{N}=8$ describes the gravitational multiplet only and thus $H=H_{\rm R}=\mathrm{SU}(8)$. The isometry group $G$ of $\mathscr{M}_{\rm scal}$ clearly defines the global symmetries of the scalar action.\par In $\mathcal{N}>2$ theories the scalar manifold is constrained by supersymmetry to be homogeneous symmetric, namely to have the general form \begin{equation} \mathscr{M}_{\rm scal}\=\frac{G}{H}\,, \end{equation} where $G$ is the semisimple non-compact Lie group of isometries and $H$ its maximal compact subgroup. Generic homogeneous spaces $\mathscr{M}_{\rm scal}$ can always be written in the above form, though $G$ need not be semisimple. \begin{table} \begin{center} {\scriptsize \renewcommand{\arraystretch}{1.7} \begin{tabular}{ | c|| c | c| c|c|} \hline $\mathcal{N}$ & $\dfrac{G}{H}$ & $n_s$ &$n_v$& ${\Scr R}_v$\\[2ex] \hline\hline & & & & \\ 8 & $\frac{{\rm E}_{7(7)}}{{\rm SU}(8)}$ & 70 & 28 &{\bf 56} \\ & & & & \\ \hline & & & & \\ 6 & $\frac{\mathrm{SO}^*(12)}{{\rm U}(6)}$ & 30 &16& ${\bf 32}_c$\\ & & & & \\ \hline & & & & \\ 5 & $\frac{{\rm SU}(5,1)}{{\rm U}(5)}$ & 10 &10&{\bf 20}\\ & & & & \\ \hline & & & & \\ 4 & $\frac{{\rm SL}(2,\mathbb{R})}{{\rm SO}(2)}\times \frac{{\rm SO}(6,n)}{{\rm SO}(6)\times {\rm SO}(n)}$ & 6n+2 &n+6 & ${\bf (2,6+n)}$\\ & & & & \\ \hline & & & & \\ 3 & $\frac{{\rm SU}(3,n)}{{\rm S}[{\rm U}(3)\times{\rm U}(n)]}$ & 6n &3+n & $({\bf 3+n})+({\bf 3+n})'$\\ & & & & \\ \hline \end{tabular} \caption{Homogeneous symmetric scalar manifolds in $\mathcal{N}>2$ supergravities, their real dimensions $n_s$, the number $n_v$ of vector fields and the representation ${\Scr R}_v$ in which the vector field strengths and their magnetic duals transform.}\label{tabl}} \end{center} \end{table} \begin{table} \begin{center} {\scriptsize \renewcommand{\arraystretch}{1.7} \begin{tabular}{ | c|| c | c| c|c|} \hline $\mathcal{N}$ & $\dfrac{G}{H}$ & $n_s$ &$n_v$&${\Scr R}_v$ \\[2ex] \hline\hline & & & & \\ & $\frac{{\rm SU}(1,n)}{{\rm U}(n)}$ & 2n &n+1&$({\bf 1+n})+({\bf 1+n})'$\\ & & & & \\ & $\frac{{\rm SL}(2,\mathbb{R})}{{\rm SO}(2)}\times\frac{{\rm SO}(2,n-1)}{{\rm SO}(2)\times {\rm SO}(n-1)}$ & $2n$ &n+1 & ${\bf (2,n+1)}$\\ & & & & \\ & $\frac{{\rm SU}(1,1)}{{\rm U}(1)}$ & 2 &2&${\bf 4}$\\ & & & & \\ 2, SK & $\frac{{\rm Sp}(6)}{{\rm U}(3)}$ & 12 & 7 &${\bf 14}'$\\ & & & & \\ & $\frac{{\rm SU}(3,3)}{{\rm S}[{\rm U}(3)\times{\rm U}(3)]}$ & 18 &10& ${\bf 20}$\\ & & & & \\ & $\frac{{\rm SO}^*(12)}{{\rm U}(6)}$ & 30 &16& ${\bf 32}_c$\\ & & & & \\ & $\frac{{\rm E}_{7(-25)}}{{\rm U}(1)\times {\rm E}_6}$ & 54 &28& ${\bf 56}$\\
U}(1)\times {\rm E}_6}$ & 54 &28& ${\bf 56}$\\ & & & & \\ \hline & & & & \\ & $\frac{{\rm SU}(2,n_H)}{{\rm S}[{\rm U}(2)\times {\rm U}(n_H)]}$ & $4n_H$ & & \\ & & & & \\ & $\frac{{\rm SO}(4,n_H)}{{\rm SO }(4)\times {\rm SO}(n_H)}$ & $4n_H$ & & \\ & & & & \\ & $\frac{{\rm G}_{2(2)}}{{\rm SU }(2)\times {\rm SU }(2)}$ & $8$ & & \\ & & & & \\ & $\frac{{\rm F}_{4(+4)}}{{\rm SU}(2)\times {\rm USp}(6)}$ & $28$ & & \\ & & & & \\ 2, QK& $\frac{{\rm E}_{6(+2)}}{{\rm SU}(2)\times {\rm SU}(6)}$ & $40$ & & \\ & & & & \\ & $\frac{{\rm E}_{7(-5)}}{{\rm SU}(2)\times {\rm SO}(12)}$ & $64$ & & \\ & & & & \\ & $\frac{{\rm E}_{8(-24)}}{{\rm SU}(2)\times {\rm E}_7}$ & $112$ & & \\ & & & & \\ & $\frac{{\rm USp}(2,2n_H)}{{\rm USp}(2)\times {\rm USp}(2n_H)}$ & $4n_H$ & & \\ \hline \end{tabular} \caption{Homogeneous symmetric special K\"ahler (SK) and quaternionic K\"ahler (QK) scalar manifolds in $\mathcal{N}=2$ supergravities, their real dimensions $n_s$, the number $n_v$ of vector fields and, for the SK manifolds, the symplectic representation ${\Scr R}_v$.}}\label{table2} \end{center} \end{table} The action of an isometry transformation $g\in G$ on the scalar fields $\phi^r$ parametrizing $\mathscr{M}_{\rm scal}$ is defined by means of a \emph{coset representative} $\textsl{L}(\phi)\in G/H$ as follows: \begin{equation} g\cdot \textsl{L}(\phi^r)=\textsl{L}(g\star\phi^r)\cdot h(\phi^r,g)\,,\label{gLh} \end{equation} where $g\star\phi^r$ denote the transformed scalar fields, non-linear functions of the original ones $\phi^r$, and $h(\phi^r,g)$ is a \emph{compensator} in $H$. The coset representative is defined modulo the right-action of $H$ and is fixed by the chosen parametrization of the manifold. Of particular relevance in supergravity is the so-called \emph{solvable parametrization}, which corresponds to fixing the action of $H$ so that $\textsl{L}$ belongs to a solvable Lie group% \footnote{ a solvable Lie group $G_S$ can be described (locally) as the Lie group generated by a \emph{solvable Lie algebra} $\mathscr{S}$: $G_S=\exp(\mathscr{S}) $. A Lie algebra $\mathscr{S}$ is solvable iff, for some $k>0$, ${\bf D}^k \mathscr{S}=0$, where the \emph{derived algebra} ${\bf D}\mathfrak{g}$ of a Lie algebra $\mathfrak{g}$ is defined as follows: \,${\bf D}\mathfrak{g}\equiv [\mathfrak{g},\mathfrak{g}]$, \;${\bf D}^n\mathfrak{g}\equiv [{\bf D}^{n-1}\mathfrak{g},{\bf D}^{n-1}\mathfrak{g}]$. In a suitable basis of a given representation, elements of a solvable Lie group or a solvable Lie algebra are all described by upper (or lower) triangular matrices } $G_S=\exp(\mathscr{S})$, generated by a solvable Lie algebra $\mathscr{S}$ and defined, in the symmetric case, by the Iwasawa decomposition of $G$ with respect to $H$. The scalar fields are then parameters of the solvable Lie algebra $\mathscr{S}$: \begin{align} \textsl{L}(\phi^r)&= e^{\phi^r T_r}\in \exp(\mathscr{S})\,, \end{align} where $\{T_r\}$ is a basis of $\mathscr{S}$ ($r=1,\dots,\,n_s$). All homogeneous scalar manifolds occurring in supergravity theories admit this parametrization, which is useful when the four-dimensional supergravity originates from the Kaluza-Klein reduction of a higher-dimensional one on some internal compact manifold.
The solvable coordinates directly describe dimensionally reduced fields and, moreover, this parametrization makes the shift symmetries of the metric manifest.\par The Lie algebra $\mathfrak{g}$ of $G$ can be decomposed into the Lie algebra $\mathfrak{H}$ generating $H$, and a coset space $\mathfrak{K}$: \begin{equation} \mathfrak{g}=\mathfrak{H}\oplus \mathfrak{K}\,,\label{ghkdec} \end{equation} where in general we have: \begin{equation} [\mathfrak{H},\,\mathfrak{H}]\subset \mathfrak{H}\;;\qquad [\mathfrak{H},\,\mathfrak{K}]\subset \mathfrak{K}\;;\qquad [\mathfrak{K},\,\mathfrak{K}]\subset \mathfrak{H}\oplus\mathfrak{K}\;,\label{hkh} \end{equation} that is, the space $\mathfrak{K}$ supports a representation $\mathcal{K}$ of $H$ with respect to its adjoint action. An alternative choice of parametrization corresponds to defining the coset representative as an element of $\exp(\mathfrak{K})$: \begin{align} \textsl{L}(\phi^r)&= e^{\phi^r K_r} \;\in\, \exp(\mathfrak{K})\,, \end{align} where $\{K_r\}$ is a basis of $\mathfrak{K}$. As opposed to the solvable parametrization, the coset representative is no longer a group element, since $\mathfrak{K}$ does not close an algebra, see the last of eqs.\ (\ref{hkh}). The main advantage of this parametrization is that the action of $H$ on the scalar fields is \emph{linear}: \begin{align} \forall h\in H\;:\quad h\,\textsl{L}(\phi^r)=h\,e^{\phi^r K_r}\,h^{-1}\,h=e^{\phi^r\,h\,K_r\,h^{-1}}\,h=\textsl{L}(\phi^{\prime r})\,h\,, \end{align} where $\phi^{\prime r}=(h^{-1})_s{}^r\,\phi^s$, and $h_s{}^r$ describes $h$ in the representation $\mathcal{K}$. This is not the case for the solvable parametrization, since $[\mathfrak{H},\,\mathscr{S}]\nsubseteq \mathscr{S}$.\par In all parametrizations, the origin $\mathcal{O}$ is defined as the point in which the coset representative equals the identity element of $G$, so that the $H$-invariance of $\mathcal{O}$ is manifest: $\textsl{L}(\mathcal{O})=\Id$.\par If the manifold, besides being homogeneous, is also \emph{symmetric}, the space $\mathfrak{K}$ can be defined so that: \begin{equation} [\mathfrak{K},\,\mathfrak{K}]\subset \mathfrak{H}\,. \end{equation} In this case eq.\ (\ref{ghkdec}) defines the Cartan decomposition of $\mathfrak{g}$ into \emph{compact} and \emph{non-compact} generators, in $\mathfrak{H}$ and $\mathfrak{K}$, respectively. This means that, in a given matrix representation of $\mathfrak{g}$, a basis of the carrier vector space can be chosen so that the elements of $\mathfrak{H}$ and of $\mathfrak{K}$ are represented by anti-hermitian and hermitian matrices, respectively. \par The geometry of $\mathscr{M}_{\rm scal}$ is described by a vielbein and an $H$-connection constructed out of the left-invariant one-form \begin{equation} \Omega\=\textsl{L}^{-1}\,d\textsl{L}\,\in\,\mathfrak{g}\,,\label{omegapro} \end{equation} satisfying the Maurer-Cartan equation: \begin{equation} d\Omega+\Omega\wedge \Omega=0\,.\label{MCeq} \end{equation} The vielbein and $H$-connection are defined by decomposing $\Omega$ according to (\ref{ghkdec}): \begin{equation} \Omega(\phi)=\mathcal{P}(\phi)+\mathpzc{w}(\phi)\,; \quad\quad \mathpzc{w}\in\mathfrak{H}\,,\quad \mathcal{P}\in\mathfrak{K}\,.\label{Vom} \end{equation} Let us see how these quantities transform under the action of $G$.
For any $g\in G$, using eq.\ (\ref{gLh}), we can write $\textsl{L}(g\star \phi)=g\,\textsl{L}(\phi)\,h^{-1}$, so that: \begin{align} \Omega(g\star \phi)&=h\,\textsl{L}(\phi)^{-1}\,g^{-1} d(g\,\textsl{L}(\phi)\,h^{-1})= h\,\textsl{L}(\phi)^{-1}\,d\textsl{L}(\phi)\;h^{-1}+h\;dh^{-1}\,. \end{align} From (\ref{Vom}) we find: \begin{align} &\mathcal{P}(g\star \phi)+\mathpzc{w}(g\star \phi)=h\,\mathcal{P}(\phi)\,h^{-1}+h\,\mathpzc{w}(\phi)h^{-1}+h\;dh^{-1}\,. \end{align} Since $h\;dh^{-1}$ is a one-form on the group $H$, it takes values in the Lie algebra $\mathfrak{H}$. Projecting the above equation over $\mathfrak{K}$ and $\mathfrak{H}$, we find: \begin{align} \mathcal{P}(g\star \phi)&=h\,\mathcal{P}(\phi)\,h^{-1}\,,\label{Ptra}\\ \mathpzc{w}(g\star \phi)&=h\,\mathpzc{w}(\phi)\,h^{-1}+h\;dh^{-1}\,.\label{omtra} \end{align} We see that $\mathpzc{w}$ transforms as an $H$-connection while the matrix-valued one-form $\mathcal{P}$ transforms linearly under $H$. The vielbein of the scalar manifold are defined by expanding $\mathcal{P}$ in a basis $\{K_{\underline{s}}\}$ of $\mathfrak{K}$ (underlined indices $\underline{s},\underline{r},\underline{t},\dots$ are rigid tangent-space indices, as opposed to the curved coordinate indices $s,r,t,\dots$): \begin{equation} \mathcal{P}(\phi)=V^{\underline{s}}( \phi)\,K_{\underline{s}}\,. \end{equation} From (\ref{Ptra}) it follows that the vielbein 1-forms $V^{\underline{s}}( \phi)=V_s{}^{\underline{s}}( \phi)d\phi^s$ transform under the action of $G$ as follows: \begin{equation} V^{\underline{s}}(g\star \phi)\=V^{\underline{t}}( \phi)\,(h^{-1})_{\underline{t}}{}^{\underline{s}}\=h^{\underline{s}}{}_{\underline{t}}V^{\underline{t}}( \phi)\,.\label{Vtra} \end{equation} For symmetric spaces, from (\ref{MCeq}) it follows that $\mathpzc{w}$ and $\mathcal{P}$ satisfy the following conditions \begin{align} \mathscr{D}\mathcal{P}&~\equiv~ d\mathcal{P}+\mathpzc{w}\wedge \mathcal{P}+\mathcal{P}\wedge \mathpzc{w}\=0\,,\label{DP}\\ R(\mathpzc{w})&~\equiv~ d\mathpzc{w}+\mathpzc{w}\wedge \mathpzc{w}\=-\mathcal{P}\wedge \mathcal{P}\,,\label{RW} \end{align} where we have defined the $H$-covariant derivative $\mathscr{D}\mathcal{P}$ of $\mathcal{P}$ and the $\mathfrak{H}$-valued curvature $R(\mathpzc{w})$ of the manifold. The latter can be written in components: \begin{equation} R(\mathpzc{w})=\frac{1}{2}\,R_{rs}\,d\phi^r\wedge d\phi^s \quad\Rightarrow\quad R_{rs}=-[\mathcal{P}_r,\,\mathcal{P}_s]\in \mathfrak{H}\,.\label{Rcompo} \end{equation} We define the metric at the origin $\mathcal{O}$ as the $H$-invariant matrix: \begin{equation} \eta_{\underline{s}\underline{t}}\equiv k\,{\rm Tr}(K_{\underline{s}}\,K_{\underline{t}})>0\,, \end{equation} where $k$ is a positive number depending on the representation, so that the metric at a generic point reads: \begin{equation} ds^2(\phi)\equiv\mathcalboondox{G}_{st}(\phi)d\phi^s\,d\phi^t\equiv V_s{}^{\underline{s}}( \phi)V_t{}^{\underline{t}}( \phi)\eta_{\underline{s}\underline{t}}\,d\phi^s\,d\phi^t=k\,{\rm Tr}(\mathcal{P}_s\,\mathcal{P}_t)\,d\phi^s\,d\phi^t\,. \end{equation} As it follows from eqs.\ (\ref{Ptra}), (\ref{Vtra}), the above metric is manifestly invariant under global $G$-transformations acting on $\textsl{L}$ to the left (as well as local $H$-transformations acting on $\textsl{L}$ to the right): \begin{equation} ds^2(g\star \phi)=ds^2(\phi)\;.
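\end{equation} Indeed, the invariance can be checked in one line: using (\ref{Ptra}) and the cyclic property of the trace, \begin{equation} k\,{\rm Tr}\big(\mathcal{P}(g\star \phi)\,\mathcal{P}(g\star \phi)\big)=k\,{\rm Tr}\big(h\,\mathcal{P}(\phi)\,h^{-1}\,h\,\mathcal{P}(\phi)\,h^{-1}\big)=k\,{\rm Tr}\big(\mathcal{P}(\phi)\,\mathcal{P}(\phi)\big)\,. \end{equation}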
The $\sigma$-model Lagrangian can be written in the form: \begin{equation} \mathscr{L}_{\text{scal}}=\frac{e}{2}\, \mathcalboondox{G}(\phi)_{st}\partial_\mu\phi^s\,\partial^\mu\phi^t =\frac{e}{2}\,k\,\mathrm{Tr}\big(\mathcal{P}_\mu(\phi)\,\mathcal{P}^\mu(\phi)\big)\,, \qquad \mathcal{P}_\mu=\mathcal{P}_s\frac{\partial\phi^s}{\partial x^\mu}\,,\;\quad\label{lagrscal} \end{equation} and, just as the metric $ds^2$, is manifestly invariant under global $G$ and local $H$-transformations acting on $\textsl{L}$ as in (\ref{gLh}).\par The bosonic part of the equations of motion for the scalar fields can be derived from the Lagrangian (\ref{boslagr}) and read: \begin{align} \mathscr{D}_\mu (\partial^\mu \phi^s)&=\frac{1}{4}\,\mathcalboondox{G}^{st}\,\left[F_{\mu\nu}^\Lambda\, \partial_t\,\mathcal{I}_{\Lambda\Sigma}\,F^{\Sigma\, \mu\nu}+F_{\mu\nu}^\Lambda \partial_t\, \mathcal{R}_{\Lambda\Sigma}\,{}^*F^{\Sigma\, \mu\nu}\right]\,,\label{scaleqs} \end{align} where $\partial_s\equiv \frac{\partial}{\partial \phi^s}$, while $\mathscr{D}_\mu$ also contains the Levi-Civita connection $\tilde{\Gamma}$ on the scalar manifold: \begin{equation} \mathscr{D}_\mu (\partial_\nu \phi^s)\equiv \nabla_\mu(\partial_\nu \phi^s)+\tilde{\Gamma}^s_{t_1 t_2}\partial_\mu \phi^{t_1}\,\partial_\nu\phi^{t_2}\,, \end{equation} $\nabla_\mu$ being the covariant derivative containing the Levi-Civita connection on space-time.\par Let us end this paragraph by introducing, in the coset geometry, the Killing vectors describing the infinitesimal action of isometries on the scalar fields. Let us denote by $t_\alpha$ the infinitesimal generators of $G$, defining a basis of its Lie algebra $ \mathfrak{g}$ and satisfying the corresponding commutation relations \begin{equation} [t_\alpha,\,t_\beta]={\bf f}_{\alpha\beta}{}^\gamma\,t_\gamma\,,\label{talg} \end{equation} ${\bf f}_{\alpha\beta}{}^\gamma$ being the structure constants of $\mathfrak{g}$. Under an infinitesimal $G$-transformation generated by $\epsilon^\alpha\,t_\alpha$ ($\epsilon^\alpha\ll 1$): \begin{equation} g\approx \Id+\epsilon^\alpha\,t_\alpha\,, \end{equation} the scalars transform as: \begin{equation} \phi^s\rightarrow \phi^s+\epsilon^\alpha\,k^s_\alpha(\phi)\,, \end{equation} $k^s_\alpha(\phi)$ being the Killing vector associated with $t_\alpha$. The action of $g$ on the scalars is defined by eq.\ (\ref{gLh}), neglecting terms of order $\epsilon^2$: \begin{equation} (\Id+\epsilon^\alpha\,t_\alpha)\,\textsl{L}(\phi)=\textsl{L}(\phi+\epsilon^\alpha\,k_\alpha)(\Id-\frac{1}{2}\,\epsilon^\alpha W_\alpha ^I\,J_I)\,, \end{equation} where $(\Id-\frac{1}{2}\,\epsilon^\alpha W_\alpha ^I\,J_I)$ denotes, expanded to linear order in $\epsilon$, the compensating transformation $h(\phi,g)$, $\{J_I\}$ being a basis of $\mathfrak{H}$. Equating the terms proportional to $\epsilon^\alpha$, multiplying to the left by $\textsl{L}^{-1}$ and using the expansion (\ref{Vom}) of the left-invariant 1-form, we end up with the following equation: \begin{equation} \textsl{L}^{-1}t_\alpha\textsl{L}\=k_\alpha^s\,(\mathcal{P}_s+\mathpzc{w}_s)-\frac{1}{2}\,W_\alpha ^I\,J_I\=k_\alpha^s\,V_s{}^{\underline{s}}\,K_{\underline{s}}+\frac{1}{2}\,(k_\alpha^s\omega_s^I-W_\alpha ^I)\,J_I\,,\label{kespans} \end{equation} where we have expanded the $H$-connection along $J_I$ as follows: \begin{equation} \mathpzc{w}_s=\frac{1}{2}\,\omega^I_s\,J_I\,.
\end{equation} Eq.\ (\ref{kespans}) allows one to compute $k_\alpha$ for homogeneous scalar manifolds by projecting $\textsl{L}^{-1}t_\alpha\textsl{L}$ along the directions of the coset space $\mathfrak{K}$. These Killing vectors satisfy the following algebraic relations (note the minus sign on the right-hand side with respect to (\ref{talg})): \begin{equation} [k_\alpha,\,k_\beta]=-{\bf f}_{\alpha\beta}{}^\gamma\,k_\gamma\,. \end{equation} We can split, according to the general structure (\ref{Hgroup}), the $H$-generators $J_I$ into $H_{\rm R}$-generators $J_{{\bf a}}$ (${{\bf a}}=1,\dots,{\rm dim}(H_{\rm R})$) and $H_{\rm matt}$-generators $J_{{\bf m}}$ (${\bf m}=1,\dots,{\rm dim}(H_{\rm matt})$), and rewrite (\ref{kespans}) in the form: \begin{equation} \textsl{L}^{-1}t_\alpha \textsl{L}\= k_\alpha^s\,V_s{}^{\underline{s}}\,K_{\underline{s}}-\frac{1}{2}\,\mathscr{P}_\alpha^{{\bf a}}\,J_{{\bf a}}-\frac{1}{2}\,\mathscr{P}_\alpha^{{\bf m}}\,J_{{\bf m}}\,.\label{kespans2} \end{equation} The quantities \begin{equation} \mathscr{P}_\alpha^{{\bf a}}=- (k_\alpha^s\omega_s^{{\bf a}}-W_\alpha ^{{\bf a}})\,, \end{equation} generalize the so-called \emph{momentum maps} of $\mathcal{N}=2$ theories, which provide a Poissonian realization of the isometries $t_\alpha$. One can verify the general property: \begin{equation} k_\alpha^s\,R^{{\bf a}}_{st}=\mathscr{D}_t \mathscr{P}_\alpha^{{\bf a}}\,,\label{KRP} \end{equation} where $\mathscr{D}_s$ denotes the $H$-covariant derivative and we have expanded the curvature $R[\mathpzc{w}]$ defined in (\ref{RW}) along $J_I$: \begin{equation} R[\mathpzc{w}]=\frac{1}{2}\,R^I_{st}\,d\phi^s\wedge d\phi^t\,J_I\,. \end{equation} These objects are important in the gauging procedure, since they enter the definition of the gauged connections for the fermion fields, as well as the gravitino-shift matrix $\mathbb{S}_{AB}$ (see Sect.\ \ref{sec:3}). For all those isometries which do not produce compensating transformations in $H_{\rm R}$, $W_\alpha^{{\bf a}}=0$ and the $ \mathscr{P}_\alpha^{{\bf a}}$ are easily computed to be $$ \mathscr{P}_\alpha^{{\bf a}}=- k_\alpha^s\omega_s^{{\bf a}}\,.$$ This is the case, in the solvable parametrization, for all the isometries in $\mathscr{S}$, which include translations in the axionic fields.\par In $\mathcal{N}=2$ models with non-homogeneous scalar geometries, though we cannot apply the above construction of $k_\alpha,\,\mathscr{P}_\alpha^{{\bf a}}$, the momentum maps are constructed from the Killing vectors as solutions to the differential equations (\ref{KRP}). In general, in these theories, with each isometry $t_\alpha$ of the scalar manifold we can associate the quantities $\mathscr{P}_\alpha^{{\bf a}},\,\mathscr{P}_\alpha^{{\bf m}}$, which are related to the corresponding Killing vectors $k_\alpha$ through general relations (see \cite{Andrianopoli:1996cm} for a comprehensive account of $\mathcal{N}=2$ theories). \subsection{Vector sector} We can associate with the electric field strengths $F_{\mu\nu}^{\Lambda}$ their magnetic duals $\mathpzc{G}_{\Lambda\,\mu\nu}$, defined as: \begin{align} \mathpzc{G}_{\Lambda\,\mu\nu}&\equiv -\epsilon_{\mu\nu\rho\sigma} \frac{\partial \mathscr{L}_4}{\partial F^\Lambda_{\rho\sigma}}=\mathcal{R}_{\Lambda\Sigma}\,F^\Sigma_{\mu\nu}-\mathcal{I}_{\Lambda\Sigma}\,{}^*F^\Sigma_{\mu\nu}\,,\label{GF} \end{align} where we have omitted fermion currents in the expression of $\mathpzc{G}_\Lambda$, since for the time being we are only focusing on the bosonic sector of the theory.
In ordinary Maxwell theory (no scalar fields), $\mathcal{I}_{\Lambda\Sigma}=-\delta_{\Lambda\Sigma}$ and $\mathcal{R}_{\Lambda\Sigma}=0$, so that $\mathpzc{G}_{\Lambda\,\mu\nu}$ coincides with the Hodge-dual of $F^\Lambda_{\mu\nu}$: $\mathpzc{G}_{\Lambda}={}^* F^{\Lambda}$.\par In terms of $F^\Lambda$ and $\mathpzc{G}_{\Lambda}$, the vector Bianchi identities and the bosonic part of the field equations read \begin{equation} \nabla^{\mu}({}^*F^\Lambda_{\mu\nu}) = 0\,; \qquad \nabla^{\mu }({{}^*\mathpzc{G}}_{\Lambda\,\mu\nu}) = 0\,. \label{biafieq} \end{equation} In order to set the stage for the discussion of global symmetries, it is useful to rewrite the scalar and vector field equations in a different form. Using (\ref{GF}) and the property that ${}^*{}^* F^\Lambda=-F^\Lambda$, we can express ${}^* F^\Lambda$ and ${}^* \mathpzc{G}_\Lambda$ as linear functions of $F^\Lambda$ and $\mathpzc{G}_\Lambda$: \begin{align} {}^* F^\Lambda&= \mathcal{I}^{-1\,\Lambda\Sigma}\,(\mathcal{R}_{\Sigma\Gamma}\,F^\Gamma-\mathpzc{G}_\Sigma)\;;\\ {}^*\mathpzc{G}_\Lambda&= (\mathcal{R}\mathcal{I}^{-1}\mathcal{R}+\mathcal{I})_{\Lambda\Sigma}\,F^\Sigma-(\mathcal{R}\mathcal{I}^{-1})_\Lambda{}^\Sigma\,\mathpzc{G}_\Sigma\,,\label{GF2} \end{align} where, for the sake of simplicity, we have omitted the space-time indices. It is useful to arrange $F^\Lambda$ and $\mathpzc{G}_\Lambda$ in a single $2n_v$-dimensional vector $\mathbb{F}\equiv (\mathbb{F}^M)$ of two-forms: \begin{equation} \mathbb{F}= \left(\frac{1}{2}\,\mathbb{F}^M_{\mu\nu}\,dx^\mu\wedge dx^\nu\right) \equiv \left(\begin{matrix}F^\Lambda_{\mu\nu}\cr \mathpzc{G}_{\Lambda\mu\nu}\end{matrix}\right)\,\frac{dx^\mu\wedge dx^\nu}{2}\,,\label{bbF} \end{equation} in terms of which the Maxwell equations read: \begin{equation} d\mathbb{F}=0\,,\label{Max} \end{equation} and eqs.\ (\ref{GF2}) are easily rewritten in the following compact form: \begin{eqnarray} {}^*\mathbb{F}=-\mathbb{C}\mathcal{M}(\phi^s)\,\mathbb{F}\,,\label{FCMF} \end{eqnarray} where \begin{equation} \mathbb{C}=(\mathbb{C}^{MN})\equiv\left(\begin{matrix} \mathbf{0} & \Id \cr -\Id & \mathbf{0} \end{matrix}\right)\,,\label{C} \end{equation} $\Id$, $\mathbf{0}$ being the $n_v\times n_v$ identity and zero-matrices, respectively, and \begin{equation} \mathcal{M}(\phi)= (\mathcal{M}(\phi)_{MN})\equiv \left(\begin{matrix}(\mathcal{R}\mathcal{I}^{-1}\mathcal{R}+\mathcal{I})_{\Lambda\Sigma} & -(\mathcal{R}\mathcal{I}^{-1})_\Lambda{}^\Gamma\cr -(\mathcal{I}^{-1}\mathcal{R})^\Delta{}_\Sigma & \mathcal{I}^{-1\, \Delta \Gamma}\end{matrix}\right)\,,\label{M} \end{equation} is a symmetric, negative-definite matrix, a function of the scalar fields. The reader can easily verify that this matrix is also symplectic, namely that: \begin{equation} \mathcal{M}(\phi)\mathbb{C}\mathcal{M}(\phi)=\mathbb{C}\,.
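\end{equation} One quick way to verify this property is to observe that, $\mathcal{I}$ and $\mathcal{R}$ being symmetric, $\mathcal{M}(\phi)$ admits the factorization \begin{equation} \mathcal{M}=\left(\begin{matrix} \Id & -\mathcal{R} \cr \mathbf{0} & \Id \end{matrix}\right) \left(\begin{matrix} \mathcal{I} & \mathbf{0} \cr \mathbf{0} & \mathcal{I}^{-1} \end{matrix}\right) \left(\begin{matrix} \Id & \mathbf{0} \cr -\mathcal{R} & \Id \end{matrix}\right)\,, \end{equation} in which each factor is itself a symplectic matrix; since $\mathcal{M}$ is moreover symmetric, the symplectic condition $\mathcal{M}^T\mathbb{C}\mathcal{M}=\mathbb{C}$ reduces to the property just stated.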
This matrix contains $\mathcal{I}_{\Lambda\Sigma}$ and $\mathcal{R}_{\Lambda\Sigma}$ as components, and therefore defines the non-minimal coupling of the scalars to the vector fields.\par After some algebra, we can also rewrite eqs.\ (\ref{scaleqs}) in the compact form \begin{align} \mathscr{D}_\mu (\partial^\mu\phi^s)&= \frac{1}{8}\,\mathcalboondox{G}^{st}\,\mathbb{F}^T_{\mu\nu}\partial_t\mathcal{M}(\phi)\,\mathbb{F}^{\mu\nu}\,.\label{scaleqs2} \end{align} \subsection{Coupling to gravity} We can now compute the Einstein equations: \begin{equation} R_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}\,R=T^{(S)}_{\mu\nu}+T^{(V)}_{\mu\nu}+T^{(F)}_{\mu\nu}\,,\label{EEQ1} \end{equation} where the three terms on the right hand side are the energy-momentum tensors of the scalar, vector and fermionic fields, respectively. The first two can be cast in the following general form \begin{align} T^{(S)}_{\mu\nu}&= \mathcalboondox{G}_{rs}(\phi)\,\partial_\mu \phi^r\partial_\nu \phi^s-\frac{1}{2}\,g_{\mu\nu}\,\mathcalboondox{G}_{rs}(\phi)\,\partial_\rho \phi^r\partial^\rho \phi^s\,,\\ T^{(V)}_{\mu\nu}&=\left({F}^T_{\mu\rho}\,\mathcal{I}\,F_{\nu}{}^\rho-\frac{1}{4}\,g_{\mu\nu}\,(F^T_{\rho\sigma} \mathcal{I} F^{\rho\sigma})\right)\,,\label{tv} \end{align} where in the last equation the vector indices $\Lambda,\Sigma$ have been suppressed for the sake of notational simplicity. It is convenient for our next discussion to rewrite, after some algebra, the right hand side of (\ref{tv}) as follows \begin{equation} T^{(V)}_{\mu\nu}=\frac{1}{2}\,\mathbb{F}^T_{\mu\rho}\,\mathcal{M}(\phi)\,\mathbb{F}_{\nu}{}^\rho\,, \end{equation} so that eq.\ (\ref{EEQ1}) can be finally recast in the following form: \begin{equation} R_{\mu\nu}=\mathcalboondox{G}_{rs}(\phi)\,\partial_\mu \phi^r\partial_\nu \phi^s+\frac{1}{2}\,\mathbb{F}^T_{\mu\rho}\,\mathcal{M}(\phi)\,\mathbb{F}_{\nu}{}^\rho+\dots\,,\label{EEQ2} \end{equation} where the ellipses refer to fermionic terms.\par The scalar fields enter the kinetic terms of the vector fields through the matrices $\mathcal{I}(\phi)$ and $\mathcal{R}(\phi)$. As a consequence of this, a symmetry transformation of the scalar part of the Lagrangian will not in general leave the vector field part invariant. \subsection{Global symmetry group}\label{gsg} In extended supergravity models ($\mathcal{N}>1$) the (identity sector of the) global symmetry group $G$ of the scalar action can be promoted to a global invariance \cite{Gaillard:1981rj} of, at least, the field equations and the Bianchi identities, provided its (non-linear) action on the scalar fields is associated with a linear transformation on the vector field strengths $F^\Lambda_{\mu\nu}$ and their magnetic duals $\mathpzc{G}_{\Lambda\,\mu\nu}$: \begin{align} g\in G\,:\; \begin{cases} \,\,\,\,\,\,\phi^r &\rightarrow \quad g\star\phi^r\;\; \qquad\qquad\qquad\qquad\qquad\qquad\qquad\text{(non--linear)},\\[\jot] \left(\begin{matrix} F^\Lambda \cr \mathpzc{G}_\Lambda \end{matrix}\right) &\rightarrow\quad\mathscr{R}_v[g]\cdot \left(\begin{matrix} F^\Lambda\cr \mathpzc{G}_\Lambda \end{matrix}\right)= \left(\begin{matrix} A[g]^\Lambda{}_\Sigma & B[g]^{\Lambda\Sigma}\cr C[g]_{\Lambda\Sigma} & D[g]_\Lambda{}^\Sigma \end{matrix}\right) \,\left(\begin{matrix} F^\Sigma\cr \mathpzc{G}_\Sigma \end{matrix}\right) \,\quad\text{(linear)}.
\end{cases}\label{dual}\nonumber\\ \end{align} The transformations (\ref{dual}) would clearly be a symmetry of the scalar action and of the Maxwell equations ($d\mathbb{F}=0$) if $F^\Lambda$ and $\mathpzc{G}_\Lambda$ were independent, since the latter are invariant under any linear transformation on $\mathbb{F}^M$. The definition of $\mathpzc{G}_\Lambda$ in (\ref{GF}) as a function of $F^\Lambda,\,{}^*F^\Lambda$ and the scalar fields, which is equivalently expressed by the twisted self-duality condition (\ref{FCMF}), however poses constraints on the $2n_v\times 2n_v$ matrix $\mathscr{R}_v[g]=(\mathscr{R}_v[g]^M{}_N)$. In order for (\ref{dual}) to be an invariance of the vector equations of motion (\ref{Max}) and (\ref{FCMF}), the following conditions have to be met: \begin{itemize} \item[i)]{for each $g\in G$ (more precisely in the identity sector of $G$), the matrix $\mathscr{R}_v[g]$ should be \emph{symplectic}, namely \begin{equation} \mathscr{R}_v[g]^{T}\mathbb{C}\,\mathscr{R}_v[g]=\mathbb{C}\,;\label{SSym} \end{equation} } \item[ii)]{the symplectic, scalar dependent, matrix $\mathcal{M}(\phi)$ should transform as follows: \begin{equation} \mathcal{M}(g\star \phi)=\mathscr{R}_v[g]^{-T}\mathcal{M}(\phi)\,\mathscr{R}_v[g]^{-1}\,,\label{traM} \end{equation} where we have used the short-hand notation $\mathscr{R}_v[g]^{-T}\equiv (\mathscr{R}_v[g]^{-1})^T$. } \end{itemize} The reader can indeed verify that conditions i) and ii) are sufficient to guarantee invariance of (\ref{FCMF}) under (\ref{dual}). The symplectic transformation $\mathscr{R}_v[g]$, associated with each element $g$ of $G$, mixes electric and magnetic field strengths, therefore acting as a generalized electric--magnetic duality, and defines a \emph{symplectic representation} $\mathscr{R}_v$ of $G$: \begin{equation} \forall g\in G\,\,\,\stackrel{\mathscr{R}_v}{\longrightarrow}\,\,\,\,\,\mathscr{R}_v[g]\in {\rm Sp}(2n_v,\,\mathbb{R})\,. \end{equation} The field strengths and their magnetic duals therefore transform, under the duality action (\ref{dual}) of $G$, in a $2n_v$-dimensional symplectic representation.\par We denote by $\mathscr{R}_{v*}=\mathscr{R}_v^{-T}$ the representation dual to $\mathscr{R}_v$, acting on covariant symplectic vectors, so that, for any ${\bf g}\in G$: \begin{align} \mathscr{R}_{v*}[{\bf g}]&=(\mathscr{R}_{v*}[{\bf g}]_M{}^N)=\mathscr{R}_v[{\bf g}]^{-T}=-\mathbb{C}\mathscr{R}_v[{\bf g}]\mathbb{C}\,\,\,\Rightarrow \nonumber\\&\Rightarrow\,\,\, \mathscr{R}_{v*}[{\bf g}]_M{}^N=\mathbb{C}_{MP}\,\mathscr{R}_v[{\bf g}]^P{}_Q\,\mathbb{C}^{NQ}\,,\end{align} where we have used the property that $\mathscr{R}_v$ is a symplectic representation% \footnote{ the symplectic indices {\small $M,\,N,\dots$} are raised (and lowered) with the symplectic matrix $\mathbb{C}^{MN}$ ($\mathbb{C}_{MN}$) using north-west south-east conventions: $X^{M}=\mathbb{C}^{MN}\,X_{N}$ (and $X_M=\mathbb{C}_{NM}\,X^{N}$) }.\par From (\ref{SSym}) and (\ref{traM}), it is straightforward to verify the manifest $G$-invariance of the scalar field equations and the Einstein equations written in the forms (\ref{scaleqs2}) and (\ref{EEQ2}).\par Conditions i) and ii) are verified in extended supergravities as a consequence of supersymmetry.
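Spelling out the verification mentioned above: using (\ref{SSym}) in the equivalent form $\mathscr{R}_v[g]\,\mathbb{C}=\mathbb{C}\,\mathscr{R}_v[g]^{-T}$, together with (\ref{traM}), one finds for the transformed field strengths $\mathbb{F}'\equiv \mathscr{R}_v[g]\,\mathbb{F}$: \begin{equation} {}^*\mathbb{F}'=\mathscr{R}_v[g]\,{}^*\mathbb{F}=-\mathscr{R}_v[g]\,\mathbb{C}\,\mathcal{M}(\phi)\,\mathbb{F}=-\mathbb{C}\,\mathscr{R}_v[g]^{-T}\,\mathcal{M}(\phi)\,\mathscr{R}_v[g]^{-1}\,\mathbb{F}'=-\mathbb{C}\,\mathcal{M}(g\star \phi)\,\mathbb{F}'\,, \end{equation} so that the twisted self-duality condition (\ref{FCMF}) indeed retains its form in the transformed fields.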
Indeed, in these theories supersymmetry is powerful enough to connect certain scalar fields to vector fields and, as a consequence of this, symmetry transformations on the former imply transformations on the latter (more precisely, transformations on the vector field strengths $F^\Lambda$ and their duals $\mathpzc{G}_\Lambda$). The existence of a symplectic representation $\mathscr{R}_v$ of $G$, together with the definition of the matrix $\mathcal{M}$ and its transformation property (\ref{traM}), are built into the mathematical structure of the scalar manifold. More precisely, they follow from the definition on $\mathscr{M}_{\rm scal}$ of a \emph{flat symplectic structure}. Supersymmetry totally fixes $\mathcal{M}(\phi)$, and thus the coupling of the scalar fields to the vectors, aside from a freedom in the choice of the basis of the symplectic representation (\emph{symplectic frame}), which amounts to a change in the definition of $\mathcal{M}(\phi)$ by a constant symplectic transformation $E$: \begin{equation} \mathcal{M}(\phi)\rightarrow \mathcal{M}'(\phi)=E\mathcal{M}(\phi)E^T\,.\label{MEtra} \end{equation} Clearly, if $E\in \mathscr{R}_{v*}[G]\subset {\rm Sp}(2n_v,\mathbb{R})$, its effect on $\mathcal{M}(\phi)$ can be offset by a redefinition of the scalar fields, by virtue of eq.\ (\ref{traM}). On the other hand, if $E$ were a block-diagonal matrix, namely an element of ${\rm GL}(n_v,\mathbb{R})\subset {\rm Sp}(2n_v,\mathbb{R})$, it could be reabsorbed in a local redefinition of the field strengths. Inequivalent symplectic frames are then connected by symplectic matrices $E$ defined modulo redefinitions of the scalar and vector fields, namely by matrices in the coset \cite{deWit:2002vt}: \begin{equation} E\,\in \,{\rm GL}(n_v,\mathbb{R})\backslash {\rm Sp}(2n_v,\mathbb{R})/ \mathscr{R}_{v*}[G]\,,\label{generalE} \end{equation} where the quotient is defined with respect to the left-action of ${\rm GL}(n_v,\mathbb{R})$ (local vector redefinitions) and to the right-action of $ \mathscr{R}_{v*}[G]$ (isometry action on the scalar fields).\par A change in the symplectic frame amounts to choosing a different embedding $\mathscr{R}_v$ of $G$ inside ${\rm Sp}(2n_v,\,\mathbb{R})$, which is not unique. This affects the form of the action, in particular the coupling of the scalar fields to the vectors. However, at the ungauged level, it only amounts to a redefinition of the vector field strengths and their duals, which has no physical implication. In the presence of a gauging, namely if vectors are minimally coupled to the other fields, the symplectic frame becomes physically relevant and may lead to different vacuum structures of the scalar potential.\par We emphasize here that the existence of this symplectic structure on the scalar manifold is a general feature of all extended supergravities, including those $\mathcal{N}=2$ models in which the scalar manifold is not even homogeneous (i.e.\ the isometry group, if it exists, does not act transitively on the manifold itself). In the $\mathcal{N}=2$ case, only the scalar fields belonging to the vector multiplets are non-minimally coupled to the vector fields, namely enter the matrices $\mathcal{I}(\phi),\,\mathcal{R}(\phi)$, and they span a \emph{special K\"ahler} manifold.
On this manifold a flat symplectic bundle is defined% \footnote{ a special K\"ahler manifold is in general characterized by the product of a ${\rm U}(1)$-bundle, associated with its K\"ahler structure (with respect to which the manifold is Hodge K\"ahler), and a flat symplectic bundle. See for instance \cite{Andrianopoli:1996cm} for an in-depth account of this issue }, which fixes the scalar dependence of the matrices $\mathcal{I}(\phi),\,\mathcal{R}(\phi)$, aside from an initial choice of the symplectic frame, and the matrix $ \mathcal{M}(\phi)$ defined in (\ref{M}) satisfies the property (\ref{traM}).\par If the scalar manifold is homogeneous, we can consider at any point the coset representative $\textsl{L}(\phi)\in G$ in the symplectic, $2n_v$-dimensional representation $\mathscr{R}_v $: \begin{equation} \textsl{L}(\phi)\,\,\,\stackrel{\mathscr{R}_v}{\longrightarrow}\,\,\,\,\,\mathscr{R}_v[\textsl{L}(\phi)]\in {\rm Sp}(2n_v,\,\mathbb{R})\,. \end{equation} In general the representation $\mathscr{R}_v[H]$ of the isotropy group $H$ may not be orthogonal, that is $\mathscr{R}_v[H]\nsubseteq {\rm SO}(2n_v)$. In this case we can always change the basis of the representation% \footnote{ we label the new basis by underlined indices } by means of a matrix $\mathcal{S}$% \begin{equation} \mathcal{S}=(\mathcal{S}^N{}_{\underline{M}}) \,\in {\rm Sp}(2n_v,\,\mathbb{R})/{\rm U}(n_v) \end{equation} such that, in the rotated representation $\underline{\mathscr{R}}_v\equiv \mathcal{S}^{-1}\mathscr{R}_v\,\mathcal{S}$: \begin{equation} \underline{\mathscr{R}}_v[H]\equiv \mathcal{S}^{-1}\mathscr{R}_v[H]\,\mathcal{S}\subset {\rm SO}(2n_v) \quad\Leftrightarrow\quad \underline{\mathscr{R}}_v[h]^T\underline{\mathscr{R}}_v[h]=\Id\;,\quad \forall h\in H\,.\label{hort} \end{equation} For any point $\phi$ on the scalar manifold, we now define the \emph{hybrid coset-representative matrix} $\mathbb{L}(\phi)=(\mathbb{L}(\phi)^M{}_{\underline{N}})$ as follows: \begin{equation} \mathbb{L}(\phi)\equiv \mathscr{R}_v[\textsl{L}(\phi)]\mathcal{S} \quad\Leftrightarrow\quad \mathbb{L}(\phi)^M{}_{\underline{N}}\equiv \mathscr{R}_v[\textsl{L}(\phi)]^M{}_N\mathcal{S}^N{}_{\underline{N}}\,.\label{hybrid} \end{equation} We also define the matrix \begin{equation} \mathbb{L}(\phi)_M{}^{\underline{N}}~\equiv~ \mathbb{C}_{MP}\,\mathbb{C}^{\underline{NQ}}\;\mathbb{L}(\phi)^P{}_{\underline{Q}}\;. \end{equation} Notice that, as a consequence of the fact that the two indices of $\mathbb{L}$ refer to two different symplectic bases, $\mathbb{L}$ itself is not a matrix representation of the coset representative $\textsl{L}$. From (\ref{gLh}), the property of $\mathscr{R}_v$ of being a representation and the definition (\ref{hybrid}), we have: \begin{equation} \forall {\bf g}\in G \;:\quad \mathscr{R}_v[{\bf g}]\,\mathbb{L}(\phi)=\mathbb{L}({\bf g}\star\phi)\,\underline{\mathscr{R}}_v[h]\,,\label{gLh2} \end{equation} where $h\equiv h(\phi,{\bf g})$ is the compensating transformation. The hybrid index structure of $\mathbb{L}$ poses no consistency problem since, by (\ref{gLh2}), the coset representative is acted on to the left and to the right by two different groups: $G$ and $H$, respectively.
Therefore, in our notations, underlined symplectic indices {\footnotesize $\underline{M},\,\underline{N},\dots$} are acted on by $H$, while non-underlined ones by $G$.\par The matrix $ \mathcal{M}(\phi)$ is then expressed in terms of the coset representative as follows: \begin{equation} \mathcal{M}(\phi)_{MN}=\mathbb{C}_{MP}\mathbb{L}(\phi)^P{}_{\underline{L}}\mathbb{L}(\phi)^R{}_{\underline{L}}\,\mathbb{C}_{RN} \;\;\Leftrightarrow\;\; \mathcal{M}(\phi)=\mathbb{C}\mathbb{L}(\phi)\,\mathbb{L}(\phi)^T\,\mathbb{C}\,,\label{Mcos} \end{equation} where summation over the index {\footnotesize $\underline{L}$} is understood. The reader can easily verify that the definition of the matrix $\mathcal{M}(\phi)$ given above is indeed consistent, in that it is $H$-invariant, and thus only depends on the point $\phi$, and transforms according to (\ref{traM}): \begin{align} \forall g\in G \;:\quad \mathcal{M}(g\star\phi)&=\mathbb{C}\mathbb{L}(g\star\phi)\,\mathbb{L}(g\star\phi)^T\mathbb{C}=\nonumber\\&= \mathbb{C}\mathscr{R}_v[g]\,\mathbb{L}(\phi)(\underline{\mathscr{R}}_v[h]^{-1}\,\underline{\mathscr{R}}_v[h]^{-T})\mathbb{L}(\phi)^T\mathscr{R}_v[g]^T\mathbb{C}=\nonumber\\& =\mathscr{R}_v[g]^{-T}\mathbb{C}\mathbb{L}(\phi)\,\mathbb{L}(\phi)^T \mathbb{C}\mathscr{R}_v[g]^{-1}=\nonumber\\ &=\mathscr{R}_v[g]^{-T}\mathcal{M}(\phi)\mathscr{R}_v[g]^{-1}\,, \end{align} where we have used eq.\ (\ref{gLh2}), the orthogonality property (\ref{hort}) of $\underline{\mathscr{R}}_v[h]$ and the symplectic property of $\mathscr{R}_v[g]$. From the definition (\ref{Mcos}) of $\mathcal{M}$ in terms of the coset representative, it follows that for symmetric scalar manifolds the scalar Lagrangian (\ref{lagrscal}) can also be written in the equivalent form: \begin{equation} \mathscr{L}_{\text{scal}}=\frac{e}{2}\, \mathcalboondox{G}_{st}(\phi)\partial_\mu\phi^s\,\partial^\mu\phi^t =\frac{e}{8}\,k\,\mathrm{Tr}\big(\mathcal{M}^{-1}\partial_\mu\mathcal{M}\,\mathcal{M}^{-1}\partial^\mu\mathcal{M}\big)\,,\label{lagrscalM} \end{equation} where $k$ depends on the representation $\mathscr{R}_v$ of $G$. \par The transformation properties of the matrices $\mathcal{I}_{\Lambda\Sigma}$ and $\mathcal{R}_{\Lambda\Sigma}$ under $G$ can be inferred from (\ref{traM}) and can be conveniently described by defining the complex symmetric matrix \begin{equation} \mathcalboondox{N}_{\Lambda\Sigma}\equiv \mathcal{R}_{\Lambda\Sigma}+i\,\mathcal{I}_{\Lambda\Sigma}\,. \end{equation} Under the action of a generic element $g\in G$, \,$\mathcalboondox{N}$ transforms as follows: \begin{equation} \mathcalboondox{N}(g\star\phi)=(C[g]+D[g]\,\mathcalboondox{N}(\phi))(A[g]+B[g]\,\mathcalboondox{N}(\phi))^{-1}\,,\label{Ntra} \end{equation} where $A[g],\,B[g],\,C[g]\,,D[g]$ are the $n_v\times n_v$ blocks of the matrix $\mathscr{R}_v[g]$ defined in (\ref{dual}).\par \subparagraph{Parity.} We have specified above that only the elements of $G$ which belong to the identity sector, namely which are continuously connected to the identity, are associated with symplectic transformations. There may exist isometries $g\in G$ which do not belong to the identity sector and are associated with \emph{anti-symplectic} matrices ${\bf A}[g]$: \begin{equation} \mathcal{M}(g\star \phi)={\bf A}[g]^{-T}\,\mathcal{M}(\phi)\,{\bf A}[g]^{-1}\;; \quad\; {\bf A}[g]^T\mathbb{C}{\bf A}[g]=-\mathbb{C}\,.
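\end{equation} Note, as a quick consistency check, that the product of two anti-symplectic matrices is symplectic: \begin{equation} ({\bf A}_1\,{\bf A}_2)^T\,\mathbb{C}\,({\bf A}_1\,{\bf A}_2)={\bf A}_2^T\,({\bf A}_1^T\,\mathbb{C}\,{\bf A}_1)\,{\bf A}_2=-{\bf A}_2^T\,\mathbb{C}\,{\bf A}_2=\mathbb{C}\,. \end{equation}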
Anti-symplectic matrices therefore do not close a group, but can be expressed as the product of a symplectic matrix ${\bf S}$ times a fixed anti-symplectic one ${\bf P}$, that is ${\bf A}={\bf S}\,{\bf P}$. In a suitable symplectic frame, the matrix ${\bf P}$ can be written in the following form: \begin{equation} {\bf P}=\left(\begin{matrix} \Id & \mathbf{0} \cr \mathbf{0} & -\Id \end{matrix}\right)\,.\label{Pmatrix} \end{equation} Due to their being implemented by anti-symplectic duality transformations (\ref{dual}), these isometries leave eq.\ (\ref{FCMF}) invariant up to a sign, which can be offset by a \emph{parity transformation}, since under parity one has $\,*\,\rightarrow\,-*\,$\,.\, Indeed one can show that these transformations are a symmetry of the theory provided they are combined with parity. Notice that this poses no problem with the generalized theta-term since, as parity reverses the sign of $\epsilon^{\mu\nu\rho\sigma}F^\Lambda_{\mu\nu} F^\Sigma_{\rho\sigma}$, under ${\bf P}$ we have: \begin{equation} \mathcal{I}_{\Lambda\Sigma}\rightarrow\mathcal{I}_{\Lambda\Sigma}\;;\qquad \mathcal{R}_{\Lambda\Sigma}\rightarrow -\mathcal{R}_{\Lambda\Sigma}\,, \end{equation} see equation (\ref{Ntra}), so that the corresponding term $\epsilon^{\mu\nu\rho\sigma}F^\Lambda_{\mu\nu} F^\Sigma_{\rho\sigma}\mathcal{R}_{\Lambda\Sigma}$ in the Lagrangian is invariant. The global symmetry group of the theory is therefore described by a group \begin{equation} G=G_0\times \mathbb{Z}_2=\{G_0,\,G_0\cdot p\}\,, \end{equation} where $G_0$ is the \emph{proper duality} group defined by the identity sector of $G$ and $p$ is the element of $G$ which corresponds, in a suitable symplectic frame, to the anti-symplectic matrix ${\bf P}$\,:\; ${\bf P}={\bf A}[p]$. \subparagraph{Example.} Let us discuss the simple example of the lower-half complex plane \begin{equation} G/H={\rm SL}(2,\mathbb{R})/{\rm SO}(2)\,. \end{equation} This manifold is parametrized by a complex coordinate $z$, with ${\rm Im}(z)<0$. As symplectic representation of $G={\rm SL}(2,\mathbb{R})$ we can choose the fundamental representation, with the following basis of generators of $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{R})$: \begin{align} \mathfrak{sl}(2,\mathbb{R})=\{\sigma^1,\,i\,\sigma^2,\sigma^3\}=\left\{\left(\begin{matrix}0 & 1 \cr 1 & 0\end{matrix}\right),\,\left(\begin{matrix}0 & 1 \cr -1 & 0\end{matrix}\right),\,\left(\begin{matrix}1 & 0 \cr 0 & -1\end{matrix}\right)\right\}\,. \end{align} The subalgebra $\mathscr{S}$ of upper-triangular generators, \begin{align} \mathscr{S}=\{\sigma^3,\,\sigma^+\}\,\,,\,\,\,\,\sigma^+\equiv \left(\begin{matrix}0 & 1 \cr 0 & 0\end{matrix}\right)\,, \end{align} defines the solvable parametrization $\phi^s=(\varphi,\,\chi)$, in which the coset representative $\mathbb{L}$ has the following form: \begin{align} \mathbb{L}(\varphi,\,\chi)\equiv e^{\chi \sigma^+}\,e^{\frac{\varphi}{2}\sigma^3}=\left( \begin{array}{ll} 1 & \chi \\ 0 & 1 \end{array} \right)\left( \begin{array}{ll} e^{\varphi /2} & 0 \\ 0 & e^{-\varphi /2} \end{array} \right)\,\in\,\, e^{\mathscr{S}}\,. \end{align} The relation between the solvable coordinates and $z$ is \begin{equation} z\=z_1+i\,z_2\=\chi-i\,e^{\varphi}\,.
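\end{equation} As an intermediate step in the computation of the matrix $\mathcal{M}$ below, it is useful to record the product \begin{equation} \mathbb{L}\,\mathbb{L}^T=\left( \begin{array}{cc} e^{\varphi }+\chi^2\, e^{-\varphi } & \chi\, e^{-\varphi } \\ \chi\, e^{-\varphi } & e^{-\varphi } \end{array} \right)=-\frac{1}{z_2}\left( \begin{array}{cc} |z|^2 & z_1 \\ z_1 & 1 \end{array} \right)\,, \end{equation} from which the expression (\ref{Mcos}) for $\mathcal{M}=\mathbb{C}\,\mathbb{L}\mathbb{L}^T\,\mathbb{C}$ given below follows by a short computation.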
The metric reads: \begin{equation} ds^2=\frac{{d\varphi }^2}{2}+\frac{1}{2}{d\chi }^2 e^{-2 \varphi }=\frac{1}{2z_2^2}\,dz d\bar{z}\;; \end{equation} and the matrix $\mathcal{M}(\phi)_{MN}$ reads: \begin{align} \mathcal{M}(z,\,\bar{z})_{MN}=\mathbb{C}_{MP}\,\mathbb{L}(\phi)^P{}_{\underline{L}}\,\mathbb{L}(\phi)^R{}_{\underline{L}}\, \mathbb{C}_{RN}= \frac{1}{z_2}\left( \begin{array}{cc} 1 & -{z_1} \\ -{z_1} & |z|^2 \end{array} \right)\,. \end{align} The generic isometry which is continuously connected to the identity is a holomorphic transformation of the form \begin{equation} z\rightarrow z'=\frac{a z +b}{ c z +d}\,,\qquad ad-bc=1\,, \end{equation} corresponding to the ${\rm SL}(2,\mathbb{R})$ transformation ${\bf S}=\left(\begin{matrix}a & b\cr c & d\end{matrix}\right)$ with ${\rm det}({\bf S})=1$. The reader can easily verify that: \begin{equation} \mathcal{M}(z',\,\bar{z}')={\bf S}^{-T}\mathcal{M}(z,\,\bar{z}){\bf S}^{-1}\,. \end{equation} We also have the following isometry: \begin{equation} z\rightarrow -\bar{z}\,,\label{pisom} \end{equation} which is not in the identity sector of the isometry group, and corresponds to the anti-symplectic transformation ${\bf P}={\rm diag}(1,-1)$ in that: \begin{equation} \mathcal{M}(-\bar{z},\,-z)={\bf P}^{-T}\mathcal{M}(z,\,\bar{z}){\bf P}^{-1}\,. \end{equation} This corresponds to a parity transformation whose effect is to change the sign of the pseudo-scalar $\chi$, while leaving the scalar $\varphi$ inert: \begin{equation} \mbox{parity}:\;\;\chi\rightarrow -\chi\,\,,\,\,\,\,\varphi\rightarrow \varphi\,. \end{equation} Notice that the correspondence between the linear transformation ${\bf P}$ and the isometry (\ref{pisom}) exists since ${\bf P}$ is an \emph{outer-automorphism} of the isometry algebra $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{R})$, namely: \begin{equation} {\bf P}^{-1}\mathfrak{sl}(2,\mathbb{R}){\bf P}\=\mathfrak{sl}(2,\mathbb{R})\,, \end{equation} while ${\bf P}$ is \emph{not} in ${\rm SL}(2,\mathbb{R})$ and the above transformation cannot be offset by any conjugation by ${\rm SL}(2,\mathbb{R})$ elements. Analogous outer-automorphisms implementing parity can be found in other extended supergravities, including the maximal one, in which $G=\Exc\times \mathbb{Z}_2$\, \cite{Ferrara:2013zga}. \subparagraph{Solitonic solutions, electric-magnetic charges and duality.} Ungauged supergravities only contain fields which are neutral with respect to the ${\rm U}(1)^{n_v}$ gauge-symmetry of the vector fields. These theories however feature \emph{solitonic solutions}, namely configurations of neutral fields which carry ${\rm U}(1)^{n_v}$ electric-magnetic charges. These solutions are typically black holes in four dimensions, or black branes in higher dimensions, and have been extensively studied in the literature. On a charged dyonic solution of this kind, we define the electric and magnetic charges as the integrals% \footnote{ the electric and magnetic charges $(e,m)$ are expressed in the rationalized-Heaviside-Lorentz (RHL) system of units }: \begin{align} e_\Lambda\equiv\int_{S^2} \mathpzc{G}_{\Lambda}=\frac{1}{2}\,\int_{S^2} \mathpzc{G}_{\Lambda\,\mu\nu}\,dx^\mu\wedge dx^\nu \,\,,\nonumber\\ m^\Lambda\equiv\int_{S^2} F^{\Lambda}=\frac{1}{2}\,\int_{S^2} F^{\Lambda}{}_{\mu\nu}\,dx^\mu\wedge dx^\nu \,, \end{align} where $S^2$ is a spatial two-sphere. They define a symplectic vector $\Gamma^M$: \begin{align} \Gamma=(\Gamma^M)=\left(\begin{matrix}m^\Lambda\cr e_\Lambda\end{matrix}\right)=\int_{S^2} \mathbb{F}^M\,.
\end{align} These are the \emph{quantized charges}, namely they satisfy the Dirac-Schwinger-Zwanziger quantization condition for dyonic particles \cite{Dirac:1931kp,Schwinger:1966nj,Zwanziger:1968rs}: \begin{equation} \Gamma_2^T\mathbb{C}\Gamma_1=m_2^{\Lambda}\,e_{1\Lambda}-m_1^{\Lambda}\,e_{2\Lambda}= 2\pi\,\hbar\,c\,n\,\,\,;\,\,\,\,n\in \mathbb{Z}\,.\label{DZS} \end{equation} At the quantum level, the dyonic charges therefore belong to a symplectic lattice and this breaks the duality group $G$ to a suitable discrete subgroup $G(\mathbb{Z})$ which leaves this symplectic lattice invariant: \begin{equation} G(\mathbb{Z})\equiv G\cap {\rm Sp}(2n_v,\mathbb{Z})\,. \end{equation} This discrete symmetry group surviving quantum corrections (or a suitable extension thereof) was conjectured in \cite{Hull:1994ys} to encode all known string/M-theory dualities. \subsection{Symplectic frames and Lagrangians}\label{sframes} As pointed out earlier, the duality action $\mathscr{R}_v[G]$ of $G$ depends on which elements, in the basis of the ${\bf 2\,n_v}$ representation, are chosen to be the $n_v$ electric vector fields (appearing in the Lagrangian) and which their magnetic duals, namely on the choice of the \emph{symplectic frame}, which determines the embedding of the group $G$ inside $\mathrm{Sp}(2n_v,\,\mathbb{R})$. Different choices of the symplectic frame may yield inequivalent Lagrangians (that is, Lagrangians that are not related by local field redefinitions) with different global symmetries. Indeed, the global symmetry group of the Lagrangian% \footnote{ here we only consider \emph{local} transformations on the fields } is defined as the subgroup $G_{el}\subset G$ whose duality action is linear on the electric field strengths \begin{equation} g\in G_{el}\;: \quad \mathscr{R}_v[g]= \left(\begin{matrix} A^\Lambda{}_\Sigma & \mathbf{0} \cr C_{\Lambda\Sigma} & D_\Lambda{}^\Sigma \end{matrix}\right)\,,\label{ge} \end{equation} where $D=A^{-T}$ by the symplectic condition, so that \begin{align} g\in G_{el}\;:\quad &F^\Lambda \rightarrow\,F^{\prime\Lambda}=A^\Lambda{}_\Sigma\,F^\Sigma\;,\nonumber\\ &\mathpzc{G}_\Lambda \rightarrow\,\mathpzc{G}'_{\Lambda}=C_{\Lambda\Sigma}\,F^\Sigma+ D_\Lambda{}^\Sigma\,\mathpzc{G}_\Sigma \,.\label{Gel} \end{align} Indeed, as the reader can verify using eq.\ (\ref{Ntra}), under the above transformation the matrices $\mathcal{I},\,\mathcal{R}$ transform as follows: \begin{equation} \mathcal{I}_{\Lambda\Sigma}\rightarrow D_\Lambda{}^\Pi D_\Sigma{}^\Delta\,\mathcal{I}_{\Pi\Delta} \,;\quad\; \mathcal{R}_{\Lambda\Sigma}\rightarrow D_\Lambda{}^\Pi D_\Sigma{}^\Delta\,\mathcal{R}_{\Pi\Delta}+C_{\Lambda\Pi}\,D_{\Sigma}{}^\Pi\,, \end{equation} and the consequent variation of the Lagrangian reads \begin{equation} \Delta\mathscr{L}_{\text{bos}}= \frac{1}{8}\,C_{\Lambda\Pi}\,A^\Pi{}_{\Sigma}\epsilon^{\mu\nu\rho\sigma}\,F^\Lambda_{\mu\nu}F^\Sigma_{\rho\sigma}\,,\label{deltaLC} \end{equation} which is a \emph{total derivative}, since $C_{\Lambda\Pi}\,A^\Pi{}_{\Sigma}$ is constant: for abelian field strengths, $\epsilon^{\mu\nu\rho\sigma}\,F^\Lambda_{\mu\nu}F^\Sigma_{\rho\sigma}=4\,\partial_\mu\big(\epsilon^{\mu\nu\rho\sigma}\,A^\Lambda_\nu\,\partial_\rho A^\Sigma_\sigma\big)$. These transformations are called \emph{Peccei-Quinn transformations} and follow from shifts in certain axionic scalar fields. They are a symmetry of the classical action, while invariance of the path-integral requires the variation (\ref{deltaLC}), integrated over space-time, to be an integer multiple of $2\pi \hbar$. This restricts these symmetries to a discrete subgroup $G(\mathbb{Z})$ of $G$ whose duality action is implemented by integer-valued matrices ${\mathscr{R}_v}[g]$.
Such restriction of $G$ to $G(\mathbb{Z})$ in the quantum theory was discussed earlier as a consequence of the Dirac-Schwinger-Zwanziger quantization condition for dyonic particles (\ref{DZS}).\par From (\ref{Gel}) we see that, while the vector field strengths $F^\Lambda_{\mu\nu}$ and their duals $\mathpzc{G}_{\Lambda\,\mu\nu}$ transform together under $G$ in the ($2n_v$--dimensional) symplectic representation $\mathscr{R}_v$, the vector field strengths alone transform linearly under the action of $G_{el}$ in a smaller representation ${\bf n_v}$, defined by the $A$-block in (\ref{ge}).\par\medskip Different symplectic frames of the same ungauged theory may originate from different compactifications. A distinction here is in order. In $\mathcal{N}\geq 3$ theories, scalar fields always enter the same multiplets as the vector fields. Supersymmetry then implies their non-minimal coupling to the latter and that the scalar manifold is endowed with a symplectic structure, associating with each isometry a constant symplectic matrix. In $\mathcal{N}=2$ theories, scalar fields may sit in vector multiplets or hypermultiplets. The former span a \emph{special K\"ahler manifold}, the latter a \emph{quaternionic K\"ahler} one, so that the scalar manifold is always factorized in the product of the two: \begin{equation} \mathscr{M}_{\rm scal}^{\scalebox{0.5}{$\,(\mathcal{N}=2)$}}\=\Ms_{\textsc{sk}}\times \Ms_{\textsc{qk}}\,.\label{SKQK} \end{equation} The scalar fields in the hypermultiplets are not connected to vector fields through supersymmetry and consequently they do not enter the matrices $\mathcal{I}(\phi)$ and $\mathcal{R}(\phi)$. As a consequence of this, the isometries of the quaternionic K\"ahler manifolds spanned by these scalars are associated with trivial duality transformations \begin{equation} g\,\in\,\text{isom. of}\;\Ms_{\textsc{qk}} \;\;\;\,\,\,\Rightarrow\quad \mathscr{R}_v[g]=\Id\;,\label{qisom} \end{equation} while only $\Ms_{\textsc{sk}}$ features a flat symplectic structure which defines the embedding of its isometry group inside ${\rm Sp}(2n_v,\mathbb{R})$ and the couplings of the vector-multiplet scalars to the vector fields through the matrix $\mathcal{M}(\phi)$. It is important to remark that such a structure on a special K\"ahler manifold exists even if the manifold itself is not homogeneous. This means that one can still define the symplectic matrix $\mathbb{L}(\phi)$ and, in terms of the components $\mathcal{I}_{\Lambda\Sigma}$ and $\mathcal{R}_{\Lambda\Sigma}$, also the matrix $\mathcal{M}(\phi)$ as in (\ref{Mcos}), although for non-homogeneous manifolds $\mathbb{L}(\phi)$ no longer has the interpretation of a coset representative.\par It is convenient for later purposes to rewrite the transformation properties of the bosonic fields under the group $G$, discussed in this section, in the following infinitesimal form: \begin{align} G\;:\quad \begin{cases} \delta\,\mathbb{L} = \Lambda^\alpha\,t_\alpha\,\mathbb{L}\;,\nonumber\\ \delta \mathbb{F}^{{M}}_{\mu\nu} = -\Lambda^{\alpha}\,(t_{\alpha})_{{{N}}}{}^{{{M}}}\;\mathbb{F}_{\mu\nu}^{{{N}}}\,, \end{cases} \end{align} in terms of the infinitesimal generators $t_\alpha$ of $G$ introduced earlier, satisfying the relation (\ref{talg}). The matrices $(t_{\alpha})_{M}{}^{N}$ define the infinitesimal duality action of $G$ and are symplectic generators \begin{equation} (t_{\alpha})_{M}{}^{N}\,\mathbb{C}_{NP} = (t_{\alpha})_{P}{}^{N}\,\mathbb{C}_{NM}\, \qquad\; {\scriptstyle M},\,{\scriptstyle N},\dotsc=1,\dotsc,\,2n_v\;.
\end{equation} This is equivalently stated as the property of the tensor $t_{\alpha\,MN}\equiv (t_{\alpha})_{M}{}^{P}\,\mathbb{C}_{PN}$ of being symmetric in {\footnotesize $M\,N$}: \begin{equation} (t_\alpha)_{MN}=(t_\alpha)_{NM}\,. \end{equation} \subsection{The fermionic sector} \label{fsector} Fermions in supergravity transform covariantly with respect to the isotropy group $H$ of the scalar manifold, which has the general form (\ref{Hgroup}), while they do not transform under $G$, as opposed to the bosonic fields. Bosons and fermions therefore have definite transformation properties with respect to different groups of internal symmetry. The matrix $\mathbb{L}$, defining the coset representative for homogeneous scalar manifolds, transforms under the action of $G$ to the left and of $H$ to the right, according to (\ref{gLh}) \begin{equation} G\,\,\rightarrow \,\,\,\mathbb{L}\,\,\,\leftarrow\,\, H\,, \end{equation} and thus has the right index structure to ``mediate'' in the Lagrangian between bosons and fermions. This means that we can construct $G$-invariant terms by contracting $\mathbb{L}$ to the left by bosons (scalars, vectors and their derivatives), and to the right by fermions \begin{equation} (\mbox{Bosons}) \star \mathbb{L}(\phi)\star (\mbox{Fermions})\,,\label{BLF} \end{equation} where the two $\star$ symbols denote some contraction of indices: $G$-invariant to the left and $H$-invariant to the right. The ``Boson'' part of (\ref{BLF}) may also contain $\mathbb{L}$ and its derivatives. These are the kind of terms occurring in the field equations. If, under a transformation $g\in G$, the bosons transform symbolically as \begin{equation} \mbox{Bosons}\;\rightarrow\;\mbox{Bosons}'=\mbox{Bosons}\star g^{-1}\,, \end{equation} and the \emph{fermions are made to transform under the compensating transformation} $h(\phi,g)$ in (\ref{gLh}): \begin{equation} \mbox{Fermions}\;\rightarrow\;\mbox{Fermions}'=h(\phi,g)\star \mbox{Fermions}\,,\label{Hfermi} \end{equation} then, using (\ref{gLh}), we see that (\ref{BLF}) remains invariant: \begin{equation} (\mbox{Bosons})' \star \mathbb{L}(g\star\phi)\star (\mbox{Fermions}')=(\mbox{Bosons}) \star \mathbb{L}(\phi)\star (\mbox{Fermions})\,. \end{equation} The Lagrangian is manifestly invariant under local $H$-transformations since the covariant derivatives on the fermion fields contain the $H$-connection% \footnote{ we define $\mathpzc{w}_\mu\equiv \mathpzc{w}_s\,\partial_\mu\phi^s$ } $\mathpzc{w}_\mu$: \begin{equation} \mathscr{D}_\mu\xi=\nabla_\mu\xi+\mathpzc{w}_\mu\star \xi\,,\label{Dxi} \end{equation} where, as usual, the $\star$ symbol denotes the action of the $\mathfrak{H}$-valued connection $\mathpzc{w}_\mu$ on $\xi$ in the corresponding $H$-representation. The reader can verify that (\ref{Dxi}) is indeed covariant under local $H$-transformations (\ref{Hfermi}), provided $\mathpzc{w}$ is transformed according to (\ref{omtra}). As opposed to the gauge groups we are going to introduce by the gauging procedure, which involve minimal couplings to the vector fields of the theory, the local $H$-symmetry group of the ungauged theory is not gauged by the vector fields, but by a \emph{composite connection} $\mathpzc{w}_\mu$, which is a function of the scalar fields and their derivatives.
The minimal coupling $\mathpzc{w}_\mu\star \xi$ is an example of the boson-fermion interaction term (\ref{BLF}).\par It is useful to write the coupling (\ref{BLF}) in the following form: \begin{equation} {\bf f}(\phi,\mbox{Bosons}) \star (\mbox{Fermions})\,,\label{BLF2} \end{equation} where we have introduced the $H$-covariant \emph{composite field}: \begin{equation} {\bf f}(\phi,\mbox{Bosons})\equiv (\mbox{Bosons}) \star \mathbb{L}(\phi)\,, \end{equation} obtained by \emph{dressing} the bosonic fields and their derivatives with the coset representative, so as to obtain an $H$-covariant quantity with the correct $H$-index structure to contract with fermionic currents. Indeed, under a $G$-transformation: \begin{equation} {\bf f}(g\star\phi,\mbox{Bosons}')= {\bf f}(\phi,\mbox{Bosons})\star h(\phi,g)^{-1}\,. \end{equation} The manifest $H$-invariance of the supergravity theory requires the supersymmetry transformation properties of the fermionic fields to be $H$-covariant. Indeed, such transformation rules can be schematically described in rigid supersymmetric theories (i.e.\ theories which are invariant only under global supersymmetry) as follows% \footnote{ this is a schematic representation in which we have suppressed the Lorentz indices and gamma-matrices }: \begin{equation} \delta\mbox{Fermion}=\sum_{{\tiny \mbox{Bosons}}}\partial\mbox{Boson}\cdot\epsilon\,, \end{equation} while in supergravity theories they have the following general $H$-covariant form% \footnote{ the gravitino field has an additional term $\mathscr{D}\epsilon$ which is its variation as the gauge field of local supersymmetry } \begin{equation} \delta\mbox{Fermion}=\sum_{{\tiny \mbox{Bosons}}}{\bf f}(\phi,\mbox{Bosons})\cdot\epsilon\,, \end{equation} where the space-time derivatives of the bosonic fields are dressed with the scalars in the definition of ${\bf f}(\phi,\mbox{Bosons})$. Examples of composite fields ${\bf f}(\phi,\mbox{Bosons})$ are the vielbein of the scalar manifold (pulled back on space-time) $\mathcal{P}_\mu\equiv \mathcal{P}_s\,\partial_\mu\phi^s$, the $H$-connection $\mathpzc{w}_\mu$ in (\ref{Dxi}), the dressed vector field-strengths \begin{equation} {\bf F}(\phi,\partial A)^{\;\underline{M}}_{\mu\nu}\equiv -(\mathbb{L}(\phi)^{-1})^{\underline{M}}{}_N\,\mathbb{F}^N_{\mu\nu}\,, \end{equation} or the \emph{$\mathbb{T}$-tensor}, to be introduced later, in which the bosonic field to be dressed by the coset representative is the \emph{embedding tensor} $\Theta$ defining the choice of the gauge algebra. \section{Gauging supergravities} \label{sec:3} We have reviewed the field content and the Lagrangian of ungauged supergravity, as well as the action of the global symmetry group $G$. Now we want to discuss how to construct a gauged theory from an ungauged one.\par In the following, we will employ a covariant formalism in which the possible gaugings are encoded into an object called the \emph{embedding tensor}, which can be characterized group-theoretically \cite{Cordaro:1998tx,Nicolai:2000sc,deWit:2002vt}. \subsection{The gauging procedure step-by-step}\label{gaugingsteps} As anticipated in the Introduction, the gauging procedure consists in promoting a suitable global symmetry subgroup $G_g\subset G_{el}$ of the Lagrangian to a local symmetry gauged by the vector fields of the theory.
This requirement gives us a preliminary condition \begin{equation} \mathrm{dim}(G_g) ~\le~ n_v\;.\label{preliminary} \end{equation} As explained in Sect.\ \ref{sframes}, \emph{different symplectic frames correspond to ungauged Lagrangians with different global symmetry groups $G_{el}$ and thus to different choices for the possible gauge groups.}\par The first condition for the global symmetry subgroup $G_g$ to become a viable gauge group is that there should exist a subset $\{A^{\hat{\Lambda}}\}$ of the vector fields% \footnote{ we describe by hatted indices those pertaining to the symplectic frame in which the Lagrangian is defined } which transform under the co-adjoint representation of the duality action of $G_g$. These fields will become the \emph{gauge vectors} associated with the \emph{generators} $X_{\hat\Lambda}$ of the subgroup $G_g$. \par We shall name \emph{electric frame} the symplectic frame defined by our ungauged Lagrangian and labeled by hatted indices.\par Note that, once the gauge group is chosen within $G_{el}$, its action on the various fields is fixed, since it is defined by the action of $G_g$ as a global symmetry group of the ungauged theory (duality action on the vector field strengths, non-linear action on the scalar fields and indirect action through $H$-compensators on the fermionic fields): fields are thus automatically associated with representations of $G_g$.\par After the initial choice of $G_g$ in $G_{el}$, the first part of the procedure is quite standard in the construction of non-abelian gauge theories: we introduce a gauge connection, a gauge curvature (i.e.\ non-abelian field strengths) and covariant derivatives. We shall also need to introduce an extra topological term for the gauging of the Peccei-Quinn transformations (\ref{deltaLC}). This will lead us to construct a gauged Lagrangian $\mathscr{L}_{\text{gauged}}^{(0)}$ with manifest local $G_g$-invariance. Consistency of the construction will imply constraints on the possible choices of $G_g$ inside $G$. The minimal couplings will however break supersymmetry.\par The second part of the gauging procedure consists in further deforming the Lagrangian $\mathscr{L}_{\text{gauged}}^{(0)}$ in order to restore the original supersymmetry of the ungauged theory while, at the same time, preserving local $G_g$-invariance. \smallskip \subparagraph{\textbf{Step 1.} Choice of the gauge algebra.} We start by introducing the gauge connection: \begin{equation} \Omega_{g}=\Omega_{g\,\mu}dx^\mu\;; \quad \Omega_{g\,\mu}\equiv g\,A^{\hat{\Lambda}}_\mu\,X_{\hat{\Lambda}}\,,\label{gconnection} \end{equation} $g$ being the coupling constant. The gauge-algebra relations can be written \begin{equation} \left[X_{\hat{\Lambda}},\,X_{\hat{\Sigma}}\right]\=f_{{\hat{\Lambda}}{\hat{\Sigma}}}{}^{\hat{\Gamma}}\,X_{\hat{\Gamma}}\,,\label{gaugealg} \end{equation} and are characterized by the structure constants $f_{{\hat{\Lambda}}{\hat{\Sigma}}}{}^{\hat{\Gamma}}$. This closure condition should be regarded as a constraint on the $X_{\hat{\Lambda}}$, since the structure constants are not generic but fixed in terms of the action of the gauge generators on the vector fields as global symmetry generators of the original ungauged theory. To understand this, let us recall that $G_g$ is a subgroup of $G_{el}$ and thus its electric-magnetic duality action, as a global symmetry group, will have the form (\ref{ge}).
The duality action of the infinitesimal generators $X_{\hat{\Lambda}}$ on the vector field strengths and their duals will then be represented by a symplectic matrix of the form (\ref{ge}) \begin{equation} \left(X_{\hat{\Lambda}}\right)^{\hat{M}}{}_{\hat{N}}\= \left(\begin{matrix} X_{\hat{\Lambda}}{}^{{\hat{\Gamma}}}{}_{{\hat{\Sigma}}} & \mathbf{0} \cr X_{{\hat{\Lambda}}\,{\hat{\Gamma}}{\hat{\Sigma}}} & X_{{\hat{\Lambda}}\,{\hat{\Gamma}}}{}^{\hat{\Delta}} \end{matrix}\right) \,,\label{xsymp} \end{equation} where $X_{\hat{\Lambda}}{}^{{\hat{\Gamma}}}{}_{{\hat{\Sigma}}}$ and $X_{{\hat{\Lambda}}\,{\hat{\Gamma}}}{}^{\hat{\Delta}}$ are the infinitesimal generators of the $A$ and $D$-blocks in (\ref{ge}) respectively, while $X_{{\hat{\Lambda}}\,\hat{\Gamma}\hat{\Sigma}}$ describes the infinitesimal $C$-block. It is worth emphasizing here that we do not identify the generator $X_{{\hat{\Lambda}}}$ with the symplectic matrix defining its electric-magnetic duality action. As pointed out in Sect.\ \ref{sframes}, there are isometries in $\mathcal{N}=2$ models which do not have a duality action, see eq.\ (\ref{qisom}), namely for which the matrix in (\ref{xsymp}) is null.\par The variation of the field strengths under an infinitesimal transformation $\xi^{{\hat{\Lambda}}}\,X_{{\hat{\Lambda}}}$, whose duality action is described by (\ref{xsymp}), is: \begin{equation} \delta \mathbb{F}^{\hat{M}}=\xi^{{\hat{\Lambda}}}\,(X_{{\hat{\Lambda}}}){}^{\hat{M}}{}_{\hat{N}}\,\mathbb{F}^{\hat{N}} \;\;\Rightarrow\;\; \begin{cases}\delta F^{\hat{\Lambda}}= \xi^{{\hat{\Gamma}}}X_{{\hat{\Gamma}}}{}^{\hat{\Lambda}}{}_{\hat{\Sigma}}\,F^{\hat{\Sigma}}\,,\cr \delta\mathpzc{G}_{\hat{\Lambda}}= \xi^{{\hat{\Gamma}}}X_{{\hat{\Gamma}}\,{\hat{\Lambda}}{\hat{\Sigma}}} F^{\hat{\Sigma}}+\xi^{{\hat{\Gamma}}}X_{{\hat{\Gamma}}{\hat{\Lambda}}}{}^{\hat{\Sigma}}\,\mathpzc{G}_{\hat{\Sigma}}\,. \end{cases} \label{deltas} \end{equation} The symplectic condition imposed on the matrices $X_{\hat{\Lambda}}$ implies the properties: \begin{align} X_{{\hat{\Lambda}} \hat{M}}{}^{\hat{P}}\,\mathbb{C}_{\hat{N} \hat{P}}=X_{{\hat{\Lambda}} \hat{N}}{}^{\hat{P}}\,\mathbb{C}_{\hat{M} \hat{P}} \;\quad\Leftrightarrow\quad\; \begin{cases} X_{\hat{\Lambda}}{}^{{\hat{\Sigma}}}{}_{{\hat{\Gamma}}}\=-X_{{\hat{\Lambda}}{\hat{\Gamma}}}{}^{\hat{\Sigma}}\,,\cr X_{{\hat{\Lambda}}\,{\hat{\Gamma}}{\hat{\Sigma}}}\=X_{{\hat{\Lambda}}\,{\hat{\Sigma}}{\hat{\Gamma}}}\,. \end{cases} \label{sympconde} \end{align} The condition that the $A^{\hat{\Lambda}}_\mu$ transform in the co-adjoint representation of the gauge group: \begin{equation} \delta F^{\hat{\Lambda}}= \xi^{{\hat{\Gamma}}}\,f_{{\hat{\Gamma}}{\hat{\Sigma}}}{}^{\hat{\Lambda}}F^{\hat{\Sigma}}\,, \end{equation} combined with the transformation properties (\ref{deltas}), leads us to identify the structure constants of the gauge group in (\ref{gaugealg}) with the diagonal blocks of the symplectic matrices $X_{\hat{\Lambda}}$: \begin{equation} f_{{\hat{\Gamma}}{\hat{\Sigma}}}{}^{\hat{\Lambda}}=-X_{{\hat{\Gamma}}{\hat{\Sigma}}}{}^{\hat{\Lambda}}\,,\label{idenfx} \end{equation} so that the closure condition reads \begin{equation} \left[X_{\hat{\Lambda}},\,X_{\hat{\Sigma}}\right]\=-X_{{\hat{\Lambda}}{\hat{\Sigma}}}{}^{\hat{\Gamma}}\,X_{\hat{\Gamma}}\,,\label{gaugealg2} \end{equation} and is a quadratic constraint on the tensor $X_{{\hat{\Lambda}}}{}^{\hat{M}}{}_{\hat{N}}$. 
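For the reader's convenience, let us check how the conditions (\ref{sympconde}) arise. Denoting by $a$, $c$ and $d$ the three non-vanishing $n_v\times n_v$ blocks of (\ref{xsymp}) at fixed ${\hat{\Lambda}}$, namely $a\equiv(X_{\hat{\Lambda}}{}^{\hat{\Gamma}}{}_{\hat{\Sigma}})$, $c\equiv(X_{{\hat{\Lambda}}\,\hat{\Gamma}\hat{\Sigma}})$ and $d\equiv(X_{{\hat{\Lambda}}\,{\hat{\Gamma}}}{}^{\hat{\Delta}})$, and taking $\mathbb{C}=\left(\begin{smallmatrix} \mathbf{0} & \mathbf{1} \cr -\mathbf{1} & \mathbf{0} \end{smallmatrix}\right)$ (the result is insensitive to the overall sign convention for $\mathbb{C}$), the requirement that each $X_{\hat{\Lambda}}$ generate an infinitesimal symplectic transformation reads
\begin{equation}
X_{\hat{\Lambda}}^T\,\mathbb{C}+\mathbb{C}\,X_{\hat{\Lambda}}=
\left(\begin{matrix} c-c^T & a^T+d \cr -(a^T+d)^T & \mathbf{0} \end{matrix}\right)=\mathbf{0}
\quad\Rightarrow\quad c=c^T\,,\;\; d=-a^T\,,
\end{equation}
which are precisely the two relations in (\ref{sympconde}).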
The identification (\ref{idenfx}) also implies \begin{align} X_{({\hat{\Gamma}}{\hat{\Sigma}})}{}^{\hat{\Lambda}}=0\,.\label{lin1} \end{align} \medskip The closure condition (\ref{gaugealg2}) can thus be interpreted in two equivalent ways: \begin{enumerate}[$\circ$,itemsep=1ex] \item{the vector fields $A^{\hat{\Lambda}}_\mu$ transform in the co-adjoint representation of $G_g$ under its action as global symmetry, namely \begin{equation} {\bf n_v}=\text{co-adj}(G_g)\,; \end{equation} } \item{the gauge generators $X_{{\hat{\Lambda}}}$ are invariant under the action of $G_g$ itself: \begin{equation} \delta_{{\hat{\Lambda}}}X_{{\hat{\Sigma}}}\equiv [X_{\hat{\Lambda}},\,X_{{\hat{\Sigma}}}]+X_{{\hat{\Lambda}}{\hat{\Sigma}}}{}^{\hat{\Gamma}}\,X_{\hat{\Gamma}}=0\,. \end{equation}} \end{enumerate} \medskip \subparagraph{\textbf{Step 2.} Introducing gauge curvatures and covariant derivatives.} Having defined the gauge connection (\ref{gconnection}), we also define its transformation property under a local $G_g$-transformation $ {\bf g}(x)\in G_g$: \begin{equation} \Omega_g \;\rightarrow\; \Omega_g'={\bf g}\,\Omega_g\,{\bf g}^{-1}+d {\bf g}\,{\bf g}^{-1}=g\,A^{\prime \hat{\Lambda}}\,X_{\hat{\Lambda}}\,.\label{Omtrasg} \end{equation} Under an infinitesimal transformation \;${\bf g}(x)\equiv \Id+g\,\zeta^{\hat{\Lambda}}(x)\,X_{\hat{\Lambda}}$,\, eq.\ (\ref{Omtrasg}) implies the following transformation property of the gauge vectors: \begin{equation} \delta A^{\hat{\Lambda}}_\mu\=\mathcal{D}_\mu\zeta^{\hat{\Lambda}}~\equiv~ \partial_\mu\zeta^{\hat{\Lambda}}+g\,A_\mu^{\hat{\Sigma}}X_{\hat{\Sigma}\hat{\Gamma}}{}^{\hat{\Lambda}}\,\zeta^{\hat{\Gamma}} \,, \end{equation} where we have introduced the $G_g$-covariant derivative of the gauge parameter $\mathcal{D}_\mu\zeta^{\hat{\Lambda}}$.\par As usual in the construction of non-abelian gauge theories, we define the gauge curvature%
\footnote{ here we use the following convention for the definition of the components of a form: $\omega_{(p)}=\frac{1}{p!}\,\omega_{\mu_1\dots\mu_p}\,dx^{\mu_1}\wedge \dots dx^{\mu_p}$ } \begin{equation} g\,\mathcal{F}=g\,F^{\hat{\Lambda}}\,X_{\hat{\Lambda}}=\frac{g}{2}\,F^{\hat{\Lambda}}_{\mu\nu}\,dx^\mu\wedge dx^\nu\,X_{\hat{\Lambda}}\equiv d\Omega_g-\Omega_g\wedge \Omega_g\,,\label{calF} \end{equation} which, in components, reads: \begin{equation} F_{\mu\nu}^{{\hat{\Lambda}}} \= \partial_\mu A^{\hat{\Lambda}}_\nu-\partial_\nu A^{\hat{\Lambda}}_\mu - g\,f_{{\hat{\Gamma}}{\hat{\Sigma}}}{}^{\hat{\Lambda}}\,A^{\hat{\Gamma}}_\mu\,A^{\hat{\Sigma}}_\nu\= \partial_\mu A^{\hat{\Lambda}}_\nu-\partial_\nu A^{\hat{\Lambda}}_\mu + g\,X_{{\hat{\Gamma}}{\hat{\Sigma}}}{}^{\hat{\Lambda}}\,A^{\hat{\Gamma}}_\mu\,A^{\hat{\Sigma}}_\nu\;.\label{defF} \end{equation} The gauge curvature transforms covariantly under a transformation $ {\bf g}(x)\in G_g$: \begin{equation} \mathcal{F} \;\rightarrow\; \mathcal{F}'={\bf g}\,\mathcal{F}\,{\bf g}^{-1}\,,\label{gaugecovF} \end{equation} and satisfies the Bianchi identity: \begin{equation} \mathcal{D}\mathcal{F}\equiv d\mathcal{F}-\Omega_g\wedge \mathcal{F}+\mathcal{F}\wedge \Omega_g=0 \;\;\Leftrightarrow\;\; \mathcal{D} F^{{\hat{\Lambda}}}\equiv dF^{{\hat{\Lambda}}}+g\,X_{{\hat{\Sigma}}{\hat{\Gamma}}}{}^{{\hat{\Lambda}}}A^{{\hat{\Sigma}}}\wedge F^{{\hat{\Gamma}}}=0\,, \end{equation} where we have denoted by $\mathcal{D} F^{{\hat{\Lambda}}}$ the $G_g$-covariant derivative acting on $F^{{\hat{\Lambda}}}$. 
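It is instructive to verify the covariance property (\ref{gaugecovF}) explicitly: substituting (\ref{Omtrasg}) into the definition (\ref{calF}) and using $d{\bf g}^{-1}=-{\bf g}^{-1}d{\bf g}\,{\bf g}^{-1}$, all the terms containing $d{\bf g}$ cancel pairwise and one is left with
\begin{equation}
g\,\mathcal{F}'=d\Omega_g'-\Omega_g'\wedge \Omega_g'={\bf g}\left(d\Omega_g-\Omega_g\wedge \Omega_g\right){\bf g}^{-1}=g\,{\bf g}\,\mathcal{F}\,{\bf g}^{-1}\,.
\end{equation}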
In the original ungauged Lagrangian we then replace the abelian field strengths by the new $G_g$-covariant ones: \begin{equation} \partial_\mu A^{\hat{\Lambda}}_\nu-\partial_\nu A^{\hat{\Lambda}}_\mu\,\,\rightarrow\,\,\,\,\partial_\mu A^{\hat{\Lambda}}_\nu-\partial_\nu A^{\hat{\Lambda}}_\mu + g\,X_{{\hat{\Gamma}}{\hat{\Sigma}}}{}^{\hat{\Lambda}}\,A^{\hat{\Gamma}}_\mu\,A^{\hat{\Sigma}}_\nu\,.\label{replaceF} \end{equation} After having given the gauge fields a $G_g$-covariant description in the Lagrangian through the non-abelian field strengths, we now move to the other fields. The next step in order to achieve local invariance of the Lagrangian under $G_g$ consists in replacing ordinary derivatives by covariant ones \begin{equation} \dmu\;\;\longrightarrow\;\; \mathcal{D}_\mu\=\dmu-g\,A^{\hat{\Lambda}}_\mu\,X_{\hat{\Lambda}} \,.\label{covder} \end{equation} As can be easily ascertained, the covariant derivatives satisfy the identity which is well known from gauge theories: \begin{equation} \mathcal{D}^2=-g\,\mathcal{F}=-g\,F^{\hat{\Lambda}}\,X_{\hat{\Lambda}} \quad\Leftrightarrow\quad [\mathcal{D}_\mu,\,\mathcal{D}_\nu]=-g\,F^{\hat{\Lambda}}_{\mu\nu}\,X_{\hat{\Lambda}}\,.\label{D2F} \end{equation} Aside from the vectors and the metric, the remaining bosonic fields are the scalars $\phi^s$, whose derivatives are covariantized using the Killing vectors $k_{\hat{\Lambda}}$ associated with the action of the gauge generator $X_{\hat{\Lambda}}$ as an isometry: \begin{equation} \dmu\phi^s\;\;\longrightarrow\;\; \mathcal{D}_\mu\phi^s\=\dmu\phi^s-g\,A^{\hat{\Lambda}}_\mu\,k^s_{\hat{\Lambda}}(\phi)\,.\label{covderphi} \end{equation} The replacement (\ref{covder}), and in particular (\ref{covderphi}), amounts to the \emph{introduction of minimal couplings} for the vector fields.\par\smallskip Care is needed for the fermion fields which, as we have discussed above, do not transform directly under $G$, but under the corresponding compensating transformations in $H$. This was taken into account by writing the $H$-connection $\mathpzc{w}$ in the fermion $H$-covariant derivatives. Now we need to promote such derivatives to $G_g$-covariant ones, by minimally coupling the fermions to the gauge fields. 
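Before turning to the fermions, let us verify the identity (\ref{D2F}): writing $\mathcal{D}_\mu=\partial_\mu-g\,A^{\hat{\Lambda}}_\mu\,X_{\hat{\Lambda}}$ and using the closure relation (\ref{gaugealg2}), one finds
\begin{equation}
[\mathcal{D}_\mu,\,\mathcal{D}_\nu]=-g\left(\partial_\mu A^{\hat{\Lambda}}_\nu-\partial_\nu A^{\hat{\Lambda}}_\mu\right)X_{\hat{\Lambda}}+g^2\,A^{\hat{\Gamma}}_\mu A^{\hat{\Sigma}}_\nu\,[X_{\hat{\Gamma}},\,X_{\hat{\Sigma}}]=-g\,F^{\hat{\Lambda}}_{\mu\nu}\,X_{\hat{\Lambda}}\,,
\end{equation}
the commutator term reproducing exactly the quadratic piece of (\ref{defF}).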
The minimal coupling of the fermions to the gauge fields is effected by modifying the $H$-connection.\par For homogeneous scalar manifolds we replace the left-invariant 1-form $\Omega$ (pulled-back on space-time), defined on them in (\ref{omegapro}), with a \emph{gauged} one, obtained by covariantizing the derivative of the coset representative: \begin{equation} \Omega_\mu= \textsl{L}^{-1}\partial_\mu \textsl{L} \;\;\;\longrightarrow\;\;\; \hat{\Omega}_\mu\equiv \textsl{L}^{-1}\mathcal{D}_\mu \textsl{L} = \textsl{L}^{-1}\left(\partial_\mu-g\,A^{\hat{\Lambda}}_\mu\,X_{\hat{\Lambda}}\right) \textsl{L} =\hat{\mathcal{P}}_\mu+\hat{\mathpzc{w}}_\mu \label{hatOm} \end{equation} where, as usual, the space-time dependence of the coset representative is defined by the scalar fields $\phi^s(x)$:\; $\partial_\mu \textsl{L}\equiv \partial_s \textsl{L}\, \partial_\mu\phi^s$.\par The \emph{gauged} vielbein and connection are related to the ungauged ones as follows: \begin{align} \hat{\mathcal{P}}_\mu&={\mathcal{P}}_\mu-g\,A^{\hat{\Lambda}}_\mu\,{\mathcal{P}}_{\hat{\Lambda}}\;; \;\;\quad\;\; \hat{\mathpzc{w}}_\mu= {\mathpzc{w}}_\mu-g\, A^{\hat{\Lambda}}_\mu\,{\mathpzc{w}}_{\hat{\Lambda}}\,.\label{gaugedPW} \end{align} The matrices ${\mathcal{P}}_{\hat{\Lambda}},\,{\mathpzc{w}}_{\hat{\Lambda}}$ are the projections onto $\mathfrak{K}$ and $\mathfrak{H}$, respectively, of $\textsl{L}^{-1}X_{\hat{\Lambda}}\textsl{L}$: \begin{align} {\mathcal{P}}_{\hat{\Lambda}}&\equiv \left.\textsl{L}^{-1}X_{\hat{\Lambda}}\textsl{L}\right\vert_{\mathfrak{K}}\;; \quad\; {\mathpzc{w}}_{\hat{\Lambda}}\equiv \left.\textsl{L}^{-1}X_{\hat{\Lambda}}\textsl{L}\right\vert_{\mathfrak{H}}\,.\label{wPproj} \end{align} Using eq.\ (\ref{kespans2}) we can express the above quantities as follows: \begin{align} {\mathcal{P}}_{\hat{\Lambda}}&= k_{\hat{\Lambda}}^s\,V_s{}^{\underline{s}}\,K_{{\underline{s}}}\;; \quad {\mathpzc{w}}_{\hat{\Lambda}}= -\frac{1}{2}\,\mathscr{P}_{\hat{\Lambda}}^{{\bf a}}\,J_{{\bf a}}-\frac{1}{2}\,\mathscr{P}_{\hat{\Lambda}}^{{\bf m}}\,J_{{\bf m}}\,,\label{gaugedPW2} \end{align} where $\mathscr{P}_{\hat{\Lambda}}^{{\bf a}}$ were defined in Sect.\ \ref{ghsect}.\par\smallskip For non-homogeneous scalar manifolds we cannot use the construction (\ref{hatOm}) based on the coset representative. Nevertheless we can still define $\mathscr{P}_{\hat{\Lambda}}^{{\bf m}},\,\mathscr{P}_{\hat{\Lambda}}^{{\bf a}}$ in terms of the Killing vectors, see discussion below eq.\ (\ref{KRP}). From these quantities one then defines the gauged vielbein $\hat{\mathcal{P}}_\mu$ and $H$-connection $\hat{\mathpzc{w}}_\mu$ using (\ref{gaugedPW}) and (\ref{gaugedPW2}), where now $K_{{\underline{s}}}$ should be understood as a basis of the tangent space to the manifold at the origin (and not as isometry generators) and $\{J_{{\bf a}},\,J_{{\bf m}}\}$ as a basis of the holonomy group.\par Notice that, as a consequence of eqs.\ (\ref{gaugedPW2}) and (\ref{gaugedPW}), the gauged vielbein 1-forms (pulled-back on space-time) can be written as the ungauged ones in which the derivatives on the scalar fields are replaced by the covariant ones (\ref{covderphi}). 
This is readily seen by applying the general formula (\ref{kespans}) for homogeneous manifolds to the isometry $X_{\hat{\Lambda}}$ in (\ref{hatOm}), and projecting both sides of this equation on the coset space $\mathfrak{K}$: \begin{equation} \hat{\mathcal{P}}_\mu=\mathcal{P}_s\,\mathcal{D}_\mu \phi^s\,.\label{hatPPDz} \end{equation} Consequently the replacement (\ref{covderphi}) is effected by replacing everywhere in the Lagrangian $\mathcal{P}_\mu$ by $\hat{\mathcal{P}}_\mu$.\par Consider now a local $G_g$-transformation ${\bf g}(x)$ whose effect on the scalars is described by eq.\ (\ref{gLh}):\; ${\bf g}\textsl{L}(\phi)=\textsl{L}({\bf g}\star\phi)\,h(\phi,{\bf g})$.\; From (\ref{hatOm}) and from the fact that $\mathcal{D}$ is the $G_g$-covariant derivative, the reader can easily verify that: \begin{equation} \hat{\Omega}_\mu({\bf g}\star \phi)=h\,\hat{\Omega}_\mu(\phi)\,h^{-1}+hdh^{-1} \;\;\Rightarrow\;\; \begin{cases} \hat{\mathcal{P}}({\bf g}\star \phi)=h\,\hat{\mathcal{P}}(\phi)\,h^{-1}\,,\cr \hat{\mathpzc{w}}({\bf g}\star \phi)=h\,\hat{\mathpzc{w}}(\phi)\,h^{-1}+hdh^{-1}\,, \end{cases} \label{PWhattra} \end{equation} where $ h=h(\phi,{\bf g})$. By differentiating (\ref{hatOm}) we find the \emph{gauged} Maurer-Cartan equations: \begin{equation} d\hat{\Omega}+\hat{\Omega}\wedge \hat{\Omega}=-g\,\textsl{L}^{-1}\mathcal{F}\textsl{L}\;, \end{equation} where we have used (\ref{D2F}). Projecting the above equation onto $\mathfrak{K}$ and $\mathfrak{H}$ we find the gauged version of eqs.\ (\ref{DP}), (\ref{RW}): \begin{align} \mathscr{D}\hat{\mathcal{P}}&\equiv d\hat{\mathcal{P}}+\hat{\mathpzc{w}}\wedge \hat{\mathcal{P}}+\hat{\mathcal{P}}\wedge \hat{\mathpzc{w}}=-g\,F^{\hat{\Lambda}}\,{\mathcal{P}}_{\hat{\Lambda}}\,,\label{DP2}\\ \hat{R}(\hat{\mathpzc{w}})&\equiv d\hat{\mathpzc{w}}+\hat{\mathpzc{w}}\wedge \hat{\mathpzc{w}}=-\hat{\mathcal{P}}\wedge \hat{\mathcal{P}}-g\,F^{\hat{\Lambda}}\,{\mathpzc{w}}_{\hat{\Lambda}}\,.\label{RW2} \end{align} The above equations are manifestly $G_g$-invariant. Using (\ref{hatPPDz}) one can easily verify that the gauged curvature 2-form (with values in $\mathfrak{H}$) can be written in terms of the curvature components $R_{rs}$ of the manifold, given in eq.\ (\ref{Rcompo}), as follows: \begin{equation} \hat{R}(\hat{\mathpzc{w}})=\frac{1}{2}\,R_{rs}\,\mathcal{D}\phi^r\wedge \mathcal{D}\phi^s-g\,F^{\hat{\Lambda}}\,{\mathpzc{w}}_{\hat{\Lambda}}\,. \end{equation} The gauge-covariant derivative, when acting on a generic fermion field $\xi$, is defined using $\hat{\mathpzc{w}}_\mu$, so that (\ref{Dxi}) is replaced by \begin{equation} \mathcal{D}_\mu\xi=\nabla_\mu\xi+\hat{\mathpzc{w}}_\mu\star \xi\,.\label{Dxi2} \end{equation} Summarizing, local invariance of the action under $G_g$ requires replacing everywhere in the Lagrangian the abelian field strengths by the non-abelian ones, eq.\ (\ref{replaceF}), and the ungauged vielbein $\mathcal{P}_\mu$ and $H$-connection $\mathpzc{w}_\mu$ by the gauged ones: \begin{equation} \mathcal{P}_\mu\;\rightarrow\;\hat{\mathcal{P}}_\mu\;; \;\quad\; \mathpzc{w}_\mu\;\rightarrow\;\hat{\mathpzc{w}}_\mu\,.\label{replacePW} \end{equation} Clearly, a necessary, though not sufficient, condition for the supersymmetry of the gauged action is that the above replacements be performed in the supersymmetry transformation laws of the fields as well. 
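As a final detail of this step, it is worth spelling out the short computation behind the gauged Maurer-Cartan equations quoted above: since $\hat{\Omega}=\textsl{L}^{-1}\mathcal{D}\textsl{L}$, using $d\textsl{L}^{-1}=-\textsl{L}^{-1}d\textsl{L}\,\textsl{L}^{-1}$ one finds
\begin{equation}
d\hat{\Omega}+\hat{\Omega}\wedge \hat{\Omega}=\textsl{L}^{-1}\,\mathcal{D}\left(\mathcal{D}\textsl{L}\right)=\textsl{L}^{-1}\,\mathcal{D}^2\,\textsl{L}=-g\,\textsl{L}^{-1}\mathcal{F}\,\textsl{L}\,,
\end{equation}
the last equality being the identity (\ref{D2F}).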
\par \subparagraph{\textbf{Step 3.} Introducing topological terms.} If the symplectic duality action (\ref{xsymp}) of $X_{\hat{\Lambda}}$ has a non-vanishing off-diagonal block $X_{{\hat{\Lambda}}{\hat{\Gamma}}{\hat{\Sigma}}}$, that is if the gauge transformations include Peccei-Quinn shifts, then an infinitesimal (local) gauge transformation $\xi^{\hat{\Lambda}}(x)\,X_{{\hat{\Lambda}}}$ produces a variation of the Lagrangian of the form (\ref{deltaLC}): \begin{equation} \delta\mathscr{L}_{\text{bos}}= \frac{g}{8}\,\xi^{\hat{\Lambda}}(x)X_{{\hat{\Lambda}}{\hat{\Gamma}}{\hat{\Sigma}}}\epsilon^{\mu\nu\rho\sigma}\, F^{\hat{\Gamma}}_{\mu\nu}F^{\hat{\Sigma}}_{\rho\sigma}\,.\label{deltaLX} \end{equation} Since $\xi^{\hat{\Lambda}}(x)$ is a local parameter, the above term is no longer a total derivative and thus the transformation is not a symmetry of the action. In \cite{deWit:1984px} it was proven that the variation (\ref{deltaLX}) can be canceled by adding to the Lagrangian a topological term of the form \begin{equation} \mathscr{L}_{\rm top.} =\frac{1}{3}\,g\,\epsilon^{\mu\nu\rho\sigma}\,X_{{\hat{\Lambda}}{\hat{\Gamma}}{\hat{\Sigma}}}\;A^{\hat{\Lambda}}_\mu\, A^{\hat{\Sigma}}_\nu\, \left(\partial_\rho A^{\hat{\Gamma}}_\sigma+\frac{3}{8}\,g\,X_{{\hat{\Delta}}{\hat{\Pi}}}{}^{\hat{\Gamma}}\,A^{\hat{\Delta}}_\rho\,A^{\hat{\Pi}}_\sigma\right) \,,\label{top} \end{equation} provided the following condition holds \begin{equation} X_{({\hat{\Lambda}}{\hat{\Gamma}}{\hat{\Sigma}})}\=0\;.\label{xsymmetr} \end{equation} We will see in the following that condition (\ref{xsymmetr}), together with the closure constraint (\ref{gaugealg2}), is part of a set of constraints on the gauge algebra which is implied by supersymmetry. Indeed, even if the Lagrangian $\mathscr{L}_{\text{gauged}}^{(0)}$ constructed so far is locally $G_g$-invariant, the presence of minimal couplings explicitly breaks both supersymmetry and the global duality symmetry $G$. \par \paragraph{Gauge algebra and embedding tensor.} We have seen that the gauging procedure corresponds to promoting some suitable subgroup $G_g\subset G_{el}$ to a local symmetry. This subgroup is defined by selecting a subset of generators within the global symmetry algebra $\mathfrak{g}$ of $G$. All the information about the gauge algebra can be encoded in a $G_{el\,}$-covariant object $\theta$, which expresses the gauge generators as linear combinations of the global symmetry generators $t_\sigma$ of the subgroup $G_{el}\subset G$ \begin{equation} X_{\hat{\Lambda}}=\theta_{\hat{\Lambda}}{}^\sigma\,t_\sigma\;; \quad\quad \theta_{\hat{\Lambda}}{}^\sigma \in {\bf n_v}\times\mathrm{adj}(G_{el})\;, \label{gentheta} \end{equation} with \,${\hat{\Lambda}}=1,\,\dotsc,\,n_v$\; and with \,$\sigma=1,\dotsc,\,\mathrm{dim}(G_{el})$. \;The advantage of this description is that the $G_{el\,}$-invariance of the original ungauged Lagrangian $\mathscr{L}$ is restored at the level of the gauged Lagrangian $\mathscr{L}_{\rm gauged}$, to be constructed below, provided $\theta_{\hat{\Lambda}}{}^\sigma$ is transformed under $G_{el}$ as well. However, the full global symmetry group $G$ of the field equations and Bianchi identities is still broken, since the parameters $\theta_{\hat{\Lambda}}{}^\sigma$ can be viewed as a set of $n_v\times n_{el}$ electric charges, with $n_{el}=\mathrm{dim}(G_{el})$, whose presence manifestly breaks electric-magnetic duality invariance. In other words, we are working in a specific symplectic frame defined by the ungauged Lagrangian we started from. 
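In connection with \textbf{Step 3} above, it is worth making explicit why the locality of the parameter matters in (\ref{deltaLX}): at lowest order in $g$ (i.e.\ for abelian field strengths) one has
\begin{equation}
\epsilon^{\mu\nu\rho\sigma}\,F^{\hat{\Gamma}}_{\mu\nu}F^{\hat{\Sigma}}_{\rho\sigma}=4\,\partial_\mu\left(\epsilon^{\mu\nu\rho\sigma}\,A^{\hat{\Gamma}}_\nu\,\partial_\rho A^{\hat{\Sigma}}_\sigma\right)\,,
\end{equation}
so that for constant $\xi^{\hat{\Lambda}}$ the variation (\ref{deltaLX}) is a total derivative and integrates to zero, while for a point-dependent $\xi^{\hat{\Lambda}}(x)$ the derivative also acts on the parameter, and the topological term (\ref{top}) is needed to compensate for the residual variation.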
\par \medskip We shall give later on a definition of the gauging procedure which is completely freed from the choice of the symplectic frame. For the time being, it is useful to give a description of the gauge algebra (and of the consistency constraints on it) which does not depend on the original symplectic frame, namely which is manifestly $G$-covariant. This is done by encoding all information on the initial symplectic frame in a symplectic matrix $E\equiv(E_M{}^N)$ and writing the gauge generators, through this matrix, in terms of new generators \begin{equation} X_M=(X_\Lambda,\,X^\Lambda)\,, \end{equation} which are twice as many as the $X_{\hat{\Lambda}}$: \begin{eqnarray} \left(\begin{matrix} X_{\hat{\Lambda}} \cr 0 \end{matrix}\right) =E\,\left(\begin{matrix} X_\Lambda\cr X^\Lambda \end{matrix}\right) \;.\label{EXL} \end{eqnarray} This description is therefore redundant, and this is the price we have to pay in order to have a manifestly symplectic-covariant formalism. We can then rewrite the gauge connection in a symplectic fashion \begin{align} A^{\hat{\Lambda}}\,X_{\hat{\Lambda}} = A^{\hat{\Lambda}}\,E_{\hat{\Lambda}}{}^\Lambda\,X_\Lambda +A^{\hat{\Lambda}}\,E_{{\hat{\Lambda}}\,\Lambda}\,X^\Lambda =\AL\,X_\Lambda+\ALd\,X^\Lambda=A^M_\mu\,X_M\;,\label{syminvmc} \end{align} where we have introduced the vector fields $\AL$ and the corresponding dual ones $\ALd$, which can be regarded as components of a symplectic vector \begin{eqnarray} \AM\equiv(\AL,\,\ALd)\,. \end{eqnarray} These are clearly not independent, since they are all expressed solely in terms of the electric vector fields $A^{\hat{\Lambda}}_\mu$ of our theory (those entering the vector kinetic terms): \begin{eqnarray} \AL=E_{\hat{\Lambda}}{}^\Lambda\,A^{\hat{\Lambda}}_\mu\;,\qquad \ALd=E_{{\hat{\Lambda}}\,\Lambda}\,A^{\hat{\Lambda}}_\mu\;. \end{eqnarray} In what follows, it is useful to adopt this symplectic-covariant description in terms of $2n_v$ vector fields $\AM$ and $2n_v$ generators $X_M$, bearing in mind the above definitions through the matrix $E$, which connects our initial symplectic frame to any other.\par \smallskip The components of the symplectic vector $X_M$ are generators in the isometry algebra $\mathfrak{g}$ and thus can be expanded in a basis $t_\alpha$ of generators of $G$: \begin{equation} X_M=\Theta_M{}^\alpha\,t_\alpha\,,\qquad \alpha=1,\dotsc,\,\mathrm{dim}(G) \,.\label{Thdef} \end{equation} The coefficients $\Theta_M{}^\alpha$ of this expansion represent an extension of the definition of $\theta$ to a $G$-covariant tensor: \begin{equation} \theta_\Lambda{}^\sigma \,\;\dashrightarrow\;\; \Th \equiv (\theta^{\Lambda\,\alpha},\;\theta_\Lambda{}^\alpha)\,; \qquad \Th\,\in\,\mathscr{R}_{v*}\times\mathrm{adj}(G) \,,\label{embtens} \end{equation} which describes the explicit embedding of the gauge group $G_g$ into the global symmetry group $G$, and combines the full set of deformation parameters of the original ungauged Lagrangian. 
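As a simple illustration, consider the gauging of a single ${\rm U}(1)$ generated by some $t_0\in\mathfrak{g}$, so that the index $\alpha$ effectively takes a single value and $X_M=\Theta_M{}^0\,t_0$. Since $[X_M,\,X_N]=0$, the closure condition (\ref{gaugealg2}), rewritten in terms of the $X_M$, reduces to
\begin{equation}
X_{MN}{}^P\,X_P=\Theta_M{}^{0}\,(t_0)_N{}^P\,\Theta_P{}^{0}\,t_0=0
\quad\Rightarrow\quad
(t_0)_N{}^P\,\Theta_P{}^{0}=0\,,
\end{equation}
that is, the symplectic vector $\Theta_M{}^{0}$ must be invariant under the duality action of the very generator it gauges. Note also that $\mathbb{C}^{MN}\,\Theta_M{}^{0}\,\Theta_N{}^{0}=0$ holds identically by antisymmetry of $\mathbb{C}$, a fact that will become relevant shortly.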
The advantage of this description is that it allows us to recast all the consistency conditions on the choice of the gauge group into $G$-covariant (and thus independent of the symplectic frame) constraints on $\Theta$.\par We should however bear in mind that, just as the redundant set of vectors $A^M_\mu$, also the components of $\Theta_M{}^\alpha$ are not independent since, by eq.\ (\ref{EXL}), \begin{equation} \theta_{\hat{\Lambda}}{}^\alpha=E_{\hat{\Lambda}}{}^M\,\Theta_M{}^\alpha\;, \;\quad\; 0=E^{\hat{\Lambda}\,M}\,\Theta_M{}^\alpha \,,\label{elET} \end{equation} so that \begin{equation} \mathrm{dim}(G_g)\=\mathrm{rank}(\theta)=\mathrm{rank}(\Theta)\,. \end{equation} The above relations (\ref{elET}) imply for $\Theta_M{}^\alpha$ the following symplectic-covariant condition: \begin{equation} \Theta_\Lambda{}^\alpha\,\Theta^{\Lambda\,\beta}-\Theta_\Lambda{}^\beta\,\Theta^{\Lambda\,\alpha}=0 \quad\Leftrightarrow\quad \mathbb{C}^{MN}\Theta_M{}^\alpha\Theta_N{}^\beta=0\,.\label{locality} \end{equation} Vice versa, one can show that if $\Theta_M{}^\alpha$ satisfies the above conditions, there exists a symplectic matrix $E$ which can rotate it to an electric frame, namely such that eqs.\ (\ref{elET}) are satisfied for some $\theta_{\hat{\Lambda}}{}^\alpha$.\, Equations (\ref{locality}) define the so-called \emph{locality constraint} on the embedding tensor $\Theta_M{}^\alpha$ and they clearly imply: \begin{equation} \mathrm{dim}(G_g)=\mathrm{rank}(\Theta)\le n_v\;, \end{equation} which is the preliminary consistency condition (\ref{preliminary}).\par The electric-magnetic duality action of $X_M$, in the generic symplectic frame defined by the matrix $E$, is described by the tensor: \begin{equation} X_{MN}{}^P~\equiv~ \Theta_M{}^\alpha\,t_{\alpha\,N}{}^P\= E^{-1}{}_M{}^{\hat{M}}E^{-1}{}_N{}^{\hat{N}}\,X_{\hat{M}\hat{N}}{}^{\hat{P}}\,E_{\hat{P}}{}^P\,.\label{XEhatX} \end{equation} For each value of the index {\footnotesize $M$}, the tensor $X_{MN}{}^P$ should generate symplectic transformations. This implies that: \begin{equation} X_{MNP}\equiv X_{MN}{}^Q\mathbb{C}_{QP}=X_{MPN}\,, \end{equation} which is equivalent to eqs.\ (\ref{sympconde}). The remaining linear constraints (\ref{lin1}), (\ref{xsymmetr}) on the gauge algebra can be recast in terms of $X_{MN}{}^P$ in the following symplectic-covariant form: \begin{equation} X_{(MNP)}=0 \;\quad\Leftrightarrow\quad\; \begin{cases} 2\,X_{(\Lambda\Sigma)}{}^\Gamma=X^\Gamma{}_{\Lambda\Sigma}\,,\cr 2\,X^{(\Lambda\Sigma)}{}_\Gamma=X_\Gamma{}^{\Lambda\Sigma}\,,\cr X_{(\Lambda\Sigma\Gamma)}=0\,. \end{cases} \label{lconstr} \end{equation} Notice that the second of equations (\ref{lconstr}) implies that in the electric frame, in which $X^{\hat{\Lambda}}=0$, also the $B$-block (i.e.\ the upper-right one) of the infinitesimal gauge generators $\mathscr{R}_v[X_{\hat{\Lambda}}]$ vanishes, since $X_{\hat{\Gamma}}{}^{\hat{\Lambda}\hat{\Sigma}}=0$, so that the gauge transformations are indeed in $G_{el}$.\par Finally, the closure constraints (\ref{gaugealg2}) can be written, in the generic frame, in the following form: \begin{equation} [X_M,\,X_N]=-X_{MN}{}^P\,X_P \quad\Leftrightarrow\quad \Theta_M{}^\alpha\Theta_N{}^\beta{\rm f}_{\alpha\beta}{}^\gamma+\Theta_M{}^\alpha\,t_{\alpha\,N}{}^P\Theta_P{}^\gamma=0\,. 
\end{equation} The above condition can be rephrased, in a $G$-covariant fashion, as the condition that the \emph{embedding tensor $\Theta_M{}^\alpha$ be invariant under the action of the gauge group it defines}: \begin{align} \delta_M\Theta_N{}^\alpha=0\,. \end{align} Summarizing, we have found that consistency of the gauging requires the following set of linear and quadratic algebraic, $G$-covariant constraints to be satisfied by the embedding tensor: \begin{enumerate}[$\circ$,itemsep=1ex] \item{\emph{Linear constraint:} \begin{align} X_{(MNP)}&=0\,,\label{linear2} \end{align} } \item{\emph{Quadratic constraints:} \begin{align} &\mathbb{C}^{MN}\Theta_M{}^\alpha\Theta_N{}^\beta=0\,,\label{quadratic1}\\ &[X_M,\,X_N]=-X_{MN}{}^P\,X_P\,.\label{quadratic2} \end{align} } \end{enumerate} The linear constraint (\ref{linear2}) amounts to a projection of the embedding tensor onto a specific $G$-representation $\mathscr{R}_\Theta$ in the decomposition of the product $\mathscr{R}_{v*}\times {\rm Adj}(G)$ with respect to $G$ \begin{equation} \mathscr{R}_{v*}\times {\rm Adj}(G)\;\;\stackrel{G}{\longrightarrow}\;\; \mathscr{R}_\Theta +\, \dots \end{equation} and thus can be formally written as follows: \begin{equation} \mathbb{P}_\Theta\cdot \Theta=\Theta\,, \end{equation} where $\mathbb{P}_\Theta$ denotes the projection onto the representation $\mathscr{R}_\Theta$. For this reason (\ref{linear2}) is also referred to as the \emph{representation constraint}.\par The first quadratic constraint (\ref{quadratic1}) guarantees that a symplectic matrix $E$ exists which rotates the embedding tensor $\Theta_M{}^\alpha$ to an electric frame in which the \emph{magnetic components} $\Theta^{\hat{\Lambda}\,\alpha}$ vanish. The second one (\ref{quadratic2}) is the condition that the gauge algebra close within the global symmetry algebra $\mathfrak{g}$, and implies that $\Theta$ is a singlet with respect to $G_g$. In a general theory, the three constraints \eqref{linear2}, \eqref{quadratic1} and \eqref{quadratic2} should be imposed independently. As we shall prove below, in theories (such as the maximal one) in which all scalar fields enter the same supermultiplets as the vector ones, the locality constraint \eqref{quadratic1} follows from the other two. Moreover, in maximal supergravity the closure constraint \eqref{quadratic2} follows from \eqref{linear2} and \eqref{quadratic1}, so that, once the linear constraint is imposed, the two quadratic ones are equivalent.\par\smallskip The second part of the gauging procedure, which we are going to discuss below, has to do with restoring supersymmetry after minimal couplings have been introduced and the $G_g$-invariant Lagrangian $\mathscr{L}_{\text{gauged}}^{(0)}$ has been constructed. As we shall see, the supersymmetric completion of $\mathscr{L}_{\text{gauged}}^{(0)}$ requires no constraints on $G_g$ (i.e.\ on $\Theta$) beyond the linear (\ref{linear2}) and quadratic ones (\ref{quadratic1}), (\ref{quadratic2}) discussed above.\par\medskip As a final remark, let us prove that the locality constraint (\ref{quadratic1}) is independent of the others only in theories featuring scalar isometries with no duality action, namely in which the symplectic duality representation $\Scr{R}_v$ of the isometry algebra $\mathfrak{g}$ is not faithful. This is the case for the quaternionic isometries in $\mathcal{N} = 2$ theories, see eq.\ (\ref{qisom}) of Sect.\ \ref{sframes}. 
Let us split the generators $t_\alpha$ of $G$ into $t_{\ell}$, which have a non-trivial duality action, and $t_{m}$, which do not: \begin{equation} (t_{\ell})_M{}^N\neq 0\;; \;\quad\; (t_{m})_M{}^N= 0\,. \end{equation} From equation (\ref{quadratic2}) we derive, upon symmetrization of the {\footnotesize $M,\,N$} indices, the following condition: \begin{equation} X_{(MN)}{}^P\,X_{P}=X_{(MN)}{}^P\,\Theta_{P}{}^\alpha\,t_\alpha=0\,,\label{quad2n} \end{equation} where the $t_\alpha$ on the right hand side are \emph{not} evaluated in the $\mathscr{R}_v$ representation and thus are all non-vanishing. Using the linear constraint (\ref{linear2}) we can then rewrite $X_{(MN)}{}^P$ as follows: \begin{equation} X_{(MN)}{}^P=-\frac{1}{2}\,\mathbb{C}^{PQ}\,X_{QMN}=-\frac{1}{2}\,\mathbb{C}^{PQ}\,\Theta_Q{}^\ell t_{\ell\,MN}\,, \end{equation} so that (\ref{quad2n}) reads \begin{equation} \mathbb{C}^{QP}\,\Theta_Q{}^\ell\Theta_{P}{}^\alpha\,t_\alpha\,t_{\ell\,MN}\,=0 \,.\label{quad2nbis} \end{equation} Since $t_\alpha$ and $t_{\ell\,MN}$ are independent for any $\alpha$ and $\ell$, conditions (\ref{linear2}) and (\ref{quadratic2}) only imply \emph{part of} the locality constraint (\ref{quadratic1}): \begin{equation} \mathbb{C}^{QP}\,\Theta_Q{}^\ell\Theta_{P}{}^\alpha=0\,,\label{quad2n2} \end{equation} while the remaining constraints in (\ref{quadratic1}), \begin{equation} \mathbb{C}^{QP}\,\Theta_Q{}^m\Theta_{P}{}^{n}=0\,,\label{quad2n3} \end{equation} need to be imposed independently. Therefore in theories in which all scalar fields sit in the same supermultiplets as the vector ones, as is the case for $\mathcal{N}>2$ or $\mathcal{N}=2$ with no hypermultiplets, the locality condition \eqref{quadratic1} is not independent but follows from the other constraints. \subsection{The gauged Lagrangian} The three steps described above allow us to construct a Lagrangian $\mathscr{L}_{\text{gauged}}^{(0)}$ which is locally $G_g$-invariant, starting from the ungauged one. Now we have to check whether this deformation is compatible with local supersymmetry. As it stands, the Lagrangian $\mathscr{L}_{\text{gauged}}^{(0)}$ is no longer invariant under supersymmetry, due to the extra contributions that arise from the variation of the vector fields in the covariant derivatives.\par Consider, for instance, the supersymmetry variation of the (gauged) Rarita-Schwinger term in the Lagrangian \begin{equation} \mathscr{L}_{\textsc{rs}}=i\,e\,\bar{\psi}^A_\mu\gamma^{\mu\nu\rho}\mathcal{D}_\nu\psi_{A\,\rho}~+~\text{h.c.}\;, \end{equation} where $\mathcal{D}_\nu$ is the gauged covariant derivative defined in eq.\ (\ref{Dxi2}). The supersymmetry variation of $\psi_\mu$ reads \begin{equation} \delta\psi_\mu=\mathcal{D}_\mu \epsilon+\dots\,, \end{equation} $\epsilon$ being the local supersymmetry parameter%
\footnote{ the ellipses refer to terms containing the vector field strengths }. The variation of $\mathscr{L}_{\textsc{rs}}$ produces a term \begin{align} \delta\mathscr{L}_{\textsc{rs}}&\=\dots+2i\,e\,\bar{\psi}^A_\mu\gamma^{\mu\nu\rho}\mathcal{D}_\nu \mathcal{D}_{\rho}\epsilon_A~+~\text{h.c.}\=\nonumber\\ &\=-i\,g\,e\,\bar{\psi}^A_\mu\gamma^{\mu\nu\rho}F_{\nu\rho}^{\hat{\Lambda}}\,(\mathpzc{w}_{\hat{\Lambda}}\epsilon)_A~+~\text{h.c.} \;,\label{RSvar} \end{align} where we have used the property (\ref{D2F}) of the gauge-covariant derivative. 
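Explicitly, the $O(g)$ term in (\ref{RSvar}) arises as follows: since $\gamma^{\mu\nu\rho}$ is totally antisymmetric, only the antisymmetric part of $\mathcal{D}_\nu\mathcal{D}_\rho$ contributes,
\begin{equation}
2i\,e\,\bar{\psi}^A_\mu\gamma^{\mu\nu\rho}\mathcal{D}_\nu \mathcal{D}_{\rho}\epsilon_A
=i\,e\,\bar{\psi}^A_\mu\gamma^{\mu\nu\rho}\,[\mathcal{D}_\nu,\,\mathcal{D}_{\rho}]\,\epsilon_A
=-i\,g\,e\,\bar{\psi}^A_\mu\gamma^{\mu\nu\rho}F^{\hat{\Lambda}}_{\nu\rho}\,(\mathpzc{w}_{\hat{\Lambda}}\epsilon)_A\,,
\end{equation}
where on the supersymmetry parameter the gauge generator $X_{\hat{\Lambda}}$ acts through its $\mathfrak{H}$-projection $\mathpzc{w}_{\hat{\Lambda}}$ of (\ref{wPproj}).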
Similarly we can consider the supersymmetry variation of the spin-$1/2$ fields: \begin{equation} \delta \lambda^{\mathcal{I}}=i\,\hat{\mathcal{P}}_\mu^{\mathcal{I}\,A}\,\gamma^\mu\epsilon_A+\dots\,, \end{equation} where the dots denote terms containing the vector fields and $\hat{\mathcal{P}}_\mu^{\mathcal{I}\,A}$ is a specific component of the $\mathfrak{K}$-valued matrix $\hat{\mathcal{P}}_\mu$. The resulting variation of the corresponding kinetic Lagrangian contains terms of the following form: \begin{align} \delta\left(-ie\,\bar{\lambda}_{\mathcal{I}}\gamma^\mu \mathcal{D}_\mu\lambda^{\mathcal{I}}~+~\text{h.c.}\right)&=\dots-2i\,e\,\bar{\lambda}_{\mathcal{I}}\gamma^{\mu\nu} \mathcal{D}_\mu\hat{\mathcal{P}}_\nu^{\mathcal{I}\,A}\,\epsilon_A~+~\text{h.c.}=\nonumber\\&=\dots+ig\,e\,\bar{\lambda}_{\mathcal{I}}\gamma^{\mu\nu} F_{\mu\nu}^{\hat{\Lambda}}\,{\mathcal{P}}_{\hat{\Lambda}}^{\mathcal{I}\,A}\,\epsilon_A~+~\text{h.c.} \label{lambdatra} \end{align} We see that the supersymmetry variation of the minimal couplings in the fermion kinetic terms has produced $O(g)$-terms which contain the tensor \begin{equation} F_{\mu\nu}^{\hat{\Lambda}}\,\textsl{L}^{-1}X_{\hat{\Lambda}} \textsl{L}=\mathbb{F}_{\mu\nu}^{M}\,\textsl{L}^{-1}X_M \textsl{L} \label{genvar} \end{equation} projected onto $\mathfrak{H}$ and contracted with the $\bar{\psi}\epsilon$ current in (\ref{RSvar}), or restricted to $\mathfrak{K}$ and contracted with the $\bar{\lambda}\epsilon$ current in the second case (\ref{lambdatra}). On the right hand side of (\ref{genvar}) the summation over the gauge generators has been written in the symplectic invariant form defined in eq.\ (\ref{syminvmc}):\; $\mathbb{F}^M\, X_M\equiv F^{\hat{\Lambda}}\,E_{\hat{\Lambda}}{}^M\,X_M$\,.\, These are instances of the various terms occurring in the supersymmetry variation $\delta\mathscr{L}_{\text{gauged}}^{(0)}$. Just as (\ref{RSvar}) and (\ref{lambdatra}), these terms are proportional to an $H$-tensor defined as follows%
\footnote{ in the formulas below we use the coset representative in which the first index (acted on by $G$) is in the generic symplectic frame defined by the matrix $E$ and which is then related to the same matrix in the electric frame (labeled by hatted indices) as follows: \begin{equation} \textsl{L}(\phi)_{\hat{M}}{}^{\underline{N}}~=~E_{\hat{M}}{}^P\,\textsl{L}(\phi)_{P}{}^{\underline{N}} \quad\Rightarrow\quad \mathcal{M}(\phi)_{\hat{M}\hat{N}}=E_{\hat{M}}{}^PE_{\hat{N}}{}^Q\mathcal{M}(\phi)_{PQ}\,, \end{equation} the last equation being (\ref{MEtra}) }: \begin{align} \mathbb{T}(\Theta,\phi)_{\underline{M}}&~\equiv~ \frac{1}{2}\,\mathbb{L}(\phi)^{-1}{}_{\underline{M}}{}^N\,\textsl{L}(\phi)^{-1}X_N\,\textsl{L}(\phi)\= \frac{1}{2}\,\mathbb{L}(\phi)^{-1}{}_{\underline{M}}{}^N\,\Theta_N{}^\beta\,\textsl{L}(\phi)_\beta{}^\alpha\,t_\alpha\=\nonumber\\ &\=\mathbb{T}(\Theta,\phi)_{\underline{M}}{}^\alpha\,t_\alpha \,,\label{TT} \end{align} where \begin{equation} \mathbb{T}(\Theta,\phi)_{\underline{M}}{}^\alpha\equiv \frac{1}{2}\,\mathbb{L}(\phi)^{-1}{}_{\underline{M}}{}^N\,\Theta_N{}^\beta \textsl{L}(\phi)_\beta{}^\alpha= \frac{1}{2}\,(\textsl{L}^{-1}(\phi)\star \Theta)_{\underline{M}}{}^\alpha\,, \end{equation} where $\star$ denotes the action of $\textsl{L}^{-1}$ as an element of $G$ on $\Theta_M{}^\alpha$ in the corresponding $\mathscr{R}_\Theta$-representation. 
The tensor $\mathbb{T}(\phi,\,\Theta)=\frac{1}{2}\,\textsl{L}^{-1}(\phi)\star \Theta$ is called the \emph{$\mathbb{T}$-tensor} and was first introduced in \cite{deWit:1982ig}.\par If $\Theta$ and $\phi$ are simultaneously transformed under $G$, the $\mathbb{T}$-tensor transforms under the corresponding $H$-compensator: \begin{align} \forall {\bf g}\in G\;:\quad &\mathbb{T}({\bf g}\star\phi,\,{\bf g}\star\Theta)=\frac{1}{2}\, \textsl{L}^{-1}({\bf g}\star\phi)\star ({\bf g}\star\Theta)=\nonumber\\ &=\frac{1}{2}\,(h({\bf g},\phi)\star \textsl{L}^{-1}(\phi)\star {\bf g}^{-1})\star ({\bf g}\star\Theta)=h({\bf g},\phi)\star\mathbb{T}(\phi,\,\Theta)\,. \end{align} The quantity $\mathbb{T}$ thus naturally belongs to a representation of the group $H$ and is an example of the \emph{composite fields} discussed at the end of Sect.\ \ref{fsector}.\par If, on the other hand, we fix $\phi$ and only transform $\Theta$, $\mathbb{T}$ transforms in the same $G$-representation $\mathscr{R}_\Theta$ as $\Theta$, since $\mathbb{T}$ is defined (aside from the factor $1/2$) by acting on the embedding tensor with the $G$-element $\mathbb{L}^{-1}$. As a consequence of this, $\mathbb{T}$ satisfies the same constraints (\ref{linear2}), (\ref{quadratic1}) and (\ref{quadratic2}) as $\Theta$: \begin{align} \mathbb{T}_{\underline{NM}}{}^{\underline{N}}\=\mathbb{T}_{(\underline{MNP})}\=0\,,\nonumber\\ \mathbb{C}^{\underline{MN}}\,\mathbb{T}_{\underline{M}}{}^\alpha\,\mathbb{T}_{\underline{N}}{}^\beta\=0\,,\nonumber\\ [\mathbb{T}_{\underline{M}},\,\mathbb{T}_{\underline{N}}]+\mathbb{T}_{\underline{MN}}{}^{\underline{P}}\, \mathbb{T}_{\underline{P}}\=0\,,\label{Tids} \end{align} where we have defined $\mathbb{T}_{\underline{MN}}{}^{\underline{P}}\equiv \mathbb{T}_{\underline{M}}{}^\alpha\,t_{\alpha \underline{N}}{}^{\underline{P}}$. Equations (\ref{Tids}) were originally derived within maximal supergravity in \cite{deWit:1982ig}, and dubbed \emph{$\mathbb{T}$-identities}%
\footnote{ recall that in maximal supergravity the locality constraint follows from the linear and the closure ones }.\par Notice that, using eqs.\ (\ref{wPproj}) and (\ref{gaugedPW2}), we can rewrite the $\mathbb{T}$-tensor in the following form: \begin{equation} \mathbb{T}_{\underline{M}}\=\frac{1}{2}\,\mathbb{L}^{-1}{}_{\underline{M}}{}^N\,\Theta_N{}^\alpha\left(k_{\alpha}^s\,V_s{}^{\underline{s}}\,K_{{\underline{s}}}-\frac{1}{2}\,\mathscr{P}_{\alpha}^{{\bf a}}\,J_{{\bf a}}-\frac{1}{2}\,\mathscr{P}_{\alpha}^{{\bf m}}\,J_{{\bf m}}\right)\,, \label{Tgen} \end{equation} which can be extended to $\mathcal{N}=2$ theories with non-homogeneous scalar manifolds, see the discussion at the end of this section. \par\smallskip To cancel the supersymmetry variations of $\mathscr{L}_{\text{gauged}}^{(0)}$ and to construct a gauged Lagrangian $\mathscr{L}_{\text{gauged}}$ preserving the original supersymmetries, one can apply the general Noether method (see \cite{VanNieuwenhuizen:1981ae} for a general review), which consists in adding new terms to $\mathscr{L}_{\text{gauged}}^{(0)}$ and to the supersymmetry transformation laws, iteratively in the gauge coupling constant. In our case the procedure converges by adding terms of order one ($\Delta\mathscr{L}_{\text{gauged}}^{(1)}$) and two ($\Delta\mathscr{L}_{\text{gauged}}^{(2)}$) in $g$, so that \begin{equation} \mathscr{L}_{\text{gauged}}=\mathscr{L}_{\text{gauged}}^{(0)}+\Delta\mathscr{L}_{\text{gauged}}^{(1)}+\Delta\mathscr{L}_{\text{gauged}}^{(2)}\,. 
\end{equation} The additional $O(g)$-terms are of \emph{Yukawa type} and have the general form: \begin{equation} e^{-1}\Delta\mathscr{L}_{\text{gauged}}^{(1)}=g\left(2\bar{\psi}^A_\mu\;\gamma^{\mu\nu}\;\psi_\nu^B\;\mathbb{S}_{AB} ~+~i\,\bar{\lambda}^{\mathcal{I}}\;\gamma^\mu\;\psi_{\mu\,A}\;\mathbb{N}_{\mathcal{I}}{}^A ~+~\bar{\lambda}^{\mathcal{I}}\,\lambda^{\mathcal{J}}\;\mathbb{M}_{\mathcal{IJ}}\right) ~+~\text{h.c.} \;,\label{fmassterms} \end{equation} characterized by the scalar-dependent matrices $\mathbb{S}_{AB}$ and $\mathbb{N}_{\mathcal{I}}{}^{A}$, called \emph{fermion shift matrices}, and by a matrix $\mathbb{M}_{\mathcal{IJ}}$ which can be rewritten in terms of the mixed mass tensor $\mathbb{N}_{\mathcal{I}}{}^{A}$ (see the subsequent sections).\par The $O(g^2)$-terms consist of a scalar potential: \begin{equation} e^{-1}\Delta\mathscr{L}_{\text{gauged}}^{(2)}=-g^2\,V(\phi) \,.\label{spot} \end{equation} At the same time the fermionic supersymmetry transformations need to be suitably modified. To this end, we shall \emph{add order-$g$ terms to the fermion supersymmetry transformation rules} of the gravitino ($\psi_{\mu A}$) and of the other fermions ($\lambda^{\mathcal{I}}$) \begin{align} \delta_\epsilon\psi_{\mu A}&\=\mathcal{D}_\mu\epsilon_A +i\,g\;\mathbb{S}_{AB}\;\gamma_\mu\,\epsilon^B+\dotsc\,,\nonumber\\ \delta_\epsilon\lambda_{\mathcal{I}}&\=g\,\mathbb{N}_{\mathcal{I}}{}^{A}\,\epsilon_A+\dotsc \label{fermshifts} \end{align} depending on the same matrices $\mathbb{S}_{AB},\,\mathbb{N}_{\mathcal{I}}{}^{A}$ entering the mass terms. The fermion shift-matrices are composite fields belonging to appropriate representations $\mathscr{R}_S,\,\mathscr{R}_N$ of the group $H$, such that (\ref{fmassterms}) is $H$-invariant.\par These additional terms in the Lagrangian and in the supersymmetry transformation laws are enough to cancel the original $O(g)$ variations in $\delta\mathscr{L}_{\text{gauged}}^{(0)}$ --- like (\ref{RSvar}) and (\ref{lambdatra}), together with the new $O(g)$ terms depending on $\mathbb{S}$ and $\mathbb{N}$ in the supersymmetry variation of $\mathscr{L}_{\text{gauged}}^{(0)}$ --- provided the shift-tensors $\mathbb{S}_{AB},\,\mathbb{N}_{\mathcal{I}}{}^{A}$ are identified with suitable $H$-covariant components of the $\mathbb{T}$-tensor: \begin{equation} \mathscr{R}_\Theta\;\stackrel{H}{\longrightarrow}\;\mathscr{R}_N+\mathscr{R}_S+\mathscr{R}_{{\tiny \mbox{other}}}\,, \end{equation} and provided the additional $H$-representations $\mathscr{R}_{{\tiny \mbox{other}}}$ in the $\mathbb{T}$-tensor do not enter the supersymmetry variations of the Lagrangian. This can be formulated as a $G$-covariant restriction on the representation $\mathscr{R}_\Theta$ of the $\mathbb{T}$-tensor or, equivalently, of the embedding tensor, which can be shown to be nothing more than the representation constraint (\ref{linear2}) discussed earlier.\par The identification with components of the $\mathbb{T}$-tensor defines the expression of the fermion shift-tensors as $H$-covariant composite fields in terms of the embedding tensor and the scalar fields: \begin{equation} \mathbb{S}_{AB}=\mathbb{S}_{AB}(\phi,\Theta)=\left.\mathbb{T}(\phi,\Theta)\right\vert_{\mathscr{R}_S}\,; \quad\; \mathbb{N}_{\mathcal{I}}{}^A=\mathbb{N}_{\mathcal{I}}{}^A(\phi,\Theta)=\left.\mathbb{T}(\phi,\Theta)\right\vert_{\mathscr{R}_N}\,. 
\end{equation} Finally, in order to cancel the $O(g^2)$-contributions resulting from the variations (\ref{fermshifts}) in (\ref{fmassterms}), we need to add an \emph{order-$g^2$ scalar potential} $V(\phi)$ whose expression is totally determined by supersymmetry as a bilinear in the shift matrices by the condition \begin{equation} \delta_B{}^A\,V(\phi)\;=\; g^2\,\left(\mathbb{N}_{\mathcal{I}}{}^{A}\,\mathbb{N}^{\mathcal{I}}{}_{B}-12\;\mathbb{S}^{AC}\,\mathbb{S}_{BC}\right)\,,\label{WID} \end{equation} where we have defined \,$\mathbb{N}^{\mathcal{I}}{}_{A}\equiv (\mathbb{N}_{\mathcal{I}}{}^{A})^*$ \,and\, $\mathbb{S}^{AB}\equiv (\mathbb{S}_{AB})^*$. The above condition is called the \emph{potential Ward identity} \cite{Ferrara:1985gj,Cecotti:1984wn} (for a comprehensive discussion of the supersymmetry constraints on the fermion shifts see \cite{D'Auria:2001kv}). This identity defines the scalar potential as a quadratic function of the embedding tensor and a non-linear function of the scalar fields. As a constraint on the fermion shifts, once these have been identified with components of the $\mathbb{T}$-tensor, it follows from the $\mathbb{T}$-identities (\ref{Tids}) or, equivalently, from the quadratic constraints (\ref{quadratic1}), (\ref{quadratic2}) on $\Theta$. The derivation of the quadratic supersymmetry constraints on the fermion shifts in maximal supergravity from algebraic (i.e.\ scalar-field independent) constraints on the embedding tensor was originally accomplished in \cite{Cordaro:1998tx}, though in a specific symplectic frame, and in the maximal $D=3$ theory in \cite{Nicolai:2000sc}. In \cite{deWit:2002vt} the four-dimensional result was extended to a generic symplectic frame of the $\mathcal{N}=8$ model, i.e.\ using the $G$-covariant constraints (\ref{linear2}), (\ref{quadratic1}), (\ref{quadratic2}) on the embedding tensor%
\footnote{ in a generic gauged model, supersymmetry further requires the fermion shifts to be related by differential ``gradient flow'' relations \cite{D'Auria:2001kv}, which can be shown to follow from the identification of the shifts with components of the $\mathbb{T}$-tensor and the geometry of the scalar manifold }.\par\smallskip Let us comment on the case of $\mathcal{N}=2$ theories with a non-homogeneous scalar manifold (\ref{SKQK}). In this case we cannot define a coset representative. However, as mentioned earlier, one can still define a symplectic matrix $\mathbb{L}^M{}_{\underline{N}}$ depending on the complex scalar fields in the vector multiplets (which no longer has the interpretation of a coset representative). We can then define the $\mathbb{T}$-tensor in these theories as in (\ref{Tgen}), where $\{K_{{\underline{s}}}\}$ should be understood as a basis of the tangent space at the origin (and not as isometry generators), while $\{J_I\}=\{J_{{\bf a}},\,J_{{\bf m}}\}$ are holonomy group generators%
\footnote{ the $H_{\rm R}={\rm U}(2)$-generators $\{J_{{\bf a}}\}$ naturally split into a ${\rm U}(1)$-generator $J_0$ of the K\"ahler transformations on $\Ms_{\textsc{sk}}$ and ${\rm SU}(2)$-generators $J_x$ ($x=1,2,3$) in the holonomy group of the quaternionic K\"ahler manifold $\mathscr{M}_{\textsc{qk}}$ }. 
Recall that $\{\mathscr{P}_{\alpha}^{{\bf a}},\,\mathscr{P}_{\alpha}^{{\bf m}}\}$ enter the definition of the gauged composite connection (\ref{gaugedPW2}) on the scalar manifold and, as mentioned earlier, are related to the Killing vectors by general properties of the special K\"ahler and quaternionic K\"ahler geometries \cite{Andrianopoli:1996cm}.\par\smallskip It is a characteristic of supergravity theories that -- in contrast to globally supersymmetric ones -- by virtue of the negative contribution due to the gravitino shift-matrix, the scalar potential is in general not positive definite, but may, in particular, feature AdS vacua. These are maximally symmetric solutions whose negative cosmological constant is given by the value of the potential at the corresponding extremum:\, $\Lambda=V_0<0$.\; Such vacua are interesting in the light of the AdS/CFT holography conjecture \cite{Maldacena:1997re}, according to which stable AdS solutions describe conformal critical points of a suitable gauge theory defined on the boundary of the space. In this perspective, domain wall solutions to the gauged supergravity interpolating between AdS critical points of the potential describe renormalization group (RG) flows (from an ultra-violet to an infra-red fixed point) of the dual gauge theory and give important insights into its non-perturbative properties. The spatial evolution of such holographic flows is determined by the scalar potential $V(\phi)$ of the gauged theory.\par In some cases the effective scalar potential $V(\phi)$, at the classical level, is non-negative and defines vacua with vanishing cosmological constant in which supersymmetry is spontaneously broken and part of the moduli are fixed. Models of this type are generalizations of the so-called ``no-scale'' models \cite{Cremmer:1983bf,Ellis:1984bm,Barbieri:1985wq} which were the subject of intense study during the eighties. \subsection{Dualities and flux compactifications}\label{dfcomp} Let us summarize what we have learned so far. \begin{enumerate}[$\circ$,itemsep=1ex] \item{The most general local internal symmetry group $G_g$ which can be introduced in an extended supergravity is defined by an embedding tensor $\Theta$, covariant with respect to the on-shell global symmetry group $G$ of the ungauged model and defining the embedding of $G_g$ inside $G$. Since a scalar potential $V(\phi)$ can only be introduced through the gauging procedure, $\Theta$ also defines the most general choice for \,$V=V(\phi,\Theta)$. } \item{Consistency of the gauging at the level of the bosonic action requires $\Theta$ to satisfy a number of (linear and quadratic) $G$-covariant constraints. 
The latter, besides completely determining the gauged bosonic action, also allow for its consistent (unique) supersymmetric extension.} \item{Once we find a solution $\Theta_M{}^\alpha$ to these algebraic constraints, a suitable symplectic matrix $E$, which exists by virtue of (\ref{quadratic1}), will define the corresponding electric frame, in which the magnetic components of $\Theta$ vanish.} \end{enumerate} Although we have freed our choice of the gauge group from the original symplectic frame, the resulting gauged theory is still defined in an electric frame and thus depends on the matrix $E$: whatever solution $\Theta$ to the constraints is chosen for the gauging, the kinetic terms of the gauged Lagrangian are always written in terms of the \emph{electric} vector fields $A^{\hat{\Lambda}}_\mu$ only, namely of the vectors effectively involved in the minimal couplings, see eq.\ (\ref{syminvmc}). We shall discuss in the next section a more general formulation of the gauging which no longer depends on the matrix $E$.\par \paragraph{Dual gauged supergravities.} All the deformations of the ungauged model required by the gauging procedure depend on $\Theta$ in a manifestly $G$-covariant way. This means that, if we transform all the fields $\Phi$ (bosons and fermions) of the model under $G$ (the fermions transforming under the corresponding compensating transformations in $H$) and at the same time transform $\Theta$ and the matrix $E$, the field equations and Bianchi identities -- collectively denoted by $\mathscr{E}(E,\,\Phi,\,\Theta)=0$ -- are left invariant: \begin{equation} \forall g\in G\;:\;\; \mathscr{E}(E,\,\Phi,\,\Theta)=0 \;\;\Leftrightarrow\;\; \mathscr{E}(E',\,g\star\Phi,\,g\star\Theta)=0 \quad\;\; (\text{with } \, E'=E\,\mathscr{R}_{v}[g]^T)\,. \end{equation} Since the embedding tensor $\Theta$ is a \emph{spurionic}, namely non-dynamical, object, the above on-shell invariance should not be regarded as a symmetry of a single theory, but rather as an equivalence (or proper duality) between two different theories, one defined by $\Theta$ and the other by $g\star \Theta$.\, Gauged supergravities are therefore classified into \emph{orbits} with respect to the action of $G$ (or better $G(\mathbb{Z})$) on $\Theta$. This property has an important bearing on the study of flux compactifications mentioned in the Introduction. Indeed, in all instances of flux compactifications, the internal fluxes manifest themselves in the lower-dimensional effective gauged supergravity as components of the embedding tensor defining the gauging \cite{Angelantonj:2003rq,D'Auria:2003jk,deWit:2003hq}: \begin{equation} \Theta= \mbox{Internal Fluxes}\,. \end{equation} This allows us to formulate a precise correspondence between general fluxes (form, geometric and non-geometric) and the gauging of the resulting supergravity. Moreover, using this identification, the quadratic constraints (\ref{quadratic1}), (\ref{quadratic2}) precisely reproduce the consistency conditions on the internal fluxes deriving from the Bianchi identities and field equations of the higher-dimensional theory such as, in the presence of RR fluxes, the tadpole cancellation condition \cite{Grana:2005jc,Angelantonj:2003rq,deWit:2003hq}. \par Consider the limit in which the lower-dimensional gauged theory provides a reliable description of the low-energy string or M-theory dynamics on a flux background. 
This limit is defined by the condition that the flux-induced masses in the effective action be much smaller than the scale of the Kaluza-Klein masses (of order $1/R$, where $R$ is the size of the internal manifold)%
\footnote{ for string theory compactifications we should also require this latter scale to be negligible compared to the mass-scale of the string excitations (order $1/\sqrt{\alpha'}$) }: \begin{equation} \mbox{Flux-induced masses}\,\,\ll\,\,\frac{1}{R}\,.\label{sugracond} \end{equation} In this case, fields and fluxes in the lower-dimensional supergravity arrange themselves into representations of the characteristic symmetry group $G_{int}$ which the internal manifold would have in the absence of fluxes. In the case of compactifications on $T^n$, this characteristic group is ${\rm GL}(n,\,\mathbb{R})$, acting transitively on the internal metric moduli.\par In general, in the absence of fluxes, $G_{int}$ is a global symmetry group of the action: $G_{int}\subset G_{el}$.\; By branching $\mathscr{R}_\Theta$ with respect to $G_{int}$, we can identify within $\Theta$ the components corresponding to the various internal fluxes. The effect of any such background quantities in the compactification is reproduced by simply switching on the corresponding components of $\Theta$. The gauging procedure does the rest, and the resulting gauged model is thus uniquely determined. Since, as mentioned at the end of Sect.\ \ref{gsg}, a suitable subgroup $G(\mathbb{Z})$ of $G$ was conjectured to encode all known string/M-theory dualities, the embedding tensor formulation of the gauging procedure provides an ideal theoretical laboratory in which to systematically study the effects of these dualities on fluxes. Some elements of $G(\mathbb{Z})$ will map gauged supergravity descriptions of known compactifications into one another, see Fig.\ \ref{fig1}.\par \begin{figure}[H] \centerline{\includegraphics[width=0.6\textwidth]{Fig1.pdf}} \caption{\scriptsize Dualities between known flux compactifications (``GS'' stands for ``gauged supergravity'').}\label{fig1} \end{figure} \noindent Other elements of $G(\mathbb{Z})$ will map gauged supergravities, originating from known compactifications, into theories whose string or M-theory origin is unknown, see Fig.\ \ref{fig2}. \begin{figure}[H] \centerline{\includegraphics[width=0.6\textwidth]{Fig2.pdf}} \caption{\scriptsize Dualities connecting known flux compactifications to unknown ones.} \label{fig2} \end{figure} \noindent In this case we can use the duality between the corresponding low-energy descriptions to make sense of new compactifications as ``dual'' to known ones.\par The so-called \emph{non-geometric} fluxes naturally fit in the above description as dual to certain compactifications with NS-NS H-flux. If we consider superstring theory compactified to four dimensions on a six-torus $T^6$ without fluxes, the resulting (classical) ungauged supergravity features a characteristic ${\rm O}(6,6)$ global symmetry group, which contains the T-duality group ${\rm O}(6,6;\mathbb{Z})$ and which acts transitively on the moduli originating from the metric and Kalb-Ramond $B$-field in ten dimensions. 
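As a quick counting check of this statement, the moduli descending from the ten-dimensional metric and Kalb-Ramond field on $T^6$, namely the constant components $g_{\alpha\beta}$ and $B_{\alpha\beta}$ ($\alpha,\beta=1,\dots,6$), indeed fill the corresponding coset space:
\begin{equation}
\underbrace{21}_{g_{\alpha\beta}}+\underbrace{15}_{B_{\alpha\beta}}=36=66-2\times 15=\mathrm{dim}\,\frac{{\rm O}(6,6)}{{\rm O}(6)\times {\rm O}(6)}\,.
\end{equation}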
The $G$-representation $\mathscr{R}_\Theta$ of the embedding tensor, defining the most general gauging, contains the representation ${\bf 220}$ of ${\rm O}(6,6)$ \begin{equation} \mathscr{R}_\Theta\,\stackrel{{\rm O}(6,6)}{\longrightarrow}\; {\bf 220}~+~\dots \end{equation} which in turn branches with respect to the characteristic group $G_{int}={\rm GL}(6,\mathbb{R})$ of the torus as follows: \begin{equation} {\bf 220}\;\stackrel{{\rm GL}(6,\mathbb{R})}{\longrightarrow}\; {\bf 20}_{-3}~+~({\bf 84+6})_{-1}~+~({\bf 84'+6'})_{+1}~+~{\bf 20}_{+3}\,.\label{220} \end{equation} The component ${\bf 20}_{-3}$ can be identified with the H-flux $H_{\alpha\beta\gamma}$ (that is the flux of the field strength of the Kalb-Ramond field $B$) along a 3-cycle of the torus. Switching on only the ${\bf 20}_{-3}$ representation in $\Theta$, the gauging procedure correctly reproduces the couplings originating from a toroidal dimensional reduction with H-flux. What (\ref{220}) tells us is that the action of the T-duality group ${\rm O}(6,6;\mathbb{Z})$ will generate, from an H-flux in the ${\bf 20}_{-3}$, all the other representations: \begin{align} ({\bf 84+6})_{-1} \;&:\;\; \tau_{\alpha\beta}{}^\gamma\,,\nonumber\\ ({\bf 84'+6'})_{+1} \;&:\;\; Q_{\alpha}{}^{\beta\gamma}\,,\nonumber\\ {\bf 20}_{+3} \;&:\;\; R^{\alpha\beta\gamma}\,. \end{align} The first tensor $\tau_{\alpha\beta}{}^\gamma$ is an instance of \emph{geometric flux}, being a background quantity which characterizes the geometry of the internal manifold. It describes a compactification on a space which is no longer a torus, but is locally described by a group manifold \cite{Scherk:1979zr} with structure constants $\tau_{\alpha\beta}{}^\gamma$. The constraint (\ref{quadratic2}) indeed implies for $\tau_{\alpha\beta}{}^\gamma$ the Jacobi identity: $\tau_{[\alpha\beta}{}^\gamma\tau_{\sigma]\gamma}{}^{\delta}=0$.\; This new internal manifold is called a \emph{twisted torus} \cite{Kaloper:1999yr} (see also \cite{Grana:2005jc} and references therein).\par The T-duality picture is completed by the remaining two representations, described by the tensors $Q_{\alpha}{}^{\beta\gamma},\,R^{\alpha\beta\gamma}$. Their interpretation as originating from a string theory compactification is more problematic, since in their presence the internal space cannot be given a global or even local description as a differentiable manifold. For this reason they are called \emph{non-geometric} fluxes \cite{Mathai:2004qq,Hull:2004in,Shelton:2005cf} (see also \cite{Grana:2005jc} and references therein). The $H,\,\tau,\,Q,\,R$-fluxes can all be given a unified description as quantities defining the geometry of more general internal manifolds, having the T-duality group as structure group. 
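The counting behind (\ref{220}) is easily verified: the ${\bf 220}$ is the three-fold antisymmetric tensor representation of ${\rm O}(6,6)$, of dimension $\binom{12}{3}=220$, and the flux components listed above match it,
\begin{equation}
\underbrace{20}_{H_{\alpha\beta\gamma}}~+~\underbrace{84+6}_{\tau_{\alpha\beta}{}^{\gamma}}~+~\underbrace{84'+6'}_{Q_{\alpha}{}^{\beta\gamma}}~+~\underbrace{20}_{R^{\alpha\beta\gamma}}~=~220\,,
\end{equation}
since $\tau_{\alpha\beta}{}^\gamma$ and $Q_{\alpha}{}^{\beta\gamma}$ each span $15\times 6=90$ components.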
Such manifolds are defined in the context of \emph{generalized geometry} \cite{Hitchin:2010qz,Gualtieri:2003dx} (see also \cite{Grana:2005jc} and references therein), by doubling the tangent space to the internal manifold in order to accommodate a representation of ${\rm O}(6,6)$ and introducing on it additional geometric structures, or of \emph{double geometry}/\emph{double field theory} \cite{Hull:2007jy,Dabholkar:2005ve,Hull:2009mi,Hohm:2010pp}, in which the internal manifold itself is enlarged and parametrized by twice as many coordinates as the original one.\par Finally there are gauged supergravities which are not $G(\mathbb{Z})$-dual to models with a known string or M-theory origin, see Fig.\ \ref{fig3}.\; \begin{figure}[!h] \centerline{\includegraphics[width=0.6\textwidth]{Fig3.pdf}} \caption{\scriptsize Intrinsically non-geometric theories.}\label{fig3} \end{figure} \noindent Finding an ultra-violet completion of these theories, which are sometimes called \emph{intrinsically non-geometric}, in the context of string/M-theory is an open challenge of theoretical high-energy physics. Progress in this direction has been achieved in the context of extended generalized geometry \cite{Hull:2007zu,Pacheco:2008ps} or exceptional field theory \cite{Hohm:2013pua,Hohm:2013uia,Hohm:2014qga}.\par If the hierarchy condition (\ref{sugracond}) is not met, the gauged supergravity cannot be interpreted as a description of the low-energy string/M-theory dynamics, but only as a \emph{consistent truncation} of it, as in the case of the spontaneous compactification of $D=11$ supergravity on $\text{AdS}_4\times S^7$. In this case, the back-reaction of the fluxes on the internal geometry will manifest itself in extra geometric fluxes, to be identified with additional components of $\Theta$.\par \paragraph{Vacua and dualities.} The scalar potential \begin{equation} V(\phi,\Theta)\;=\;\frac{g^2}{\mathcal{N}} \,\left(\mathbb{N}_{\mathcal{I}}{}^{A}\,\mathbb{N}^{\mathcal{I}}{}_{A}-12\;\mathbb{S}^{AB}\,\mathbb{S}_{AB}\right)\,,\label{Pot} \end{equation} being expressed as an $H$-invariant combination of composite fields (the fermion shifts), is invariant under the simultaneous action of $G$ on $\Theta$ and $\phi^s$: \begin{equation} \forall g\in G \;:\quad V(g\star\phi,\,g\star\Theta)=V(\phi,\,\Theta)\,. \end{equation} This means that, if $V(\phi,\,\Theta)$ has an extremum at $\phi_0$, \begin{equation} \left.\frac{\partial}{\partial\phi^s}V(\phi,\,\Theta)\right\vert_{\phi_0}=0\,, \end{equation} then $V(\phi,g\star\Theta)$ has an extremum at $\phi'_0=g\star \phi_0$ with the same properties (value of the potential at the extremum and its derivatives): \begin{equation} \left.\frac{\partial}{\partial\phi^s}V(\phi,g\star\Theta)\right\vert_{g\star\phi_0}=0\,, \qquad g\in G\;. \end{equation} If the scalar manifold is homogeneous, we can map any point $\phi_0$ to the origin $\mathcal{O}$, where all scalars vanish, by the inverse of the coset representative $\textsl{L}(\phi_0)^{-1}\in G$. 
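Explicitly, choosing $g=\textsl{L}(\phi_0)^{-1}$ in the invariance property above, and using the fact that this element maps $\phi_0$ to the origin, $\textsl{L}(\phi_0)^{-1}\star\phi_0=\mathcal{O}$, one finds \begin{equation} V\big(\mathcal{O},\,\textsl{L}(\phi_0)^{-1}\star\Theta\big)\,=\,V\big(\textsl{L}(\phi_0)^{-1}\star\phi_0,\,\textsl{L}(\phi_0)^{-1}\star\Theta\big)\,=\,V(\phi_0,\,\Theta)\,, \end{equation} and the same holds for the derivatives of the potential at the extremum. 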
We can then map a generic vacuum $\phi_0$ of a given theory (defined by an embedding tensor $\Theta$) to the origin of the theory defined by \,$\Theta'=\textsl{L}(\phi_0)^{-1}\star \Theta$.\; As a consequence of this, when looking for vacua with given properties (residual (super)symmetry, cosmological constant, mass spectrum etc.), with no loss of generality we can compute all quantities defining the gauged theory -- fermion shifts and mass matrices -- at the origin: \begin{equation} \mathbb{N}(\mathcal{O},\,\Theta)\,,\;\;\mathbb{S}(\mathcal{O},\,\Theta)\,,\;\;\mathbb{M}(\mathcal{O},\,\Theta)\,, \end{equation} and translate the properties of the vacuum into conditions on $\Theta$. In this way, we can search for vacua by scanning through all possible gaugings \cite{Dibitetto:2011gm,DallAgata:2011aa,inverso2010sitter}. \subsection{Gauging \,$\mathcal{N}=8$\,,\; $D=4$} \paragraph{Ungauged action.} The four-dimensional maximal supergravity is characterized by having $\mathcal{N}=8$ supersymmetry (that is $32$ supercharges), which is the maximal amount of supersymmetry allowed by a consistent theory of gravity.\par\smallskip We shall restrict ourselves to the (ungauged) $\mathcal{N}=8$ theory with no antisymmetric tensor fields -- which would eventually be dualized to scalars. The theory, first constructed in \cite{Cremmer:1978ds,Cremmer:1979up}, describes a single massless graviton supermultiplet consisting of the graviton $g_{\mu\nu}$, $8$ spin-$3/2$ gravitini $\psi^A_\mu$ ($A=1,\dots, 8$) transforming in the fundamental representation of the R--symmetry group $\mathrm{SU}(8)$, $28$ vector fields $A^\Lambda_\mu$ (with $\Lambda=0,\dotsc,\,27$), $56$ spin-$1/2$ dilatini $\chi_{ABC}$ in the ${\bf 56}$ of $\mathrm{SU}(8)$ and $70$ real scalar fields $\phi^r$: \begin{align} \big[\;\; 1\;\times\;\underbracket[0.15pt][3pt]{g_{\mu\nu}}_{\mathclap{j=2}}\;,\quad 8\;\times\;\underbracket[0.15pt][3pt]{\psi^A_\mu}_{\mathclap{j=\frac32}}\;,\quad 28\;\times\;\underbracket[0.15pt][3pt]{A^\Lambda_\mu}_{\mathclap{j=1}}\;,\quad 56\;\times\;\underbracket[0.15pt][3pt]{\chi_{ABC}}_{\mathclap{j=\frac12}}\;,\quad 70\;\times\;\underbracket[0.15pt][3pt]{\phi^r}_{\mathclap{j=0}} \;\;\;\big] \;.\label{N8} \end{align} The scalar fields are described by a non-linear $\sigma$-model on the Riemannian manifold $\mathscr{M}_{\rm scal}$, which in the $\mathcal{N}=8$ model has the form \begin{eqnarray} \mathscr{M}_{\rm scal}\=\frac{G}{H}\=\frac{\Exc}{\mathrm{SU}(8)}\,, \end{eqnarray} the isometry group being $G=\Exc$ and $H=\mathrm{SU}(8)$ being the R--symmetry group. The bosonic Lagrangian has the usual form (\ref{boslagr}). The global symmetry group of the maximal four-dimensional theory $G=\Exc$ has 133 generators $t_\alpha$. The (abelian) vector field strengths $F^\Lambda=dA^\Lambda$ and their magnetic duals $\mathpzc{G}_\Lambda$ together transform in the $\mathscr{R}_v={\bf 56}$ fundamental representation of the $\Exc$ duality group with generators $(t_\alpha)_M{}^N$, so that \begin{equation} \delta \mathbb{F}^M_{\mu\nu} ~= \left(\begin{matrix} \delta F^\Lambda_{\mu\nu} \cr \delta \mathpzc{G}_{\Lambda\,\mu\nu} \end{matrix}\right) =~ -\Lambda^\alpha\,(t_\alpha)_N{}^M\,\mathbb{F}^N_{\mu\nu}\;. 
\end{equation} \paragraph{Gauging.} According to our general discussion in Sect.\ \ref{gaugingsteps}, the most general gauge group $G_g$ which can be introduced in this theory is defined by an embedding tensor $\Theta_M{}^\alpha$ ({\footnotesize $M$}$~=1,\dots, 56$\, and \,$\alpha=1,\dots, 133$), which expresses the gauge generators $X_M$ as linear combinations of the global symmetry group generators $t_\alpha$ (\ref{Thdef}). The embedding tensor encodes all parameters (couplings and mass deformations) of the gauged theory. This object is a solution to the $G$-covariant constraints (\ref{linear2}), (\ref{quadratic1}), (\ref{quadratic2}).\par The embedding tensor formally belongs to the product \begin{equation} \Theta_M{}^\alpha ~\in~ \mathscr{R}_v\otimes\mathrm{adj}(G) \={\bf 56}\;\otimes\;{\bf 133} \= {\bf 56}\;\oplus\;{\bf 912}\;\oplus\;{\bf 6480}\;,\label{56133} \end{equation} the dimensions on the two sides matching: $56\times 133=7448=56+912+6480$. The linear constraint (\ref{linear2}) sets to zero all the representations in the above decomposition which are contained in the 3-fold symmetric product of the ${\bf 56}$ representation: \begin{equation} X_{(MNP)}~\in~ ({\bf 56}\;\otimes\;{\bf 56}\;\otimes\;{\bf 56})_{{\rm sym.}} \,\rightarrow\;{\bf 56}\;\oplus\;{\bf 6480}\;\oplus\;{\bf 24320}\,. \end{equation} The representation constraint therefore selects the ${\bf 912}$ as the representation $\mathscr{R}_\Theta$ of the embedding tensor% \footnote{ we can relax this constraint by extending this representation to include the ${\bf 56}$ in (\ref{56133}). Consistency would however require the gauging of the scaling symmetry of the theory (which is never an off-shell symmetry), also called \emph{trombone symmetry} \cite{LeDiffon:2008sh,LeDiffon:2011wt}. This leads to gauged theories which do not have an action. We shall not discuss these gaugings here }.\par The quadratic constraints pose further restrictions on the $\Exc$-orbits of the ${\bf 912}$ representation to which $\Theta_M{}^\alpha$ should belong. In particular the locality constraint implies that the embedding tensor can be rotated to an electric frame through a suitable symplectic matrix $E$, see eq.\ (\ref{elET}).\par Steps 1, 2 and 3 allow us to construct the bosonic gauged Lagrangian in this electric frame. We shall discuss in Sect.\ \ref{sec:4} a frame-independent formulation of the gauging procedure in which, for a given solution $\Theta$ to the constraints, we no longer need to switch to the corresponding electric frame.\par The complete supersymmetric gauged Lagrangian is then obtained by adding fermion mass terms, a scalar potential and additional terms in the fermion supersymmetry transformation rules, according to the prescription given in Step 4. All these deformations depend on the fermion shift matrices $\mathbb{S}_{AB},\,\mathbb{N}_\mathcal{I}{}^A$. In the maximal theory $\mathcal{I}=[ABC]$ labels the spin-$1/2$ fields $\chi_{ABC}$ and the two fermion shift-matrices are conventionally denoted by the symbols $A_1=(A_{AB}),\,A_2=(A^D{}_{ABC})$. The precise correspondence is% \footnote{ in the previous sections we have used, for the supergravity fields, notations which are different from those used in the literature of maximal supergravity (e.g.\ in \cite{deWit:2007mt}) in order to make contact with the literature of gauged $\mathcal{N}<8$ theories, in particular $\mathcal{N}=2$ ones \cite{Andrianopoli:1996cm}. 
Denoting by a hat the quantities in \cite{deWit:2007mt}, the correspondence between the two notations is: \begin{align} &\hat{\gamma}^\mu =i\gamma^\mu\,;\quad \hat{\gamma}_5 =\gamma_5\,,\nonumber\\ &\hat{\epsilon}_i=\frac{1}{\sqrt{2}}\,\epsilon^A\,;\quad \hat{\epsilon}^i=\frac{1}{\sqrt{2}}\,\epsilon_A\,;\quad (i=A)\,,\nonumber\\ &\hat{\psi}_{i\mu} =\sqrt{2}\,\psi^A_\mu\,;\quad \hat{\psi}^i_{\mu} =\sqrt{2}\,\psi_{A\,\mu}\,;\quad (i=A)\,,\nonumber\\ &\hat{\chi}_{ijk}=\chi^{ABC}\,;\quad \hat{\chi}^{ijk}=\chi_{ABC}\,;\quad ([ijk]=[ABC])\,,\nonumber\\ &\hat{A}_{ij}=(\hat{A}_{ij})^*=A^{AB}\,;\quad \hat{A}_i{}^{jkl}=(\hat{A}^i{}_{jkl})^*=A^A{}_{BCD}\,;\quad (i=A,\,j=B,\,k=C,\,l=D)\,,\nonumber\\ &\mathcal{V}^{\Lambda\,ij}= -\frac{i}{\sqrt{2}}\,\mathbb{L}^\Lambda{}_{AB}\,;\quad \mathcal{V}_{\Lambda}{}^{ij}= \frac{i}{\sqrt{2}}\,\mathbb{L}_{\Lambda AB}\,;\quad (i=A,\,j=B)\,, \end{align} where in the last line the $28\times 28$ blocks of $\mathcal{V}_M{}^{\underline{N}}$ have been put in correspondence with those of $\mathbb{L}^M{}_{\underline{N}}$. The factor $\sqrt{2}$ originates from a different convention for the contraction of antisymmetric couples of ${\rm SU}(8)$-indices:\; $\hat{V}_{ij}\hat{V}^{ij}=\frac{1}{2}\,V^{AB}\,V_{AB}$ }: \begin{equation} \mathbb{S}_{AB}=-\frac{1}{\sqrt{2}}\,A_{AB}\,;\quad \mathbb{N}_{ABC}{}^D=-\sqrt{2}\,A^D{}_{ABC}{}\,, \end{equation} where \begin{equation} A_{AB}=A_{BA}\,;\quad\; A_{ABC}{}^D=A_{[ABC]}{}^D\,;\quad\; A_{DBC}{}^D=0\,. \end{equation} The above properties identify the ${\rm SU}(8)$ representations of the two tensors: \begin{equation} A_{AB}\in {\bf 36}\,;\quad\; A_{ABC}{}^D\in {\bf 420}\,. \end{equation} The $\mathbb{T}$-tensor, defined in (\ref{TT}) as an $\Exc$-object, transforms in $\mathscr{R}_\Theta={\bf 912}$, while as an ${\rm SU}(8)$-tensor it belongs to the following sum of representations: \begin{equation} \mathbb{T}~\in~ {\bf 912}\; \stackrel{{\rm SU}(8)}{\longrightarrow}\;\, {\bf 36}\;\oplus\;\ov{{\bf 36}}\;\oplus\; {\bf 420}\;\oplus\;\ov{{\bf 420}}\;, \end{equation} which are precisely the representations of the fermion shift-matrices and of their conjugates: $A_{AB},\;A^{AB},\;A^A{}_{BCD},\;A_A{}^{BCD}$.\; This guarantees that the $O(g)$-terms in the supersymmetry variation of $\mathscr{L}_{{\rm gauged}}^{(0)}$, which depend on the $\mathbb{T}$-tensor, only contain ${\rm SU}(8)$-structures which can be canceled by the new terms containing the fermion shift-matrices. This shows that the linear condition $\Theta \in \mathscr{R}_\Theta$ is also required by supersymmetry.\par The same holds for the quadratic constraints, in particular for (\ref{quadratic2}), which implies the $\mathbb{T}$-identities and also the Ward identity (\ref{WID}) for the potential \cite{deWit:1982ig,deWit:2007mt}: \begin{equation} V(\phi)\,\delta^B_A\= \frac{g^2}{6}\,\mathbb{N}^{CDE}{}_A\mathbb{N}_{CDE}{}^B-12\,g^2\,\mathbb{S}_{AC}\mathbb{S}^{BC} \=\frac{g^2}{3}A^B{}_{CDE} A_A{}^{CDE}-6\,g^2\, A_{AC}\,A^{BC}\,, \end{equation} from which, taking the trace over the indices $A$, $B$ (recall $\delta^A_A=8$), we derive: \begin{equation} V(\phi)=g^2\,\left( \frac{1}{24}\,|A^B{}_{CDE}|^2-\frac{3}{4}\, |A_{AB}|^2\right)\,. 
\end{equation} The scalar potential can also be given in a manifestly $G$-invariant form \cite{deWit:2007mt}\,: \begin{equation} V(\phi)= -\frac{g^2}{672}\,\Big(X_{MN}{}^{R}\,X_{PQ}{}^{S}\,\mathcal{M}^{MP}\,\mathcal{M}^{NQ}\,\mathcal{M}_{RS} + 7\,X_{MN}{}^{Q}\,X_{PQ}{}^{N}\,\mathcal{M}^{MP}\Big)\;, \label{potentialN8} \end{equation} where $\mathcal{M}^{MN}$ is the inverse of the (negative definite) matrix $\mathcal{M}_{MN}$ defined in (\ref{M}) and, as usual, $X_{MN}{}^{R}$ describe the symplectic duality action of the generators $X_M$ in the $\mathscr{R}_{v*}$-representation:\, $X_{MN}{}^{R}\equiv \mathscr{R}_{v*}[X_M]_N{}^R$. \subsection{Brief account of old and new gaugings} As mentioned in Sect.\ \ref{gaugingsteps}, different symplectic frames (i.e.\ different ungauged Lagrangians) correspond to different choices for the viable gauge groups and may originate from different compactifications (see \cite{deWit:2002vt} for a study of the different symplectic frames of the ungauged maximal theory).\par The toroidal compactification of the eleven-dimensional theory performed in \cite{Cremmer:1978ds}, upon dualization of all form-fields to lower-order ones, yields an ungauged Lagrangian with global symmetry $G_{el}={\rm SL}(8,\mathbb{R})$. We shall refer to this symplectic frame as the ${\rm SL}(8,\mathbb{R})$-frame. The first gauging of the maximal theory was performed in this symplectic frame by choosing $G_g={\rm SO}(8)\subset {\rm SL}(8,\mathbb{R})$ \cite{deWit:1982ig}. The scalar potential features a maximally supersymmetric anti-de Sitter vacuum which corresponds \cite{deWit:1986iy} to the spontaneous compactification of eleven-dimensional supergravity on $\text{AdS}_4\times S^7$. The range of possible gaugings in the ${\rm SL}(8,\mathbb{R})$-frame was extended to include non-compact and non-semisimple groups $G_g={\rm CSO}(p,q,r)$ (with $p+q+r=8$) \cite{Hull:1984qz}. These were shown in \cite{Cordaro:1998tx} to exhaust all possible gaugings in this frame.\par The discovery of inequivalent Lagrangian formulations of the ungauged maximal theory broadened the choice of possible gauge groups. Flat gaugings in $D=4$, describing Scherk-Schwarz reductions of maximal $D=5$ supergravity \cite{Cremmer:1979uq} and yielding no-scale models, were first constructed in \cite{Andrianopoli:2002mf}. The corresponding symplectic frame is the one originating from direct dimensional reduction of the maximal five-dimensional theory on a circle and has a manifest off-shell symmetry which contains the global symmetry group of the parent model% \footnote{ see Table \ref{tab:T-tensor-repr} at the end of Sect.\ \ref{sec:4} } ${\rm E}_{6(6)}$: one has in fact $G_{el}={\rm O}(1,1)\times {\rm E}_{6(6)}$.\par\smallskip Exploiting the freedom in the initial choice of the symplectic frame, it was recently possible to discover a new class of gaugings generalizing the original ${\rm CSO}(p,q,r)$ ones \cite{Dall'Agata:2012sx,Dall'Agata:2012bb,Dall'Agata:2014ita}. These models are obtained by gauging, in a different frame, the same ${\rm CSO}(p,q,r)$ groups.\par Consider two inequivalent frames admitting $G_g={\rm CSO}(p,q,r)$ as gauge group, namely such that for each of them ${\rm CSO}(p,q,r)\subset G_{el}$. Let $\hat{\mathscr{R}}_v$ and ${\mathscr{R}}_v$ be the corresponding symplectic duality representations of $G$. We can safely consider one of them ($\hat{\mathscr{R}}_v$) as electric. 
The duality action of the gauge generators in the representations $\hat{\mathscr{R}}_{v*}$ and ${\mathscr{R}}_{v*}$ is described by two tensors $X_{\hat{M}\hat{N}}{}^{{\hat{P}}}$ and $X_{MN}{}^P$, respectively, related by a suitable matrix $E$ (\ref{XEhatX}): \begin{equation} X_{\hat{M}\hat{N}}{}^{{\hat{P}}}=E_{\hat{M}}{}^M\,E_{\hat{N}}{}^N\,(E^{-1})_P{}^{{\hat{P}}}\,X_{MN}{}^P\,. \end{equation} The matrices $\mathcal{M}(\phi)$ in the two frames are then related by (\ref{MEtra}). The two embedding tensors describe the same gauge group provided that $\{X_{M}\}$ and $\{E\,X_M\,E^{-1}\}$ define different bases of the same gauge algebra $\mathfrak{g}_g=\mathfrak{cso}(p,q,r)$ in the Lie algebra $\mathfrak{e}_{7(7)}$ of ${\rm E}_{7(7)}$. In other words, $E$ should belong to the \emph{normalizer} of $\mathfrak{cso}(p,q,r)$ in ${\rm Sp}(2n_v,\mathbb{R})$. At the same time the effect of $E$ should not be offset by local (vector and scalar field) redefinitions, see (\ref{generalE}). The duality action of $G_g$ in both $\hat{\mathscr{R}}_{v*}$ and ${\mathscr{R}}_{v*}$ is block-diagonal: \begin{equation} \hat{\mathscr{R}}_{v*}[G_g]={\mathscr{R}}_{v*}[G_g]=\left(\begin{matrix}G_g & \mathbf{0} \cr\mathbf{0} & G_g^{-T}\end{matrix}\right)\,. \end{equation} For semisimple gauge groups $G_g={\rm SO}(p,q)$ (with $p+q=8$), it was shown in \cite{Dall'Agata:2014ita} that the most general $E$ belongs to an ${\rm SL}(2,\mathbb{R})$-subgroup of ${\rm Sp}(56,\mathbb{R})$ and has the general form: \begin{equation} E=\left( \begin{matrix} a\,\Id & b\,\eta\cr c\,\eta & d\,\Id \end{matrix} \right) ~\in~ {\rm Sp}(56,\mathbb{R})\:;\quad\; ad-bc=1 \,,\label{Eimage} \end{equation} where $\eta_{\Lambda\Sigma}$ is the $\mathfrak{so}(p,q)$ Cartan-Killing metric, normalized so that $\eta^2=\Id$. The most general ${\rm SL}(2,\mathbb{R})$-matrix can be written, using the Iwasawa decomposition, as follows: \begin{equation} \left(\begin{matrix}a & b\cr c & d\end{matrix}\right)=\left(\begin{matrix}\lambda & 0\cr 0 & \frac{1}{\lambda}\end{matrix}\right)\left(\begin{matrix}1 & \vartheta \cr 0 & 1\end{matrix}\right)\left(\begin{matrix}\cos(\omega) & \sin(\omega) \cr -\sin(\omega) & \cos(\omega)\end{matrix}\right)\,. \end{equation} The leftmost block corresponds in $E$ to an unphysical rescaling of the vectors (in ${\rm GL}(28,\mathbb{R})$). The middle block realizes, in going from the unhatted frame to the hatted one, a constant shift in the generalized $\theta$-angle matrix $\mathcal{R}$:\, $\mathcal{R}\rightarrow \mathcal{R}+\vartheta\,\eta$. This can have effects at the quantum level, but does not affect the field equations \cite{Dall'Agata:2014ita}.\par The rightmost block has, on the other hand, an important bearing on the physics of the classical theory. Let $E(\omega)$ be the symplectic image (\ref{Eimage}) of this block only, and let ${\mathscr{R}}_{v}$ be the ${\rm SL}(8,\mathbb{R})$-frame, where the ${\rm CSO}(p,q,r)$ gaugings were originally constructed and in which the matrices $\mathbb{L}$ and $\mathcal{M}$ are given by well-known general formulas \cite{Cremmer:1978ds,deWit:1982ig}.\, For $\omega\neq 0$, this frame is no longer electric, but is related to the electric one by $E(\omega)$. 
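Explicitly, setting $a=d=\cos(\omega)$ and $b=-c=\sin(\omega)$ in (\ref{Eimage}), the symplectic image of the rotation block alone reads \begin{equation} E(\omega)=\left(\begin{matrix}\cos(\omega)\,\Id & \sin(\omega)\,\eta\cr -\sin(\omega)\,\eta & \cos(\omega)\,\Id\end{matrix}\right)\,, \end{equation} which is the form used in the relations below. 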
Using (\ref{elET}) we can write: \begin{equation} X_{\hat{\Lambda}}=\cos(\omega)\, X_\Lambda+ \sin(\omega)\,\eta_{\Lambda\Sigma}\,X^\Sigma\,; \quad\; 0=-\sin(\omega)\, \eta^{\Lambda\Sigma}\,X_\Sigma+ \cos(\omega)\,X^\Lambda\,, \end{equation} where $(\eta^{\Lambda\Sigma})\equiv \eta^{-1}=\eta$.\; The above relation is easily inverted: \begin{equation} X_\Lambda=\cos(\omega)\,X_{\hat{\Lambda}}\,;\quad\; X^\Lambda=\sin(\omega)\,\eta^{\Lambda\Sigma}X_{\hat{\Sigma}}\,. \end{equation} We can then write the symplectic-invariant connection (\ref{syminvmc}) in the following way: \begin{equation} \Omega_{g\,\mu}=A^M_\mu\,X_M=A^\Lambda_\mu\,X_\Lambda+A_{\Lambda\,\mu}\,X^\Lambda=\left(\cos(\omega)\,A^\Lambda_\mu+\sin(\omega)\,\eta^{\Lambda\Sigma}A_{\Sigma\,\mu}\right) X_{\hat{\Lambda}}=A^{\hat{\Lambda}}_\mu\,X_{\hat{\Lambda}}\,. \end{equation} In other words, the gauging defined by $X_M$ amounts to gauging, in the ${\rm SL}(8,\mathbb{R})$-frame, the same ${\rm SO}(p,q)$ generators through a linear combination of the electric $A^\Lambda_\mu$ and magnetic $A_{\Lambda\,\mu}$ vector fields. The true electric vectors are all and only those entering the gauge connection, that is $A^{\hat{\Lambda}}_\mu$, and they define the electric frame. We shall denote by $\Theta[\omega]$ the corresponding embedding tensor.\par The gauged model can be constructed either directly in the ${\rm SL}(8,\mathbb{R})$-frame, using the covariant formulation to be discussed in Sect.\ \ref{sec:4}, or in the electric frame, along the lines described in Sect.\ \ref{sec:3}. The range of values of $\omega$ is restricted by the discrete symmetries of the theory. One of these is parity (see Sect.\ \ref{gsg}), whose duality representation ${\bf P}$ in the ${\rm SL}(8,\mathbb{R})$-frame has the form (\ref{Pmatrix}) \cite{Ferrara:2013zga}. The reader can verify that its effect on the $\mathbb{T}$-tensor (\ref{TT}) is: \begin{align} \mathbb{T}(\Theta[\omega],\phi)_{\underline{M}}&\={\bf P}\star \mathbb{T}(\Theta[-\omega],\phi_p) \label{PTtras} \end{align} by using the properties \begin{equation} {\bf P}_{\hat{M}}{}^{\hat{N}}\,{\bf P}^{-1} X_{\hat{N}}{\bf P}=X_{\hat{M}}\,;\quad\; {\bf P}^{-1} E(\omega){\bf P}=E(-\omega)\,;\quad\; {\bf P}^{-1}\mathbb{L}(\phi){\bf P}=\mathbb{L}(\phi_p)\,, \end{equation} where $\phi_p$ denote the parity-transformed scalar fields. Eq.\ (\ref{PTtras}) shows that parity maps $\phi$ into $\phi_p$ and $\omega$ into $-\omega$. In other words, $\omega$ is a \emph{parity-odd} parameter. The overall ${\bf P}$ transformation acting on $\mathbb{T}$ in (\ref{PTtras}) is ineffective since, ${\bf P}$ being an ${\rm O}(2n_v)$-transformation, it cancels everywhere in the Lagrangian. Similarly, we can use other discrete global symmetries of the ungauged theory, which include the ${\rm SO}(8)$-triality transformations $S_3\subset {\rm E}_{7(7)}$ for the ${\rm SO}(8)$-gauging, to further restrict the range of values of $\omega$. One finds that \cite{Dall'Agata:2012bb,Dall'Agata:2014ita}: \begin{align} &\omega~\in~\left[0,\frac{\pi}{8}\right]\;,\quad \text{${\rm SO}(8)$ and $\mathrm{SO}(4,4)$ gaugings}\,,\nonumber\\ &\omega~\in~\left[0,\frac{\pi}{4}\right]\;,\quad \text{other non-compact ${\rm SO}(p,q)$ gaugings}\,. \end{align} These are called ``$\omega$-rotated'' ${\rm SO}(p,q)$-models, or simply ${\rm SO}(p,q)_{\omega}$-models. 
The ${\rm SO}(8)$ ones, in particular, came as a surprise, since they contradicted the common belief that the original de Wit-Nicolai ${\rm SO}(8)$-gauged model was unique.\par For the non-semisimple ${\rm CSO}(p,q,r)$-gaugings, the non-trivial matrix $E$ does not depend on continuous parameters but is fixed, thus yielding for each gauge group only one rotated model \cite{Dall'Agata:2012sx,Dall'Agata:2014ita}.\par Even more surprisingly, this new class of gauged theories features a broader range of vacua than the original models. In this sense the $\omega\rightarrow 0$ limit can be considered a singular one, in which some of the vacua move to the boundary of the moduli space at infinity and thus disappear.\par Consider for instance the ${\rm SO}(8)_{\omega}$-models. They all feature an $\text{AdS}_4$, $\mathcal{N}=8$ vacuum at the origin with the same cosmological constant and mass spectrum as the original ${\rm SO}(8)$ theory. The parameter $\omega$ manifests itself in the higher-order interactions of the effective theory. They also feature new vacua, which have no counterparts in the $\omega=0$ model. Fig.\ \ref{fign8} illustrates some of the vacua of the de Wit-Nicolai model ($\omega=0$), namely those which feature a residual symmetry group $G_2\subset{\rm SO}(8)$.\par \begin{figure}[!h] \centerline{\includegraphics[width=0.8\textwidth]{FigN8.png}} \caption{\scriptsize The $G_2$-invariant vacua of the de Wit-Nicolai model, with their interpretation in terms of compactifications of the eleven-dimensional theory.}\label{fign8} \end{figure} Fig.\ \ref{fign82} shows the $G_2$-invariant vacua of a particular ${\rm SO}(8)_\omega$ model and the disappearance of one of the vacua in the $\omega\rightarrow 0$ limit \cite{Dall'Agata:2012bb}. \begin{figure}[!h] \centerline{\includegraphics[width=1\textwidth]{fign82.png}} \caption{\scriptsize On the left, the $G_2$-invariant vacua of the ${\rm SO}(8)_\omega$ model with $\omega=\frac{\pi}{8}$. The dashed lines represent identifications of vacua due to a discrete symmetry of the theory which is a combination of triality and parity. All of them have an $\omega=0$ counterpart, except the lowest one, marked by a circle, which disappears in the $\omega\rightarrow 0$ limit.}\label{fign82} \end{figure} The vacua of these models have been extensively studied \cite{Borghese:2012qm,Borghese:2012zs,Borghese:2013dja,Guarino:2013gsa,Gallerati:2014xra}, also in the context of renormalization group flows interpolating between (or simply originating from) AdS vacua \cite{Tarrio:2013qga,Pang:2015mra} and of AdS black holes \cite{Anabalon:2013eaa,Lu:2014fpa,Wu:2015ska,Anabalon:2017yhv}.\par Determining a string or M-theory origin of the $\omega$-rotated models is, to date, an open problem \cite{Lee:2015xga}. They seem to provide examples of what we named \emph{intrinsically non-geometric} models in Sect.\ \ref{dfcomp}. The only exceptions so far are the dyonic non-semisimple gaugings ${\rm CSO}(p,q,r)$. Indeed, the ${\rm ISO}(p,7-p)$ gaugings were shown to be related to compactifications of massive Type IIA theory \cite{Guarino:2015jca,Guarino:2015qaa,Guarino:2015vca,Cassani:2016ncu}. The $p=7$ theory features $\mathcal{N}=2$ \cite{Guarino:2015jca} and $\mathcal{N}=3$ \cite{Gallerati:2014xra,Pang:2015vna,Pang:2015rwd} AdS vacua, all corresponding to backgrounds with topology $\text{AdS}_4\times S^6$. The uplift of the generic ${\rm CSO}(p,q,r)$-model to Type IIA or Type IIB theory was eventually achieved in \cite{Inverso:2016eet}. 
\section{Duality covariant gauging}\label{sec:4} Let us discuss in this section a formulation of the gauging procedure in four dimensions which was developed in \cite{deWit:2005ub,deWit:2007mt} and which no longer depends on the matrix $E$, so that the kinetic terms need not be written in terms of the vector fields of the electric frame.\par \paragraph{Steps 1, 2 and 3 revisited.} We start from a symplectic-invariant gauge connection of the form% \footnote{ here, for the sake of simplicity, we reabsorb the gauge coupling constant $g$ into $\Theta$:\; $g\,\Theta\rightarrow \Theta$ }: \begin{equation} \Omega_{g\mu}\equiv A^M_\mu\,X_M=A^\Lambda_\mu\,X_\Lambda+A_{\Lambda\,\mu}\,X^\Lambda=A^M_\mu\,\Theta_M{}^\alpha\,t_\alpha\,,\label{newcon} \end{equation} where $\Theta_M{}^\alpha$ satisfies the constraints (\ref{linear2}), (\ref{quadratic1}), (\ref{quadratic2}). The fields $A^\Lambda_\mu$ and $A_{\Lambda\,\mu}$ are now taken to be independent. This is clearly a redundant choice and, as we shall see, half of them play the role of auxiliary fields. Eq.\ (\ref{quadratic1}) still implies that at most $n_v$ linear combinations $A^{\hat{\Lambda}}_\mu$ of the $2n_v$ vectors $A^\Lambda_\mu,\,A_{\Lambda\,\mu}$ effectively enter the gauge connection (and thus the minimal couplings): \begin{equation} A^M_\mu\,X_M=A^{\hat{\Lambda}}_\mu\,X_{\hat{\Lambda}}\,, \end{equation} where $X_{\hat{\Lambda}}$ are defined in eq.\ (\ref{elET}) through the matrix $E$, whose existence is guaranteed by (\ref{quadratic1}), and where $A^{\hat{\Lambda}}_\mu\equiv E^{-1}{}_M{}^{\hat{\Lambda}}\,A^M_\mu$.\par In the new formulation we wish to discuss, however, it is the vectors $A^\Lambda_\mu$, instead of $A^{\hat{\Lambda}}_\mu$, that enter the kinetic terms. The covariant derivatives are then defined in terms of (\ref{newcon}) as in Step 2 of Sect.\ \ref{gaugingsteps} and, as prescribed there, should replace the ordinary derivatives everywhere in the action. The infinitesimal gauge variation of $A^M$ reads: \begin{equation} \delta A^M_\mu=\mathcal{D}_\mu\zeta^M\equiv \partial_\mu\zeta^M+\,A^N_\mu X_{NP}{}^M\,\zeta^P\,,\label{deltaA} \end{equation} where, as usual, $X_{MP}{}^R\equiv \mathscr{R}_{v*}[X_{M}]_{P}{}^R$. We define for this set of electric-magnetic vector fields a symplectically covariant generalization ${F}^M$ of the non-abelian field strengths $F^{\hat{\Lambda}}$ (\ref{defF}): \begin{equation} {F}^M_{\mu\nu}\equiv \partial_\mu A^M_\nu-\partial_\nu A^M_\mu+X_{[NP]}{}^M\,A^N_\mu A^P_\nu \quad\Leftrightarrow\quad {F}^M\equiv dA^M+\frac{1}{2}\,X_{NP}{}^M\,A^N\wedge A^P\,,\label{FMdef} \end{equation} where in the last equation we have used the form-notation for the field strengths. The gauge algebra-valued curvature $\mathcal{F}$ is defined as in (\ref{calF}): \begin{equation} \mathcal{F}\equiv {F}^M\,X_M\,.\label{gcurv} \end{equation} The first problem one encounters in describing the vectors $A^\Lambda_\mu$ in the kinetic terms is that, in a symplectic frame which is not the electric one, such fields are not well defined, since their curvatures fail to satisfy the Bianchi identity. This comes as no surprise, since the components $\Theta^{\Lambda\,\alpha}$ of the embedding tensor are nothing but \emph{magnetic charges}. 
One can indeed verify that: \begin{equation} \mathcal{D}{F}^M~\equiv~d{F}^M+X_{NP}{}^M\,A^N\wedge {F}^P \=X_{(PQ)}{}^M\,A^P\wedge\left( dA^Q+\frac{1}{3}\,X_{RS}{}^QA^R\wedge A^S\right) ~\neq~ 0 \,.\label{Bianchifail} \end{equation} In particular $\mathcal{D}F^\Lambda\neq 0$, since $X_{(MN)}{}^\Lambda=-\frac{1}{2}\,\Theta^{\Lambda\alpha}\,t_{\alpha\,M}{}^P\,\mathbb{C}_{PN}\neq 0$, the components $\Theta^{\Lambda\alpha}$ being non-vanishing in a non-electric frame.\; To deduce (\ref{Bianchifail}) we have used the quadratic constraint (\ref{quadratic2}) on the gauge generators $X_M$ in the $\mathscr{R}_{v*}$-representation, which reads: \begin{equation} X_{MP}{}^R X_{NR}{}^Q-X_{NP}{}^R X_{MR}{}^Q+X_{MN}{}^R X_{RP}{}^Q=0\,. \end{equation} From the above identity, after some algebra, one finds: \begin{equation} X_{[MP]}{}^R X_{[NR]}{}^Q+X_{[PN]}{}^R X_{[MR]}{}^Q+X_{[NM]}{}^R X_{[PR]}{}^Q=-(X_{NM}{}^R\,X_{(PR)}{}^Q)_{[MNP]}\,,\label{nojacobi} \end{equation} that is, the \emph{generalized structure constants} $X_{[MP]}{}^R$ entering the definition (\ref{FMdef}) do not satisfy the Jacobi identity, and this feature is at the root of (\ref{Bianchifail}). Related to this is the non-gauge covariance of ${F}^M$. The reader can indeed verify (using the form-notation) that: \begin{equation} \delta F^M=-\,X_{NP}{}^M\,\zeta^N\,F^P+\,\left(2 \,X_{(NP)}{}^M\,\zeta^N\,F^P-X_{(NP)}{}^M\,A^N\wedge \delta A^P\right)\neq -\,X_{NP}{}^M\,\zeta^N\,F^P\,,\label{deltsFnc} \end{equation} where $\delta A^M$ is given by (\ref{deltaA}) and where we have used the general property \begin{equation} \delta F^M=\mathcal{D}\delta A^M+X_{(PQ)}{}^M\,A^P\wedge \delta A^Q \,,\label{deltaFgen} \end{equation} valid for a generic $\delta A^M$.\; We also observe that the obstruction to the Bianchi identity (\ref{Bianchifail}), as well as the non-covariant terms in (\ref{deltsFnc}), are proportional to the same tensor $X_{(MN)}{}^P$. This quantity, as a consequence of (\ref{quadratic2}) and (\ref{quad2n}), vanishes when contracted with the first index of the embedding tensor, that is with the gauge generators: $X_{(MN)}{}^P\,\Theta_P{}^\alpha=0$. Therefore the true electric vector fields $A^{\hat{\Lambda}}_\mu$, and the gauge connection which only depends on them, are perfectly well defined. Indeed, one can easily show using the matrix $E$ that the gauge curvature (\ref{gcurv}) only contains the field strengths $F^{\hat{\Lambda}}$ associated with $A^{\hat{\Lambda}}$ and defined in (\ref{defF}): \begin{equation} \mathcal{F} ~\equiv~ {F}^M\,X_M \= F^{\hat{\Lambda}}\,X_{\hat{\Lambda}}\,. 
\end{equation} On the other hand, using (\ref{Bianchifail}) and (\ref{quad2n}) we have: \begin{equation} \mathcal{D}\mathcal{F}\= \mathcal{D}{F}^M\,X_M\=0\,.\label{BianchiFgauge} \end{equation} The gauge covariance (\ref{gaugecovF}) of $\mathcal{F}$, and thus of $F^{\hat{\Lambda}}$, is also easily verified by the same token, together with eq.\ (\ref{D2F}):\; $\mathcal{D}^2=-\mathcal{F}$.\par\smallskip In order to construct gauge-covariant quantities describing the vector fields, we combine the vector field strengths $F^M_{\mu\nu}$ with a set of massless antisymmetric tensor fields% \footnote{ these fields will also be described as 2-forms $B_\alpha\equiv \frac{1}{2}\,B_{\alpha\,\mu\nu}\,dx^\mu\wedge dx^\nu$ } $B_{\alpha\,\mu\nu}$ in the adjoint representation of $G$ through the matrix \begin{equation} Z^{M\,\alpha}\equiv \frac{1}{2}\,\mathbb{C}^{MN}\,\Theta_N{}^\alpha\,,\label{defZ} \end{equation} and define the following new field strengths: \begin{align} \mathcal{H}^M_{\mu\nu}~\equiv~ F^M_{\mu\nu}+Z^{M\,\alpha}\,B_{\alpha\,\mu\nu}\;:\;\; \begin{cases} \mathcal{H}^\Lambda=dA^\Lambda+\frac{1}{2}\,\Theta^{\Lambda\alpha}\,B_\alpha\,,\cr \mathcal{H}_\Lambda=dA_\Lambda-\frac{1}{2}\,\Theta_{\Lambda}{}^{\alpha}\,B_\alpha\,. \end{cases} \label{HZB} \end{align} From the definition (\ref{defZ}) and (\ref{quadratic1}) we have: \begin{equation} Z^{M\,\alpha}\,\Theta_{M}{}^\beta=0 \quad\Leftrightarrow\quad Z^{M\,\alpha}\,X_{M}=0\,.\label{Zort} \end{equation} The reader can verify, using the linear constraint (\ref{linear2}), that: \begin{equation} X_{(NP)}{}^M=-\frac{1}{2}\,\mathbb{C}^{MQ}\,X_{QN}{}^R\mathbb{C}_{RP}=-\frac{1}{2}\, \mathbb{C}^{MQ}\,\Theta_Q{}^\alpha\,t_{\alpha\,N}{}^R\mathbb{C}_{RP}=-Z^{M\,\alpha}\,t_{\alpha\,NP}\,,\label{linear22} \end{equation} where, as usual, we have defined $t_{\alpha\,NP}\equiv t_{\alpha\,N}{}^R\mathbb{C}_{RP}$.\par The reason for considering the combination (\ref{HZB}) is that the non-covariant terms in the gauge variation of $F^M_{\mu\nu}$, being proportional to $X_{(NP)}{}^M$, that is to $Z^{M\,\alpha}$, can be canceled by a corresponding variation of the tensor fields $\delta B_{\alpha\mu\nu}$: \begin{align} \delta \mathcal{H}^M&=\,X_{PN}{}^M\,\zeta^N\,F^P+Z^{M\alpha}\,\left(\delta B_\alpha+t_{\alpha NP}\,A^N\wedge \delta A^P\right)=\nonumber\\ &=X_{PN}{}^M\,\zeta^N\,\mathcal{H}^P+Z^{M\alpha}\,\left(\delta B_\alpha+t_{\alpha NP}\,A^N\wedge \delta A^P\right)=\nonumber\\ &=-X_{NP}{}^M\,\zeta^N\,\mathcal{H}^P+2\,X_{(NP)}{}^M\,\zeta^N\,\mathcal{H}^P+Z^{M\alpha}\,\left(\delta B_\alpha+t_{\alpha NP}\,A^N\wedge \delta A^P\right)=\nonumber\\ &=-X_{NP}{}^M\,\zeta^N\,\mathcal{H}^P+Z^{M\alpha}\,\left[\delta B_\alpha+t_{\alpha NP}\,(A^N\wedge \delta A^P-2\,\zeta^N\,\mathcal{H}^P)\right]\,, \end{align} where, in going from the first to the second line, we have used (\ref{Zort}), so that\, $X_{PN}{}^M\,F^P=X_{PN}{}^M\,\mathcal{H}^P$.\; If we define: \begin{equation} \delta B_\alpha ~\equiv~ t_{\alpha NP}\,(2\,\zeta^N\,\mathcal{H}^P-A^N\wedge \delta A^P)\,,\label{Btra1} \end{equation} the term proportional to $Z^{M\,\alpha}$ vanishes and $ \mathcal{H}^M$ transforms covariantly. The kinetic terms in the Lagrangian are then written in terms of $\mathcal{H}^\Lambda_{\mu\nu}$: \begin{equation} \frac{1}{e}\mathscr{L}_{v,\,{\rm kin}}= \frac{1}{4}\,\mathcal{I}_{\Lambda\Sigma}(\phi)\,\mathcal{H}^\Lambda_{\mu\nu}\,\mathcal{H}^{\Sigma\,\mu\nu} +\frac{1}{8\,e}\,\mathcal{R}_{\Lambda\Sigma}(\phi)\,\epsilon^{\mu\nu\rho\sigma}\,\mathcal{H}^\Lambda_{\mu\nu} \,\mathcal{H}^{\Sigma}_{\rho\sigma}\,. 
\label{bosoniclagr} \end{equation} The above transformation property (\ref{Btra1}) should however be modified, since the quantity we want to transform covariantly is not quite $\mathcal{H}^M$, but rather the symplectic vector: \begin{equation} \mathpzc{G}^M\equiv \left(\begin{matrix}\mathcal{H}^\Lambda\cr \mathpzc{G}_\Lambda\end{matrix}\right)\;;\quad\; \mathpzc{G}_{\Lambda\,\mu\nu}\equiv -\epsilon_{\mu\nu\rho\sigma} \frac{\partial \mathscr{L}}{\partial \mathcal{H}^\Lambda_{\rho\sigma}}\,, \end{equation} which corresponds, in the ungauged theory, to the field-strength vector $\mathbb{F}^M$ of eq.\ (\ref{bbF}) and which contains, inside $\mathpzc{G}_\Lambda$, fermion bilinears coming from the Pauli terms in the Lagrangian. Consistency of the construction will then imply that the two quantities $\mathcal{H}^M$ and $\mathpzc{G}^M$, which differ off-shell since the former depends on the magnetic vector fields $A_\Lambda$ while the latter does not, \emph{will be identified on-shell} by the equation \begin{equation}(\mathcal{H}^M-\mathpzc{G}^M)\,\Theta_M{}^\alpha=(\mathcal{H}_\Lambda-\mathpzc{G}_\Lambda)\,\Theta^{\Lambda\,\alpha}=0\,.\label{HG0} \end{equation} These equations will in particular identify the field strengths of the auxiliary fields $A_\Lambda$ in $\mathcal{H}_\Lambda$ with the duals of $\mathcal{H}^\Lambda$. The best that we can do is to make $\mathpzc{G}^M$ on-shell covariant under $G_g$, namely upon use of (\ref{HG0}). To this end, we modify eq.\ (\ref{Btra1}) as follows: \begin{equation} \delta B_\alpha\equiv t_{\alpha NP}\,(2\,\zeta^N\,\mathpzc{G}^P-A^N\wedge \delta A^P)\,,\label{Btra12} \end{equation} so that the variations of the symplectic vectors $\mathcal{H}^M$ and $\mathpzc{G}^M$ read: \begin{align} \delta \mathcal{H}^M&=-X_{NP}{}^M\,\zeta^N\,\mathcal{H}^P~+~\text{non-covariant terms}\,,\nonumber\\ \delta \mathpzc{G}^M&=-X_{NP}{}^M\,\zeta^N\,\mathpzc{G}^P~+~\text{non-covariant but on-shell vanishing terms}\,. \end{align} \par A consistent definition of $B_\alpha$ requires the theory to be gauge-invariant with respect to transformations parametrized by 1-forms: $\Xi_\alpha=\Xi_{\alpha\mu}\,dx^\mu$. Such transformations should in turn be $G_g$-invariant and leave $\mathcal{H}^M$ unaltered: \begin{equation} A^M\rightarrow A^M+\delta_\Xi A^M\;;\quad\; B_\alpha\rightarrow B_\alpha+\delta_\Xi B_\alpha \quad\Rightarrow\quad \delta_\Xi\mathcal{H}^M=0\,. \end{equation} Let us then use (\ref{deltaFgen}) to write \begin{equation} \delta_\Xi\mathcal{H}^M=\mathcal{D}\delta_\Xi A^M+Z^{M\,\alpha}\,\left(\delta_\Xi B_{\alpha}+t_{\alpha NP}\,A^N\wedge \delta_\Xi A^P\right)\,. \end{equation} If we set \begin{equation} \delta_\Xi A^M=-Z^{M\alpha}\,\Xi_\alpha\,,\label{deltaxi1} \end{equation} the invariance of $\mathcal{H}^M$ implies: \begin{equation} \delta_\Xi B_{\alpha}=\mathcal{D}\Xi_\alpha-t_{\alpha NP}\,A^N\wedge \delta_\Xi A^P\,,\label{deltaxi2} \end{equation} where \begin{equation} \mathcal{D}\Xi_\alpha\equiv d\Xi_\alpha+\Theta_M{}^\beta\,{\rm f}_{\beta\alpha}{}^\gamma A^M\wedge\Xi_\gamma\,. \end{equation} Let us now introduce field strengths for the 2-forms: \begin{equation} \mathcal{H}^{(3)}_\alpha\equiv \mathcal{D}B_\alpha-t_{\alpha PQ}A^P\wedge\left( dA^Q+\frac{1}{3}\,X_{RS }{}^Q\,A^R\wedge A^S\right)\,. 
\end{equation} Writing the forms in components, \begin{equation} \mathcal{H}^{(3)}_\alpha=\frac{1}{3!}\,\mathcal{H}_{\alpha\,\mu\nu\rho}\,dx^\mu\wedge dx^\nu\wedge dx^\rho\;;\quad\; \mathcal{D}B_\alpha=\frac{1}{2}\,\mathcal{D}_{\mu}B_{\alpha\,\nu\rho}\,dx^\mu\wedge dx^\nu\wedge dx^\rho\,, \end{equation} we have: \begin{equation} \mathcal{H}_{\alpha\,\mu\nu\rho}=3\,\mathcal{D}_{[\mu}B_{\alpha\,\nu\rho]}-6\,t_{\alpha PQ}\left(A^P_{[\mu}\partial_\nu A_{\rho]}^Q+\frac{1}{3}\,X_{RS }{}^Q\,A^P_{[\mu}A^R_\nu A^S_{\rho]}\right)\,. \end{equation} The reader can verify that the following Bianchi identities hold: \begin{align} \mathcal{D}\mathcal{H}^M&= Z^{M\alpha}\,\mathcal{H}^{(3)}_\alpha\,,\label{Bid1n}\\ \mathcal{D}\mathcal{H}^{(3)}_\alpha &= t_{\alpha\,NP}\,\mathcal{H}^N\wedge \mathcal{H}^P\,. \end{align} Just as in Step 3 of Sect.\ \ref{gaugingsteps}, gauge invariance of the bosonic action requires the introduction of topological terms, so that the final gauged bosonic Lagrangian reads: \begin{eqnarray} \mathscr{L}_{\text{bos}} &=& -\frac{e}{2}\,R+\frac{e}{2}\,\mathcalboondox{G}_{st}(\phi)\,\mathcal{D}_\mu\phi^s\,\mathcal{D}^\mu\phi^t+ \nonumber\\ &&+\frac{e}{4} \, {\cal I}_{\Lambda\Sigma}\,\mathcal{H}_{\mu\nu}{}^{\Lambda} \mathcal{H}^{\mu\nu\,\Sigma} +\frac{1}{8} {\cal R}_{\Lambda\Sigma}\;\varepsilon^{\mu\nu\rho\sigma} \mathcal{H}_{\mu\nu}{}^{\Lambda} \mathcal{H}_{\rho\sigma}{}^{\Sigma}+ \nonumber\\[.9ex] &&{}+\frac{1}{8}\, \varepsilon^{\mu\nu\rho\sigma}\, \Theta^{\Lambda\alpha}\,B_{\mu\nu\,\alpha} \, \Big( 2\,\partial_{\rho} A_{\sigma\,\Lambda} + X_{MN\,\Lambda} \,A_\rho{}^M A_\sigma{}^N -\frac{1}{4}\,\Theta_{\Lambda}{}^{\beta}B_{\rho\sigma\,\beta} \Big)+ \nonumber\\[.9ex] &&{} +\frac{1}{3}\, \varepsilon^{\mu\nu\rho\sigma}X_{MN\,\Lambda}\, A_{\mu}{}^{M} A_{\nu}{}^{N} \Big(\partial_{\rho} A_{\sigma}{}^{\Lambda} +\frac{1}{4} X_{PQ}{}^{\Lambda} A_{\rho}{}^{P}A_{\sigma}{}^{Q}\Big)+ \nonumber\\[.9ex] &&{} +\frac{1}{6}\, \varepsilon^{\mu\nu\rho\sigma}X_{MN}{}^{\Lambda}\, A_{\mu}{}^{M} A_{\nu}{}^{N} \Big(\partial_{\rho} A_{\sigma}{}_{\Lambda} +\frac{1}{4}\, X_{PQ\Lambda} A_{\rho}{}^{P}A_{\sigma}{}^{Q}\Big) \,. \label{boslag2} \end{eqnarray} The Chern-Simons terms in the last two lines generalize those in eq.\ (\ref{top}). On top of them, gauge invariance of the action requires the introduction of new topological terms, depending on the $B$-fields, which appear in the third line of (\ref{boslag2}). Notice that if the magnetic charges $\Theta^{\Lambda\,\alpha}$ vanish (i.e.\ we are in the electric frame), $B_\alpha$ disappears from the action, since the third line of (\ref{boslag2}) vanishes, as does the $B$-dependent Stueckelberg term in $\mathcal{H}^\Lambda$. \par The constraints (\ref{linear2}), (\ref{quadratic1}) and (\ref{quadratic2}) are needed for the consistent construction of the gauged bosonic action, which is uniquely determined. Just as discussed in Sect.\ \ref{gaugingsteps}, they are also enough to guarantee its consistent supersymmetric completion through Step 4, which equally applies to this more general construction.\par\bigskip Some comments are in order. \begin{enumerate}[itemsep=1.5ex] \item[i)]{The construction we are discussing in this Section requires the introduction of additional fields: $n_v$ magnetic potentials $A_{\Lambda\mu}$ and a set of antisymmetric tensors $B_{\alpha\,\mu\nu}$. These new fields come together with extra gauge invariances (\ref{deltaA}), (\ref{deltaxi1}), (\ref{deltaxi2}), which guarantee the correct counting of physical degrees of freedom. 
As we shall discuss below, these fields can be disposed of by using their equations of motion.} \item[ii)]{It is known that in $D$ dimensions there is a duality that relates $p$-forms to $(D-p-2)$-forms, the corresponding field strengths having complementary order and being related by a Hodge-like duality. In four dimensions vectors are dual to vectors, while scalars are dual to antisymmetric tensor fields. From this point of view, we can understand the 2-forms $B_\alpha$ as ``dual'' to the scalars in the same way as the $A_{\Lambda}$ are ``dual'' to the $A^\Lambda$. This relation can be schematically illustrated as follows: \begin{equation} \partial_{[\mu} B_{\nu\rho]}~\propto~ e\,\epsilon_{\mu\nu\rho\sigma}\partial^\sigma \phi~+~\dots\,. \end{equation} More precisely, we can write the non-local relation between $B_\alpha$ and $\phi^s$ in a $G$-covariant fashion as a Hodge-like duality between $\mathcal{H}^{(3)}_\alpha$ and the Noether current ${\bf j}_\alpha$ of the sigma model describing the scalar fields, associated with the generator $t_\alpha$: \begin{equation} \mathcal{H}_{\alpha\,\mu\nu\rho}\propto e\,\epsilon_{\mu\nu\rho\sigma}\,{\bf j}_\alpha^\sigma\;;\quad\; {\bf j}_\alpha^\mu\equiv \frac{\delta \mathscr{L}_{\text{bos}}}{\delta \partial_\mu\phi^s}\,k_\alpha^s\,,\label{duaBphi} \end{equation} $k_\alpha^s$ being the Killing vector corresponding to $t_\alpha$. This motivated the choice of the 2-forms in the adjoint representation of $G$. In the gauged theory we will find a $G_g$-invariant version of (\ref{duaBphi}), see the discussion below.} \item[iii)]{It can be shown that the presence of the extra fields $B_\alpha$ and $A_{\Lambda}$ in the action is related to non-vanishing magnetic components $\Theta^{\Lambda\,\alpha}$ of the embedding tensor. In the electric frame, in which $\Theta^{\Lambda\,\alpha}=0$, these fields disappear altogether from the Lagrangian and we are back to the gauged action described in Sect.\ \ref{gaugingsteps}.} \item[iv)]{The kinetic terms in the Lagrangian only describe the fields of the ungauged theory, while the extra fields enter topological terms or Stueckelberg-like couplings and satisfy first-order equations, see the discussion below. This feature is common to the $G$-covariant construction of gauged supergravities in any dimensions \cite{deWit:2004nw,deWit:2005hv,Samtleben:2005bp,deWit:2008ta}.} \item[v)]{The dyonic embedding tensor $\Theta_M{}^\alpha$ determines a splitting of the $2n_v$ vector fields $A^M_\mu$ into the truly electric ones $A^{\hat{\Lambda}}_\mu$, which are singled out by the combination $A^M_\mu \Theta_M{}^\alpha$ and thus define the gauge connection, and the remaining ones $\tilde{A}^M_\mu$, which correspond to the non-vanishing components of $Z^{M\,\alpha}$, that is to the components along which the Jacobi identity is not satisfied, see (\ref{nojacobi}). These latter vectors, of which at most $n_v$ are independent, can then be written as $\tilde{A}^M_\mu=Z^{M\,\alpha} A_{\alpha\,\mu}$ and are ill-defined, since the corresponding field strengths do not satisfy the Bianchi identity. Another problem with the vectors $\tilde{A}^M_\mu$ is that they are not part of the gauge connection, but in general are charged under the gauge group, that is, they are minimally coupled to $A^{\hat{\Lambda}}_\mu$. These fields cannot therefore be consistently described as vector fields. However, this poses no consistency problem for the theory, since $\tilde{A}^M_\mu$ can be gauged away by a transformation (\ref{deltaxi1}), (\ref{deltaxi2}) proportional to $\Xi_\alpha$. 
In a vacuum, they provide the two degrees of freedom needed by some of the tensor fields $B_\alpha$ to become massive, according to the \emph{anti-Higgs} mechanism \cite{antiH1,Cecotti:1987qr}. In the electric frame, these vectors become magnetic ($A_{\hat{\Lambda}\,\mu}$) and disappear from the action. This phenomenon also occurs in higher dimensions: the vectors $\tilde{A}^M_\mu$, which do not participate in the gauge connection but are charged with respect to the gauge group, are gauged away by a transformation associated with some of the antisymmetric tensor fields which, in a vacuum, become massive.} \item[vi)]{An important role in this construction was played by the linear constraint (\ref{linear2}), in particular by the property (\ref{linear22}) implied by it, which allowed us to cancel the non-covariant terms in the gauge variation of $F^\Lambda$ by a corresponding variation of the antisymmetric tensor fields. It turns out that a condition analogous to (\ref{linear22}) represents the relevant linear constraint on the embedding tensor needed for the construction of gauged theories in higher dimensions \cite{deWit:2004nw,deWit:2005hv,Samtleben:2005bp,deWit:2008ta}.} \end{enumerate} \bigskip Let us now briefly discuss the bosonic field equations for the antisymmetric tensor fields and the vectors. The variation of the action with respect to $B_{\alpha\,\mu\nu}$ yields eqs.\ (\ref{HG0}). By fixing the $\Xi_\alpha$-gauge freedom, we can gauge away the ill-defined vectors $\tilde{A}^M_\mu=Z^{M\,\alpha} A_{\alpha\,\mu}$ and then solve eqs.\ (\ref{HG0}) for $B_\alpha$ as a function of the remaining field strengths, which are a combination of the $F^{\hat{\Lambda}}$ only. Substituting this solution into the action, the latter will only describe the $A^{\hat{\Lambda}}_{\mu}$ vector fields and no longer contain magnetic ones or antisymmetric tensors. In other words, by eliminating $B_\alpha$ through equations (\ref{HG0}) we effectively perform the rotation to the electric frame and find the action discussed in Sect.\ \ref{gaugingsteps}.\par By varying the action with respect to $A^M_\mu$ we find the following equations: \begin{equation} \mathcal{D}_{[\mu}\mathpzc{G}^M_{\nu\rho]}=2\,e\,\mathbb{C}^{MN}\,\epsilon_{\mu\nu\rho\sigma}\,\mathcal{D}^\sigma \phi^s\mathcalboondox{G}_{sr}\,k_N^r=2\,e\,\mathbb{C}^{MN}\,\epsilon_{\mu\nu\rho\sigma}\,{\bf j}_N^\sigma\,,\label{Max2} \end{equation} which are the manifestly $G$-covariant form of the Maxwell equations. The right-hand side is proportional to the electric current \begin{equation} {\bf j}_N^\sigma ~\equiv~ \mathcal{D}^\sigma \phi^s\mathcalboondox{G}_{sr}\,k_N^r \=\Theta_N{}^\alpha\, \mathcal{D}^\sigma \phi^s\mathcalboondox{G}_{sr}\,k_\alpha^r \=\Theta_N{}^\alpha\,{\bf j}_\alpha^\sigma\,. \end{equation} If we contract both sides of (\ref{Max2}) with $\Theta_M{}^\alpha$, we single out the Bianchi identities for the field strengths $F^{\hat{\Lambda}}$ of the vectors which actually participate in the minimal couplings. By using the locality condition on $\Theta$, we find: \begin{equation} \mathcal{D}_{[\mu}\mathpzc{G}^M_{\nu\rho]}\,\Theta_M{}^\alpha=2\,e\,\mathbb{C}^{MN}\,\Theta_M{}^\alpha\,\Theta_N{}^\beta\,\epsilon_{\mu\nu\rho\sigma}\,\mathcal{D}^\sigma \phi^s\mathcalboondox{G}_{sr}\,k_\beta^r=0\,,\label{Bianchigauge} \end{equation} which are nothing but the Bianchi identities for $F^{\hat{\Lambda}}$. 
This is consistent with our earlier discussion, see eq.\ (\ref{BianchiFgauge}), in which we showed that the locality condition implies that the Bianchi identity for the gauge curvature has no magnetic source term, so that the gauge connection is well defined% \footnote{ in our earlier discussion we showed that $\mathcal{D}\mathcal{H}^M\,\Theta_M{}^\alpha=\mathcal{D}F^M\,\Theta_M{}^\alpha=0$. This is consistent with eq.\ (\ref{Bianchigauge}) since on-shell $\mathcal{H}^M\Theta_M{}^\alpha=\mathpzc{G}^M\Theta_M{}^\alpha$ }.\par Now we can use the Bianchi identity (\ref{Bid1n}) to rewrite eq.\ (\ref{Bianchigauge}) as a dualization equation generalizing (\ref{duaBphi}). To this end, we consider only the upper components of (\ref{Bianchigauge}), corresponding to the field equations for $A_{\Lambda\,\mu}$: \begin{equation} Z^{\Lambda\alpha}\,\mathcal{H}_{\alpha\,\mu\nu\rho}=12\,e\,Z^{\Lambda\alpha}\,\epsilon_{\mu\nu\rho\sigma}\, \mathcal{D}^\sigma \phi^s\mathcalboondox{G}_{sr}\,k_\alpha^r\,.\label{Hduaga} \end{equation} When the gauging involves translational isometries \cite{deWit:2005ub}, $\phi^I\rightarrow \phi^I+c^I$, the above equations can be solved for the fields $A_\Lambda$ contained in the covariant derivative. This is done by first using the $\zeta$-gauge freedom associated with $A_\Lambda$ to gauge away the scalar fields $\phi^I$ acted on by the translational isometries. Eqs.\ (\ref{Hduaga}) are then solved for the fields $A_\Lambda$, which are expressed in terms of the remaining scalars, the vectors $A^\Lambda$ and the field strengths of the antisymmetric tensors. Substituting this solution into the action, we obtain a theory in which no vectors $A_\Lambda$ appear and the scalar fields $\phi^I$ have effectively been dualized to corresponding tensor fields $B_{I\,\mu\nu}$. The latter become dynamical and are described by kinetic terms. These theories were first constructed in the framework of $\mathcal{N}=2$ supergravity in \cite{Dall'Agata:2003yr,D'Auria:2004yi}, generalizing previous results \cite{Louis:2002ny}.\par The gauged theory we have discussed in this section features a number of non-dynamical extra fields. This is the price we have to pay for a manifest $G$-covariance of the field equations and Bianchi identities. The embedding tensor then defines how the physical degrees of freedom are distributed within this larger set of fields, by fixing the gauge symmetry associated with the extra fields and solving the corresponding non-dynamical field equations (\ref{HG0}), (\ref{Hduaga}). \paragraph{A view on higher dimensions.} As mentioned in point ii) above, there are equivalent formulations of ungauged supergravities in $D$ dimensions obtained from one another by dualizing certain $p$-forms $C_{(p)}$ (i.e.\ rank-$p$ antisymmetric tensor fields) into $(D-p-2)$-forms $C_{(D-p-2)}$ through an equation of the type: \begin{equation} dC_{(p)}={}^*dC_{(D-p-2)}~+~\dots\,. \end{equation} Such formulations feature in general different global symmetry groups. This phenomenon is called \emph{Dualization of Dualities} and was studied in \cite{Cremmer:1997ct}. The scalar fields in these theories are still described by a non-linear sigma model and in $D\ge 6$ the scalar manifold is homogeneous symmetric. 
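The pairing of a $p$-form with a $(D-p-2)$-form in the above relation follows from simple degree counting: $dC_{(p)}$ is a $(p+1)$-form, while the Hodge dual of the $(D-p-1)$-form $dC_{(D-p-2)}$ has degree \begin{equation} D-(D-p-1)\,=\,p+1\,, \end{equation} so that the two sides can indeed be equated. In $D=4$ this reproduces the two cases recalled in point ii) above: scalars are dual to 2-forms and vectors to vectors. 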
Just as in four dimensions, the scalars are non-minimally coupled to the $p$-form fields (see below) and the global symmetry group $G$ is related to the isometry group of the scalar manifold; it is thus maximal in the formulation of the theory in which the scalar sector is maximal, that is in which all forms are dualized to lower-order ones. This prescription, however, does not completely fix the ambiguity related to duality in even dimensions $D=2k$, when order-$k$ field strengths, corresponding to rank-$(k-1)$ antisymmetric tensor fields $C_{(k-1)}$, are present. In fact, after having dualized all forms to lower-order ones, we can still dualize $(k-1)$-forms $C_{(k-1)}$ into $(k-1)$-forms $\tilde{C}_{(k-1)}$. This is the electric-magnetic duality of the four-dimensional theory, related to the vector fields, and it also occurs for instance in six dimensions with the 2-forms and in eight dimensions with the 3-forms.\par Duality transformations interchanging $C_{(k-1)}$ with $\tilde{C}_{(k-1)}$, and thus the corresponding field equations with Bianchi identities, are encoded in the group $G$, whose action on the scalar fields, just as in four dimensions, is combined with a linear action on the $k$-form field strengths ${F}_{(k)}$ and their duals $\tilde{F}_{(k)}$: \begin{align} g\in G \;:\quad \begin{cases} {F}_{(k)}\rightarrow {F}_{(k)}'=A[g]\,{F}_{(k)}+B[g]\,\tilde{F}_{(k)}\,,\cr \tilde{F}_{(k)}\rightarrow \tilde{F}_{(k)}'=C[g]\,{F}_{(k)}+D[g]\,\tilde{F}_{(k)}\,. \end{cases} \end{align} As long as the block $B[g]$ is non-vanishing, this symmetry can only be realized on-shell, since the Bianchi identity for the transformed ${F}_{(k)}$, which guarantees that the transformed elementary field $C_{(k-1)}'$ be well defined, only holds if the field equations $d\tilde{F}_{(k)}=0$ for $C_{(k-1)}$ are satisfied \cite{Tanii:1998px}: \begin{equation} d{F}_{(k)}'=A[g]\,d{F}_{(k)}+B[g]\,d\tilde{F}_{(k)}=B[g]\,d\tilde{F}_{(k)}=0\,. \end{equation} The field strengths ${F}_{(k)}$ and $\tilde{F}_{(k)}$ transform in a linear representation $\mathscr{R}$ of $G$ defined by the matrix: \begin{equation} g\in G\;\;\stackrel{\mathscr{R}}{\longrightarrow }\;\;\mathscr{R}[g]=\left(\begin{matrix} A[g] & B[g]\cr C[g] & D[g] \end{matrix}\right)\,.\label{RgD} \end{equation} Just as in four dimensions, depending on which of the $C_{(k-1)}$ and $\tilde{C}_{(k-1)}$ are chosen to be described as elementary fields in the Lagrangian, the action will feature a different global symmetry $G_{el}$, though the global symmetry group $G$ of the field equations and Bianchi identities remains the same. 
The constraints on $\mathscr{R}$ derive from the non-minimal couplings of the scalar fields to the $(k-1)$-forms, which are a direct generalization of those in four dimensions between the scalars and the vector fields% \footnote{ the Hodge dual ${}^*\omega$ of a generic $q$-form $\omega$ is defined as: \begin{equation} {}^*\omega_{\mu_1\dots \mu_{D-q}}=\frac{e}{q!}\,\epsilon_{\mu_1\dots\mu_{D-q}\nu_1\dots \nu_q}\,\omega^{\nu_1\dots \nu_q}\,, \end{equation} where $\epsilon_{01\dots D-1}=1$.\; One can easily verify that ${}^{**}\omega=(-)^{q(D-q)}\,(-)^{D-1}\,\omega$ }, see (\ref{bosoniclagr}): \begin{equation} \mathscr{L}_{{\rm kin},\,C}= -\frac{e\varepsilon}{2k!}\left(\mathcal{I}_{\Lambda\Sigma}(\phi)\,F^\Lambda_{\mu_1\dots \mu_k}\,F^{\Sigma\,\mu_1\dots \mu_k}+\mathcal{R}_{\Lambda\Sigma}(\phi)\,F^\Lambda_{\mu_1\dots \mu_k}\,{}^*F^{\Sigma\,\mu_1\dots \mu_k}\right)\,,\label{kinC} \end{equation} where $\mu=0,\dots, D-1$ and $\Lambda,\Sigma=1,\dots, n_k$, $n_k$ being the number of $(k-1)$-forms $C_{(k-1)}$, and $\varepsilon\equiv (-)^{k-1}$.\par\smallskip The matrices $\mathcal{I}_{\Lambda\Sigma}(\phi),\,\mathcal{R}_{\Lambda\Sigma}(\phi)$ satisfy the following properties: \begin{equation} \mathcal{I}_{\Lambda\Sigma}=\mathcal{I}_{\Sigma\Lambda}<0\,,\quad\; \mathcal{R}_{\Lambda\Sigma}=-\varepsilon\,\mathcal{R}_{\Sigma\Lambda}\,. \end{equation} Just as we did in four dimensions, see eq.\ (\ref{GF}), we define the dual field strengths (omitting the fermion terms): \begin{equation} \mathpzc{G}_{\Lambda\,\mu_1\dots\,\mu_k}\equiv\varepsilon\,\epsilon_{\mu_1\dots\,\mu_k\nu_1\dots\nu_k}\, \frac{\delta \mathscr{L}}{\delta F^\Lambda_{\nu_1\dots \nu_k}} \quad\Rightarrow\quad \mathpzc{G}_\Lambda=-\mathcal{I}_{\Lambda\Sigma}\,{}^*F^\Sigma-\varepsilon\, \mathcal{R}_{\Lambda\Sigma}\,F^\Sigma\,,\label{defGk} \end{equation} and collect them, together with $F^\Lambda$, into the vector of field strengths: \begin{align} \mathbb{F}= (\mathbb{F}^M)\equiv\left(\begin{matrix} F^\Lambda \cr \mathpzc{G}_\Lambda\end{matrix}\right)\,. \end{align} The definition (\ref{defGk}) can be equivalently written in terms of the \emph{twisted self-duality condition} \cite{Cremmer:1997ct}: \begin{equation} {}^*\mathbb{F}=-\mathbb{C}_\varepsilon\,\mathcal{M}(\phi)\,\mathbb{F}\,,\label{TSDCD} \end{equation} which generalizes (\ref{FCMF}), where \begin{equation} \mathbb{C}_\varepsilon\equiv (\mathbb{C}^{MN})\equiv \left(\begin{matrix} \mathbf{0} & \Id \cr \varepsilon\,\Id & \mathbf{0} \end{matrix}\right) \,,\label{Ce} \end{equation} $\Id$, $\mathbf{0}$ being the $n_k\times n_k$ identity and zero matrices, respectively, and \begin{equation} \mathcal{M}(\phi)= (\mathcal{M}(\phi)_{MN})\equiv \left(\begin{matrix}(\mathcal{I}-\varepsilon\,\mathcal{R}\mathcal{I}^{-1}\mathcal{R})_{\Lambda\Sigma} & -(\mathcal{R}\mathcal{I}^{-1})_\Lambda{}^\Gamma\cr \varepsilon (\mathcal{I}^{-1}\mathcal{R})^\Delta{}_\Sigma & \mathcal{I}^{-1\, \Delta \Gamma}\end{matrix}\right)\,.\label{Me} \end{equation} The reader can easily verify that: \begin{equation} \mathcal{M}^T\,\mathbb{C}_\varepsilon\mathcal{M}=\mathbb{C}_\varepsilon\,. \end{equation} For $\varepsilon=-1$, which is the case of the vector fields in four dimensions, $\mathbb{C}_\varepsilon$ is the symplectic invariant matrix and $\mathcal{M}$ is a symmetric, symplectic matrix. 
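A quick check of this property can be performed in the special case $\mathcal{R}=0$, where (\ref{Me}) reduces to $\mathcal{M}={\rm diag}(\mathcal{I},\,\mathcal{I}^{-1})$, which is symmetric: \begin{equation} \mathcal{M}^T\,\mathbb{C}_\varepsilon\,\mathcal{M}=\left(\begin{matrix}\mathcal{I} & \mathbf{0}\cr \mathbf{0} & \mathcal{I}^{-1}\end{matrix}\right) \left(\begin{matrix}\mathbf{0} & \Id\cr \varepsilon\,\Id & \mathbf{0}\end{matrix}\right) \left(\begin{matrix}\mathcal{I} & \mathbf{0}\cr \mathbf{0} & \mathcal{I}^{-1}\end{matrix}\right) =\left(\begin{matrix}\mathbf{0} & \mathcal{I}\,\mathcal{I}^{-1}\cr \varepsilon\,\mathcal{I}^{-1}\mathcal{I} & \mathbf{0}\end{matrix}\right)=\mathbb{C}_\varepsilon\,; \end{equation} the general case with $\mathcal{R}\neq 0$ follows by a longer but straightforward computation. 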
For $\varepsilon=-1$, which is the case of the vector fields in four dimensions, $\mathbb{C}_\varepsilon$ is the symplectic invariant matrix and $\mathcal{M}$ is a symmetric, symplectic matrix. For $\varepsilon=+1$, which is the case of the 2-forms in six dimensions, $\mathbb{C}_\varepsilon$ is the ${\rm O}(n_k,n_k)$-invariant matrix and $\mathcal{M}$ is a symmetric element of ${\rm O}(n_k,n_k)$.\par The Maxwell equations read: \begin{equation} d\mathbb{F}=0\,.\label{MDd} \end{equation} In order for (\ref{RgD}) to be a symmetry of eqs.\ (\ref{TSDCD}) and (\ref{MDd}), we must have: \begin{equation} \mathcal{M}(g\star \phi)=\mathscr{R}[g]^{-T}\mathcal{M}( \phi)\mathscr{R}[g]^{-1}\,, \end{equation} and \begin{equation} \mathscr{R}[g]^{T}\mathbb{C}_\varepsilon \mathscr{R}[g]=\mathbb{C}_\varepsilon\,. \end{equation} This means that in $D=2k$ dimensions: \begin{align} \text{$k$ even}\,&:\quad\; \mathscr{R}[G]\subset {\rm Sp}(2n_k,\mathbb{R})\,,\nonumber\\ \text{$k$ odd}\,&:\quad\; \mathscr{R}[G]\subset {\rm O}(n_k,n_k)\,. \end{align} All other forms of rank $p\neq k-1$, which include the vector fields in $D>4$, transform in linear representations of $G$. The corresponding kinetic Lagrangians only feature the first term of (\ref{kinC}), with no generalized theta-term ($\mathcal{R}=0$).\par If we compactify Type IIA/IIB or eleven-dimensional supergravity on a torus down to $D$ dimensions, we end up with an effective ungauged, maximal theory in $D$ dimensions, featuring form-fields of various orders. Upon dualizing all form-fields to lower-order ones, we end up with a formulation of the theory in which $G$ is maximal and is described by the non-compact real form ${\rm E}_{11-D(11-D)}$ of the group ${\rm E}_{11-D}$. Here we use the symbol ${\rm E}_{11-D(11-D)}$ as a short-hand notation for the following groups: \begin{align} D&=9 \;:\quad\; G={\rm E}_{2(2)}\equiv {\rm GL}(2,\mathbb{R})\,,\nonumber\\ D&=8 \;:\quad\; G={\rm E}_{3(3)}\equiv {\rm SL}(2,\mathbb{R})\times {\rm SL}(3,\mathbb{R})\,,\nonumber\\ D&=7 \;:\quad\; G={\rm E}_{4(4)}\equiv {\rm SL}(5,\mathbb{R})\,,\nonumber\\ D&=6 \;:\quad\; G={\rm E}_{5(5)}\equiv {\rm SO}(5,5)\,,\nonumber\\ D&=5 \;:\quad\; G={\rm E}_{6(6)}\,,\nonumber\\ D&=4 \;:\quad\; G={\rm E}_{7(7)}\,,\nonumber\\ D&=3 \;:\quad\; G={\rm E}_{8(8)}\,. \end{align} Only for $D\le 5$ is ${\rm E}_{11-D(11-D)}$ a proper exceptional group. The ungauged four-dimensional maximal supergravity was originally obtained from compactification of the eleven-dimensional one and dualization of all form-fields to lower-order ones in \cite{Cremmer:1978ds}, where the ${\rm E}_{7(7)}$ on-shell symmetry was found.\par In $D=10$, the Type IIA and IIB theories feature different global symmetry groups: $G_{IIA}={\rm SO}(1,1)$ and $G_{IIB}= {\rm SL}(2,\mathbb{R})$, respectively. The latter encodes the conjectured S-duality symmetry of Type IIB string theory. In this theory $G_{IIB}$ does not act as a duality group, since the 5-form field strength is self-dual and is a $G_{IIB}$-singlet.\par A $G$-covariant gauging \cite{deWit:2004nw,deWit:2005hv,Samtleben:2005bp,deWit:2008ta} is effected starting from the formulation of the ungauged theory in which $G$ is maximal and promoting a suitable global symmetry group of the Lagrangian, $G_g\subset G$, to a local symmetry. The choice of the gauge group is still completely encoded in a $G$-covariant embedding tensor $\Theta$: \begin{equation} \Theta\in \mathscr{R}_{v*}\times {\rm adj}(G)\,,\label{rtr} \end{equation} subject to a linear constraint, generalizing (\ref{linear22}), which singles out in the above product a certain representation $\mathscr{R}_\Theta$ for the embedding tensor, and a quadratic one expressing the $G_g$-invariance of $\Theta$.
In Table \ref{tab:T-tensor-repr} we give, for the various $D$-dimensional maximal supergravities, the representations $\mathscr{R}_\Theta$ of $\Theta$.\par Just as in the duality-covariant construction of the four-dimensional gaugings discussed above, one introduces all the form-fields which are dual to the fields of the ungauged theory. All the form-fields transform in representations of $G$, and dual forms of different order belong to conjugate representations. In $D=2k$, in the presence of rank-$(k-1)$ antisymmetric tensors, this amounts to introducing the fields $\tilde{C}_{(k-1)\,\Lambda}$ dual to the elementary ones ${C}^\Lambda_{(k-1)}$, just as we did for the vector fields in four dimensions. Together they transform in the representation $\mathscr{R}$ discussed above. By consistency, each form-field is associated with its own gauge invariance. Only the fields of the original ungauged theory are described by kinetic terms; the extra fields enter in topological terms and in Stueckelberg-like combinations within the covariant field strengths. The latter, for a generic $p$-form field, can be schematically represented in the form (we suppress all indices) \begin{equation} F_{(p+1)}=\mathcal{D}C_{(p)}+Y_p[\Theta]\cdot C_{(p+1)}+\dots\,, \end{equation} where $Y_p[\Theta]$ is a constant \emph{intertwiner} tensor constructed out of $\Theta$ and of $G$-invariant tensors. The gauge variation of the $p$-form has the following schematic expression: \begin{equation} \delta C_{(p)}=Y_p[\Theta]\cdot\Xi_{(p)}+\mathcal{D}\Xi_{(p-1)}+\dots\,.\label{gfree} \end{equation} The embedding tensor defines, through the tensors $Y_p[\Theta]$, a splitting of the $p$-forms into physical and unphysical fields. The former will in general become massive by ``eating'' the corresponding unphysical $(p-1)$-forms, while the latter, whose field strengths fail to satisfy the Bianchi identity, are in turn gauged away and become degrees of freedom of massive $(p+1)$-forms. The constraints on the embedding tensor and group theory guarantee the consistency of the whole construction.\par Just as in the four-dimensional model discussed above, the embedding tensor defines the distribution of the physical degrees of freedom among the various fields by fixing the gauge freedom (\ref{gfree}) and solving the non-dynamical field equations. These $G$-covariant selective couplings between forms of different order, determined by a single object $\Theta$, define the so-called \emph{tensor hierarchy}, which was developed for the maximal theories in \cite{deWit:2005hv,Samtleben:2005bp,deWit:2008ta} as a general $G$-covariant formulation of the gauging procedure in any dimension. In this formalism the maximal gauged supergravity in $D=5$ was constructed in \cite{deWit:2004nw}, generalizing previous works \cite{Gunaydin:1985cu,Andrianopoli:2000fi}. The general gaugings of the six- and seven-dimensional maximal theories were constructed in \cite{Bergshoeff:2007ef} and \cite{Samtleben:2005bp}, respectively, extending previous works \cite{Pernici:1984fe}. In $D=8$ the most general gaugings were constructed in \cite{Bergshoeff:2003ri}. Finally, the $D=9$ gauged theory was studied in \cite{Cowdall:2000sq,Bergshoeff:2002mb,Hull:2002wg}. We refer to these works for the details of the construction in the different cases.
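As a quick consistency check of the decompositions collected in Table \ref{tab:T-tensor-repr}, the dimensions of the two sides of each product must match; for instance, in $D=7$ and $D=4$,
\begin{equation*}
10\times 24=240=10+15+40+175\,,\qquad 56\times 133=7448=56+912+6480\,.
\end{equation*}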
\begin{table}[t] \centering \renewcommand{\arraystretch}{1.7} \begin{tabular}{|M{0.5cm}|M{2cm}|M{3cm}|M{6cm}|} \hline $D$ &$G$& $H$ & $\Theta$ \\ \hline 7 & ${\rm SL}(5)$ & ${\rm USp}(4)$ & ${\bf 10}\times {\bf 24}= {\bf 10}+\underline{\bf 15}+ \underline{\overline{\bf 40}}+ {\bf 175}$ \\ 6 & ${\rm SO}(5,5)$ & ${\rm USp}(4) \times {\rm USp}(4)$ & ${\bf 16}\times{\bf 45} = {\bf 16}+ \underline{\bf 144} + {\bf 560}$ \\ 5 & ${\rm E}_{6(6)}$ & ${\rm USp}(8)$ & ${\bf 27}\times{\bf 78} = {\bf 27} + \underline{\bf 351} + \overline{\bf 1728}$ \\ 4 & ${\rm E}_{7(7)}$ & ${\rm SU}(8)$ & ${\bf 56}\times{\bf 133} = {\bf 56} + \underline{\bf 912} + {\bf 6480}$ \\ 3 & ${\rm E}_{8(8)}$ & ${\rm SO}(16)$ & ${\bf 248}\times{\bf 248} = \underline{\bf 1} + {\bf 248} + \underline{\bf 3875} +{\bf 27000} + {\bf 30380}$ \\ \hline \end{tabular} \caption{\small Decomposition of the embedding tensor $\Theta$ for maximal supergravities in various space-time dimensions in terms of irreducible ${\rm G}$ representations \cite{deWit:2002vt,deWit:2005hv}. Only the underlined representations are allowed by supersymmetry. The R-symmetry group ${\rm H}$ is the maximal compact subgroup of ${\rm G}$. }\label{tab:T-tensor-repr} \end{table}
\appendices \section{Additional Visual Results on Shape Part Segmentation} In this section, we present more visual results of our method on the downstream shape part segmentation task. We simply employ the Linear Classifier setting for this downstream task. Figure \ref{fig:partsegall} shows the visual results of 32 models from 16 categories, with 2 models per category. As we can see from the figure, with our unsupervised learned representations, a simple linear classifier for the downstream task generates visual results that are very similar to the ground truth segmentation. This further confirms the effectiveness of our unsupervised method in learning distinguishable representations. \begin{figure*}[htbp] \centering \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} \centerline{Ground truth} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{Our result} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{Ground truth} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{Our result} \end{minipage} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} \centerline{Ground truth} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{Our result} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{Ground truth} \end{minipage} \begin{minipage}[b]{0.24\linewidth} \centerline{Our result} \end{minipage} \end{minipage} \\ \centering \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Airplane_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Airplane_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Airplane_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Airplane_pred_2.png}} \end{minipage} \centerline{airplane} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Bag_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Bag_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Bag_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Bag_pred_2.png}} \end{minipage} \centerline{bag} \end{minipage} \\ \centering \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Cap_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Cap_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Cap_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Cap_pred_2.png}} \end{minipage} \centerline{cap} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth}
{\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Car_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Car_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Car_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Car_pred_2.png}} \end{minipage} \centerline{car} \end{minipage} \\ \centering \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Chair_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Chair_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Chair_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Chair_pred_2.png}} \end{minipage} \centerline{chair} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Earphone_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Earphone_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Earphone_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Earphone_pred_2.png}} \end{minipage} \centerline{earphone} \end{minipage} \\ \centering \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Guitar_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Guitar_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Guitar_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Guitar_pred_2.png}} \end{minipage} \centerline{guitar} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Knife_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Knife_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Knife_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Knife_pred_2.png}} \end{minipage} \centerline{knife} \end{minipage} \\ \centering \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Lamp_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Lamp_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} 
{\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Lamp_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Lamp_pred_2.png}} \end{minipage} \centerline{lamp} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Laptop_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Laptop_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Laptop_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Laptop_pred_2.png}} \end{minipage} \centerline{laptop} \end{minipage} \\ \centering \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Motorbike_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Motorbike_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Motorbike_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Motorbike_pred_2.png}} \end{minipage} \centerline{motorbike} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Mug_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Mug_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Mug_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Mug_pred_2.png}} \end{minipage} \centerline{mug} \end{minipage} \\ \centering \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Pistol_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Pistol_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Pistol_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Pistol_pred_2.png}} \end{minipage} \centerline{pistol} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Rocket_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Rocket_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Rocket_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Rocket_pred_2.png}} \end{minipage} \centerline{rocket} \end{minipage} \\ 
\centering \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Skateboard_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Skateboard_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Skateboard_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Skateboard_pred_2.png}} \end{minipage} \centerline{skateboard} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Table_gt_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Table_pred_1.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Table_gt_2.png}} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Table_pred_2.png}} \end{minipage} \centerline{table} \end{minipage} \caption{Segmentation examples of all 16 categories in the ShapeNet Part dataset. } \label{fig:partsegall} \end{figure*} \section{Overview of Segmentation} We also address the task of point cloud semantic segmentation, including shape part segmentation and scene segmentation. Different from the 3D object classification task, we need to obtain the features of all points in the point cloud, which is the key to solving the segmentation task. For our unsupervised contrastive learning, \jc{as shown in Figure \ref{fig:overview_seg}}, we still consider the original point cloud and its transformed point cloud as a contrastive pair. However, in order to ensure that the feature of each point in the point cloud is learned, we use the mean of the point-wise cross entropy to evaluate the point cloud similarity, and try to maximize the similarity of the positive pair (all other pairs of point clouds in the minibatch are viewed as negative pairs). In this unsupervised manner, our framework can learn the feature of each point in the point cloud. \begin{figure*}[htbp]\footnotesize \centering \begin{minipage}[b]{0.95\linewidth} {\label{} \includegraphics[width=1\linewidth]{figures/overview_seg_structure.pdf}} \end{minipage} \caption{ Overview of our unsupervised contrastive learning for the downstream segmentation task. All the point clouds in the minibatch are mapped into the feature space. The designed contrastive loss (shown in the main paper) encourages a pair of point clouds (original point cloud and its transformed point cloud) to be consistent in the feature space, and the point-wise features of the same point ID also tend to be consistent. } \label{fig:overview_seg} \end{figure*}
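To make this scheme concrete, the point-wise loss for one positive pair can be sketched as follows (a minimal PyTorch sketch, assuming dot-product similarity; the function name and temperature value are illustrative choices rather than our exact implementation):
\begin{verbatim}
import torch
import torch.nn.functional as F

def pointwise_loss(f, f_prime, tau=0.1):
    # f, f_prime: (N, d) per-point features of a point
    # cloud and its transformed version; point i in f
    # matches point i in f_prime.
    logits = f @ f_prime.t() / tau          # (N, N) similarities
    labels = torch.arange(f.size(0))        # matching point IDs
    return F.cross_entropy(logits, labels)  # mean over all points
\end{verbatim}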
\section{Additional Visual Results on Scene Segmentation} In this section, we show more visual results on scene segmentation. Similarly, we utilize the Linear Classifier setting for this downstream task. Figure \ref{fig:secenesegALL} shows the visual results of several scenes. We can observe from the figure that our method produces segmentation results close to the ground truth. This demonstrates the capability of our unsupervised representation learning method. \begin{figure}[htb]\footnotesize \centering \begin{minipage}[b]{0.45\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/scene_results/scene2_gt.png}} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.45\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/scene_results/scene2_pred.png}} \end{minipage} \\ \vspace{0.5cm} \centering \begin{minipage}[b]{0.45\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/scene_results/scene3_gt.png}} \centerline{Ground truth} \end{minipage} \hspace{0.3cm} \begin{minipage}[b]{0.45\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/scene_results/scene3_pred.png}} \centerline{Our result} \end{minipage} \caption{Visual results of scene segmentation. } \label{fig:secenesegALL} \end{figure} \ifCLASSOPTIONcaptionsoff \newpage \fi \newpage \bibliographystyle{IEEEtran} \section{Conclusion} \label{sec:conclusion} We have presented an unsupervised representation learning method for 3D point cloud data. We identified that rotation is a very useful transformation for generating a contrastive version of an original point cloud. Unsupervised representations are learned via maximizing the correspondence between paired point clouds (i.e. an original point cloud and its contrastive version). Our method is simple to implement and does not require expensive computing resources like TPUs. We evaluate our unsupervised representations on downstream tasks including 3D object classification, shape part segmentation and scene segmentation. Experimental results demonstrate that our method achieves impressive performance. In the future, we would like to exploit semi-supervised techniques like \cite{feng2020dmt} to improve the performance. We would also like to extend our approach to other interesting applications such as 3D object detection. \section{Introduction} \label{sec:introduction} Point clouds, as an effective representation of 3D geometric data, have attracted noticeable attention recently. They have been used for learning based segmentation, classification, object detection, etc., and promising results have been achieved in those application fields. In this work, we focus on the use of 3D point clouds for classification and segmentation tasks. These tasks target automatic 3D object recognition and segment label prediction, respectively, which are crucial in multimedia computing, robotics, etc. Most existing methods for 3D point cloud analysis \cite{wu20153d,riegler2017octnet,wang2017cnn, su2015multi,li2020end,lyu2020learning, qi2017pointnet,qi2017pointnet++,li2018pointcnn} use annotated data for training. Nevertheless, annotation is time-consuming and costly, especially for a considerable amount of data. In the real world, it is particularly challenging to have annotated data for training all the time. Unsupervised learning is a good alternative \cite{achlioptas2018learning,yang2018foldingnet,han2019multi,zhao20193d}. For example, Latent-GAN \cite{achlioptas2018learning} used a deep Autoencoder (AE) architecture, and trained a minimal GAN in the AE's latent space for learning representations of point clouds. FoldingNet \cite{yang2018foldingnet} proposed a new AE whose codeword represents the high-dimensional embedding of a point cloud, replacing the fully-connected decoder with a folding-based decoder.
MAP-VAE \cite{han2019multi} conducted half-to-half predictions (splitting the point cloud into a front half and a back half at several angles), and then combined them with global self-supervision to capture the geometry and structure of the point cloud. 3D-PointCapsNet \cite{zhao20193d} used an encoder-decoder structure, and concatenated the features from the encoder to form the point capsules. These methods usually employ an AE as the backbone, and often suffer from low-quality representations. As a result, they may still induce less desirable performance on downstream tasks (e.g. classification and segmentation). Motivated by the above analysis, we propose an unsupervised representation learning method which is applied to downstream 3D object classification and semantic segmentation. Our core idea is to maximize the agreement or consistency between the representations of the original point cloud and its transformed version (i.e. contrastive version). We demonstrate an elegant approach for 3D point cloud representation learning, which is simple yet effective. In particular, we only generate a transformed version of the original point cloud, thus forming a contrastive pair for this point cloud (i.e. a pair at the point cloud level). We then feed them into a shared base encoder network (e.g. the former part of PointNet \cite{qi2017pointnet}, up to the global feature), followed by a subsequent projection head network (e.g. the latter part of PointNet: several MLP layers). The agreement maximization is imposed on the outputs of the projection head network, to facilitate the training efficiency and better preserve the rich representations output from the encoder. Since there are no labels involved in training, it is unsupervised representation learning for 3D point cloud data. To validate our unsupervised method, we conduct experiments for the object classification task on ModelNet40 and ModelNet10, the shape part segmentation task on the ShapeNet Part dataset, and the scene segmentation task on the S3DIS dataset. Extensive results show that our unsupervised contrastive representation learning enables impressive outcomes on the three tasks. Our method generally outperforms state-of-the-art unsupervised techniques, and is even comparable to certain supervised counterparts. The contributions of this paper are: \begin{itemize} \item an unsupervised representation learning approach which is simple yet effective on 3D point cloud data, \item \jc{a simple transformation in generating a good contrastive version of an original point cloud, which is better than other complex transformations,} \item two variants of point cloud based contrastive losses for downstream classification and segmentation, respectively, \item experiments and analysis on three tasks (classification, shape part segmentation and scene segmentation), as well as ablation studies for discussing the key elements in our approach. \end{itemize} \section{Method} \label{sec:method} In this work, we take 3D object classification and semantic segmentation (shape and scene) as the downstream tasks of our unsupervised contrastive representation learning. To clearly elaborate our method, we take downstream object classification as an example when designing the unsupervised stage, and we will later explain how to extend it to shape and scene segmentation. Given an unlabeled point cloud, we first use a transformation (i.e.
rotation) to generate its transformed version, thus constructing a contrastive point cloud pair for this original point cloud. The pair is fed into a base encoder network, in order to learn a pair of global features or representations. The global features are then passed to a projection head network, to obtain another pair of representations. These two representations from the projection head network are driven to reach maximum agreement with the aid of a contrastive loss function. Figure \ref{fig:overview} shows the framework of our unsupervised contrastive representation learning. \subsection{Unsupervised Contrastive Representation Learning} \label{sec:unsupervisedcontrastive} \textbf{Contrastive transformation.} Unlike 2D images, point cloud data often have an irregular distribution in 3D space and complex degrees of freedom. Given this, it is more difficult to identify practically useful transformations for constructing a good contrastive pair of a point cloud. Similar to SimCLR \cite{chen2020simple}, we can utilize two different types of transformations for a single point cloud (e.g. cropping, rotation), and generate two transformed versions. As an alternative, we can also utilize only one transformation, and pair the original point cloud with the transformed version. To reduce the complexity, we choose the latter strategy, that is, a pair of the original point cloud and its transformed counterpart. Common transformations in 3D space are \jc{rotation, cutout, crop, scaling, smoothing, noise corruption, etc.} Since heavy noise corruption will destroy the object shapes, we exclude it from the transformations applied here. Jittering is analogous to light noise, and we use jittering for data augmentation, following the protocol of the state-of-the-art point based methods. In this work, we select rotation as the transformation, and use it to generate a transformed version of the original point cloud. We discuss this choice in the ablation studies (Section \ref{sec:ablation}).
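For concreteness, the transformed version can be generated along the lines of the following minimal sketch (in NumPy; sampling one random angle per axis is an illustrative assumption rather than the exact scheme of our implementation):
\begin{verbatim}
import numpy as np

def random_rotation(points):
    # points: (N, 3) array; returns a randomly rotated
    # copy, used as the transformed version of the pair.
    ax, ay, az = np.random.uniform(0, 2 * np.pi, 3)
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return points @ (Rz @ Ry @ Rx).T
\end{verbatim}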
\textbf{Base encoder network.} Point based networks, such as PointNet \cite{qi2017pointnet}, DGCNN \cite{wang2019dynamic}, Pointfilter \cite{Zhang2020}, often involve a pooling layer to output the global feature for an input point cloud. The former part of a point based network before this layer (inclusive) can be naturally viewed as a base encoder in our framework. In other words, the input point cloud can be encoded into a latent representation vector (i.e. the global feature). In this sense, we can simply extract this former part of any such point based network as a base encoder network in our unsupervised contrastive representation framework. In this work, we select some state-of-the-art point based networks including PointNet and DGCNN as the backbone, and extract their former parts as our base encoder accordingly. Interestingly, we discover that encoders involving T-Net (i.e. transformation net) hinder the learning of unsupervised contrastive representations. We deduce that T-Net accounts for various rotated point cloud augmentations, which degrades the ability of capturing a large contrast between the input pair. As such, we remove the original T-Net in these encoders, if involved. We show the results of different encoders in Section \ref{sec:results}. \textbf{Projection head network.} Point based networks usually have several fully connected layers to bridge the global feature with the final $k$-class vector. Similar to the encoder, we can also simply extract the latter part of a point based network as the projection head. Alternatively, it is also flexible to customize a projection head network by designing more or fewer fully connected layers. Mathematically, the final $k$-class vector (or representation vector) can be formulated as \begin{equation}\label{eq:kvector} \begin{aligned} \mathbf{z_i} &= H(E(\mathbf{P})), \\ \mathbf{z_j} &= H(E(\mathbf{P}')), \end{aligned} \end{equation} where $\mathbf{P}$ is an original point cloud and $\mathbf{P}'$ is its transformed counterpart. $E$ and $H$ denote the encoder network and the projection head network, respectively. \textbf{Contrastive loss function.} We first randomly select $n$ samples, and use the selected transformation (i.e. rotation) to generate the $n$ corresponding transformed counterparts, resulting in $n$ pairs ($2n$ samples) constituting the minibatch. Analogous to SimCLR \cite{chen2020simple}, we do not explicitly define positive or negative pairs. Instead, we select a pair as the positive pair, and the remaining $(n-1)$ pairs (i.e. $2(n-1)$ samples) are simply regarded as negative pairs. As for the unsupervised loss function, InfoNCE \cite{oord2018representation} is a widely-used loss function for unsupervised representation learning of 2D images. More recently, \cite{xie2020pointcontrast} also utilized a similar loss for contrastive scene representation learning. Inspired by them, we introduce a variant as our unsupervised loss function, which is defined as \begin{equation}\label{eq:contrastiveloss} \begin{aligned} L = - \frac{1}{|S|}\sum_{(i, j)\in S} \log{\frac{\exp(\mathbf{z_i}\cdot\mathbf{z_j}/\tau)}{\sum_{(\cdot, t)\in S, t \neq j}\exp(\mathbf{z_i}\cdot\mathbf{z_t}/\tau)}}, \end{aligned} \end{equation} where $S$ is the set of all positive pairs (point cloud level), and $\tau$ is a temperature parameter. $|\cdot|$ denotes the cardinality of a set. The loss is computed using all the contrastive pairs, and is equivalent to applying the cross entropy with pseudo labels (e.g. $0\sim15$ for $16$ pairs). We find it works very well in our unsupervised contrastive representation learning.
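Since the loss is equivalent to a cross entropy with pseudo labels, it can be implemented in a few lines; the following PyTorch sketch assumes dot-product similarity between the projected vectors, with an illustrative temperature value:
\begin{verbatim}
import torch
import torch.nn.functional as F

def contrastive_loss(z, z_prime, tau=0.1):
    # z, z_prime: (n, d) projections of the n original
    # point clouds and of their transformed versions.
    logits = z @ z_prime.t() / tau          # (n, n) similarities
    labels = torch.arange(z.size(0))        # pseudo labels 0..n-1
    return F.cross_entropy(logits, labels)
\end{verbatim}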
\subsection{Downstream 3D Object Classification} \label{sec:method-classificaion} We take 3D object classification as the first downstream task in this work, to validate our unsupervised representation learning. The scheme designed above is immediately ready for unsupervised representation learning to facilitate the downstream classification task. In particular, we utilize two common schemes for validation here. One is to train a linear classification network by taking the learned representations of our unsupervised learning as input. Here, the learned representation is the global feature. We did not choose the $k$-class representation vector as it has less discriminative features than the global feature in our framework, and it induced poor performance (see Section \ref{sec:ablation}). The other validation scheme is to initialize the backbone with the unsupervised trained model and perform supervised training. We will demonstrate the classification results for these two validation schemes in Section \ref{sec:results}. \subsection{Downstream Semantic Segmentation} \label{sec:semanticsegmentation} To demonstrate our unsupervised representation learning, we also extend the above unsupervised learning scheme to downstream semantic segmentation, including shape part segmentation and scene segmentation. Since it is a different task from 3D object classification, we need to design a new scheme to facilitate unsupervised training. We still use rotation to generate a transformed version of an original point cloud (e.g. a shape point cloud or a split block from a scene), and view them as a contrastive pair (i.e. point cloud level). As for segmentation, each point in the point cloud has a feature representation. For unsupervised representation learning, we compute the mean of all point-wise cross entropies in order to evaluate the overall similarity within the minibatch. We therefore define a loss function for semantic segmentation as \begin{equation}\fontsize{7pt}{\baselineskip}\selectfont \label{eq:contrastiveloss_seg} \begin{aligned} L = - \frac{1}{|S|}\sum_{(a, b)\in S} \frac{1}{|P_{(a, b)}|}\sum_{(i, j)\in P_{(a, b)}} \log{\frac{\exp(\mathbf{z_i}\cdot\mathbf{z_j}/\tau)}{\sum_{(\cdot, t)\in P_{(a, b)}, t \neq j}\exp(\mathbf{z_i}\cdot\mathbf{z_t}/\tau)}}, \end{aligned} \end{equation} where $S$ is the set of all positive pairs (i.e. point clouds $a$ and $b$), and $P_{(a, b)}$ is the set of all point pairs (i.e. with the same point ID) of the point clouds $a$ and $b$. Similarly, we apply the cross entropy with pseudo labels which match the point indices (e.g. $0\sim2047$ for $2048$ points). \section{Related Work} \label{sec:relatedwork} Unlike 2D images, which consist of regular and uniform pixels, point cloud data are often irregular, sparse and contaminated with noise/outliers during scanning and processing \cite{Zhang2020,LUdening2020,lu2020,lu2017}. 3D point cloud learning techniques can be generally classified into three categories: (1) voxel based \cite{wu20153d,riegler2017octnet,wang2017cnn}, (2) view based \cite{su2015multi,su2018splatnet,zhou2019multi,li2020end,lyu2020learning} and (3) point based \cite{qi2017pointnet,qi2017pointnet++,li2018pointcnn,wu2019pointconv,xu2018spidercnn,liu2019relation,komarichev2019cnn,wang2019dynamic,lin2020convolution}. Voxel based methods often involve resolution and memory issues, and view based approaches are often criticized for the tedious pre-processing, i.e. projecting each 3D object onto 2D image planes. Point based techniques are capable of learning features from point cloud data straightforwardly. In fact, most of these methods are supervised. \textbf{Voxel based techniques.} 3D volumetric CNNs (Convolutional Neural Networks) imitate classical 2D CNNs by voxelizing the input point cloud. 3D ShapeNets was designed for learning volumetric shapes \cite{wu20153d}. Riegler et al. proposed OctNet for deep learning with sparse 3D data \cite{riegler2017octnet}. Wang et al. presented an Octree-based CNN for 3D shape analysis, called O-CNN \cite{wang2017cnn}. These methods are proposed to improve 3D volumetric CNNs and reach high volume resolutions. \textbf{View based methods.} View based methods project 3D point cloud data onto regular image planes. For example, MVCNNs used multiple images rendered from the 3D shapes to fit classical 2D CNNs \cite{su2015multi}. Su et al. proposed to utilize a sparse set of samples in a high-dimensional lattice as the representation of a collection of points \cite{su2018splatnet}. \ad{Zhou et al. proposed the multi-view saliency guided deep neural network (MVSG-DNN), which contains three modules to capture and extract the features of individual views to compile 3D object descriptors for 3D object retrieval and classification \cite{zhou2019multi}.
Xu et al. used an LSTM-based network to recurrently aggregate a 3D object's shape embedding from an image sequence and estimate images of unseen viewpoints, aiming at the fusion of multiple views' features \cite{xu2019learning}. Huang et al. devised a view mixture model (VMM) to decompose the multiple views into a few latent views for the descriptor construction \cite{huang2020learning}. } Li et al. presented an end-to-end framework to learn local multi-view descriptors for 3D point clouds \cite{li2020end}. Lyu et al. projected 3D point clouds into 2D image space by learning a topology-preserving graph-to-grid mapping \cite{lyu2020learning}. \textbf{Point based methods.} PointNet is a seminal work on point based learning \cite{qi2017pointnet}. In PointNet, a max-pooling operation is used to learn permutation-invariant features. The original authors introduced PointNet++, a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set \cite{qi2017pointnet++}. It achieved better learning outcomes than PointNet. Later, PointCNN was introduced to learn an X-transformation from the input points, to promote the weighting of the input features and the permutation of the points into a latent order \cite{li2018pointcnn}. PointConv, a density re-weighted convolution, was proposed to fully approximate the 3D continuous convolution on any set of 3D points \cite{wu2019pointconv}. Xu et al. proposed SpiderCNN to extract geometric features from point clouds \cite{xu2018spidercnn}. Liu et al. designed a Relation-Shape Convolutional Neural Network to learn the geometric topology constraints among points \cite{liu2019relation}. Simonovsky et al. generalized the convolution operator from regular grids to arbitrary graphs and applied it to point cloud classification \cite{simonovsky2017dynamic}. Parametric Continuous Convolution was introduced to exploit parameterized kernel functions that span the full continuous vector space \cite{wang2018deep}. Li et al. came up with a self-organizing network which applies hierarchical feature aggregation using a self-organizing map \cite{li2018so}. It includes a point cloud auto-encoder as pre-training to improve network performance. Komarichev et al. presented an annular convolution operator to better capture the local neighborhood geometry of each point by specifying (regular and dilated) ring-shaped structures and directions in the computation \cite{komarichev2019cnn}. Zhao et al. put forward PointWeb to enhance local neighborhood features for point cloud processing \cite{zhao2019pointweb}. Xie et al. developed a new representation by adopting the concept of shape context as the building block and designed a model (ShapeContextNet) for point cloud recognition \cite{xie2018attentional}. Wang et al. designed a new neural network module dubbed EdgeConv, which acts on graphs dynamically computed in each layer \cite{wang2019dynamic}. More recently, Fujiwara et al. proposed to embed the distance field into neural networks \cite{fujiwara2020neural}. Lin et al. defined learnable kernels with a graph max-pooling mechanism for their 3D Graph Convolution Networks (3D-GCN) \cite{lin2020convolution}. Yan et al. presented the adaptive sampling and local-nonlocal modules for robust point cloud processing \cite{yan2020pointasnl}. \ad{Qiu et al. proposed a network considering both the low-level geometric information of 3D space points explicitly and the high-level local geometric context of the feature space implicitly \cite{qiu2021geometric}.
Chen et al. presented a hierarchical attentive pooling graph network (HAPGN) for segmentation, which includes a gated graph attention network to obtain better representations of local features and a hierarchical graph pooling module to learn hierarchical features \cite{chen2020hapgn}. Liu et al. devised a point context encoding module (PointCE) and a semantic context encoding loss (SCE-loss) to capture the rich semantic context of a point cloud adaptively, achieving improved segmentation performance \cite{liu2020semantic}. } \begin{figure*}[htbp] \centering \begin{minipage}[b]{0.95\linewidth} {\label{} \includegraphics[width=1\linewidth]{figures/overview_net_structure.pdf}} \end{minipage} \caption{Overview of our unsupervised contrastive representation learning method. Given a point cloud, the transformation (rotation in this work) is used to get the transformed version of the original point cloud, which defines a pair. Then, the pairs are input to the base encoder network (e.g. PointNet or DGCNN) to learn the global feature of each model. The projection head is further used to reduce the global feature dimension, for efficient loss computation. The contrastive loss encourages a pair of point clouds to be consistent in the feature space. } \label{fig:overview} \end{figure*} \textbf{Unsupervised representation learning.} Yang et al. proposed an autoencoder (AE), referred to as FoldingNet, for unsupervised learning on point cloud data \cite{yang2018foldingnet}. MAP-VAE was proposed to enable the learning of global and local geometry by jointly leveraging global and local self-supervision \cite{han2019multi}. Rao et al. presented bidirectional reasoning between the local structures and the global shape for unsupervised representation learning of point clouds \cite{rao2020global}. It used a much larger RSCNN as backbone (4$\times$RSCNN) \cite{liu2019relation}. \ad{Zhang et al. presented an explainable machine learning method for point cloud classification by building local-to-global features through iterative one-hop information exchange, and feeding the feature vector to a random forest classifier for classification \cite{zhang2020pointhop}. } Different from them, we create a contrastive pair for each point cloud, and our framework simply consists of an encoder network and a head network. The encoder outputs global representations (features) for downstream networks and the head outputs projection features (of a smaller size) for calculating the loss. More recently, Xie et al. presented an unsupervised pre-training framework called PointContrast for high-level scene understanding tasks \cite{xie2020pointcontrast}. Their findings demonstrated that the learned representation could generalize across domains. PointContrast \cite{xie2020pointcontrast} focused on 3D scenes (pre-trained on a very large-scale generated dataset of about $1$ terabyte), and carefully considered matched points (i.e. common points) of two different views (with at least 30\% overlap) as pairs. \jc{Unlike that, our point cloud level based approach simply uses a rotational transformation to generate a transformed version of an original point cloud. It can easily achieve a large pose discrepancy, without requiring point cloud overlap to satisfy the demand of obtaining a certain number of matched points.} \jc{In essence, a pair of matched points is treated as a pair in \cite{xie2020pointcontrast} to learn point-level features, while a pair of point clouds (a point cloud consisting of a series of points) is regarded as a pair in our work.
Treating the point clouds as the pair in our method has the advantage of learning better global representations when compared with \cite{xie2020pointcontrast}. It is also intuitive and straightforward to use the point cloud level, while PointContrast \cite{xie2020pointcontrast} can hardly obtain point cloud representations directly and is suitable for point-wise tasks, e.g., scene segmentation. In comparison, our global feature at the point cloud level can be easily used in both point cloud level and point-wise tasks (e.g., classification and segmentation).} Meanwhile, for unsupervised learning we derive two variants of contrastive losses based on point clouds, which respectively facilitate two different types of downstream tasks (i.e., classification and segmentation). Finally, the backbone networks are different: they use a Sparse Residual U-Net \jc{which requires voxelization of point clouds}, while we use a simple encoder-head structure. \section{Experimental Results} \label{sec:results} \subsection{Datasets} \textbf{Object classification.} We utilize ModelNet40 and ModelNet10 \cite{wu20153d} for 3D object classification. We follow the same data split protocols of PointNet-based methods \cite{qi2017pointnet, qi2017pointnet++, wang2019dynamic} for these two datasets. For ModelNet40, the train set has $9,840$ models and the test set has $2,468$ models, and the dataset consists of $40$ categories. For ModelNet10, $3,991$ models are for training and $908$ models for testing. It contains $10$ categories. For each model, we use $1,024$ points with only $(x,y,z)$ coordinates as the input, which is also consistent with previous works. \jc{Note that some methods \cite{yang2018foldingnet,gadelha2018multiresolution,achlioptas2018learning,zhao20193d} are pre-trained on the ShapeNet55 dataset \cite{chang2015shapenet}. We also conduct a version of ShapeNet55 training for the classification task. We use the same dataset as \cite{zhao20193d}, which has $57,448$ models with $55$ categories, and all models are used for unsupervised training. Following the same setting of previous work, we use $2,048$ points as input. } \jc{We provide comparison experiments with PointContrast \cite{xie2020pointcontrast} for the classification task, and they use ShapeNetCore \cite{chang2015shapenet} for fine-tuning. The dataset contains $51,127$ pre-aligned shapes from $55$ categories, with $35,708$ models for training, $5,158$ models for validation and $10,261$ models for testing. We use $1,024$ points as input, which is the same as PointContrast \cite{xie2020pointcontrast}.} \textbf{Shape part segmentation.} We use the ShapeNet Part dataset \cite{yi2016scalable} for shape part segmentation, which consists of $16,881$ shapes from $16$ categories. Each object involves 2 to 6 parts, with a total number of $50$ distinct part labels. We follow the official dataset split and the same point cloud sampling protocol as \cite{chang2015shapenet}. Only the point coordinates are used as input. Following \cite{qi2017pointnet++,wang2019dynamic}, we use mean Intersection-over-Union (mIoU) as the evaluation metric. \textbf{Scene segmentation.} We also evaluate our model for scene segmentation on the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) \cite{armeni2017joint}. This dataset contains 3D scans of $271$ rooms in $6$ indoor areas, covering over $6,000\,m^2$. We follow the same setting as \cite{qi2017pointnet++, wang2019dynamic}.
Each room is split into small blocks of $1\,\text{m} \times 1\,\text{m}$ area, and we sample $4,096$ points from each block. Each point is represented as a 9D vector, comprising the point coordinates, RGB color, and normalized location within the room. Each point is annotated with one of the $13$ semantic categories. We also follow the same protocol of adopting six-fold cross validation over the six areas. \textit{Please refer to the \jc{appendices} for additional information and visual results. }

\begin{table*}[htbp] \centering \caption{ Classification results of previous methods and our method, on the datasets of ModelNet40 and ModelNet10. Note that * represents that the model is trained on ShapeNet55. }\label{table:classification} \begin{tabular}{l c c c c c} \hline \tabincell{c}{Methods} & \tabincell{c}{Supervised} & \tabincell{c}{Input Data} & \tabincell{c}{Resolution\\e.g. \# Points} & \tabincell{c}{ModelNet40\\ Accuracy} & \tabincell{c}{ModelNet10\\Accuracy} \\ \hline PointNet \cite{qi2017pointnet} & yes & xyz & 1k & 89.2 & - \\ Kd-Net (depth=10) \cite{klokov2017escape} & yes & tree & $2^{10} \times3$ & 90.6 & 93.3\\ PointNet++ \cite{qi2017pointnet++} & yes & xyz & 1k & 90.7 & -\\ KCNet \cite{shen2018mining} & yes & xyz & 1k & 91.0 & 94.4 \\ MRTNet \cite{gadelha2018multiresolution} & yes & xyz & 1k & 91.2 & - \\ DGCNN \cite{wang2019dynamic} & yes & xyz & 1k & 92.9 & - \\ SO-Net \cite{li2018so} & yes & xyz & 2k & 90.9 & 94.1\\ KPConv \cite{thomas2019kpconv} & yes & xyz & 6.8k & 92.9 & -\\ PointNet++ \cite{qi2017pointnet++} & yes & xyz, normal & 5k & 91.9 & -\\ SO-Net \cite{li2018so} & yes & xyz, normal & 5k & 93.4 & -\\ O-CNN \cite{wang2017cnn} & yes & xyz, normal & - & 90.6 & -\\ PointCNN \cite{li2018pointcnn} & yes & xyz & 1k & 92.2 & -\\ PCNN \cite{atzmon2018point} & yes & xyz & 1k & 92.3 & 94.9\\ Point2Sequence \cite{liu2019point2sequence} & yes & xyz & 1k & 92.6 & 95.3\\ RS-CNN (voting) \cite{liu2019relation} & yes & xyz & 1k & 93.6 & -\\ Neural Implicit \cite{fujiwara2020neural} & yes & weights & $1024\times256$ & 92.2 & 95.7\\ PointASNL \cite{yan2020pointasnl} & yes & xyz & 1k & 92.9 & 95.7\\ 3D-GCN \cite{lin2020convolution} & yes & xyz & 1k & 92.1 & 93.9\\ \ad{HAPGN} \cite{chen2020hapgn} & yes & xyz & 1k & 91.7 & -\\ \ad{MVSG-DNN} \cite{zhou2019multi} & yes & views & 12 & 92.3 & 94.0\\ \hline VIPGAN \cite{han2019view} & no & views & 12 & 91.98 & 94.05\\ Latent-GAN* \cite{achlioptas2018learning} & no & xyz & 2k & 85.70 & 95.30\\ Latent-GAN \cite{achlioptas2018learning} & no & xyz & 2k & 87.27 & 92.18\\ FoldingNet* \cite{yang2018foldingnet} & no & xyz & 2k & 88.40 & 94.40\\ FoldingNet \cite{yang2018foldingnet} & no & xyz & 2k & 84.36 & 91.85\\ MRTNet* \cite{gadelha2018multiresolution} & no & xyz & multi-resolution & 86.40 & - \\ 3D-PointCapsNet* \cite{zhao20193d} & no & xyz & 2k & 88.90 & - \\ 3D-PointCapsNet (Linear Classifier) \cite{zhao20193d} & no & xyz & 1k & 87.46 & - \\ \ad{PointHop} \cite{zhang2020pointhop} & no & xyz & 1k & 89.10 & - \\ MAP-VAE \cite{han2019multi} & no & xyz & 2k & 90.15 & 94.82\\ Global-Local (RSCNN-Large) \cite{rao2020global} & no & xyz & 1k & 92.9 & - \\ \hline Ours (PointNet, Linear Classifier) & no & xyz & 1k & 88.65 & 90.64\\ Ours (DGCNN, Linear Classifier) & no & xyz & 1k & 90.32 & \jc{95.09}\\ \jc{Ours* (DGCNN, Linear Classifier)} & no & xyz & 2k & \jc{89.37} & -\\ Ours (PointNet, Pretraining) & yes & xyz & 1k & 90.44 & 94.38\\ Ours (DGCNN, Pretraining) & yes & xyz & 1k & 93.03 & 95.93\\ \hline \end{tabular} \end{table*}

\subsection{Experimental Setting} We use the
Adam optimizer for our unsupervised representation training. We implemented our work with TensorFlow, and use a single RTX TITAN V GPU for training (with multiple GPUs for DGCNN). For downstream 3D object classification on ModelNet40, ModelNet10, \jc{ShapeNet55 and ShapeNetCore}, we use a batch size of $32$ (i.e. $16$ contrastive pairs) for training and testing. We use the same dropout rates as the original methods, i.e. $0.7$ with PointNet as backbone and $0.5$ with DGCNN as backbone. The initial decay rate of batch normalization is $0.5$, and it is increased to no larger than $0.99$. The training starts with a learning rate of $0.001$, which is decreased to $0.00001$ with an exponential decay. We employ DGCNN as the backbone for semantic segmentation. For shape part segmentation on the ShapeNet Part dataset, we utilize a batch size of $16$ (i.e. $8$ contrastive pairs) for training. We use a batch size of $12$ (i.e. $6$ contrastive pairs) for scene segmentation on S3DIS. For these two tasks, we simply use a batch size of $1$ during testing, and the other settings follow DGCNN.

\subsection{3D Object Classification} We conduct two kinds of experiments to evaluate the learned representations of our unsupervised contrastive learning. First, we train a simple linear classification network with the unsupervised representations as input. Second, we take our unsupervised representation learning as pretraining, and initialize the weights of the backbone before supervised training. Table \ref{table:classification} shows 3D object classification results for our method and a wide range of state-of-the-art techniques. \textbf{Linear classification evaluation.} In this part, we use the former part of PointNet \cite{qi2017pointnet} and DGCNN \cite{wang2019dynamic} as the base encoder, and use the latter mlp layers as the projection head. The learned features are used as the input for training the linear classification network. We use the test accuracy as the evaluation of our unsupervised contrastive learning. Comparisons are reported in Table \ref{table:classification}. Regarding linear classification evaluation, our method with DGCNN as backbone always performs better than our method using PointNet as backbone, for example, $95.09\%$ versus $90.64\%$ for ModelNet10, and $90.32\%$ versus $88.65\%$ for ModelNet40. This is due to the more complex point-based structure of DGCNN. Our method with DGCNN as backbone also outperforms most unsupervised techniques, \ad{like the two recent methods PointHop ($1.22\%$ gain) and MAP-VAE ($0.17\%$ gain), } and is comparable to some supervised methods, for example, $90.32\%$ versus $90.6\%$ (O-CNN) and $90.9\%$ (SO-Net with xyz) on ModelNet40, and $95.09\%$ versus $93.9\%$ (supervised 3D-GCN) on ModelNet10. The Global-Local method \cite{rao2020global} mined rich semantic and structural information, and used a larger RSCNN as backbone (4$\times$RSCNN) \cite{liu2019relation}, resulting in a better accuracy than our method. \jc{Notice that some methods used the larger ShapeNet55 dataset for training \cite{yang2018foldingnet,gadelha2018multiresolution,achlioptas2018learning,zhao20193d}. Although the previous work \cite{han2019multi} re-implemented them by training on ModelNet40, they used $2,048$ points rather than our $1,024$. To provide additional insights, we re-implement and train a state-of-the-art method (3D-PointCapsNet \cite{zhao20193d}) on ModelNet40 with $1,024$ points, and train a linear classifier for evaluation.
We choose this method since its code is publicly available and it is a recent work. From Table \ref{table:classification}, it is obvious that our method (DGCNN as backbone) still outperforms 3D-PointCapsNet by a $2.86\%$ margin.} \jc{To show that our learned representations have transfer capability, we also train our method (DGCNN as backbone) on the ShapeNet55 dataset for unsupervised contrastive learning and then feed the ModelNet40 dataset to the trained model to get point cloud features. We use these features to train a linear classifier on ModelNet40 for evaluation. From Table \ref{table:classification} we can see that our method achieves the best result compared with other state-of-the-art methods under the same settings (Latent-GAN \cite{achlioptas2018learning}, FoldingNet \cite{yang2018foldingnet}, and 3D-PointCapsNet \cite{zhao20193d}), exceeding them by $3.67\%$, $0.97\%$, and $0.47\%$, respectively. } \textbf{Pretraining evaluation.} In addition to the above evaluation using a linear classifier, we further utilize the pretraining evaluation to demonstrate the efficacy of our unsupervised contrastive representation learning. Specifically, we also select PointNet and DGCNN as the backbone, in which the part before and including the global feature is regarded as the base encoder, and the remaining classification branch (i.e. several mlp layers) as the projection head. After our unsupervised representation training, we initialize the corresponding network with the unsupervised trained model, and then perform the supervised training. Table \ref{table:classification} shows the comparison results of our method and the state-of-the-art 3D object classification techniques (both unsupervised and supervised). The pretraining evaluation based on our unsupervised representation learning and the backbone of PointNet sees an improvement over the original PointNet, increasing from $89.2\%$ to $90.44\%$ (a $1.24\%$ increase) on ModelNet40. Regarding ModelNet10, the accuracy of PointNet as our backbone for pretraining evaluation is $94.38\%$, which is on par with the supervised 3D-GCN ($93.9\%$) and outperforms some unsupervised methods. It is interesting to see that our method (DGCNN as backbone) is the best one on ModelNet10 in the pretraining evaluation, while the second best is achieved by two very recent supervised methods (Neural Implicit \cite{fujiwara2020neural} and PointASNL \cite{yan2020pointasnl}). \cite{fujiwara2020neural} even used a large weight matrix as the input for classification training. For ModelNet40, our method (DGCNN as backbone) achieves $93.03\%$, outperforming almost all techniques including both supervised and unsupervised ones. For example, our method in this case outperforms the very recent supervised methods including 3D-GCN \cite{lin2020convolution}, Neural Implicit \cite{fujiwara2020neural} and PointASNL \cite{yan2020pointasnl}. Compared to using PointNet as backbone, taking DGCNN as backbone achieves a better classification accuracy, for example, $95.93\%$ versus $94.38\%$, and $93.03\%$ versus $90.44\%$. Similarly, we believe this is mainly because DGCNN exploits richer information than PointNet.

\begin{table}[b] \centering \caption{ Comparison results of PointContrast and our method, on the dataset of ShapeNetCore with pretraining evaluation.
Note that * represents that the model is trained on ScanNet.} \label{table:compare_pointcontrast} \begin{tabular}{l c} \hline \tabincell{c}{Methods} & \tabincell{c}{Accuracy}\\ \hline Trained from scratch (PointContrast \cite{xie2020pointcontrast}) & 85.1\\ PointContrast* \cite{xie2020pointcontrast} & 85.7\\ Trained from scratch (original DGCNN \cite{wang2019dynamic}) & 84.0\\ Ours (DGCNN as backbone) & 86.2\\ \hline \end{tabular} \end{table} \setlength{\tabcolsep}{2pt} \begin{table*}[htb]\footnotesize \centering \caption{Shape part segmentation results of our method and state-of-the-art techniques on ShapeNet Part dataset. }\label{table:segmentation} \begin{tabular}{l c c c c c c c c c c c c c c c c c c c} \hline Methods & Supervised & \tabincell{c}{class\\mIOU} & \tabincell{c}{instance\\mIOU} & air. & bag & cap & car & chair & ear. & guit. & kni. & lam. & lap. & mot. & mug & pist. & rock. & ska. & tab.\\ \hline Kd-Net \cite{klokov2017escape} & yes & 77.4 & 82.3 & 80.1 & 74.6 & 74.3 & 70.3 & 88.6 & 73.5 & 90.2 & 87.2 & 81.0 & 84.9 & 87.4 & 86.7 & 78.1 & 51.8 & 69.9 & 80.3\\ MRTNet \cite{gadelha2018multiresolution} & yes & 79.3 & 83.0 & 81.0 & 76.7 & 87.0 & 73.8 & 89.1 & 67.6 & 90.6 & 85.4 & 80.6 & 95.1 & 64.4 & 91.8 & 79.7 & 87.0 & 69.1 & 80.6\\ PointNet \cite{qi2017pointnet} & yes & 80.4 & 83.7 & 83.4 & 78.7 & 82.5 & 74.9 & 89.6 & 73.0 & 91.5 & 85.9 & 80.8 & 95.3 & 65.2 & 93.0 & 81.2 & 57.9 & 72.8 & 80.6\\ KCNet \cite{shen2018mining} & yes & 82.2 & 84.7 & 82.8 & 81.5 & 86.4 & 77.6 & 90.3 & 76.8 & 91.0 & 87.2 & 84.5 & 95.5 & 69.2 & 94.4 & 81.6 & 60.1 & 75.2 & 81.3\\ RS-Net \cite{huang2018recurrent} & yes & 81.4 & 84.9 & 82.7 & 86.4 & 84.1 & 78.2 & 90.4 & 69.3 & 91.4 & 87.0 & 83.5 & 95.4 & 66.0 & 92.6 & 81.8 & 56.1 & 75.8 & 82.2\\ SO-Net \cite{li2018so} & yes & 81.0 & 84.9 & 82.8 & 77.8 & 88.0 & 77.3 & 90.6 & 73.5 & 90.7 & 83.9 & 82.8 & 94.8 & 69.1 & 94.2 & 80.9 & 53.1 & 72.9 & 83.0\\ PointNet++ \cite{qi2017pointnet++} & yes & 81.9 & 85.1 & 82.4 & 79.0 & 87.7 & 77.3 & 90.8 & 71.8 & 91.0 & 85.9 & 83.7 & 95.3 & 71.6 & 94.1 & 81.3 & 58.7 & 76.4 & 82.6\\ DGCNN \cite{wang2019dynamic} & yes & 82.3 & 85.2 & 84.0 & 83.4 & 86.7 & 77.8 & 90.6 & 74.7 & 91.2 & 87.5 & 82.8 & 95.7 & 66.3 & 94.9 & 81.1 & 63.5 & 74.5 & 82.6\\ KPConv \cite{thomas2019kpconv} & yes & 85.1 & 86.4 & 84.6 & 86.3 & 87.2 & 81.1 & 91.1 & 77.8 & 92.6 & 88.4 & 82.7 & 96.2 & 78.1 & 95.8 & 85.4 & 69.0 & 82.0 & 83.6\\ Neural Implicit \cite{fujiwara2020neural} & yes & - & 85.2 & 84.0 & 80.4 & 88.0 & 80.2 & 90.7 & 77.5 & 91.2 & 86.4 & 82.6 & 95.5 & 70.0 & 93.9 & 84.1 & 55.6 & 75.6 & 82.1\\ 3D-GCN \cite{lin2020convolution} & yes & 82.1 & 85.1 & 83.1 & 84.0 & 86.6 & 77.5 & 90.3 & 74.1 & 90.9 & 86.4 & 83.8 & 95.6 & 66.8 & 94.8 & 81.3 & 59.6 & 75.7 & 82.8\\ \ad{HAPGN} \cite{chen2020hapgn} & yes & 87.1 & 89.3 & 87.1 & 85.7 & 90.1 & 86.2 & 91.7 & 78.3 & 94.3 & 85.9 & 82.6 & 95.2 & 77.9 & 94.3 & 90.1 & 73.9 & 90.3 & 90.6\\ \hline MAP-VAE \cite{han2019multi} & no & 67.95 & - & 62.7 & 67.1 & 73.0 & 58.5 & 77.1 & 67.3 & 84.8 & 77.1 & 60.9 & 90.8 & 35.8 & 87.7 & 64.2 & 45.0 & 60.4 & 74.8\\ Multi-task \cite{hassani2019unsupervised} & no & 72.1 & 77.7 & 78.4 & 67.7 & 78.2 & 66.2 & 85.5 & 52.6 & 87.7 & 81.6 & 76.3 & 93.7 & 56.1 & 80.1 & 70.9 & 44.7 & 60.7 & 73.0\\ \hline Ours (Linear Classifier) & no & 75.5 & 79.2 & 76.3 & 76.6 & 82.5 & 65.8 & 85.9 & 67.1 & 86.6 & 81.3 & 79.2 & 93.8 & 55.8 & 92.8 & 73.5 & 53.1 & 61.3 & 76.6\\ \hline \end{tabular} \end{table*} \begin{figure*}[htb]\footnotesize \centering \begin{minipage}[b]{0.16\linewidth} 
{\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Airplane.png}} \centerline{airplane} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Bag.png}} \centerline{bag} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Cap.png}} \centerline{cap} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Car.png}} \centerline{car} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Chair.png}} \centerline{chair} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Guitar.png}} \centerline{guitar} \end{minipage}\\ \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Knife.png}} \centerline{knife} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Lamp.png}} \centerline{lamp} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Motorbike.png}} \centerline{motorbike} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Mug.png}} \centerline{mug} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Skateboard.png}} \centerline{skateboard} \end{minipage} \begin{minipage}[b]{0.16\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/segmentation_results/Table.png}} \centerline{table} \end{minipage} \caption{ Some examples of shape part segmentation using our method (Linear Classifier setting). } \label{fig:partseg} \end{figure*}

\jc{PointContrast \cite{xie2020pointcontrast} also presented an unsupervised contrastive learning approach, which operates at the point level while ours operates at the point cloud level. They validated its effectiveness on several datasets using the pretrain-finetuning strategy. To provide a direct comparison with it, we also used the ShapeNetCore dataset for the classification task with pretraining evaluation. The comparison results are shown in Table \ref{table:compare_pointcontrast}, and we can see that our method (DGCNN as backbone) outperforms it by $0.5\%$, though PointContrast is pretrained on a much larger dataset (ScanNet). Note that our method is not suitable to be pretrained on ScanNet, since this downstream task is classification (requiring point-cloud-level features) while ScanNet has point-wise labels. The good performance of our method is mainly due to the proper design of point-cloud-level contrastive pairs and contrastive learning, so that we can directly obtain the global feature from contrastive representation learning. We also re-implement DGCNN \cite{wang2019dynamic} on the ShapeNetCore dataset, which further demonstrates the effectiveness of our method: the accuracy increases from $84.0\%$ (original DGCNN) to $86.2\%$. In comparison with PointContrast, which improved by $0.6\%$ over its trained-from-scratch version, we achieve a $2.2\%$ increase. }

\subsection{Shape Part Segmentation} In addition to the 3D object classification, we also verify our method on shape part segmentation.
The segmentation results are listed in Table \ref{table:segmentation}. Here we take DGCNN as the backbone of our approach and simply employ the linear classifier evaluation setting. It can be seen from the table that our method in linear classification evaluation achieves $79.2\%$ instance mIOU and $75.5\%$ class mIOU, which are remarkably better than state-of-the-art unsupervised techniques including MAP-VAE \cite{han2019multi} and Multi-task \cite{hassani2019unsupervised}. Specifically, our method outperforms MAP-VAE \cite{han2019multi} and Multi-task \cite{hassani2019unsupervised} by margins of $7.55\%$ and $3.4\%$, respectively, in terms of class mIOU. Figure \ref{fig:partseg} illustrates some examples of our method (Linear Classifier setting) on the task of shape part segmentation.

\subsection{Scene Segmentation} We also test our method on the scene segmentation task on the S3DIS dataset, which is typically more challenging than shape part segmentation. Similarly, we utilize DGCNN as the backbone and adopt the Linear Classifier evaluation setting. We are not able to compare our method with unsupervised methods like MAP-VAE \cite{han2019multi} and Multi-task \cite{hassani2019unsupervised}, since they did not provide scene segmentation results and their source codes are not publicly available. Table \ref{table:scenesegmentationarea5} lists the comparisons of one-fold testing on Area 5. It is observed that our method even outperforms the supervised PointNet in terms of mean accuracy. Due to its unsupervised nature, our method is inferior to the supervised PointCNN and the fine-tuned PointContrast. Our method has a relatively smaller mean IOU, probably due to the imbalanced categories and the limited minibatch size; the performance could be further improved with more powerful computing resources. Figure \ref{fig:sceneseg} shows a visual example of scene segmentation.

\begin{comment} \begin{table}[htb] \centering \caption{Scene segmentation results of our method and state-of-the-art techniques on S3DIS dataset. }\label{table:scenesegmentationnumber} \begin{tabular}{l c c c c c c c c c c c c c c c c c} \hline Methods & Supervised & \tabincell{c}{Mean\\accuracy} & \tabincell{c}{Mean\\IOU}\\ \hline PointNet \cite{qi2017pointnet} & yes & 78.6 & 47.6\\ DGCNN \cite{wang2019dynamic} & yes & 84.1 & 56.1\\ 3P-RNN \cite{ye20183d} & yes & 86.9 & 56.3\\ SPG \cite{landrieu2018large} & yes & 86.4 & 62.1\\ PointCNN \cite{li2018pointcnn} & yes & 88.1 & 65.4\\ PointWeb \cite{zhao2019pointweb} & yes & 87.3 & 66.7\\ ShellNet \cite{zhang2019shellnet} & yes & 87.1 & 66.8\\ RandLA-Net \cite{hu2020randla} & yes & 88.0 & 70.0\\ \hline Ours (Linear Classifier) & no & \jc{60.7} & \jc{39.1}\\ \hline \end{tabular} \end{table} \end{comment}

\begin{figure}[htb] \centering \begin{minipage}[b]{0.48\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/scene_results/scene1_gt.png}} \centerline{Ground truth} \end{minipage} \begin{minipage}[b]{0.48\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/scene_results/scene1_pred.png}} \centerline{Our result} \end{minipage} \caption{Visual result of scene segmentation. } \label{fig:sceneseg} \end{figure}

\begin{table}[htb] \centering \caption{Scene segmentation results of our method and some state-of-the-art techniques on testing Area 5 (Fold 1) of the S3DIS dataset.
}\label{table:scenesegmentationarea5} \begin{tabular}{l c c c} \hline Methods & Supervised & \tabincell{c}{Mean\\accuracy} & \tabincell{c}{Mean\\IOU}\\ \hline PointNet \cite{qi2017pointnet} & yes & 49.0 & 41.1\\ \ad{PointCE} \cite{liu2020semantic} & yes & - & 51.7\\ PointCNN \cite{li2018pointcnn} & yes & 63.9 & 57.3\\ PointContrast \cite{xie2020pointcontrast} & yes & 76.9 & 70.3\\ \hline Ours (Linear Classifier) & no & \jc{59.4} & \jc{32.6}\\ \hline \end{tabular} \end{table}

\subsection{Ablation Studies} \label{sec:ablation} \textbf{Transformation.} \jc{One of the key elements in our unsupervised representation learning is using a $180^\circ$ rotation around the $Y$ axis as the transformation. To comprehensively study the influence of the transformation on representations, we consider many common transformations, including rotation, cutout, crop, scale, jittering and smoothing. Figure \ref{fig:contrastivetransformation} visualizes the different transformations for a point cloud.}

\begin{figure}[htbp] \centering \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/origin.png}} \centerline{Original} \centerline{ } \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/rotate_y_180.png}} \centerline{Rotate $180^\circ$} \centerline{($Y$ axis)} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/rotate_y_90.png}} \centerline{Rotate $90^\circ$} \centerline{($Y$ axis)} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/rotate_y_45.png}} \centerline{Rotate $45^\circ$} \centerline{($Y$ axis)} \end{minipage} \\ \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/rotate_x_180.png}} \centerline{Rotate $180^\circ$} \centerline{($X$ axis)} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/rotate_x_90.png}} \centerline{Rotate $90^\circ$} \centerline{($X$ axis)} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/rotate_x_45.png}} \centerline{Rotate $45^\circ$} \centerline{($X$ axis)} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/cutout.png}} \centerline{Cutout} \centerline{ } \end{minipage} \\ \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/crop.png}} \centerline{Crop} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/scale.png}} \centerline{Scale} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/jitter.png}} \centerline{Jitter} \end{minipage} \begin{minipage}[b]{0.24\linewidth} {\label{}\includegraphics[width=1\linewidth]{figures/transformation/smooth.png}} \centerline{Smooth} \end{minipage} \caption{Illustration of the transformations used in Table \ref{table:transformablation}. } \label{fig:contrastivetransformation} \end{figure}

\begin{table}[htbp] \centering \caption{Comparison of different contrastive transformations on ModelNet10. DGCNN \cite{wang2019dynamic} is the backbone. We use linear classification evaluation for comparisons. \% is used for classification accuracy.
}\label{table:transformablation} \begin{tabular}{l c c} \hline \tabincell{c}{Transformation} & \tabincell{c}{Mean class\\accuracy} & \tabincell{c}{Overall\\accuracy}\\ \hline rotate $180^\circ$($Y$ axis) & 94.88 & 95.09\\ rotate $90^\circ$($Y$ axis) & 94.12 & 94.53\\ rotate $45^\circ$($Y$ axis) & 94.09 & 94.20\\ rotate $180^\circ$($X$ axis) & 93.21 & 93.53\\ rotate $90^\circ$($X$ axis) & 93.30 & 93.42\\ rotate $45^\circ$($X$ axis) & 93.71 & 93.97\\ cutout & 94.01 & 93.97\\ crop & 93.80 & 94.31\\ scale & 94.10 & 94.20\\ jitter & 93.95 & 93.97\\ smooth & 93.93 & 94.08\\ \hline \end{tabular} \end{table}

\begin{table}[htbp] \centering \caption{Comparison of applying two sequential contrastive transformations on ModelNet10. Rotate means a $180^\circ$ rotation around the $Y$ axis. DGCNN \cite{wang2019dynamic} is the backbone. We use linear classification evaluation for comparisons. \% is used for classification accuracy. }\label{table:transformatwice} \begin{tabular}{l c c} \hline \tabincell{c}{Transformation} & \tabincell{c}{Mean class\\accuracy} & \tabincell{c}{Overall\\accuracy}\\ \hline rotate + cutout & 93.43 & 93.64\\ rotate + crop & 93.90 & 93.97\\ rotate + scale & 94.11 & 94.08\\ rotate + jitter & 94.33 & 94.42\\ rotate + smooth & 93.41 & 93.64\\ \hline rotate $180^\circ$($Y$ axis) & 94.88 & 95.09\\ \hline \end{tabular} \end{table}

\jc{We list the comparison results of the above transformations in Table \ref{table:transformablation}. It can be clearly observed that our choice attains the best accuracy, unlike SimCLR \cite{chen2020simple}, which utilizes two different transformations of an image as the pair. We suspect that rotation is a very simple and effective transformation for 3D point cloud data, and that a larger valid rotation generates a greater pose discrepancy (i.e., contrast) in 3D space. As such, our choice of a $180^\circ$ rotation around the $Y$ axis is better than the others.} \jc{Furthermore, we also apply two sequential transformations to one point cloud and pair the result with the original point cloud. We choose the best transformation (i.e. rotate $180^\circ$ around the $Y$ axis) as the first transformation, and then apply one of the remaining transformations as the second. We show the results in Table \ref{table:transformatwice}. We can see that applying two transformations is still not as good as the best single choice above. We suspect that the second transformation generally damages the information in the point cloud, thus leading to inferior results. Again, this verifies that our choice is the best transformation for 3D point cloud data in generating contrastive pairs (a minimal code sketch of this pair generation appears at the end of this section). }

\textbf{Output of encoder versus output of projection head.} We also compare the choices of using the output of the base encoder (i.e. global feature) and the output of the projection head for subsequent linear classification. Table \ref{table:encoderhead} shows the comparison results of the two choices on ModelNet40 and ModelNet10. We see that the former choice is better than the latter. We think the output of the base encoder retains more discriminative features for training the linear classifier.

\begin{table}[htbp] \centering \caption{Comparison of using the output of the encoder and the projection head for linear classification evaluation. PointNet \cite{qi2017pointnet} is the backbone. \% is used for classification accuracy.
}\label{table:encoderhead} \begin{tabular}{l c c c} \hline \tabincell{c}{Component} & \tabincell{c}{Dataset} & \tabincell{c}{Mean class\\accuracy} & \tabincell{c}{Overall\\accuracy}\\ \hline encoder & ModelNet40 & 83.81 & 88.65\\ head & ModelNet40 & 68.55 & 75.81\\ encoder & ModelNet10 & 90.55 & 90.64\\ head & ModelNet10 & 81.57 & 82.59\\ \hline \end{tabular} \end{table}

\textbf{Cross validation.} In addition to the above evaluations, we further test the abilities of our unsupervised contrastive representation learning in a cross-dataset evaluation setting. To achieve this, we use the representations learned by the unsupervised model trained on ModelNet40 to train a linear classifier on ModelNet10, and vice versa. Classification outcomes are reported in Table \ref{table:crossvalidation}. It can be observed that our unsupervised representation learning indeed works in the cross-dataset setting. It also suggests that our unsupervised method trained on a large dataset would probably benefit testing on another dataset greatly. Here, our method trained on ModelNet40 achieves a better cross-test accuracy than unsupervised training on ModelNet10 followed by testing on ModelNet40.

\begin{table}[htbp] \centering \caption{Cross validation for ModelNet40 and ModelNet10. PointNet \cite{qi2017pointnet} is the backbone. \% is used for classification accuracy. }\label{table:crossvalidation} \begin{tabular}{c c c c} \hline \tabincell{c}{Unsupervised\\dataset} & \tabincell{c}{Classification\\dataset} & \tabincell{c}{Mean class\\accuracy} & \tabincell{c}{Overall\\accuracy}\\ \hline ModelNet40 & ModelNet10 & 90.00 & 90.51\\ ModelNet10 & ModelNet40 & 77.13 & 82.87\\ \hline \end{tabular} \end{table}

\textbf{Pretraining evaluation: initializing projection head.} The projection head is very useful in maximizing the agreement between the elements of a contrastive pair. However, it may hinder the pretraining evaluation if the corresponding part of the network is initialized with the projection head of the unsupervised model. Table \ref{table:headablation} shows that initializing only the encoder produces better classification accuracy for PointNet/DGCNN on ModelNet10/ModelNet40, which confirms that initializing only the encoder is the better choice.

\setlength{\tabcolsep}{3pt} \begin{table}[htbp] \centering \caption{Pretraining validation for ModelNet40 and ModelNet10. PointNet \cite{qi2017pointnet} and DGCNN \cite{wang2019dynamic} are the backbones. \% is used for classification accuracy. }\label{table:headablation} \begin{tabular}{c c c c c} \hline \tabincell{c}{Backbone} & \tabincell{c}{Dataset} & \tabincell{c}{Head\\initialization} & \tabincell{c}{Mean class\\accuracy} & \tabincell{c}{Overall\\accuracy}\\ \hline PointNet & ModelNet10 & yes & 93.80 & 93.97\\ PointNet & ModelNet10 & no & 94.23 & 94.38\\ PointNet & ModelNet40 & yes & 86.64 & 90.22\\ PointNet & ModelNet40 & no & 86.80 & 90.44\\ DGCNN & ModelNet10 & yes & 95.05 & 95.09\\ DGCNN & ModelNet10 & no & 95.78 & 95.93\\ DGCNN & ModelNet40 & yes & 88.58 & 91.96\\ DGCNN & ModelNet40 & no & 89.52 & 93.03\\ \hline \end{tabular} \end{table}
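To make the pair generation concrete, the following minimal NumPy sketch (our illustration only; the function names are hypothetical and this is not the exact code used in our experiments) forms a contrastive pair by applying the best-performing transformation above, a $180^\circ$ rotation around the $Y$ axis, to an input point cloud.
\begin{verbatim}
import numpy as np

def rotate_y(points, angle_rad):
    # Rotate an (N, 3) point cloud about the Y axis.
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return points @ R.T

def make_contrastive_pair(points):
    # A pair is the original cloud plus its
    # 180-degree Y-rotated version.
    return points, rotate_y(points, np.pi)

# Stand-in for a 1,024-point ModelNet shape.
cloud = np.random.randn(1024, 3).astype(np.float32)
anchor, positive = make_contrastive_pair(cloud)
\end{verbatim}
In a training loop, \texttt{anchor} and \texttt{positive} would be fed through the shared encoder and projection head, and the contrastive loss would pull their projected features together.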
\section{Introduction}\label{sec:intro} Quantum error correction (QEC) is essential to building a scalable and fault-tolerant quantum computer \cite{lidar2013quantum,gaitan2013quantum}. Although the theory of QEC has been developing since the 1990s and is now well established for the circuit model of quantum computation, the practical implementation of QEC in realistic hardware raises additional nuances that prompt more detailed investigation. The present work addresses one aspect of QEC implementation that is relevant to modern superconducting qubit architectures \cite{devoret2004superconducting,devoret2013superconducting} by investigating whether the time-continuous nature of standard dispersive qubit measurements can be used in principle to improve the logical-state-tracking fidelity in the prototypical error correction scenario of a 3-qubit bit-flip code \cite{Nielsen2011}. We show that direct monitoring of the error syndromes reduces hardware resources compared to the circuit model of QEC while maintaining performance. Circuit models of QEC redundantly encode a logical qubit state into multiple physical qubits. Examples of this model include the Shor \cite{shor1995schemefor}, Steane \cite{steane1996error}, and Calderbank-Shor-Steane (CSS) codes \cite{calderbank1996good,steane1996simple}, as well as more general stabilizer codes \cite{gottesman1996class,gottesman1997stabilizer,gottesman1998theory,gottesman2009anintroduction,calderbank1998quantum}. Encoded information is checked by measuring ancillary qubits that are entangled with the redundant code subspaces. The ancilla measurements project the logical qubits back onto the code subspaces, effectively converting analog drifts of the encoded state into digital jumps between code subspaces (e.g., bit flips, phase flips, or combinations thereof). The measurement results provide information about jumps between code subspaces, thus enabling correct decoding of the logical qubit state. Different encoding schemes protect against different error types and quantities according to the redundancy of the code subspaces, with the simplest codes protecting against only single jumps per measurement cycle. Simple forms of such gate-based QEC have already been implemented in several experiments, see e.g. \cite{reed2012realization,barends2014superconducting,chow2014implementing}. Stabilizer codes typically assume ancilla-based projective syndrome measurements. However, for superconducting qubits this assumption can be problematic for two reasons. First, the repeated entangling and disentangling of the code and ancillary qubits adds gate overhead and hardware resources. This overhead also increases the vulnerability of the protocol to additional error mechanisms. Second, superconducting qubit architectures implement projective measurements by integrating and thresholding time-continuous dispersive measurements, which are not instantaneous projections as assumed by the theoretical quantum circuit model \cite{murch2013observing,weber2014mapping,ficheux2017observing,campagne2016observing}. The temporally extended nature of the measurements further increases the overhead by substantially lengthening the achievable cycle time for periodic syndrome measurements. These challenges raise the question of whether alternative strategies for performing the syndrome measurements could be fruitful.
A possible route to perform QEC without the overhead of ancilla qubits is to directly monitor the error syndromes continuously in time \cite{Wiseman2009,jacobs2014quantum}. With this variation, the code subspaces for the error syndromes are directly coupled to a continuous readout device \cite{mao2004mesoscopic,trauzettel2006parity,williams2008entanglement,lalumiere2010tunable,tornberg2010high,haack2010parity,zu2014demand}, avoiding the need for periodic entangling gates and additional ancilla measurements. This idea of continuous quantum error correction was proposed in Ref.~\cite{ahn2002continuous}, and further developed in Refs.~\cite{ahn2003quantum,sarovar2004practical,vanhandel2005optimal,mabuchi2009continuous, denhez2012quantum, hsu2016methodfor,atalaya2017bacon,Cardona2019}. Experiments have since demonstrated several necessary components of continuous QEC, including measurement-generated entanglement between pairs of qubits via continuous parity and partial-parity measurements in superconducting circuits \cite{riste2013deterministic,PhysRevLett.112.170501,PhysRevX.6.041052,PhysRevX.6.031036}. We have thus reached a stage of technological development where implementing continuous QEC becomes feasible for at least the simplest codes. In principle, continuous measurements have the advantages of being: (1) Always on - A continuous measurement eliminates dead time between measurement cycles of ancillary qubits, preventing errors from occurring during entangling-gate sequences. (2) Natural - Standard dispersive measurements in superconducting circuitry are already continuous, producing binary results only after integrating and thresholding. (3) Potentially faster - Continuous measurements have a characteristic time scale to distinguish the signal from the intrinsic background noise, which can be shortened to yield ``strong continuous measurements'' that rapidly yield information about error syndromes. Continuous measurements also have disadvantages, however, since they are: (i) Noisy - An experimenter must interpret a stochastic time-continuous signal, which is a more difficult signal processing problem than for discrete ancilla measurements. (ii) Challenging - Using ancillary qubits of the same design as the data qubits is conceptually straightforward, whereas physically implementing direct syndrome measurements requires specialized qubit circuits. (iii) Computationally expensive - Optimal signal processing of the continuous readout may have high latency. In the present paper, we assess the performance of implementing continuous QEC for the simplest three-qubit bit-flip code, assuming a simplified model of modern superconducting hardware, and develop practical filters to interpret the stochastic time-continuous signals. We show that for {\it passive error tracking} the benefits of continuous measurements can outweigh the disadvantages, enabling high-fidelity decoding of the logical qubit without the need for active feedback. This positive result is particularly interesting, since much of the previous work on continuous QEC has focused on applying active feedback based on the monitored syndrome signals to also correct the errors continuously \cite{ahn2002continuous,Cardona2019}, which has been shown to be rather ineffective due to the large noise of the signal, as well as degradations from signal processing delay \cite{kumar2019quantum}.
For simplicity, we consider passive error tracking for a prototypical setup that tracks only Poisson-distributed bit-flip errors in a three-qubit code, and consider possible generalizations in the subsequent discussion. However, we emphasize that these techniques also apply to active error correction, with the caveat that additional errors not considered here may occur during the correction pulses. We compare three signal-processing filters for interpreting the error syndromes. We expand upon the Bayesian filtering methods discussed by van Handel and Mabuchi \cite{vanhandel2005optimal,mabuchi2009continuous} and derive a linear version of the Bayesian filter that permits faster numerical calculation of the most likely state compared to the nonlinear (Wonham) filter. To address the issue of computational expense, we then propose two variations of the simplest Markovian ``boxcar'' filter that averages the noisy signals over temporal segments of a fixed length. After analyzing the ways in which error tracking can fail for the boxcar filter, we identify the dominant source of error that compromises its performance. We then introduce an improved non-Markovian ``half-boxcar'' filter that corrects the dominant error of the boxcar filter by re-examining the memory of the preceding half-boxcar average. Finally, we introduce an improved Markovian ``double threshold'' filter that also corrects the dominant error of the crude boxcar filter by using two signal thresholds. Both variations can be readily implemented with low-latency circuitry, such as field-programmable gate arrays (FPGAs), and compare favorably to the optimal Bayesian filter. We derive analytic results for the initial drop in fidelity and the approximately linear fidelity decay rate for each filter, optimize them over the free filter parameters, and verify them with numerical simulations, finding good agreement. We now summarize the main findings of this paper. We provide a simple direct parity readout implementation for superconducting transmons that relies on entangling strongly detuned resonator linewidths with the two-qubit parity subspaces. Our proposed design has recently been fabricated and tested \cite{LivingstonAPS2018,LivingstonAPS2019}, which motivates our current work. We derive analytic results for the initial drop of each signal-processing filter, as well as their logical error rates, which are summarized in Tables~\ref{tab:fingamma} and \ref{tab:optscaling}. We verify these analytical results through explicit numerical simulations in Figures~\ref{fig:fidelity}--\ref{fig:optimalgamma}. In particular, we conclude that the ``half-boxcar'' filter performs comparably to the optimal Bayesian filter and is therefore a good candidate for real-time laboratory implementation with an FPGA because of its simple numerical requirements. The paper is organized as follows. In Section~\ref{sec:code} we review the basics of the three-qubit bit-flip code, and pose the problem. In Section~\ref{sec:setup}, we discuss a possible implementation for the continuous syndrome measurements. In Section~\ref{sec:continuous} we introduce and analyze an optimal linear Bayesian filter. In Section~\ref{sec:periodic} we introduce and analyze three periodic averaging filters that are more efficient but suboptimal. In Section~\ref{sec:simulation}, we describe our numerical simulations for the continuous syndrome measurements. We verify our analytics of the continuous and periodic filters with the numerics, and discuss the results.
We conclude in Section~\ref{sec:conclusions}. We also include an Appendix that contains a complementary analysis of an ancilla-based projective measurement implementation of the three-qubit bit-flip code.

\section{Three-qubit bit-flip code}\label{sec:code} For clarity we review the basics of the three-qubit bit-flip code and introduce notation and terminology.

\subsection{Encoding and error syndromes}\label{sec:code:encoding} The standard bit-flip code redundantly encodes a logical qubit state $\alpha\ket{0}_{L}+\beta\ket{1}_{L}$ into three physical qubits, \begin{equation} \label{eq:encode0} |\psi_0\rangle = \alpha\ket{000}+\beta\ket{111}, \end{equation} and uses majority voting to identify and correct single bit-flip errors. We number the bits from left to right as 123. We use quantum computing conventions for the Pauli operators: $I = |0\rangle\la0| + |1\rangle\la1|$, $X = |0\rangle\la1| + |1\rangle\la0|$, $Y = -i|0\rangle\la1| + i|1\rangle\la0|$, and $Z = |0\rangle\la0| - |1\rangle\la1|$. To indicate idling and bit-flip operations on the physical qubits, we use the Pauli identity $I$ and flip $X$ operators. The initial encoding of the logical state is recovered after an idle operation $III$. Omitting tensor products for brevity, we use the notation $III$ to indicate the original encoding. Similarly, the operations after a single bit flip on the first, second, or third qubit are $XII$, $IXI$, $IIX$, respectively, which also serve as suitable labels for the resulting encodings. For example, these bit flips produce the states \begin{eqnarray} |\psi_1\rangle &=& XII|\psi_0\rangle = \alpha |100\rangle + \beta |011\rangle, \\ |\psi_2\rangle &=& IXI|\psi_0\rangle = \alpha |010\rangle + \beta |101\rangle, \\ |\psi_3\rangle &=& IIX|\psi_0\rangle = \alpha |001\rangle + \beta |110\rangle.\label{eq:encode4} \end{eqnarray} In each single-bit-flip case, the resulting states can be perfectly decoded as long as the new encoding is learned. We can learn which single bit flip has occurred without destroying the logical state by performing projective parity measurements $Z_{1}Z_{2}$ and $Z_{2}Z_{3}$ on the system, where the subscripts of the Pauli $Z$ operators indicate the bit number. These parity measurements give results $+1$ or $-1$ if the parity of the two coupled bits is even or odd, respectively. The parity measurements must be performed without measuring each qubit individually in order to preserve the coherence of the logical state. After performing a \emph{syndrome measurement} of the pair of parities $(Z_1 Z_2,\, Z_2 Z_3)$, we can use the \emph{syndrome} outcomes to identify the new logical encoding according to the mapping: \begin{equation}\label{eq:syndromes} \begin{array}{cc} (+1,\,+1)\to & III,\\ (-1,\,+1)\to & XII,\\ (-1,\,-1)\to & IXI,\\ (+1,\,-1)\to & IIX. \end{array} \end{equation} These syndrome measurements are checked periodically to detect single bit flips and infer the updated logical encoding (a short code sketch of this decoding rule appears at the end of this subsection). If desired, one could apply the operation of the encoding label to restore the original encoding. For example, if we detect the parity measurement outcome $(-1,\,+1)$, we know the encoding is $XII$; therefore, applying the operation $XII$ restores the encoding $III$, since applying $XII$ twice yields the identity. However, this correction step may be delayed or omitted, since knowledge of the encoding is sufficient to use the coherent quantum information.
Therefore, we assume passive error tracking, rather than active error correction, for the remainder of the paper.

\begin{figure} \begin{centering} \includegraphics[width=\columnwidth]{Fig/Figure01.png} \par\end{centering} \caption{Hidden Markov model for the transitions between the eight logical encodings for the 3-qubit bit-flip code. Each encoding is labeled by the Pauli $X$ operations that relate it to the reference encoding $|\psi_0\rangle = \alpha|000\rangle + \beta|111\rangle$, as well as a numeric index $k=0,\ldots,7$. Single bit-flips $X$ on each qubit cause transitions between encodings. Complementary encodings have identical parities, so they cannot be distinguished by the syndrome measurements $(Z_1Z_2,\,Z_2Z_3)$, with bits numbered left-to-right as $123$. We assume that bit flips are independent and infrequent, with a constant rate $\mu$ per qubit.} \label{fig:Markov} \end{figure}

The code does not protect against two simultaneous bit flips from the $III$ encoding, denoted $XXI$, $XIX$, and $IXX$, which produce the states \begin{eqnarray}\label{eq:encode5} |\psi_4\rangle &=& XXI|\psi_0\rangle = \beta |001\rangle + \alpha |110\rangle, \\ |\psi_5\rangle &=& XIX|\psi_0\rangle = \beta |010\rangle + \alpha |101\rangle, \\ |\psi_6\rangle &=& IXX|\psi_0\rangle = \beta |100\rangle + \alpha |011\rangle. \end{eqnarray} Parity measurements of complementary bit states are identical, so the error syndromes will not correctly identify the change in encoding if two bit flips occur between two syndrome measurements. An incorrect identification of the encoding produces a \emph{logical error}, since the quantum information can no longer be correctly decoded. The situation is the same with three bit flips, denoted $XXX$, which produces an encoding complementary to the original encoding, \begin{equation}\label{eq:encode8} |\psi_7\rangle = XXX|\psi_0\rangle = \beta|000\rangle + \alpha |111\rangle. \end{equation} This syndrome ambiguity is not resolved by including a third parity measurement of $Z_{1}Z_{3}$, so we restrict our analysis to two parity measurements to minimize hardware resources. However, we note that adding the third parity measurement would slightly improve our ability to discriminate sequential bit flips from a single bit flip. The code also does not protect against non-bit-flip errors of the data qubits, such as phase flips, which can also produce logical errors. Similarly, the code is not fault-tolerant, so it does not protect against all errors that can appear during syndrome measurements, such as bit flips of ancillary qubits in the middle of an entangling gate. Our task is to track the transitions between the 8 encodings produced by bit flips, starting from the initial encoding $III$. We measure the syndromes to update our knowledge of the encoding. At some later time $t$, if we still know the correct encoding then we have tracked all bit-flip errors successfully and thus can correctly decode the state. However, if we incorrectly track the encoding, then we have failed to track bit-flip errors, so trying to decode the state will produce a logical error. We define the (binary) \emph{fidelity} $f(t)\in\{0,1\}$ of error tracking after a duration $t$ to be 1 if the knowledge of the encoding matches the true encoding, and 0 if they differ. The \emph{average fidelity} $F(t)\in[0,1]$ is the average of the binary fidelity over many tracking realizations---equivalent to the process fidelity in quantum process tomography---and serves as a useful performance metric.
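As a concrete illustration of the tracking rule described in this subsection, the following minimal Python sketch (our own illustration; the lookup tables and function name are hypothetical) updates the encoding estimate from a sequence of ideal syndrome outcomes, assuming at most one bit flip between consecutive syndrome checks.
\begin{verbatim}
# Syndromes (Z1Z2, Z2Z3) for each encoding k = 0..7,
# following the mapping in the text and Fig. 1.
SYNDROME = {0: (+1, +1), 1: (-1, +1), 2: (-1, -1), 3: (+1, -1),
            4: (+1, -1), 5: (-1, -1), 6: (-1, +1), 7: (+1, +1)}

# Single-flip neighbors of each encoding (edges of Fig. 1).
NEIGHBORS = {0: (1, 2, 3), 1: (0, 4, 5), 2: (0, 4, 6), 3: (0, 5, 6),
             4: (1, 2, 7), 5: (1, 3, 7), 6: (2, 3, 7), 7: (4, 5, 6)}

def update_estimate(current_k, syndrome):
    # Keep the current encoding if its syndrome matches;
    # otherwise move to the unique single-flip neighbor
    # consistent with the new syndrome.
    if SYNDROME[current_k] == syndrome:
        return current_k
    for k in NEIGHBORS[current_k]:
        if SYNDROME[k] == syndrome:
            return k
    raise ValueError("syndrome inconsistent with a single flip")
\end{verbatim}
In this picture, a logical error corresponds to two or more flips between checks, after which \texttt{update\_estimate} silently settles on the wrong branch of Fig.~\ref{fig:Markov}, exactly the failure mode quantified by the fidelity defined above.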
\subsection{Bit-flip error model and fidelity}\label{sec:code:errormodel} For simplicity of analysis, we assume that bit flips occur independently, infrequently, and at a slow but constant rate $\mu$ per qubit, so that the flips are Poisson-distributed in time. We take the bit-flip rate to be equal for each of the three qubits for simplicity and symmetric with respect to the bit states. To focus on the flipping dynamics, we work in the rotating frame of the physical qubits, which remain uncoupled, so the effective idling Hamiltonian is zero. With these assumptions, the bit-flip-tracking task reduces to finding the evolution of a hidden Markov model \cite{zucchini2017hidden,gales2008theapplication,vanhandel2005optimal,mabuchi2009continuous}, with the possible transitions illustrated as the arrows in Fig.~\ref{fig:Markov}. Each encoding $k=0,\ldots,7$ described in the previous subsection has a probability $P_k\in[0,1]$ such that $\sum_k P_k = 1$. We assume the initial encoding state is $III$. The master equation that describes the jump processes on average can then be expressed as a matrix equation \begin{align}\label{eq:mastereqmatrix} \partial_t \vec{P} &= \mathbf{M}\, \vec{P}, & P_0(0) &= 1, \end{align} with probability vector $\vec{P} = [P_0\, P_1\, P_2\, P_3\, P_4\, P_5\, P_6\, P_7]^T$ and Markov transition matrix \begin{align}\label{eq:markovmatrix} \mathbf{M} &= \mu\begin{bmatrix} -3 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & -3 & 0 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & -3 & 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & -3 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & -3 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & -3 & 0 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & -3 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & -3 \end{bmatrix}. \end{align} Note that we neglect double-flip or triple-flip processes in this matrix. The solution $\vec{P}(t) = \exp(t\mathbf{M})\,\vec{P}(0)$ asymptotically approaches the uniform distribution as a fixed point for large $t$, $\lim_{t\to\infty} P_k = 1/8$. The average encoding fidelity $F(t) \equiv P_0(t)$ may be obtained by solving Eq.~\eqref{eq:mastereqmatrix}. Each bit flips independently, so the solution factors into a product of exponential decays of each bit to an asymptotic flip probability of $1/2$. The average fidelity with no jump tracking is thus \begin{align}\label{eq:fidnotrack} F(t) \equiv P_0(t) &= \left[\frac{1 + \exp(-2\mu t)}{2}\right]^3 \\ &= 1 - 3\mu t + 6 \mu^2 t^2 + \cdots. \nonumber \end{align} The fractional deviation of this decay from the linear regime is $(6\mu^2 t^2)/(3\mu t) = 2\mu t$. For later optimizations we bound this deviation by $1/15$ to ensure that the linear approximation $F(t) \approx 1 - 3\mu t$ is a reasonable decay model, which bounds $\mu t \leq 1/30$ and thus the maximum average fidelity drop while remaining in the linear regime to $1-F(t) \leq 10\%$. For practical error-tracking purposes, it is sufficient to focus on improving the short-time fidelity with linear decay by tracking the jumps with syndrome measurements. After including jump tracking, the approximate form of the fidelity in the linear regime will be \begin{align}\label{eq:lineardecay} F(t) = 1 - \Delta F_{\text{in}} - \Gamma t, \end{align} where $\Delta F_{\text{in}}$ is the \emph{initial drop in fidelity} on a short time scale, while $\Gamma$ is the average \emph{logical error rate} for longer time scales after the tracking method takes full effect. In later sections we will derive approximate expressions for and optimize this linear drop in fidelity to assess the relative performance between error correction methods. 
We will find that with good error correction the optimized linear decay $\Gamma$ scales as $\mu^2$ after the short-duration initial drop in fidelity $\Delta F_{\text{in}}$ that is still linear in $\mu$.

\section{Physical Setup}\label{sec:setup} The goal of a physical realization of the three-qubit code is to perform the syndrome measurements of the two-qubit parities $(Z_1Z_2,\,Z_2Z_3)$ and use them to track bit flips to preserve the knowledge of the logical state encoding. We focus on direct syndrome measurement that is continuous in time, in contrast to traditional ancilla-based periodic syndrome measurements. For concreteness, we consider one possible physical implementation with modern superconducting transmon qubits \cite{Koch2007} on a two-dimensional wafer, shown in Fig.~\ref{fig:Experimental-setup}, where the parities are measured continuously via dispersive coupling to microwave resonators \cite{LivingstonAPS2018,LivingstonAPS2019}. While a third parity measurement of $Z_1Z_3$ is possible in principle by wrapping the qubits into a ring, it increases the complexity of the hardware.

\begin{figure}[t] \begin{centering} \includegraphics[width=\columnwidth]{Fig/Figure02.pdf} \par\end{centering} \caption{Possible experimental setup for continuous bit-flip error correction using the three-bit code. The parity of neighboring qubits $(z_i z_j) = \pm 1$ is measured directly by coupling both qubits to the same readout resonator such that they each dispersively shift the resonator frequency by the same amount, $\chi_{i,j}$. Pumping on resonance then populates the resonator field entangled with the odd parity subspace, leaving the even parity subspace entangled with a near-vacuum state. Homodyne measurement produces a stochastic signal, $I_{i,j}(t)$, that directly reveals the parity after renormalization, $I_{i,j}(t) \to r_{i,j}(t) = (z_i z_j)(t) + \sqrt{\tau}\,\xi_{i,j}(t)$, with $\langle \xi_{i,j}(t)\xi_{i,j}(0) \rangle = \delta(t)$. Integrating this stochastic signal for the characteristic time scale $\tau$ produces a unit signal-to-noise ratio for identifying the qubit-qubit parity. \label{fig:Experimental-setup}} \end{figure}

In this configuration, the readout resonators are coupled to pairs of data qubits to directly measure the parity. The dispersive shifts $\chi$ for each qubit (e.g., $\chi_{1,2}$ or $\chi_{2,3}$ in Fig.~\ref{fig:Experimental-setup}) must be tuned to be identical, such that they are comparable to or greater than the linewidth $\kappa$ of the resonator, $2\chi \gg \kappa$. By fabricating qubits 1 and 3 with tunable SQUID loops, the dispersive shifts can be tuned to match as required \cite{LivingstonAPS2018,LivingstonAPS2019}. For the odd-parity subspace with two-qubit states $|01\rangle$ and $|10\rangle$, the two equal and opposite shifts of $+\chi$ and $-\chi$ cancel, leaving the resonator at its original resonance frequency. For the even-parity subspace, $|11\rangle$ shifts the frequency by $2\chi \gg \kappa$ while $|00\rangle$ shifts it by $-2\chi$, so that the linewidths do not overlap strongly. Hence, the resonant pump will produce a non-vacuum steady-state field in the resonator only for the odd-parity subspace, leaving the even-parity subspace in vacuum.
The parity subspaces therefore become entangled with two distinct coherent fields: $(c_{00}|00\rangle + c_{11}|11\rangle + c_{01}|01\rangle + c_{10}|10\rangle)|\alpha=0\rangle \to (c_{00}|00\rangle + c_{11}|11\rangle)|\alpha=0\rangle + (c_{01}|01\rangle + c_{10}|10\rangle)|\alpha = \beta\rangle$ with $|\beta|>0$. This entanglement enables homodyne measurement of the leaked resonator field to distinguish the subspaces. The coherence of each subspace, however, remains essentially unperturbed because the fields for each parity subspace are indistinguishable within the subspace. Realistically, imperfect field overlap can still dephase the parity subspaces, which is an imperfection analogous to entangling-gate infidelity in ancilla-based parity measurements. For simplicity of analysis, we assume this dephasing is sufficiently slow to neglect. After amplifying the leaked fields and measuring them via homodyne detection along the maximally informative field quadrature, a stochastic signal is obtained for each parity resonator. The resonator connected to data qubits 1 and 2 produces the signal $I_{1,2}(t)$, while the resonator connected to data qubits 2 and 3 produces the signal $I_{2,3}(t)$. After proper shifting and normalization, these signals approximate moving-mean Gaussian stochastic processes centered at the parity eigenvalues $(z_iz_j) = \pm 1$: \begin{equation}\label{eq:paritysignals} \begin{aligned} dr_{1,2}(t) &= (z_1z_2)(t)\,dt + \sqrt{\tau_{1,2}}\,dW_{1,2},\\ dr_{2,3}(t) &= (z_2z_3)(t)\,dt + \sqrt{\tau_{2,3}}\,dW_{2,3}. \end{aligned} \end{equation} Here, $dW_{1,2}$ and $dW_{2,3}$ are statistically independent Wiener increments, each with zero-mean Gaussian statistics and variance $dt$. Formally these increments can also be understood as $\delta$-correlated white noise, $\xi_{i,j} \equiv dW_{i,j}/dt$, with $\langle \xi_{i,j}(t)\xi_{i,j}(t')\rangle = \delta(t-t')$. For simplicity in what follows, we assume both noises are characterized by the same \emph{characteristic measurement timescale} $\tau_{1,2} = \tau_{2,3} = \tau$, which signifies the integration duration needed to achieve unit signal-to-noise ratio (SNR). The parity information can thus be recovered by processing the stochastic signal over a duration of time. As a temporal reference in simulations, we will fix the measurement timescale to be fast, $\tau=100$ ns, and consider relatively slow bit-flip rates in the range $\mu\tau\in[10^{-6},\,10^{-3}]$. This direct parity-measurement method reduces hardware resources compared to an ancilla-qubit-based approach. Such a gate-based approach would require two additional ancilla qubits to measure the parities (in addition to the readout resonators for each ancilla qubit), as well as periodic entangling gates and projective measurements. In contrast, the direct parity-measurement method considered here requires only a single readout resonator per parity measurement. The direct method also yields a raw, time-continuous parity signal, which can be processed in two distinct ways for the purposes of error correction (a simulation sketch of this signal model follows at the end of this section): \begin{enumerate} \item Continuous filtering to track the most likely errors that have occurred in real time \item Periodic filtering by integrating and thresholding over consecutive durations $\Delta t$ \end{enumerate} Notably, the second method can use the same error-tracking algorithm as for the ancilla-based approach with periodic projective measurements. We now analyze both methods in the following sections.
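To make the signal model of Eqs.~\eqref{eq:paritysignals} concrete, the following minimal sketch (our illustration, with hypothetical parameter values) generates discretized noisy parity readouts for three qubits undergoing independent Poisson-distributed bit flips; either processing method above can then be applied to the resulting arrays.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
tau = 100e-9     # measurement timescale (100 ns)
mu = 1e-4 / tau  # bit-flip rate per qubit, so mu*tau = 1e-4
dt = tau / 10    # discretization step
steps = 20000

z = np.array([1, 1, 1])  # Pauli-Z eigenvalues of the three qubits
r12 = np.empty(steps)
r23 = np.empty(steps)

for n in range(steps):
    # Independent bit flips with probability mu*dt per step.
    flips = rng.random(3) < mu * dt
    z[flips] *= -1
    # Discretized readout r = dr/dt: parity plus white noise
    # of standard deviation sqrt(tau/dt).
    r12[n] = z[0] * z[1] + np.sqrt(tau / dt) * rng.standard_normal()
    r23[n] = z[1] * z[2] + np.sqrt(tau / dt) * rng.standard_normal()
\end{verbatim}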
\section{Continuous Bayesian filter}\label{sec:continuous} Environmental perturbations during monitoring cause jumps between the encoding states in Fig.~\ref{fig:Markov}. Encodings connected by a single jump have distinct parity eigenvalues, so the means of the noisy parity signals in Eqs.~\eqref{eq:paritysignals} will correspondingly jump. Integrating these noisy signals with a moving temporal window with a duration longer than $\tau$ can therefore identify infrequent single jumps \cite{Slichter2016}, allowing the changes in logical encoding to be tracked via the changing syndromes. However, if multiple jumps occur within a time scale comparable to $\tau$, then the noise can prevent the jumps from being identified before the encoding jumps to a complementary one with a parity indistinguishable from the original one. Such a misidentification of an encoding with its complementary encoding is a logical error that will not be corrected by continued monitoring. It is thus important to filter the noisy signals in a way that minimizes misidentification errors caused by rapid successive jumps. An optimal time-continuous filter can be derived by using all available information to process the time-continuous noisy signals. The key idea is to update the encoding probabilities $\vec{P}$ at each moment in time using Bayes' rule, which requires known likelihoods of observing the collected signals given definite parities and a known estimate of the flipping rate $\mu$. The maximum resulting probability then indicates the best guess for the updated encoding. Importantly, the fidelity of tracking the encoding is determined only by the correctness of the estimate at the final time. The assumed Markovian dynamics imply that each random jump and random noise fluctuation is independent of past fluctuations, so adding information from temporally extended signal correlations will not improve the final state estimate. In particular, time-symmetric smoothing methods \cite{Simonoff1998,Ellison2009,Einicke2012} that process the past signal still produce estimates identical to forward-in-time estimates for the state at the final time (as we have verified numerically), even though they do generally improve the tracking fidelity for past jumps. It is thus sufficient to consider only forward-in-time Bayesian updates to derive a filter that uses all relevant information about the stochastic signal and the flipping dynamics to achieve an optimal state estimate at the final time. Such a time-continuous Bayesian filter is known as a Wonham filter \cite{wonham1964someapplications}, and has been applied to continuous error correction of the three-bit code by van Handel and Mabuchi \cite{vanhandel2005optimal,mabuchi2009continuous}. However, the Wonham filter contains a nonlinear update from Bayes' rule that reduces its computational efficiency during real-time processing of the stochastic signals. To address this problem, we introduce a variation of the Wonham filter that removes this nonlinearity to improve computational efficiency. Our \emph{linear Wonham filter} uses unnormalized probabilities $\vec{\sigma}(t)$ that reproduce the correct probabilities after renormalization $\vec{P}(t) = \vec{\sigma}(t)/||\vec{\sigma}(t)||_1$ with the 1-norm $||\vec{\sigma}(t)||_1 = \sum_{k=0}^7 \sigma_k(t)$. These unnormalized probabilities can be renormalized periodically only as needed, drastically reducing the computational overhead of real-time processing with the filter.
We expect this linear filter to be suitable for real-time processing with field-programmable gate arrays (FPGAs) to enable on-demand state estimation and feedback. We derive and analyze this filter in what follows. \subsection{Derivation of linear Bayesian filter}\label{sec:continuous:bayes} Recall that the Markovian master equation for the encoding probabilities $\vec{P}(t)$ without error tracking is Eq.~\eqref{eq:mastereqmatrix}. The goal is to update this evolution to include the information gained by the stochastic parity measurements. This new information will refine the probability evolution with Bayes' rule. Before deriving the linear filter, we first derive the nonlinear Wonham filter for comparison. The deterministic dynamics of the bit flips are unchanged by the probabilistic updates from the measurement results, so the contribution of the averaged master Eq.~\eqref{eq:mastereqmatrix} will be unchanged in the final dynamical equation. For this reason, we will initially neglect this deterministic part in the derivation, then add it back at the end. \subsubsection{Nonlinear Bayesian (Wonham) filter} After averaging the stochastic signals over a short duration $dt$, the rescaled readouts $\bar{r}_{1,2}$ and $\bar{r}_{2,3}$ for the two continuous parity measurements are Gaussian with independent noises according to Eqs.~\eqref{eq:paritysignals}, so the joint probability density of both results is a product of Gaussian distributions, \begin{align} P(\bar{r}_{1,2}, \bar{r}_{2,3}\, |\, k) &= P(\bar{r}_{1,2}\,|\,k)P(\bar{r}_{2,3}\,|\,k), \\ P(\bar{r}_{i,j}\,|\,k) &= \frac{\exp(-dt(\bar{r}_{i,j} - s_{i,j|k})^2/2\tau)}{\sqrt{2\pi\tau/dt}}. \end{align} Here the index $k = 0, \ldots, 7$ indicates a definite encoding as described in Fig.~\ref{fig:Markov}, and the means $s_{i,j|k} = \pm 1$ are the parity eigenvalues of the encoding shown in Table~\ref{tab:parities}. \begin{table}[t] \begin{center} \begin{tabular}{ c|r@{\hspace{1em}}|r@{\hspace{1em}} } \;$k$\; & \; $s_{1,2|k}$ &\; $s_{2,3|k}$ \\ \hline \hline 0 & +1 & +1 \\ 1 & -1 & +1 \\ 2 & -1 & -1 \\ 3 & +1 & -1 \\ 4 & +1 & -1 \\ 5 & -1 & -1 \\ 6 & -1 & +1 \\ 7 & +1 & +1 \\ \end{tabular} \end{center} \caption{Parity eigenvalues $s_{i,j|k}$ for each encoding $k$.} \label{tab:parities} \end{table} After collecting integrated readouts, each encoding probability $P_k$ in $\vec{P}$ should be updated via Bayes' rule, \begin{equation}\label{eq:bayesrule} P_k \xrightarrow{(\bar{r}_{1,2},\,\bar{r}_{2,3})} \frac{P(\bar{r}_{1,2},\, \bar{r}_{2,3}\,|\, k) P_k}{\sum_\ell P(\bar{r}_{1,2},\, \bar{r}_{2,3}\,|\,\ell)P_\ell}. \end{equation} Since the likelihood probabilities are Gaussian with means that always square to 1, this update simplifies considerably to \begin{equation}\label{eq:bayesgauss} P_k \to \frac{\exp\left[(dt/\tau)(\bar{r}_{1,2}s_{1,2|k} + \bar{r}_{2,3}s_{2,3|k})\right] P_k}{\sum_\ell \exp\left[(dt/\tau)(\bar{r}_{1,2}s_{1,2|\ell} + \bar{r}_{2,3}s_{2,3|\ell})\right]P_\ell}. \end{equation} This update is already sufficient to track the most likely state given a temporal sequence of integrated readouts $\{\bar{r}_{i,j}(n\, dt)\}_{n=0}^N$. However, it can be conceptually useful to put the time-continuous deterministic evolution of Eq.~\eqref{eq:mastereqmatrix} on equal footing with the Bayesian updates by taking the time-continuous limit of the latter to produce a \emph{filtering equation} that includes both stochastic and deterministic updates.
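As a concrete illustration of Eq.~\eqref{eq:bayesgauss}, the following short Python sketch (ours; the arrays encode the parity eigenvalues of Table~\ref{tab:parities}) applies one such Bayesian update for a pair of integrated readouts.
\begin{verbatim}
import numpy as np

# Parity eigenvalues s_{1,2|k} and s_{2,3|k} for encodings k = 0..7
S12 = np.array([+1, -1, -1, +1, +1, -1, -1, +1])
S23 = np.array([+1, +1, -1, -1, -1, -1, +1, +1])

def bayes_update(P, rbar12, rbar23, dt, tau):
    """One update of Eq. (bayesgauss) for integrated readouts rbar."""
    weights = np.exp((dt / tau) * (rbar12 * S12 + rbar23 * S23))
    P = weights * P
    return P / P.sum()   # normalization = denominator of Bayes' rule
\end{verbatim}
Iterating this update over a sequence of integrated readouts, interleaved with the deterministic evolution of Eq.~\eqref{eq:mastereqmatrix}, already implements the tracking described above.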
To produce this filtering equation, we expand the Bayesian update to first order in $dt$ to obtain a nonlinear stochastic differential equation (SDE) in Stratonovich form (with time-symmetric derivative obeying standard calculus rules), to which we can simply add the deterministic part of the evolution from Eq.~\eqref{eq:mastereqmatrix}, giving \begin{align}\label{eq:wonham} & \text{(Stratonovich)} \\ \partial_t P_k &= \sum_\ell \left[ M_{k\ell} + (\delta_{k\ell} - P_k)\, \frac{s_{1,2|\ell}\,r_{1,2} + s_{2,3|\ell}\,r_{2,3}}{\tau} \right] P_\ell, \nonumber \end{align} where $M_{k\ell}$ are the components of the transition matrix $\mathbf{M}$ in Eq.~\eqref{eq:markovmatrix}, and $\delta_{k\ell}$ is the Kronecker delta. This equation can be used directly to convert the data stream into a state estimate. \begin{figure*}[t] \begin{center} \includegraphics[width=1.4\columnwidth]{Fig/Figure03a.png} \\ \includegraphics[width=1.4\columnwidth]{Fig/Figure03b.png} \end{center} \caption{Two examples of the linear Bayesian filter used for bit-flip tracking. The initial encoding is $III$ (see Fig.~\ref{fig:Markov}), with a characteristic measurement time of $\tau = 0.1\,\mu$s and a bit-flip rate of $\mu = 5\times 10^{-3}\,(\mu\text{s})^{-1}$. (top) Successful tracking. Single bit flips are identified after a brief delay comparable to $\tau$. Three filter errors caused by noise fluctuations are shown at times $160\,\mu$s, $255\,\mu$s, and $275\,\mu$s, all of which are quickly self-corrected. (bottom) Unsuccessful tracking due to a logical error. Two bit flips (bits 1 and 3) occur in rapid succession at $10\,\mu$s, faster than the time scale $\tau$ can resolve. The filter incorrectly interprets this pair as a bit 2 flip, which is a logical encoding error. The filter never recovers, and continues tracking the complementary encoding.} \label{fig:Bayesian} \end{figure*} The stochastic process can be modeled by converting the SDE to It\^o form with a forward-difference derivative, which modifies the equation by adding an effective drift term \cite{Gardiner1997}. After a lengthy calculation, the added drift cancels the means of the stochastic signals $r_{i,j}(t) = (z_iz_j)(t) + \sqrt{\tau}\,\xi_{i,j}(t)$ to leave only the zero-mean white noise $\xi_{i,j}(t) \equiv dW_{i,j}(t)/dt$, \begin{align}\label{eq:wonhamito} & \text{(It\^o)} \\ \partial_t P_k &= \sum_\ell \left[ M_{k\ell} + (\delta_{k\ell} - P_k)\, \frac{s_{1,2|\ell}\,\xi_{1,2} + s_{2,3|\ell}\,\xi_{2,3}}{\tau} \right] P_\ell. \nonumber \end{align} Since in It\^o form the noise terms $\xi_{i,j}$ are uncorrelated with each other and with the state probabilities at the earlier time step, this form of the equation makes it clear that averaging over all noise realizations eliminates the stochastic terms, leaving just the drift, to correctly recover the original Lindblad-form master Eq.~\eqref{eq:mastereqmatrix} without tracking. This SDE is the nonlinear Wonham filter used by van Handel and Mabuchi \cite{wonham1964someapplications,mabuchi2009continuous}. \subsubsection{Linear Bayesian filter} We will now linearize the Wonham filter by removing the nonlinear normalization step from the Bayesian update. To do this, we define ``unnormalized probabilities'' $\vec{\sigma}$ such that $\vec{\sigma}/||\vec{\sigma}||_1 = \vec{P}$ recovers the same encoding probabilities as before.
We then modify the key Bayesian update step of Eq.~\eqref{eq:bayesgauss} by omitting the denominator: \begin{equation}\label{eq:bayesgausslinear} \sigma_k \xrightarrow{(\bar{r}_{1,2},\,\bar{r}_{2,3})} \exp\left[\frac{dt}{\tau}(\bar{r}_{1,2}s_{1,2|k} + \bar{r}_{2,3}s_{2,3|k})\right] \sigma_k. \end{equation} Note that we have preserved the cancellation of state-independent Gaussian factors in the Bayesian update to prevent irrelevant (state-independent) changes in the norm. This \emph{linearized} update isolates only the state-dependent changes to the unnormalized probabilities. Proceeding as before, we can expand this update to linear order in $dt$ to obtain a filtering equation. Importantly, the instantaneous signals $r_{i,j}(t) = (z_i z_j)(t) + \sqrt{\tau}\,\xi_{i,j}(t)$ depend only on the definite parity $(z_i z_j)(t) = \pm 1$ of the actual encoding (i.e., not an expectation value in an estimated state), so they do not depend on the distinction between $\vec{P}$ and $\vec{\sigma}$. We then add the deterministic updates as before with one modification: we remove the diagonal part $-3\mu\mathbf{I}$ of the transition matrix in Eq.~\eqref{eq:markovmatrix} that is proportional to the identity matrix $\mathbf{I}$. Any such term proportional to the identity causes irrelevant increases in the norm of $\vec{\sigma}$ and thus can be added or removed arbitrarily without affecting the relative sizes of its components. This freedom of choice is analogous to choosing a gauge and will be useful in the derivation to follow. The Stratonovich filtering equation then takes the simple linear form \begin{align}\label{eq:linearbayes} \partial_t \vec{\sigma} &= \left(\widetilde{\mathbf{M}} + \frac{r_{1,2}\mathbf{S}_{1,2} +r_{2,3}\mathbf{S}_{2,3}}{\tau}\right)\,\vec{\sigma}, \end{align} where \begin{align} \label{eq:markovmatrixmod} \widetilde{\mathbf{M}}_{k\ell} &\equiv (1-\delta_{k\ell})\,\mathbf{M}_{k\ell}, \\ (\mathbf{S}_{i,j})_{k\ell} &\equiv \delta_{k\ell}\,s_{i,j|k}. \end{align} This \emph{linear Bayesian filter} directly depends on the measured signals $r_{i,j}(t)$ as well as the diagonal matrices $\mathbf{S}_{i,j}$ of the parities $s_{i,j|k}$ shown in Table~\ref{tab:parities}. Converting this equation to It\^o form simply adds a state-independent drift term, $\vec{\sigma}/\tau$, on the right-hand side that only changes the overall norm and can thus be omitted. As a result, Eq.~\eqref{eq:linearbayes} can be understood in either the Stratonovich or It\^o picture without changing the results that will be predicted by the maximum unnormalized probability, \begin{equation} k_{\text{est}} \equiv \text{argmax}_k\,\sigma_k. \end{equation} We show examples of this filter being used for error tracking in Fig.~\ref{fig:Bayesian}. We contrast one example trial with only single bit-flip errors (top) against one trial with one ``double-flip'' error that is uncorrectable (bottom). In the successful tracking case, the filter tightly tracks the actual encoding jumps, with a delay in detecting jumps set by the characteristic measurement time $\tau$. Noise fluctuations cause occasional errors that are rapidly corrected on the same time scale $\tau$. In the unsuccessful tracking case, two successive jumps that occur on a timescale faster than $\tau$ are misinterpreted as a different single bit flip, which produces a logical error that the filter is not designed to correct.
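For concreteness, a minimal Euler-discretized sketch of the linear filter Eq.~\eqref{eq:linearbayes} follows (our own illustration, not an FPGA implementation; the encoding adjacency is built from bit masks consistent with Fig.~\ref{fig:Markov} and Table~\ref{tab:parities}, and the rescaling threshold is an arbitrary choice to avoid numerical overflow). The arrays \texttt{S12}, \texttt{S23}, \texttt{r12}, \texttt{r23} and the parameters \texttt{mu}, \texttt{tau}, \texttt{dt}, \texttt{steps} are as in the earlier sketches.
\begin{verbatim}
import numpy as np

# Encoding bit masks for k = 0..7: III, XII, IXI, IIX, XXI, XIX, IXX, XXX
masks = [0b000, 0b100, 0b010, 0b001, 0b110, 0b101, 0b011, 0b111]
Mt = np.zeros((8, 8))                  # off-diagonal transition matrix
for k in range(8):
    for l in range(8):
        if bin(masks[k] ^ masks[l]).count("1") == 1:
            Mt[k, l] = mu              # single-bit-flip neighbors

sigma = np.zeros(8)
sigma[0] = 1.0                         # start in encoding III
for n in range(steps):
    # diagonal measurement term (r12 S12 + r23 S23)/tau acts elementwise
    meas = (r12[n] * S12 + r23[n] * S23) / tau
    sigma += dt * (Mt @ sigma + meas * sigma)
    if sigma.max() > 1e100:            # renormalize only as needed
        sigma /= sigma.max()
k_est = int(np.argmax(sigma))          # best estimate of the encoding
\end{verbatim}
A small step $dt$ is required for this explicit Euler scheme; more sophisticated integrators could be substituted without changing the structure of the filter.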
\subsection{Bayesian filter analysis}\label{sec:continuous:analysis} We now derive simple expressions for the linear degradation of state-tracking fidelity in Eq.~\eqref{eq:lineardecay}. We first consider the initial fidelity drop $\Delta F_{\text{in}}$, then consider the linear decay rate $\Gamma$ at steady state after the filter takes full effect. The Bayesian filter has no free parameters to optimize; it only depends on knowledge of the parity eigenvalues, the characteristic measurement time $\tau$ of the collected signals, and the estimated bit-flip rate $\mu$. As a result, the derived expressions for fidelity decay provide an estimate for the best tracking fidelity that could be achieved in principle with continuous parity measurements. \subsubsection{Initial fidelity drop} The Bayesian filter has an initial drop in fidelity $\Delta F_{\text{in}}$ primarily because of its delayed response to a bit flip. This delay makes the filter vulnerable to bit flips that occur just before the final state estimate is requested. We thus expect a drop in fidelity by the probability of a bit flip occurring within one filter response time. We stress that this is a general feature of all error correction techniques and is in no way special to our protocol. The filter starts at time $t=0$ with the correct encoding. Since the initial encoding is 0 $(III)$ with certainty, we focus on the encodings reachable by one bit flip: 1 ($XII$), 2 ($IXI$), and 3 ($IIX$) according to the numbering in Fig.~\ref{fig:Markov}. For simplicity, we initially neglect the Gaussian noise in Eq.~\eqref{eq:linearbayes} to focus on the evolution caused by the signal means. We also use the freedom of the norm to add a constant term to Eq.~\eqref{eq:linearbayes} that shifts the diagonal measurement terms after each jump to $0$ for the state consistent with both measured parities, with each inconsistent parity contributing $-2/\tau$; this shift simplifies the analysis by keeping the correct state's (unnormalized) probability nearly constant. Focusing on the four relevant states from the previous paragraph, $(\sigma_0, \sigma_1, \sigma_2, \sigma_3)$, the equation of motion is \begin{equation} \partial_t {\vec \sigma} = \begin{pmatrix} 0 & \mu & \mu & \mu \\ \mu & -2/\tau & 0 & 0\\ \mu & 0 & -4/\tau & 0\\ \mu & 0 & 0 & -2/\tau \end{pmatrix} \cdot {\vec \sigma}. \end{equation} Here we have assumed that the true state of the system is $(III)$, so the parities read by the detectors are $\langle r_{1,2} \rangle= 1, \langle r_{2,3} \rangle = 1$ (even,even). From Eq.~(\ref{eq:linearbayes}) the diagonal term is thus $\{1, -1, -1, 1\}/\tau + \{1, 1, -1, -1\}/\tau = \{2, 0, -2, 0\}/\tau$. We have used the freedom of the overall norm of $\vec \sigma$ to subtract a factor of $2{\bf I}/\tau$ from all diagonal entries, so the correct state (0) does not grow, but rather the incorrect states decay. It is now straightforward to see that starting from the initial condition $(1, 0, 0, 0)$ the components $\sigma_1$ and $\sigma_3$ reach a steady state of $\mu \tau/2$, while the component $\sigma_2$ reaches its steady state of $\mu \tau/4$. The true component $\sigma_0$ actually grows very slowly from 1 as $\sigma_0 \approx 1 + (5/4)\mu^2 \tau t$, but we can neglect this correction on the time scales of interest. This is then the idling state of the filter while in the error-free original state. Suppose now that a bit flip occurs, taking the encoding to $XII$, $IXI$, or $IIX$. We first consider a flip of bit 1. The parity eigenvalues then change immediately to the values $\langle r_{1,2} \rangle= -1, \langle r_{2,3} \rangle = 1$ (odd,even).
Consequently, the filter equation changes the diagonal term to $-\{1, -1, -1, 1\}/\tau + \{1, 1, -1, -1\}/\tau = \{0, 2, 0, -2\}/\tau$, and we again shift the overall matrix by $-2{\bf I}/\tau$ to get the new equation \begin{equation} \partial_t {\vec \sigma} = \begin{pmatrix} -2/\tau & \mu & \mu & \mu \\ \mu & 0 & 0 & 0\\ \mu & 0 & -2/\tau & 0\\ \mu & 0 & 0 & -4/\tau \end{pmatrix} \cdot {\vec \sigma}, \end{equation} where we now have the initial condition just found, $(1, \mu \tau/2, \mu \tau/4, \mu \tau/2)$. These equations are readily solved to find $\sigma_0 = e^{-2 t/\tau}$ and $\sigma_1(t) = \mu \tau - (\mu \tau/2) e^{-2 t/\tau}$, which quickly limits to its new steady state of $\mu \tau$. The filter is able to catch the error when the value of $\sigma_1$ exceeds the value of $\sigma_0$; the time at which this crossing occurs defines the {\it response time} of the filter. Solving $\mu \tau = e^{-2 t/\tau}$, we find the response time of the filter to a qubit 1 flip to be $t_r^{(1)} = (\tau/2) \ln (1/\mu \tau)$. The {\it initial drop} of the filter fidelity arises when a bit flip occurs before the filter has had time to respond. Consequently, if the state estimate is requested after an error occurs but before the filter can respond, a logical error results. The probability of this occurring is the drop in fidelity, $\Delta F = \mu t_r^{(1)}$, which is linear in $\mu$. Repeating this analysis for qubit 3 gives the same result, $t_r^{(3)}= (\tau/2) \ln (1/\mu \tau)$. For a qubit 2 error, both parities change (odd,odd), so the relevant equation of motion is \begin{equation} \partial_t {\vec \sigma} = \begin{pmatrix} -4/\tau & \mu & \mu & \mu \\ \mu & -2/\tau & 0 & 0\\ \mu & 0 & 0 & 0\\ \mu & 0 & 0 & -2/\tau \end{pmatrix} \cdot {\vec \sigma}, \end{equation} because the diagonal term is now $-\{1, -1, -1, 1\}/\tau - \{1, 1, -1, -1\}/\tau = \{-2, 0, 2, 0\}/\tau$, to which we again apply the overall subtraction of $2{\bf I}/\tau$. Starting from the same initial conditions as before, the solutions are $\sigma_0(t) = e^{-4 t/\tau}$, and $\sigma_2(t) = (\mu \tau/2) - (\mu \tau/4)e^{-4 t/\tau}$, which limits quickly to $\mu \tau/2$. Thus, the time when $\sigma_0$ becomes smaller than $\sigma_2$ is given by $t_r^{(2)} = (\tau/4) \ln(2/\mu \tau)$. Adding up the drop contributions for the three single bit flips produces \begin{equation}\label{eq:bayesiandrop} \Delta F_{\text{in}} = \mu\tau\left[\frac{5}{4}\ln\left(\frac{1}{\mu\tau}\right) + \frac{1}{4}\ln 2\right]. \end{equation} The initial drop in fidelity is linear in $\mu$ up to logarithmic corrections, since the error correction has not taken full effect. As a reminder, we expect the long-time decay after error correction takes effect to be quadratic in $\mu$. This estimate for the initial drop has neglected the role of the Gaussian noise in the signals. As seen in Fig.~\ref{fig:Bayesian}, noise fluctuations can occasionally cause the filter to jump to a different state estimate even when no bit flip occurs. These fluctuations produce false positives that are usually quickly corrected on the time scale of the filter response and do not contribute to the logical error rate. However, if such a fluctuation occurs just before the termination time, then the false positive is not corrected before the estimated state is requested, resulting in a misidentification error. A drop contribution from such noise-induced misidentifications should be added to the flip-based drop estimate in Eq.~\eqref{eq:bayesiandrop}.
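The response-time estimates above are straightforward to verify numerically. The following sketch (ours, in assumed units with $\tau = 1$) integrates the reduced four-state equation after a bit-1 flip, starting from the idling steady state found above, and locates the crossing where $\sigma_1$ overtakes $\sigma_0$.
\begin{verbatim}
import numpy as np

mu, tau = 1e-4, 1.0        # assumed illustrative values, mu*tau = 1e-4
dt = 1e-3 * tau

# Reduced dynamics after a bit-1 flip: diagonal {-2, 0, -2, -4}/tau
A = np.array([[-2 / tau, mu, mu, mu],
              [mu, 0.0, 0.0, 0.0],
              [mu, 0.0, -2 / tau, 0.0],
              [mu, 0.0, 0.0, -4 / tau]])
sigma = np.array([1.0, mu * tau / 2, mu * tau / 4, mu * tau / 2])

t = 0.0
while sigma[1] < sigma[0]:  # filter catches the flip once sigma1 > sigma0
    sigma += dt * (A @ sigma)
    t += dt

print(t, 0.5 * tau * np.log(1 / (mu * tau)))   # both ~4.6 tau
\end{verbatim}
The numerically located crossing agrees with $t_r^{(1)} = (\tau/2)\ln(1/\mu\tau)$ up to the small correction from the $\mu\tau/2$ term neglected in the estimate.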
As we present in Section~\ref{sec:simulation}, we have numerically checked Eq.~\eqref{eq:bayesiandrop} for flip rates over the range $\mu\tau \in [10^{-6},10^{-3}]$. We found that the derived expression systematically underestimates the drop by a small amount, as anticipated from the omission of the noise contribution. To correct this systematic underestimation in a crude way, we found it sufficient to alter the numerical prefactor in the first term of Eq.~\eqref{eq:bayesiandrop} by substituting $5/4 \mapsto 3/2$, effectively adding a noise-based contribution that is approximately half that expected from a flip-induced single parity flip, $\mu t^{(1)}_r/2 = (\mu\tau/4)\ln(1/\mu\tau)$. Though physically unjustifiable due to the lack of a true bit flip, this adjustment compensates for the additional noise-based drop and agrees with numerics within the range $\mu\tau \in [10^{-6},10^{-3}]$. A proper treatment of the noise-induced drop is analytically lengthy and beyond the scope of the simple derivations given here, so we make this crude prefactor substitution for simplicity in the plots of Section~\ref{sec:simulation}. \subsubsection{Logical error rate of the Bayesian filter} In addition to the errors contributing to the initial drop in fidelity, which only occur just before the final time, the Bayesian filter is vulnerable at any time to logical errors caused by two consecutive bit flips that occur within one response time of the filter. Since the first bit flip does not have time to be registered by the filter, the two flips will be interpreted as a single flip, which causes the filter to track an incorrect complementary encoding, as shown in the bottom half of Fig.~\ref{fig:Bayesian}. These logical errors require two flips, so they produce a logical error rate that scales as $\mu^2$. Logical errors of this type can be produced by 6 double-flip scenarios, which we can reduce to three distinct cases by symmetry. The sequence of bit 1 flipping then bit 2 flipping (i.e., $\{1,2\}$) produces the same error as the $\{3,2\}$ flip sequence. Similarly, the $\{2,1\}$ and $\{2,3\}$ sequences produce identical errors, as do the $\{1,3\}$ and $\{3,1\}$ sequences. We thus consider only three distinct cases: $\{1,2\}$, $\{2,1\}$, and $\{1,3\}$. We start with the $\{1, 2\}$ case. Consider a bit 1 flip at time $t=0$ (chosen arbitrarily) followed by a bit 2 flip at a later time $T$ that arrives sooner than the filter can resolve. After the second bit flip the correct encoding is $4$ ($XXI$). The filter can sometimes make an error and return the value of $3$ ($IIX$), because it cannot resolve the time between the two parity flips, and sees only the transition from $0$ to $3$, which is connected by a single bit flip on the third qubit (see Fig.~\ref{fig:Markov}). Formally, this occurs if $\sigma_4(t) < \sigma_3(t)$ asymptotically for $t\gg T$. We will now calculate the rate at which this mistake can occur, which leads to a logical error that cannot be corrected. We generalize the analysis of the last subsection by also including the dynamics of state $\sigma_4$, which will be the true state at the end of the flip sequence. This state connects to states 1 and 2 by bit flips, and before the first flip has the equation of motion $\partial_t \sigma_4 = \mu(\sigma_1+\sigma_2) - 2\sigma_4/\tau$. Its steady-state value is $3\mu^2\tau^2/8$. Once qubit 1 flips at time $t=0$, the new diagonal terms of the equation matrix are associated with (odd,even) parities, and become $\{-2,0,-2,-4,-4\}/\tau$.
The equation of motion for state 4 is now $\partial_t \sigma_4 = \mu(\sigma_1+\sigma_2) - 4\sigma_4/\tau$. The steady-state value is $\sigma_4 = \mu^2\tau^2/4$. The results obtained before for $\sigma_0, \sigma_1$ still hold, and we find the solution $\sigma_3(t)= (\mu \tau/2) e^{-2t/\tau}$. For later convenience, we also note that solutions for the other components are $\sigma_2(t) = \mu t e^{-2 t/\tau}$ and $\sigma_5(t) = \mu^2\tau^2/2$ in the steady state. Qubit 2 then flips at time $t=T$, making the true state of the system now 4 ($XXI$) and the parity eigenvalues (even, odd). The new diagonal terms of the equation matrix become $\{-2,-4,-2,0,0\}/\tau$. This indicates that neither state 3 nor state 4 is dynamically suppressed by the filter, because both have the correct parities associated with them. We now reset our time, and need to solve the following approximate set of equations \begin{align} \partial_t \sigma_0 &= -2\sigma_0/\tau,\\ \partial_t \sigma_1 &= -4\sigma_1/\tau, \nonumber\\ \partial_t \sigma_3 &= \mu \sigma_0, \nonumber\\ \partial_t \sigma_4 &= \mu \sigma_1, \nonumber \end{align} starting with the initial conditions $\sigma_0(T) = e^{-2T/\tau}, \sigma_1(T) = \mu \tau, \sigma_3(T) = (\mu \tau/2) e^{-2T/\tau}, \sigma_4(T) = \mu^2 \tau^2/4$. In the equations above, we have kept the leading-order terms assuming $\mu \tau \ll 1$ (states 2, 5, and 7 are not relevant to this discussion). These equations can be solved with standard methods, leading to the asymptotic results for $t \gg T$ of ${\bar \sigma}_3 = \mu \tau e^{-2 T/\tau}$, and ${\bar \sigma}_4 = \mu^2 \tau^2/2$. The filter will give a logical error if ${\bar \sigma}_3 > {\bar \sigma}_4$ since it returns the incorrect state. From the asymptotic results derived above, this error occurs when \begin{equation} T < \frac{\tau}{2} \ln\left(\frac{2}{\mu \tau}\right). \end{equation} This result makes physical sense: if the second error occurs at a time shorter than the filter response time, the filter cannot distinguish the two scenarios we have sketched here. The {\it logical error rate} is the rate at which this kind of process occurs. This rate can be calculated as the rate of the first error occurring, $\mu$, times the probability that a second error occurs within a time $T$ after it, $\mu T$. Consequently, the error rate is given by $\Gamma_{\{1,2\}} = (\mu^2\tau/2) \ln(2/\mu \tau)$. We next consider the $\{2, 1\}$ case. The reverse scenario of a bit 2 flip followed by a bit 1 flip produces a very similar derivation to the one above, which we omit for brevity, and also yields the same condition $T < (\tau/2)\ln(2/\mu\tau)$. Therefore, the contributions to the logical error rate from this scenario (or bit 2 then bit 3 flips) are the same, $\Gamma_{\{2,1\}} = \Gamma_{\{2,3\}} = (\mu^2\tau/2)\ln(2/\mu\tau)$. Finally, we consider the $\{1, 3\}$ case. The scenario of a bit 1 flip followed by a bit 3 flip produces a slightly different result than the other two cases. After the bit 1 flip, at time $T$ we have the same state values as before, the largest of which are $\sigma_0(T) = \exp(-2T/\tau)$ and $\sigma_1(T) = \mu\tau$. Now, qubit 3 flips instead of qubit 2, so the relevant states are 5 ($XIX$), the correct state, and 2 ($IXI$), the incorrect one, with the same parity results of (odd,odd). The relevant diagonal terms in the matrix equation are now $\{-4, -2, 0, -2, -2, 0\}/\tau$, so we now focus on the $\sigma_0, \sigma_1, \sigma_2, \sigma_5$ dynamics.
The equations of motion for this situation are then \begin{align} \partial_t \sigma_0 &= -4\sigma_0/\tau,\\ \partial_t \sigma_1 &= -2\sigma_1/\tau, \nonumber \\ \partial_t \sigma_2 &= \mu \sigma_0, \nonumber \\ \partial_t \sigma_5 &= \mu(\sigma_1 +\sigma_3), \nonumber \end{align} with the initial conditions established after the qubit 1 bit flip, $\sigma_0(T) = e^{-2 T/\tau}$, $\sigma_1(T) = \mu \tau$, $\sigma_2(T) = \mu T e^{-2T/\tau}$, and $\sigma_5(T) = \mu^2 \tau^2/2$. Solving these equations with standard methods yields for $t \gg T$ the asymptotic results ${\bar \sigma}_5 = \mu^2 \tau^2$ and ${\bar \sigma}_2 = \mu T e^{-2 T/\tau}$. The filter commits a logical error when ${\bar \sigma}_2 >{\bar \sigma}_5$, which occurs whenever the separation between the two flips is less than the critical time \begin{equation} T = \frac{\tau}{2} \ln \frac{T}{\mu \tau^2} \approx \frac{\tau}{2} \ln\left[ \frac{2}{\mu \tau}\, \frac{\ln (c/\mu \tau)}{4} \right], \end{equation} and $c$ is a number of order 5 in the logarithm approximation. By symmetry, all other error processes identified at the beginning of this subsection reduce to one of the three types identified above. Adding all six contributions yields the total logical error rate, \begin{align}\label{eq:bayesiangamma} \Gamma = \mu^2\tau \left[3\,\ln\frac{2}{\mu\tau} + \ln\frac{\ln(5/\mu\tau)}{4}\right]. \end{align} We numerically verify this expression over the range of bit-flip rates $\mu\tau\in[10^{-6},10^{-3}]$ in Section~\ref{sec:simulation}. \section{Periodic Filters}\label{sec:periodic} The linear Bayesian filter analyzed in the preceding section produces an optimal estimate and is more computationally efficient than the nonlinear Wonham filter. However, it requires prior knowledge of the bit-flip rate $\mu$ and the Gaussian noise time scale $\tau$, and it still needs several matrix multiplications per time step. We wish to compare this optimal case against simpler and more practical filters that require less prior information and are more easily implementable in hardware, e.g., with FPGAs, to enable on-demand state estimation for purposes of feedback control. We consider variations of a particularly simple ``boxcar-averaging'' filter, which should be well suited for low-latency hardware. Boxcar filters average successive durations $\Delta t$ of the noisy parity signals $r_{i,j}(t)$, then threshold the integrated means $\bar{r}_{i,j} = \int_0^{\Delta t}r_{i,j}(t)\,dt/\Delta t$, often using two thresholds $a_+$ and $a_-$, with $a_+\geq a_-$. That is, if the average signal exceeds the threshold $a_+$, we assign the value $+1$ to the parity. Similarly, if the time-averaged signal is less than the threshold $a_-$, then we assign the value $-1$ to the parity. Any result between the two thresholds $a_+$ and $a_-$ is treated as ambiguous, mandating a separate strategy for resolving the ambiguity. The final outputs of the filter are binary parity results, $(b_{1,2},\,b_{2,3})$ with $b_{i,j} = \pm 1$, for each time duration $\Delta t$. These results can then be used to track changes in the encoding using the syndromes in Eq.~\eqref{eq:syndromes}. This filter is the most direct translation of standard ancilla-based error correction with periodic projective measurements to continuous syndrome measurements. The primary difference is that continuous measurements are always on, and the collected data is only later partitioned into bins of duration $\Delta t$ for averaging and tracking. In what follows, we analyze three simple boxcar-averaging filter variations: \begin{enumerate}[A.]
\item Boxcar filter: The simplest method for averaging sequential time bins of duration $\Delta t$, using symmetric thresholds $a_+=a_-=0$. \item Half-boxcar filter: A non-Markovian modification to the simple boxcar filter that removes its dominant source of error by occasionally processing the averaged signal shifted by a half duration $\Delta t/2$. \item Double-threshold boxcar filter: A Markovian modification to the simple boxcar filter that also removes its dominant source of error, using asymmetric thresholds $a\equiv a_+\geq 0$ and $a_-=0$. \end{enumerate} We find that although the simplest boxcar filter performs poorly compared to the Bayesian filter, the two proposed modifications can achieve performance comparable to the optimal Bayesian filter with significantly less computational overhead. Note that for the filters considered here, only two tunable parameters must be set prior to filtering: the boxcar duration $\Delta t$ for all three filters, and the asymmetric threshold $a\geq 0$ for the double-threshold filter. Over the next few subsections, we identify the dominant error mechanisms and define the three boxcar filters in more detail. For each filter, we derive expressions for the long-time linear decay rate $\Gamma$. We then derive expressions for their initial fidelity drops $\Delta F_{\text{in}}$, since they arise from similar mechanisms. We then analytically optimize the tunable filter parameters to obtain simpler formulas that can be directly compared with those of the Bayesian filter. We numerically verify both the optimal parameters and the derived expressions in Section~\ref{sec:simulation}. \subsection{Boxcar error mechanisms} There are two main mechanisms for causing a change in syndrome in a boxcar filter: (i) A bit flip can occur with rate $\mu$ (yielding a flip probability per averaging box of $\mu\Delta t$), which can alter one or both parities. (ii) Noise fluctuations can cause the average parity signal over a box to appear changed, even though no actual bit flip occurs. Such a parity misidentification will be incorrectly interpreted by the filter as an actual bit flip. Logical errors are produced by sequences of these basic mechanisms occurring at particular times. For example, a bit flip that occurs in the latter half of an averaging box will not produce a detectable parity flip until the subsequent box, which allows time for a second bit flip to occur and place the bits in a state complementary to the estimated state tracked by the filter. Similarly, if one parity is misidentified in an averaging box, a nearby bit flip can confuse the filter so that it tracks an estimate that is complementary to the actual state. We detail these dangerous event sequences in the following sections. Parity misidentification errors play an important role in the following analysis, so we give a general analysis of their occurrence probabilities here. A parity misidentification occurs when the integrated signal for a box is observed to be less than the discrimination threshold $a$, even though no bit flip occurs. Given an integration duration $\Delta t$ and mean parity $r_m$ over that duration (e.g., $r_m = \pm 1$ for a definite parity that persists the entire duration), the probability of obtaining an integrated signal $\bar{r}$ is Gaussian, $P(\bar{r}\,|\,r_m) = \exp(-(\bar{r}-r_m)^2\Delta t/2\tau)/\sqrt{2\pi\tau/\Delta t}$.
Thus, the probability of obtaining an integrated signal less than a discrimination threshold $a$ is \begin{align} \label{eq:Idisc} P(\bar{r} < a\, |\, r_m) &= \int_{-\infty}^a \!\! P(\bar{r}\, |\, r_m)d\bar{r} \nonumber \\ &= \text{erfc} \left[(r_m-a)\sqrt{\Delta t/2 \tau}\right]/2, \end{align} where $\text{erfc}(x) = 1-\text{erf}(x)$ is the complementary error function. The probability of misidentifying a parity of $+1$ as $-1$ is therefore \begin{align}\label{eq:pmis2} P_{\text{mis}}(a) &\equiv P(\bar{r}<a\,|\,{+1}), \end{align} while misidentifying $-1$ as $+1$ has probability \begin{align} P(\bar{r}>a\,|\,{-1}) = P(\bar{r}<-a\,|\,{+1}) = P_{\text{mis}}(-a), \end{align} using the simplification $P(-\bar{r}\,|\,{-1}) = P(\bar{r}\,|\,{+1})$. For the simple boxcar case, when the threshold is the symmetric point $a=0$, these formulas simplify to a single misidentification probability, \begin{align} \label{eq:pmis} P_{\text{mis}} &\equiv P_{\text{mis}}(0), \\ & = \text{erfc}(\sqrt{\Delta t / 2\tau})/2, \nonumber \\ & \approx \exp(-\Delta t/2\tau)\sqrt{\tau/2 \pi \Delta t}. \nonumber \end{align} The final asymptotic exponential approximation is valid when $\Delta t \gg \tau$. Note that for reasonably long integration times $\Delta t \sim 10\tau$, the misidentification probability is less than 0.1\%. \subsection{Boxcar logical error rate}\label{sec:periodic:box} The simplest boxcar filter with symmetric threshold $a=0$ is an important reference case for the other boxcar filter variations. As such, we first analyze its error mechanisms in detail so that we can identify its dominant error. The subsequent boxcar variations will use different strategies to target and correct this dominant error. We focus here on deriving the dominant contributions to the logical error rate $\Gamma$, and delay the consideration of the initial drop $\Delta F_{\text{in}}$ until after all three boxcar variations have been carefully defined. Since the three-bit code is designed to protect against only a single bit-flip error, the most straightforward contributions to the logical error rate $\Gamma$ are from pairs of errors. For example, the two-flip sequence $III \to XII \to XIX$ can be misinterpreted as a single flip $III \to IXI$ to the complementary encoding, causing a logical error. These problematic error pairs can be broadly categorized into three groups: (a) two bit flips, (b) one bit flip and one parity misidentification, or (c) two parity misidentifications. The contributions that involve three or more basic errors are comparatively small, so we will neglect them in this analysis. In addition to these mechanisms involving pairs of errors, however, there is a more subtle and dangerous single-error mechanism: (d) a mid-box flip of bit 2. In this case a single flip $III \to IXI$ can cause the parities to flip in successive boxes due to their independent noise fluctuations. Negative-biased noise fluctuations can make one averaged parity pass the zero threshold within the first box, while positive-biased noise fluctuations can make the other parity pass the zero threshold at a later time that occurs in the subsequent box. As such, the reported parity flips will be interpreted as the sequence $III \to XII \to XIX$ and yield a complementary encoding, causing a logical error. Since this last type of error is caused by a single flip, it is the dominant source of error for the simple boxcar filter, and it is the error that will be removed by the boxcar variations in subsequent sections.
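Before illustrating this dominant error, we note that the misidentification probability is simple to evaluate numerically; the short check below (ours) compares Eq.~\eqref{eq:pmis} with its asymptotic form and confirms the sub-$0.1\%$ value quoted above for $\Delta t = 10\tau$.
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def p_mis(dt_over_tau, a=0.0):
    """P(rbar < a | r_m = +1) from Eq. (Idisc)."""
    return 0.5 * erfc((1.0 - a) * np.sqrt(dt_over_tau / 2.0))

x = 10.0                                              # Delta t = 10 tau
exact = p_mis(x)                                      # ~8.0e-4
asym = np.exp(-x / 2) * np.sqrt(1 / (2 * np.pi * x))  # ~8.5e-4
print(exact, asym)                                    # both below 0.1%
\end{verbatim}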
We illustrate this problematic mid-box bit-2 error mechanism in Fig.~\ref{fig:boxcar}, which we will refer to again when discussing the mechanisms of the boxcar variations that fix this error. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{Fig/Figure04.pdf} \caption{ Most significant boxcar-averaging error. If bit 2 of the three-bit code flips, then both normalized and averaged parity readouts $\bar{r}_{i,j}$ for bits $i$ and $j$ will flip over an averaging time $\Delta t$, as shown by the blue and red diagonal dot-dashed lines. When the flip occurs in the middle of an averaging box as shown, then the averaged readouts may cross the zero threshold at slightly different times due to noise fluctuations, causing one parity to appear flipped in one averaging box while the other appears flipped in the next averaging box. The sequence is interpreted as a succession of flips for bits 1 and 3, producing a logical error. We propose two variations to fix this error: (1) The non-Markovian half-boxcar method reevaluates the midsection between successive averaging boxes when bits 1 and 3 appear to flip successively. If both parities flip for the midsection average, then the succession of flips is correctly reinterpreted as a bit 2 flip. (2) The Markovian double-threshold method introduces a second threshold $a\geq 0$, shown by the horizontal green dashed line, such that a flip in bit 2 is detected when both parity signals drop below this new threshold. }\label{fig:boxcar} \end{center} \end{figure} We now discuss each error contribution in turn. \begin{enumerate}[(a)] \item {\it Two bit flips:} Each bit flip has an independent probability of $\mu\Delta t$. There are three ways to have two distinct flips with 3 qubits. Therefore, the probability for two distinct flips to occur within one box $\Delta t$ is $3(\mu \Delta t)^2$, yielding a contribution to the logical error rate of $\Gamma_\text{bb} = 3(\mu \Delta t)^2/\Delta t = 3\mu^2\Delta t$. More precisely, a bit flip in the first half of a box is likely to be detected, but a bit flip in the second half is unlikely to be detected until the following box. The sensitivity region for flips is thus shifted by a half-box in time from the periodic syndrome information. That is, for two bit-flip errors the danger is in having two flips within a region of duration $\Delta t$ that starts at the midpoint of one box and ends at the midpoint of the next box. However, this temporal shift by a half-box does not affect the reasoning used for the logical error rate. \item {\it One bit flip and one misidentification:} Logical errors generally require two averaging boxes to manifest. Over two consecutive boxes $2\Delta t$ there are two possible parities to misidentify in each box and three half-boxes $\Delta t/2$ in which a flip of one of the three bits could cause parity changes, yielding 36 total pairs of errors to consider. After checking each of these possibilities, we identify 4 classes that produce a logical error: \begin{enumerate}[i.] \item A misidentification and a flip of a complementary bit within the first half of the same box: For example, starting from encoding $III$ a misidentification in channel $r_{1,2}$ and a flip of bit 3 in the first half of the box produces a syndrome with two flipped parities, which is misinterpreted as a bit 2 flip. This error leaves the true encoding in $IIX$ while the estimated encodings follow the sequence $III \to IXI \to XXI$ to produce a logical error.
There are 2 parities to misidentify in one box $\Delta t$, with 2 complementary bits each, so there are 4 possibilities of error, each with probability $\mu(\Delta t/2)\,P_{\text{mis}}$, producing a contribution to the logical error rate of $4(\mu\Delta t/2)\,P_{\text{mis}}/\Delta t = 2\mu\,P_{\text{mis}}$. \item A misidentification and a flip of a complementary bit within the second half of the same box: For example, misidentifying $r_{1,2}$ produces the estimated encoding $XII$. After a flip in bit 3 the true encoding becomes $IIX$, which in the following box will make it appear that both parities have flipped, leading to the estimated encoding sequence $III \to XII \to XXI$. As with the preceding case, there are 4 possibilities yielding a contribution to the error rate of $2\mu\,P_{\text{mis}}$. \item A misidentification in one box, then a bit flip in the first half of the next box: The parities from this sequence appear identical to the previous case. There are 4 possibilities, so the error rate contribution is also $2\mu\,P_{\text{mis}}$. \item A bit flip in the second half of a box, then a complementary parity misidentification in the following box: In the first box no error will be reported, but the second box will show two apparent parity flips interpreted as a wrong single flip. There are 4 possibilities, so the contribution is also $2\mu\,P_{\text{mis}}$. \end{enumerate} The total error rate contribution is $\Gamma_\text{bm} = 8 \mu P_{\text{mis}}$. \item {\it Two misidentifications:} A misidentification in one channel followed by a second misidentification in the complementary channel during the next box causes a logical error. For example, misidentifications of $r_{1,2}$ then $r_{2,3}$ cause the estimated encoding sequence over three boxes: $III\to XII\to XXI \to XXX$. There are two orderings for this type of error, so the contribution to the logical error rate is $\Gamma_\text{mm} = 2 P_{\text{mis}}^2/\Delta t$. \item {\it Mid-box flip of bit 2:} Starting with the encoding $III$, suppose that bit 2 flips near the center of a box, at time $\Delta t/2+\delta t$. Both parities should flip in this case, but the flip occurs in a region where the integration result will be sensitive to noise fluctuations. If one parity shows a flip while the other does not, it leads to a logical error over the course of two boxes. For example, if $r_{1,2}$ flips in one box while $r_{2,3}$ flips in the next box, the apparent encoding follows the sequence $III \to XII \to XIX$. This error is unique to a bit 2 flip because such a flip requires both parities to correctly flip. The integrated parity signal $\bar{r}$ after such a flip is Gaussian-distributed with a mean value of $r_m = [(\Delta t/2 + \delta t) - (\Delta t/2 - \delta t)]/\Delta t = 2 \delta t/\Delta t$ and variance $\tau/\Delta t$. The probability of getting $\bar{r}<0$ in a channel is equal to $P(\bar{r}<0\,|\,r_m)$ as defined in Eq.~\eqref{eq:Idisc}. If one channel flips ($\bar{r}_{1,2} < 0$) but the other does not ($\bar{r}_{2,3}>0$), then the probability of this occurring for any $r_m\in[-1,1]$ is \begin{align}\label{eq:halfboxerror} \int_{-1}^1 P(\bar{r}<0\,|\,r_m)\,P(\bar{r}>0\,|\,r_m)\,dr_m \to \sqrt{\frac{\tau}{\pi\Delta t}}, \end{align} where the integral over the product of error functions rapidly approaches its asymptotic value once $\Delta t \gtrsim 10 \tau$. The contribution of this scenario to the logical error rate is therefore $\Gamma_\text{b2} = \mu\sqrt{\tau/\pi\Delta t}$.
As highlighted previously, this is the most dangerous error mechanism because its error rate is linear in $\mu$. \end{enumerate} Gathering all of the above contributions produces the total logical error rate for the boxcar filter, arranged in order of significance: \begin{align}\label{eq:gammaboxcar} \Gamma &= \Gamma_\text{b2} + \Gamma_\text{bb} + \Gamma_\text{bm} + \Gamma_\text{mm}, \\ &= \mu\sqrt{\frac{\tau}{\pi\Delta t}} + 3 \mu^2\Delta t + 8 \mu \, P_{\text{mis}} + 2\frac{P^2_{\text{mis}}}{\Delta t}. \nonumber \end{align} We will later optimize this formula over the boxcar duration $\Delta t$ and numerically verify its accuracy over the range of bit-flip rates $\mu\tau \in [10^{-6},10^{-3}]$. Because of the dominant error that is linearly dependent on $\mu$, the boxcar filter performs poorly compared to the Bayesian filter and is not useful for error correction. To make the boxcar filter viable, we therefore wish to eliminate this dominant error with a simple and minimal modification to the boxcar filter. We consider two such modifications in the subsequent sections: a non-Markovian modification that uses previous history in the continuous record to identify the problematic bit-2 flip, and a Markovian modification that identifies the problematic bit-2 flip by bracketing the parity signals between two thresholds. We will see that both variations compare favorably to the Bayesian method, so they are suitable for practical error correction. \subsection{Half-boxcar filter and error rate}\label{sec:periodic:halfbox} To overcome the problem of the bit-2 flip in the boxcar filter, we introduce a non-Markovian extension that we call the half-boxcar filter. To the basic boxcar filter we add one extra conditional action that reexamines any ostensible sequential flips of bits 1 and 3 to make sure they are not an incorrectly interpreted flip of bit 2. That is, if a parity flip is observed after an averaging boxcar, $t\in[0,\,\Delta t]$, and a second parity flip is observed in the opposite channel one boxcar later, $t\in[\Delta t,\,2\Delta t]$, then the filter reexamines the signal in an interval that straddles both boxcars. The raw signal is reaveraged over a duration $\Delta t$ that is shifted one half-boxcar behind the most recent boxcar, $t\in[\Delta t/2,\,3\Delta t/2]$, and compared to the zero threshold as a secondary check. With this modification, a flip in bit 2 that happens near the center of the first box will cause both parities to change in the re-averaged middle box, unlike sequential flips of bits 1 and 3. The top portion of Fig.~\ref{fig:boxcar} illustrates how both averaged signals will flip when averaging the shifted middle box in the protocol. Therefore, this modification correctly distinguishes a bit-2 flip from sequential bit-1 and bit-3 flips and eliminates the primary logical error mechanism of the boxcar filter. When a bit-2 flip is detected, the interpreted history of bit flips must then be corrected so that the first box, $t\in[0,\,\Delta t]$, correctly records that bit 2 flipped, while the second box, $t\in[\Delta t,\,2\Delta t]$, records that nothing additional occurred. An elegant implementation of this non-Markovian extension averages sequential half-box intervals $\Delta t/2$ of the raw signals, storing the most recent three half-box averages in memory in addition to the accumulating sequence of parity values.
Averaging pairs of these pre-integrated half-boxes then efficiently produces either the most recent full-box average or the shifted box required to reassess a suspected bit-2 flip. As such, this extension only minimally increases the computational complexity compared to the basic boxcar filter, while improving the fidelity so that it compares favorably with the Bayesian filter. It thus achieves a good balance between accuracy and efficiency. \subsubsection{Half-boxcar logical error rate} We follow the same procedure for categorizing contributions to the logical error rate $\Gamma$ as we did for the boxcar filter in the preceding section. While the addition of the half-box mechanism removes the most serious of the boxcar errors, it also subtly alters the other logical error mechanisms, both removing a few more errors and adding new ones. We now discuss each category of contributions in turn. \begin{enumerate}[(a)] \item {\it Two bit flips:} Unlike the basic boxcar filter, the partitioning of two consecutive boxes into half-boxes matters for sequences of two bit flips. Logical errors can occur from bit flips within the same box or within two consecutive boxes. There are four relevant cases. \begin{enumerate}[i.] \item Two distinct bit flips in the same half-box: There are three possibilities, each with probability $(\mu \Delta t/2)^2$. The contribution to the logical error rate is thus $3 \, (\mu \Delta t/2)^2 \, (2/\Delta t) = (3/2)\, \mu^2 \Delta t$. \item Two consecutive flips of bits 1 and 3, one in the second half of a box and the other in the first half of the following box: The first flip is not detected in the first box, so both parities will flip in the second box and be incorrectly interpreted as a flip in bit 2 after the second box. There are two possible orderings, so the total contribution to the logical error rate is $2 \, (\mu\Delta t/2)^2/\Delta t = \mu^2 \Delta t/2$. \item A flip in either bit 1 or 3 during the second half of a box followed by a flip in bit 2 during the first half of the following box: The first flip is not detected after the first box, so only one parity will flip in the second box and be incorrectly interpreted as a flip of the complementary bit. After a third box, both parities will appear to change and be incorrectly interpreted as a bit-2 flip, which leaves the estimate in a complementary state. There are two possibilities, with a similar situation if the bits flip in reverse order, so the total contribution to the logical error rate is $4 \, (\mu\Delta t/2)^2 /\Delta t = \mu^2 \Delta t$. \item Two consecutive flips of bits 1 and 3, one in each half of the same box: The second flip is not detected, so one parity appears to flip, followed by the other parity in the next box. The half-box prescription then averages the middle of the boxes, which will show that both parities flip, and thus be misinterpreted as a flip in bit 2, which is a logical error. There are two possible orderings, so the total contribution to the logical error rate is $2 \,(\mu \Delta t/2)^2 /\Delta t = \mu^2 \Delta t/2$. \end{enumerate} The final error above is newly introduced by the half-boxcar mechanism, so the total contribution of two bit flips to the logical error rate is larger than for the simple boxcar filter: $\Gamma_\text{bb} = (7/2) \mu^2 \Delta t$. \item {\it One bit flip and one misidentification:} There are 6 distinct mechanisms for a bit flip and misidentification to cause a logical error.
The half-boxcar filter modifies these contributions significantly from the simple boxcar filter. \begin{enumerate}[i.] \item A misidentification and a flip of a complementary bit both within the same box: This mechanism is the same as the boxcar case, but with the improvement that any flip in bit 2 is now corrected by the half-box mechanism. There are 2 possible misidentifications, each with 1 complementary bit that flips with probability $\mu\Delta t$. The total contribution to the logical error rate is $2\mu P_{\text{mis}}$. \item A misidentification then a complementary flip of bit 1 or 3 during the first half of the next box: This appears as a sequence of two flips not corrected by the half-boxcar mechanism. For example, if bit 1 flips, then the actual encoding becomes $XII$, but the apparent encoding follows the sequence $III \to IIX \to IXX$. There are two possibilities, so the contribution is $\mu P_{\text{mis}}$. \item A misidentification then a complementary flip of bit 1 or 3 during the second half of the next box, but near the middle: The bit flip will be reported with probability $P(\bar{r}<0\,|\,r_m)$, with $r_m = 2\delta t/\Delta t$ as in Eq.~\eqref{eq:halfboxerror}, where $\delta t$ is the location of the flip with respect to the middle of the box, resulting in the same logical error as in the previous case. There are two possibilities, so the contribution to the logical error rate is $2\,\mu\,P_{\text{mis}}\int_0^1 P(\bar{r}<0\,|\,r_m)\,dr_m \to \mu\,P_{\text{mis}}\,\sqrt{\tau/2\pi\Delta t}$. This asymptotic value is reached once $\Delta t \gtrsim 15\tau$. \item A misidentification then a flip in bit 2 during the first half of the next box: This produces an apparent sequence of flips in bits 1 and 3, which triggers the half-box mechanism. However, for the half-box-shifted middle section that is checked, bit 2 flipped too late to be detected. It is thus possible for only one parity to flip, analogously to the original bit-2 flip issue of the basic boxcar in Eq.~\eqref{eq:halfboxerror}, which leaves the logical error uncorrected. The contribution to the logical error rate is $2 \mu P_{\text{mis}}\int_0^1 P(\bar{r}< 0\,|\,r_m)P(\bar{r}>0\,|\,r_m)\,dr_m \to \mu\,P_{\text{mis}}\,\sqrt{\tau/4\pi\Delta t}$. This asymptotic value is reached once $\Delta t \gtrsim 15\tau$. \item A misidentification then a flip in bit 2 during the second half of the next box: This scenario appears identical to the preceding case, so also contributes $\mu\,P_{\text{mis}}\,\sqrt{\tau/4\pi\Delta t}$. \item A flip in bit 2 near the middle of a box that triggers the half-box mechanism, followed by a misidentification in one of the channels during the check of the middle box: The check will then not correct the misinterpretation of the bit 2 flip as two consecutive bit 1 and bit 3 flips. The probability of this occurring is identical to the preceding two cases, so it also contributes $\mu\,P_{\text{mis}}\,\sqrt{\tau/4\pi\Delta t}$. \end{enumerate} The total contribution to the logical error rate is $\Gamma_\text{bm} = 3\mu P_{\text{mis}} + (1 + 3/\sqrt{2})\, \mu P_{\text{mis}}\, \sqrt{\tau/2\pi\Delta t}$. \item {\it Two misidentifications:} The mechanism for two misidentifications to cause a logical error is unchanged from the boxcar filter, so it contributes $\Gamma_\text{mm} = 2 P_{\text{mis}}^2/\Delta t$.
\end{enumerate} Gathering all contributions produces the total logical error rate for the half-boxcar filter: \begin{align}\label{eq:gammahalfbox} \Gamma &= \Gamma_\text{bb} + \Gamma_\text{bm} + \Gamma_\text{mm} \\ &= \frac{7}{2}\, \mu^2\Delta t + 3\mu\, P_{\text{mis}} + \frac{\sqrt{2}+3}{2}\sqrt{\frac{\tau}{\pi\Delta t}}\, \mu P_{\text{mis}} + 2\frac{P^2_{\text{mis}}}{\Delta t}. \nonumber \end{align} Note that the values of the prefactors for several terms are achieved only for $\Delta t \gtrsim 15\tau$. We will later optimize the free parameter $\Delta t$ to find that this condition is satisfied self-consistently for $\mu\tau \lesssim 10^{-4}$, and verify this expression numerically for the bit-flip rates $\mu\tau \in [10^{-6},10^{-4}]$, with slight numerical deviations visible for $\mu\tau \in [10^{-4},10^{-3}]$ because the shorter optimal $\Delta t$ violates the approximation of the prefactor integrals. \subsection{Double-threshold filter and error rate}\label{sec:periodic:thresh} The non-Markovian half-boxcar filter has the drawback of requiring extra memory and reinterpreting the past tracking record. We thus introduce an alternative filter that also corrects the problem of the bit 2 flip in the simple boxcar filter while remaining Markovian. This new filter saves on memory at the expense of extra conditional processing per boxcar. We use the intuition that if a bit-2 flip happens near the center of a box, then both integrated parities should be near zero, with only noise fluctuations determining their sign about the usual threshold of zero. However, if a succession of bit 1 and bit 3 flips happens, then only one parity will cross the threshold at a time. We thus use a second signal threshold $a > 0$ that, together with the zero threshold, brackets a region used to check whether both parities are simultaneously close to zero. Fig.~\ref{fig:boxcar} demonstrates this effect with the horizontal dashed green line for $a > 0$: Both averaged signals enter the region between $0$ and $a$ during the first boxcar, making the bit-2 flip correctly detectable using the second threshold. More precisely, assuming an initially even-parity encoding $III$, if both integrated signals are less than the new threshold $a$, then we infer that bit 2 has likely flipped. Otherwise, the signals are thresholded as normal. In pseudocode, given an estimated encoding $III$, \texttt{ \begin{verse} if $\bar{r}_{1,2}<a$ and $\bar{r}_{2,3}<a$ \\ \qquad then flip bit 2\\ elseif $\bar{r}_{1,2}<0$ \\ \qquad then flip bit 1\\ elseif $\bar{r}_{2,3}<0$ \\ \qquad then flip bit 3\\ else \\ \qquad do nothing \end{verse} } \noindent where the flips are performed on the estimated state in accordance with passive error tracking. More generally, for an initial estimated encoding with parities $P_{1,2},\,P_{2,3} \in \{+1,-1 \}$, the parity-corrected integrated signals $(\bar{r}_{1,2}P_{1,2})$ and $(\bar{r}_{2,3}P_{2,3})$ should be used in the above algorithm in place of $\bar{r}_{1,2}$ and $\bar{r}_{2,3}$. The relaxed threshold can more robustly detect simultaneous parity changes when bit 2 flips close to the middle of a box, while remaining Markovian. \subsubsection{Double-threshold logical error rate} Once more, we follow the same procedure as for the boxcar filter to find the remaining contributions to the logical error rate $\Gamma$. As with the half-boxcar filter, the additional correction mechanism alters the mechanisms for producing logical errors. We now consider each category of contributions in turn.
\begin{enumerate}[(a)] \item Two bit flips The logical error rate caused by two bit flips in the same box is exactly the same as the basic boxcar filter: $\Gamma_\text{bb} = 3 \mu^2 \Delta t$. \item One misidentification and one bit flip There are three distinct contributions: \begin{enumerate}[i.] \item A zero-threshold misidentification, then a complementary flip in the next box: This case is the same as the basic boxcar, so the contribution of this kind of error to the logical error rate is $4\, \mu\, P_{\text{mis}}$. \item An $a$-threshold misidentification, then a complementary flip of bit 1 or 3 in the same box: Since both parities are observed to be below $a$, this is incorrectly interpreted by the double-threshold filter as a bit 2 flip. There are two possibilities, so the contribution to the logical error rate is $2\, \mu\, P_{\text{mis}}(a)$, recalling the general form of the misidentification probability from Eq.~\eqref{eq:pmis2}. \item A flip in bit 2 near the middle of a box, at time $\Delta t/2 + \delta t$, followed by parity misidentification: As with the bit-2 flip in the boxcar case, the parity signals will both be Gaussian-distributed with mean $r_m = 2 \delta t/\Delta t$ and variance $\tau/\Delta t$ as in Eq.~\eqref{eq:halfboxerror}. A misidentification of the parity requires $\bar{r}_{1,2} > a$ while $\bar{r}_{2,3} < 0$, or vice versa. There are two possibilities, so the contribution to the logical error rate after summing over all $r_m\in[-1,1]$ is \begin{align}\label{eq:doublethresherror} &2\mu\,\int_{-1}^1 P(\bar{r}<0\,|\,r_m)\,P(\bar{r}>a\,|\,r_m)\,dr_m \\ &\; \approx 2\mu\,\sqrt{\frac{\tau}{\pi\Delta t}}\,\exp\left[-0.9\,a\,\sqrt{\frac{\Delta t}{\tau}} - 0.15\,a^2\,\frac{\Delta t}{\tau}\right]. \nonumber \end{align} This Gaussian approximation to the error function integral is very accurate for $\Delta t \gtrsim 10\tau$ and $0\leq a \leq 1$, and correctly reduces to Eq.~\eqref{eq:halfboxerror} when $a=0$. It can be derived using the approximation $\text{erfc}(x) \approx \exp(-c_1x - c_2x^2)$, valid for $x>0$ with $c_1 \approx 1.1$ and $c_2 \approx 0.76$ \cite{Tsay2014}. \end{enumerate} The total contribution to the logical error rate is therefore $\Gamma_\text{bm} = 4 \mu P_{\text{mis}}+ 2 \mu P_{\text{mis}}(a) + 2\mu\,\sqrt{\tau/\pi\Delta t}\,\exp\left[-0.9\,a\,\sqrt{\Delta t/\tau} - 0.15\,a^2\,\Delta t/\tau\right]$. \item Two misidentifications The second threshold slightly modifies the simple boxcar contribution. After one misidentification of probability $P_{\text{mis}}$, either bit 1 or bit 3 flips in the estimated state. The next box corrects this error unless a second misidentification in the complementary channel occurs to make it appear that both parities have flipped. However, it is sufficient for both integrated signals to be less than $a$ in this case, due to the double-threshold mechanism. Thus with probability $P_{\text{mis}}(a)$ there is a flip in bit 2 of the estimated state. In the third box the remaining bit will flip, producing a logical error. There are two possibilities, so the contribution to the logical error rate is $\Gamma_\text{mm} = 2\, P_{\text{mis}}\, P_{\text{mis}}(a)/\Delta t$.
\end{enumerate} Gathering all above contributions produces the total logical error rate: \begin{align}\label{eq:gammadouble} \Gamma &= \Gamma_\text{bb} + \Gamma_\text{bm} + \Gamma_\text{mm}, \\ &= 3\mu^2\tau \frac{\Delta t}{\tau} + 4 \mu\, P_{\text{mis}} + 2\, \mu\, P_{\text{mis}}(a) + 2\frac{P_\text{mis}\,P_{\text{mis}}(a)}{\Delta t} \nonumber \\ &\quad {} + 2\,\mu\,\sqrt{\frac{\tau}{\pi\Delta t}}\,\exp\left[-0.9\,a\,\sqrt{\frac{\Delta t}{\tau}} - 0.15\, a^2\, \frac{\Delta t}{\tau}\right]. \nonumber \end{align} We will later optimize this formula over the free parameters, the boxcar duration $\Delta t$ and threshold $a\geq 0$, and numerically verify its accuracy over the range of bit-flip rates $\mu\tau \in [10^{-6},10^{-3}]$. \begin{table*}[t] \small \setlength{\tabcolsep}{8pt} \renewcommand{\arraystretch}{4} \begin{tabular}{l|l|l} \multicolumn{1}{c|}{\textbf{Filter}} & $\Delta F_{\textrm{in}}$ & $\Gamma \tau$ \\ \hline\hline Bayesian & $\displaystyle \mu\tau\left[\frac{5}{4}\ln\frac{1}{\mu\tau} + \frac{1}{4}\ln2 \right]$ & $\displaystyle 3(\mu\tau)^2\left[\ln\frac{2}{\mu\tau} + \frac{1}{3}\ln\frac{\ln(5/\mu\tau)}{4}\right]$ \\ \hline Boxcar & $\displaystyle \frac{3\mu\tau}{2}\frac{\Delta t}{\tau}$ & $\displaystyle \mu\tau\, \sqrt{\frac{\tau}{\pi\Delta t}} + 3\, (\mu\tau)^2\, \frac{\Delta t}{\tau} + 8\, \mu\tau \, P_{\text{mis}} + 2\,P^2_{\text{mis}}\, \frac{\tau}{\Delta t} $ \\ \hline Half-boxcar & $\displaystyle \frac{3\mu\tau}{2}\frac{\Delta t}{\tau} - \frac{\mu\tau}{2}\sqrt{\frac{\Delta t}{\pi\tau}} + \frac{\sqrt{2}\,e^{-\Delta t/2\tau}}{\sqrt{\pi \Delta t/\tau}}$ & $\displaystyle \frac{7}{2}\, (\mu\tau)^2\, \frac{\Delta t}{\tau} + 3\, \mu\tau\, P_{\text{mis}} + \left[\frac{1}{\sqrt{2}} + \frac{3}{2}\right]\sqrt{\frac{\tau}{\pi\Delta t}} \, \mu\tau\, P_{\text{mis}} + 2\,P^2_{\text{mis}}\, \frac{\tau}{\Delta t}$\\ \hline \begin{minipage}[t]{0.2\columnwidth}Double-\\threshold\end{minipage} & $\displaystyle \frac{3\mu\tau}{2}\frac{\Delta t}{\tau}$ & $\displaystyle \begin{aligned} & 3\, (\mu\tau)^2\, \frac{\Delta t}{\tau} + 4\, \mu\tau\, P_{\text{mis}} + 2\, \mu\tau\, P_{\text{mis}}(a) + 2\,P_\text{mis}\,P_{\text{mis}}(a)\,\frac{\tau}{\Delta t} \\ &\qquad {} + 2\,\mu\tau\,\sqrt{\frac{\tau}{\pi\Delta t}}\,\exp\left[-0.9\,a\,\sqrt{\frac{\Delta t}{\tau}} - 0.15\, a^2\, \frac{\Delta t}{\tau}\right]\end{aligned}$ \end{tabular} \caption{Initial drops in fidelity $\Delta F_{\text{in}}$ and logical error rates $\Gamma$ for various filters. We express the formulas in terms of the dimensionless quantities $\mu\tau$ and $\Delta t/\tau$, where $\mu$ is the bit-flip rate, $\tau$ is the measurement timescale, and $\Delta t$ is the averaging timescale for boxcar filters. The parity misidentification probabilities for the boxcar filters are $P_{\text{mis}}(a) \equiv \text{erfc}[(1-a)\sqrt{\Delta t/2\tau}]/2$ and $P_{\text{mis}} \equiv P_{\text{mis}}(0)$. We find numerically that for the Bayesian initial drop, more accurate results can be obtained in the regime $\mu\tau \in [10^{-6},10^{-3}]$ via the substitution $5/4 \mapsto 3/2$, which crudely compensates for neglected noise-fluctuation contributions. The three boxcar filters should be optimized over the averaging duration $\Delta t$, while the double-threshold variation should also be optimized over the threshold $a\geq0$. 
For the initial drop of the boxcar filters, we have approximated that $\Delta t/\tau \gg 1$ for the simple and double-threshold boxcar filters and $\Delta t/\tau > 8$ for the half-boxcar filter to achieve peak performance (see Section~\ref{sec:periodic:drop}).} \label{tab:fingamma} \end{table*} \subsection{Initial drop in fidelity}\label{sec:periodic:drop} The initial drop in fidelity $\Delta F_{\text{in}}$ for all three variants of the boxcar filter comes from single logical errors in the final averaging box that do not have time to be detected. There are two dominant types of logical error: a single parity misidentification, or a single bit flip that happens too late within the averaging period. Other errors are higher-order and comparatively negligible. Since there are two parities, a misidentification can occur with probability $2 P_{\text{mis}}$. The case of the bit flip requires more careful analysis, since it may occur at any point within the final box. For bits 1 and 3, if a flip happens at time $\delta t$ after the center of the last box, $\Delta t/2 + \delta t$, the probability of not detecting the flip is $P(\bar{r}>0\,|\,r_m)$ with a shifted signal mean of $r_m = 2\delta t/\Delta t$ and variance $\tau/\Delta t$ as in Eq.~\eqref{eq:halfboxerror}. There are two possibilities of a bit flip, so their contribution to the initial drop is \begin{align} 2\,\frac{\mu\Delta t}{2}\,\int_{-1}^1 P(\bar{r}>0\,|\,r_m)\,dr_m = \mu\Delta t. \end{align} To detect a bit 2 flip correctly, both integrated signals should be less than the threshold $a$ (where $a=0$ for boxcar and half-boxcar filters and $a > 0$ for the double-threshold filter). The negation of this is for one of the signals to be greater than $a$. To not double-count the errors for bits 1 and 3, the remaining signal should also remain greater than 0. Therefore, a flip is not detectable if $\bar{r}_{1,2}>0$ and $\bar{r}_{2,3}>a$, or vice versa. If we denote one such event $A$ and the reverse configuration $B$, then the total probability of not detecting the bit 2 flip is $P(A \cup B)=P(A)+P(B)-P(A \cap B)$, where the intersection $A \cap B$ has both signals greater than $a$. After summing this probability for all $r_m$ we obtain \begin{align}\label{eq:p2} P_2 &\equiv \int_{-1}^1 [2P(\bar{r}>a\,|\,r_m)P(\bar{r}>0\,|\,r_m) \nonumber \\ &\qquad\quad - P(\bar{r}>a\,|\,r_m)P(\bar{r}>a\,|\,r_m)]\,dr_m. \end{align} The total contribution of this scenario to the initial drop is thus $(\mu\Delta t/2)P_2$. Gathering the above contributions, the total initial drop in fidelity for all boxcar filters is \begin{equation} \Delta F_{\text{in}} = \left[2 + P_2 \right]\frac{\mu\Delta t}{2} + 2\,P_{\text{mis}}. \end{equation} Since this formula involves an unwieldy integral, we will find a suitable approximation before continuing. As will become clear in the next section, for the simple boxcar and double-threshold filters the averaging duration $\Delta t$ should be set fairly long compared to $\tau$ to achieve good performance. In this regime, $P_2 \approx 1$ is an excellent approximation. Similarly, we will find that $P_{\text{mis}}$ is negligible for both the simple boxcar and double-threshold filters when $\Delta t \gg \tau$. For the half-boxcar filter, the peak performance will be achieved for significantly smaller $\Delta t$, so we must evaluate $P_2$ more precisely to obtain an accurate estimate.
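Since the integrand is a product of complementary error functions, $P_2$ is straightforward to evaluate numerically. The following is a minimal Julia sketch of such an evaluation (the function names and package choices are our own illustration, not the code used for the paper's simulations); it uses $P(\bar{r}>a\,|\,r_m) = \tfrac{1}{2}\mathrm{erfc}[(a-r_m)\sqrt{\Delta t/2\tau}]$ for a Gaussian signal of mean $r_m$ and variance $\tau/\Delta t$.
\begin{verbatim}
# Minimal sketch: numerical evaluation of the P2 integral (our own
# illustration; assumes the Gaussian signal model described above).
using SpecialFunctions  # erfc
using QuadGK            # quadgk: adaptive 1D quadrature

# P(rbar > a | r_m) for mean r_m and variance tau/Dt, with x = Dt/tau
P_above(a, r_m, x) = erfc((a - r_m) * sqrt(x / 2)) / 2

function P2(a, x)
    integrand(r_m) = 2 * P_above(a, r_m, x) * P_above(0.0, r_m, x) -
                     P_above(a, r_m, x)^2
    return first(quadgk(integrand, -1.0, 1.0))
end

# Example: for a = 0 and Dt = 15 tau this gives roughly 0.85,
# consistent with the asymptotic form quoted below.
println(P2(0.0, 15.0))
\end{verbatim}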
When $a=0$, $P_2$ acquires the asymptotic form \begin{align}\label{eq:p2boxcar} P_2 &\xrightarrow{a=0,\,\Delta t > 8\tau} 1 - \frac{1}{\sqrt{\pi\Delta t/\tau}} + \frac{\exp(-\Delta t/\tau)}{(\Delta t/\tau)\sqrt{\pi}}, \end{align} which converges very slowly to 1 as $\Delta t \to \infty$. For $\Delta t \sim 15\tau$, $P_2 \sim 0.85$. The dominant part of this asymptotic form must be kept, as well as the $P_{\text{mis}}$ contribution to the initial drop. Anticipating these simplifications now and using the asymptotic formula $P_{\text{mis}} \approx \exp(-\Delta t/2\tau)/\sqrt{2\pi \Delta t/\tau}$ thus yields the final approximations appropriate for the parameter regimes that yield peak filter performance: \begin{align} \label{eq:finboxcar} \Delta F_{\text{in}} &\xrightarrow{\text{boxcar}} \frac{3}{2}\mu\Delta t \\ \label{eq:finhalfbox} \Delta F_{\text{in}} &\xrightarrow{\text{half-box}} \frac{3}{2}\mu\Delta t - \frac{\mu\sqrt{\tau\Delta t}}{2\sqrt{\pi}} + \frac{\sqrt{2}\, e^{-\Delta t/2\tau}}{\sqrt{\pi \Delta t/\tau}} \\ \label{eq:findouble} \Delta F_{\text{in}} &\xrightarrow{\text{doub.thr.}} \frac{3}{2}\mu\Delta t \end{align} In Section~\ref{sec:simulation} we verify these expressions numerically for the bit-flip rates $\mu\tau \in [10^{-6},10^{-3}]$. The final expressions for the logical error rate $\Gamma$ and initial fidelity drop $\Delta F_{\text{in}}$ are summarized in Table~\ref{tab:fingamma}. \subsection{Optimizing filter parameters}\label{sec:periodic:boxcaropt} The boxcar filters contain several tunable parameters that play the role of the prior information about $\mu$ and $\tau$ required for the Bayesian filter. That is, the averaging duration $\Delta t$ and the second threshold $a$ must be set prior to processing the parity signals. To achieve peak filter performance, these parameters must be optimized for each $\mu$ and $\tau$. As such, in order to fairly compare the performance of the boxcar filters to the Bayesian filter, we must choose an appropriate optimization strategy for the boxcar filter parameters. We choose to optimize the filter parameters to minimize the total decay in average fidelity in the linear decay regime. Both the initial fidelity drop $\Delta F_{\text{in}}$ and the linear decay rate $\Gamma$ contribute to the total infidelity, so we optimize both together by maximizing the duration of time required for the average fidelity to drop a total of 10\%. As discussed after Eq.~\eqref{eq:fidnotrack}, a 10\% fidelity drop is roughly the maximum tolerable drop while remaining in the linear decay regime without error correction, which makes it a reasonable target. Since $F(t) = 1 - \Delta F_{\text{in}} - \Gamma t$ for a duration $t$ in this linear regime, this optimization procedure yields the maximum time for the average fidelity to drop by $10\%$: \begin{align}\label{eq:maxt} t_{\text{max}} = \max_{\text{params}} \frac{0.1 - \Delta F_{\text{in}}}{\Gamma}. \end{align} For most cases, the duration $t_{\text{max}}$ will be sufficiently long that the linear decay dominates the total average-fidelity decay, making this procedure essentially equivalent to minimizing the decay rate $\Gamma$ directly. However, for some cases with larger flip rates $\mu$ the initial drop becomes too large to neglect and this maximum-time optimization produces more reasonable results. We now systematically optimize the general formulas for the initial drop in fidelity $\Delta F_{\text{in}}$ and the logical error rate $\Gamma$, using the maximum drop-time procedure outlined in Eq.~\eqref{eq:maxt}.
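To make this optimization concrete, the following minimal Julia sketch carries out the maximization of Eq.~\eqref{eq:maxt} for the simple boxcar filter by a grid search over $\Delta t/\tau$, using the boxcar entries of Table~\ref{tab:fingamma} (a sketch under our own assumptions; the paper's numerical optimization is equivalent in spirit, but this is not its exact code).
\begin{verbatim}
# Minimal sketch: maximum drop-time optimization of Eq. (eq:maxt) for the
# simple boxcar filter (our own illustration, not the paper's exact code).
using SpecialFunctions  # erfc

mu_tau = 1e-4                         # example bit-flip rate, mu*tau
Pmis(x) = erfc(sqrt(x / 2)) / 2       # misidentification prob., x = Dt/tau

# Boxcar error rate (times tau) and initial drop, from the tab:fingamma entries
Gam(x) = mu_tau * sqrt(1 / (pi * x)) + 3 * mu_tau^2 * x +
         8 * mu_tau * Pmis(x) + 2 * Pmis(x)^2 / x
dFin(x) = 1.5 * mu_tau * x
tmax(x) = (0.1 - dFin(x)) / Gam(x)    # time (in units of tau) to drop 10%

xs = 10 .^ range(0, 4; length = 2000) # grid of Dt/tau values
x_opt = xs[argmax(tmax.(xs))]
println("optimal Dt/tau = ", x_opt)   # compare with the analytic result below
\end{verbatim}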
The post-optimization formulas will show the best achievable performance of each filter more clearly. For sake of simple comparisons, we only keep the dominant scaling of each analytical approximation in what follows. However, we will numerically optimize the full formulas in Table~\ref{tab:fingamma} and use the full expressions when comparing theory to numerical simulations in Section~\ref{sec:simulation}. \subsubsection{Boxcar filter} The boxcar filter has only one free parameter to optimize: the boxcar duration $\Delta t$. We use the optimization procedure of Eq.~\eqref{eq:maxt}, with the logical error rate from Eq.~\eqref{eq:gammaboxcar} and initial drop from Eq.~\eqref{eq:finboxcar}. We use the error function approximation $P_\text{mis} \approx \exp(-\Delta t/2\tau)/\sqrt{2\pi\Delta t/\tau}$ from Eq.~\eqref{eq:pmis} to make the formulas analytically tractable. We solve for the optimum by taking a derivative of Eq.~\eqref{eq:maxt} with respect to $\Delta t$ and setting it to zero, in the usual way, which produces the results \begin{align} \frac{\Delta t}{\tau} &\approx 0.207\, (\mu \tau)^{-2/3} - 1.3\,(\mu \tau)^{-1/3} + 6, \nonumber \\ \Gamma \tau &\approx 1.86\,(\mu \tau)^{4/3}, \\ \Delta F_{\text{in}} &\approx 0.31\,(\mu\tau)^{1/3}, \nonumber \end{align} where we have replaced purely numerical prefactors with decimal approximations and have truncated the expressions to remove negligible terms. We choose the precision of these numerical constants and the truncations so that the simplified analytical formulas closely reproduce the numerically optimized result. Notably, the optimal averaging duration $\Delta t$ dominantly scales as $\mu^{-2/3} \propto \Gamma^{-1/2}$, so becomes impractically long for small error rates $\Gamma$. Moreover, since the logical error rate $\Gamma$ scales as $\mu^{4/3}$, the filter performs dramatically worse than the $\mu^2$ scaling of the Bayesian filter. These features make the simple boxcar filter ill-suited for practical error correction. \subsubsection{Half-boxcar filter} To optimize the half-boxcar filter over the duration $\Delta t$, we follow the same procedure as in the previous section for the boxcar filter. We use Eqs.~\eqref{eq:gammahalfbox} and \eqref{eq:finhalfbox} and the error function approximation from Eq.~\eqref{eq:pmis}. The minimization procedure by taking a derivative with respect to $\Delta t$ produces the approximate nonlinear relation $\exp(\Delta t/2\tau) = 3/\left(14\mu\tau\sqrt{\pi\Delta t/2\tau}\right)$, which we solve recursively for $\Delta t/\tau$. This procedure yields the following continued fraction as a perturbative solution \begin{align}\label{eq:halfboxdeltat} \frac{\Delta t}{\tau} &\approx 2\ln\frac{3}{14\mu\tau\sqrt{\pi\ln[3/(14\mu\tau\sqrt{\pi\,\cdots})]}} \\ &\approx 2 \ln\frac{1}{15\mu\tau}, \nonumber \end{align} where the final approximation truncates the recursion at the dominant logarithmic functional form. The constant inside the logarithm is chosen as a crude fit to the full numerically optimized curve within the parameter regime $\mu\tau \in [10^{-6},10^{-3}]$ to help simplify the scaling comparison. Using this simplification, the logical error rate and initial drop have the forms \begin{align} \label{eq:halfboxgamma} \Gamma \tau &\approx 8.4\,(\mu\tau)^2 \ln \frac{1}{15 \mu \tau}, \\ \Delta F_{\text{in}} &\approx 3(\mu\tau)\ln \frac{1}{15 \mu \tau}, \nonumber \end{align} where numerical constants have again been reduced to appropriate precision decimals based on fits to the numerically optimized results.
Notably, the logical error rate $\Gamma$ now scales with $\mu^2$, up to logarithmic corrections, analogously to the Bayesian filter in Eq.~\eqref{eq:bayesiangamma}, making the half-boxcar filter suitable for practical error correction. In the full numerical simulations that we detail in the following section, we will see that of the boxcar-averaging filters the half-boxcar variation has the closest performance to the optimal Bayesian filter, despite its dramatic reduction in computational overhead. \subsubsection{Double-threshold filter} \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{Fig/Figure05.pdf} \caption{Numerically optimized second threshold $a\geq 0$ for the double-threshold filter, as a function of the bit-flip rate $\mu\tau$. The dashed line is a crude analytical fit $a \approx 0.525[1 - 2.5(\mu\tau)^{1/3}]$ within the range $\mu\tau \in [10^{-6},10^{-3}]$.} \label{fig:optimala} \end{figure} \begin{table*}[t] \begin{center} \setlength{\tabcolsep}{8pt} \renewcommand{\arraystretch}{4} \begin{tabular}{l|l|l|l|l} \multicolumn{1}{c|}{\textbf{Filter}} & $\Delta F_{\textrm{in}}$ & $\Gamma \tau$ & $\Delta t/\tau$ & $a\geq 0$ \\ \hline\hline Bayesian & $\displaystyle 1.5\,(\mu\tau)\ln\frac{1}{\mu\tau}$ & $\displaystyle 3\,(\mu\tau)^2\ln\frac{1}{\mu\tau}$ & -- & -- \\ \hline Boxcar & $\displaystyle 0.31\,(\mu\tau)^{1/3}$ & $\displaystyle 1.86\,(\mu\tau)^{4/3} $ & $0.207\, (\mu\tau)^{-2/3}$ & $0$ \\ \hline Half-boxcar & $\displaystyle 3\,(\mu\tau)\ln\frac{1}{15\mu\tau}$ & $\displaystyle 8.4\,(\mu\tau)^2\ln\frac{1}{15\mu\tau}$ & $\displaystyle 2\,\ln\frac{1}{15\mu\tau}$ & $0$ \\ \hline Double-threshold & $\displaystyle 12\,(\mu\tau)\ln\frac{1}{150\mu\tau}$ & $\displaystyle 33\,(\mu\tau)^2\ln\frac{1}{150\mu\tau}$ & $\displaystyle 8\,\ln\frac{1}{150\mu\tau}$ & $0.5$ \end{tabular} \end{center} \caption{Dominant scaling with $\mu\tau$ of optimized filter performance. We compare the initial fidelity drops $\Delta F_{\text{in}}$, logical error rates $\Gamma$, boxcar averaging durations $\Delta t/\tau$, and second thresholds $a\geq 0$. We show only the dominant scaling in each case, truncating smaller corrections for sake of simple comparison. We choose the precision of numerical prefactors to best fit the full numerically optimized results obtained from Table~\ref{tab:fingamma} in the regime $\mu\tau \in [10^{-6},10^{-3}]$. See Section~\ref{sec:periodic:boxcaropt} for details on the optimization procedures.} \label{tab:optscaling} \end{table*} Unlike the preceding boxcar filters, the double-threshold filter has two free parameters to optimize: the boxcar duration $\Delta t$, and the second threshold $a\geq 0$. To get a rough idea of the analytic scaling, we follow a crude sequential optimization strategy. First, we apply the same optimization procedure from the previous section to $\Delta t$ for a fixed $a$, using Eqs.~\eqref{eq:gammadouble} and \eqref{eq:findouble} and the error function approximation from Eq.~\eqref{eq:pmis}. This optimization again yields a continued fraction solution \begin{align}\label{eq:thresholddeltat} \frac{\Delta t}{\tau} &\approx \frac{2}{(1-a)^2} \ln\frac{(1-a)^2}{6 \mu\tau \sqrt{\pi\ln\frac{(1-a)^2}{6 \mu\tau\sqrt{\pi\,\cdots }}}}, \\ &\approx \frac{2}{(1-a)^2} \ln\frac{(1-a)^2}{6 \mu\tau \sqrt{\pi\ln\frac{(1-a)^2}{6 \mu\tau\sqrt{\pi}}}}, \nonumber \end{align} where the final approximation truncates the recursion at second order, which achieves better accuracy than the first-order truncation in the parameter regime $\mu\tau \in [10^{-6},10^{-3}]$.
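Equivalently, relations such as Eq.~\eqref{eq:thresholddeltat} can be solved by simple fixed-point iteration of the inner logarithm, which the continued fraction truncates after a finite number of steps. A minimal Julia sketch of this iteration (our own illustration, with assumed example parameters):
\begin{verbatim}
# Minimal sketch: fixed-point iteration for the optimal Dt/tau of the
# double-threshold filter (our own illustration of Eq. thresholddeltat).
function dt_over_tau(mu_tau, a; iters = 20)
    L = 1.0  # initial guess for the inner logarithm
    for _ in 1:iters
        L = log((1 - a)^2 / (6 * mu_tau * sqrt(pi * L)))
    end
    return 2 * L / (1 - a)^2
end

# Example: mu*tau = 1e-4 and a = 0.5 give roughly 37, in the same
# ballpark as the crude scaling 8*log(1/(150*mu_tau)) of the table.
println(dt_over_tau(1e-4, 0.5))
\end{verbatim}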
We find numerically that the dependence of the optimal $a$ on $\mu\tau$ varies slowly in this parameter regime, so we approximate it using the following function \begin{align}\label{eq:thresholda} a &\approx 0.525[1 - 2.5(\mu\tau)^{1/3}], \end{align} which is a crude fit to the numerically optimized result and not derived analytically. We compare this crude formula to the numerically optimized result in Fig.~\ref{fig:optimala} for completeness. We will use these approximate analytical fits to the numerical optimization in Figs.~\ref{fig:fidelity}--\ref{fig:optimalgamma} in the following section. For the purposes of comparing the dominant scaling of the various filters, we further approximate the threshold as constant, $a\approx 0.5$, and truncate the recursive solution of $\Delta t/\tau$ to first order, which yields the following loose approximations to the logical error rate and initial drop in fidelity, \begin{align} \label{eq:thresholdgamma} \frac{\Delta t}{\tau} &\approx 8\ln\frac{1}{150\mu\tau}, \nonumber \\ \Gamma\tau &\approx 33\, (\mu\tau)^2 \ln \frac{1}{150 \mu \tau}, \\ \Delta F_{\text{in}} &\approx 12\,(\mu\tau)\ln \frac{1}{150 \mu \tau}. \nonumber \end{align} As with the preceding filters, we have replaced constant factors with decimal approximations that best fit the full numerical optimization. The intention of these final formulas is not to be exceptionally accurate, but rather to capture the crude dominant scaling for sake of simple comparison with the other filters. As with the half-boxcar filter, the double-threshold filter achieves the $\mu^2$ scaling of the logical error rate $\Gamma$, up to logarithmic corrections, so is also a suitable filter for error correction. It has the benefit of being Markovian, as opposed to the half-boxcar filter that requires memory, but its scaling prefactors are not as favorable as in Eq.~\eqref{eq:halfboxgamma} for the half-boxcar filter or Eq.~\eqref{eq:bayesiangamma} for the Bayesian filter. For convenience, we compare the dominant scaling of all filters in Table~\ref{tab:optscaling}. \section{Numerical Simulations}\label{sec:simulation} To check the validity of our filter analysis, we numerically implement continuous measurements of the 3-bit code, the linear Bayesian filter, and the three boxcar filter variations in the programming language \texttt{julia} \cite{tomasi2018}. To do this efficiently, we first pick a target bit-flip rate $\mu$ to test, such that $\mu\tau \in [10^{-6},10^{-3}]$ with $\tau$ being the reference timescale for the numerics (set to 1 for convenience). We then initialize a $3\times N$ array of bits to describe $N=\mathtt{floor}(10\,\text{max}\Delta t/dt)$ time steps of duration $dt = \tau/10$. The timescale $\text{max}\Delta t$ is the maximum optimal boxcar size of the four filters being tested for each $\mu$, and ensures that enough data is simulated per trajectory to assess the behavior of each filter. We compared the results to those obtained with $dt = \tau/100$ to verify that no residual time discretization artifacts were present in the numerics. The optimal box sizes are numerically determined from the formulas presented in Table~\ref{tab:fingamma}, and shown in Fig.~\ref{fig:optimalboxcar}, with maxima ranging from $\text{max}\Delta t/\tau \in [5,2000]$. \begin{figure*}[ht!]
\begin{center} \subfloat[]{\includegraphics[width=0.45\textwidth]{Fig/Figure06a.pdf} \label{fig:fidelity}} \quad \subfloat[]{\includegraphics[width=0.45\textwidth]{Fig/Figure06b.pdf} \label{fig:optimalboxcar}} \\ \subfloat[]{\includegraphics[width=0.45\textwidth]{Fig/Figure06c.pdf} \label{fig:optimaldrop}} \quad \subfloat[]{\includegraphics[width=0.45\textwidth]{Fig/Figure06d.pdf} \label{fig:optimalgamma}} \end{center} \caption{Comparison of numerical simulations to analytical expressions. (a) Average fidelity $F(t)$ in time, optimized for a bit-flip rate of $\mu\tau = 10^{-3}$, showing how the initial drop $\Delta F_{\text{in}}$ and logical error rate $\Gamma$ of various error correction methods manifest. For the boxcar filters the data points indicate the box periodicity $\Delta t$, while data points on the Bayesian curve are sampled similarly for reference. Note that the last box of the half-box filter decays with the same rate as the basic boxcar filter, since the non-Markovian correction cannot be applied. (b) Optimized boxcar-averaging duration, $\Delta t/\tau$, with the time-continuous Bayesian case omitted. (c) Optimized initial drop in average fidelity $\Delta F_{\text{in}}$ for different filters, as a function of the bit-flip rate $\mu\tau$. (d) Optimized logical error rate $\Gamma$ for different filters, as a function of the bit-flip rate $\mu\tau$. The order in the legend from top-to-bottom matches the order of the curves on the left edge of each plot. In all plots, numerical simulations of the various filters using optimized parameters are plotted as circular data points with error bars smaller than the width of the points. The numerically-optimized analytical formulas in Table~\ref{tab:fingamma} are plotted as solid lines, while the simplified formulas in the main text are plotted as dashed lines. The shaded gray areas indicate the regions that are unattainable even by an ideal 3-qubit code, while the shaded red areas indicate the regions that are worse than that of a single qubit with no error correction. The analytical initial drops for the Bayesian filter include the prefactor correction $5/4\mapsto 3/2$ discussed in Section~\ref{sec:continuous:analysis}. The analytical initial drops for the boxcar filters numerically correspond to the time at half of the first box $\Delta t/2$. } \end{figure*} To simulate each trajectory, at the initial time the bit state is set to $000$, representing the encoding $III$. Random bit flips are then added with Poisson statistics at the rate $\mu$. Specifically, the wait-time distribution for $n$ steps between two successive jumps is exponential, $p(n) = \mu\, dt\, \exp(-n\,\mu\, dt)$, with mean $1/(\mu\, dt)$ steps; for each bit we sample this wait-time distribution to find the random number of steps $\mathtt{floor}(n)$ until the next jump, then flip that bit at the sampled jump times. After this procedure the $3\times N$ array holds the ``true'' state trajectory for the 3-bit code. This numerical model is thus a direct implementation of the hidden Markov model in Fig.~\ref{fig:Markov} described earlier. Given a true 3-bit trajectory, we then simulate the noisy parity signals $r_{i,j}$ by computing the exclusive-or $x_{i,j}$ between neighboring bits $i$ and $j$ at each timestep $dt$, then constructing $r_{i,j} = -2x_{i,j} + \xi$, where $\xi$ is sampled from a normal distribution with mean $+1$ and variance $\tau/dt$.
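The following minimal Julia sketch illustrates this trajectory and signal generation (our own variable names and simplifications; the paper's full code additionally implements the four filters):
\begin{verbatim}
# Minimal sketch of trajectory and noisy-signal generation (our own
# illustration; not the paper's full simulation code).
using Random

tau, dt = 1.0, 0.1              # measurement timescale and timestep
mu = 1e-3                       # bit-flip rate
N = 10_000                      # number of timesteps
bits = falses(3, N)             # "true" 3-bit trajectory, initially 000

for b in 1:3                    # add Poissonian flips to each bit
    t = 0.0
    while true
        t += randexp() / (mu * dt)   # exponential wait time, in steps
        n = max(floor(Int, t), 1)
        n > N && break
        bits[b, n:end] .= .!bits[b, n:end]  # toggle bit b from step n on
    end
end

sigma = sqrt(tau / dt)          # per-step noise standard deviation
x12 = xor.(bits[1, :], bits[2, :])   # neighboring-bit parities
x23 = xor.(bits[2, :], bits[3, :])
r12 = 1 .- 2 .* x12 .+ sigma .* randn(N)  # mean +1 (even) or -1 (odd)
r23 = 1 .- 2 .* x23 .+ sigma .* randn(N)
\end{verbatim}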
This construction vectorizes the noise simulation efficiently, and centers the mean signals for even parities at $+1$ and odd parities at $-1$. The resulting noisy signals then simulate the parity signals that one would obtain from performing continuous direct parity measurements in the laboratory, after the signals have been correctly normalized. Given the simulated noisy signals $r_{i,j}$, we then pass both signals through each of the four trial filters analyzed in the preceding section: linear Bayesian, simple boxcar, half-boxcar, and double-threshold boxcar. We set the tunable parameters, $\Delta t$ and $a$, for the three boxcar filters to optimal values determined by the numerical optimization of the formulas in Table~\ref{tab:fingamma}. (We also verify numerically that tuning these parameters away from the theoretical optimum degrades the performance, confirming that the chosen parameter values are indeed optimal.) Each filter then returns a $3\times N$ array of estimated 3-bit state trajectories. At each timestep $dt$, we compute the bit state fidelity as a simple equality test between the triplet of true bits and the triplet of estimated bits, yielding 1 if the bits agree and 0 if they disagree. We compute the average state fidelities by repeating this process between $10^6$ and $10^8$ times and averaging the fidelities at each time step. This simulation procedure produces the numerical results plotted as the points in Figs.~\ref{fig:fidelity}--\ref{fig:optimalgamma}, with final numerical error bars on the order of the width of the points or smaller. The solid lines show the formulas summarized in Table~\ref{tab:fingamma} after numerical parameter optimization. For reference, the dashed lines show the crude analytical approximations of the optimized formulas that we presented in Section~\ref{sec:periodic:boxcaropt}. (Note that for the double-threshold filter in Figs.~\ref{fig:optimalboxcar}--\ref{fig:optimalgamma} we plot the more accurate $a$-dependent analytic formulas in Eq.~\eqref{eq:thresholddeltat} as the dashed lines.) For the boxcar filters, we found that in order to apply the linear decay formula in Eq.~\eqref{eq:lineardecay} to the simulated data, the initial drop $\Delta F_{\text{in}}$ should be placed in the middle of the first averaging box, at $t=\Delta t/2$, after which the linear fit with slope $\Gamma$ correctly describes the data. For smaller $\mu$ the logical error rates $\Gamma$ become quite small and so require more realizations to resolve the average to sufficient numerical precision; in the cases of the double-threshold and simple boxcar filters the optimized durations $\Delta t/\tau$ were sufficiently long to prohibit accurate averaging of $\Gamma$ below $\mu\tau \sim 10^{-4}$. Nevertheless, for all successfully simulated results the agreement is excellent between numerical simulations and numerically optimized analytical formulas from Table~\ref{tab:fingamma}. In Fig.~\ref{fig:fidelity}, we show the time-dependent average fidelities $F(t)$ of all methods, optimized for a relatively large bit-flip rate of $\mu\tau = 10^{-3}$. The numerical simulations (data points) confirm the numerically-optimized analytical results for $\Delta F_{\text{in}}$ and $\Gamma$ summarized in Table~\ref{tab:fingamma} (solid lines), as well as the corresponding crude approximations (dashed lines).
The gray line is a simulation of idealized 3-bit code error correction using perfect-fidelity projective parity measurements with a rapid cycle delay of $\delta t/\tau = 4$ (see the Appendix for details); the shaded gray region above this line roughly represents fidelities that are inaccessible to even an ideal implementation of the 3-bit code. The light red shaded region below indicates fidelities that are worse than that of a single bit without error correction. After the initial drops in fidelity $\Delta F_{\text{in}}$ in Fig.~\ref{fig:fidelity}, the Bayesian filter (orange, top curve) and half-boxcar filter (green, second curve from top) achieve asymptotic slopes (corresponding to the logical error rates $\Gamma$) that are comparable to that expected for ideal operation of the code. The double-threshold filter (blue, bottom curve at left of graph) performs slightly less favorably, while the simple boxcar filter (red, third curve from top at left of graph) performs significantly worse, as anticipated in the previous section. For the half-boxcar filter, the last box decays at the same rate as the simple boxcar filter because the non-Markovian correction cannot be applied to the last box. This change in decay rate in the final box yields an additional contribution to the net fidelity drop $\Delta F_{\text{fin}} \equiv \Delta t(\Gamma_{\text{boxcar}}-\Gamma_{\text{half-boxcar}})$ that is not observed with the Markovian filters. In Fig.~\ref{fig:optimalboxcar}, we show the optimized averaging durations $\Delta t$ for the boxcar methods (solid lines) and corresponding crude approximations (dashed lines). The gray line at bottom indicates the rapid cycle delay of $4\tau$ for the idealized projective measurements. The non-Markovian half-boxcar filter achieves the shortest averaging durations with $\Delta t \lesssim 20\tau$ even for small bit-flip rates of $\mu\tau \sim 10^{-6}$. The simple boxcar filter requires excessively long optimal averaging durations (up to two orders of magnitude longer than the half-boxcar filter for small bit-flip rates). The Markovian double-threshold filter consistently requires averaging lengths that are a factor of roughly 2--4 longer than the half-boxcar to achieve similar performance. In Fig.~\ref{fig:optimaldrop}, we show the optimized scaling of the initial drop $\Delta F_{\text{in}}$ with $\mu$ for all methods, using the same color and line-style conventions as Fig.~\ref{fig:fidelity}. The numerical simulations confirm the numerically-optimized analytical results in Table~\ref{tab:fingamma} for the entire tested range of parameters $\mu\tau \in [10^{-6},10^{-2}]$. For the Bayesian filter analytical curve we adjust the derived formula by making the substitution $5/4 \mapsto 3/2$ in Eq.~\eqref{eq:bayesiandrop}, which corrects a systematic deviation caused by noise fluctuations, as discussed at the end of Section~\ref{sec:continuous:analysis}. Notably, the half-boxcar filter achieves an initial drop roughly a factor of 2 larger than the optimal Bayesian filter. For contrast, the double-threshold filter has an initial drop that is roughly a factor of 8 larger than the Bayesian filter. The simple boxcar filter suffers from comparatively large drops greater than $0.3$\%, even with small bit-flip rates of $\mu\tau \sim 10^{-6}$, which is nearly two orders of magnitude larger than the Bayesian filter. In Fig.~\ref{fig:optimalgamma}, we show the optimized scaling of the logical error rate $\Gamma$ with $\mu$ for all methods, using the same conventions.
The numerical simulations again confirm the numerically-optimized analytic results, up to small deviations for the half-boxcar filter at larger bit-flip rates. We anticipated this slight deviation between the analytics and the simulated data points for $\mu\tau > 10^{-4}$ during the derivation in Section~\ref{sec:periodic:halfbox}, where it arises from a short boxcar duration, $\Delta t \lesssim 15\tau$, that prevents convergence to the asymptotic behavior assumed in the analytical formulas. For the double-threshold and simple boxcar filters we simulate only larger bit-flip rates $\mu\tau \gtrsim 10^{-4}$ due to the optimal boxcar sizes $\Delta t$ becoming prohibitively long for smaller $\mu\tau$; however, the tested cases confirm the $\mu\tau$-dependence expected from the analytics. These simulations confirm that quantum error correction based on \emph{passive state tracking} with continuous parity measurements is a viable strategy. As anticipated, the linear Bayesian filter performs the best, achieving only a slight reduction in performance compared to the idealized 3-bit code due to the noise of the monitored signal. Moreover, the half-boxcar filter nearly matches the Bayesian filter in performance despite a dramatic reduction in processing requirements, which makes it the best balance between performance and practicality among the minimal filters considered here. The double-threshold filter also scales comparably, though performs slightly worse overall. We also emphasize that in the presence of experimental nonidealities, realistic implementations of the 3-bit code that use entangling gates, ancillas, and projective measurements are likely to perform comparably to the continuous measurement filters considered here; for completeness, we provide a similar analysis of the ancilla-based projective case in the Appendix. \section{Conclusions}\label{sec:conclusions} We have analyzed the 3-qubit bit-flip code to assess the performance of direct methods for measuring the syndromes using time-continuous parity measurements. For interpreting the time-continuous noisy signals of the direct syndrome measurements, we have introduced and analyzed four distinct filters: (i) an efficient linear variation of an optimal Bayesian filter, (ii) a simple boxcar-averaging filter, (iii) a minimal non-Markovian ``half-boxcar'' variation of the boxcar-averaging filter, and (iv) a minimal Markovian variation of the boxcar-averaging filter that uses two thresholds. We have derived analytic estimates for the performance of all filters and have verified them with numerical simulations. These direct parity-measurement methods benefit from a reduction in hardware resources compared to ancilla-based methods (namely two fewer ancillary qubits), which limits the number of inherent bit-flip-error pathways even before extending the bit-flip code to more sophisticated encoding schemes. The Bayesian filter most closely approaches the ideal performance of the ancilla-based bit-flip code, but also requires the most computational resources for real-time processing of the noisy syndrome measurements. The boxcar variations require less active processing than the optimal Bayesian filter, so should be more easily implemented with signal processing hardware, such as field-programmable gate arrays (FPGAs), for the purposes of real-time syndrome tracking. The non-Markovian half-boxcar filter achieves the best balance between performance and computational overhead of the considered methods.
The Markovian double-threshold filter performs slightly less well than the half-boxcar filter, but avoids the additional memory overhead at the expense of an increased boxcar duration. All three methods are suitable for immediate implementation with current superconducting hardware. The results of our study are promising for the continued investigation of direct syndrome-measurement methods. However, three scalability issues that we have ignored need to be addressed before direct methods can achieve full quantum error correction. First, we have focused our analysis on the performance of the methods with respect to their intended design: protecting against bit-flip errors. As such, we have ignored other sources of infidelity, particularly dephasing of the parity subspaces due to imperfect overlap of the entangled microwave fields, which is analogous to ignoring entangling gate infidelity in analyses of ancilla-based error correction. Some analysis of these types of implementation imperfections has begun in recent years \cite{Criger2016,Ciani2017,Huembeli2017,Royer2018}, but more investigation is needed for a definitive assessment. Second, while we have presented a practical method for directly measuring the $ZZ$ parities needed for bit-flip correction, we have not addressed how to directly measure the $XX$ parities needed for additional phase-flip correction. Obtaining high-fidelity direct parity measurements for both $ZZ$ and $XX$ is an open problem currently under investigation. Third, high-fidelity extensions of direct two-qubit parity measurements need to be developed to implement more sophisticated error-correction schemes, such as the surface code that requires four-qubit parity measurements. A direct quantitative comparison of the performance of this continuous error correction to conventional gate-based implementations with ancillas and projective measurements is challenging. This is because different assumptions must be made about how the ancilla-based scheme is implemented, and about what constitutes a fair comparison of the approaches. In superconducting architectures, projective measurements have traditionally been implemented as thresholded continuous measurements anyway. Consequently, the always-on methods with a fast measurement rate have the obvious advantage of requiring no time to implement the two-qubit gates and no down time between repetitions of the cycle, during which other errors might creep in. There is also the possibility of errors occurring in the ancilla qubits, which would then demand a much larger quantum circuit to make everything fault tolerant, at the price of even more hardware. We analyze several different error scenarios that can occur in the gate-based implementation in the Appendices, for contrast. Our overall conclusion is that the hardware efficiency of the direct parity-measurement approach has the potential to minimize error possibilities, assuming both good parity measurement fidelity and good gate fidelity. Although full error correction using continuous parity measurements requires additional investigation, several experimental tasks can be achieved in the short term. First, the 3-qubit bit-flip code as analyzed here can be implemented immediately with current superconducting architectures. Second, a simple extension of the parity-syndrome monitoring idea to a 4-qubit Bacon-Shor error-detection code is a natural next step.
Such a code involves four qubits in a square grid, coupled pairwise to parity-measuring resonators analogously to Fig.~\ref{fig:Experimental-setup}. A detailed analysis of the simultaneous measurement of $ZZ$ and $XX$ parities on the square grid is considered in Ref.~\cite{atalaya2017bacon}, which demonstrates that the error detection scheme works in a time-continuous way. A simpler variation of this idea can be performed without direct $XX$ measurements by alternating $ZZ$ measurements of different pairs, and suitably interleaving single-qubit rotation gates to effectively switch between $ZZ$ and $XX$ measurements. Such a variation sits between the usual ancilla-based projective scheme and a fully continuous scheme, much as the boxcar filters in the present work do. We expect such an experiment to be performed in the near future. \begin{acknowledgments} The authors thank Alexander N. Korotkov for many detailed discussions about the Bayesian and boxcar filters, in which he provided insight into the primary error mechanisms as well as initial derivations for the initial fidelity drops and average logical error rates. We also thank him for providing considerable feedback on the writing of this manuscript. The authors are additionally grateful for helpful discussions with Andrew Eddins, Machiel Blok, Leigh Martin, and William Livingston, as well as Liang Jiang. This work was supported by the Army Research Office (ARO) grant Nos. W911NF-15-1-0496 and W911NF-18-1-0178. JD also thanks Franco Nori for his hospitality at RIKEN during the completion of this manuscript, but does not thank the COVID-19 pandemic for subsequently delaying its publication. \end{acknowledgments}
\section{Introduction} The gauge/gravity duality (also known as AdS/CFT correspondence) relates the physics of a gravitational theory in an asymptotically anti de Sitter (AdS) background to the physics of a conformal field theory (CFT) at the AdS boundary \cite{Maldacena1998}. This elegant discovery is the primary motivation for studying black holes in AdS space \cite{Witten1998a,Witten1998b,PhysicsReports2000}. In the most recognized version of this duality, $\rm{AdS}_5/\rm{CFT}_4$ correspondence, the parameters of the supergravity theory and the CFT satisfy the following dictionary \cite{Maldacena1998,NatsuumeBookAdS/CFT} \begin{equation} \frac{{{L^3}}}{{{G_5}}} = \frac{{2N_c^2}}{\pi },\,\,\,\,\lambda = {\left( {\frac{L}{{{\ell _s}}}} \right)^4}, \end{equation} where $L$ is the AdS radius, $G_5$ is Newton's constant, $\lambda$ is the 't~Hooft coupling, $\ell_s$ is the string length, and $N_c$ is the number of colors. In the gravitational theory and naturally in gauge/gravity duality, the cosmological constant ($\Lambda$), which relates to the AdS radius by $\Lambda \propto 1/{L^2}$, is an external fixed parameter. However, recent and interesting developments in black hole physics suggest that the mass of AdS black holes should be interpreted as the enthalpy via extending the phase space, i.e., treating the cosmological constant as a thermodynamic variable (pressure) \cite{Kastor2009,Kastor2010,Dolan2011a,Dolan2011b,Kubiznak2012}. Taking this proposal into account, the variation of $\Lambda$ on the gravity side is equivalent to the variation of the number of colors ($N_c$) on the field theory side, provided that the string coupling ($g_s$) is held fixed \cite{Kastor2009}. This leads to a new holographic interpretation of AdS black holes since the dynamics of the CFT crucially depends on the number of colors, and especially, the holographic origin of the Smarr relation is explained \cite{Karch2015}. \par Variation of physical constants leads to considering more fundamental theories in physics. An interesting case is the use of the cosmological constant as a dynamical variable in black hole physics (see the earlier works in \cite{EarlierWorksA,EarlierWorksB,EarlierWorksC}). Along this line, extending the thermodynamic phase space of black holes by identifying the varying cosmological constant as a thermodynamic pressure, $P=-\Lambda/8\pi $, has opened up new theoretical avenues. In fact, the cosmological constant ($\Lambda$) has a privileged role to be identified as the thermodynamic pressure. There are several strong reasons for supporting this claim: \begin{itemize} \item Besides that $\Lambda$ generally appears as pressure in cosmology \cite{Weinberg2008Cosmology}, it has the dimensions of pressure in black hole thermodynamics as well (i.e., $[length]^{-2}$ in geometric units, ${G_N} = \hbar = c = {k_{\rm{B}}} = 1$) and also its conjugate variable has dimensions of volume \cite{Kastor2009,Kubiznak2012}. \item Having this identification, one can naturally obtain the conjugate volume using the standard thermodynamic relation as $V=\partial M/\partial P$ \cite{Kastor2009,Kubiznak2012}. This definition avoids the difficulties of the other definitions of black hole volume \cite{Parikh2006,VectorVolume2013,Rovelli2015}, in which the volume integration has to be performed inside the horizon, which is problematic since the interior of the black hole metric is not static.
\item Remarkably, this thermodynamic volume ($V=\partial M/\partial P$) is supported by the geometric definition of black hole volume by means of the Komar integral relation found by Kastor et al. \cite{Kastor2009}, \begin{equation} V = - \left[ {\int_{\partial {\Sigma _\infty }} {d{S_{ab}}\left( {{\omega ^{ab}} - \omega _{{\rm{AdS}}}^{ab}} \right) - \int_{\partial {\Sigma _{\rm{h}}}} {d{S_{ab}}{\omega ^{ab}}} } } \right], \end{equation} where $d{S_{ab}}$ is the volume element normal to the co-dimension 2 surface $\partial \Sigma $ and the Killing potential ($\omega ^{ab}$) is precisely defined in terms of the Killing vector by ${\xi ^a} = {\nabla _b}{\omega ^{ab}}$. It is an interesting finding since this quantity gives a measure of the volume excluded from the spacetime by the black hole's horizon, as well understood in \cite{Kastor2009}, so it is interpreted as an effective volume inside the event horizon. \item On the other hand, when a cosmological constant is present, the first law of black hole thermodynamics in the traditional treatments becomes inconsistent with the Smarr relation, which means that the scaling argument (Euler's theorem in thermodynamics) is no longer valid. This difficulty can be cured by identifying $\Lambda$ proportional to the thermodynamic pressure \cite{Kastor2009,Kubiznak2012,Review2017}. \item There exist further pieces of evidence which show that black hole thermodynamics has a self-consistent framework within the extended phase space (for a nice review see \cite{Review2017}). For example, in the extended phase space, the Ehrenfest equations have well-defined definitions for genuine second-order phase transitions \cite{Ehrenfest2013,Ehrenfest2014}. As another example, comparing the Van der Waals phase transition in the extended phase space of AdS black holes and also in the Van der Waals fluids shows a precise analogy between physical quantities of AdS black holes and Van der Waals fluids \cite{Kubiznak2012}. But the similarities between black hole thermodynamics and everyday thermodynamics found in the non-extended phase spaces (e.g., see Refs. \cite{Banerjee2010,Banerjee2011,Chamblin1999a,Chamblin1999b}) are mathematical analogies rather than exact correspondences, as indicated in \cite{Kubiznak2012,Review2017}. However, in the extended phase space, the analogies are more direct \cite{Kubiznak2012,Ehrenfest2013,Ehrenfest2014,Review2017}. \end{itemize} \par All these results seem to justify that the varying cosmological constant as thermodynamic pressure has a privileged role. In conclusion, this identification leads to a number of interesting results; most importantly, the first law of black hole thermodynamics takes the same form as in everyday thermodynamics (see the illustration below). Consequently, one can think about the other branches of thermodynamics and apply them in black holes as well. Pushing on this idea, interesting phase transitions have emerged, such as the van der Waals phase structure, in a number of AdS black hole spacetimes \cite{Kubiznak2012,Mann2012,Review2017,vdWNED2019}.
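As a concrete illustration of this restored consistency (a standard four-dimensional example, e.g. \cite{Kastor2009,Kubiznak2012}, added here for clarity), the extended first law and the Smarr relation for a charged AdS black hole read \begin{equation*} dM = T\,dS + V\,dP + \Phi\, dQ, \qquad M = 2TS - 2PV + \Phi Q, \end{equation*} where the relative coefficients in the Smarr relation follow from Euler's scaling argument applied to the geometric-unit dimensions $[M]=[Q]=[length]$, $[S]=[length]^{2}$, and $[P]=[length]^{-2}$; without the $V\,dP$ term and its $-2PV$ counterpart, the two relations cannot hold simultaneously.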
Another novel direction is the study of black holes as heat engines \cite{Johnson2014} and the exploration of various thermodynamic cycles in them, which has attracted a great deal of attention lately \cite{HeatEngines2015a,HeatEngines2015b,Johnson2016a,Johnson2016b,HeatEngines2017a,HeatEngines2017b,HeatEngines2017c,HeatEngines2017d,HeatEngines2017e,Johnson2018a,Johnson2018b,Johnson2018c,HeatEngines2018a,HeatEngines2018b,HeatEngines2018c,HeatEngines2018d,HeatEngines2018e,Johnson2019,HeatEngines2019a,HeatEngines2019b,HeatEngines2019c,HeatEngines2019d,HeatEngines2019e,HeatEngines2020}. In what follows, we shall restrict ourselves to this subject. \par Considering the extended phase space, it is natural to use black holes as heat engines \cite{Johnson2014} and explore how different closed loops in the $P-V$ plane can be realized as thermodynamic cycles. A number of interesting results have been found for various classes of AdS black holes. Born-Infeld AdS black holes have been considered as the working substance in Ref. \cite{Johnson2016a}, showing that the engine's efficiency is affected by the non-linearity parameter, $\beta$. A generalization of this study has been performed via the Einstein-dilaton-Born-Infeld system in Ref. \cite{HeatEngines2017e}. Corrections from higher-order curvatures to heat engines are also examined in Ref. \cite{Johnson2016b}, where, from the perspective of the dual holographic large $N_{c}$ field theory, this amounts to studying the effects of a class of $1/N_{c}$ corrections to the efficiency of the engine's cycle. Holographic heat engines have also been constructed in the context of massive gravity in Refs. \cite{HeatEngines2018a,HeatEngines2018b}, where it is shown that the existence of a graviton mass could improve the heat engine efficiency significantly. These types of holographic heat engines demonstrate that corrections arising from the gravity model or matter fields affect the engine's performance and efficiency. As another example, the properties of holographic heat engines with charged accelerating AdS black holes as the working substance are investigated in a benchmarking scheme \cite{HeatEngines2018c, HeatEngines2019a}. The heat engines' properties are considered for static AdS black holes in higher and lower dimensions \cite{HeatEngines2015b,HeatEngines2017c,HeatEngines2019d}, as well as in other black hole backgrounds such as Taub-Bolt solutions and rotating AdS solutions \cite{HeatEngines2015a,HeatEngines2017b,HeatEngines2017d,Johnson2018a,HeatEngines2018d,HeatEngines2018e,HeatEngines2019b,HeatEngines2020}. Several further papers on AdS black holes as heat engines can be viewed as toy models that have opened new avenues for the investigation of holographic heat engines \cite{HeatEngines2017a,Johnson2018b,Johnson2018c,Johnson2019,HeatEngines2019c,HeatEngines2019e}. \par The objective of this paper is to define a new class of holographic heat engines using charged AdS black holes with logarithmic $U(1)$ gauge theory as a matter source. In fact, there are strong motivations for considering a logarithmic Lagrangian for the Abelian $U(1)$ gauge theory. In the logarithmic version of nonlinear electrodynamics (NED) theory (proposed by Soleng in \cite{NLBHSoleng1995}), the logarithmic action leads to a bounded field strength, and so the electromagnetic self-mass of a point charge will be finite \cite{NLBHSoleng1995}.
Heisenberg and Euler have shown in their theory that, even in the vacuum, the Maxwell equations have to be replaced by more fundamental theories of nonlinear electrodynamics in order to explain the vacuum polarization effects \cite{HeisenbergEuler1936}. Nonlinear electrodynamics Lagrangians such as the Heisenberg-Euler theory, Born-Infeld (BI) theory \cite{BornInfeld} and logarithmic NED theory can be regarded as effective field theories for simulating the 1-loop corrections of vacuum polarization (Feynman) diagrams computed in QED \cite{SchwartzQFT}. On the other hand, the quantum mechanical non-linearity of electromagnetic fields can be described by BI-type NEDs, which leads to light-by-light scattering (Halpern scattering) \cite{JacksonElectrodynamics}. Note that the logarithmic NED is indeed a BI-type theory, as can be made manifest by expanding its Lagrangian in the weak-field coupling limit. In principle, it covers all the features of BI electrodynamics, so it is a viable theory which certainly merits further exploration. Moreover, in superstring theory, the effective actions describing nonlinear BI-type electrodynamics have been found for the dynamics of D-branes \footnote{We also should emphasize that the dynamics of electromagnetic fields on the world-volumes of D-branes is exactly governed by the standard BI theory. Assuming that the nonlinear parameter is large enough, the other kinds of BI-type theories can naturally describe the same dynamics.} \cite{StringBI1,StringBI2,StringBI3,StringBI4}. For these reasons, corrections to black hole physics from nonlinear theories of electrodynamics as the matter field have been a subject of long-standing investigation \cite{NLBHs1935,NLBHs1937,NLBHSoleng1995,NLBHs1,NLBHs2,NLBHs3,NLBHs4,NLBHs5,NLBHs6,NLBHs7,NLBHs8,NLBHs9,NLBHs10,NLBHs11,NLBHs12,NLBHs13}. \par Logarithmic $U(1)$ gauge theory of electrodynamics qualitatively shares many similarities with BI and Euler-Heisenberg theories in the weak-field limit, including describing the 1-loop correction of the vacuum polarization, finite self-energy for point charges, etc. \cite{NLBHSoleng1995,BornInfeld,HeisenbergEuler1936,Berestetskii1,Berestetskii2,SchwartzQFT,LogEPJC}. Logarithmic, BI and Euler-Heisenberg gauge theories of electrodynamics have the following expansion in the weak-field limit (large enough $\beta$) \begin{equation} {\cal L}({\cal F}) = - {\cal F} + {a_1}\frac{{{{\cal F}^2}}}{{{\beta ^2}}} - {a_2}\frac{{{{\cal F}^3}}}{{{\beta ^4}}} + O(1/{\beta ^6}) \end{equation} where the $a_i$'s are some positive constants (for example, see the expansion of the logarithmic $U(1)$ gauge theory in Sect. \ref{black holes}, Eq. (\ref{logarithmics})). So, it seems sensible that these theories share some common features (especially in the weak-field limit), leading to what might be called ``BI-type nonlinear electrodynamics''. However, BI-type gauge theories can behave differently in the strong-field limit, and the outcomes in this limit should be examined for each theory separately. Despite a series of similarities between these theories, there are some differences which depend on the functional form of the Lagrangian density. Focusing on the cases of BI and logarithmic NED theories, the vacuum birefringence phenomenon is absent in BI theory while it is present in the logarithmic theory of electrodynamics \cite{LogEPJC}.
As another example, the finite self-mass/energy of a point charge in logarithmic and Born-Infeld theories are not the same (the latter is greater, but the maximal electrostatic field of the point charge in BI theory is smaller than in logarithmic electrodynamics \cite{BornInfeld,LogEPJC}). More interestingly, the study of full vacuum polarization diagrams in QED reveals that there is a logarithmic dependence on the cut-off in the effective action \cite{SchwartzQFT}. This is equivalent to the interesting finding of Euler and Heisenberg in which a logarithmic term of the field strength appears in the action as an exact 1-loop correction to the vacuum polarization \cite{HeisenbergEuler1936}. Furthermore, it is shown in \cite{Berestetskii1,Berestetskii2} that for slowly varying fields (fields which cannot in practice create real electron-positron pairs, i.e., purely electromagnetic fields), the effective Lagrangian for radiative corrections increases logarithmically at high field intensities. Consequently, this observation shows that logarithmic electrodynamics is well suited for examining photonic processes in certain regimes of the electromagnetic field. In addition, as indicated in \cite{NLBHSoleng1995}, the logarithmic $U(1)$ gauge theory, which is contained in the class of theories constructed in \cite{Altshuler}, can be regarded as a possible mechanism for inflation. For these reasons, logarithmic $U(1)$ gauge theory is not just a toy model of mathematical interest; it has physical support as well. Given these motivations, it would be interesting to examine the idea of holographic heat engines coupled with the logarithmic $U(1)$ gauge theory and investigate the effect of a nonlinear matter source on them. In Ref. \cite{Johnson2016a}, spherically symmetric Born-Infeld AdS black holes as heat engines have been studied using a high-temperature expansion, but here, logarithmic $U(1)$ AdS black holes with various (spherical, planar and hyperbolic) symmetries are considered as a new class of holographic heat engines, and our considerations are not restricted to the high-temperature limit. We extensively explore the whole region of parameter space in low and high temperature limits for both the weak- and strong-field regimes. Essentially, in the weak coupling limit, our results approximately approach those of charged black hole heat engines with any kind of BI-type theory as the matter source. In the strong coupling limit, for any kind of BI-type theory, a qualitative behaviour similar to our results in this paper is expected. So, the results of this paper are predicted for all the BI-type theories. For our purposes, this paper is organized as follows: In Sec. \ref{black holes}, we define our setup and notation and give a brief review of logarithmic $U(1)$ AdS black holes with their features. In Sec. \ref{Thermodynamics}, we give a brief review of the conserved charges, the extended black hole thermodynamics and the natural appearance of the vacuum polarization quantity. In Sec. \ref{Heat Engines}, we explore the idea of holographic heat engines, present the exact efficiency formula and investigate the engine's performance in weak and strong coupling regimes, followed by the numerical results and figures. Finally in Sec. \ref{Final}, the results are summarized and discussed.
\section{Logarithmic $U(1)$ AdS black holes} \label{black holes} The action of Einstein gravity coupled with the logarithmic $U(1)$ gauge field in an AdS background is given by \begin{equation} {\cal{I}} =\frac{-1}{16 \pi}\int_{\cal{M}} d^D x\sqrt{-g}\big[R-2\Lambda+\mathcal{L(F)}\big], \end{equation} where $\Lambda=-(D-1)(D-2)/2L^2$ is the cosmological constant with the AdS radius $L$. In the above action, $\mathcal{L(F)}$ is the Lagrangian of the logarithmic form of BI-type theory \cite{NLBHSoleng1995} \begin{equation}\label{logarithmicsexact} \mathcal{L(F)}= -8 \beta^2 \ln \Big(1+\frac{\cal{F}}{8\beta^2}\Big), \end{equation} where the real constant $\beta$ denotes the non-linearity parameter and sets the strength of the maximal electric field, ${\cal{F}}=F^{\mu \nu}F_{\mu \nu}$ is the Maxwell invariant, and $F^{\mu \nu}$ is the electromagnetic field strength tensor defined by $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$. The non-linearity parameter $\beta$ has units of electric field intensity, $\rm{V/m}$, in SI units (in natural units it has dimension of mass). In the weak coupling limit $\beta \to \infty$ (more precisely, $\beta$ large but finite), ${\cal L}(\cal F)$ is of the form \begin{equation} \label{logarithmics} {\left. {{\cal L}({\cal F})} \right|_{\beta \to \infty }} = - {\cal F} + \frac{{{{\cal F}^2}}}{{16{\beta ^2}}} - \frac{{{{\cal F}^3}}}{{192{\beta ^4}}} + O(1/{\beta ^6}). \end{equation} The leading term of the expansion is the standard Maxwell Lagrangian. The next-to-leading term, $O({{\cal F}^2}/{\beta ^2})$, corresponds to the lowest order quantum correction to the classical Maxwell Lagrangian. Therefore, the nonlinear parameter $\beta$ allows us to control the strength of the higher derivative corrections to the Abelian Maxwell field. However, in the strong coupling limit, i.e. $\beta \to 0$ (more precisely, $\beta$ small but nonzero), this expansion is no longer valid and the exact form of the logarithmic Lagrangian (\ref{logarithmicsexact}) must be used. The gravitational and electromagnetic field equations of Einstein gravity in the presence of logarithmic NED read \begin{equation} \label{Einstein field equations} G_{\mu \nu}+\Lambda g_{\mu\nu}=\frac{1}{2}g_{\mu\nu}\mathcal{L(F)}-2F_{\mu\lambda}F_{\nu}\,^{\lambda}\frac{\partial\mathcal{L(F)}}{\partial\mathcal{F}}, \end{equation} and \begin{equation}\label{EMFE} \partial_{\mu}\Big(\sqrt{-g}\frac{\partial\mathcal{L(F)}}{\partial\mathcal{F}} F^{\mu\nu}\Big)=0. \end{equation} For the spacetime metric, the following static ansatz with spherical symmetry for the event horizon is used \begin{equation} d{s^2} = - f(r)d{t^2} + \frac{{d{r^2}}}{{f(r)}} + {r^2}\big( {d x _1^2 + \sum\limits_{i = 2}^{D - 2} {\prod\limits_{j = 1}^{i - 1} {{{\sin }^2}{x _j}d x _i^2} } } \big). \end{equation} In what follows, we use the symbol ``${\Sigma _{D-2}}$'' to denote the volume of the unit $(D-2)$-sphere, given by \footnote{For hyperbolic and planar black holes, thermodynamic quantities can be computed per unit volume, $\Sigma _{D - 2}$. However, as will be seen in Sect. \ref{Heat Engines}, the volume $\Sigma _{D - 2}$ does not appear in the efficiency formula.} \begin{equation} {\Sigma _{D - 2}} = \frac{{2{\pi ^{(D- 1)/2}}}}{{\Gamma \left( {\frac{{D - 1}}{2}} \right)}}, \end{equation} where ${\Gamma \left( {\frac{{D - 1}}{2}} \right)}$ is the gamma function. We also generalize our considerations to AdS black holes with horizons of zero ($k=0$) and negative ($k=-1$) constant curvature.
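Returning to the weak-field expansion (\ref{logarithmics}), its coefficients $1/16$ and $1/192$ can be cross-checked by expanding the exact Lagrangian (\ref{logarithmicsexact}) symbolically. A minimal sketch in Python with \texttt{sympy} (not part of the original derivation; it simply reproduces the series):
\begin{verbatim}
import sympy as sp

F, beta = sp.symbols('F beta', positive=True)

# exact logarithmic Lagrangian, Eq. (logarithmicsexact)
L = -8*beta**2*sp.log(1 + F/(8*beta**2))

# weak-field limit: expand for F << beta^2
print(sp.series(L, F, 0, 4))
# -F + F**2/(16*beta**2) - F**3/(192*beta**4) + O(F**4)
\end{verbatim}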
Later on, the cases of planar and hyperbolic black holes as heat engines will also be discussed. The gauge field $A_\mu$ is determined by solving the electromagnetic field equation (\ref{EMFE}). Assuming the electrostatic ansatz ${A_\mu } = \Phi (r)\delta _\mu ^t$, the differential equation for the scalar potential $\Phi$ is obtained as \begin{equation} \label{gauge potential} \frac{r}{{{\beta ^2}}}\,\partial _r^2\Phi (r)\,{\left( {{\partial _r}\Phi (r)} \right)^2} - \frac{{(D - 2)}}{{{\beta ^2}}}{\left( {{\partial _r}\Phi (r)} \right)^3} + 4r\,\partial _r^2\Phi (r) + 4(D - 2){\partial _r}\Phi (r) = 0. \end{equation} The first two terms stem from the logarithmic $U(1)$ electrodynamics. As $\beta \to \infty$, these terms approach zero and the differential equation reduces to the Maxwell case. Solving Eq. (\ref{gauge potential}) yields \begin{equation}\label{gauge potential_ans} \Phi (r) = \frac{{2{\beta ^2}{r^{D - 1}}}}{{(D - 1)q}}\left(1- {}_2{F_1}\left( {\left[ { - \frac{1}{2},\,\frac{{ - (D - 1)}}{{2(D - 2)}}} \right],\,\left[ {\frac{{D - 3}}{{2(D - 2)}}} \right],\,\Upsilon (r) } \right) \right)+C_1 \end{equation} where \begin{equation} \label{Upsilon} \Upsilon (r) = - \dfrac{q^2 }{\beta ^2r^{2(D - 2)}}, \end{equation} and $C_1$ is a constant which is specified by gauge fixing (see the next section). Then, having the gauge vector field $A_\mu$, we can easily obtain the field strength tensor, $F_{\mu \nu}$. The only non-zero components of the field strength tensor are $F_{rt}=-F_{tr}$. This determines the associated electrostatic field as $F_{rt} = {\nabla _r}\Phi (r)=E(r)$. Accordingly, in this gauge theory, the electrostatic field of a point particle (or outside the event horizon of a charged black hole) is obtained as \begin{equation} \label{electric field} E(r) = \frac{2}{{1 + \sqrt {1 + \frac{{{q^2}}}{{{\beta ^2}{r^{2(D - 2)}}}}} }}\frac{q}{{{r^{D - 2}}}}. \end{equation} At large distances ($r \to \infty$), the standard Coulomb behaviour, $E=q/r^{D-2}$, is recovered regardless of the strength of the non-linearity coupling $\beta$, since $\beta$ enters only through subleading terms. Having the stress-energy tensor, we can solve the equation of motion on the gravity side, Eq. (\ref{Einstein field equations}), which admits the emblackening factor $f(r)$ given by \begin{eqnarray}\label{fr} f(r)=&& k - \frac{m}{{{r^{D - 3}}}} + \frac{{{r^2}}}{{{L^2}}} + \frac{{8{r^2}\beta }}{{\left( {D - 2} \right){{\left( {D - 1} \right)}^2}}}\left( {\left( { - 2D + 3} \right){r^{2 - D}}\xi (r) + \beta \left( {2D - 3 - \left( {D - 1} \right)\ln \left[ {\frac{{2{r^D}\beta }}{{{r^D}\beta + {r^2}\xi (r)}}} \right]} \right)} \right)\nonumber\\ &&+ \frac{{8\left( {D - 2} \right){q^2}{r^{8 - D}}\beta \sqrt {1 - \Upsilon }\, \xi (r)}}{{\left( {D - 3} \right){{\left( {D - 1} \right)}^2}\left( {{q^2}{r^4} + {r^{2D}}{\beta ^2}} \right)}}{}_2{F_1}\left( {\left[ {\frac{1}{2},\frac{{D - 3}}{{2\left( {D - 2} \right)}}} \right],\left[ {\frac{{ 3D-7}}{{2D-4}}} \right],\Upsilon (r) } \right) \end{eqnarray} with \begin{equation} \label{Xi} \xi (r)=\sqrt{q^2 + r^{2(D-2)} \beta^2}. \end{equation} In obtaining the emblackening factor $f(r)$, the black hole horizon is allowed to have spherical ($k=+1$), planar ($k=0$) or hyperbolic ($k=-1$) symmetry.
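The limiting behaviours of the field (\ref{electric field}) are easy to confirm numerically: as $r \to 0$ the field saturates at the finite maximal value $2\beta$ (which follows directly from the formula), while at large $r$ the Coulomb form is recovered. A minimal numerical sketch in Python (parameter values are illustrative, not taken from the text):
\begin{verbatim}
import numpy as np

def E(r, q=1.0, beta=0.5, D=4):
    """Electrostatic field, Eq. (electric field)."""
    x = q / (beta * r**(D - 2))          # = sqrt(-Upsilon)
    return 2.0 * (q / r**(D - 2)) / (1.0 + np.sqrt(1.0 + x**2))

print(E(1e-6), 2 * 0.5)    # r -> 0: saturates at 2*beta
print(E(1e3), 1.0 / 1e6)   # r -> infinity: Coulomb form q/r^(D-2)
\end{verbatim}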
Remarkably, inspection of the Kretschmann invariant ($K = {R_{abcd}}{R^{abcd}}$) shows that the singularity at the origin ($r=0$) is much weaker than the singularities of the conventional linear Maxwell cases \cite{NLBHSoleng1995,NLBHs4}. In four spacetime dimensions, the Kretschmann scalar behaves as $K \propto 1/{r^4}$ for the logarithmic $U(1)$ case, while in the linear Maxwell limit ($\beta=\infty$) it behaves as $K \propto 1/{r^8}$. Moreover, the black hole solutions possess an event horizon ($r_+$) which is the largest root of $f(r_+)=0$. \section{Conserved charges, thermodynamics and vacuum polarization} \label{Thermodynamics} In this section, we briefly review the conserved quantities of the logarithmic $U(1)$ AdS black hole solutions with different topologies for the event horizon and then verify the first law of thermodynamics and the associated Smarr relation in the extended phase space. Let us start with the finite mass. In asymptotically AdS backgrounds, one can use the Komar mass integral or the conformal method of Ashtekar-Magnon-Das to obtain the finite mass as \cite{Ashtekar1984,Ashtekar2000} \begin{equation} M = \frac{{{\Sigma _{D - 2}}(D - 2)}}{{16\pi }}m, \end{equation} in which $m$ is a parameter related to the total mass, obtained from $f(r_+)=0$. Hence the mass, or more accurately the enthalpy, is given by \begin{eqnarray}\label{MADM} M =&& \frac{{\left( {D - 2} \right){\Sigma _{D - 2}}}}{{16\pi }}\Bigg( k {r_ + ^{D - 3}} - \frac{{2 \Lambda {r_ + ^{D - 1}}}}{{{(D-1)(D-2)}}} - \frac{{8\beta \left( {\left( {2D - 3} \right){r_ + }\xi_+ + {r_ + ^{D-1}}\beta \left( {3 - 2D +\ln (2) \left( {D - 1} \right)} \right)} \right)}}{{{{\left( {D - 1} \right)}^2}(D - 2)}}\nonumber\\ &&- \frac{{8{r_ + ^{D - 1}}{\beta ^2}\ln \Big( { - \frac{{{r_ + ^{D - 4}}\beta \left( {{r_ + ^D}\beta - {r_ + ^2}\xi_+ } \right)}}{{{q^2}}}} \Big)}}{{(D - 1)(D - 2)}} + \frac{{8\left( {D - 2} \right){q^2}{r_ + ^{3 - D}} {}_2{F_1}\left( {\left[ {\frac{1}{2},\frac{{D - 3}}{{2(D - 2)}}} \right],\left[ {\frac{{3D - 7}}{{2D - 4}}} \right],\Upsilon_+ } \right)}}{{\left( {D - 3} \right){{\left( {D - 1} \right)}^2}}} \Bigg), \end{eqnarray} where the conventions $\xi_+=\xi(r_+)$ and $\Upsilon_+=\Upsilon (r_+)$ have been used for relations (\ref{Xi}) and (\ref{Upsilon}). The Hawking temperature of the logarithmic $U(1)$ AdS black holes can be obtained from the definition of the surface gravity \cite{Hawking1975}, ${\kappa ^2} = -\frac{{ 1}}{2}({\nabla _\mu }{\chi _\nu })({\nabla ^\mu }{\chi ^\nu })$ where ${\chi _\mu } = \delta _\mu ^t{\partial _t}$ is the timelike Killing field, yielding \begin{eqnarray}\label{temp} T &=& \frac{\kappa}{2 \pi}= \frac{1}{{4\pi }}{\left. {\frac{{\partial f(r)}}{{\partial r}}} \right|_{r = {r_ + }}} \nonumber \\ &=&\frac{{(D - 2)(D - 3)k - 2\Lambda r_ + ^2 + 8{\beta ^2}r_ + ^2\left( {\ln \Big( {\frac{{1 + \sqrt {1 - \Upsilon_+ } }}{2}} \Big) + 1 - \sqrt {1 - \Upsilon_+ } } \right)}}{{4\pi (D - 2){r_ + }}}. \end{eqnarray} The entropy is given in terms of the horizon radius via the well-known Bekenstein-Hawking formula as \begin{equation} \label{entropy} S =\dfrac{A}{4}= \frac{{{\Sigma _{D - 2}}}}{4}r_ + ^{D - 2}. \end{equation} In order to compute the $U(1)$ charge, one can calculate the charge passing through a $(D-2)$-dimensional hypersphere at spatial infinity with the same geometry as the event horizon \cite{Carroll2004}.
Therefore, the total charge is given by the generalized Gauss' law as \begin{equation} \label{charge} Q = - \frac{1}{{4\pi }}\int_{{r_\infty }} {{d^{D-2}}x \left( {\frac{{\partial {\cal L}({\cal F})}}{{\partial {\cal F}}}} \right){n_\mu }{\sigma _\nu }{F^{\mu \nu }}} = \frac{{{\Sigma _{D - 2}}}}{{4\pi }}q, \end{equation} in which $n_\mu$ and $\sigma_\mu$ are (outward-pointing) unit normal vectors defined as \begin{equation} {n_\mu } = - \sqrt {f(r)} dt\,,\,\,\,\,\,\,{\sigma _\mu } = \frac{1}{{\sqrt {f(r)} }}dr. \end{equation} The $U(1)$ potential conjugate to the electric charge can be obtained using elementary arguments from classical electrodynamics \cite{JacksonElectrodynamics}. We work in a gauge in which the norm of the gauge field, $A^2={A_\mu} {A^\mu}$, is finite at the horizon. This leads to ${A_t}(r_+)=0$ at the horizon, and the constant $C_1$ in Eq. (\ref{gauge potential_ans}) is obtained as \begin{equation} C_1=- \frac{{2{\beta ^2}{r_+^{D - 1}}}}{{(D - 1)q}}\left(1- {}_2{F_1}\left( {\left[ { - \frac{1}{2},\,\frac{{ - (D - 1)}}{{2(D - 2)}}} \right],\,\left[ {\frac{{D - 3}}{{2(D - 2)}}} \right],\,\Upsilon_+ } \right) \right). \end{equation} Hence, using the fact that $E(r) = {\nabla _r}\Phi (r)$, the gauge potential measured with respect to the event horizon is calculated as \begin{eqnarray} \Phi = \Phi (\infty ) - \Phi ({r_ + }) &=& \int_{{r_ + }}^\infty {E(r) dr} \nonumber \\ &=& - \frac{{2{\beta ^2}r_ + ^{D - 1}}}{{(D - 1)q}}\left( {1 - {}_2{F_1}\left( {\left[ { - \frac{1}{2},\,\frac{{ - (D - 1)}}{{2(D - 2)}}} \right],\,\left[ {\frac{{D - 3}}{{2(D - 2)}}} \right],\,\Upsilon_+ } \right)} \right). \end{eqnarray} We now invoke the main idea of the extended phase space: interpreting the mass as the enthalpy ($H \equiv M$) by treating $\Lambda$ as a pressure ($\Lambda =- 8 \pi P$). With this picture in mind, the thermodynamic volume conjugate to $ P $ is naturally obtained as \begin{equation}\label{vol} V = {\left( {\frac{{\partial H}}{{\partial P}}} \right)_{S,Q}} = \frac{{{\Sigma _{D - 2}}}}{{D - 1}}r_ + ^{D - 1}. \end{equation} It is not difficult to show that the temperature and the potential can also be computed by using the following thermodynamic relations \begin{equation} T = {\left( {\frac{{\partial H}}{{\partial S}}} \right)_{P,Q}}\,,\,\,\,\,\Phi = {\left( {\frac{{\partial H}}{{\partial Q}}} \right)_{S,P}}, \end{equation} which means that these quantities satisfy the first law of thermodynamics in the extended phase space, \begin{equation}\label{Efirstlaw} dH = TdS + VdP + \Phi dQ. \end{equation} Moreover, these quantities satisfy the following Euler-type relation \begin{equation} (D-3)H=(D-2)TS-PV+(D-3) \Phi Q - {\mathcal B} \beta \end{equation} which is known as the Smarr formula.
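These conjugacy relations can be spot-checked numerically. Since only the $\Lambda$-term of the enthalpy (\ref{MADM}) depends on $P$ at fixed $S$ (i.e., fixed $r_+$) and fixed $Q$, a finite-difference check of Eq. (\ref{vol}) needs only that term. A minimal Python sketch (illustrative values, not a full implementation of (\ref{MADM})):
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def sigma(D):
    """Volume of the unit (D-2)-sphere."""
    return 2.0 * np.pi**((D - 1) / 2) / gamma((D - 1) / 2)

def H_P_part(P, rp, D=4):
    """P-dependent piece of the enthalpy: the Lambda term of Eq. (MADM)."""
    Lam = -8.0 * np.pi * P
    return -(D - 2) * sigma(D) / (16 * np.pi) \
           * 2.0 * Lam * rp**(D - 1) / ((D - 1) * (D - 2))

rp, P, h = 1.3, 1.0, 1e-6
dHdP = (H_P_part(P + h, rp) - H_P_part(P - h, rp)) / (2 * h)
V = sigma(4) * rp**3 / 3          # Eq. (vol) for D=4
print(dHdP, V)                    # the two values agree
\end{verbatim}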
The new thermodynamic quantity $\cal B$ is conjugate to the non-linearity parameter $\beta$ and is of the form \begin{eqnarray} \label{vacuum polarization} \mathcal{B} =&& {\left( {\frac{{\partial H}}{{\partial \beta}}} \right)_{S,P,Q}}\nonumber\\ =&& \frac{{{\Sigma _{D - 2}}}}{{2\pi {{\left( {D - 1} \right)}^2}\xi_+ }}\Bigg( \left( {5 - 3D} \right){\beta ^2}{r_{+}^{2D - 3}} + \left( {5 - 3D} \right){q^2}r_{+} - \beta \left( {5 - 3D + \ln \left( 4 \right)\left( {D - 1} \right)} \right) \xi_+ {r_{+}^{D - 1}} \nonumber\\ &&+ \frac{{\left( {D - 2} \right){q^2}{r_{+}^{1 - 2D}}\left( {{q^2}{r_{+}^4} + {\beta ^2} {r_{+}^{2D}}} \right){}_2{F_1}\left( {\left[ {\frac{1}{2},\frac{{D - 3}}{{2\left( { D-2} \right)}}} \right],\left[ {\frac{{3D - 7}}{{2D - 4}}} \right],\Upsilon_+ } \right)}}{{{\beta ^2}\sqrt {1 - \Upsilon_+ } }}\nonumber\\ && - 2 (D - 1) \beta {r_{+}^{D - 1}}\xi_+ \ln \Big( {\frac{{\beta{r_{+}^{D - 4}} \left( {{r_{+}^2}\xi_+ -\beta {r_{+}^D} } \right)}}{{{q^2}}}} \Big)\Bigg). \end{eqnarray} Clearly, the non-linearity parameter $\beta$ must be included for the Smarr relation to hold; indeed, it is required for the mutual consistency of the extended first law and the corresponding Smarr relation. We therefore treat $\beta$ as a thermodynamic phase space variable in what follows, yielding \begin{equation}\label{EfirstlawwithBeta} dH = TdS + VdP + \Phi dQ + {\cal B} d\beta. \end{equation} Therefore, considering the logarithmic NED in the extended phase space leads to the appearance of a new thermodynamic quantity ($\cal B$) with a thermodynamic interpretation. This quantity also appears in the Smarr formula of AdS black holes with a BI NED source, where it was referred to as the BI \textit{vacuum polarization} in Ref. \cite{Mann2012}. The appearance of this quantity is characteristic of any BI-type nonlinear matter source and is absent in other nonlinear $U(1)$ gauge field theories such as power Maxwell invariant electrodynamics. Therefore, following \cite{Mann2012}, we call $\cal B$ the logarithmic vacuum polarization. It is instructive to consider the physical meaning of this quantity. As indicated in \cite{Mann2012}, ${\cal B}$ has units of electric polarization, since the term ${\cal B}\beta$ has units of energy and $\beta$ has units of electric field strength. The limit $\beta = \infty$ corresponds to Maxwell electrodynamics, so the vacuum polarization vanishes in this limit. This is one of the main differences between Maxwell electrodynamics and any BI-type NED theory. In the weak-field limit ($\beta \to \infty$), $\cal B$ approaches zero rapidly as $\beta$ increases further (this happens for BI theory as well \cite{Mann2012}). However, as depicted in Fig. \ref{VacPol_beta}, it becomes larger in the strong coupling regime and eventually diverges at $\beta = 0$. As will be shown in the next section, the weak-field limit is mathematically equivalent to the long-distance regime, in which the event horizon radius is large and we are far enough from the nonlinear matter source to apply the expansion. This can be inferred directly from Figs. \ref{VacPol_beta} and \ref{VacPol_r}, in which $\cal B$ approaches zero for large enough values of $r_+$ or $\beta$, indicating that the expansion of all quantities in this paper around large $r_+$ is the same as the expansion around large $\beta$. In addition, in both the strong and weak coupling regimes, the logarithmic vacuum polarization for a fixed $\beta$ reaches a finite value at the origin.
This behaviour is illustrated in Fig. \ref{VacPol_r}. \\ \begin{figure}[!htbp] \epsfxsize=9 cm \includegraphics[width=9 cm]{VacPol_beta} \caption{The logarithmic vacuum polarization, ${\cal B}$, versus $\beta$ for $r_{+}=1$, $D=4$, and $q=0.5$ (black solid line), $q=1$ (red dashed line) and $q=1.5$ (blue dot-dashed line).} \label{VacPol_beta} \end{figure} \begin{figure}[!htbp] \epsfxsize=9 cm \includegraphics[width=9 cm]{VacPol_r} \caption{The logarithmic vacuum polarization, ${\cal B}$, versus $r_+$ for $D=4$, $q=2$ and $\beta=0.5$ (black solid line), $\beta=1$ (red dashed line) and $\beta=5$ (blue dot-dashed line).} \label{VacPol_r} \end{figure} \section{Holographic Heat engines} \label{Heat Engines} Having the extended first law of thermodynamics (\ref{EfirstlawwithBeta}) in hand, the black hole can be treated as the working substance of a classical heat engine, which absorbs heat $Q_H$, produces mechanical work $W$ and dumps waste heat $Q_C$ into the environment (see Fig. \ref{he}). The efficiency of an engine, defined as $\eta=W/Q_H=1-Q_C/Q_H$, depends on the choice of the paths of the engine cycle in the $P-V$ plane and also on the equation of state of the black hole considered as the working substance. Imposing the second law of thermodynamics, the maximum efficiency that can be achieved by an engine is $\eta_C=1-T_C/T_H$. This is the efficiency of a Carnot cycle consisting of two adiabats and two isotherms. Labeling the isotherm with the higher temperature by $T_H$ and the one with the lower temperature by $T_C$, the Carnot cycle operates in four steps: first an isothermal expansion at $T_H$ through which $Q_H$ is absorbed, second an adiabatic expansion to $T_C$, third an isothermal compression at $T_C$ while dumping the excess heat $Q_C$, and finally an adiabatic compression bringing the substance back to $T_H$ (see Fig. \ref{carnot}). During the Carnot cycle, no new entropy is created and therefore the efficiency always attains the maximum value allowed by the laws of thermodynamics, regardless of the working substance or the equation of state that governs it. For this reason it is logical to take the Carnot efficiency as a benchmark and compare the efficiencies of our black hole heat engines against it. \begin{figure}[!htbp] \epsfxsize=5 cm \includegraphics[width=6 cm]{he} \caption{Energy-flow diagram for a heat engine.} \label{he} \end{figure} \begin{figure}[!htbp] \epsfxsize=9 cm \includegraphics[width=9 cm]{carnot} \caption{The black solid loop (points $1-2^{'}-3-4^{'}$) shows the Carnot cycle, which is the same as the Stirling cycle for static black holes. The dashed green rectangular cycle (points $1-2-3-4$) is an ideal engine for which the efficiency is computed. } \label{carnot} \end{figure} \par It can clearly be seen from Eqs. (\ref{entropy}) and (\ref{vol}) that both the entropy and the volume depend only on the horizon radius $r_{+}$ and are therefore not independent quantities, meaning that for static black holes there is no difference between adiabats and isochores. As a consequence, the Stirling and Carnot cycles are identical (see Fig. \ref{carnot}), a fact confirmed numerically in the sketch below. Also, for static black holes some well-known thermodynamic cycles such as the Otto cycle (involving a sequence of adiabatic $\to$ isochoric $\to$ adiabatic $\to$ isochoric processes) and the Diesel cycle (composed of adiabatic $\to$ isobaric $\to$ adiabatic $\to$ isochoric processes) have no place.
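The coincidence of adiabats and isochores can be made concrete: both $S$ and $V$ in Eqs. (\ref{entropy}) and (\ref{vol}) are monotone functions of $r_+$ alone, so fixing one fixes the other. A minimal Python sketch (illustrative values):
\begin{verbatim}
import numpy as np
from scipy.special import gamma

D = 4
Sigma = 2.0 * np.pi**((D - 1) / 2) / gamma((D - 1) / 2)

rp = np.linspace(0.5, 5.0, 50)
S = Sigma * rp**(D - 2) / 4.0          # Eq. (entropy)
V = Sigma * rp**(D - 1) / (D - 1)      # Eq. (vol)

# both are strictly increasing in r_+, hence dV = 0 implies dS = 0:
# isochores and adiabats are the same curves for static black holes
print(np.all(np.diff(S) > 0) and np.all(np.diff(V) > 0))
\end{verbatim}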
Besides, for static black holes, the specific heat is defined as \begin{equation} \label{specific heat} C=T{\left( {\frac{{\partial S}}{{\partial T}}} \right)} = T\left( {\frac{{\partial S}}{{\partial {r_ + }}}} \right)\left( {\frac{{\partial {r_ + }}}{{\partial T}}} \right), \end{equation} which vanishes at constant volume. For the case of logarithmic $U(1)$ AdS black holes, this can be checked by evaluating the specific heat using Eqs. (\ref{temp}) and (\ref{entropy}) as \begin{eqnarray}\label{specific} C=&&{{\Sigma _{D - 2}}{r_ + ^{D - 2}}\left( {4{r_ + }\frac{{\partial P}}{{\partial T}} - D +2} \right)}\nonumber\\ && \times \left( {\frac{{k(D-2)(D-3) - 8{\beta ^2}{r_ + ^2}\left( { - 1 + \sqrt {1 - \Upsilon_+ } } \right) + 8{\beta ^2} {r_ + ^2} \ln \left( {\frac{1}{2}\left( {1 + \sqrt {1 - \Upsilon_+ } } \right)} \right) + 16\pi P {r_ + ^2}}}{{{{4 (D - 3)\left( {k (D - 2) - 8 {\beta ^2} {r_ + ^2} \left( { - 1 + \sqrt {1 - \Upsilon_+ } } \right)} \right)}} - 32{\beta ^2}{r_ + ^2}\ln \left( {\frac{1}{2}\left( {1 + \sqrt {1 - \Upsilon_+ } } \right)} \right) - 64\pi P {r_ + ^2}}}} \right), \end{eqnarray} and then substituting ${\partial {r_+}}/{\partial T} = 0$ (appropriate at constant volume) into Eq. (\ref{specific heat}), or ${\partial P}/{\partial T}=(D-2)/4r_{+}$ into Eq. (\ref{specific}), which leads to $C_V=0$. The specific heat at constant pressure, $C_P$, is simply given by setting ${\partial P}/{\partial T}=0$ in Eq. (\ref{specific}). Now, with the explicit form of $C_P$ and a suitable choice of cycle in hand, the engine's efficiency can be studied more easily. Fig. \ref{carnot} shows the rectangular cyclic process constructed out of a pair of isobaric and a pair of isochoric processes, often called the ideal cycle. If the black hole undergoes the ideal cycle, the total work done can be written as \begin{equation}\label{work} W= W_{1\rightarrow 2}+W_{3\rightarrow 4}=(P_1-P_4)(V_2-V_1), \end{equation} and the heat flowing into the loop through the top isobar is given by \begin{equation}\label{heat} Q_H=\int_{T_1}^{T_2}C_P(P_1,T)dT. \end{equation} Solving for $r_{+}$ in terms of $T$ using Eq. (\ref{temp}) and substituting into Eq. (\ref{specific}), $C_P$ can be rewritten as a function of $T$, and the integration along the isobar needed to calculate $Q_H$ can be performed. In most cases this integration is complicated and, as a result, the high temperature limit can be studied by performing a series expansion for $T$ in Eq. (\ref{temp}) about large $r_{+}$. This was done before for Born-Infeld \cite{Johnson2016a} and Gauss-Bonnet \cite{Johnson2016b} AdS black holes, for which the engine efficiencies were found in the high temperature limit. We will also study the high temperature limit for the logarithmic charged AdS black hole in detail in section \ref{secht}. \par For now, we use a much more straightforward way to evaluate the efficiency, first proposed in \cite{Johnson_exact}. The first law of thermodynamics for black holes (\ref{Efirstlaw}) suggests a simple way to compute the heat flows ($Q_H$ and $Q_C$) along the isobars in the ideal cycle of Fig. \ref{carnot}. Along the isobaric curves the pressure is constant ($dP=0$) and, as a result, the thermodynamic identity leads to $dH=TdS$ for a black hole engine with specified charge and nonlinear parameter ($\beta$), which means that during these constant-pressure processes the heat flows change the enthalpy and the compression-expansion work plays no role.
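As an aside, the $C_P$ route of Eqs. (\ref{specific heat}) and (\ref{heat}) is straightforward to implement numerically. A minimal Python sketch using Eqs. (\ref{temp}) and (\ref{entropy}) (illustrative parameter values; finite differences stand in for the analytic derivatives):
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def T_hawking(rp, P=1.0, q=0.1, beta=10.0, D=4, k=1):
    """Hawking temperature, Eq. (temp), with Lambda = -8*pi*P."""
    s = np.sqrt(1.0 + q**2 / (beta**2 * rp**(2 * (D - 2))))  # sqrt(1 - Upsilon_+)
    nl = 8.0 * beta**2 * rp**2 * (np.log((1.0 + s) / 2.0) + 1.0 - s)
    return ((D - 2) * (D - 3) * k + 16.0 * np.pi * P * rp**2 + nl) \
           / (4.0 * np.pi * (D - 2) * rp)

def C_P(rp, h=1e-6, **kw):
    """C_P = T (dS/dr_+)/(dT/dr_+) at fixed P, cf. Eq. (specific heat)."""
    D = kw.get('D', 4)
    Sigma = 2.0 * np.pi**((D - 1) / 2) / gamma((D - 1) / 2)
    dSdr = Sigma * (D - 2) * rp**(D - 3) / 4.0   # from Eq. (entropy)
    dTdr = (T_hawking(rp + h, **kw) - T_hawking(rp - h, **kw)) / (2 * h)
    return T_hawking(rp, **kw) * dSdr / dTdr

print(C_P(2.0))   # C_V vanishes identically, since fixed V means fixed r_+
\end{verbatim}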
In other words, once the enthalpy changes are known, the heat flows, and consequently the efficiency of the black hole engine, are at hand. Since the enthalpy of the AdS black hole equals its mass in the extended phase space, the efficiency is obtained in terms of the black hole mass as \begin{equation}\label{EEF} \eta=1-\frac{M_3-M_4}{M_2-M_1}, \end{equation} where the subscripts $1,2,3$ and $4$ refer to the black hole mass evaluated at the corners of the rectangular cycle in Fig. \ref{carnot}. \par Before proceeding further, it should be noted that the equation of state of the logarithmic $U(1)$ AdS black hole, i.e., \begin{eqnarray} P &=& \frac{{(D - 2)T}}{{4{r_ + }}} - \frac{{(D - 2)(D - 3)k}}{{16\pi r_ + ^2}} - \frac{{{\beta ^2}}}{{2\pi }}\left( {\ln \left( {\frac{{1 + \sqrt {1 - \Upsilon_+} }}{2}} \right) + 1 - \sqrt {1 - \Upsilon_+} } \right);\nonumber\\ {r_ + } &=& {\left( {\frac{{(D - 1)V}}{{{\Sigma _{D - 2}}}}} \right)^{\frac{1}{{D - 1}}}}, \end{eqnarray} admits the possibility of phase transitions at sufficiently low temperatures. As an example, the characteristic Van der Waals behaviour of the isotherms in the $P-V$ plane is displayed in Fig. \ref{eos}. No phase transition is seen for high temperature isotherms ($T>T_{cr}$), so the phase boundary in the $P-T$ plane vanishes above the critical temperature $T_{cr}$. For $T<T_{cr}$, as in the Van der Waals liquid-gas system, a first order phase transition occurs, here between small and large black holes. Note that in this region the oscillatory parts of the isotherms should be corrected, in much the same way as the Maxwell construction for the Van der Waals fluid; as a result, upon decreasing the pressure the system goes directly from the small black hole state to the large one (and vice versa) with an abrupt jump in the volume. Therefore the unphysical pressure regions and also the multi-valued parts disappear. Working in the safe temperature domain, i.e. high enough to avoid the multivaluedness associated with phase transitions, allows us to consider heat engine cycles in the $T>T_{cr}$ region. Hence we do not focus on phase transitions here (see Refs. \cite{Kubiznak2012,Mann2012,vdWNED2019} for more details about black hole phase transitions). It is worthwhile to point out that the leading order of the equation of state in the high temperature limit reads \begin{equation}\label{eos_HT} PV^{\frac{1}{D-1}}\sim T, \end{equation} which is the ideal gas limit for our black holes. However, in what follows, we will study the whole parameter space of the theory, including low and high temperatures. In the low temperature domain, the parameters of our black hole engines have been chosen such that no phase transition takes place at all. \begin{figure}[!htbp] \epsfxsize=9 cm \includegraphics[width=9 cm]{eos} \caption{Isotherms in the $P-V$ plane for $D=4$, $q=1$, $\beta=10$. From bottom to top the isotherms are for $T/T_{cr}$ ranging from $0.97$ to $1.03$ in increments of $0.01$. } \label{eos} \end{figure} \subsection{The exact efficiency formula for the heat engines: general considerations} In this section we use the exact efficiency formula (\ref{EEF}) to evaluate the efficiency of the ideal cycle of Fig. \ref{carnot} for logarithmic $U(1)$ AdS black holes. Inserting Eq. (\ref{MADM}) into Eq.
(\ref{EEF}), we obtain \begin{eqnarray}\label{exactf} \eta=\frac{N(r_1,r_2,P_1,P_3)}{D(r_1,r_2,P_1,P_3)}, \end{eqnarray} where \addtocounter{equation}{-1} \begin{subequations} \begin{align} N(r_1,r_2,P_1,P_3)=& 16\pi \beta \left( {D - 3} \right)\left( {D - 1} \right)\left( {{P_1} - {P_3}} \right){\left( {{r_1}{r_2}} \right)^{2(D + 1)}}\left( {{r_1}{r_2^D} - {r_1^D}{r_2}} \right),\\ D(r_1,r_2,P_1,P_3)=& - 8{\left( {D - 2} \right)^2}{q^2}{r_1^8}{r_2^{2D + 3}}\sqrt {{q^2} + {r_1^{2D - 4}}{\beta ^2}} {}_2{F_1}\left( {{\Upsilon _1}} \right) \nonumber \\ &+ {r_1^{2D}}\Bigg\{8{\left( {D - 2} \right)^2}{q^2}{r_1^3{r_2^8}}\sqrt {{q^2} + {r_2^{2D - 4}}{\beta ^2}} {}_2{F_1}\left( {{\Upsilon _2}} \right) \nonumber \\ &+ \left( {D - 3} \right){r_2^{2D}}\beta \bigg(k\left( {D - 2} \right){\left( {D - 1} \right)^2}\left( { - {r_1^D{r_2^3}} + {r_1^3}{r_2^D}} \right) \nonumber \\ &+ 8\left( {D - 1} \right){r_1^2}{r_2^2}{\beta ^2}\left( {{r_1^D}{r_2}\ln \left( {\frac{{2{r_1^D}\beta }}{{{r_1^D}\beta + {r_1^2}{\xi _1}}}} \right) - {r_1}{r_2^D}\ln \left( {\frac{{2{r_2^D}\beta }}{{{r_2^D}\beta + {r_2^2}{\xi _2}}}} \right)} \right)\nonumber \\ & + 8{r_1^2}{r_2^2}\Big( {2\pi \left( {D - 1} \right){P_1}\left( { -{ r_1^D}{r_2} + {r_1}{r_2^D}} \right) + \left( {2D - 3} \right)\beta \left( { - {r_1^D}{r_2}\beta + {r_1}{r_2^D}\beta + {r_1^2}{r_2}{\xi _1} - {r_1}{r_2^2}{\xi _2}} \right)} \Big) \bigg)\Bigg\} \end{align} \end{subequations} with $\Upsilon_{1,2}=\Upsilon|_{r=r_1,r_2}$, $\xi_{1,2}=\xi|_{r=r_1,r_2}$ and \begin{equation} {}_2{F_1}({\Upsilon})={}_2{F_1}\left( {\left[ {1,\frac{{2D - 5}}{{2D - 4}}} \right],\left[\frac{3D-7} {2D - 4} \right],\Upsilon } \right).\nonumber \end{equation} As discussed earlier, it is reasonable to compare the efficiency of the ideal cycle with the efficiency of the Carnot engine, $\eta_{C}$. It is also worthwhile to compare the efficiency of the logarithmic $U(1)$ AdS black hole heat engine with that of the Reissner-Nordstr\"{o}m AdS black hole, which is the simplest holographic heat engine, since our black holes deviate from Reissner-Nordstr\"{o}m AdS black holes only in the electromagnetic field sector. We denote the efficiency in the Einstein-Maxwell limit by $\eta_0$, defined as $\eta_0=\lim_{\beta\rightarrow \infty } \eta(\beta)$. In order to obtain the efficiency in this limit, we need the asymptotic behaviour of the finite mass, given by \begin{equation}\label{massinf} M|_{\rm{large}\,\beta}=\frac{{{\Sigma _{D - 2}}\left( {D - 2} \right)}}{{16\pi }}\left(k {{r_+^{D - 3}} + \frac{16 \pi P{{r_+^{D - 1}}}}{{{(D-1)(D-2)}}} + \frac{{2{q^2}{r_+^{3 - D}}}}{{\left( {D - 3} \right)\left( {D - 2} \right)}} - \frac{{{q^4}{r_+^{7 - 3D}}}}{{4\left( {D - 2} \right)\left( {3D - 7} \right){\beta ^2}}}} \right), \end{equation} which clearly reduces to the Reissner-Nordstr\"{o}m AdS mass as $\beta$ goes to infinity. Inserting Eq.
(\ref{massinf}) into the exact efficiency formula (\ref{EEF}) leads to \begin{equation}\label{effbetainf} \eta|_{\rm{large}\,\beta} =\frac{A(r_1,r_2,P_1,P_3)}{B(r_1,r_2,P_1,P_3)}, \end{equation} where \addtocounter{equation}{-1} \begin{subequations} \begin{align} A(r_1,r_2,P_1,P_3) =\,& 64\pi \left( {D - 3} \right)\left( {3D - 7} \right)\left( {{P_1} - {P_3}} \right)\left(- {r_1^D} r_2 + {r_1}{r_2^D} \right){\beta ^2},\\ B(r_1,r_2,P_1,P_3)=& \left( {D - 3} \right)\left( {D - 1} \right){q^4}{r_1}{r_2}\left( {r_1^{7 - 3D}} - {r_2^{7 - 3D}} \right) \nonumber\\ & - 8\left( {D - 1} \right)\left( {3D - 7} \right){q^2}{r_1}{r_2}\left( {r_1^{3 - D}} - {r_2^{3 - D}} \right){\beta ^2}\nonumber\\ &+ 64\pi \left( {D - 3} \right)\left( {3D - 7} \right){P_1}\left( { - {r_1^D}{r_2} + {r_1}{r_2^D}} \right){\beta ^2} \nonumber\\ &+ 4k\left( {D - 3} \right)\left( {D - 2} \right)\left( {D - 1} \right)\left( {3D - 7} \right)\left( { - {r_1^{D - 2}}{r_2} + {r_1}{r_2^{D - 2}}} \right){\beta ^2} \end{align} \end{subequations} As a consistency check, one can take the limit $\beta \to \infty$, which yields the efficiency of the rectangular engine cycle for the $D$-dimensional Reissner-Nordstr\"{o}m AdS black hole as \begin{equation} \eta\,_{\rm{R.N. }}=\frac{C(r_1,r_2,P_1,P_3)}{D(r_1,r_2,P_1,P_3)}, \end{equation} with \addtocounter{equation}{-1} \begin{subequations} \begin{align} C(r_1,r_2,P_1,P_3)=& 16\pi \left( {D - 3} \right)\left( {{P_1} - {P_3}} \right){{\left( {{r_1}{r_2}} \right)}^{D + 2}}\left( {{r_1^D}{r_2} - {r_1}{r_2^D}} \right),\\ D(r_1,r_2,P_1,P_3)=& 2\left( {D - 1} \right){q^2}{r_1^3}{r_2^3}\left( { - {r_1^D}{r_2^3} + {r_1^3}{r_2^D}} \right)\nonumber\\ & + \left( {D - 3} \right){\left( {{r_1}{r_2}} \right)^D}\left( { - 16\pi {P_1}{{\left( {{r_1}{r_2}} \right)}^2}\left( { - {r_1^D}{r_2} + {r_1}{r_2^D}} \right) - k\left( {D - 2} \right)\left( {D - 1} \right)\left( { - {r_1^D}{r_2^3} + {r_1^3}{r_2^D}} \right)} \right). \end{align} \end{subequations} In what follows we study the efficiency of the ideal cycle in Fig. \ref{carnot} for the logarithmic $U(1)$ AdS black hole using the exact efficiency formula in the high and low temperature domains. In order to examine the behaviour of the efficiency as a function of $\beta$, we need to specify which parameters of the cycle remain constant as $\beta$ changes. There are many possible choices; we adopt the following two schemes, as in \cite{Johnson2016a}. In the first scheme, the operating temperatures and pressures of the engine's ideal cycle ($T_1$, $T_2$, $P_1=P_2$ and $P_3=P_4$) in Fig. \ref{carnot} at points $1$ and $2$ are fixed. This is equivalent to specifying the absorbed heat, Eq. (\ref{heat}), along the isobaric process $1\rightarrow2$ in Fig. \ref{carnot}. In the second scheme, the volumes and temperatures of the ideal cycle ($V_2$, $V_4$, $T_2\equiv T_H$ and $T_4\equiv T_C$) in Fig. \ref{carnot} at points $2$ and $4$ are specified and held fixed. This corresponds to setting up an engine with prescribed initial and final volumes and temperatures.
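As an aside, the mass-based formula (\ref{EEF}) is easy to evaluate numerically. A minimal Python sketch for the rectangular cycle, using the weak-coupling mass (\ref{massinf}) in $D=4$ with $k=1$ (cycle parameters are illustrative):
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def M_large_beta(rp, P, q=0.1, beta=10.0, D=4, k=1):
    """Enthalpy in the weak-coupling expansion, Eq. (massinf)."""
    Sigma = 2.0 * np.pi**((D - 1) / 2) / gamma((D - 1) / 2)
    return Sigma * (D - 2) / (16 * np.pi) * (
        k * rp**(D - 3)
        + 16 * np.pi * P * rp**(D - 1) / ((D - 1) * (D - 2))
        + 2 * q**2 * rp**(3 - D) / ((D - 3) * (D - 2))
        - q**4 * rp**(7 - 3 * D) / (4 * (D - 2) * (3 * D - 7) * beta**2))

# rectangular cycle of Fig. (carnot): corners 1,2 on the top isobar P1,
# corners 3,4 on the bottom isobar P3; r_1 = r_4 and r_2 = r_3
r1, r2, P1, P3 = 1.0, 2.0, 1.0, 0.5
M1, M2 = M_large_beta(r1, P1), M_large_beta(r2, P1)
M3, M4 = M_large_beta(r2, P3), M_large_beta(r1, P3)
print(1.0 - (M3 - M4) / (M2 - M1))   # exact efficiency formula, Eq. (EEF)
\end{verbatim}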
In summary: \begin{itemize} \item \textit{Scheme 1}:\\ $(T_{1},T_{2})$ and $(P_{1},P_{3})$ $\to$ fixed, \quad $T_{4}$ $\to$ found from the equation of state \item \textit{Scheme 2}:\\ $(T_{2}, T_{4})$ and $(V_{2},V_{4})$ $\to$ fixed, \quad $(P_1, P_3)$ $\to$ found from the equation of state \end{itemize} \textbf{High temperature domain:} As mentioned before, it is sensible to compare the efficiency of the desired engine cycle with the Carnot efficiency $\eta_C$, as well as with the efficiency in the Einstein-Maxwell limit $\eta_0$; therefore we plot $\eta/\eta_C$ and $\eta/\eta_0$ as functions of $\beta$. It should be pointed out that, in scheme 1, $T_H\equiv T_2$ is specified but $T_C\equiv T_4$ is found from the equation of state and hence changes with $\beta$. Fig. \ref{scheme1_fig1} shows $T_C$ and $\eta_C$ versus log$_{10}(\beta)$ for the range $0.01<\beta<100$ in this scheme. As $\beta$ goes to infinity, which is the Maxwell limit (the limit in which our solutions approach the Reissner-Nordstr\"{o}m AdS black holes), $T_C$ decreases and $\eta_C$ increases. However, the rate of change with respect to $\beta$ is very slow for $\beta>0.1$. This feature is also seen for $\eta$, $\eta/\eta_C$ and $\eta/\eta_0$ in Fig. \ref{scheme1_fig2}. Note that for the set of parameters in Figs. \ref{scheme1_fig1} and \ref{scheme1_fig2}, the black hole operates in the high temperature domain. We have also checked these figures for $D=5$ and $D=6$, finding the same behaviour, and one would expect the same behaviour for higher dimensional AdS spacetimes too. \begin{figure}[!htbp] \begin{center} \epsfxsize=9 cm \includegraphics[width=8 cm]{EEF_d4_scheme1_Tc} \hskip 1 cm \epsfxsize=9 cm \includegraphics[width=8 cm]{EEF_d4_scheme1_etac} \caption{$T_C$ and $\eta_C$ versus log$_{10}(\beta)$ in scheme 1 for $D=4$, $T_1=20$, $T_2=30$, $P_1=1$, $P_4=0.5$ and $q=0.1$. } \label{scheme1_fig1} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme1_eta} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme1_etapetac} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme1_etapeta0} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ in scheme 1 for $D=4$, $T_1=20$, $T_2=30$, $P_1=1$, $P_4=0.5$ and $q=0.1$.} \label{scheme1_fig2} \end{center} \end{figure} While the Carnot efficiency $\eta_C$ is independent of $\beta$ in scheme 2, the quantities $P_1$, $P_3$ and $T_1$, which are evaluated from the equation of state, vary with $\beta$. Fig. \ref{scheme2_fig} shows that, contrary to scheme 1, in this scheme $\eta$ decreases with increasing $\beta$. The same qualitative behaviour as in scheme 1 is seen for $\eta/\eta_C$ and $\eta/\eta_0$ in this scheme. \\ \begin{figure}[!htbp] \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme2_eta} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme2_etapetac} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme2_etapeta0} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ in scheme 2 for $D=4$, $T_2=30$, $T_4=10$, $V_2=10000$, $V_4=5000$ and $q=0.1$.
} \label{scheme2_fig} \end{center} \end{figure} \textbf{Low temperature domain:} If we let the black hole engine work in the low temperature domain (but still above the critical temperature), i.e., decreasing both $T_C$ and $T_H$, and examine the efficiency of the ideal cycle for the schemes introduced above, we find that $\eta$, $\eta/\eta_C$ and $\eta/\eta_0$ behave in the same way as in the high temperature domain (see Figs. \ref{scheme1_LT} and \ref{scheme2_LT}). Comparing $\eta$ in Figs. \ref{scheme1_fig2} and \ref{scheme1_LT}, it is clear that for the set of parameters we have chosen (keeping the pressures constant while decreasing the temperatures) the efficiency decreases in scheme 1. However, in scheme 2 the efficiency increases upon decreasing the temperatures while keeping the volumes constant (compare Figs. \ref{scheme2_fig} and \ref{scheme2_LT}). \begin{figure}[!htbp] \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme1_LT_eta} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme1_LT_etapetac} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme1_LT_etapeta0} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ in scheme 1 for $D=4$, $T_1=3$, $T_2=5$, $P_1=1$, $P_4=0.5$ and $q=0.1$. Here the critical temperature $T_{cr}$ is around $0.44$. } \label{scheme1_LT} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme2_LT_eta} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme2_LT_etapetac} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{EEF_d4_scheme2_LT_etapeta0} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ in scheme 2 for $D=4$, $T_2=5$, $T_4=1.5$, $V_2=10000$, $V_4=5000$ and $q=0.1$. Here the critical temperature $T_{cr}$ is around $0.44$. } \label{scheme2_LT} \end{center} \end{figure} \subsection{The engine's efficiency in weak and strong coupling regimes}\label{secht} There are four cases of physical interest that can be approximated analytically: high and low temperature regions with weak and strong couplings. In order to analyze these regions we need to distinguish between different approximations. Although the expansions of the electric field (\ref{electric field}), the mass (\ref{MADM}), the temperature (\ref{temp}) and the vacuum polarization (\ref{vacuum polarization}) around large values of $r_+$ and around $\beta \to \infty$ take the same form, their meanings are different. Expansion around large values of $r_+$ means we are far from the charged source (here, the black hole's singularity), and we can freely vary the non-linearity parameter $\beta$ over both the weak and strong coupling regimes. By contrast, expansion around the weak coupling limit ($\beta \to \infty$) restricts $\beta$ to the weak coupling regime, no matter how far we are from the source. This distinction is vital for studying the efficiency of holographic heat engines in this section. For now, we analyze these domains for engines with spherical black holes as the working substance. The results will be generalized to the holographic heat engines that are planar or hyperbolic in topology in Sect.
\ref{planar-hyperbolic engines}.\\ \par \textbf{High temperature with weak couplings ($T_{\rm{high}}, \beta_{\rm{large}}$):} Since the enthalpy ($H$) is a function of $S$, $P$ and $Q$ (it is not an explicit function of $T$), we cannot simply use the exact efficiency formula to obtain the high temperature limit. However, due to the asymptotic behaviour of AdS black holes, it can be approximated analytically. The reason is that, according to Eqs. (\ref{temp}) and (\ref{eos_HT}), AdS black holes at fixed pressure behave in the high temperature limit as $T \propto {r_+}$ (corresponding to the large black hole region), from which one concludes that the event horizon radius must be large. Using this fact, the expansion of $\eta$ around large values of $r_+$ yields the analytic efficiency formula in the high temperature limit. This expansion works for both weak and strong couplings. The series expansion for the efficiency about $r_+=\infty$ can be obtained either by substituting $M|_{\rm{large}\,\beta}$ (\ref{massinf}) into the exact efficiency formula (\ref{EEF}), or by making use of Eqs. (\ref{work}) and (\ref{heat}), both expanded in the high temperature limit, or equivalently about $r_+=\infty$. The latter produces more accurate estimates when compared to the results obtained from the exact efficiency formula. Accordingly, we expand Eq. (\ref{temp}) about $r_+=\infty$ and solve it to find $r_+$ as a function of $T$, which for $D=4$ yields \begin{eqnarray}\label{rp4} r_+&=&\frac{T}{{2P}} - \frac{1}{{4\pi T}} + \frac{{P\left( {8\pi P{q^2} - 1} \right)}}{{8{\pi ^2}{T^3}}} + \frac{{{P^2}\left( {16\pi P{q^2} - 1} \right)}}{{8{\pi ^3}{T^5}}} - \frac{{{P^3}\left( {5{\beta ^2} - 120\pi P{q^2}{\beta ^2} + 192{\pi ^2}{P^2}{q^4}{\beta ^2} + 64{\pi ^3}{P^3}{q^4}} \right)}}{{32{\pi ^4}{\beta ^2}{T^7}}}\nonumber\\ && - \frac{{{P^4}\left( {7{\beta ^2} - 224\pi P{q^2}{\beta ^2} + 896{\pi ^2}{P^2}{q^4}{\beta ^2} + 256{\pi ^3}{P^3}{q^4}} \right)}}{{32{\pi ^5}{\beta ^2}{T^9}}} + O{\left[ {\frac{1}{T}} \right]^{11}}. \end{eqnarray} In $D=5$, it is given by \begin{eqnarray}\label{rp5} r_+&=&\frac{{3T}}{{4P}} - \frac{1}{{2\pi T}} - \frac{P}{{3{\pi ^2}{T^3}}} + \frac{{4{P^2}\left( {32{\pi ^2}{P^2}{q^2} - 27} \right)}}{{243{\pi ^3}{T^5}}} + \frac{{4{P^3}\left( {128{\pi ^2}{P^2}{q^2} - 45} \right)}}{{243{\pi ^4}{T^7}}} + \frac{{112{P^4}\left( {128{\pi ^2}{P^2}{q^2} - 27} \right)}}{{2187{\pi ^5}{T^9}}}\nonumber\\ && - \frac{{32{P^5}\left( {15309{\beta ^2} - 103680{\pi ^2}{P^2}{q^2}{\beta ^2} + 10240{\pi ^4}{P^4}{q^4}{\beta ^2} + 2048{\pi ^5}{P^5}{q^4}} \right)}}{{177147{\pi ^6}{\beta ^2}{T^{11}}}} + O{\left[ {\frac{1}{T}} \right]^{13}}, \end{eqnarray} and in $D=6$ \begin{eqnarray}\label{rp6} r_+&=& \frac{T}{P} - \frac{3}{{4\pi T}} - \frac{{9P}}{{16{\pi ^2}{T^3}}} - \frac{{27{P^2}}}{{32{\pi ^3}{T^5}}} + \frac{{{P^3}\left( {32{\pi ^3}{P^3}{q^2} - 405} \right)}}{{256{\pi ^4}{T^7}}} + \frac{{3{P^4}\left( {128{\pi ^3}{P^3}{q^2} - 567} \right)}}{{512{\pi ^5}{T^9}}}+ \frac{{81{P^5}\left( {80{\pi ^3}{P^3}{q^2} - 189} \right)}}{{2048{\pi ^6}{T^{11}}}}\nonumber\\ && + \frac{{297{P^6}\left( {160{\pi ^3}{P^3}{q^2} - 243} \right)}}{{4096{\pi ^7}{T^{13}}}} - \frac{{{P^7}\left( {2814669{\beta ^2} - 2594592{\pi ^3}{P^3}{q^2}{\beta ^2} + 7168{\pi ^6}{P^6}{q^4}{\beta ^2} + 1024{\pi ^7}{P^7}{q^4}} \right)}}{{65536{\pi ^8}{\beta ^2}{T^{15}}}} + O{\left[ {\frac{1}{T}} \right]^{17}}.\nonumber\\ \end{eqnarray} Using Eqs.
(\ref{rp4})-(\ref{rp6}), the thermodynamic volume of the $D$ dimensional spherical black hole can easily be found using the relation $V=\frac{\Sigma_{D-2}}{D-1}r_+^{D-1}$ and, when inserted in Eq. (\ref{work}), gives the work in the high temperature limit. Substituting Eqs. (\ref{rp4})-(\ref{rp6}) in Eq. (\ref{specific}) and integrating with respect to $T$, the absorbed heat in $D=4$ is given by \begin{eqnarray} {Q_H} &=& \frac{{\pi {T^3}}}{{6{P^2}}} + \frac{{16\pi P{q^2} - 1}}{{8\pi T}} + \frac{{P\left( {24\pi P{q^2} - 1} \right)}}{{12{\pi ^2}{T^3}}} - \frac{{3{P^2}\left( {5{\beta ^2} - 160\pi P{q^2}{\beta ^2} + 320{\pi ^2}{P^2}{q^4}{\beta ^2} + 128{\pi ^3}{P^3}{q^4}} \right)}}{{160{\pi ^3}{\beta ^2}{T^5}}}\nonumber\\ &&- \frac{{{P^3}\left( {{\beta ^2} - 40\pi P{q^2}{\beta ^2} + 192{\pi ^2}{P^2}{q^4}{\beta ^2} + 64{\pi ^3}{P^3}{q^4}} \right)}}{{8{\pi ^4}{\beta ^2}{T^7}}} + \left. {O{{\left[ {\frac{1}{T}} \right]}^9}} \right|_{_{{T_1}}}^{{T_2}}. \end{eqnarray} In $D=5$, we have \begin{eqnarray} {Q_H} &=& \frac{{81{\pi ^2}{T^4}}}{{512{P^3}}} - \frac{{27\pi {T^2}}}{{128{P^2}}} + \frac{{64{\pi ^2}{P^2}{q^2} - 9}}{{96\pi {T^2}}} + \frac{{5P\left( {256{\pi ^2}{P^2}{q^2} - 27} \right)}}{{864{\pi ^2}{T^4}}} + \frac{{7{P^2}\left( {320{\pi ^2}{P^2}{q^2} - 27} \right)}}{{648{\pi ^3}{T^6}}}\nonumber\\ && - \frac{{{P^3}\left( {1701{\beta ^2} - 24192{\pi ^2}{P^2}{q^2}{\beta ^2} + 4096{\pi ^4}{P^4}{q^4}{\beta ^2} + 1024{\pi ^5}{P^5}{q^4}} \right)}}{{2916{\pi ^4}{\beta ^2}{T^8}}} + \left. {O{{\left[ {\frac{1}{T}} \right]}^{10}}} \right|_{_{{T_1}}}^{{T_2}}, \end{eqnarray} and in $D=6$ \begin{eqnarray} {Q_H} &=& \frac{{8{\pi ^2}{T^5}}}{{15{P^4}}} - \frac{{4\pi {T^3}}}{{3{P^3}}} + \frac{{128{P^3}{\pi ^3}{q^2} - 81}}{{288{\pi ^2}{T^3}}} + \frac{{3P\left( {160{\pi ^3}{P^3}{q^2} - 81} \right)}}{{320{\pi ^3}{T^5}}} + \frac{{9{P^2}\left( {64{\pi ^3}{P^3}{q^2} - 27} \right)}}{{128{\pi ^4}{T^7}}}\nonumber\\ && + \frac{{15{P^3}\left( {224{\pi ^3}{P^3}{q^2} - 81} \right)}}{{256{\pi ^5}{T^9}}} - \frac{{{P^4}\left( {1082565{\beta ^2} - 3421440{\pi ^3}{P^3}{q^2}{\beta ^2} + 22528{\pi ^6}{P^6}{q^4}{\beta ^2} + 4096{\pi ^7}{P^7}{q^4}} \right)}}{{90112{\pi ^6}{T^{11}}{\beta ^2}}}\nonumber\\ &&+ \left. {O{{\left[ {\frac{1}{T}} \right]}^{13}}} \right|_{_{{T_1}}}^{{T_2}}. \end{eqnarray} With the work and absorbed heat relations in hand, the efficiency of the logarithmic $U(1)$ AdS black hole engine in the high temperature limit in $D$ dimensional spacetime is finally obtained from $\eta=W/Q_H$. Looking at the above expansions, it is seen that the non-linearity parameter $\beta$ first appears in the work and heat relations at order $T^{-5}$ for $D=4$, at order $T^{-8}$ for $D=5$ and at order $T^{-11}$ for $D=6$. Hence, the contributions of the $\beta$-dependent terms are completely negligible at weak coupling. However, at strong coupling ($\beta \rightarrow 0$), the effect of these terms is appreciable. If $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ are plotted as functions of ${\rm{log}}_{10}(\beta)$ in the high temperature domain with weak couplings (in the range $10<\beta<1000$), diagrams similar to those of Fig. \ref{scheme1_fig2} are obtained (see Fig. \ref{HT_WC}): $\eta$ and $\eta/\eta_{0}$ are strictly increasing, saturating functions of $\beta$, while $\eta/\eta_{C}$ is strictly decreasing. By this we mean that as $\beta$ runs from the strong to the weak coupling region, the efficiency behaves smoothly and monotonically.
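The inversion in Eq. (\ref{rp4}) can be checked against direct numerical root-finding of Eq. (\ref{temp}); at high temperature the truncated series and the exact root agree closely. A minimal Python sketch for $D=4$, $k=1$ (illustrative values; the bracket selects the large black hole branch):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def T_of_rp(rp, P=1.0, q=0.1, beta=10.0):
    """D=4, k=1 temperature, Eq. (temp), with Lambda = -8*pi*P."""
    s = np.sqrt(1.0 + q**2 / (beta**2 * rp**4))
    nl = 8.0 * beta**2 * rp**2 * (np.log((1.0 + s) / 2.0) + 1.0 - s)
    return (2.0 + 16.0 * np.pi * P * rp**2 + nl) / (8.0 * np.pi * rp)

def rp_series(T, P=1.0, q=0.1):
    """First three terms of the inversion, Eq. (rp4)."""
    return T / (2 * P) - 1 / (4 * np.pi * T) \
           + P * (8 * np.pi * P * q**2 - 1) / (8 * np.pi**2 * T**3)

T_high = 30.0
rp_exact = brentq(lambda r: T_of_rp(r) - T_high, 0.5, 1e3)
print(rp_exact, rp_series(T_high))   # close agreement at high T
\end{verbatim}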
Plotting $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ in scheme 2 in the high temperature region with weak couplings also reveals behaviour similar to that of Fig. \ref{scheme2_fig}. \begin{figure}[!htbp] \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{HT_WC_eta} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{HT_WC_etapetac} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{HT_WC_etapeta0} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ in scheme 1 in the high temperature domain with weak couplings for $D=4$, $T_1=20$, $T_2=30$, $P_1=1$, $P_4=0.5$ and $q=0.1$.} \label{HT_WC} \end{center} \end{figure} \textbf{High temperature with strong couplings ($T_{\rm{high}}, \beta_{\rm{small}} $):} Here we apply the same method used for evaluating the efficiency in the high temperature limit with weak couplings ($\beta \rightarrow \infty$), but this time at strong coupling ($\beta \rightarrow 0$). The results for $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ are plotted over the range $10^{-3}<\beta<10^{-1}$ in Fig. \ref{HT_SC} for scheme 1. The qualitative similarity between these diagrams and those in Figs. \ref{scheme1_fig2} and \ref{HT_WC} emphasizes the smooth behaviour of the efficiency over the range $0<\beta<\infty$. \begin{figure}[!htbp] \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{HT_SC_eta} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{HT_SC_etapetac} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{HT_SC_etapeta0} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ in scheme 1 in the high temperature domain with strong couplings for $D=4$, $T_1=20$, $T_2=30$, $P_1=1$, $P_4=0.5$ and $q=0.1$. } \label{HT_SC} \end{center} \end{figure} \textbf{Low temperature with weak couplings ($T_{\rm{low}}, \beta_{\rm{large}} $):} In the low temperature limit $T$ is no longer proportional to $r_{+}$, so expanding the efficiency about $T=0$ is not possible. Instead, we perform a series expansion about $\beta=\infty$ in the low temperature domain. The relation for $\eta|_{\rm{large}\,\beta}$ was already derived in Eq. (\ref{effbetainf}). Using this equation, $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ are sketched for the low temperature domain (still above the critical temperature) in Fig. \ref{LT_WC} for scheme 1. As expected, qualitative agreement with Fig. \ref{scheme1_LT} is seen. \begin{figure}[!htbp] \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{LT_WC_eta} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{LT_WC_etapetac} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{LT_WC_etapeta0} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ in scheme 1 in the low temperature domain with weak couplings for $D=4$, $T_1=3$, $T_2=5$, $P_1=1$, $P_4=0.5$ and $q=0.1$.
} \label{LT_WC} \end{center} \end{figure} \textbf{Low temperature with strong couplings ($T_{\rm{low}}, \beta_{\rm{small}} $):} Expanding the mass about $\beta=0$, we obtain \begin{eqnarray} M|_{\rm{small}\, \beta}&=&\frac{{{\Sigma _{D - 2}}}}{{16\pi {{\left( {D - 1} \right)}^2}{r_+^3}}} \times\nonumber \\ &&\Biggr\{- 8{\left( {D - 1} \right)^2}q{r_+^4}\beta - \frac{2}{\sqrt \pi } \left( {D - 1} \right)\,\Gamma \left( {\frac{1}{{2D - 4}}} \right)\Gamma \left( {\frac{{1 - D}}{{2D - 4}}} \right)q\beta {{\left( {\frac{q}{\beta }} \right)}^{\frac{1}{{D - 2}}}}r_+^3 \nonumber\\ && + {r_+^D}\left( {\left( {D - 1} \right)\left( {2 + D\left( {D - 3} \right) + 16\pi P{r_+^2}} \right) + 8\left( {2D - 3} \right){\beta ^2}{r_+^2} - 8\left( {D - 1} \right){\beta ^2}{r_+^2}\ln \left( {\frac{{2{r_+^{D - 2}}\beta }}{q}} \right)} \right)\Biggr\}. \end{eqnarray} Substituting $M|_{\rm{small}\, \beta}$ into the exact efficiency formula (\ref{EEF}), the efficiency of the ideal cycle for a logarithmic $U(1)$ AdS black hole becomes \begin{eqnarray}\label{eff0} \eta|_{\rm{small}\, \beta}=\frac{G(r_1,r_2,P_1,P_3)}{H(r_1,r_2,P_1,P_3)}, \end{eqnarray} where \addtocounter{equation}{-1} \begin{subequations} \begin{align} G({r_1},{r_2},{P_1},{P_3}) &= 16{\pi ^{3/2}}\left( {D - 1} \right)\left( {{P_1} - {P_3}} \right)(r_1 r_2)^2\left( { - {r_1^D}{r_2} + {r_1}{r_2^D}} \right),\notag\\ H(r_1,r_2,P_1,P_3)&= \sqrt \pi \Biggr\{ r_1^3r_2^D\left( {\left( {D - 1} \right)\left( {2 + D\left( {D - 3} \right) + 16\pi {P_1}r_2^2} \right) + 8\left( {2D - 3} \right){\beta ^2}r_2^2} \right)\notag\\ &+ 8\left( {D - 1} \right){\beta ^2}{\left( {{r_1}{r_2}} \right)^2}\left( {r_1^D{r_2}\ln \left( {\frac{{2r_1^{D - 2}\beta }}{q}} \right) - {r_1}r_2^D\ln \left( {\frac{{2r_2^{D - 2}\beta }}{q}} \right)} \right)\notag\\ &+ r_2^3\left( {8{{\left( {D - 1} \right)}^2}q\beta r_1^3\left( {{r_1} - {r_2}} \right) - r_1^D\left( {\left( {D - 1} \right)\left( {2 + D\left( {D - 3} \right) + 16\pi {P_1}r_1^2} \right) + 8\left( {2D - 3} \right){\beta ^2}r_1^2} \right)} \right)\Biggr\}. \end{align} \end{subequations} Notice that the same result is obtained for the expansion around $r_+ \to 0$, i.e., approaching the point charge, regardless of the strength of the non-linearity coupling $\beta$. $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ are displayed over the range $10^{-6}<\beta<10^{-1}$ for scheme 1 in Fig. \ref{LT_SC}. $\eta$ and $\eta/\eta_{0}$ are increasing functions, as they were in the other regions, whereas $\eta/\eta_{C}$, unlike in Figs. \ref{scheme1_fig2}-\ref{LT_WC}, first increases and then decreases with respect to $\beta$. In the interval in which $\eta/\eta_{C}$ is increasing, $\eta$ and $\eta_{C}$ are both decreasing, but the rate of decrease of $\eta_{C}$ is greater than that of $\eta$ and, as a result, $\eta/\eta_{C}$ increases. \begin{figure}[!htbp] \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{LT_SC_eta} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{LT_SC_etapetac} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{LT_SC_etapeta0} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ in scheme 1 in the low temperature domain with strong couplings for $D=4$, $T_1=3$, $T_2=5$, $P_1=1$, $P_4=0.5$ and $q=0.1$. } \label{LT_SC} \end{center} \end{figure} \par From our physical intuition, we anticipate that in the high $U(1)$ charge limit, the metric function, mass, temperature and efficiency should behave in the same way as in the strong coupling limit.
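A quick way to see this $q \leftrightarrow 1/\beta$ correspondence in the matter sector: under $q \to \lambda q$, $\beta \to \lambda\beta$ the electric field (\ref{electric field}) simply rescales by $\lambda$, so only the ratio $q/\beta$ is nontrivial. A minimal Python sketch (illustrative values):
\begin{verbatim}
import numpy as np

def E(r, q, beta, D=4):
    """Electrostatic field, Eq. (electric field)."""
    x = q / (beta * r**(D - 2))
    return 2.0 * (q / r**(D - 2)) / (1.0 + np.sqrt(1.0 + x**2))

r = np.linspace(0.5, 5.0, 20)
# high charge at fixed beta probes the same regime as small beta at
# fixed charge, up to an overall rescaling of the field:
print(np.allclose(E(r, 10.0, 1.0), 10.0 * E(r, 1.0, 0.1)))   # True
\end{verbatim}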
In the low $U(1)$ charge limit, likewise, the same results are obtained as in the weak coupling limit. This can easily be proved by expanding these quantities about $q=\infty$ (or equivalently $\beta=0$) and about $q=0$ (or equivalently $\beta=\infty$). These features can be seen in Figs. \ref{scheme1_q} and \ref{scheme2_q}, where $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ are plotted versus $q$ in scheme 1 and scheme 2. As $q$ goes to zero, $\eta$ approaches $\eta_0$ in both schemes. \begin{figure} \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{eta_q_scheme1} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{etapetac_q_scheme1} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{etapeta0_q_scheme1} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus $q$ in scheme 1 for $D=4$, $T_1=20$, $T_2=30$, $P_1=1$, $P_4=0.5$ and $\beta=10$. } \label{scheme1_q} \end{center} \end{figure} \begin{figure} \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{eta_q_scheme2} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{etapetac_q_scheme2} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{etapeta0_q_scheme2} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus $q$ in scheme 2 for $D=4$, $V_2=10000$, $V_4=5000$, $T_2=30$, $T_4=10$ and $\beta=10$. } \label{scheme2_q} \end{center} \end{figure} \subsection{Efficiency for hyperbolic and planar black hole heat engines} \label{planar-hyperbolic engines} Here, we concisely discuss the cases of planar and hyperbolic black holes as holographic heat engines. Clearly, the mass (\ref{MADM}), and therefore the efficiency of the black hole heat engine, depends on the topology of the event horizon. Using the exact efficiency formula (\ref{EEF}), we plot $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ for black hole heat engines that are hyperbolic or planar in topology in Figs. \ref{hyperbolic_scheme1}-\ref{planar_scheme2}, with the same parameters we used in Fig. \ref{scheme1_fig2} for scheme 1 and in Fig. \ref{scheme2_fig} for scheme 2. In scheme 1, similar to the spherical case, for both planar and hyperbolic black hole heat engines $\eta$ and $\eta/\eta_{0}$ are increasing functions and $\eta/\eta_{C}$ is a decreasing function (see Figs. \ref{hyperbolic_scheme1} and \ref{planar_scheme1}), and their behaviours are qualitatively the same as in Fig. \ref{scheme1_fig2}. Comparing Figs. \ref{scheme1_fig2}, \ref{hyperbolic_scheme1} and \ref{planar_scheme1} with each other, it is found that in this scheme the hyperbolic black hole heat engines are the most efficient, while the spherical black hole heat engines work the least efficiently. Furthermore, in scheme 2, the same qualitative behaviour for $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ as in the spherical black hole heat engines is seen for planar and hyperbolic heat engines. By comparing Figs. \ref{scheme2_fig}, \ref{hyperbolic_scheme2} and \ref{planar_scheme2}, it is seen that in scheme 2 the order is reversed, i.e., black hole heat engines with spherical symmetry have higher efficiencies than planar and hyperbolic ones. \begin{figure} \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{hyperbolic_eta_scheme1} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{hyperbolic_etapetac_scheme1} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{hyperbolic_etapeta0_scheme1} \caption{$\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ for hyperbolic black hole heat engines in scheme 1.
Here we take $D=4$, $T_1=20$, $T_2=30$, $P_1=1$, $P_4=0.5$ and $q=0.1$ as in Fig. \ref{scheme1_fig2}. } \label{hyperbolic_scheme1} \end{center} \end{figure} \begin{figure} \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{planar_eta_scheme1} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{planar_etapetac_scheme1} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{planar_etapeta0_scheme1} \caption{$\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ for planar black hole heat engines in scheme 1. Here we take $D=4$, $T_1=20$, $T_2=30$, $P_1=1$, $P_4=0.5$ and $q=0.1$ as in Fig. \ref{scheme1_fig2}. } \label{planar_scheme1} \end{center} \end{figure} \begin{figure} \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{hyperbolic_eta_scheme2} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{hyperbolic_etapetac_scheme2} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{hyperbolic_etapeta0_scheme2} \caption{ $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ for hyperbolic black hole heat engines in scheme 2. Here we take $D=4$, $T_2=30$, $T_4=10$, $V_2=10000$, $V_4=5000$ and $q=0.1$. } \label{hyperbolic_scheme2} \end{center} \end{figure} \begin{figure} \begin{center} \epsfxsize=9 cm \includegraphics[width=5.5 cm]{planar_eta_scheme2} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{planar_etapetac_scheme2} \hskip 0.5 cm \epsfxsize=9 cm \includegraphics[width=5.5 cm]{planar_etapeta0_scheme2} \caption{$\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ versus log$_{10}(\beta)$ for planar black hole heat engines in scheme 2. Here we take $D=4$, $T_2=30$, $T_4=10$, $V_2=10000$, $V_4=5000$ and $q=0.1$.} \label{planar_scheme2} \end{center} \end{figure} \section{Closing remarks} \label{Final} Logarithmic $U(1)$ gauge theory (proposed by Soleng \cite{NLBHSoleng1995}) has attracted a lot of attention due to its relation to string effective actions at low energies. In addition, this nonlinear theory of electrodynamics can represent the properties of both the Born-Infeld and the Euler-Heisenberg actions. We have briefly reviewed the geometric and thermodynamic properties of AdS black hole solutions coupled to logarithmic nonlinear electrodynamics. The non-linearity of the $U(1)$ electromagnetic field, controlled by the parameter $\beta$, affects the extended first law and the Smarr relation, which leads to new thermodynamic prospects. As highlighted in Sec. \ref{Thermodynamics}, the appearance of the new pair ``${\cal B} \beta$'' in the Smarr relation is characteristic of BI-type nonlinear electrodynamics and is absent in linear Maxwell or power Maxwell invariant theories of electrodynamics. We also found that the logarithmic vacuum polarization (${\cal B}$) at short and large distances from the charged source behaves as \begin{equation} {\cal B} \to \left\{ \begin{array}{l} \mathop {\lim }\limits_{{r_ + } \to \infty } {\cal B} = 0\,\,\,\,\,\,\,\,\,\,{\rm{for \, any \, fixed }} \,\beta \\ \mathop {\lim }\limits_{{r_ + } \to 0} {\cal B} = {\rm{finite}}\,\,\,\,\,\,\,\,\,\,{\rm{for \, any \, fixed }}\, \beta \end{array} \right.
\end{equation} while in the strong ($\beta \to 0$) and weak ($\beta \to \infty$) coupling regimes, this quantity behaves as \begin{equation} {\cal B} \to \left\{ \begin{array}{l} \mathop {\lim }\limits_{\beta \to 0} {\cal B} = \infty \,\,\,\,\,\,\,\,\,\,{\rm{for \, any \, fixed }} \, {r_ + }\\ \mathop {\lim }\limits_{\beta \to \infty } {\cal B} = {\rm{0}}\,\,\,\,\,\,\,\,\,\,{\rm{for \, any \, fixed }} \, {r_ + } \,. \end{array} \right. \end{equation} In both the expansion around $\beta = \infty$ and that around $r_+ = \infty$, the vacuum polarization has a similar form, indicating that the high temperature expansion ($T \propto r_+$ for large $r_+$) is equivalent to the weak coupling limit ($\beta \to \infty$), as proved analytically in Sect. \ref{Heat Engines}. We considered the logarithmic $U(1)$ AdS black hole as the working substance of a holographic heat engine which undergoes a rectangular cyclic process consisting of two isobars and two isochores/adiabats. Using the exact efficiency formula, we studied the efficiency of the spherical black hole heat engine as a function of $\beta$ in two schemes. These schemes differ by the choice of thermodynamic cycle parameters which are held fixed as the nonlinearity parameter $\beta$ is varied. We saw that the efficiency changes by exceedingly small amounts with respect to $\beta$ for both strong and weak couplings, although the changes are more appreciable at strong coupling. In general, nonlinear corrections to Maxwell electrodynamics are tiny, as expected. As an example, light-by-light scattering $(\gamma\gamma\rightarrow \gamma\gamma)$, which is a very rare phenomenon, can be described classically within the context of nonlinear electrodynamics theories \cite{HeisenbergEuler1936,JacksonElectrodynamics,SchwartzQFT} and quantum mechanically by radiative corrections in QED \cite{SchwartzQFT,Klauber}. After a long search, evidence for it has recently been reported by the ATLAS Collaboration \cite{ATLAS}. \par In order to compare our results with the Carnot efficiency, the maximum efficiency attainable by a heat engine working between two temperatures, we plotted $\eta/\eta_{C}$ for both schemes. It was found that $\eta/\eta_{C}$ decreases with increasing $\beta$, meaning that the holographic black hole heat engine works more efficiently, relative to the maximum possible efficiency, in the strong coupling limit. In other words, the engine efficiency with respect to its maximum possible value increases when nonlinear effects of electrodynamics are taken into account; as a consistency check, this is in agreement with the results of \cite{Johnson2016a} and \cite{HeatEngines2017b}. Also, $\eta/\eta_{0}$ was displayed for the different schemes and, as expected, it approaches $1$ as $\beta$ goes to infinity. Exploring the low temperature domain, the same qualitative behaviour for $\eta/\eta_{C}$ and $\eta/\eta_{0}$ was seen. Furthermore, it was observed that in general the efficiency varies very slowly with $\beta$, especially for $\beta>0.1$. \par Then we investigated the limiting behaviour of the efficiency in the weak and strong coupling regimes and found analytic relations for the high/low temperature domains with weak/strong couplings. In the high temperature limit, we used the fact that $T \propto r_{+}$ and made series expansions of $Q_H$, $W$ and indeed $\eta$.
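To illustrate how such expansions are organized in practice, the following \texttt{sympy} sketch applies the exact efficiency formula for rectangular cycles, $\eta = 1-(M_3-M_4)/(M_2-M_1)$, to a toy mass function; the mass expression below is a schematic stand-in with a BI-type $1/\beta^{2}$ correction, not the actual logarithmic-$U(1)$ mass of Eq.~(\ref{MADM}).
\begin{verbatim}
import sympy as sp

rp, q, x = sp.symbols('r_+ q x', positive=True)  # x = 1/beta

# Exact efficiency formula for a rectangular cycle with corners 1..4,
# where M_i is the black hole mass evaluated at corner i:
def efficiency(M1, M2, M3, M4):
    return 1 - (M3 - M4) / (M2 - M1)

# Toy mass function (placeholder): Maxwell-like terms plus a
# BI-type correction suppressed by 1/beta^2 = x^2.
M = rp**3 + rp + q**2/rp - x**2 * q**4 / (20 * rp**5)

# Weak coupling limit: expand in x = 1/beta around x = 0.
print(sp.series(M, x, 0, 3))

# Low charge limit: expand in q around q = 0.
print(sp.series(M, q, 0, 3))
\end{verbatim}
In both expansions of this toy model, the leading terms are the Maxwell ones and the nonlinear correction only enters at subleading order, mirroring the limiting behaviour described above.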
The results show that for weak couplings ($\beta\rightarrow \infty$) in the high temperature domain, the change of the efficiency with $\beta$ is imperceptible, while it is appreciable in the strong coupling regime ($\beta\rightarrow 0$). The similar qualitative behaviour of $\eta$ in the weak and strong coupling regimes in the high temperature limit indicates that the efficiency is a smooth monotonic function of $\beta$ for $0<\beta<\infty$. In the low temperature limit with weak couplings, we made a series expansion around $\beta=\infty$ and evaluated it at low working temperatures, since expanding around $T=0$ is not possible. Again, $\eta$, $\eta/\eta_{C}$ and $\eta/\eta_{0}$ were explored in this domain and it was found that they behave in the same way as in the high temperature domain. It was seen that for strong couplings in the low temperature limit (with temperatures high enough to remain above the critical temperature), $\eta/\eta_{C}$ versus log$_{10}(\beta)$ first increases and then decreases. \par Moreover, we studied the planar and hyperbolic black holes in the presence of logarithmic nonlinear electrodynamics as holographic heat engines. Qualitatively, the same behaviour of the efficiency as in spherical holographic heat engines was seen in both schemes. Quantitatively, hyperbolic black hole heat engines are the most efficient in scheme 1, whereas in scheme 2 the spherical holographic heat engines work more efficiently. In summary, we observed \begin{itemize} \item \textit{in scheme 1}: \begin{equation} {\eta _{{\rm{spherical}}}} < {\eta _{{\rm{planar}}}} < {\eta _{{\rm{hyperbolic}}}} \end{equation} \end{itemize} \begin{itemize} \item \textit{in scheme 2}: \begin{equation} {\eta _{{\rm{hyperbolic}}}} < {\eta _{{\rm{planar}}}} < {\eta _{{\rm{spherical}}}} \end{equation} \end{itemize} These relations also hold for the ratio of the efficiency to the Carnot efficiency ($\eta/\eta_C$). Clearly, the results are scheme/parameter dependent. The reason is that the equation of state depends on $\beta$ and, as a result, the parameters of the cycle which are not held fixed depend on $\beta$. This leads to the opposite behaviour for the efficiency of black holes with different topologies in the two schemes. \par Finally, it should be noted that logarithmic $U(1)$ charged AdS black holes can be understood more naturally in the context of extended phase space thermodynamics. However, it is not yet clear what kinds of thermodynamic phase transitions are allowed for these black holes, and it would be interesting to investigate their critical phenomena. In addition, the Joule-Thomson expansion for these black hole solutions can be studied through isenthalpic and inversion curves in the $T-P$ plane, in order to explore the effect of nonlinear electromagnetic corrections on this process. \begin{acknowledgements} We gratefully thank the anonymous referee for enlightening comments and suggestions which substantially helped in improving the quality of the paper. S.Z. would like to express her sincere gratitude to A. Dehghani for useful discussions. S.Z. also appreciates the support of the University of Sistan and Baluchestan research council. \end{acknowledgements}
\section{Introduction} \label{sec:intro} In recent years, a number of studies presented direct or indirect detections of planets embedded in protoplanetary disks~\citep[e.g.,][]{Keppler_2018, Pinte_2018, Teague_2018}, indicating that planet formation is a fast process. In the core accretion paradigm, micron-sized particles that are inherited from the interstellar medium need to grow quickly to form a planetary core which can attract the surrounding gas before the disk has dissipated. The streaming instability~\citep[][]{Youdin_2005} is one of the currently favored mechanisms allowing a sufficiently rapid growth, from pebbles to planetesimals, in the disk phase. For this instability to develop, the dust-to-gas ratio needs to be high in the disk midplane, of the order of 1, and large pebble-sized particles need to be present. After this planetesimal formation stage, planetary embryos can continue to form the cores of giant planets by accreting remaining inwards-drifting pebbles \citep{Lambrechts_2012}. Various studies have shown that substructures, in particular rings and gaps, are ubiquitous in protoplanetary disks in the Class II phase~\citep{Long_2018, Huang_2018}, and already present in some young Class~I disks~\citep{ALMA_HLTau_2015, Segura-Cox_2020}. In many cases, they act as dust traps, and grain growth is associated with rings~\citep[e.g.,][]{Carrasco-Gonzalez_2019,Macias_2021, Sierra_2021}. On the other hand, the efficiency of dust vertical settling remains largely unconstrained. Similarly to radial drift~\citep{Weidenschilling_1977}, this mechanism results from a balance between the stellar gravity and the interaction of dust with gas. Large grains (e.g., millimeter sizes) are expected to settle most efficiently towards the midplane, while smaller grains (e.g., micron sizes) are predicted to remain well coupled to the gas and to be co-located with it. At the same time, because of radial drift, large grains are also predicted to drift towards the star. Currently, the vertical scale height of the gas in protoplanetary disks around T Tauri stars has been constrained in a number of studies to be about 10\,au at a radius of 100\,au~\citep[e.g.,][]{Burrows_1996, Watson_2007, Wolff_2017}. This was mainly done by modeling scattered light images, which probe small dust grains assumed to be well coupled to the gas. Some other studies also estimated the extent of the near-infrared scattering surface~\citep[e.g.,][]{Ginski_2016, Avenhaus_2018} or the height of gas emission layers~\citep[e.g.][]{Pinte_2018, Law_2021, Rich_2021}, using less model-dependent techniques. However, these latter techniques do not probe the pressure scale height of the disk but the scattering or emission surfaces, which may be several times higher than the physical scale height. On the other hand, only a few studies have estimated the scale height of millimeter dust particles, which are most affected by vertical settling. This is in part because the majority of observed protoplanetary disks are moderate-inclination systems, in which it is difficult to constrain the thin vertical extent of millimeter grains. For these systems, detailed modeling of gaps and rings is needed~\citep[e.g.,][]{Pinte_2016, Doi_Kataoka_2021}. On the other hand, disks seen edge-on offer the most favorable orientation to study their vertical extent. \citet{Villenave_2020} presented the first high angular resolution ($\sim$0.1\arcsec) millimeter survey of edge-on disks.
By comparing the vertical and radial extents of three systems with radiative transfer models, they showed that the scale height of the millimeter dust particles is of a few au at 100 au, significantly smaller than the typical gas scale height. These results indicate that efficient vertical settling is occurring in protoplanetary disks. Further studies are needed to increase the number of disks with known gas and millimeter dust vertical extents in order to better understand the efficiency of vertical settling. In this paper, we focus on SSTC2D J163131.2-242627 (hereafter Oph\,163131), a highly inclined protoplanetary disk located in the Ophiuchus star-forming region, at a distance of about $147\pm3$\,pc~\citep{Ortiz-Leon_2017, Ortiz-Leon_2018}. Oph\,163131\ was included in the survey of \citet{Villenave_2020}, and has recently been studied in two companion papers~\citep{Flores_2021, Wolff_2021}, which presented ALMA observations at an angular resolution of $\sim 0.2\arcsec$, as well as scattered light HST and Keck images. \citet{Wolff_2021} used radiative transfer to model both the 1.3\,mm ALMA image and the 0.8\,$\mu$m HST images with an extensive MCMC framework. They found that some degree of vertical settling is needed to reproduce both observations. On the other hand, \citet{Flores_2021} analyzed the ALMA $^{12}$CO and $^{13}$CO maps to characterize the 2D temperature structure of the disk. They obtained a dynamical mass estimate of $1.2\pm0.2~M_\odot$ from the ALMA observations, and characterized the spectral type of the source to be K4, using optical and near-infrared spectroscopy. In this work, we present new high angular resolution observations of Oph\,163131\ at 1.3\,mm (resolution of $\sim 0.02\arcsec$, or 3\,au). The new images reveal a number of rings that were not detected with previous millimeter observations. We derive a detailed radiative transfer model of the new millimeter observations to characterize the physical structure of the disk and add constraints on vertical settling. In Sect.~\ref{sec:observations}, we present the observations and data reduction. In Sect.~\ref{sec:results}, we present the main features seen in the images. Then, in Sect.~\ref{sec:model}, we describe the disk model and present the results. We discuss the implications of our results in Sect.~\ref{sec:discussion}. Finally, the conclusions are presented in Sect.~\ref{sec:concl}. \begin{figure*} \centering \includegraphics[width = 0.49\textwidth]{figures/Oph163131_Image.pdf} \includegraphics[width = 0.49\textwidth]{figures/Oph163131_Image_labels.pdf} \caption{\emph{Left:} Continuum image of Oph\,163131. The beam size is indicated by an ellipse in the bottom left corner of the plot. North is up and east is left. \emph{Right:} Labeled zoomed-in continuum image of Oph\,163131, scaled so that the rings are more visible~(see Sect.~\ref{sec:model_ALMA} and Fig.~\ref{fig:model_1330_cut_zoom} for details on the identification of region~1). The central region is saturated (black) and pixels with less than 2$\sigma$ appear in white. The beam size is indicated by an ellipse in the bottom left corner of the plot. } \label{fig:cont} \label{fig:cont_label} \end{figure*} \section{Observation and data reduction} \label{sec:observations} We present new cycle~6 observations of Oph\,163131\ in band~6~(1.3\,mm, Project: 2018.1.00958.S, PI: Villenave).
The spectral setup was divided into three continuum spectral windows of rest frequency 229.0\,GHz, 243.5\,GHz, and 246.0\,GHz, and a fourth spectral window including the $^{12}$CO $J=2-1$ transition at 230.538\,GHz. The line spectral window has a native velocity resolution of 0.64\,km\,s$^{-1}$. The data were obtained on June 8, 2019, with baselines ranging from 80\,m to 16\,km. The total observing time on source was about 2~hours and 30~minutes. The raw data were calibrated using the CASA pipeline version~5.4.0~\citep{McMullin_2007}, and the rest of the data processing was done with CASA~5.6.1. To produce the final images, we combined our cycle~6 observations with lower angular resolution observations from cycle~4~(Project 2016.1.00771.S, PI: Duch\^ene) published previously \citep{Villenave_2020, Flores_2021, Wolff_2021}. We note that a shift of about 0.05\arcsec~($\sim$20\% of the cycle~4 beam) was present between the cycle~4 and cycle~6 observations, which is roughly consistent with the astrometric accuracy of ALMA~(see the ALMA technical handbook\footnote{https://almascience.eso.org/documents-and-tools/cycle8/alma-technical-handbook}). Thus, before producing the combined image, we aligned both observations using the \texttt{fixplanet} CASA task (with the option \texttt{fixuvw} set to True). To maximize the dynamic range of the final image, we performed phase self-calibration on the cycle~4 continuum observations, as mentioned in \citet{Flores_2021, Villenave_2020}. Due to the limited signal-to-noise ratio per beam, no self-calibration could be performed on the cycle~6 observations, nor on the combined cycle~4 and cycle~6 data. We then produced the continuum image using the CASA \texttt{tclean} task on the combined dataset, with Briggs weighting (robustness parameter of +0.5) and using the multiscale deconvolver, with scales of 0, 1, 5, and 10 times the beam FWHM. The resulting continuum beam size is 0.024\arcsec$\times$0.020\arcsec ($\sim3.5\times3$\,au) with a major axis at PA\,$=-81^\circ$, and the resulting continuum rms is 9.3~$\mu$Jy/beam. After applying the continuum self-calibration solutions to all spectral windows, we subtracted the continuum emission using the \texttt{uvcontsub} task in CASA. Then, we derived the emission line maps from the calibrated visibilities using the \texttt{tclean} function. We use a 0.7\,km\,s$^{-1}$ velocity resolution and Briggs weighting~(robustness parameter of +0.5) to create the line images. Additionally, to increase the signal-to-noise ratio of the $^{12}$CO observations, we applied a uv-taper while generating the images. The resulting combined $^{12}$CO beam is 0.081\arcsec$\times$0.072\arcsec\ with a major axis at PA\,$=89^\circ$, and the average rms of the line is 0.8~mJy/beam per channel. Finally, we generate moment~0 and 1 maps including only pixels above 3~times the rms. \section{Results} \label{sec:results} \subsection{Continuum emission} \label{sec:frank} We present the continuum observations of Oph\,163131\, in~Fig.~\ref{fig:cont}. The high angular resolution of the image reveals several rings, even though the disk is highly inclined. We highlight the structures in the right panel of Fig.~\ref{fig:cont}. From outside in, we see two rings (ring~2 and ring~1) separated by a clear gap (gap~1), some emission inside of the second ring (which we call region~2 in Fig.~\ref{fig:cont}), and an inner central emission (region~1 and central point source, see Sect.~\ref{sec:model_ALMA}).
We note that the existence of the outer gap was hinted at by a shoulder detected in previous observations~\citep{Villenave_2020}, but it was not resolved as it is in the current image. To characterize the substructures, we fit the deprojected visibilities using the \texttt{frank} package~\citep{Jennings_2020}. We exported the visibilities using a modified version of \texttt{export\_uvtable} from \citet{uvplot_mtazzari}. In our version, instead of using the average wavelength from all data, each baseline is normalized using the wavelength associated with each spectral window and channel. The visibilities from cycle~6 and~4 are independently obtained, shifted, and deprojected using an inclination and position angle of 84$^\circ$ and $49^\circ$, respectively (see Sect.~\ref{sec:model}). Then, they are concatenated. The radial profile obtained from \texttt{frank} depends on two hyper-parameters, $\alpha_{frank}$ and $w_{\rm smooth}$. The former controls the maximum baseline that is used in the fitting process (long baselines where the signal-to-noise ratio is small are not included), and the latter controls how closely the visibility model is allowed to fit all the visibility structures in the data (see \citealt{Jennings_2020} for more details). We ran 1000 fits using different combinations of these two hyper-parameters, varying between $1.2 < \alpha_{frank} < 1.4$ and $-2 <\log_{10} (w_{\rm smooth})< -1$. We chose these ranges to avoid artificial high frequency structure, and observe no significant difference between the profiles. In addition, we chose not to allow the resulting radial profiles from \texttt{frank} to have negative values. The average resulting radial profile and the average real part of the 1000 visibility fits are presented in Fig.~\ref{fig:frank}. In the top panel, we also include the cut along the major axis obtained from the image~(Fig.~\ref{fig:cont}). We find that the structures obtained from the visibility fit and the major axis cut are in very good agreement. They share the same location, and the relative brightness between ring~1 and ring~2 is similar in both cases. Gap~1 appears deeper in the major axis cut than in the visibility fit, likely because the visibility fit is averaged over all angles and the gaps are filled along the minor axis due to projection effects. Starting from the outside, we find that ring~2 peaks at $0.73\pm0.02$\arcsec, gap~1 is deepest at $0.63\pm0.02$\arcsec, and ring~1 peaks at $0.51\pm 0.02$\arcsec. Inward of $\sim$0.5\arcsec, there is a flat region whose surface brightness increases with radius, which we name region~2. In the \texttt{frank} profile, we also detect a small peak at $\sim$0.18\arcsec\ (inside of region~2) that is not visible in the major axis cut. Finally, we detect some emission from the inner regions of the disk. In the major axis cut, the central emission can be described by a central, marginally resolved Gaussian plus some extended emission (possibly a ring, see Sect.~\ref{sec:model_ALMA} and Fig.~\ref{fig:model_1330_cut_zoom}). The two components are not detectable in the visibility fit, likely because this is an azimuthally averaged profile and the inner ring is not resolved in the minor axis direction, even in the visibility domain. In addition, we find that the central flux of the \texttt{frank} fit is greater than that of the major axis cut. This is because, contrary to the major axis cut, the visibility fit is an azimuthal average, which is differently affected by the central beam smearing.
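As an illustration of this procedure, a minimal sketch of the \texttt{frank} fit is given below; the uv-table file name is a placeholder, the hyper-parameters are taken within the explored ranges, and the attribute names follow the \texttt{frank} quickstart (they may differ between versions).
\begin{verbatim}
import numpy as np
from frank.geometry import FixedGeometry
from frank.radial_fitters import FrankFitter

# Concatenated, deprojected cycle 4 + cycle 6 visibilities
# (placeholder file; u, v in units of lambda).
u, v, vis, weights = np.load('oph163131_uvtable.npy',
                             allow_pickle=True)

# Fixed geometry: inc = 84 deg, PA = 49 deg, no phase offset
geom = FixedGeometry(84.0, 49.0, dRA=0.0, dDec=0.0)

# Hyper-parameters within the ranges explored in the text:
# 1.2 < alpha_frank < 1.4 and -2 < log10(w_smooth) < -1
FF = FrankFitter(Rmax=1.5, N=300, geometry=geom,
                 alpha=1.3, weights_smooth=1e-2)
sol = FF.fit(u, v, vis, weights)

r, intensity = sol.r, sol.mean   # radial brightness profile
vis_model = sol.predict(u, v)    # model visibilities
\end{verbatim}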
\begin{figure} \centering \includegraphics[width = 0.48\textwidth, trim={1cm 0cm 0cm 0cm}, clip]{figures/Frank_majCut.pdf} \includegraphics[width = 0.48\textwidth, trim={1cm 0cm 0cm 1cm}, clip]{figures/visibility_frank.png} \caption{\emph{Top:} Mean radial intensity profile of the millimeter continuum emission obtained from the visibilities with \texttt{frank} (in blue), and major axis cut obtained from the image (in orange). The major axis cut is normalized to its peak and the \texttt{frank} profile is normalized so that the normalized intensity of ring~2 coincides in both profiles. The uncertainties on the visibility fit correspond to the typical 10\% flux calibration error, which dominates over the fit uncertainty, while the uncertainty on the major axis cut corresponds to $\pm3\sigma$. \emph{Bottom:} Observed averaged deprojected visibilities (black) and model obtained with \texttt{frank} (red). } \label{fig:frank} \end{figure} \begin{figure*} \centering \includegraphics[width = 0.48\textwidth, trim={1cm 0cm 0.5cm 0cm}, clip]{figures/Oph163131_mom0_continuum.pdf} \hspace{0.5cm} \includegraphics[width = 0.48\textwidth, trim={1cm 0cm 0.5cm 0cm}, clip ]{figures/Oph163131_mom1_continuum.pdf} \caption{\emph{Left:} $^{12}$CO continuum-subtracted moment 0 map. \emph{Right:} $^{12}$CO moment 1 map with continuum $5 \sigma$ and $10 \sigma$ contours in white. We illustrate the beam sizes by ellipses in the bottom left corner of each panel ($^{12}$CO beam in black -- both panels --, and continuum beam in white within the CO beam -- right panel only). } \label{fig:CO} \end{figure*} In Sect.~\ref{sec:model}, we present a radiative transfer modeling of the continuum millimeter emission. By reproducing the features characterized with the visibility fit, we aim to constrain the physical structure of the disk and of the different grain populations. In particular, we use the presence and shape of gap~1 to constrain the vertical extent of millimeter-sized particles in the disk. As both the \texttt{frank} profile and the major axis cut show similar substructures, we use the major axis profile for the rest of the analysis. \subsection{CO emission} \label{sec:CO_data} We display the $^{12}$CO moment 0 and 1 maps in Fig.~\ref{fig:CO}. These correspond to the integrated intensity and the velocity maps, respectively. Before discussing the shape of the maps, we first estimate the integrated flux of the $^{12}$CO emission, using the CASA \texttt{imstat} function on our ALMA moment~0 map. We measure the flux over a rectangle of 0.9\arcsec\ along the minor axis direction and 4.4\arcsec\ along the major axis, and obtain $F_{^{12}CO} = 6.1 \pm 0.6$\,Jy\,km\,s$^{-1}$. We report a 10\% uncertainty due to flux calibration errors. From the overlay of the $^{12}$CO moment 1 map and continuum emission in Fig.~\ref{fig:CO}, we see that the $^{12}$CO emission is significantly more extended than the millimeter continuum emission, both in the radial and vertical directions~\citep[see also][]{Flores_2021}. This is consistent with large grains being affected by vertical settling~\citep{Barriere-Fouchet_2005} and possibly radial drift~\citep{Weidenschilling_1977}. Thanks to the increased angular resolution, the $^{12}$CO moment 0 map (see left panel of Fig.~\ref{fig:CO}) shows significantly more detail than previously seen in \citet{Flores_2021}, though the same overall features are present.
The south-east (bottom) side appears brighter than the north-west side of the disk, which indicates that the south-east side is the one closest to us~\citep[see also][]{Wolff_2021}. In addition, the disk emission can be described by two distinct regions. First, the inner 150\,au ($\sim 1\arcsec$) of the disk describes an X shape, typical of very inclined systems. With the new observations, we clearly resolve the cold midplane with little CO emission, which separates the two bright sides of the disk. This inner region is colocated with the continuum millimeter emission. On the other hand, at larger radii ($R>200$\,au, $1.4\arcsec$), the disk appears flat, with a nearly constant brightness profile with radius. We find that the transition from the inner flaring X-region to the outer flat region starts at a radius of $\sim$150\,au, which roughly corresponds to the radius where the millimeter continuum emission ends and the disk's scattered light stops (see Sect.~\ref{sec:HST} and Fig.~\ref{fig:overlay_hst}). This change in the dust structure seems to also correspond to a change in the disk gas structure. Using lower angular resolution observations, \citet{Flores_2021} found that this outer region is associated with a uniform temperature, both radially and vertically. In Appendix~\ref{appdx:TRD}, we present an updated temperature map of the disk, resulting from the Tomographically Reconstructed Distribution method \citep[TRD,][]{Flores_2021, Dutrey_2017} applied to the new high angular resolution observations. We also recover the isothermal outer region. We note that \citet{Flores_2021} suggested that it is due to external UV illumination through a mostly optically thin region, and we refer the reader to their analysis for a more detailed interpretation. \subsection{Comparison with scattered light image} \label{sec:HST} In this section, we compare the high angular resolution millimeter continuum and $^{12}$CO observations with an HST scattered light image of Oph\,163131\ modeled in \citet{Wolff_2021}. The scattered light image is expected to trace the scattering surface of small micron-sized particles, well mixed with the gas. We present an overlay of the HST image at 0.6\,$\mu$m, the $^{12}$CO moment 0 emission, and millimeter continuum contours in Fig.~\ref{fig:overlay_hst}. As first shown in \citet{Stapelfeldt_2014}, the scattered light image of Oph\,163131\ presents the characteristic features of edge-on disks, with two parallel nebulosities separated by a dark lane. \citet{Villenave_2020} estimated that the disk size~(diameter) in scattered light is 2.5\arcsec.
The outer radius of the disk millimeter dust emission (at $\sim$150\,au) appears to coincide with the 15$\sigma_{gas}$ contours, which also mark the limit of the inner region of $^{12}$CO emission (characterized by the bright X pattern split by a cold midplane, see Sect.~\ref{sec:CO_data}). Further away, the scattered light is fainter and comes from a more diffuse region, which seems to extend as far out radially as the gas emission (out to approximately 400~au). The change of intensity in scattered light suggests that small grains are either not illuminated by the central star, possibly due to a decrease of the height of their scattering surface, or that they are depleted in the outer regions of the disk~\citep[e.g.,][]{Muro-Arena_2018}. Both the scattered light and the $^{12}$CO emission appear significantly more extended vertically than the millimeter continuum image, which is consistent with vertical settling occurring in the disk. The disk in scattered light seems to be as extended vertically as the $^{12}$CO emission, but with an additional fainter halo extending to significantly higher levels, which is due to PSF convolution (see for example the vertical halo also present in the PSF-convolved model of the HST image in Appendix~\ref{appdx:SED_HST}). \begin{figure} \centering \includegraphics[width = 0.46\textwidth, trim={0cm 0cm 0cm 0cm}, clip,]{figures/Spines.pdf} \caption{Position of the spines of the 0.6$\mu$m image (green lines) and of the $^{12}$CO moment 0 maps (blue lines), overlaid on top of either the moment 0 map (top panel) or the scattered light image (bottom panel).} \label{fig:spines} \end{figure} Interestingly, we also see significant differences between the morphologies seen in scattered light and $^{12}$CO emission. In Fig.~\ref{fig:spines}, we applied the method described in Appendix~D of \citet{Villenave_2020} to show the spine of each nebula (peak intensity) as seen in scattered light and $^{12}$CO emission. We find that, up to about 150 au, there is a clear increase of the $^{12}$CO apparent height with radius (blue lines). When deprojected, \citet{Flores_2021} found that this increase is roughly linear. We fitted the vertical extent of the $^{12}$CO emission surface as a function of radius with a linear regression and obtain $z/r\sim0.3$, for $r<150$~au. The variation of the CO emission surface with radius in Oph\,163131\ is relatively small compared to similar estimates in other protoplanetary disks~\citep[e.g.,][]{ Pinte_2018, Flores_2021, Law_2021}. On the other hand, the opposite is seen for the scattered light nebulae (green lines), which are extremely flat, in the sense that their apparent height does not vary significantly with projected distance. This difference in behavior is particularly clear in the fainter side of the disk, at the bottom of Fig.~\ref{fig:spines}, and is likely due to optical depth effects. Differences between the height of the dust scattering surface and CO emission surfaces have already been identified in other protoplanetary disks~\citep[e.g.,][]{Rich_2021}, and can be due to different optical depths in the different tracers. In the case of our study, we are comparing the integrated scattered light intensity to the $^{12}$CO moment map, which is the integration of the $^{12}$CO channel maps. In the channel maps, higher velocities detected at small projected distances allow us to probe the warmer (brighter) disk interior.
On the other hand, scattered light is optically thick and comes from the disk outer edge even at small projected distances, as indicated by its lack of variation with projected distance. Thus, at small projected distances, it is expected that the scattered light surface, probing the disk outer edge, appears higher than the $^{12}$CO moment~0 peak of emission, which traces the inner regions. The agreement in height towards the outer radii, however, indicates that both components are emitted at similar altitudes above the midplane. \section{Radiative transfer modeling} \label{sec:model} In the previous section, we presented high angular resolution observations of Oph\,163131, which reveal a highly structured disk. Given the high inclination of the disk, the presence of rings provides constraints on the vertical extent of millimeter dust particles, and on the efficiency of vertical settling. We aim to use these additional morphological constraints to refine the millimeter continuum radiative transfer modeling of the source presented in \citet{Wolff_2021}, and to constrain the physical structure of the millimeter grains in the disk of Oph\,163131. Building on the existing model, and to obtain a more complete view of the disk, we also compute the spectral energy distribution (SED) and the 0.6 $\mu$m image. \subsection{Methodology} \label{sec:mcfost} To model the disk of Oph\,163131, we use the radiative transfer code \textsc{mcfost}~\citep{Pinte_2006, Pinte_2009}. We assume that the disk structure is axisymmetric, and we model the disk using several regions to reproduce all substructures seen in the continuum observations. Given the complexity of the ALMA continuum image, we do not aim to find a unique model, but rather a well-fitting one. For each region, we assume a power-law distribution for the surface density: \begin{equation} \label{eq:surf_dens} \Sigma(r) \propto r^{p}, \text{ for } R_\mathrm{in} < r < R_\mathrm{out} \end{equation} We parametrize the vertical extent of the grains by the scale height, such that: \begin{equation} \label{eq:scale_height} H(r) = H_\mathrm{100\,au} (r/100\,\text{au})^\beta, \text{ for } R_\mathrm{in} < r < R_\mathrm{out} \end{equation} where $\beta$ is the flaring exponent. For simplicity, we use astronomical silicates \citep[similar to those shown in Fig.~3 of][]{Draine_1984}\footnote{The dust properties are evaluated from: ftp://ftp.astro.princeton.edu/draine/dust/diel/eps\_suvSil} with a power-law distribution of the grain sizes following $n(a)\text{d}a \propto a^{-3.5}\,\text{d}a$. The main free parameters of each region are thus $R_\mathrm{in}$, $R_\mathrm{out}$, $H_\mathrm{100\,au}$, and $M_\mathrm{dust}$. In all our modeling, we fixed a flaring exponent of $\beta=1.1$ for large grains, and $\beta=1.2$ for small grains~\citep[as in][for example]{Villenave_2019}. In addition, we fixed the surface density exponent to $p=-1$ in all regions, except for region~2 and ring~2, where we needed to adjust this exponent~(see Sect.~\ref{sec:model_ALMA}). The inclination is a global disk parameter that we constrain with this modeling to be 84$^\circ$ (see Sect.~\ref{sec:model_ALMA}, ring~2). We assume that all regions are coplanar with this inclination. Following \citet{Wolff_2021} and \citet{Flores_2021}, we adopt a distance of 147~pc, a stellar mass of 1.2~M$_\odot$, a stellar radius of 1.7~R$_\odot$, a stellar effective temperature of 4500~K, and assume 2 magnitudes of interstellar extinction in the SED.
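For illustration, the parametrization of Eqs.~(\ref{eq:surf_dens}) and (\ref{eq:scale_height}) can be sketched as follows, with a Gaussian vertical profile as assumed for each grain population; this is a standalone sketch with illustrative values, not \textsc{mcfost}'s internal implementation.
\begin{verbatim}
import numpy as np

def surface_density(r, p=-1.0, r_in=60.0, r_out=87.0):
    """Sigma(r) propto r^p between r_in and r_out (au); unnormalized."""
    return np.where((r >= r_in) & (r <= r_out), r**p, 0.0)

def scale_height(r, H100=0.5, beta=1.1):
    """H(r) = H_100au (r / 100 au)^beta, in au."""
    return H100 * (r / 100.0)**beta

def density(r, z):
    """Gaussian vertical profile with local scale height H(r)."""
    H = scale_height(r)
    return (surface_density(r) / (np.sqrt(2.0 * np.pi) * H)
            * np.exp(-z**2 / (2.0 * H**2)))

# Example: midplane density across an illustrative ring of large grains
r = np.linspace(55.0, 90.0, 100)   # au
rho_mid = density(r, 0.0)
\end{verbatim}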
For each set of parameters, we compute the 1.3\,mm continuum image, the SED, and the 0.6\,$\mu$m scattered light image. All maps are convolved with a representative PSF before being compared to the data. We use the HST PSF generated by the \texttt{TinyTim} software package~\citep{Krist_2011} for the 0.6\,$\mu$m image. For the ALMA data, we simulate the real interferometric response by producing synthetic images using the CASA simulator. For each model, we first generate synthetic visibility files for all three observing times and configurations, using the CASA task \texttt{simobserve}. We then produce synthetic images of these visibilities using the \texttt{tclean} function with the same weighting parameters as for the observations.\\ \begin{figure*} \centering \includegraphics[width = 0.3\textwidth, trim={0cm 0cm 3.2cm 0cm}, clip ]{figures/Oph163131_Image_data.pdf} \includegraphics[width = 0.3\textwidth, trim={0cm 0cm 3.2cm 0cm}, clip ]{figures/image_1330_CASA.pdf} \includegraphics[width = 0.34\textwidth, trim={1cm 0cm 0.5cm 0cm}, clip]{figures/residuals_1330.pdf} \caption{\emph{Left:} Continuum image of Oph\,163131. \emph{Middle:} Synthetic observations of the model using the CASA simulator. \emph{Right:} Residual map. Deviations from 3$\sigma$ and 5$\sigma$ are indicated by black and grey contours. In the scale bar, $\sigma$ corresponds to the rms of the data image. The maximum of the scale corresponds to the peak signal-to-noise ratio in the data ($\sim48.7\sigma$). } \label{fig:model_1330} \end{figure*} Previous modeling of Oph\,163131\ by \citet{Wolff_2021} demonstrated that some degree of vertical settling is needed to reproduce both the scattered light and millimeter images. In this section and Sect.~\ref{sec:model_ALMA}, we mimic dust settling by considering two separate grain populations~\citep[e.g.,][]{Villenave_2019, Keppler_2018}. One layer is composed of large grains ($10-1000\ \mu$m) intended to model the ALMA continuum map, and the other layer includes smaller dust particles ($0.01-10\ \mu$m) necessary to reproduce the SED and scattered light image. In addition, we note that we present a complementary model in Sect.~\ref{sec:settling}, using a different (parametric) settling prescription and considering a continuous distribution of dust particles. This second approach allows us to test the level of turbulence in the disk. For the model with two layers of dust grains (this section and Sect.~\ref{sec:model_ALMA}), we build the layer of small grains based on the results of the comprehensive MCMC fitting presented in \citet{Wolff_2021}. We use a model in which the small grains extend from 0.07~au to 170~au, with a scale height of $H_{sd, 100au} = 9.7$~au, and surface density and flaring exponents of $-1$ and $1.2$, respectively. The parameters of the small grain layer are summarized in Table~\ref{tab:parameters}. They were chosen from the range of allowed parameters in \citet{Wolff_2021} and to provide a reasonable match to the SED and scattered light image (see Appendix~\ref{appdx:SED_HST}). We note that, for all regions of our model, we assumed that the gas is colocated with the dust, with a gas-to-dust ratio of~100. For the new millimeter continuum data, our strategy followed an iterative process, looking for a representative model. We adjusted the parameters to reproduce the surface brightness cut along the major axis and to minimize the residual map, by way of visual inspection.
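A minimal sketch of this synthetic-observation step is given below, using the CASA tasks named above; the file names, antenna configuration, integration time, and image geometry are placeholders rather than the exact values used.
\begin{verbatim}
# Run inside a CASA session; repeat simobserve for each of the
# three observed configurations before imaging them together.
simobserve(project='oph163131_sim',
           skymodel='mcfost_model_1.3mm.fits',  # model image
           obsmode='int',
           antennalist='alma.cycle6.10.cfg',    # placeholder
           totaltime='9000s')

# Image the synthetic visibilities with the same weighting
# and deconvolver settings as the observations (Sect. 2).
tclean(vis='oph163131_sim/oph163131_sim.alma.cycle6.10.ms',
       imagename='oph163131_model_1.3mm',
       specmode='mfs',
       deconvolver='multiscale',
       scales=[0, 4, 22, 44],  # pixels: ~0, 1, 5, 10 beam FWHM
       weighting='briggs', robust=0.5,
       imsize=4096, cell='0.005arcsec',
       niter=10000, threshold='28uJy')  # ~3x the data rms
\end{verbatim}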
In Sect.~\ref{sec:model_ALMA}, we focus on the radial structure and adopt a scale height for the large grains of $H_{ld, 100{\rm au}} = 0.5$\,au at 100\,au. Then, in Sect.~\ref{sec:vertical_extent}, we explore and discuss this scale height assumption in more detail. \subsection{ALMA continuum model} \label{sec:model_ALMA} In the right panel of Fig.~\ref{fig:cont_label}, we presented a labeled version of the continuum image of Oph\,163131. We highlighted the main features that we aim to reproduce with radiative transfer modeling, namely ring~2, ring~1, the partially-depleted region~2, and the central region. To reproduce the ALMA continuum emission, we thus consider five regions in our model: (1) a central point source, representing the central peak; (2) a first ring, needed to reproduce the elongated structure around the central point source, labeled region~1 in Fig.~\ref{fig:cont_label}; (3) an inner region, to reproduce region~2; (4) a second ring, for ring~1; and (5) an outer ring, reproducing ring~2. \begin{figure} \centering \includegraphics[width = 0.45\textwidth, trim={0.4cm 0cm 0.5cm 0cm}, clip]{figures/cut_1330.pdf} \caption{Cut along the major axis (top), with each map normalized to its maximum and the contours corresponding to 3$\sigma$ levels, and residual cut (bottom). The blue region in the residual map corresponds to 3$\sigma$ and the green to 5$\sigma$. Green vertical lines are at 0.63\arcsec.} \label{fig:model_1330_cut} \medskip \includegraphics[width = 0.45\textwidth, trim={0.4cm 0cm 0.5cm 0cm}, clip]{figures/zoomed_cut_1330.pdf} \caption{Zoom-in on the cut along the major axis (Fig.~\ref{fig:model_1330_cut}) to show the central region. We show the beam size by a dashed blue Gaussian.} \label{fig:model_1330_cut_zoom} \end{figure} In this section, we present the main characteristics of each region, starting from the outside. Our strategy was to first fix the inner and outer radii of each region, and then to follow an iterative process to estimate the best combination of dust masses in each region to reproduce the major axis cut and residual maps. We present the model in Fig.~\ref{fig:model_1330}, Fig.~\ref{fig:model_1330_cut}, and Fig.~\ref{fig:model_1330_cut_zoom}, and its parameters in Table~\ref{tab:parameters}. \paragraph{Ring 2} We reproduce the outer emission by implementing a broad ring between 98~au and 150~au. The brightness profile does not drop sharply at the outer edge of the disk, which we model with a steep power-law exponent of $p= -6$. Because this region is the best resolved, we used it to constrain the inclination of the system. We find that an inclination of $84\pm1^\circ$ best matches the major axis profile, the residual map, and the minor axis size of the disk. Both higher and lower inclinations lead to axis ratios which are not compatible with the data. We note that an inclination of 84$^\circ$ is consistent with the results of \citet{Wolff_2021}. We used this inclination and the observed position angle of the disk in the visibility fit presented in Sect.~\ref{sec:frank}. \begin{figure} \centering \includegraphics[width = 0.49\textwidth]{figures/surfacedensity_2.pdf} \caption{Dust surface densities of the model presented in Table~\ref{tab:parameters}.} \label{fig:surface_density} \end{figure} \paragraph{Ring 1} The continuum millimeter image and \texttt{frank} profile (Fig.~\ref{fig:cont} and Fig.~\ref{fig:frank}) clearly show a ring centered at 73\,au ($\sim$ 0.5\arcsec).
Because of the high inclination of the system, the ring is less clear in the major axis cut, as it is affected by projection effects. We reproduce this ring with a disk region between 60\,au and 87\,au. After including ring~1 in our model, it became clear, both from the model images and the major axis cut, that the region interior to it is not devoid of dust. To reproduce this feature, we introduced a region of large grains in region~2. \begin{table} \centering \caption{Parameters for our radiative transfer model.} \begin{tabular}{rlc} \hline \hline Inclination& ($^\circ$)& 84\\ PA& ($^\circ$)& 49 \\ \hline && \emph{Central point source}\\ $a_\mathrm{min}-a_\mathrm{max}$& ($\mu$m) &$10 - 1000$\\ $R_\mathrm{in}-R_\mathrm{out}$ & (au)&$0.07 - 3.5$\\ $M_\mathrm{dust}$& (M$_\odot$)& $4\,\cdot\,$10$^{-6}$\\ H$_\mathrm{ld, 100au}$, $\beta$, p &(au, $-$, $-$) & 0.5, 1.1, -1\\ \hline &&\emph{Region 1}\\ $a_\mathrm{min}-a_\mathrm{max}$& ($\mu$m) &$10 - 1000$\\ $R_\mathrm{in}-R_\mathrm{out}$ & (au)&$3.5 - 13$ \\ $M_\mathrm{dust}$& (M$_\odot$)& $3\,\cdot\,$10$^{-7}$\\ H$_\mathrm{ld, 100au}$, $\beta$, p &(au, $-$, $-$) & 0.5, 1.1, -1\\ \hline &&\emph{Region 2}\\ $a_\mathrm{min}-a_\mathrm{max}$& ($\mu$m) &$10 - 1000$\\ $R_\mathrm{in}-R_\mathrm{out}$ & (au)&$13 - 60$ \\ $M_\mathrm{dust}$& (M$_\odot$)& $1.3\,\cdot\,$10$^{-5}$ \\ H$_\mathrm{ld, 100au}$, $\beta$, p &(au, $-$, $-$) & 0.5, 1.1, +1\\ \hline && \emph{Ring 1}\\ $a_\mathrm{min}-a_\mathrm{max}$& ($\mu$m) &$10 - 1000$\\ $R_\mathrm{in}-R_\mathrm{out}$ & (au)& $60 - 87$ \\ $M_\mathrm{dust}$& (M$_\odot$)& $4\,\cdot\,$10$^{-5}$ \\ H$_\mathrm{ld, 100au}$, $\beta$, p &(au, $-$, $-$) & 0.5, 1.1, -1\\ \hline && \emph{Ring 2} \\ $a_\mathrm{min}-a_\mathrm{max}$& ($\mu$m) &$10 - 1000$\\ $R_\mathrm{in}-R_\mathrm{out}$ & (au)&$98 - 150$\\ $M_\mathrm{dust}$& (M$_\odot$)& $4\,\cdot\,$10$^{-5}$\\ H$_\mathrm{ld, 100au}$, $\beta$, p &(au, $-$, $-$) & 0.5, 1.1, -6\\ \hline &&\emph{Small grains}\\ $a_\mathrm{min}-a_\mathrm{max}$& ($\mu$m)&$0.01 - 10$\\ $R_\mathrm{in}-R_\mathrm{out}$ & (au)&$0.07-170$ \\ $M_\mathrm{dust}$ & (M$_\odot$)& $2\,\cdot\,$10$^{-5}$\\ H$_\mathrm{sd, 100au}$, $\beta$, p &(au, $-$, $-$) & 9.7, 1.2, -1\\ \hline \end{tabular} \tablecomments{Each parameter was adjusted during the modeling, except for the grain size ($a_\mathrm{min}-a_\mathrm{max}$) and the flaring exponent of each region ($\beta$). The parameters $H_{100au}$ and $\beta$ were defined in Eq.~(\ref{eq:scale_height}), and the surface density exponent in Eq.~(\ref{eq:surf_dens}).} \label{tab:parameters} \end{table} \begin{figure*} \centering \includegraphics[width = \textwidth]{figures/scaleHeightComparison_055.pdf} \caption{Scale height comparison. \emph{Top and bottom left panels:} Both sides of the continuum image of Oph\,163131. \emph{Middle panels:} Model images of Oph\,163131, with $H_{ld, 100{\rm au}}$ varying from 0.5 to 3 au. \emph{Top right panel:} Zoom-in on gap 1~(0.63\arcsec\ is marked by the green vertical line) along the major axis of the disk, for the data (averaged from both sides) and models. \emph{Bottom right panel:} Minor axis cuts of the data and the models, at 0.55\arcsec~(80 au) from the central point source, marked by a black cross on the data and model images.
The uncertainty on the cuts corresponds to 1$\sigma$.} \label{fig:scale_height_comp} \end{figure*} \paragraph{Region~2} To reproduce the increase of surface brightness with distance to the star in region~2 (between 0.09\arcsec and 0.45\arcsec), we included a disk region between 13\,au and 60\,au. However, based on the shape of the major axis profile, we found that a surface density exponent of $p=-1$ (as in most other disk regions) does not provide a good match to this region. With a surface density decreasing with radius, the profile of the region would not be flat but would drop steeply after a few au. Our results indicate that, to reproduce the major axis profile between 13\,au and 60\,au (0.09\arcsec\ and 0.45\arcsec), region~2 needs to have a surface density increasing with radius, with $p=+1$. Such a profile is not expected in protoplanetary disks with a smooth pressure gradient and might indicate the presence of more complex structure, such as additional unresolved rings. We note that self-induced dust traps at the position of ring~1 might be able to reproduce a profile of increasing surface density with radius at millimeter wavelengths, corresponding to region~2~\citep{Vericel_2021, Gonzalez_2017}. \paragraph{Central point source and region 1} During our modeling, we found that the innermost region of Oph\,163131\ cannot be correctly modeled by a unique region extending from 0.07\,au to 13\,au. This is because, as seen in the zoomed major axis cut in Fig.~\ref{fig:model_1330_cut_zoom}, the central emission is not properly reproduced by an unresolved point source convolved with the corresponding beam size: it can be decomposed into two Gaussians with different widths. We reproduce the central region with a slightly resolved central point source, plus a second ring extending up to 13\,au. We fixed the inner radius of the central point source to be a conservative estimate of the sublimation radius~\citep[see][their Section~3.2]{Wolff_2021}.\\ To summarize, we present the millimeter model images and model parameters in Fig.~\ref{fig:model_1330}, Fig.~\ref{fig:model_1330_cut}, Fig.~\ref{fig:model_1330_cut_zoom}, and Table~\ref{tab:parameters}. The SED and scattered light model are shown in Appendix~\ref{appdx:SED_HST}, and we also display the surface density of the model in Fig.~\ref{fig:surface_density} (see Appendix~\ref{appdx:surface_density} for details). The model has a total dust mass of 1.2$\,\cdot\,$10$^{-4}\ M_\odot$, which is in agreement with the results from the MCMC modeling by \citet[][constraining $M_\mathrm{dust} >3 \cdot 10^{-5}\ M_\odot$ for the scattered light model]{Wolff_2021}. However, we note that with the current modeling the large grain layer is partially optically thick, which implies that the dust masses presented in Table~\ref{tab:parameters} are formally lower limits. \section{Discussion} \label{sec:discussion} \subsection{Vertical extent of mm sized particles} \label{sec:vertical_extent} Despite the high inclination of Oph\,163131, the new millimeter continuum image presented in this work reveals interesting radial substructures. These can be used to constrain the vertical extent of large dust grains in this disk. In particular, the fact that the outer gap is well resolved readily indicates that the mm-emitting dust is confined to a very thin midplane layer. In Sect.~\ref{sec:model_ALMA}, we presented a model with an extremely thin large grain scale height of $H_{ld, 100{\rm au}} =0.5$ au.
Here, in order to constrain the vertical extent of millimeter dust particles, we follow the methodology of \citet{Pinte_2016} on HL~Tau and look for geometrical constraints on the vertical extent of the millimeter dust in the rings. We modify our model presented in Sect.~\ref{sec:model} to test larger scale heights for the large dust layers, with $H_{ld, 100{\rm au}} =$ 0.5, 1, 2, and 3 au at 100\,au. We keep the same disk radial structure, dust mass, and small grain layer as presented in Table~\ref{tab:parameters}, and only vary the value of $H_{ld, 100{\rm au}}$ in the large grain layers. In Fig.~\ref{fig:scale_height_comp}, we compare the data with the model images, a zoom-in on the major axis cut at the position of the outer gap, and minor axis cuts across ring~1. From the model images and major axis cut, one can clearly see that when $H_{ld, 100{\rm au}}$ increases, the outer gap gets filled in due to projection effects~\citep[see also][]{Pinte_2016}. From the major axis profiles displayed in the top right panel of Fig.~\ref{fig:scale_height_comp}, we find that the gap is clearly shallower in the model with $H_{ld, 100{\rm au}} = 3$\,au than in the data, which indicates that this model is vertically too thick to reproduce the observations. In addition, we computed minor axis cuts at 0.55\arcsec ($\sim80$\,au) from the central point source along the major axis direction to assess how well the ring/gap is resolved in the data and in the model. In the observations, the cut along the minor axis at 0.55\arcsec\ reveals three peaks: one for each side of ring~2 and the central peak corresponding to ring~1. We also see a brightness asymmetry between the two peaks corresponding to the outer ring, which is mostly due to optical depth / geometrical effects. This asymmetry varies between 20\% and 40\% along the major axis. When compared to the models, we find that the models with a millimeter scale height of $H_{ld, 100{\rm au}} = 3$\,au and $H_{ld, 100{\rm au}} = 2$\,au do not show three peaks in the cut. Their minor axis profiles are smoother, with only one peak, which indicates that they are too thick vertically to reproduce the observations. On the other hand, the models with a scale height of $H_{ld, 100{\rm au}} = 1$ au and $H_{ld, 100{\rm au}} = 0.5$ au show the three peaks. The peaks are significantly clearer for the model with a large grain scale height of 0.5\,au, which provides the best match to the data. This indicates that (at least) the outer region of the disk is extremely thin vertically, with a vertical scale height of order $H_{ld, 100{\rm au}} = 0.5$\,au or less. Our results are in agreement with findings from \citet{Pinte_2016}, who modeled the gaps and rings of HL\,Tau, and also with the conclusions of \citet{Villenave_2020}, based on the comparison of the major and minor axis profiles of HK\,Tau\,B, HH\,30, and HH\,48 with fiducial radiative transfer models. Both studies find a scale height of millimeter grains $H_{ld, 100{\rm au}}$ of about one au or less in these systems. Our analysis of Oph\,163131\ thus seems to generalize the finding that grains emitting at millimeter wavelengths are extremely thin vertically in the outer regions of protoplanetary disks to a larger number of disks, pointing to the possibility that this is the case in most protoplanetary disks. We discuss some implications of this result for the formation of wide-orbit planets in Sect.~\ref{sec:pa}.
\vspace{1cm} \subsection{Implications on the degree of dust-gas coupling} \label{sec:alpha/St} In the previous section, we constrained the vertical extent of the millimeter dust particles to be $H_{ld, 100{\rm au}} \leq 0.5$\,au at 100\,au. This value is about one order of magnitude smaller than the small-grain and gas scale height obtained by \citet{Wolff_2021} using MCMC fitting of the scattered light data (of $H_{g, 100{\rm au}} = H_{sd, 100{\rm au}} = 9.7 \pm3.5$\,au), and is indicative of efficient settling of millimeter grains in the disk. From the difference in scale height between the gas and millimeter dust grains, one can estimate the degree of coupling of this dust with the gas, which is what we aim to do in this section. More specifically, we characterize the ratio $\alpha/St$, where $\alpha$ represents the turbulence strength~\citep{Shakura_1973}, and $St$ is the Stokes number, which describes the aerodynamic coupling between dust and gas. In the classical 1D prescription of settling~\citep{Dubrulle_1995}, the vertical transport of particles is related to turbulence through a diffusion process. Assuming a balance between settling and turbulence, for grains in the Epstein regime ($St\ll1$), and under the assumption of $z\ll H_g$, the dust scale height can be written as follows~\citep[e.g.,][]{Youdin_2007, Dullemond_2018}: \begin{equation} H_d = H_g \left(1 + \frac{St\,Sc}{\alpha}\right)^{-1/2} \label{eq:dust_scale_height} \end{equation} where $Sc$ is the Schmidt number of the turbulence, characterizing the ratio of the turbulent viscosity to the turbulent diffusivity. In this section, we follow \citet{Dullemond_2018} and \citet{Rosotti_2020} and assume that the Schmidt number of the turbulence is equal to $Sc=1$. Using Eq.~(\ref{eq:dust_scale_height}) for $H_{d} = H_{ld, 100au}=0.5$~au and $H_{g, 100au} = 6.2$~au (the lowest limit from \citealt{Wolff_2021}), we obtain an upper limit for the ratio $\alpha/St$ of $[\alpha/St_{ld}]_{100au}<6\cdot10^{-3}$, suggesting a strong decoupling between the large grains in this model and the gas in the vertical direction. This value is relatively small compared to previous estimates of the $\alpha/St$ ratio, which have been measured both in the vertical and in the radial directions. In this paragraph, we compare our estimates with results from \citet{Doi_Kataoka_2021}, who also studied the strength of the coupling in the vertical direction. \citet{Doi_Kataoka_2021} estimated $[\alpha/St]_{100au, \text{ HD}163296}<1\cdot10^{-2}$ for the outer ring of HD~163296 (and $[\alpha/St]_{67au, \text{ HD}163296}>2.4$ for the inner ring). Their relatively looser constraint on the coupling strength of the outer ring likely comes from the fact that HD~163296 is significantly less inclined than Oph\,163131\ (47$^\circ$ vs 84$^\circ$), making the study of its vertical structure more difficult. On the other hand, \citet{Dullemond_2018} and \citet{Rosotti_2020} used the dust and gas widths of several ringed systems to constrain the coupling strength in the radial direction. They consistently obtained $[\alpha/St]_{100au, \text{ HD}163296}\geq4\cdot10^{-2}$ for the second ring of HD~163296, and $[\alpha/St]_{120au, \text{ AS}209}\geq0.13$ for AS~209, which is significantly higher than our estimate of the vertical coupling in Oph\,163131\ and that of \citet{Doi_Kataoka_2021} in HD~163296. In the radial direction, it is also clear that the rings in Oph\,163131\ are not particularly thin (and do not appear Gaussian).
In particular, there is only a factor $R_{CO, out}/R_{ld, out}\sim2.8$ in radial size between the $^{12}$CO and the millimeter dust emission, while the difference in the vertical direction reaches about ten times that value: $H_{g, 100au}/H_{ld, 100au}\sim 20$ (or at least $H_{g, 100au}/H_{ld, 100au}> 12$ if we assume the lowest limit from \citealt{Wolff_2021}). As previously suggested by \citet{Doi_Kataoka_2021}, these apparent inconsistencies might indicate that the turbulence level is different in the radial and vertical directions, or that the ring formation mechanism is different from what was assumed by \citet{Dullemond_2018} and \citet{Rosotti_2020}. The first possibility is also consistent with the results from \citet{Weber_2022}, who produced hydrodynamical simulations of V4046\,Sgr and found that, to reproduce all their observations, the turbulence in the vertical direction must be reduced compared to its value in the radial direction. \subsection{Vertical settling} \label{sec:settling} The modeling presented in Sect.~\ref{sec:model_ALMA} and Sect.~\ref{sec:vertical_extent} and used in Sect.~\ref{sec:alpha/St} was based on the simplified assumption of two independent layers of dust particles, where the small grains ($<10\,\mu$m) have a large scale height and the large grains ($>10\,\mu$m) are concentrated into the midplane. We now aim to test a more realistic settling prescription (previously used by, e.g., \citealt{Pinte_2016} and \citealt{Wolff_2021}), which allows us to consider a continuous distribution of dust particles. We use the \citet{Fromang_Nelson_2009} prescription for settling (their equation 19), whose implementation in \texttt{mcfost} is described in \citet{Pinte_2016}. We assume that the gas vertical profile remains Gaussian and that the diffusion is constant vertically. With this prescription, each dust grain size follows its individual vertical density profile, which depends on the gas scale height, the local surface density, and the turbulent viscosity coefficient $\alpha$~\citep{Shakura_1973}. This profile reproduces relatively well the vertical extent of dust grains obtained with global MHD simulations~\citep{Fromang_Nelson_2009}. For completeness, we report the vertical density profile for a grain size $a$ that we adopt in this section: \begin{equation} \rho(r, z, a)\propto \Sigma(r) \exp\left[-\frac{\Omega\tau_S(a)}{\tilde{D}}\left(e^{\frac{z^2}{2H_g^2}} -1\right) -\frac{z^2}{2H_g^2}\right] \label{eq:Fromang} \end{equation} with $\Omega=\sqrt{\frac{GM_\star}{r^3}}$, $\tilde{D} = \frac{\alpha}{S_c}$ the dimensionless diffusion coefficient, and $\tau_S(a)=\frac{\rho_da}{\rho_gc_S}$, where $c_S=H_g\Omega$ is the midplane sound speed, $H_g$ is the gas scale height, $S_c$ is the Schmidt number, which is fixed to~1.5 in \texttt{mcfost}~\citep{Pinte_2016}, $\rho_d$ is the dust material density, and $\rho_g$ is the gas density in the midplane. We note that the Stokes number described in Sect.~\ref{sec:alpha/St} corresponds to $St = \Omega\tau_S(a)$, so that the first term in the exponent of Eq.~(\ref{eq:Fromang}) involves the ratio $St\,S_c/\alpha$. The degree of settling is set by varying the $\alpha$ parameter. We also note that when $z\ll H_g$, the dust density defined in Eq.~(\ref{eq:Fromang}) reduces to a simple Gaussian function with a scale height defined by Eq.~(\ref{eq:dust_scale_height})~\citep[e.g.,][]{Riols_2018}. The value of the turbulence parameter is usually predicted to range from $\alpha = 10^{-3}$ to $10^{-2}$ when driven by the MRI (magnetorotational instability, e.g., \citealt{Balbus_1991}).
On the other hand, in regions with low ionization and weak coupling between the gas and magnetic fields, recent studies have shown that hydrodynamic or non-ideal MHD effects may dominate, and could lead to turbulence parameters as low as $\alpha = 10^{-4} - 10^{-6}$~\citep{Flock_2020, Bai_2013}. In this context, we produced models for different settling strengths, obtained for an $\alpha$ parameter of $10^{-5}$, $10^{-4}$, and $10^{-3}$. For this modeling, we assume that the grain size distribution integrated over the whole disk follows a simplified power law, $n(a)\,\mathrm{d}a \propto a^{-3.5}\,\mathrm{d}a$, with minimum and maximum grain sizes of 0.01 and 1000\,$\mu$m, respectively. We note that the local size distribution may vary as a function of vertical height due to the presence of vertical settling in our model~\citep{Sierra_2020}. We distribute the grains over 5 radial regions coincident with the large grain regions presented in Table~\ref{tab:parameters}. All 5 regions are normalized to a gas scale height of $H_{g, 100{\rm au}} = 9.7$ au at 100~au with a flaring of $\beta =1.1$, but the large grains have a smaller scale height than the gas because of the settling prescription considered ($H_{ld, 100{\rm au}}<H_{g, 100{\rm au}}$). We adjust the mass of each region to obtain a model that reproduces the millimeter image relatively well, and more specifically the levels of the surface brightness profile along the major axis. The total dust mass in these models varies between $7.3\times10^{-5}$ and $9.9\times10^{-5}\ M_\odot$. As in Sect.~\ref{sec:model_ALMA}, we produced synthetic images of the models using the CASA simulator. \begin{figure*} \centering \includegraphics[width =1 \textwidth]{figures/settling_model_CASA.pdf} \caption{Data (left panel) and models using the settling prescription of \citet{Fromang_Nelson_2009}, with an $\alpha$ parameter of $10^{-5}$ (second left panel), $10^{-4}$ (second right panel), and $10^{-3}$ (right panel). } \label{fig:settling_fromang} \end{figure*} We present the 1.3\,mm model images obtained for an $\alpha$ parameter of $10^{-5}$, $10^{-4}$, and $10^{-3}$ in Fig.~\ref{fig:settling_fromang}. For comparison with Sect.~\ref{sec:vertical_extent}, the scale heights of 1~mm sized particles obtained in these models are $H_{1\text{mm}, 100au}= 0.95$~au, 2.7~au, and 5.7\,au, respectively. Similarly to the previous section, the effect of vertical settling is clearly visible in these models. Focusing on the outer regions of the disk (ring~1, gap~1, and ring~2), we find that the highest turbulence level leads to a large millimeter dust scale height, blurring the gap due to projection effects. On the other hand, the lowest turbulence tested, $\alpha =10^{-5}$, efficiently settles the large grains toward the midplane and reproduces the shape of the millimeter emission well. When performing a similar study for various other gas scale heights, between $H_{g, 100{\rm au}} = 6.2$\,au and 13.2\,au, compatible with the range of posterior values estimated by~\citet[][]{Wolff_2021}, we also found that $\alpha =10^{-5}$ provides the best match with the observations. Thus, we find that the millimeter observations of Oph\,163131\ are consistent with a turbulent viscosity coefficient of $\alpha \leq10^{-5}$, at least in the outer regions of the disk ($r \sim 100$\,au). 
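For illustration, the following minimal Python sketch (our own, for illustration only; it is not the \texttt{mcfost} implementation) evaluates Eq.~(\ref{eq:Fromang}) at a fixed radius, measures an effective dust scale height from the second moment of the profile, and inverts Eq.~(\ref{eq:dust_scale_height}) to recover the coupling ratio of Sect.~\ref{sec:alpha/St}. The Stokes number used here is an arbitrary placeholder, since the true $St$ of millimeter grains depends on the local gas density:
\begin{verbatim}
import numpy as np

def settled_profile(z, H_g, St, alpha, Sc=1.5):
    # Eq. (Fromang) at fixed radius, up to the Sigma(r) prefactor,
    # with the dimensionless diffusivity D~ = alpha/Sc
    x2 = z**2 / (2.0 * H_g**2)
    return np.exp(-(St * Sc / alpha) * np.expm1(x2) - x2)

H_g = 9.7                                # au, gas scale height at 100 au
z = np.linspace(0.0, 5.0 * H_g, 20001)   # au
for alpha in (1e-5, 1e-4, 1e-3):
    rho = settled_profile(z, H_g, St=1e-3, alpha=alpha)  # St: placeholder
    H_eff = np.sqrt(np.trapz(z**2 * rho, z) / np.trapz(rho, z))
    print(f"alpha={alpha:.0e}: H_eff ~ {H_eff:.2f} au")

# Inverting Eq. (dust_scale_height) with Sc = 1 bounds the coupling ratio:
print(1.0 / ((6.2 / 0.5)**2 - 1.0))      # ~6.5e-3, of order the 6e-3 limit
\end{verbatim}
With these placeholder inputs, the effective scale height drops from several au to below 1\,au as $\alpha$ decreases from $10^{-3}$ to $10^{-5}$, the same qualitative trend as in the models above.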
Together with results from turbulent line broadening in several protoplanetary disks~\citep[e.g.,][]{Flaherty_2020, Flaherty_2018, Flaherty_2017}, and those of \citet{Pinte_2016}, who used a similar technique on HL\,Tau, our study suggests that the turbulence is low in the outer regions of protoplanetary disks. Additionally, we note that when combining these results with the coupling estimate from Sect.~\ref{sec:alpha/St}, we find that the Stokes number of particles emitting at millimeter wavelengths must be greater than $St >1.7\cdot10^{-3}$ at 100\,au. Finally, we note that the settling prescription implemented in this section depends on many assumptions (e.g., grain size distribution, choice of settling prescription, uniform turbulence throughout the disk, constant grain opacity, ...) and that more complexity might need to be added to the models to be able to simultaneously reproduce the millimeter and scattered light images, and the SED. Nevertheless, the high resolution datasets available for the highly-inclined disk Oph\,163131\ allow us to obtain independent measures of the scale height for multiple dust sizes, which is extremely valuable to test and improve current dust settling models. \subsection{Potential for wide-orbit planet formation in Oph\,163131} \label{sec:pa} The exceptionally well-characterized outer disk of Oph\,163131\ is strongly indicative of dust growth and settling. A significant fraction of the dust mass appears to have grown to near-millimeter sizes and to have settled toward a dense disk midplane layer, with $H_{ld, 100au}/H_{g, 100au}\sim~0.05$ at 100\,au. These two processes, dust growth and settling, are widely recognized as the key first steps for planetesimal formation and planetary growth \citep[for example, see reviews by][]{Youdin_2013,Johansen_2017,Ormel_2017}. Nevertheless, the outer parts of protoplanetary disks ($\gtrsim~50$\,au) are a challenging environment for planet formation due to inherently low solid surface densities and long orbital timescales. In this section, we address the potential for wide-orbit planet formation in Oph\,163131, motivated by the ring-like features and dust depletion pattern that could be indicative of ongoing planet formation. A common suggestion for the formation of wide-orbit giant planets is that such planets emerge through the direct fragmentation of young, massive, gravitationally unstable disks \citep{Helled_2014}. However, the apparently quiescent nature of Oph\,163131, with signs of extremely low vertical stirring ($\alpha \lesssim 10^{-5}$) and little to no accretion onto the star~\citep{Flores_2021}, rules out this scenario. Possibly, the formation of such gravitational-instability planets could have taken place earlier in the disk lifetime, but the early formation of such massive planets -- exceeding Jupiter in mass \citep{Kratter2010} -- would have resulted in deep inner gas and dust cavities, which are not seen. Instead, as we argue below, planet formation could have proceeded more gently through core accretion, with the core growing by pebble accretion. In the core accretion scenario \citep{Mizuno_1980,Pollack_1996}, the formation of a giant planet is initiated by the formation of a solid icy/rocky core that accretes an H/He gaseous envelope of varying mass, resulting in either ice giants -- with an envelope mass smaller than the core mass -- or gas giants. 
However, the formation of a typical giant-planet core of about $10$ Earth masses (M$_\mathrm{E}$) cannot occur solely through planetesimal accretion at orbits outside $10$\,au: formation times would exceed disk lifetimes even when assuming a planetesimal mass budget exceeding a solar dust-to-gas ratio by a factor of $10$ \citep{Rafikov_2004,Kobayashi_2010}. On the other hand, core growth can be accelerated by the direct accretion of pebbles, provided that, as is the case in Oph\,163131, they are abundantly present and concentrated toward the disk midplane. The pebble accretion efficiency is highly dependent on the vertical pebble scale height, because accretion reaches maximal efficiency only when the pebble accretion radius exceeds the pebble scale height \citep[in the so-called 2D Hill accretion branch,][]{Lambrechts_2012}. Figure \ref{fig:pebble_accretion} illustrates the growth timescale as a function of the pebble scale height and the pebble-to-gas ratio \citep[following][]{Lambrechts_2012,Lambrechts_2019}. Pebbles are assumed to be spherical with a radius of $a=0.5$\,mm. We take for the pebble surface density the values inferred in Sect.~\ref{sec:model_ALMA} (see Table~\ref{tab:parameters} and Fig.~\ref{fig:surface_density}). However, we note that regions in the disk may be optically thick, which would result in higher dust-to-gas ratios than used here. We present two assumptions on the gas surface density: one where the dust-to-gas ratio is the nominal solar value of $0.01$, with a 75\%-mass fraction locked in pebbles (consistent with our model from Sect.~\ref{sec:model_ALMA}, see also Appendix~\ref{appdx:paparam}), and one where we assume a total disk mass of $0.1$\,M$_{\odot}$. The latter is an upper limit given that the gas disk is likely less massive: there is no strong gas accretion onto the star detected~\citep{Flores_2021} and no evidence for asymmetries in the disk~\citep[potentially triggered by gravitational instabilities, e.g.,][]{Hall_2019, Hall_2020}. We choose a location of $r=100$\,au, corresponding to the inner edge of the outer ring 2. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/dtgplotv14_100AU_R0.05.png} \caption{ Time to grow a planetary embryo of 10$^{-2}$~M$_{\rm E}$ to a fully grown core of 10~M$_{\rm E}$ at $r=100$\,au, as given by the gray contours, depending on the pebble scale height and the local pebble-to-gas surface density ratio, assuming pebbles $a=0.5$\,mm in size. The vertical blue dashed line gives the estimate for the pebble scale height in Oph\,163131. The red point corresponds to a disk with solar dust-to-gas ratio, with a 75\%-mass fraction locked in pebbles. The vertical orange dashed line corresponds to the assumption of a massive gas disk, with a total mass of $0.1$\,M$_\odot$. The gray area shows where the midplane pebble-to-gas ratio is equal to or above unity, which corresponds to the parameter region in line with planetesimal formation through the streaming instability. % Pebble scale heights exceeding the one inferred here for Oph\,163131\ would suppress planet formation in wide orbits.
} \label{fig:pebble_accretion} \end{figure} The disk around Oph\,163131\ appears to be conducive to core growth via pebble accretion within a range of reasonable values for the pebble-to-gas surface density, as shown in Figure \ref{fig:pebble_accretion}. 
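To give a sense of the scaling behind Fig.~\ref{fig:pebble_accretion}, the short Python sketch below integrates the maximally efficient 2D Hill-branch accretion rate, $\dot{M} \simeq 2\Omega r_{\rm H}^2 \Sigma_p$ \citep{Lambrechts_2012}, from embryo to core mass. The stellar mass and pebble surface density are illustrative placeholders rather than the exact inputs of our figure, and the assumed 2D efficiency makes this a rough lower bound on the growth time:
\begin{verbatim}
import numpy as np

# cgs constants
G, Msun, ME, au, yr = 6.674e-8, 1.989e33, 5.972e27, 1.496e13, 3.156e7

def growth_time_2d_hill(Sigma_p, r, Mstar, M0=1e-2 * ME, M1=10 * ME):
    # t = integral dM / Mdot, with Mdot = 2 * Omega * r_H^2 * Sigma_p
    # (maximally efficient 2D Hill branch, hence a lower bound on t)
    Omega = np.sqrt(G * Mstar / r**3)
    M = np.logspace(np.log10(M0), np.log10(M1), 2000)
    r_hill = r * (M / (3.0 * Mstar))**(1.0 / 3.0)
    Mdot = 2.0 * Omega * r_hill**2 * Sigma_p
    return np.trapz(1.0 / Mdot, M) / (1e6 * yr)   # in Myr

# Placeholder inputs (assumptions, not the exact values of our figure):
print(growth_time_2d_hill(Sigma_p=0.01, r=100 * au, Mstar=0.5 * Msun))
# ~1 Myr: comfortably within typical disk lifetimes
\end{verbatim}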
We find that a $10^{-2}$~M$_{\rm E}$ embryo can grow up to 10~M$_{\rm E}$ at $60$\,au from the central star within less than $10$\,Myr \citep[the maximum lifetime of protoplanetary disks,][]{Ribas_2015}. Importantly, a pebble scale height of $H_{\rm pebbles}/H_g \sim 0.05$, consistent with $H_{ld, 100au}/H_{g, 100au}$ (see Sect. \ref{sec:model_ALMA} and \ref{sec:vertical_extent}, and also Appendix~\ref{appdx:surface_density}), strongly promotes core growth, while pebble scale heights that are a factor of $10$ larger would largely suppress core growth by pebble accretion. Furthermore, in Appendix~\ref{appdx:paparam}, we show that core-growth timescales do not strongly depend on the chosen location. We present results near the inner and outer edge of ring 1, at $61$ and $85$\,au, respectively. In contrast, our results do depend on the chosen particle size. In Appendix D, we illustrate that particles need to grow beyond $0.1$\,mm in size for pebble accretion to drive core growth, which is consistent with what is generally inferred in protoplanetary disks~\citep[e.g.,][]{Carrasco-Gonzalez_2019, Macias_2021, Tazzari_2021}. A possible issue is that core growth by pebble accretion needs to be initiated by a sufficiently large seed mass resulting from an episode of planetesimal formation. Planetesimals are believed to be the result of the gravitational collapse of pebble swarms that self-concentrate through the streaming instability in the pebble midplane \citep{Youdin_2007,Johansen_2007}. The streaming instability requires dust growth to pebble sizes and a high degree of pebble settling (approaching midplane pebble-to-gas ratios near unity). Although planetesimal formation may be unlikely to be ongoing now (see the gray upper triangle in Fig.~\ref{fig:pebble_accretion} marking where the midplane pebble-to-gas ratio is larger than one), the combined lack of strong vertical pebble stirring and the possibility of local moderate increases in pebble concentrations would be conducive to planetesimal formation \citep{ Johansen_2009, Bai_2010, Carrera_2021}. When planetesimal formation is triggered in simulations of the streaming instability, the analysis of the typical planetesimal mass distribution argues for large planetesimals in the outer disk \citep{Schafer_2017}. Following the mass scaling of \citet{Liu_2020}, we find that planetesimals in the exponential tail of the mass distribution could reach masses exceeding 0.01~M$_{\rm E}$ beyond 60\,au (as assumed in Fig.\,\ref{fig:pebble_accretion}). In summary, the low observed pebble scale height of the protoplanetary disk around Oph\,163131\ is conducive to planetary growth by pebble accretion, even in wide orbits exceeding $50$\,au. If planet formation was indeed initiated in these outer regions, it could possibly be consistent with the depletion of pebbles in the inner disk ($<60$\,au) and the ring-like features identified in this work. Thus, the disk of Oph\,163131\ provides us with an exciting opportunity to study possibly ongoing planet formation at large orbital radii. \section{Summary and conclusions} \label{sec:concl} In this paper, we present new high angular resolution ALMA continuum and $^{12}$CO millimeter images of the highly inclined disk Oph\,163131. 
The $^{12}$CO image can be described by two distinct regions: 1) an inner region ($R\lesssim150$\,au) where the disk shows a clear X~shape, with a linear increase of the gas emission height with radius of $z/r\sim0.3$, and 2) a flat outer region (for $R>200$\,au), with uniform brightness and temperature. In contrast with the $^{12}$CO emission, the scattered light surface brightness of the HST 0.6\,$\mu$m data appears flat at all radii. This indicates that the scattered light comes from the outer regions of the disk, due to the high optical depth of micron-sized particles in Oph\,163131. We find that the heights of the scattered light and $^{12}$CO emission are similar at large radii, which suggests that both components are emitted at similar altitudes above the midplane. In addition, we find that the millimeter continuum emission from larger grains is less extended in both the radial and vertical directions compared to the scattered light and gas emission, which is in agreement with expectations from vertical settling and possibly radial drift. Our millimeter continuum observations of Oph\,163131\ reveal clear rings, which remained undetected in previous lower angular resolution observations. From the outside in, the disk shows two rings separated by a clear gap, some emission inside the first ring, and some bright central emission. We performed comprehensive radiative transfer modeling of the ALMA continuum image in order to constrain the physical structure of the source. In particular, we used the resolved outer ring, located at $\sim$100\,au, to add strong geometrical constraints on the vertical extent of millimeter sized particles. Our modeling of Oph\,163131\ indicates that the grains emitting at 1.3\,mm are extremely thin vertically, with a vertical scale height at 100\,au of $H_{ld, 100au}\leq 0.5$ au. Because the vertical extent of small dust particles (and indirectly the gas) has been constrained to be $\sim10$~au at 100\,au~\citep{Wolff_2021}, this is clear evidence that vertical settling is occurring in the disk. Using a classical 1D prescription of settling and our estimates of the dust and gas scale heights, we estimate the degree of coupling of dust and gas to be $[\alpha/St]_{100au}<6\cdot10^{-3}$. This value is particularly low compared to previous estimates in protoplanetary disks. We also aimed at constraining the turbulence parameter $\alpha$ in Oph\,163131. To do so, we produced three additional radiative transfer models assuming the settling model of \citet{Fromang_Nelson_2009}, with $\alpha=10^{-3}, 10^{-4},\text{ and }10^{-5}$, and a gas vertical extent of 9.7\,au at 100\,au. We find that the coefficient of turbulent viscosity needs to be extremely low in the disk to reproduce the observations, of order $\alpha \lesssim10^{-5}$ in the outer regions of the disk. Finally, we used our results to test the pebble accretion scenario in the outer regions of the disk. The remarkably small pebble scale height of Oph\,163131\ is particularly favorable for pebble accretion: we show that a $10$ Earth-mass planet can form in the outer disk, between approximately $60$ and $100$\,au, within less than 10~Myr. If, on the other hand, the dust scale height were 10 times larger than our observational constraint, core growth by pebble accretion would be largely suppressed. Thus, the extreme vertical settling measured in Oph\,163131\ may be at the origin of the formation of wide-orbit planets. 
Further constraints on vertical settling in a larger number of protoplanetary disks might provide more insights into the dominant formation mechanism of wide-orbit planets. \bigskip \emph{Acknowledgments.} We thank the anonymous referee for their detailed revision of our work, which helped to improve the quality of this study. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2018.1.00958.S and 2016.1.00771.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. MV research was supported by an appointment to the NASA Postdoctoral Program at the NASA Jet Propulsion Laboratory, administered by Universities Space Research Association under contract with NASA. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement Nº 210021. GD acknowledges support from NASA grants NNX15AC89G and NNX15AD95G/NExSS as well as 80NSSC18K0442. MB acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant PROTOPLANETS No. 101002188). C.P. acknowledges funding from the Australian Research Council via FT170100040, and DP180104235. KRS acknowledges support from the NASA Exoplanet Exploration Program Office. AS acknowledges support from ANID/CONICYT Programa de Astronom\'ia Fondo ALMA-CONICYT 2018 31180052. \bigskip \emph{Software:} CASA \citep{McMullin_2007}, \texttt{mcfost} \citep{Pinte_2006, Pinte_2009}, \texttt{frank} \citep{Jennings_2020}, Matplotlib \citep{Hunter_2007}, Numpy \citep{Harris_2020}, \texttt{TinyTim} \citep{Krist_2011}. \bigskip \copyright 2022. All rights reserved. 
\section{Introduction}\label{sec:intro} \input{tex/intro.tex} \section{Motivation and Example Data} \label{sec:motiv} \input{tex/motiv.tex} \section{Related Work} \label{sec:rel} \input{tex/rel.tex} \section{Tasks for Binned 2D Data} \label{sec:tasks} \input{tex/tasks.tex} \section{Design Space} \label{sec:design} \input{tex/design.tex} \section{Implementation} \label{sec:impl} \input{tex/impl.tex} \section{Discussion and Outlook} \label{sec:dis} \input{tex/dis.tex} \section{Conclusion} \label{sec:conclusion} \input{tex/conclusion.tex} \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{abbrv} \subsection{Task List} \ldaall Tasks for 2D point data that was aggregated by binning are connected to the more general tasks for regular scatterplots, for which an analysis and categorization of tasks has been published~\cite{Sarikaya2018}. The task space that we derive and discuss in this section is an extended subset of this larger collection of tasks that can be supported by general, unbinned scatterplots. We refine the derived set of tasks according to the task design space of Schulz~\latinphrase{et~al.}\xspace~\cite{Schulz2013}, and ground each of these abstract tasks in a concrete example from the previously introduced data sets. Based on a review of relevant literature, Sarikaya and Gleicher~\cite{Sarikaya2018} collect and categorize a set of 12 tasks that users perform with scatterplots. Those tasks are grouped into three different categories, comprising \emph{object-centric} tasks, \emph{browsing}, and \emph{aggregate-level} tasks. The first type, \emph{object-centric}, focuses on single data objects, and includes identifying and finding the location of a particular object. In other words, \emph{object-centric} tasks cover all the low-level data characteristics of Schulz~\latinphrase{et~al.}\xspace's task design space~\cite{Schulz2013}. The second category, \emph{browsing}, comprises tasks focused on either single data items or higher level structures such as clusters, and thus targets low- as well as high-level data characteristics. The third category, \emph{aggregate-level} tasks, focuses entirely on high-level data characteristics. When working with binned scatterplots, the data analyst has decided that aggregating the data is the best way to perform the task at hand. Since the aggregation step abstracts away from single items, leaving only high-level data characteristics, we can reduce the set of potential tasks supported by a binned scatterplot to \emph{browsing} and \emph{aggregate-level} tasks. Binned representations of multi-class data introduce two new visual elements that analysis tasks can target: \emph{bins} and \emph{classes}. The dimension that captures this is called the \emph{scope} (or \emph{cardinality}) of a task~\cite{Schulz2013}. Each of the tasks in our space can either be targeted at bins (\emph{bin-centric}) or at classes (\emph{class-centric}). Extending the task set along this dimension is helpful for tasks supported by binned representations of 2D data, since it significantly influences the adequacy of designs to serve a task. Table~\ref{targetTable} lists all resulting tasks and diversifies them into a \emph{bin-centric} and a \emph{class-centric} version. In addition, a more extensive table, mapping all of the abstract tasks to high-level data characteristics and the example tasks discussed in the following section, is available as supplemental material (see also \S\ref{sec:dis} for a discussion of task completeness). 
\subsection{Task Examples}\label{sec:example_tasks} We have already seen examples for the first six tasks from Table~\ref{targetTable} back in \S\ref{sec:motiv}. Here, we introduce two additional datasets and show designs and examples for the remaining tasks. \vspace{1mm} \noindent \textbf{Early Modern Drama Collection} contains the full text of 1,242 dramas from the years 1576--1700. The texts are categorized into nine different genres, including \crule[tr]{0.25cm}{0.25cm}\xspace tragedy, \crule[tc]{0.25cm}{0.25cm}\xspace tragicomedy, and others (see caption of Figure~\ref{ldaall}). Both dimensions have been generated using topic modeling to extract eight distinct topics based on the document-level co-occurrences of words across the corpus. A topic is represented as a list of weighted words that are used in documents to talk about the topic. We then picked two topics as dimensions to lay out the documents, based on the number of words that each document contains from the respective topic. An example analysis scenario is to explore how those two topics separate the different drama genres. Figure~\ref{ldaall}a shows a traditional scatterplot of the dataset based on those two dimensions. While we are able to see some rough structure, especially in terms of class distribution for the larger classes such as \crule[co]{0.25cm}{0.25cm}~comedies and \crule[tr]{0.25cm}{0.25cm}~tragedies, it is hard to spot the positions and distribution of the smaller classes. To get a better sense of the general density distribution of the plot (\emph{task 7: characterize distribution -- bin}), we can take a look at the design in Figure~\ref{ldaall}b. It uses color weaving to combine unnormalized counts from all classes, with the number of colored fragments representing the overall density of each bin. We can see that the general density is roughly distributed within a triangular shape, with density centers close to its three corners (\emph{task 9: identify anomalies -- bin}). Next, we are interested in whether the different drama genres are separated based on the position within the plot (\emph{task 8: characterize distribution -- class}). Figure~\ref{ldaall}c can give us initial insight into which bins are density centers for the classes. It uses rectangular bins colored by interpolating the colors of all classes in each bin, weighted by densities normalized across categories. Bins are thus dominated by the color of the class that has density peaks within the respective bin. It thus provides an overview of class intensities across the space. We can see that the two largest groups, \crule[co]{0.25cm}{0.25cm}~comedies and \crule[tr]{0.25cm}{0.25cm}~tragedies, are mostly placed on the left and the right of the plot. While Figure~\ref{ldaall}c colors entire bins based on color blending, Figure~\ref{ldaall}d shows density information for each class by mapping the tone of each color to class intensity. Rather than providing a coarse overview, it allows users to see more specific properties of class distributions and supports an in-depth analysis of class overlaps. We can see that the density centers for the drama types \crule[tr]{0.25cm}{0.25cm}\xspace tragedy and \crule[co]{0.25cm}{0.25cm}\xspace comedy are situated along an axis from the left side of the space to its right side, with the density center for \crule[tc]{0.25cm}{0.25cm}\xspace tragicomedy right between both. All other types are roughly aligned along a perpendicular axis from the top of the plot further down. 
We can also see that, while class density centers seem to be separated by the dimensions of this space, there is a huge overlap between the classes. One interesting class whose distribution pattern is different from the others (\emph{task 10: identify anomalies -- class}) is \crule[hi]{0.25cm}{0.25cm}\xspace history, which has its density center right in the middle of the plot and has lots of overlap with every other class in the dataset. \treecoverall \vspace{1mm} \noindent The \textbf{Colorado Tree Coverage} dataset is part of the UCI machine learning data repository~\cite{Lichman2013}. It contains data about the arbor environment of four wilderness areas in Colorado. Each entry of the data set contains attributes of one individual tree, including variables such as tree type, position, and additional environmental factors. Overall, the data set covers seven different types of trees. An example analysis scenario for this data set is to analyze how different environmental variables influence absolute and relative proportions of tree types in a region. For this example, we focus on two specific variables from the data set. The first one, `elevation', encodes the meters above sea level at which a tree grows, and helps to stratify the environment into different regions that tree types might prefer. The second one, `horizontal distance to hydrology', encodes the number of meters (in horizontal direction) to the next water source. Figure~\ref{treecoverall}a shows a traditional scatterplot of the roughly 600,000 trees. We can get an impression of the areas in which different types of trees grow, but overdraw is a huge problem. A question such as whether there is a correlation between elevation level and the variety of tree types that grow on that level (\emph{task 11: identify correlation -- bin}) is hard to answer. From Figure~\ref{treecoverall}b, however, we can easily see that variety decreases with rising elevation, based on the number of classes within the pie charts. The design is based on hexagonal bins without showing explicit boundaries or bin shapes. This design aspect has the advantage of reducing visual clutter, but it also makes it harder to read exact bin positions off the plot. Pie charts convey class proportions in each of the bins, while the area of each pie chart encodes the overall point density in the bin. From this design, we can easily learn about the general density distribution across the bins in addition to a rough impression of relative class distributions. Distributions within single classes and the regions of overlap between multiple classes are, however, hard to explore with this design. Figure~\ref{treecoverall}c is an alternative for this data set with explicit outlines for each bin. The background colors are the result of blending together all class colors present in a bin, weighted according to their frequency within the bin. This enables us to quickly read the main class present in a bin and gauge each bin's purity with respect to class distribution. Similar to the previous design, the plot conveys rough class distributions based on the blended background colors. This design also shows a subset of points for each bin, sampled based on the distribution of classes within the bin, with at least one data point per class. It allows us to see classes that have a very low frequency compared to others. From Figure~\ref{treecoverall}c we can see that the most prominent type of tree in an environment varies with elevation level. 
We then wonder whether elevation level can help us separate between different tree types (\emph{task 12: identify correlation -- class}). While there is generally a lot of overlap between types, we can see from Figure~\ref{treecoverall}c, for example, that \crule[ponderosa]{0.25cm}{0.25cm}\xspace ponderosa pines do not grow above 2800m, while \crule[krumm]{0.25cm}{0.25cm}\xspace krummholz starts to appear at about 3300m. Figure~\ref{treecoverall}b also allows us to quickly spot regions of high (or low) density---such as the high concentration of trees at 2800--3200 meters, very close to the nearest water source (\emph{task 13: numerosity comparison -- bin}). The next question we might ask is whether both tree types in the area contribute equally to this peak, or whether it is due to one particular type (\emph{task 14: numerosity comparison -- class}). From the unnormalized frequencies in Figure~\ref{treecoverall}d, we can see that there are actually two overlapping density peaks, one in \crule[lodgepole]{0.25cm}{0.25cm}\xspace lodgepole pines and, at slightly higher altitude levels, one in \crule[spruce]{0.25cm}{0.25cm}\xspace spruce/fir, which combine into this peak in tree density. Another question we might have for the dataset concerns the general boundaries of tree growth (\emph{task 15: understand distances -- bin}). From Figure~\ref{treecoverall}c we can see that \crule[cottonwood]{0.25cm}{0.25cm}\xspace cottonwood, \crule[douglas]{0.25cm}{0.25cm}\xspace douglas firs, and \crule[ponderosa]{0.25cm}{0.25cm}\xspace ponderosa pines need water close by (within 700 meters of the nearest water source), while the other four types are much less dependent on close surface water. The plot also shows that most trees have a preferred elevation range in which they grow, with bands of roughly 500--600 meters of altitude and large overlaps between the different tree types across those regions (\emph{task 16: understand distances -- class}). \subsection{Fatality Analysis Reporting System (FARS) Dataset} This is a dataset about fatal car crashes that includes car crashes in California from 2011--2015 \cite{farsdata2017}. We picked 5 of the most common crash types in the dataset and investigate their distribution. Figure \ref{farsall}(a) shows a scatterplot based on this dataset. We can see a rough distribution indicating that car crashes happen more often in big cities. Figure \ref{farsall}(b) shows a splatterplot \cite{Mayorga2013} version of this dataset. Splatterplots are a KDE-based design that shows aggregated information. In this figure, we get a clearer overview of the most dense areas and of the relative density of a specific class. However, neither the scatterplot nor the splatterplot clearly shows which crash type happens most often. Figure \ref{farsall}(c) shows a design based on binned aggregation. In this figure, we can see that car crashes happen more often in big cities and that southern California has more accidents. This is not surprising, since the traffic is heavier in cities. Los Angeles is a ``cars first'' city, so it is reasonable to see more car crashes in Los Angeles. By using binned aggregation, we have the freedom of focusing on specific regions (e.g., comparing San Francisco and Los Angeles) by parametrizing the size and the position of the bins. Besides, we can see from Figure \ref{farsall}(c) that ``Drive off Road'' (labeled as 2 in Figure \ref{farsall}) is the most common crash type, both in and outside big cities. In that case, we might want to investigate the reason why people are more likely to drive off the road. 
Moreover, it seems that ``Control/Traction Loss'' (labeled as 1 in Figure \ref{farsall}) happens more often in the south-east part of California. A possible reason might be that people tend to drive faster in that region. \farsall \subsection{Tree Cover Type Datasets} This is a standard dataset from the UCI machine learning repository \cite{Lichman2013}. It is derived from a cartographic survey of the Roosevelt National Forest of Colorado. It contains seven different kinds of tree cover types. We extracted two attributes from the dataset: elevation and horizontal distance to hydrology (nearest surface water). Elevation is the attribute that separates the different tree types most. Its scatterplot is shown in Figure \ref{treecoverall}(a). Although we can see rough boundaries of some classes in this figure, it is hard to see all 7 classes. It is even harder to tell the proportions of different tree types in a specific region. Figure \ref{treecoverall}(b) shows the splatterplot of the dataset. In the splatterplot, we can better see the range of different tree types when looking at elevation. We can also see that trees at higher elevations tend to spread out more in the domain of horizontal distance to hydrology. However, it is hard to read the color of overlapping parts, and we might need to carefully trace the boundaries to understand the class labels. Figure \ref{treecoverall}(c) shows the dataset with binned aggregation. From this figure, we can see that the most dense part is where the elevation is between 2800--3200 meters. Surprisingly, this is not the same area with the most tree type diversity, which we can clearly see in Figure \ref{treecoverall}(b). This is due to the different parameters set in the function when generating the splatterplot. People might have difficulty setting a suitable threshold in a splatterplot without enough statistics background. Yet, binned aggregation does not require this knowledge. It can show the variety of tree types in a region and the distribution of different tree types. It has a lower chance of misleading people without a concrete statistics background. From the binned aggregation, we can find out that in the area with the most variety, tree type 2 has the largest proportion. We could further investigate whether tree type 2 is suitable to survive in more kinds of environments. We can also see that tree type 1 and tree type 2 live at similar elevations and distances to hydrology. Having this information, we can further investigate whether these two tree types are competitors or mutually beneficial. \treecoverall \subsection{Data Aggregation Methods} Binning is a frequently used method to help address the problem of overdraw with traditional scatterplots. According to Battersby~\cite{Battersby2016}, its computational efficiency and the perceptual advantages of regular grids for comparing densities within and between aggregated plots are reasons for the popularity of binning. Cleveland describes how it can be used to address issues of overdraw and occlusion in his book~\cite{Cleveland1985}. He places great importance on the ability of the binned representation to convey a sense of aggregate distributions (see also the discussion of choosing bin shape and size in \S\ref{sec:shape} and \S\ref{sec:size}), while sacrificing the ability of the viewer to identify individual points. Carr stresses the computational efficiency of binning, and reiterates its importance for showing distributions by applying binning to SPLOMs in order to understand distributions across many dimensions~\cite{Carr1986}. 
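To make the aggregation step concrete, the following small NumPy sketch (our own illustration, not code from any of the cited systems) reduces labeled 2D points to the per-class frequency counts on which all designs discussed in this paper build:
\begin{verbatim}
import numpy as np

def bin_by_class(x, y, labels, n_classes, bins=32, extent=None):
    # Returns an (n_classes, bins, bins) array of raw per-class
    # frequency counts over a regular rectangular lattice.
    counts = np.zeros((n_classes, bins, bins))
    for c in range(n_classes):
        mask = labels == c
        counts[c], _, _ = np.histogram2d(x[mask], y[mask],
                                         bins=bins, range=extent)
    return counts
\end{verbatim}
Hexagonal or triangular lattices only change the point-to-bin assignment; the resulting per-class count structure, and everything built on top of it, stays the same.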
Aside from regular lattices, other shape types and tessellations can be used to achieve different design goals. Battersby \latinphrase{et~al.}\xspace~\cite{Battersby2016} show that geographical distortion of bin shape can impact viewers' ability to compare count and density measures. One methodical way of creating irregular bins is to ensure that each bin contains the same number of data instances~\cite{Bak2009,Hao2010}. In this case, as the number of data items increases, spatial bins must also grow in size. As a follow-up, Hao \latinphrase{et~al.}\xspace~\cite{Hao2010} propose variable binned scatterplots, a method that shows each region in a separate plot large enough to show all points without occlusion. Cartograms are an example of modifying bin size to communicate the magnitude of data instances, and different techniques have varying analysis trade-offs~\cite{Sun2010, Nusrat2015}. For choropleth maps that communicate magnitude within bins with color, Brewer \latinphrase{et~al.}\xspace~\cite{Brewer1997} evaluate the use of diverging, sequential, and spectral color schemes to support different analysis tasks. As an alternative to discrete aggregation methods, kernel density estimation (KDE) can aggregate point data to a continuous scalar field~\cite{Scott2015}. Techniques such as Splatterplots~\cite{Mayorga2013} can utilize these methods to create semantically meaningful visualizations that automatically scale to the available screen-space. While Splatterplots use thresholding to denote areas of density, Chen \latinphrase{et~al.}\xspace~\cite{Chen2014} take advantage of space-aware subsampling to illustrate proportional densities with smaller amounts of points. Isocontours can also be computed from a KDE, describing densities much like a topographic map~\cite{Urness2003}. Jo \latinphrase{et~al.}\xspace~\cite{Jo2019} describe how Voronoi tessellations can create variable bin shapes and sizes, dependent on the distribution. While our focus is on regular bins, we also consider tasks and designs for irregular lattices as sources for design ideas. \newTasksTable \subsection{Visual Representations for Binned Data} The results of binning multi-class data are scalar fields over a regular lattice. For the visual encoding of this type of data, many alternatives are available to communicate both frequencies and distributions of classes, and to facilitate comparisons based on both. Color is one common visual variable to encode frequency. Aside from using continuous color channels such as luminance to convey frequency~\cite{Brewer1997}, color ramps can be chosen to emphasize frequency peaks~\cite{Liu2013}, and quantized to improve viewers' accuracy of color perception and recall~\cite{Padilla2017}. For particular tasks, designs may want to highlight frequencies that do not match an underlying distribution---Correll and Heer~\cite{Correll2017} use Bayesian surprise as an orthogonal dimension in a bi-variate color map, emphasizing those areas with unusually high or low frequency. Another common use of color is to communicate class identity. Color weaving~\cite{Shenas2007} assigns proportions of color conveying membership pixel-wise to an area, then permutes those pixels to create a proportional tapestry of color. It thus uses color to communicate both class identity and frequency. Color weaving can help to elucidate class proportions in dense areas of the plot, as described by Luboschik \latinphrase{et~al.}\xspace~\cite{Luboschik2010}. 
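A minimal sketch of the weaving idea (our own simplification for illustration, assuming per-class counts for a single bin and one RGB color per class) fills a small pixel patch proportionally and then permutes it:
\begin{verbatim}
import numpy as np

def weave_patch(counts, colors, patch=8, rng=None):
    # Fill a patch x patch tile with class colors in proportion to the
    # per-class counts of one bin, then randomly permute the pixels.
    rng = rng if rng is not None else np.random.default_rng()
    n = patch * patch
    frac = np.asarray(counts, dtype=float) / np.sum(counts)
    n_px = np.floor(frac * n).astype(int)
    n_px[np.argmax(frac)] += n - n_px.sum()   # assign rounding remainder
    idx = np.repeat(np.arange(len(counts)), n_px)
    rng.shuffle(idx)
    return np.asarray(colors, dtype=float)[idx].reshape(patch, patch, -1)
\end{verbatim}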
Attribute blocks~\cite{Miller2007} can show the proportions of a large number of classes by further subdividing bins, using each sub-bin to encode a single class frequency via color. Other choices for conveying frequency and class membership include texture and size, sometimes combined with a color encoding. Ware~\cite{Ware2009} uses textons (small texture-based glyphs) and color to allow the viewer to compare two scalar fields. Tobler~\cite{Tobler1973} uses textures that are able to convey continuous values in choropleth maps via quantization. The ideas for visualizing regular and non-regular 2D density fields from these approaches help to inform our exploration of visual designs suitable for binned aggregation throughout this paper. \subsection{Linking Task and Design} Taxonomies of tasks and designs use abstraction to emphasize differences and similarities without dependencies on implementation and domain details. Tasks are a core consideration in the design of visualizations, scaffolding the viewer to obtain the desired information about the data (see Munzner~\cite{Munzner2015} for an overview). Early work by Shneiderman~\cite{Shneiderman1996} has been extended by numerous other task taxonomies; see Brehmer and Munzner~\cite{Brehmer2013} for an overarching taxonomy. More domain-specific task sets have been proposed for areas such as cartography~\cite{Andrienko2006,Roth2013}. Of particular interest for this paper, Schulz \latinphrase{et~al.}\xspace~\cite{Schulz2013} create a generalized space of visualization analysis tasks, taking into account the cardinality of objects being considered, as well as the types of data characteristics (e.g., distribution, outliers) communicated by the task. Our interest is in how tasks relate to the designs of a resulting binned visualization. While no such work exists for this scenario, previous work has focused on other combinations of design decisions and tasks. Javed and Elmqvist~\cite{Javed2012} describe the design space of compositing visualizations, basing their analysis on the literature. Schulz \latinphrase{et~al.}\xspace~\cite{Schulz2011} review and extend the visualization design space for hierarchical data, similar to what we do for multi-class binning. Borgo \latinphrase{et~al.}\xspace~\cite{Borgo2013} compile an overview of glyph-based visualization and design guidelines with associated tasks from examples in the literature. Informed by the guidelines in Kerracher and Kennedy~\cite{Kerracher2017}, we validate our task classification by examining existing taxonomies and instantiating abstract tasks on concrete analyses. Closely related to our paper, Sarikaya and Gleicher~\cite{Sarikaya2018} provide a space of analysis tasks, data characteristics, and design decisions derived from existing examples in the literature. Their guidance is generalized to the entire space of scatterplot designs, suggesting a need for more specific analysis of particular scenarios such as multi-class binning. Jo \latinphrase{et~al.}\xspace~\cite{Jo2019} generate a grammar for deriving numerous binned designs, highlighting decisions of encoding type and normalization. This work, however, stops short of drawing relationships between design decisions and the types of analysis tasks they support. We seek to fill this gap. \subsection{Representing Bins} The first three design dimensions discuss choices to generate and represent bins and their properties within the 2D space. 
\subsubsection{Shape} \label{sec:shape} All of the designs that we discuss are based around visual representations of the bins generated during the first part of the abstraction process. For this reason, the choice of bin shape, which is part of the output of the first step of binning, depends on statistical and distributional characteristics of the data, as well as on perceptual properties of the resulting visualization. In addition to choosing an effective shape for the data and task at hand, designers should also take care in choosing the scale of the axes, as the perception of distribution can be affected by the positioning of bins. There are only three regular shapes that tessellate a 2D space: triangles, rectangles, and hexagons. Scott~\cite{Scott1988} suggests that triangular bins should be avoided, since dividing the space up into triangles results in a higher expected per-point position error than the alternatives. In addition, tiling with triangles requires rotations of the shape. Rectangular bins provide good contrast between orthogonal and diagonal neighbors~\cite{Birch2006}, making them a good choice if the alignment with vertical or horizontal neighbors is important. They are the only shape with a constant interval along both axes, making them a good choice if either the task requires reading off intervals with a certain precision from the plot, or the chosen intervals have semantic meaning (such as temporal or spatial units). Hexagons are particularly common on maps, and are considered aesthetically superior to the alternatives~\cite{Carr1992}. They are better at representing local neighborhoods of bins~\cite{Birch2007}, making them a good choice for bin-centric tasks that focus on local structure, such as \emph{tasks 1 and 2: explore neighborhood -- bin/class}. Hexagons also introduce the least expected error between a point and the bin center, resulting in the least expected distortion of density counts~\cite{Scott1988}. They are thus also well-suited for tasks that involve the identification of fine-grained local density gradients, such as \emph{tasks 9 and 10: identify anomalies -- bin/class}, or \emph{tasks 11 and 12: identify correlation -- bin/class}. \subsubsection{Size} \label{sec:size} Similar to bin shape, bin size is also influenced by data characteristics and visual properties of the resulting display. Its choice determines the number of bins, limiting or enhancing the spatial fidelity of the visualization. For designs that are based on multiple plots (see the discussion about comparison in \S\ref{sec:composition}), bin size can either be homogeneous or heterogeneous across the plots. The latter allows designers to choose different spatial resolutions for each class, but complicates the mapping between plots (more in \S\ref{sec:interaction}). Binning creates a 2D histogram of the data space, where bin size controls the degree of aggregation that is applied to the data. It determines what details users are able to discern about the data. Methods that find optimal bin sizes for a large range of different datasets have been studied by Wand~\cite{Wand1997} and Knuth~\cite{Knuth2006}. Another aspect of bin size concerns the perceptual properties of the visual representations of the bins, which are governed by the available screen space for the visualization. 
Smaller bin sizes reduce the fidelity of communicating both spatial and class-proportional information within each bin---this greatly affects class-specific tasks, such as \emph{task 14: numerosity comparison -- class}, or bin-centric ones that compare imbalanced class proportions, such as \emph{bin-centric tasks 1: explore neighborhood, 5: explore data, or 7: characterize distribution}. The available space also affects color perception~\cite{Stone2012}. Therefore, there is a trade-off between making bins small enough to reduce spatial aliasing (smaller bin size) and large enough to convey distributional information for each bin (larger bin size). \subsubsection{Bin Boundaries} \label{sec:boundary} Bins that have no explicit boundaries have to contain glyphs to communicate distributional information (much like QTonS~\cite{Ware2009}, an overlay for a scalar field), such as the pie glyphs in Figure~\ref{treecoverall}b. An advantage of boundary-less designs is reduced clutter. For this reason, depending on the complexity and visual properties of the glyphs, smaller glyph sizes can be accommodated, which benefits tasks that profit from high spatial resolution (e.g., \emph{tasks 9 and 10: identify anomalies -- bin/class, and 11 and 12: identify correlation -- bin/class}). However, missing bin boundaries make it harder to gauge the exact area a bin covers, hindering tasks that depend on this (e.g., \emph{tasks 7 and 8: characterize distribution -- bin/class}). An example of a design that explicitly encodes the spatial boundaries of bins is shown in Figure~\ref{ldaall}b. In addition to making bin intervals easier to read off the plot, explicit boundaries also help with mapping bins across different plots~\cite{Tory2003}, for example when using juxtaposed designs (as discussed in \S\ref{sec:composition}). Another advantage of bin boundaries is that they introduce bins as separate visual elements into the plot, making it easier to support additional user interaction with them (see \S\ref{sec:interaction}). \subsection{Composition}\label{sec:composition} In this section, we discuss encoding classes and class distributions for each of the bins, and ways of composing this class-specific information into a multi-class density map. \vspace{1mm} \noindent \textbf{Class Identity} The designs we discuss encode bin position in the data space as position in the plot. Thus, while position would be a very salient channel for encoding identity~\cite{Mackinlay1986}, it is already in use for the two primary data attributes. From the remaining choices, Livingston~\latinphrase{et~al.}\xspace~\cite{Livingston2012} find that color is quite effective. This is by far the most popular choice in the literature, with a few historic exceptions that use texture to encode class identity~\cite{Pierce1894}. All of our designs and the following discussions are based on using color to encode class identity. Those colors are combined in different ways to communicate a range of class distribution properties. There are two exceptions to this, discussed in \S\ref{sec:colorplus}: one of our designs (attribute blocks) uses relative position within the bins, while another (hatching) uses angle to encode class identity. Both designs redundantly encode these additional visual variables with color for class identity. \vspace{1mm} \noindent \textbf{Normalization} After creating the spatial bins, the data items in each of the bins are reduced to raw frequency counts for each class present in the bin (this aggregation step is sketched below).
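As a concrete illustration of this aggregation step, consider the following minimal Python sketch (all names are ours and purely illustrative); it assigns points to rectangular bins and accumulates per-class frequency counts, and a hexagonal layout would change only the point-to-bin mapping:
\begin{verbatim}
import numpy as np

def bin_class_counts(x, y, labels, n_classes,
                     n_bins=(20, 20), extent=None):
    """Aggregate 2D points into rectangular bins of per-class
    counts; returns an array of shape (nx, ny, n_classes)."""
    if extent is None:
        extent = (x.min(), x.max(), y.min(), y.max())
    x0, x1, y0, y1 = extent
    # Map each point to its bin index along both axes.
    ix = np.clip(((x - x0) / (x1 - x0) * n_bins[0]).astype(int),
                 0, n_bins[0] - 1)
    iy = np.clip(((y - y0) / (y1 - y0) * n_bins[1]).astype(int),
                 0, n_bins[1] - 1)
    counts = np.zeros((n_bins[0], n_bins[1], n_classes), dtype=int)
    np.add.at(counts, (ix, iy, labels), 1)  # unbuffered scatter-add
    return counts
\end{verbatim}
Each bin's vector of raw class counts is then the input to the normalization choices discussed next.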
Normalization of those raw counts and the visual encoding of the resulting distribution are independent. Still, the choice of normalization has a significant influence on the adequacy of visual encodings and the types of comparisons that those encodings need to support for the task at hand. The details of these dependencies are discussed with the designs in \S\ref{sec:distribution}. There are three options for normalization (a sketch of all three follows below): \begin{itemize} \item \emph{bin-internal}: all frequency counts are normalized by the maximum frequency in their bin. \item \emph{class-internal}: all frequency counts are normalized by the maximum frequency in their class. \item \emph{global}: all frequency values are normalized by the overall maximum frequency across bins and classes. \end{itemize} Designs based on bin-internal normalization favor bin-centric tasks (\emph{odd-numbered tasks}). As an example, Figure~\ref{treecoverall}b has been created using bin-internal normalization, with each of the pie charts depicting bin-relative class distributions. It lets us compare overall densities and their distribution between bins and clusters of bins. Comparing class intensities and numerosity of a particular class across the space, however, is not possible with this design. Conversely, class-centric tasks (\emph{even-numbered tasks}) are supported with class-internal normalization that allows comparing class intensities across bins. In addition, some class-centric tasks require global normalization, such as numerosity-based \emph{task 14: numerosity comparison -- class}, which depends on the ability to compare raw frequencies across bins. Figures~\ref{ldaall}b and c both use class-internal normalization and allow comparing class-specific properties. Figure~\ref{treecoverall}d supports a class-centric task with global normalization, and allows for comparison both across and within bins. \vspace{1mm} \noindent \textbf{Scale} Many of the designs are not effective at displaying details at the tail end of distributions. For example, with unnormalized frequencies, if a large number of data items fall into a small number of bins, it is difficult to see any details outside those dense bins. The same is generally true for class distributions within bins. Figure~\ref{nbabinall}b uses grayscale color to encode the number of total points in each bin, with a darker background indicating higher numerosity. Since the density of data items is particularly high around the basket, relative color differences of other bins across the space are diluted due to their comparatively small differences in numerosity. One possible solution is to scale the raw count numbers for each bin based on an attenuation function. The log function is a popular choice~\cite{Cleveland1985b}. Different variants are discussed in the literature as dynamic range reduction; see Shirley and Marschner~\cite{Shirley2009} for examples. \vspace{1mm} \noindent \textbf{Comparison} The tasks from Table~\ref{targetTable} can be considered comparison tasks. The bin-centric version of each task is a visual comparison between bin properties, while the class-centric versions are comparisons between local or global class properties. The choice of visual design to support these comparisons has a significant effect on the types of tasks supported by a visualization. Two comparison designs~\cite{Gleicher2011} widely used for binned scatterplots are juxtaposition and superimposition.
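Before turning to these two composition strategies, here is the promised sketch of the three normalization options, combined with an optional logarithmic attenuation of the raw counts (a minimal illustration assuming numpy and the counts array from the previous sketch; all names are ours):
\begin{verbatim}
import numpy as np

def normalize(counts, mode="bin", log_scale=False):
    """Normalize per-bin class counts of shape
    (nx, ny, n_classes) to the range [0, 1]."""
    c = np.log1p(counts) if log_scale else counts.astype(float)
    if mode == "bin":       # by the maximum frequency in each bin
        denom = c.max(axis=2, keepdims=True)
    elif mode == "class":   # by the maximum frequency of each class
        denom = c.max(axis=(0, 1), keepdims=True)
    elif mode == "global":  # by the overall maximum frequency
        denom = c.max()
    else:
        raise ValueError(mode)
    # Avoid division by zero for empty bins or classes.
    return np.divide(c, denom, out=np.zeros_like(c),
                     where=denom > 0)
\end{verbatim}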
Juxtaposition shows multiple visualization components separately, one per class in the data set (such as Figure~\ref{nbabinall}c). Consequently, each class has its own coordinate system that can be adapted (and the resulting space aggregated) independently of the others. Superimposition overlays multiple visualizations, one for each class, into a single view. In this case, all data points share a common coordinate system. Distributional information about classes is mixed on the bin level for superimposition-based designs (Figure~\ref{ldaall}c is an example of this). Juxtaposition-based designs can only be combined with global or class-internal normalization. This makes them ideal for the analysis of density distributions within classes, and comparison of global features of those distributions across classes, such as \emph{class-centric tasks 4: search motif, 6: explore data, and 10: identify anomalies}. One advantage of juxtaposed designs is that less information overall has to be presented in each bin, making it easier to find a good balance between proportional and spatial fidelity while remaining readable. This also accommodates smaller bin sizes, resulting in potentially higher spatial resolution of the visualizations. However, it is difficult to link classes across bins between multiple juxtaposed plots. This holds even for plots with the same scale and bins; plots that differ in those properties make it even harder to visually match regions across multiple plots. For this reason, tasks that require users to compare local features of density distributions across classes, such as \emph{class-centric tasks 2: explore neighborhood, 8: characterize distribution, and 14: numerosity comparison}, are better served with a superimposition-based design. Juxtaposed designs also do not support any of the bin-centric tasks well, because all of them require collecting densities across classes for each bin. This is in line with Livingston~\latinphrase{et~al.}\xspace~\cite{Livingston2012}, who find that when tasks require reading multiple variables across different plots, juxtaposition has a higher error rate and slower response time. \subsection{Density and Class Distribution}\label{sec:distribution} This section discusses designs that convey classes, class proportions, and distributions within bins. \subsubsection{Single Color}\label{sec:single_color} These design alternatives are methods for using bin background color to communicate properties of the class distribution. \vspace{1mm} \noindent \textbf{Luminance (Grayscale or Color)} While a univariate encoding does not communicate class distribution, it can convey item density within a bin. Color luminance imparts an implicit ordering by magnitude~\cite{Munzner2015}. This allows users to locate and explore bin properties related to item densities, and thus can serve a number of bin-centric tasks (e.g., \emph{tasks 1: explore neighborhood, 3: search motif, 5: explore data, 7: characterize distribution, 9: identify anomalies, 11: identify correlation, and 13: numerosity comparison}). Padilla~\latinphrase{et~al.}\xspace~\cite{Padilla2017} find that binned color scales, as opposed to continuous ones, expedite tasks that include the identification of maxima (such as \emph{bin-centric tasks 5: explore data, 9: identify anomalies, and 13: numerosity comparison}). Adding glyphs to the foreground of bins can extend task coverage of the base design (as in Figure~\ref{nbabinall}b, for example); a minimal sketch of such a binned luminance ramp follows.
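This sketch is our own illustration (assuming matplotlib; the quantization into a small number of luminance steps follows the binned-color-scale recommendation above, and all names are ours):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm

def plot_density_luminance(counts, n_steps=5):
    """Draw total per-bin densities with a discrete
    (binned) grayscale luminance ramp."""
    total = counts.sum(axis=2)   # total item density per bin
    levels = np.linspace(0, total.max(), n_steps + 1)
    norm = BoundaryNorm(levels, ncolors=256)  # n_steps bands
    plt.imshow(total.T, origin="lower", cmap="Greys", norm=norm)
    plt.colorbar(label="items per bin")
    plt.show()
\end{verbatim}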
For class-based tasks, luminance can also be used in a juxtaposition-based design to convey densities of single classes separately. To encode class identity, class colors are best used in each of the juxtaposed multiples instead of grayscale colors. \vspace{1mm} \noindent \textbf{Color of Majority Class} Another option is to only show the color of the majority class for each bin. Considering the choice of normalization, there are two versions of this design: one with bin-internal (or global) normalization, and one with class-internal normalization. With the former, each bin is colored according to the class that is most prominent (in terms of item numerosity) within the bin. This is useful for \emph{bin-centric tasks 5: explore data, 7: characterize distribution, and 9: identify anomalies} that target the general distribution and anomalies. The latter version colors bins based on which class has the highest relative proportion of members in a bin. Figure~\ref{ldaall}b shows an example of this, revealing class-specific density centers. Generally, the design serves tasks that target distribution and potential anomalies within classes, such as \emph{class-centric tasks 6: explore data, 8: characterize distribution, and 10: identify anomalies}. The example in Figure~\ref{ldaall}b uses an additional luminance encoding of overall item densities for each bin. This basic design is particularly effective in scenarios with a small set of classes. \vspace{1mm} \noindent \textbf{Color Blending} With color blending (Figure~\ref{fig:blend_weave_and_hatch}a), class colors are combined according to a weighted average. Again, weights can be based on class-internal or bin-internal normalization to focus on class-centric and bin-centric tasks, respectively. Interpretation of blended colors is generally hard in cases with more than two base colors~\cite{Gama2014}. Color blending is thus most useful for data sets with a small number of classes, and little overlap between more than two of them. Compared to the previous method, it can provide a sense of the bin purity in terms of the classes it contains, helping to support some bin-centric (e.g., \emph{task 1: explore neighborhood}) and class-centric tasks (e.g., \emph{task 8: characterize distribution}). \subsubsection{Color + Additional Variables}\label{sec:colorplus} This section discusses color-based designs that use additional visual variables. These differ from the previous three designs by occupying the entire bin area to convey distributional information, and can thus not be complemented with additional visualizations. \vspace{1mm} \noindent \textbf{Color Weaving} An alternative to color blending is color weaving, a technique that permutes the positions of colored fragments while maintaining proportionality of each color. It takes advantage of the visual system’s ability to summarize color within an area~\cite{Albers2014}. Hagh-Shenas~\latinphrase{et~al.}\xspace~\cite{Shenas2007} show that users are better at discerning different contributing colors when using color weaving compared to color blending. As a consequence, color weaving is a more effective and scalable option to show class diversity and summary information for bins. This, however, comes at the cost of not being able to combine this design with glyphs to cover additional tasks, because glyphs would cover large areas of the space that are needed for accurate interpretation of the weaving pattern.
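Both blending and weaving reduce to simple manipulations of a bin's normalized class weights; a minimal Python sketch of the two (our own, assuming numpy, one RGB row per class in \texttt{palette}, and an arbitrary fragment grid size):
\begin{verbatim}
import numpy as np

def blend_bin(weights, palette):
    """Color blending: weighted average of class colors."""
    w = weights / weights.sum()
    return w @ palette           # a single RGB color

def weave_bin(weights, palette, grid=8, seed=0):
    """Color weaving: color grid*grid fragments in proportion
    to the class weights, then permute fragment positions."""
    rng = np.random.default_rng(seed)
    n_frags = grid * grid
    per_class = np.floor(weights / weights.sum()
                         * n_frags).astype(int)
    per_class[np.argmax(weights)] += n_frags - per_class.sum()
    frags = np.repeat(np.arange(len(weights)), per_class)
    rng.shuffle(frags)
    return palette[frags].reshape(grid, grid, 3)
\end{verbatim}
Note that with global normalization some fragments would be left white instead, as discussed next.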
Depending on the normalization, color weaving can also be effective at conveying overall bin densities (as demonstrated in Figure~\ref{ldaall}b). Bin-internal normalization will result in the optimal use of a bin's area to encode its class distributions by filling all available fragments (but does not allow comparisons across bins). Global normalization, on the other hand, which allows for comparison of class frequencies across bins, results in some of the fragments of each bin remaining white. Overall bin densities are then encoded by the ratio of white to colored fragments, allowing users to effectively compare overall densities across bins. Figure~\ref{fig:blend_weave_and_hatch}b shows an example. In addition to a variety of class-centric tasks, this design thus also supports many of the bin-centric tasks discussed for the luminance-based design in \S\ref{sec:single_color}. Finally, we found that the third, class-internal, normalization option is not a good choice in combination with color weaving and may confuse users due to the implicit part-whole metaphor of color weaving. This aspect of weaving is similar to pie chart glyphs, discussed in \S\ref{sec:glyphs}. However, weaving can also be used in juxtaposed designs to generate multiple single-class density maps for class-centric tasks, such as in Figure~\ref{nbabinall}c. One general disadvantage of weaving is that it is not very effective at conveying particularly small class proportions, since a low number of equally colored fragments is hard to discern. \blendWeaveAndHatch \vspace{1mm} \noindent \textbf{Attribute Blocks: Tone and Position} Attribute blocks~\cite{Miller2007} assign each class a square region within a glyph, with color luminance encoding a summary value for the class (see Figure~\ref{ldaall}d). Livingston~\latinphrase{et~al.}\xspace~\cite{Livingston2012} show that attribute blocks have a high error rate for tasks that require reading multiple variates. Their experiments only use position for class membership, with a color gradient for magnitude. In our experience, using additional class colors (similar to Miller~\cite{Miller2007}) makes extracting class-specific distributions easier. While attribute blocks can be combined with any type of normalization, they excel at class-centric tasks (using class-internal normalization), in particular those that involve analyzing class overlap across the space (\emph{class-centric tasks 6: explore data, 8: characterize distribution, 12: identify correlation, and 16: understand distances}). \vspace{1mm} \noindent \textbf{Hatching: Color and Angle} Hatching is a particularly common choice to encode numerosity in monochrome choropleth maps~\cite{Peterson1979}. A viable option for multi-class data is to use simple, angled strokes for different classes, varying their density to encode class intensity. Livingston~\latinphrase{et~al.}\xspace~\cite{Livingston2012} show that using orientation of lines to encode variable values (representing classes by angle) is a perceptually efficient, easily separable encoding for multiple variables. An example of multi-class hatching is shown in Figure~\ref{fig:blend_weave_and_hatch}c, with class-internal normalization. It encodes class identity as both color and angle. Despite the general trade-off between line thickness and resolution of density values, the design keeps classes separable due to redundant encoding, even if colors are hard to identify for thin lines~\cite{Stone2014}.
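A minimal sketch of this encoding (our own simplification, assuming matplotlib, whose built-in hatch characters offer only a few stroke angles; a full implementation could draw strokes at arbitrary angles):
\begin{verbatim}
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

HATCH_CHARS = ["/", "\\", "|", "-"]   # one angle per class

def hatch_bin(ax, xy, size, intensities, colors, max_density=4):
    """Overlay one angled hatch pattern per class on a bin;
    `intensities` are normalized class values in [0, 1], and
    denser hatching encodes higher intensity. Later patterns
    are drawn on top (drawing order matters)."""
    for ch, val, col in zip(HATCH_CHARS, intensities, colors):
        density = int(round(val * max_density))
        if density == 0:
            continue
        ax.add_patch(Rectangle(xy, size, size, fill=False,
                               hatch=ch * density,
                               edgecolor=col, linewidth=0))
\end{verbatim}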
A disadvantage is that the drawing order of the lines influences their prominence across bins---denser lines exacerbate this effect. One solution is adding interaction that lets users change the class drawing order (as discussed in \S\ref{sec:interaction}). This design is most effective for low to medium density values, and thus works best with global or class-internal normalization, especially for data that is well distributed throughout the space. In addition, comparisons within classes and across bins are supported due to the redundant encoding of class identity, making the design well-suited for class-centric tasks that analyze and compare class distributions (\emph{class-centric tasks 6: explore data, 8: characterize distribution, 10: identify anomalies}). \subsubsection{Glyphs}\label{sec:glyphs} Glyphs are small, independent visual objects that encode attributes of a data record~\cite{Borgo2013}. They can either be added to the foreground of bins, or used in isolation. When added to a base design, they increase task coverage in bins whose background is based on a single color value, while more complex backgrounds reduce their effectiveness by placing a high cognitive load on users~\cite{Ware2013}. We discuss three alternative glyph designs. While they cover all of the tasks to convey class distributions, this list could be extended by additional glyphs that encode arbitrary properties of the data instances in a bin, thus, e.g., covering potential additional domain-specific tasks of a particular dataset. \vspace{1mm} \noindent \textbf{Part-Whole (Pie Charts)} Pie charts are a widely used solution to show proportions of different classes and part-whole relationships. They are a common visualization technique available in geographic information systems to show distributions on top of choropleth maps~\cite{Anselin1999}. There are multiple variations of pie charts~\cite{Kosara2010}, including donut and square pie charts. Both regular pie and donut charts have comparable perceptual effectiveness~\cite{Skau2016}. Figure~\ref{nbabinall}b shows standard pie charts, with color ordered according to class proportion and combined with a grayscale density map. An alternative to encode overall item numerosity per bin is to use pie area, as shown in Figure~\ref{treecoverall}b. While this has the advantage of reducing the number of visual elements in the plot, it exacerbates the poor visibility of sparse classes in the pies. Pie charts can only be used with bin-internal normalization due to their part-whole metaphor, and there is evidence that they outperform alternatives when comparing class proportions across bins~\cite{Spence1991}. This makes them a good solution for \emph{bin-centric tasks 3: search motif, 5: explore data, 7: characterize distribution, and 9: identify anomalies}. However, care should be taken in case a task involves the identification of clusters across the space~\cite{Lewandowsky1993}. \vspace{1mm} \noindent \textbf{Baseline Proportional (Bar Charts)} An alternative glyph, also borrowed from choropleth maps~\cite{Few2009}, is the small bar chart, as shown in Figure~\ref{treecoverall}d. Bar charts come in two flavors: regular and stacked. They can be effective at conveying relative class proportion within each bin, and have the advantage of a common bar baseline. They have been used to successfully visualize multivariate datasets~\cite{Bo2014}, and allow users to accurately read values of proportion when the bin size is large enough.
Similar to pie charts, it can be hard to perceive class colors when the bars are small, though this problem can be somewhat alleviated by ordering bars according to class labels. For stacked bar charts, users can have difficulties determining precise class proportions~\cite{Kosara2010}. Despite these disadvantages, bar charts are a versatile design choice that can be used with all three types of normalization. In contrast to pie charts, they allow for comparison across bins, which makes them suitable for class-centric tasks. Bar charts are particularly useful for comparisons across bins and classes (e.g., for \emph{task 14: numerosity comparison -- class}). \vspace{1mm} \noindent \textbf{Points} Point glyphs emphasize class variance within the bins. They maintain a balance between showing class proportions and the spatial distribution of different classes. For this, we sample the points in a bin to retain features of the underlying distribution (a minimal sketch of this sampling strategy is given at the end of \S\ref{sec:interaction}). This sampling strategy can be combined with all three normalization methods. Bertini~\latinphrase{et~al.}\xspace~\cite{Bertini2006} and Chen~\latinphrase{et~al.}\xspace~\cite{Chen2014} show that sub-sampling points helps to overcome the problem of overdraw while preserving much of the spatial information. Figure~\ref{treecoverall}c shows an example of this. We sample points based on the class proportion and reduce overlap as much as possible. In case a sampled point overlaps with the boundaries of the bin, its position is moved slightly towards the center. To represent all of the classes present in a bin, the proportion of a class may be distorted during sampling, since we show at least one sample for each class present. This helps to identify low-frequency phenomena, particularly for class-centric tasks, such as \emph{tasks 12: identify correlation, and 16: understand distances}. \subsection{Interaction}\label{sec:interaction} In addition to adding glyphs, another way of extending the supported tasks of a given design is adding interaction features to it. Designs can support interaction on classes, bins, or both. Selection of classes can be done via the legend, by letting users hover over and click on labels. Alternatively, in case a design includes glyphs, which have visually distinct elements for each class in a bin, interaction with these elements can highlight or select the respective class. Selection of a class can either show additional information, such as the distribution of the class or overlaps with other classes, or filter all visual elements of a certain class, e.g., showing only the bars for one class in bar chart glyphs. For designs that depend on the order of classes (hatching in \S\ref{sec:colorplus}), highlighted or selected classes can be moved to the foreground. Alternatively, selection could open another view of the data that focuses on the selected class (similar to a juxtaposition-based design, \S\ref{sec:composition}), switching to a class-centric visualization. Generally, supporting filtering by class extends the tasks supported by a given design with additional, class-centric tasks. Similar to classes, interacting with bins can also show additional information about each bin (e.g., raw counts of class items in a bin). In addition, for juxtaposition-based designs, interaction can help users identify corresponding areas across multiple plots. Corresponding referents can all be highlighted if the multiple plots are based on identical bins.
If this is not the case, an additional overlay can be used that shows the spatial extent of the selected bin across the other plots, conveying class and item overlaps with the existing bins of each juxtaposed plot. Another mode of interaction on the bin level is zooming. Once the zoom level allows the items in view to be displayed on the available screen space without overlapping each other, the summary design could switch to a detail view that shows the individual items per bin. This can provide additional support for class- and bin-centric tasks that depend on local details, such as \emph{tasks 1: explore neighborhood -- bin, 2: explore neighborhood -- class, and 8: characterize distribution -- class}.
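As promised above, here is a minimal sketch of the point-sampling strategy for point glyphs (our own simplification, assuming numpy: proportional allocation with at least one sample per class present; nudging samples away from bin boundaries is omitted):
\begin{verbatim}
import numpy as np

def sample_bin_points(points, labels, n_samples=12, seed=0):
    """Subsample the points of one bin proportionally to
    class frequency, but with at least one sample for
    every class present in the bin."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    # Proportional quota per class, floored at one sample.
    quota = np.maximum(1, np.round(counts / counts.sum()
                                   * n_samples)).astype(int)
    picks = []
    for cls, q in zip(classes, quota):
        idx = np.flatnonzero(labels == cls)
        picks.append(rng.choice(idx, size=min(q, idx.size),
                                replace=False))
    return points[np.concatenate(picks)]
\end{verbatim}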
\let\oldsubsection\subsection \renewcommand\subsection[2][\subsectiontoc]{% \def\subsectiontoc{#2}% \oldsubsection[#1]{\boldmath #2}% } \let\oldsubsubsection\subsubsection \renewcommand\subsubsection[2][\subsubsectiontoc]{% \def\subsubsectiontoc{#2}% \oldsubsubsection[#1]{\boldmath #2}% } \newcommand{\tmop}[1]{\ensuremath{\operatorname{#1}}} \newcommand{\tmtextit}[1]{{\itshape{#1}}} \newcommand{\tmtextrm}[1]{{\rmfamily{#1}}} \newcommand{\tmtexttt}[1]{{\ttfamily{#1}}} \newcommand{\texttt{POWHEG-box}}{\texttt{POWHEG-box}} \def $LO + \textit{k-factors}$ {$LO + \textit{k-factors}$} \def $\checkmark$ {$\checkmark$} \def $[\checkmark]$ {$[\checkmark]$} \def $(\checkmark)$ {$(\checkmark)$} \def $ n_{dat} = 78$ {$ n_{dat} = 78$ } \def $ N_{op} = 16$ {$ N_{op} = 16$ } \newcommand{\cHWB}{c_{HWB}} \newcommand{c_{HW}}{c_{HW}} \newcommand{c_{HB}}{c_{HB}} \newcommand{c_{HD}}{c_{HD}} \newcommand{c_{H\ell}^{(3)}}{c_{H\ell}^{(3)}} \newcommand{c_{H\ell}^{(1)}}{c_{H\ell}^{(1)}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\mathcal{O}}{\mathcal{O}} \tolerance=1 \emergencystretch=\maxdimen \hyphenpenalty=10000 \hbadness=10000 \usepackage{xcolor} \input{Macros.tex} \begin{document} \newgeometry{top=1.5cm,bottom=1.5cm,left=2.5cm,right=2.5cm,bindingoffset=0mm} \begin{titlepage} \thispagestyle{empty} \noindent \begin{flushright} Nikhef-2020-039, IPPP/20/71 \\ VBSCAN-PUB-01-21 \\ \end{flushright} \vspace{0.7cm} \begin{center} {\LARGE \bf\boldmath SMEFT analysis of vector boson scattering \\[0.3cm] and diboson data from the LHC Run II}\vspace{1.4cm} Jacob~J.~Ethier,$^{1,2}$ Raquel~Gomez-Ambrosio,$^{3,4,\dagger}$ Giacomo~Magni,$^{1,2}$ Juan~Rojo,$^{1,2}$\\[0.1cm] \vspace{0.7cm} {\it \small ~$^1$ Department of Physics and Astronomy, Vrije Universiteit Amsterdam,\\ NL-1081 HV Amsterdam, The Netherlands\\[0.1cm] ~$^2$ Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands\\[0.1cm] ~$^3$ Dipartimento di Fisica, Universita degli Studi di Milano Bicocca \\ and INFN Sezione di Milano Bicocca, Milan, Italy. \\[0.1cm] ~$^4$ Institute for Particle Physics Phenomenology, Durham University, \\ South Road DH1 3LE Durham, UK \\[0.1cm] } \vspace{0.7cm} {\bf \large Abstract} \end{center} We present a systematic interpretation of vector boson scattering (VBS) and diboson measurements from the LHC in the framework of the dimension-six Standard Model Effective Field Theory (SMEFT). We consider all available measurements of VBS fiducial cross-sections and differential distributions from ATLAS and CMS, in most cases based on the full Run II luminosity, and use them to constrain 16 independent directions in the dimension-six EFT parameter space. Compared to the diboson measurements, we find that VBS provides complementary information on several of the operators relevant for the description of the electroweak sector. We also quantify the ultimate EFT reach of VBS measurements via dedicated projections for the High Luminosity LHC. Our results motivate the integration of VBS processes in future global SMEFT interpretations of particle physics data.
\vspace{9cm} \par\noindent\rule{\textwidth}{0.4pt} $^\dagger$ {\it \small corresponding author: [email protected]} \end{titlepage} \restoregeometry \tableofcontents \input{sec-introduction.tex} \input{sec-theory.tex} \input{sec-expdata.tex} \input{sec-results.tex} \input{sec-hllhc.tex} \input{sec-summary.tex} \FloatBarrier \phantomsection \addcontentsline{toc}{section}{References} \bibliographystyle{JHEP} \section{Experimental data and theoretical calculations} \label{sec:expdata} In this section we describe the experimental datasets that will be used in the present analysis, as well as the corresponding theoretical predictions both in the SM and at the EFT level. We also quantify the sensitivity of each of the VBS and diboson datasets to the coefficients associated with the dimension-six operators introduced in Sect.~\ref{sec:eftth}. \subsection{Vector boson scattering} \label{sec:VBSproduction} At hadron colliders, vector boson scattering occurs when two vector bosons are radiated off incoming quark lines and scatter into another pair of vector bosons, $VV'\to VV'$. The latter decay either leptonically or hadronically, and thus the VBS cross-section will be proportional to $\alpha_{\rm EW}^6$. Fig.~\ref{fig:feynman_ewVBS_ZZ} displays representative Feynman diagrams associated with vector boson scattering at the LHC for the $ZZjj$ channel. The sensitivity to quartic gauge couplings is a unique feature of this process, and in particular the longitudinally polarised scattering amplitude $V_LV'_L\to V_LV_L'$ provides a direct probe of the high-energy behaviour of the theory. We emphasize again that QGCs represent only a fraction of the VBS events, and thus a complete description of the process requires accounting for EFT effects in all possible topologies, as discussed in Sect.~\ref{sec:eftth}. \begin{figure}[t] \centering \subfloat{ \includegraphics[width=0.22\textwidth]{plots/ZZjj_qgc_diag.png}} \hspace{0.8cm} \subfloat{ \includegraphics[width=0.22\textwidth]{plots/ZZjj_tgc_diag.png}} \hspace{0.8cm} \subfloat{ \includegraphics[width=0.22\textwidth]{plots/ZZjj_h_diag.png}} \subfloat{ \includegraphics[width=0.25\textwidth]{plots/ZZjj_box.png}} \hspace{1.2cm} \subfloat{ \includegraphics[width=0.2\textwidth]{plots/ZZjj_bgk_diag.png} } \caption{\small Representative Feynman diagrams for vector boson scattering in the $ZZjj$ final state (top row) % and its main background, the QCD-induced diboson production (bottom row). } \label{fig:feynman_ewVBS_ZZ} \end{figure} The characteristic VBS topology is defined by two energetic jets with moderate transverse momenta, $p_T\sim M_V/2$, which are therefore produced relatively close to the beam pipe and appear predominantly in the forward region of the detectors. The specific final-state signature that we will focus on in this work is thus composed of four leptons (either charged or neutral) and two jets in the forward region exhibiting a large invariant mass $m_{jj}$ and a wide rapidity separation $\Delta y_{jj}$. Furthermore, since VBS is a purely electroweak process, there is no color flow between the two incoming quark lines. This implies that the central rapidity region between the two tagging jets will have a reduced amount of hadronic activity, known as the ``rapidity gap''. As highlighted by the bottom diagrams of Fig.~\ref{fig:feynman_ewVBS_ZZ}, the vector boson scattering process is affected by large backgrounds from QCD-induced diboson production processes with similar topology, with amplitudes proportional instead to $\alpha_{\rm EW}^4 \alpha_s^2$.
The interference terms between the diboson and VBS processes are usually small and therefore will be neglected in this analysis. Beyond diboson production, other sources of background to VBS include $t+V$, $t\bar{t}$, $V$+jets and QCD multijet production, and are generally small. While the diboson inclusive cross-section is much larger than the VBS one, provided the statistics are large enough, one can efficiently disentangle the two processes by focusing on the large $m_{jj}$ and $\Delta y_{jj}$ region (or related kinematic variables) where the VBS process dominates. \paragraph{Theoretical simulations.} In order to evaluate the expected cross-sections and differential distributions for the VBS (and the diboson) processes, we use two Monte Carlo generators, \texttt{MG5\_aMC@NLO}~\cite{Alwall:2014hca} and \texttt{POWHEG-box}~\cite{Nason:2004rx, Frixione:2007vw, Alioli:2010xd}, to generate NLO QCD matrix elements. QCD corrections represent up to an $\mathcal{O}(100\%)$ effect for diboson processes~\cite{Grazzini:2017ckn, WZ:atlas, WZ:cms}, while in VBS, a purely electroweak process, they amount to a few percent~\cite{Denner:2019tmn,Bozzi:2007ur,Biedermann:2017bss,Chiesa:2019ulk,Denner:2020bcz,Denner:2020zit,Denner:2012dz}, although they can modify the shape of distributions. Here we adopt the NNPDF3.1NNLO no-top PDF set~\cite{Ball:2017nwa}. The fixed-order NLO events are then showered with \texttt{Pythia8}~\cite{Hoche:2014rga, Sjostrand:2007gs, Sjostrand:2006za}. Accounting for parton shower effects is especially relevant for the modelling of additional soft QCD radiation in diboson production. It also facilitates the matching between the theoretical predictions and the experimental analyses. However, since we restrict ourselves to fully leptonic final states, hadronisation, underlying event, and multiple parton interactions are all switched off in the \texttt{Pythia8} simulation. The showered events are further processed with {\tt Rivet}~\cite{Bierlich:2019rhm}, a crucial step to reproduce the experimental selection requirements and acceptance cuts, given that only a subset of these can be implemented at the generation level. Moreover, this allows us to compare directly with the datasets published in {\tt HEPData}~\cite{Maguire:2017ypu}. Bottom quarks are always included in the initial state ($n_f=5$ scheme) and sometimes also in the definition of the final-state jets, following the prescription in the associated experimental analysis. The signal-to-background ratio in VBS is generally small, and for this reason most VBS differential results are only available as a sum of EW- and QCD-induced processes, which can only be disentangled at the level of fiducial cross-sections. To account for this, in the simulation of VBS processes we generate MC events corresponding to both the EW-induced contributions (signal) and the QCD-induced contributions (background), with EFT corrections included only in the former\footnote{Note that EFT corrections to some of these backgrounds are already being constrained by the diboson production measurements included here.}. The evaluation of the linear EFT cross-sections, $\sigma^{(\rm eft)}_i$ in Eq.~(\ref{eq:crosssection}), is carried out with \texttt{MG5\_aMC@NLO} $\:$ interfaced with {\tt SMEFTsim}~\cite{Brivio:2017btx}, in its $\lbrace m_W , m_Z , G_F \rbrace$ IPS implementation.
Specifically, we compute the linear EFT cross-sections at LO in the SMEFT, and then apply an NLO/LO $K$-factor, assuming that the QCD corrections to the SM cross-sections factorise and are thus the same in the EFT, that is, $\sigma^{(\rm eft)}_{i,\rm NLO} \simeq K\,\sigma^{(\rm eft)}_{i,\rm LO}$ with $K=\sigma^{(\rm sm)}_{\rm NLO}/\sigma^{(\rm sm)}_{\rm LO}$. Nevertheless, we found the impact of this assumption to be rather small at the level of our fit results. In future work, it would be advisable to use exact NLO QCD calculations for the EFT cross-sections, such as those presented in~\cite{Baglio:2017bfe,Baglio:2020oqu,Baglio:2019uty}, or those obtained with, for example, {\tt SMEFT@NLO}~\cite{Degrande:2020evl}. In Table~\ref{tab:calc_details} we summarize the settings of the SM and EFT theoretical calculations used to evaluate the LHC VBS and diboson cross-sections included in the fit. The perturbative accuracy and the codes used to produce the corresponding predictions for both the SM and the EFT contributions are also given. \input{tables/calc_details} \paragraph{Same sign $\mathbold{W^{\pm}W^{\pm}jj}$ production.} In this category we consider two datasets, one from ATLAS~\cite{WWjj:atlas,WWjj:atlas:hepdata} based on $\mathcal{L}=36$ fb$^{-1}$ and another from CMS based on the full Run II luminosity of $\mathcal{L}=137$ fb$^{-1}$~\cite{WWjjWZjj:cms, WWjjWZjj:cms:hepdata}. Theoretical predictions are evaluated using \texttt{MG5\_aMC@NLO}~ and then showered with {\tt Pythia8}. Only the fiducial cross-section measurement from ATLAS is used in the fit, since no differential distributions are available. Concerning the CMS measurement, the input to the fit is the differential distribution in the mass of the charged lepton pair, $m_{ll'}$, which includes the sum of VBS (EW-induced) and diboson (QCD-induced) contributions. In addition, we include the VBS-only fiducial cross-section measurement. To avoid double counting, we remove one bin of the aforementioned distribution. Fig.~\ref{fig:WWjjmll} displays the CMS $m_{ll'}$ measurement together with the corresponding EW+QCD-induced theoretical predictions, finding good agreement. These theoretical predictions also agree with those presented in the original CMS publication~\cite{WWjjWZjj:cms}. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{plots/SM_dataset_plots/WWjj_mll_cms.pdf} \caption{\small The dilepton invariant mass distribution $m_{ll'}$ from the CMS measurement of same-sign $W^{\pm}W^{\pm}jj$ production~\cite{WWjjWZjj:cms}, compared with the corresponding diboson (QCD-induced) plus the VBS (EW-induced) theoretical predictions. % The error bar on the CMS data points indicates the total experimental uncertainty. % In the lower panel, we display the ratio of theory over data. } \label{fig:WWjjmll} \end{figure} \paragraph{$\mathbold{W^{\pm}Zjj}$ production.} In this category we include the $m_{T}^{WZ}$ differential distribution from the ATLAS measurement~\cite{WZjj:atlas,WZjj:atlas:hepdata} based on $\mathcal{L}=36$ fb$^{-1}$, which consists again of the sum of the VBS signal and the diboson background. For this dataset the full bin-by-bin correlation matrix is available and is accounted for in the fit. We also include the (signal plus background) differential distribution in the dijet invariant mass from CMS, $d\sigma / d m_{jj}$, based on the full Run II luminosity of $\mathcal{L}=137$ fb$^{-1}$~\cite{WWjjWZjj:cms,WWjjWZjj:cms:hepdata}. Again, we include the EW-only fiducial cross-section from CMS in addition to the differential distribution, and remove a bin from the latter to avoid double counting.
Theoretical predictions for this process are evaluated at NLO with \texttt{POWHEG-box} $\:$ for the EW component and at LO with \texttt{MG5\_aMC@NLO} for the QCD diboson background. \begin{figure}[h] \centering \subfloat{ \includegraphics[width=0.45\textwidth]{plots/SM_dataset_plots/WZjj_mwz_atlas.pdf} }\quad \subfloat{ \includegraphics[width=0.45\textwidth]{plots/SM_dataset_plots/WZjj_mjj_cms.pdf} } \caption{\small The $W^{\pm}Zjj$ production measurements from ATLAS~\cite{WZjj:atlas} (left) and CMS~\cite{WWjjWZjj:cms} (right). % In both cases the EW-induced contributions, which are added to the QCD-induced ones, are separated into $W^{+}Zjj$ and $W^{-}Zjj$.} \label{fig:WZjj} \end{figure} Fig.~\ref{fig:WZjj} displays a comparison between our theoretical predictions and the $W^{\pm}Zjj$ production measurements from ATLAS~\cite{WZjj:atlas} (absolute $m^{WZ}_T$ distribution) and from CMS~\cite{WWjjWZjj:cms} (absolute $m_{jj}$ distribution). For completeness, the EW-induced contributions, which are added to the QCD-induced ones, have been separated into $W^{+}Zjj$ and $W^{-}Zjj$. In the case of the CMS $m_{jj}$ measurement, there is good agreement between data and theory, and one can observe how the VBS contribution clearly dominates over the QCD-induced processes at large dijet invariant masses $m_{jj}$. For the ATLAS measurement, we observe some tension in the second bin in $m_{T}^{WZ}$, where the theory undershoots the data, a behaviour that was also observed in the original analysis~\cite{WZjj:atlas}. Both the ATLAS and CMS $W^{\pm}Zjj$ measurements benefit from sensitivity to the high-energy region, covering kinematics of up to $m_{T}^{WZ}\simeq 1$ TeV for ATLAS and $m_{jj}=3$ TeV for CMS, which highlights their potential for constraining EFT operators that modify the VBS process. \paragraph{$\mathbold{ZZjj}$ production.} Here we consider two recently released measurements from ATLAS~\cite{ZZjj:atlas,ZZjj:atlas:hepdata} and CMS~\cite{ZZjj:cms137,ZZjj:cms137:hepdata} based on the full Run II luminosity of $\mathcal{L} \approx 140$ fb$^{-1}$. The ATLAS analysis represents their first VBS measurement in the $ZZjj$ final state, while the CMS one updates a previous study of the same final state~\cite{ZZjj:cms}. In the ATLAS case, we include the fiducial VBS cross-section, which accounts for both EW- and QCD-induced contributions, while from CMS we include the EW-induced fiducial cross-section together with the detector-level differential distribution in $m_{ZZ}$ for the sum of the EW- and QCD-induced contributions. Since the latter is not unfolded, it requires some modelling of detector effects. For this reason, our baseline dataset used in the fit will include only unfolded measurements, with the detector-level ones used as an additional cross-check\footnote{Moreover, we only include bins of the detector-level distribution containing more than 30 events to ensure the validity of the Gaussian approximation.}. The $ZZjj$ signal (EW-induced) events are simulated at NLO using \texttt{POWHEG-box} $\:$ \cite{Jager:2013iza}, and the QCD-induced background at LO with \texttt{MG5\_aMC@NLO}. As discussed in Sect.~\ref{sec:EFTsensitivity}, the $ZZjj$ final state exhibits a large sensitivity to the EFT operators considered in this work, but the practical impact of these measurements in the fit is moderate due to the large experimental uncertainties.
In Fig.~\ref{fig:ZZjj_mzz} we compare the number of events per $m_{ZZ}$ bin between the theoretical predictions and the detector-level experimental data from CMS in the $ZZjj$ final state based on the full Run II luminosity. In this comparison, our simulations account for the QCD- and EW-induced $ZZjj$ contributions, while the other sources of background are taken from the original publication~\cite{ZZjj:cms137}. Note that the error band on the data points includes only the statistical uncertainty, which is dominant. The overall detector selection efficiency is modelled here by comparing the theory prediction for the fiducial cross-section with the expected yields in the folded distribution. In general, we observe a fair agreement between the theory simulations and the experimental data once the experimental uncertainties are accounted for. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{plots/SM_dataset_plots/ZZjj_mzz_cms.pdf} \caption{\small The CMS detector-level VBS measurement in the $ZZjj$ final state based on the full Run II luminosity~\cite{ZZjj:cms137}. % Here we compare the number of events per $m_{ZZ}$ bin between the theoretical predictions and the experimental data. % The error bars display only the statistical uncertainty. } \label{fig:ZZjj_mzz} \end{figure} \paragraph{$\mathbold{\gamma Zjj}$ production.} Finally, we consider the rare VBS final state composed of a photon $\gamma$ and a $Z$ boson which subsequently decays leptonically. In this case, we have available two fiducial cross-section measurements for the electroweak production of a $Z\gamma$ pair in association with two jets, from ATLAS~\cite{AZjj:atlas} and CMS~\cite{AZjj:cms,AZjj:cms:hepdata}, based on the 2016 dataset with $\mathcal{L}\simeq 36$~fb$^{-1}$. As for the $ZZjj$ final state, we will consider here one detector-level distribution from ATLAS as a consistency check. Our theoretical predictions for this channel are evaluated at LO with \texttt{MG5\_aMC@NLO}~ and are found to be in good agreement with the data. This channel is interesting for our study both because of its sensitivity to neutral Higgs couplings and because of its ability to break degenerate solutions in the EFT parameter space. Moreover, we found that ATLAS and CMS have taken very different approaches to the definition of the fiducial phase space; this complementarity is already useful at the level of the cross-sections and would translate into an increased EFT sensitivity if unfolded distributions were also available. In Fig.~\ref{fig:AZjj_ptlla} we report the reconstructed $p_{T}^{\ell \ell \gamma}$ differential distribution. Our theoretical simulation includes only the EW signal, while the other sources of background (QCD-induced $\gamma Z$, $Z+{\rm jets}$, and $t\bar{t}\gamma$) are taken from~\cite{AZjj:atlas}. For this process, EW-induced VBS contributes only $\sim 10 \%$ of the total events; thus the impact of this distribution on the EFT fit is expected to be moderate. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{plots/SM_dataset_plots/AZjj_ptlla_atlas.pdf} \caption{\small Comparison between data and theory predictions for the ATLAS measurement of VBS in the $\gamma Zjj$ final state~\cite{AZjj:atlas}. % Here we compare the number of events per $p_{T}^{\ell \ell \gamma}$ bin between our predictions and the experimental data. } \label{fig:AZjj_ptlla} \end{figure} \paragraph{Overview of VBS measurements.} A summary of the VBS datasets to be considered in our EFT interpretation is collected in Table~\ref{tab:datasettable_VBS}.
For each dataset, we indicate the final state, the selection criteria ({\it e.g.} EW-only versus EW+QCD contributions), the experimental observable, the number of data points $n_{\rm dat}$ and integrated luminosity $\mathcal{L}$, as well as the dataset label and the original reference. In the data labelled with $^{(*)}$, one bin from the differential distribution has been traded for the associated fiducial cross-section to avoid double counting. In those cases, the latter corresponds to the EW-only component and thus exhibits increased sensitivity to the EFT operators, and $n_{\rm dat}$ indicates the actual number of fitted data points. In this overview we separate the unfolded from the folded, detector-level data, since only the former will be part of the baseline dataset. Overall, we end up with $n_{\rm dat}=18$ unfolded VBS cross-sections and $n_{\rm dat}=15$ bins for the detector-level distributions, giving a total of $n_{\rm dat}=33$ fitted data points. As will be shown in Sect.~\ref{sec:results}, the addition of the detector-level distributions has a significant impact on a VBS-only EFT fit, but only a marginal effect in the joint VBS+diboson analysis. \input{tables/table-dataset-VBS.tex} \subsection{Diboson production} \label{sec:VVproduction} In this work, gauge boson pair production is defined as the process whereby, at leading order, two vector bosons are produced on shell and then decay. This implies that the tree-level cross-section will be proportional to $\alpha_{\rm EW}^4$. Higher-order QCD corrections will lead to additional hard radiation, and thus the QCD-induced $VV'jj$ final state becomes a background to the VBS processes. This final state scales as $\alpha_{\rm EW}^4\alpha_s^2$, and therefore in general will dominate over the EW-induced diagrams except in regions of the phase space where the VBS topology is enhanced. Fig.~\ref{fig:VVfeynman} displays representative Feynman diagrams for opposite-sign $W^\pm W^\mp$ production, a typical example of a diboson process. One can observe how diboson production is sensitive to the TGCs at the Born level and that the QGCs do not enter the theoretical description of this process. The gluon-gluon-initiated contributions are usually quite suppressed in VBS-like analyses, since their topology does not have the characteristic forward tagging jets. In this work, we will focus on the diboson production data with leptonic final states, in correspondence with the VBS case. \begin{figure}[t] \centering \subfloat{ \includegraphics[width=0.18\textwidth]{plots/WW1.pdf} } \hspace{0.8cm} \subfloat{ \includegraphics[width=0.16\textwidth]{plots/WW2.pdf} } \hspace{0.8cm} \subfloat{ \includegraphics[width=0.20\textwidth]{plots/WW3.pdf} } \hspace{0.8cm} \subfloat{ \includegraphics[width=0.16\textwidth]{plots/WW4.pdf} } \caption{\small Representative Feynman diagrams for opposite-sign $W^\pm W^\mp$ diboson production, where the first two diagrams correspond to leading order processes while the other two correspond to gluon-initiated loop-induced contributions.} \label{fig:VVfeynman} \end{figure} The standard experimental selection cuts for diboson processes are $p_T$ cuts on the leading and subleading charged leptons, a restriction of the lepton rapidities to the central region, and, in the presence of $W$ bosons, a cut on the missing transverse energy, $E_{T}^{\rm miss} \gtrsim 30 $ GeV.
Furthermore, additional cuts on the transverse masses of the reconstructed leptons around $m_W$ and $m_Z$ are required to minimise the contribution from Higgs $s$-channel production. The resulting fiducial cross-sections are relatively large, and already at $\mathcal{L}\simeq 36$~fb$^{-1}$ they become limited by systematic uncertainties. These large cross-sections explain why unfolded differential cross-sections for different kinematic variables have been available for some time already. \paragraph{Opposite-sign $\mathbold{W^\pm W^\mp}$ production.} This channel has been measured by ATLAS based on $\mathcal{L}=36$ fb$^{-1}$ of data~\cite{WW:atlas,WW:atlas:hepdata} in the $e \mu$ final state. Several differential distributions are available with their corresponding bin-by-bin correlation matrices. From CMS, we include their recent measurement~\cite{WW:cms,WW:cms:hepdata} based on the same luminosity, where events containing two oppositely charged leptons (electrons or muons) are selected. In our EFT analysis, we will include the same differential distribution, $m_{\mu e}$, from both ATLAS and CMS, consisting of $n_{\rm dat}=13$ data points in each case. While the ATLAS distribution is provided as an absolute distribution, the CMS one is normalised to the fiducial cross-section. Since the EFT total cross-section is different from the SM one, we revert this normalisation to maximise our EFT sensitivity. Fig.~\ref{fig:WW_atlas} displays a comparison between our theory predictions and the experimental data. The measurement extends up to values of the dilepton invariant mass of $m_{e\mu}\simeq 1.5$ TeV. Here one can observe that the inclusion of higher-order QCD and gluon-initiated contributions is essential to achieve a good agreement with the experimental data, which turns out to be similarly good for the two datasets. Furthermore, the effect of NLO QCD corrections is seen to be smaller for the normalised distribution than the absolute one, indicating that the NLO $K$-factor depends only mildly on the value of the invariant mass $m_{e\mu}$. \begin{figure}[t] \centering \subfloat{ \includegraphics[width=0.46\textwidth]{plots/SM_dataset_plots/WW_memu_atlas.pdf} }\quad \subfloat{ \includegraphics[width=0.46\textwidth]{plots/SM_dataset_plots/WW_memu_cms.pdf} } \caption{\small The $m_{e\mu}$ differential distributions in opposite-sign $W^\pm W^\mp$ diboson production at $\sqrt{s}=13$ TeV from ATLAS (left) and CMS (right). % The legend indicates the values of the $\chi^2$ per data point associated with different theoretical predictions: $q\bar{q}$-initiated at LO and NLO, and the latter plus $gg$-initiated at LO. \label{fig:WW_atlas} } \end{figure} \paragraph{$\mathbold{W^\pm Z}$ production.} In this channel, we consider the ATLAS~\cite{WZ:atlas,WZ:atlas:hepdata} and CMS~\cite{WZ:cms} measurements at 13 TeV based on $\mathcal{L}=36$ fb$^{-1}$. In particular we chose the $e \mu \mu$ final state as a benchmark, although other combinations are available. The ATLAS and CMS $p_T^Z$ distributions contain $n_{\rm dat}=7$ and 11 data points, and their kinematic reach is $p_T^Z\sim 1$ TeV and 300 GeV, respectively. For the ATLAS measurement, the information on the bin-by-bin correlated systematic uncertainties is made available and is therefore included. Moreover, we note that an EFT interpretation in terms of a subset of dimension-six operators has been presented in the CMS analysis of Ref.~\cite{WZ:cms}. We display in Fig.~\ref{fig:WZ_SM} the comparison to our theoretical predictions at LO and at NLO.
The latter in particular provides an excellent description of the experimental data. Here the effects of the NLO QCD corrections are reduced in the normalised distributions, as was the case in $W^\pm W^\mp$ production. Finally, as we will show in Sect.~\ref{sec:results}, this channel provides the strongest bounds on the TGC/QGC operator $\mathcal{O}_W$. \begin{figure}[t] \centering \subfloat{ \includegraphics[width=0.45\textwidth]{plots/SM_dataset_plots/WZ_ptz_atlas.pdf} }\quad \subfloat { \includegraphics[width=0.45\textwidth]{plots/SM_dataset_plots/WZ_ptz_cms.pdf} } \caption{\small The $Z$ boson transverse momentum distribution, $p_T^Z$, as measured in $W^\pm Z$ production from ATLAS~\cite{WZ:atlas} and CMS~\cite{WZ:cms} at 13 TeV based on $\mathcal{L}=36$ fb$^{-1}$. % Note that while ATLAS provides an absolute distribution, the CMS one is instead normalised. } \label{fig:WZ_SM} \end{figure} \paragraph{$\mathbold{ZZ}$ production.} For this channel, we use the recent CMS measurement based on $\mathcal{L}=137$~fb$^{-1}$ corresponding to the four-lepton final state~\cite{ZZ:cms137}, which supersedes a previous publication based on 36 fb$^{-1}$~\cite{ZZ:cms,ZZ:cms:hepdata}. For the theoretical predictions, the $qq \rightarrow ZZ$ and $gg \rightarrow ZZ$ contributions are simulated with \texttt{POWHEG-box} $\:$ at NLO and with \texttt{MG5\_aMC@NLO} $\:$ at LO, respectively. Fig.~\ref{fig:ZZ_cms} displays the normalized $d \sigma / d m_{ZZ}$ distribution in the fiducial phase space from this CMS $ZZ\to 4\ell$ measurement, which contains $n_{\rm dat}=8$ data points. We find that the agreement with the normalised distribution at LO is good, and that the contribution from the gluon-initiated diagrams is quite small. The most recent ATLAS analysis related to the $ZZ$ final state is the measurement of the four-lepton invariant mass spectrum at 13 TeV based on $\mathcal{L}=36$ fb$^{-1}$~\cite{Aaboud:2019lxo}, which receives contributions also from single-$Z$ and from Higgs production (via $h\to ZZ^*$ decays), and is therefore not considered further here. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{plots/SM_dataset_plots/ZZ_mzz_cms.pdf} \caption{\small Normalized $d \sigma / d m_{ZZ}$ distribution in the fiducial phase space from the CMS measurement based on $\mathcal{L}=137$ fb$^{-1}$. } \label{fig:ZZ_cms} \end{figure} \paragraph{Overview of diboson measurements.} The diboson measurements that will be considered in this analysis are summarised in Table~\ref{tab:datasettable_VV}. In total we have $n_{\rm dat}=52$ diboson cross-sections from the $W^\pm W^\mp$, $W^\pm Z$, and $ZZ$ channels, three times more data points than the corresponding VBS unfolded cross-sections. In Sect.~\ref{sec:results} we will compare the impact on the EFT parameter space of these two families of measurements. \input{tables/table-dataset-VV.tex} \subsection{Sensitivity to the dimension-six EFT operators } \label{sec:EFTsensitivity} Quantifying the sensitivity of each VBS and diboson dataset to the various dimension-six EFT operators is an important step towards understanding the fit results. It is also relevant to understand whether there are flat directions in our fit basis, and to identify which datasets will provide the dominant constraints in the parameter space. In the following, we summarise the dependence of each process on the EFT operators considered and determine their relative sensitivity by means of the Fisher information.
We also apply a principal component analysis (PCA) to identify the hierarchy of directions in the parameter space and assess the possible presence of flat directions. \paragraph{General discussion.} In Table~\ref{tab:sensitivitytable} we list the contributions of the dimension-six EFT operators that constitute our fitting basis to the various VBS and diboson processes. Overall, the complementarity between the diboson and VBS measurements can be seen, with VBS providing direct access to the $\mathcal{O}_{\varphi B}$ and $\mathcal{O}_{\varphi W}$ operators (and their CP-odd counterparts), which are essentially unconstrained by diboson-only data. \input{tables/sensitivity_table} The operators $\mathcal{O}_W$ and $\mathcal{O}_{\widetilde{W}}$ modify both the TGCs and the QGCs, and thus are not relevant for the description of diboson production in the $ZZ$ channel. The operators $\ensuremath{\mathcal{O}}_{\varphi D}$ and $\ensuremath{\mathcal{O}}_{\varphi WB}$ contribute to all the diboson and VBS channels, since they lead to modifications of the SM parameters as discussed in Sect.~\ref{sec:eftth}. Given that $\ensuremath{\mathcal{O}}_{\varphi B}$ modifies only couplings involving the Higgs boson and/or $Z$ and $\gamma$, it will be unconstrained by the $WW$ and $WZ$ diboson channels as well as by the $WWjj$ and $WZjj$ processes. The operator $\ensuremath{\mathcal{O}}_{\varphi W}$ induces additional modifications compared to $\ensuremath{\mathcal{O}}_{\varphi B}$, contributing to diboson processes by means of the $hZZ$ and $hWW$ vertices. The two-fermion interaction vertices $\gamma \bar{\psi} \psi$ and $Z\bar{\psi} \psi$ are modified by some of the two-fermion operators, specifically by $\ensuremath{\mathcal{O}}_{\varphi l}^{(1)}$, $\ensuremath{\mathcal{O}}_{\varphi e}$, $\ensuremath{\mathcal{O}}_{\varphi l}^{(3)}$, $\ensuremath{\mathcal{O}}_{\varphi q}^{(3)}$, $\ensuremath{\mathcal{O}}_{\varphi q}^{(1)}$, $\ensuremath{\mathcal{O}}_{\varphi d}$, and $\ensuremath{\mathcal{O}}_{\varphi u}$, while the $W\bar{\psi} \psi$ vertex will be affected by $\ensuremath{\mathcal{O}}_{\varphi l}^{(3)}$ and $\ensuremath{\mathcal{O}}_{\varphi q}^{(3)}$. Furthermore, the $pp \to VV \to 4\ell$ and $pp \to VVjj \to 4\ell jj$ processes provide sensitivity to two-fermion operators of the form $\varphi D \psi^2$ in all channels except for the ones with two $W$ bosons. Moreover, since the experimental phase-space selection in diboson production is designed to be orthogonal to Higgs production, we expect the $WW$ and $ZZ$ channels to be less sensitive to these operators than VBS. This explains why the contributions from $\ensuremath{\mathcal{O}}_{\varphi B}$ and $\ensuremath{\mathcal{O}}_{\varphi W}$ (and their corresponding CP-odd counterparts) are negligible in these channels. \paragraph{The Fisher information matrix.} While certainly informative, Table~\ref{tab:sensitivitytable} does not allow one to compare the sensitivity brought by different datasets to a given EFT degree of freedom. In particular, we would like to quantify the relative impact that the diboson and VBS observables have for each coefficient.
To achieve this, it is convenient to resort to the Fisher information matrix~\cite{Ellis:2018gqa,Brehmer:2017lrt} which, when restricted to linear contributions only, is given by
\begin{equation} \label{eq:fisherinformation2} I_{ij} = \sum_{m=1}^{n_{\rm dat}} \frac{\sigma^{\rm (eft)}_{m,i}\sigma^{\rm (eft)}_{m,j}}{\delta_{{\rm exp},m}^2} \, ,\quad i,j=1,\ldots,n_{\rm op} \, , \end{equation}
where the EFT coefficients are defined in Eq.~(\ref{eq:crosssection}) and where $\delta_{{\rm exp},m}$ stands for the total experimental error associated to the $m$-th data point. In Eq.~(\ref{eq:fisherinformation2}), the sum extends over all the data points that belong to a given data set or family of processes. While the absolute values of the entries of the Fisher matrix $I_{ij}$ are not physically meaningful (since the overall normalisation of the EFT operators is arbitrary), the ratios of the diagonal entries $I_{ii}$ for the $i$-th degree of freedom between two different groups of processes are well-defined, since there the operator normalisations cancel out. The diagonal entries of the Fisher information matrix evaluated for each of the degrees of freedom that form our basis are displayed in Fig.~\ref{fig:FisherMatrix}. Its entries have been normalised such that the sum over the elements of a given row adds up to 100. We show results both for the individual groups of processes and for the comparison between the overall impact of the VBS and diboson datasets. For those entries greater than 10\%, we also indicate their numerical values in the heat map.
\begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{plots/smefit_plots/Fisher_heat} \caption{\small The diagonal entries of the Fisher information matrix, $I_{ii}$, evaluated for each of the coefficients that form our fitting basis.
% We display results separately for each channel (left panel) and when clustering all VBS and diboson datasets together (right panel).
% For those entries greater than 10\%, we also indicate the numerical value in the heat map.
\label{fig:FisherMatrix} } \end{figure}
One can observe from Fig.~\ref{fig:FisherMatrix} that the VBS data provide the dominant sensitivity for several of the operators considered in this analysis, in particular for three of the CP-odd ones. In general, we find that VBS processes can provide complementary information on the EFT parameter space compared to the diboson data. Specifically, one finds that VBS measurements provide the dominant sensitivity (more than 50\% of the Fisher information) for $c_{\varphi B}$ and $c_{\varphi W}$ (and their CP-odd versions) as well as for $c_{\varphi \widetilde{W}B}$. Moreover, they provide a competitive sensitivity (defined as more than 20\%) for $c_{\varphi l}^{(3)}$, $c_{\varphi d}$, $c_{\varphi D}$ and for the triple gauge operator $c_{W}$. The latter result illustrates how VBS measurements, while still providing less information than diboson measurements to constrain modifications of the TGCs, do indeed provide useful information. In the case of the triple gauge operator $c_{W}$, we also note that the $WZ$ diboson final state dominates the sensitivity, with the contribution from the $WW$ one being negligible.
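As a concrete illustration of how the entries of Fig.~\ref{fig:FisherMatrix} are assembled, the diagonal Fisher entries of Eq.~(\ref{eq:fisherinformation2}) and their row-wise normalisation can be evaluated with a few lines of Python. The sketch below is not part of the \texttt{SMEFiT} code: the arrays \texttt{sigma\_eft} (the linear EFT cross-sections $\sigma^{\rm (eft)}_{m,i}$) and \texttt{delta\_exp} (the total experimental errors $\delta_{{\rm exp},m}$) are placeholder inputs.
\begin{verbatim}
import numpy as np

def fisher_diagonal(sigma_eft, delta_exp):
    """Diagonal of the linear Fisher matrix, Eq. (fisherinformation2).

    sigma_eft: array (n_dat, n_op), linear EFT cross-sections.
    delta_exp: array (n_dat,), total experimental errors.
    """
    K = sigma_eft / delta_exp[:, None]  # K_mi = sigma_mi / delta_m
    return np.einsum("mi,mi->i", K, K)  # I_ii = sum_m K_mi^2

def normalise_rows(I_groups):
    """Normalise the entries for each coefficient so that the sum over
    the groups of processes (one row of the heat map) adds up to 100."""
    I = np.stack(I_groups, axis=1)      # shape (n_op, n_groups)
    return 100.0 * I / I.sum(axis=1, keepdims=True)
\end{verbatim}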
In terms of identifying which VBS final states lead to higher relative sensitivities, we observe that $ZZjj$ provides most of the information for $c_{\varphi B}$ and $c_{\varphi \widetilde{B}}$, $W^\pm W^\mp jj$ dominates for $c_{\varphi W}$, and $WZjj$ leads in constraining the CP-odd operators $c_{\varphi \widetilde{W}}$ and $c_{\varphi \widetilde{W}B}$.
\paragraph{EFT benchmark points.} Another strategy to quantify the sensitivity to the different Wilson coefficients is to compare the size of the SM and EFT cross-sections for representative benchmark points in the parameter space. Here we present only representative results for these comparisons, since consistent information is found for the complete set of final states and EFT operators. In Fig.~\ref{fig:sensitivity_benchmarks_ATLAS} we display the theoretical predictions for the VBS signal (EW-induced component only) at $\sqrt{s}=13$ TeV. We show the differential distributions for the $\gamma Zjj$ and $W^\pm W^\pm jj$ final states based on the selection cuts of the corresponding ATLAS reference measurements. In each case, we compare the SM predictions with three EFT benchmark points, in which one of the dimensionless quantities $c_{W} v^2 / \Lambda^2 $, $c_{\varphi W} v^2 / \Lambda^2$, or $c_{\varphi B} v^2 / \Lambda^2$ is set to 0.5 while the rest are set to zero. In the upper panels, only the EFT prediction for $c_{W} v^2 / \Lambda^2 $ is shown, to improve readability. We also display in Fig.~\ref{fig:sensitivity_benchmarks_CMS} the corresponding comparisons for the $m_{ZZ}$ and $m_T^{WZ}$ distributions in the $ZZjj$ and $W^\pm Zjj$ final states based on the same selection cuts as in the associated CMS measurements.
\begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{plots/sensitivities/AZjj.pdf} \includegraphics[width=0.49\textwidth]{plots/sensitivities/WWjj.pdf} \caption{\small Theoretical predictions for the VBS signal (EW-induced component only) for different final states at $\sqrt{s}=13$ TeV.
% We show the $p_{T}^{\gamma \ell \ell}$ distribution for the $\gamma Zjj$ (left) and the $m_{\ell \ell}$ distribution for the $W^\pm W^\pm jj$ (right) final states, based on the selection cuts of the corresponding ATLAS measurements.
% In each case, we compare the SM predictions with three EFT benchmark points in terms of the dimensionless quantities $\bar{c} = c v^2 / \Lambda^2 $. Either $\bar{c}_{W}$, $\bar{c}_{\varphi W}$, or $\bar{c}_{\varphi B}$ is set to $0.5$ and the other coefficients to zero.
% In the upper panels, only the EFT prediction with $\bar{c}_{W}=0.5$ is shown, to improve readability.
\label{fig:sensitivity_benchmarks_ATLAS} } \end{figure}
\begin{figure}[H] \centering \includegraphics[width=0.49\textwidth]{plots/sensitivities/ZZjj.pdf} \includegraphics[width=0.49\textwidth]{plots/sensitivities/WZjj.pdf} \caption{\small The $m_{ZZ}$ and $m_T^{WZ}$ distributions in the $ZZjj$ and $W^\pm Zjj$ final states, based on the same selection cuts as the corresponding CMS measurements. \label{fig:sensitivity_benchmarks_CMS} } \end{figure}
From the comparisons in Figs.~\ref{fig:sensitivity_benchmarks_ATLAS} and~\ref{fig:sensitivity_benchmarks_CMS}, one can observe a distinct variation in the EFT sensitivity depending on the specific final state and differential distribution considered. In the case of the $\gamma Zjj$ and $W^\pm W^\pm jj$ final states, there is good sensitivity to $c_{W}$ but rather less to $c_{\varphi W}$ and $c_{\varphi B}$, assuming the same value for each coefficient.
Interestingly, the sensitivity to $c_{W}$ can arise both from the low-energy region and from the high-energy tail of the distributions. The situation concerning $c_{W}$ is similar for the $ZZjj$ and $W^\pm Z jj$ final states, with the difference being that one now becomes also sensitive to $c_{\varphi B}$, which suppresses the cross-section compared to the SM expectation in a manner largely independent of the kinematics. In the case of the $c_{\varphi W}$ coefficient, the only distribution with sensitivity comparable to the other benchmark points is $m_T^{WZ}$ in the $W^\pm Z jj$ final state.
\paragraph{Principal component analysis (PCA).} Lastly, we use a PCA to identify the combinations of Wilson coefficients that exhibit the largest and smallest variabilities and to determine the possible presence of flat directions. While PCA is primarily used as a dimensionality reduction tool, by removing the principal components with the lowest variance, here we use its core steps, based on singular value decomposition (SVD), only for diagnostic purposes, and the EFT fitting basis remains the same as that defined in Sect.~\ref{sec:eftth}. More specifically, we use the PCA to identify the possible presence of flat directions, to assess whether there is a large gap in the variability between the principal components, and to determine the matching between the physical fitting basis and the principal components. The starting point of the principal component analysis is the matrix $K$ of dimensions $n_{\rm dat}\times n_{\rm op}$ and (dimensionless) components $K_{mi}=\sigma^{(\rm eft)}_{m,i}/\delta_{{\rm exp},m}$, where $\delta_{{\rm exp},m}$ is the same total experimental error that appears in the evaluation of the Fisher information matrix. Using SVD, we can write $K = U W V^\dagger$, where $U~(V)$ is a $n_{\rm dat} \times n_{\rm dat}$~($n_{\rm op} \times n_{\rm op}$) unitary matrix and $W$ is an $n_{\rm dat}\times n_{\rm op}$ rectangular diagonal matrix with non-negative real entries, called the singular values, which are ordered by decreasing magnitude. The larger a singular value, the higher the variability of the associated principal component. The columns of $V$ contain the (normalised) principal components associated to each of the singular values, which can be expressed as a superposition of the original coefficients,
\begin{equation} \label{eq:PCdef} {\rm PC}_k = \sum_{i=1}^{n_{\rm op}} a_{ki}c_i \, , \quad k=1,\ldots,n_{\rm op} \, ,\qquad \left( \sum_{i=1}^{n_{\rm op}} a_{ki}^2=1 \quad \forall\, k \right) \end{equation}
where the larger the value of the coefficient $a_{ki}$, the larger the relative weight of the associated Wilson coefficient in this specific principal component.
\begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{plots/smefit_plots/PCA_edited} \caption{\small Results of the principal component analysis applied to the baseline VBS+diboson dataset.
% The upper panel shows the distribution of singular values, while the lower one displays the squared values of the coefficients $a_{ki}$ of the principal components in Eq.~(\ref{eq:PCdef}).
} \label{fig:PCA} \end{figure}
The upper panel of Fig.~\ref{fig:PCA} displays the distribution of singular values for the $n_{\rm op}=16$ principal components associated to the fitting basis described in Sect.~\ref{sec:Warsawbasis} with the baseline VBS+diboson dataset.
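In practice, the core of this procedure reduces to a single SVD call on the matrix $K$. The following minimal sketch, again with \texttt{sigma\_eft} and \texttt{delta\_exp} as placeholder inputs rather than the actual fit code, illustrates the decomposition used here:
\begin{verbatim}
import numpy as np

# K_mi = sigma_eft_mi / delta_exp_m, dimensionless, shape (n_dat, n_op)
K = sigma_eft / delta_exp[:, None]

# The singular values w are returned in decreasing order; the rows of Vh
# contain the normalised principal-component coefficients a_ki.
U, w, Vh = np.linalg.svd(K)

n_flat  = np.sum(w < 1e-10)  # flat directions: vanishing singular values
weights = Vh**2              # the a_ki^2 displayed in the lower heat map
\end{verbatim}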
This PCA confirms that there are no flat directions in our parameter space, which would appear as principal components with vanishing singular values. Furthermore, we do not observe large hierarchies in the distribution of singular values, indicating that the physical dimensionality of our problem coincides with that of the adopted fitting basis. The lower panel of Fig.~\ref{fig:PCA} displays a heat map indicating the values of the (squared) coefficients $a_{ki}^2$ that relate the original fitting basis to the principal components via the rotation in Eq.~(\ref{eq:PCdef}), and whose associated singular values are displayed in the upper panel. For entries with $a^2_{ki}\ge 0.1$, we also indicate the numerical value. The principal component associated with the highest singular value can be attributed to the two-fermion coefficient $c_{\varphi q}^{(3)}$, which is therefore expected to be well constrained by the fit (as anticipated in Sect.~\ref{sec:eftth}). Other principal components which coincide with coefficients of our fitting basis are $c_{W}$ and $c_{\varphi \widetilde{B}}$. In general, the majority of principal components involve a superposition of several basis coefficients $c_i$; for example, in PC$_k$ with $k=2,7,8$ or 10, none of the squared coefficients $a_{ki}^2$ is larger than 0.3.
\section{Vector boson scattering at the HL-LHC}
\label{sec:hllhc}
While the results presented in the previous section indicate the potential of VBS measurements for dimension-six EFT analyses, their impact is currently limited by statistics. The ultimate LHC sensitivity to the coefficients of these dimension-six operators from VBS data will only be achieved by legacy measurements based on the full HL-LHC luminosity of $\mathcal{L}\simeq 3$ ab$^{-1}$ per experiment. With this motivation, we generate HL-LHC pseudo-data for EW-induced vector boson scattering processes and quantify their impact on the EFT fit by comparing the results to those presented in Sect.~\ref{sec:fitresult}. The strategy adopted here is the same as the one used for the HL-LHC PDF projections in Refs.~\cite{Khalek:2018mdn,AbdulKhalek:2019mps}, which were subsequently used in the studies presented in the corresponding Yellow Reports~\cite{Azzi:2019yne,Cepeda:2019klc}. In order to generate the HL-LHC pseudo-data, we select reference measurements out of the VBS datasets presented in Sect.~\ref{sec:expdata}. Table~\ref{tab:datasettable_VBS_HL} presents the overview of the HL-LHC projections considered in this analysis, which include only EW-induced VBS processes since we assume that the QCD-induced backgrounds can be removed at the analysis level.
% We consider the following differential distributions for each final state: $m_{\ell\ell}$ for $W^{\pm}W^{\pm}jj$, $p_{T}^{\ell \ell \ell}$ and $m_{T}^{WZ}$ in $ZW^{\pm}jj$, $m_{ZZ}$ for $ZZjj$, and $p_{T}^{\gamma \ell \ell}$ and $m_{\gamma Z}$ in the $\gamma Zjj$ final state, yielding a total of $n_{\rm dat}=61$ data points. The theoretical predictions for these observables are generated as in Sect.~\ref{sec:expdata} with the same selection and acceptance cuts, except that they are rescaled to account for the increase in the center-of-mass energy from $\sqrt{s}=13$~TeV to $\sqrt{s}=14$~TeV. We note that the actual HL-LHC analyses are expected to contain a larger number of bins, as well as a higher reach in energy; however, for simplicity we maintain the current binning here.
The theoretical calculations are generated for the null hypothesis ($\boldsymbol{c}= \boldsymbol{0}$), with the caveat that better sensitivities would be obtained in the case of an EFT signal. The statistical and systematic uncertainties associated to the HL-LHC pseudo-data are evaluated as follows. First, we denote by $\sigma^{\rm th}_{i}$ the theoretical prediction for the EW-induced VBS cross-section in the $i$-th bin of a given differential distribution. This cross-section includes all relevant selection and acceptance cuts, as well as the leptonic branching fractions. The expected number of events in this bin and the associated (relative) statistical uncertainty $\delta_i^{\rm stat}$ are then given by,
\begin{equation} N_{i}^{\rm th}= \sigma^{\rm th}_{i}\times \mathcal{L} \,, \quad \delta_i^{\rm stat}\equiv \frac{\delta N_{i}^{\rm stat} }{N_{i}^{\rm th}} = \frac{1}{\sqrt{N_{i}^{\rm th}}} \, . \end{equation}
Note that the relative statistical uncertainty for the number of events and for the cross-sections will be the same, whether in the fiducial region or extrapolated to the full phase space. Here we take the luminosity to be $\mathcal{L}=3$ ab$^{-1}$ and generate two differential distributions per final state, one from ATLAS and the other from CMS, as indicated in Table~\ref{tab:datasettable_VBS_HL}.
\input{tables/table_datsaset-VBS-HL}
Concerning the systematic uncertainties, these are also taken from the reference measurements as follows. If $\delta_{i,j}^{\rm sys}$ denotes the $j$-th relative systematic uncertainty associated to the $i$-th bin of the reference measurement, we assume that the same systematic error at the HL-LHC will be given by $f_{{\rm red},j}\delta_{i,j}^{\rm sys}$, where $f_{{\rm red},j}\simeq 1/2$ is the expected reduction in systematic errors, in agreement with available projections~\cite{CMS:2018mbt,CMS:2018zxa,ATLAS:2018tav,ATLAS:2018ocj}. Adding in quadrature all systematic uncertainties with the statistical error, the total relative uncertainty for the $i$-th bin of our HL-LHC projections is given by
\begin{equation} \delta_{{\rm tot},i}^{\rm exp} = \left( \left( \delta_i^{\rm stat}\right)^2 + \sum_{j=1}^{n_{\rm sys}} \left( f_{{\rm red},j}\delta_{i,j}^{\rm sys} \right)^2\right)^{1/2} \, , \end{equation}
where $n_{\rm sys}$ indicates the number of systematic error sources. Finally, we generate the central values for the HL-LHC pseudo-data projections by fluctuating the theory prediction by the expected total experimental uncertainty, namely
\begin{equation} \sigma^{\rm hllhc}_{i} \equiv \sigma^{\rm th}_{i} \left( 1+ r_i\delta_{{\rm tot},i}^{\rm exp} \right) \, , \qquad i=1,\ldots,n_{\rm bin} \, , \end{equation}
where the $r_i$ are univariate Gaussian random numbers. By construction, one expects the EFT fit quality to the HL-LHC pseudo-data to be $\chi^2/n_{\rm bin} \simeq 1$ for a sufficiently large number of bins.
\begin{figure}[htbp] \centering \subfloat{\includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Bar_HL.pdf}} \quad \subfloat{\includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Central_HL.pdf}} \caption{\small Comparison of the 95\% CL intervals for the EFT coefficients between three related analyses: the VBS-only and the combined diboson+VBS fits based on current data, and the VBS-only fit based on the HL-LHC projections listed in Table~\ref{tab:datasettable_VBS_HL}. } \label{fig:HLresults} \end{figure}
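For concreteness, the pseudo-data generation described above can be condensed into a short sketch. The inputs \texttt{sigma\_th} (the EW-induced cross-sections per bin, assumed here to be given in pb) and \texttt{delta\_sys} (the relative systematic errors of the reference measurement) are placeholders, not the actual analysis code.
\begin{verbatim}
import numpy as np

rng   = np.random.default_rng()
lumi  = 3.0e6  # L = 3 ab^-1 expressed in pb^-1 (assuming sigma_th in pb)
f_red = 0.5    # assumed reduction factor for the systematic errors

def hllhc_pseudodata(sigma_th, delta_sys):
    """sigma_th: array (n_bin,); delta_sys: array (n_bin, n_sys)."""
    n_th       = sigma_th * lumi             # expected events per bin
    delta_stat = 1.0 / np.sqrt(n_th)         # relative statistical error
    delta_tot  = np.sqrt(delta_stat**2
                         + np.sum((f_red * delta_sys)**2, axis=1))
    r = rng.standard_normal(sigma_th.shape)  # univariate Gaussian r_i
    return sigma_th * (1.0 + r * delta_tot), delta_tot
\end{verbatim}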
Fig.~\ref{fig:HLresults} displays the comparison of the 95\% CL intervals obtained for the 16 EFT coefficients considered here between three related analyses. In particular, EFT fits based on the current measurements, both for a VBS-only and for a combined diboson+VBS dataset, are compared with the corresponding results from the VBS-only fit based on the HL-LHC projections listed in Table~\ref{tab:datasettable_VBS_HL}. Here we find that the HL-LHC measurements lead to a significant impact at the level of the VBS-only fit, where the current best bounds are improved by up to three orders of magnitude depending on the specific coefficient. It is also interesting to note that a VBS-only fit from HL-LHC measurements would have a superior sensitivity even compared to the combined diboson+VBS analysis, especially for the purely bosonic operators, where at least a factor of 10 improvement over the current bounds is expected. The results presented here further highlight the capability of VBS measurements for dimension-six EFT studies and the relevance of their integration in the global EFT fit, especially as more luminosity is accumulated. While our projections are based on optimistic assumptions, such as a clean separation between the EW- and QCD-induced components of the measurement, the outstanding performance of the LHC experiments so far is rather encouraging.
\section{Introduction}
\label{sec:introduction}
Since the dawn of the Standard Model (SM), the vector boson scattering (VBS) process has been heralded as a cornerstone to test the high-energy behaviour of the electroweak sector. This importance originated in calculations of scattering amplitudes involving longitudinally polarised vector bosons which, in the absence of a Higgs boson, were shown to grow quadratically with energy and eventually violate unitarity bounds~\cite{PhysRevLett.17.616,PhysRevLett.57.2344, PhysRevD.36.1490,PhysRevD.10.1145,PhysRevD.22.200, PASSARINO199031}. The ability to fully scrutinise the VBS process was therefore one of the motivations behind the ill-fated Superconducting Super Collider (SSC), with a planned center-of-mass energy of $\sqrt{s}=40$ TeV~\cite{Eichten:1984eu}. If the Higgs boson were not responsible for electroweak symmetry breaking, the SSC might have been able to discover new resonances in the high-energy tail of VBS events. While we now know that the Higgs boson, following its discovery in 2012~\cite{Hdiscovery:atlas,Hdiscovery:cms}, unitarises the VBS cross-sections, such processes still provide unique sensitivity to deformations of the SM at high energies, such as those parametrised by the Standard Model Effective Field Theory (SMEFT)~\cite{Weinberg:1979sa,Buchmuller:1985jz,Georgi:1994qn}. VBS therefore provides a fully complementary probe of the electroweak sector of the SMEFT compared to processes such as on-shell Higgs production or gauge-boson pair production, both in terms of covering a different energy regime (up to the TeV scale) and through its contributions from different EFT operator combinations. A particularly attractive feature of VBS in this context is the appearance of quartic gauge couplings (QGCs), which have often led to a theoretical interpretation of VBS data in terms of anomalous QGCs (aQGCs). One significant challenge in studying the VBS process at the LHC is the rather small signal-to-noise ratio due to its electroweak nature, with backgrounds being dominated by QCD-induced diboson production.
Fortunately, VBS also benefits from a characteristic signature that allows for a relatively clean isolation, defined by two energetic jets in the forward region and a large rapidity gap between them that contains reduced hadronic activity.\footnote{ This is the same kinematic signature used to identify single-Higgs~\cite{Cacciari:2015jma} and Higgs-pair~\cite{Bishara:2016kjn} production in vector boson fusion (VBF).} The combination of this characteristic topology with the improved analysis of the high statistics delivered during Run II of the LHC ($\mathcal{L}=140$ fb$^{-1}$ at $\sqrt{s}=13$ TeV) has made possible not only the identification of VBS events with reasonable statistical significance, but also the measurement of the associated unfolded cross-sections and differential distributions in the fiducial region~\citep{WWjjWZjj:cms,WWjj:atlas,WZjj:atlas,ZZjj:atlas,ZZjj:cms137,AZjj:atlas,AZjj:cms,Sirunyan:2020gvn}. In particular, VBS measurements from ATLAS and CMS based on the full Run II dataset have recently been presented for different final states, from $W^{\pm}W^{\pm}jj$ and $ZW^{\pm}jj$~\cite{WWjjWZjj:cms} to $ZZjj$~\citep{ZZjj:cms137,ZZjj:atlas}, including one analysis targeting polarised $W^{\pm}W^{\pm}$ scattering~\cite{Sirunyan:2020gvn}. In the past, searches for new physics using VBS processes have either been based on unitarisation techniques~\cite{Kilian:2014zja,Perez:2018kav,Corbett:2014ora,Sekulla:2016yku} or been interpreted in terms of anomalous gauge couplings, where the SM couplings are rescaled by phenomenological parameters fitted from the data~\cite{2017380,Gounaris:1995ed, Khachatryan:2016vif,ZZjj:cms137}. However, the latter approach is only beneficial for bookkeeping purposes since, among other limitations, it violates gauge invariance. For this reason, different strategies based on effective field theories have been advocated~\cite{Hagiwara:1993ck,Eboli:2006wa,Degrande:2012wf,Gritsan:2020pib} to interpret multi-boson and VBS measurements. These EFT-based approaches have numerous advantages over the previous phenomenological ones: they respect the fundamental symmetries of the SM, are systematically improvable in perturbation theory, make it possible to correlate potential deviations across different processes, and can accommodate a meaningful quantification of theoretical uncertainties. We note that, beyond the SMEFT, other effective theory interpretations of VBS data have been considered, such as those based on the Electroweak Chiral Lagrangian~\cite{Delgado:2019ucx, Delgado:2018nnh,Delgado:2017cls,Kozow:2019txg}, where the Higgs boson is not necessarily part of an SU(2) doublet. With this motivation, VBS measurements have often been interpreted in the SMEFT framework to identify, parametrise, and correlate possible deviations in the structure of the electroweak gauge couplings compared to the SM predictions. However, these studies have so far~\cite{Sirunyan:2019der,Khachatryan:2017jub,Aad:2015uqa,Kalinowski:2018oxd} been mostly restricted to a selection of dimension-eight operators~\cite{Eboli:2006wa,Brass:2018hfw}, in particular those that induce aQGCs without modifying the Triple Gauge Couplings (TGCs). As emphasized in Ref.~\cite{Gomez-Ambrosio:2018pnl}, it is theoretically inconsistent to derive bounds on aQGCs from VBS data accounting for dimension-eight operators while neglecting the dimension-six ones, which also modify the electroweak interactions that enter the same observables.
The fact that available EFT interpretations of VBS processes ignore the contribution from dimension-six operators casts doubt on the robustness of the obtained aQGC bounds. While several works have investigated the effects of dimension-six operators on diboson production~\cite{Grojean:2018dqj,Falkowski:2016cxu,Rahaman:2019mnz}, including the impact of QCD corrections to the EFT cross-sections~\cite{Baglio:2017bfe,Baglio:2020oqu,Baglio:2019uty}, much less attention has been devoted to the corresponding effects on VBS processes~\cite{Jager:2013iza, Gomez-Ambrosio:2018pnl,Dedes:2020xmo,Gallinaro:2020cte}. In this work, we present for the first time a systematic interpretation of VBS fiducial cross-sections and unfolded differential distributions from the LHC in the framework of the dimension-six SMEFT at linear order, $\mathcal{O}\left( \Lambda^{-2}\right)$, in the effective theory expansion. Our study is carried out within the {\tt SMEFiT} framework, a toolbox for global EFT interpretations of experimental data which has been deployed to characterise the top-quark sector~\cite{Hartland:2019bjb} and is currently being updated to perform a combined EFT analysis of Higgs boson, top-quark, and diboson measurements from LEP and the LHC in Ref.~\cite{smefittophiggs}. In the present study, we consider all available VBS measurements of fiducial cross-sections and distributions, in most cases based on the full Run II integrated luminosity. These are complemented by the most recent QCD-induced diboson production datasets from ATLAS and CMS~\citep{WW:atlas,WW:cms,WZ:atlas,WZ:cms,ZZ:cms137}, which are interpreted simultaneously within the same EFT theoretical framework as the VBS measurements. We demonstrate how the VBS measurements provide complementary information on several operators relevant for the description of the electroweak sector of the SMEFT, in particular those modifying the triple and quartic gauge couplings. In addition, we quantify the impact of the VBS data by direct fits and by using statistical metrics such as information geometry and principal component analysis. We also highlight the consistency between the constraints separately provided by the VBS and diboson data on the dimension-six operators considered, representing a non-trivial stress-test of the gauge sector of the SMEFT. Overall, our analysis motivates the systematic inclusion of VBS data in global SMEFT interpretations~\cite{Berthier:2015gja,deBlas:2016ojx,Englert:2017aqb,Ellis:2018gqa, Biekotter:2018rhp,Aebischer:2018iyb,Falkowski:2019hvp,deBlas:2019okz,Ellis:2020unq,Dawson:2020oco}. While the first unfolded VBS measurements of cross-sections and differential distributions are now available, they are limited by statistics. The full physics potential associated to VBS processes will only be accessed with the analysis of the complete dataset from the High-Luminosity LHC~\cite{Azzi:2019yne,Cepeda:2019klc}. In particular, the HL-LHC will provide access to the high-energy region of $VV'\to VV'$ scattering and has the potential to disentangle contributions from $V_LV_L'$ polarised scattering~\cite{CMS:2018mbt,CMS:2018zxa,ATLAS:2018tav,ATLAS:2018ocj}. To quantify this impact, we present projections for the reach in the EFT parameter space of the VBS measurements expected at the HL-LHC, which demonstrate a significant increase in sensitivity compared to current measurements. The structure of this paper is as follows.
First, we present the theoretical framework of the analysis in Sect.~\ref{sec:eftth}, in particular our definition of the dimension-six operator basis and the flavour assumptions. In Sect.~\ref{sec:expdata} we describe the VBS and diboson data used as input for our EFT fit, outline the details of the corresponding SM theoretical calculations, and present different measures of the expected operator sensitivity. The main results of this work are then presented in Sect.~\ref{sec:results}, where we derive bounds on the relevant operators and discuss the interplay between the various data sets. Finally, we study in Sect.~\ref{sec:hllhc} the impact that future measurements of VBS processes at the HL-LHC will have on the EFT parameter space, followed by a summary and an indication of possible future developments in Sect.~\ref{sec:summary}.
\section{Results and discussion}
\label{sec:results}
In this section, we present the main results of this work, namely the dimension-six EFT interpretation of the VBS and diboson datasets from the LHC Run II. We first briefly summarise the fitting strategy adopted in this analysis and then assess the fit quality by comparing the best-fit results with the corresponding experimental measurements. We then present the fit results for the baseline dataset, determine the 95\% CL intervals for the $n_{\rm op}=16$ operators considered, and study the dependence of our results on variations of the input data, in particular with fits based only on VBS measurements.
\subsection{Fitting strategy}
The EFT analyses carried out in this work are based on the {\tt SMEFiT} global fitting framework presented in~\cite{smefittophiggs,Hartland:2019bjb}. Two options to constrain the EFT parameters are available in this framework: the Monte Carlo replica fit method (MCfit) and Nested Sampling (NS) via {\tt MultiNest}~\cite{Feroz:2013hea}. In this work we adopt the latter technique. The end result of {\tt SMEFiT} is a representation of the probability density in the space of Wilson coefficients spanned by $N_{\rm spl}$ samples, $\{ c_i^{(k)}\}$, which allows the evaluation of statistical estimators such as mean values and standard deviations, {\it e.g.},
\begin{equation} \label{eq:MCaverage} \left\langle c_i\right\rangle = \frac{1}{N_{\rm spl}}\sum_{k=1}^{N_{\rm spl}} c_i^{(k)} \, , \quad i=1,\ldots,n_{\rm op} \, , \end{equation}
\begin{equation} \delta c_i = \left( \frac{1}{N_{\rm spl}-1}\sum_{k=1}^{N_{\rm spl}} \left( c_i^{(k)} -\left\langle c_i\right\rangle \right)^2\right)^{1/2} \, , \quad i=1,\ldots,n_{\rm op} \, , \end{equation}
and likewise for other estimators such as the correlation coefficients. Since the present analysis is carried out at the linear level in the EFT expansion, and there are no flat directions for the baseline dataset (see Sect.~\ref{sec:EFTsensitivity}), the probability distributions associated to the coefficients $\boldsymbol{c}$ are expected to be Gaussian. For this reason, it is not necessary to go beyond the first two moments of the posterior distributions in $\boldsymbol{c}$.
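For illustration, given a hypothetical array \texttt{samples} of shape $(N_{\rm spl}, n_{\rm op})$ containing the posterior samples returned by the NS run, these estimators and the 95\% CL intervals quoted below follow from elementary operations:
\begin{verbatim}
import numpy as np

# samples: array (N_spl, n_op) with the posterior samples c_i^(k)
c_mean = samples.mean(axis=0)         # <c_i>, Eq. (MCaverage)
c_std  = samples.std(axis=0, ddof=1)  # delta c_i, with 1/(N_spl - 1)

# 95% CL intervals from the 2.5% and 97.5% posterior quantiles; for
# Gaussian posteriors this is close to c_mean -/+ 1.96 * c_std.
c_lo, c_hi = np.percentile(samples, [2.5, 97.5], axis=0)
\end{verbatim}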
The overall fit quality is assessed by means of the $\chi^2$ figure of merit, defined as
\begin{equation} \chi^2\left( {\boldsymbol c} \right) \equiv \frac{1}{n_{\rm dat}}\sum_{i,j=1}^{n_{\rm dat}}\left( \sigma^{(\rm th)}_i\left( {\boldsymbol c} \right) -\sigma^{(\rm exp)}_i\right) ({\rm cov}^{-1})_{ij} \left( \sigma^{(\rm th)}_j\left( {\boldsymbol c}\right) -\sigma^{(\rm exp)}_j\right) \label{eq:chi2definition2} \; , \end{equation}
where $\sigma_i^{\rm (exp)}$ corresponds to the central experimental data point and $\sigma^{(\rm th)}_i\left( {\boldsymbol c}\right)$ is the associated theoretical prediction, Eq.~(\ref{eq:crosssection}), for the $i$-th cross-section. The covariance matrix, ${\rm cov}$, is constructed from all available sources of uncorrelated and correlated experimental uncertainties, with the `$t_0$' definition~\cite{Ball:2009qv} used for the fit and the standard experimental covariance used to quote the resulting $\chi^2$ values. Whenever appropriate, we also add to the covariance matrix estimates of theoretical uncertainties coming from the input proton PDFs, as well as from the MC theory calculations. The post-fit $\chi^2$ values are then evaluated using the best-fit estimate (mean) of the Wilson coefficients, Eq.~(\ref{eq:MCaverage}), computed from the resulting MC samples obtained by NS.
\subsection{Fit quality and comparison with data}
\label{sec:fitquality}
In Table~\ref{tab:chivals} we display the values of $\chi^2/n_{\rm dat}$, Eq.~(\ref{eq:chi2definition2}), for each of the data sets contained in our baseline fit, as well as the total values associated to the diboson and VBS categories. We also indicate the $\chi^2$ values corresponding to the Standard Model predictions (pre-fit) together with the values obtained once the EFT corrections are accounted for (post-fit). Note that our baseline dataset does not contain any detector-level folded distributions. A graphical representation of these $\chi^2$ values is also displayed in Fig.~\ref{fig:chi2_values}.
\input{tables/chi2_table}
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{plots/smefit_plots/Chi2_plot.pdf} \caption{ \small Graphical representation of the $\chi^2$ values reported in Table~\ref{tab:chivals}. } \label{fig:chi2_values} \end{figure}
From Table~\ref{tab:chivals} one can observe that for the diboson data, a $\chi^2$ of around one per data point is obtained. Moreover, the total $\chi^2/n_{\rm dat}=1.17$ found at the level of SM calculations is reduced to 0.97 once EFT effects are included in the fit. Concerning the VBS dataset, there is a larger spread in the $\chi^2/n_{\rm dat}$ values, which is explained by the fact that each data set is composed of either a single or a few cross-section measurements. Taking into account the 18 independent cross-section measurements that we include in the fit, the SM value of $\chi^2/n_{\rm dat}=0.83$ is reduced to 0.75 at the post-fit level. Overall, the combination of the diboson and VBS measurements adds up to $n_{\rm dat}=70$ data points, for which a pre-fit value of $\chi^2/n_{\rm dat}=1.08$ based on the SM predictions is reduced to 0.92 after the EFT fit. Fig.~\ref{fig:DvTdiboson} displays a comparison between the experimental data and the best-fit EFT theory predictions for the LHC diboson distributions considered in the present analysis. We show the results for the $W^\pm Z$, $W^\pm W^\mp$ and $ZZ$ final states from CMS in the upper panels and the corresponding $W^\pm Z$ and $W^\pm W^\mp$ distributions from ATLAS in the lower panels.
Both the data and the EFT fit results are normalised to the central value of the SM prediction. The experimental data are presented both unshifted in central values (where the error band represents the total error) and with the best-fit systematic shifts subtracted (so that the error band contains only the statistical component). The band in the EFT prediction indicates the post-fit 95\% CL uncertainty. For the datasets in which the information on correlated systematics is not available, only the unshifted data are shown. In Fig.~\ref{fig:DvTvbs}, we show a comparison similar to that of Fig.~\ref{fig:DvTdiboson}, but now for the VBS measurements. In all cases, fair agreement is observed between the experimental data and the SM and EFT theory predictions, consistent with the $\chi^2$ values reported in Table~\ref{tab:chivals}.
\begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{plots/DvT_CMSWZ.pdf} \includegraphics[width=0.49\textwidth]{plots/DvT_CMSWW.pdf} \includegraphics[width=0.49\textwidth]{plots/DvT_CMSZZ.pdf} \includegraphics[width=0.49\textwidth]{plots/DvT_ATLASWZ.pdf} \includegraphics[width=0.49\textwidth]{plots/DvT_ATLASWW.pdf} \caption{\small Comparison between experimental data and best-fit EFT theory predictions for the LHC diboson distributions considered in the present analysis.
% Both the data and the EFT fit results are normalised to the central value of the SM prediction.
% The band in the EFT prediction indicates the post-fit 95\% CL uncertainty.
} \label{fig:DvTdiboson} \end{figure}
\begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{plots/DvT_ATLASWZjj.pdf} \includegraphics[width=0.49\textwidth]{plots/DvT_CMSWWjj.pdf} \includegraphics[width=0.49\textwidth]{plots/DvT_CMSWZjj.pdf} \includegraphics[width=0.60\textwidth]{plots/smefit_plots/DvT13_fid_xs.pdf} \caption{\small Comparison for the VBS measurements, both for the unfolded differential distributions (top panels) and for the EW-only fiducial cross-sections (bottom panel).} \label{fig:DvTvbs} \end{figure}
\subsection{Constraints on the EFT parameter space}
\label{sec:fitresult}
We now present the constraints on the coefficients of the dimension-six EFT operators used to interpret the VBS and diboson cross-sections listed in Table~\ref{tab:chivals}. In Fig.~\ref{fig:coeff_distributions}, we display the posterior probability distributions associated to each of the $16$ coefficients that are constrained in this analysis for the baseline dataset. In all cases, we can see that these are approximately Gaussian, as expected for a linear EFT fit without flat directions. The latter result is consistent with the observations derived from the PCA in Fig.~\ref{fig:PCA}, and confirms that the input dataset is sufficient to constrain all 16 independent directions in the EFT parameter space.
\clearpage
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Hist_Combined.pdf} \caption{\small The posterior probability distributions associated to each of the $n_{\rm op}=16$ coefficients that are constrained in this analysis for the baseline dataset. Note that the $x$-axis ranges are different for each coefficient. } \label{fig:coeff_distributions} \end{figure}
\input{tables/SMEFiT_results_table.tex}
From these posterior probability distributions, the 95\% confidence level intervals associated to each of the fit coefficients can be evaluated. Table~\ref{tab:finalbounds} displays these 95\% CL intervals for all 16 degrees of freedom.
Moreover, a comparison is made between the results of the baseline VBS+diboson fit performed at the global (marginalised) and individual levels, as well as with a fit based only on the diboson cross-sections. In the fourth column (individual fits), only one coefficient is varied at a time while all others are set to their SM values. The results of Table~\ref{tab:finalbounds} are also represented graphically in Fig.~\ref{fig:MainBounds}, which displays these 95\% CL intervals (upper panel) and their magnitudes (lower panel).
\begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Central_Ind.pdf} \includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Bar_Ind.pdf} \caption{\small Graphical representation of the results of Table~\ref{tab:finalbounds}, displaying the 95\% CL intervals (upper panel) and their magnitudes (lower panel) for each of the 16 EFT operators considered here.
% We compare the marginalised results of a diboson-only fit (blue) with those of the same fit once the VBS data are added (orange), in both cases with all coefficients fitted simultaneously.
% For reference, we also show the results of the individual VBS+diboson fits, where only one operator is varied at a time and the rest are fixed to their SM values.
} \label{fig:MainBounds} \end{figure}
From the comparison between the 95\% CL intervals in Table~\ref{tab:finalbounds} and Fig.~\ref{fig:MainBounds}, several interesting observations can be made. First, in comparing the results of the combined VBS+diboson fit with the diboson-only analysis, the VBS measurements are seen to improve the bounds provided by the diboson data in a pattern consistent with the Fisher information matrix displayed in Fig.~\ref{fig:FisherMatrix}. For instance, the bounds on $\bar{c}_{\varphi W}$ improve from $\left[-0.97,+2.1\right]$ to $\left[ -0.55,+1.4\right]$, while those on the CP-even (odd) triple gauge operator $\bar{c}_{W}$ ($\bar{c}_{\widetilde{W}}$) are reduced from $\left[ -0.20,+0.11\right]$ ($\left[ -0.63,+0.85\right]$) down to $\left[ -0.13,+0.14\right]$ ($\left[ -0.35,+0.57\right]$). In all cases, the VBS data improve the bounds on the EFT coefficients obtained from the diboson-only fit, highlighting the consistency and complementarity between the two families of processes. This result applies both to the CP-even and the CP-odd operators. Another relevant observation from Table~\ref{tab:finalbounds} concerns the differences between the marginalised and individual fits in the case of the combined VBS+diboson analysis, which illustrate the role of the correlations between the operators that modify these two processes. In the individual fits, one finds more stringent bounds by artificially setting all other EFT operators to zero, and this distorts the physical interpretation of the results. For several operators, the individual bounds are narrower than those of the 16-dimensional fit by an order of magnitude or more. This highlights the importance of accounting for all relevant EFT operators that contribute to a given process, rather than just selecting a subset of them, as has often been the case in the interpretation of VBS measurements. Fig.~\ref{fig:correcoeff} then displays the values of the correlation coefficients between the operators considered in the fit to the baseline dataset.
For some pairwise combinations of operators we observe strong (anti-)correlations between the fit coefficients; for example, $c_{\varphi B}$ and $c_{\varphi \widetilde{B}}$ are strongly anticorrelated, and the same holds for $c_{\varphi D}$ and $c_{\varphi WB}$. However, in most cases these correlations turn out to be quite small, confirming that our choice of fitting basis is suitable to describe efficiently the available dataset, in consistency with the PCA results.
\begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{plots/smefit_plots/Coeffs_Corr_NS_Combined_Fit_NLO_NHO.pdf} \caption{\small The values of the correlation coefficients between the operators considered in the fit to the baseline dataset.
% These are categorised as positively correlated ($\rho \ge 0.50\ (0.75)$, in red (orange)), negatively correlated ($\rho \le -0.50\ (-0.75)$, in green (blue)), and uncorrelated ($|\rho| \le 0.5$, in grey).
} \label{fig:correcoeff} \end{figure}
Finally, in Fig.~\ref{fig:energycoeff} we display the 95\% CL lower bounds on the value of $\Lambda/(v \, \sqrt{c_i})$; note that a 95\% CL bound $|c_i|\, v^2/\Lambda^2 \le b$ translates into $\Lambda/(v\sqrt{c_i}) \ge 1/\sqrt{b}$. These bounds can be interpreted as the lower bounds derived from the EFT fit on the scale of new physics $\Lambda$ in UV-completions where the corresponding Wilson coefficients are $c_i=\mathcal{O}\left( 1\right)$. They are again presented as dimensionless quantities, measured in units of the vev. This interpretation can be adjusted to other BSM scenarios, for example in the case of strongly coupled theories, where one expects $c_i=\mathcal{O}\left( 4\pi\right)$. For several operators, the combined VBS+diboson analysis results in values of the new physics scale $\Lambda$ above 1 TeV; for the triple gauge operator $c_W$, for example, one finds $(\Lambda/\sqrt{c_i})\gtrsim 3 v$ at the 95\% CL.
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Energy_Ind.pdf} \caption{\small Same as the lower panel of Fig.~\ref{fig:MainBounds}, now representing the 95\% CL bounds on $\Lambda/(v \sqrt{c_i})$. } \label{fig:energycoeff} \end{figure}
\paragraph{Comparison with other EFT analyses.} Fig.~\ref{fig:Coeffs_Bar_indivd_comapre} displays a comparison of the individual bounds obtained in this work, based on the VBS+diboson dataset and shown in Fig.~\ref{fig:MainBounds}, with the corresponding individual bounds obtained in the BDHLL20~\citep{Baglio:2020oqu} and EMMSY20~\citep{Ellis:2020unq} EFT analyses. The BDHLL20 fit includes data on diboson cross-sections from the LHC together with information from the associated production of a Higgs boson with a vector boson, $hW$ and $hZ$. EMMSY20 is instead a global EFT interpretation that includes Higgs and top production data together with the electroweak precision observables (EWPOs) from LEP and some diboson cross-sections. For the three sets of results shown in Fig.~\ref{fig:Coeffs_Bar_indivd_comapre}, only the linear terms in the EFT expansion are included and the EFT cross-sections are evaluated at leading order.\footnote{We note that the BDHLL20 analysis has also been performed accounting for NLO QCD corrections in the EFT cross-sections; here we use the LO results for the sake of comparison.} Given that these three analyses are based on different subsets of dimension-six operators, a comparison at the level of individual constraints is the most direct way of interpreting similarities or differences. We also note that, among the three, only our analysis considers CP-odd operators. For the majority of operators, the global study of EMMSY20 exhibits the superior sensitivity.
Our good determination of $c_{W}$ can be traced back to the inclusion of the $WZ$ differential distributions from ATLAS and CMS, which are also included in BDHLL20 but absent in EMMSY20, where $Zjj$ is included instead. This fact suggests that a combined analysis of $WZ$ and $Zjj$ might shed more light on the purely gauge operator. The results of the global EFT fit lead to more stringent bounds compared to those from this work and from BDHLL20, especially for the purely bosonic operators $c_{\varphi B}$, $c_{\varphi W}$ and $c_{\varphi WB}$, which are significantly constrained both by the EWPOs from LEP and by Higgs measurements. For most coefficients, our individual results and those of BDHLL20 are in good agreement, in particular for the bosonic operators $c_{\varphi D}$, $c_{\varphi WB}$, $c_{\varphi W}$, and $c_{W}$. This is what we would expect, given the datasets chosen. The comparison of the three works shows that the Higgs and LEP EWPO measurements provide the leading constraints in the parametrisation of BSM effects. There are also hints that a global interpretation of the LHC data alone, independent of older measurements, is a feasible route towards the most accurate EFT interpretation.
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Bar_indiv_compare.pdf} \caption{\small Comparison of the individual bounds obtained in this work from the VBS+diboson dataset (shown in Fig.~\ref{fig:MainBounds}) with the corresponding individual bounds obtained in the BDHLL20~\citep{Baglio:2020oqu} and EMMSY20~\citep{Ellis:2020unq} EFT analyses, see text. In all three cases, only the linear terms in the EFT expansion are included and the EFT cross-sections are evaluated at leading order. \label{fig:Coeffs_Bar_indivd_comapre}} \end{figure}
\subsection{Dataset dependence}
Until now, we have focused on the analysis of the EFT fit results for the baseline dataset listed in Table~\ref{tab:chivals}. In the following, we assess the dependence of these results on variations in the input data and theory settings by performing VBS-only fits and by studying the impact of the VBS detector-level distributions when added to the VBS-only and to the baseline VBS+diboson fits. We also present fits where the CP-odd operators are set to zero and only the CP-even ones remain.
\paragraph{VBS-only fits.} First of all, we have verified through a dedicated PCA that flat directions in the EFT parameter space are absent also in the case of a VBS-only fit. However, the same analysis also reveals that some combinations of coefficients will be poorly constrained. The latter result is not unexpected, given that for a VBS-only dataset we have $n_{\rm op}=16$ parameters to fit with only $n_{\rm dat}=18$ data points. We display in Fig.~\ref{fig:VBSonlyUnfolded} the same 95\% CL intervals as in the lower panel of Fig.~\ref{fig:MainBounds}, but now comparing the results of our baseline fit with those obtained from the marginalised and individual VBS-only fits. By comparing the VBS+diboson with the VBS-only fits, we see that the bounds obtained in the latter case are looser by a factor between 10 and 100 for most operators. These findings are consistent with our previous observations that current VBS data provide only a moderate pull when added together with the diboson cross-sections.
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Bar_VBS_Ind.pdf} \caption{\small Comparison of the 95\% CL intervals in the baseline fit with those resulting from marginalised and individual VBS-only fits.
% Only the unfolded VBS cross-section measurements listed in Table~\ref{tab:datasettable_VBS} are included in the fits.
\label{fig:VBSonlyUnfolded}} \end{figure}
However, we would like to emphasize that this result does not imply that VBS-only fits cannot provide competitive sensitivity in an EFT analysis, but rather that the available VBS measurements are still scarce and limited by statistics. In fact, if one compares the results of the marginalised with the individual VBS-only fits, one can see that the individual bounds are notably reduced and become similar to, or even better than, those of the baseline VBS+diboson analysis. This implies that VBS processes are endowed with a unique potential to constrain the dimension-six operators of the SMEFT, but only once sufficient data have been collected to pin down the effects of the individual operators separately. We will verify this expectation in Sect.~\ref{sec:hllhc} through EFT fits based on dedicated HL-LHC projections.
\paragraph{The impact of the VBS detector-level measurements.} As was discussed in Sect.~\ref{sec:expdata}, one can in principle use detector-level measurements in the EFT fit in addition to the unfolded VBS cross-sections and distributions measured by ATLAS and CMS. Here we consider the $m_{ZZ}$ and $p_T^{\ell \ell \gamma}$ distributions from CMS and ATLAS in the $ZZjj$ and $\gamma Z jj$ final states, respectively, which consist of 15 data points that can be included together with the unfolded VBS cross-section measurements. Given that our modelling of the detector response is basically reduced to a flat acceptance correction, we have chosen to remove these data points from the baseline results presented in the previous section. We would nevertheless like to illustrate how these detector-level distributions contain valuable information and are particularly instrumental for a reliable VBS-only dimension-six EFT analysis.
\begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{plots/smefit_plots/Coeffs_Residuals_Hist_VBS_Compare} \caption{\small Posterior distributions associated to the VBS-only fits that include only unfolded cross-sections (blue) or also the detector-level distributions (orange). } \label{fig:coeff_distributions_vbsonly} \end{figure}
Fig.~\ref{fig:coeff_distributions_vbsonly} displays the same posterior probability distributions as in Fig.~\ref{fig:coeff_distributions}, but now corresponding to the VBS-only fits. We compare the results of the analysis based only on unfolded cross-sections with those in which the two detector-level distributions mentioned above are also included. While the VBS-only fit based on unfolded cross-sections does not exhibit genuine flat directions, several coefficients end up poorly constrained. The situation is different once the detector-level distributions are added to the fit: here the posterior distributions become Gaussian-like, and their widths are markedly reduced compared to the previous case. In particular, the inclusion of the $m_{ZZ}$ and $p_T^{\ell \ell \gamma}$ detector-level distributions is particularly helpful in strengthening the VBS-only bounds on $c_{\varphi B}$ and its CP-odd counterpart.
The 95\% CL intervals associated to the posterior probability distributions of Fig.~\ref{fig:coeff_distributions_vbsonly} are then represented in Fig.~\ref{fig:VBSonlyFolded}, where for reference we also display the results of the baseline VBS+diboson fit. We find that, by adding the detector-level distributions, there is a noticeable improvement in the result of the VBS-only fit, with bounds being reduced by a factor between two and ten depending on the specific operator. In the case of $c_{\varphi B}$, the resulting bound becomes comparable to that obtained in the VBS+diboson fit, though in general the VBS-only fit cannot compete with the combined VBS+diboson results even after the addition of the folded data. These results motivate the release of all available VBS measurements in terms of unfolded distributions. We have verified that in the case of the combined VBS+diboson fit, adding the detector-level measurements leaves the results essentially unaffected, providing a further justification of our choice of removing them from the baseline dataset.
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Bar_VBS_Compare.pdf} \caption{\small Comparison of the results of the VBS-only fit based only on unfolded cross-sections with those of the same fit where in addition one includes the detector-level distributions. For reference, we also display the results for the baseline VBS+diboson dataset. \label{fig:VBSonlyFolded}} \end{figure}
\paragraph{The impact of CP-odd operators.} Finally, we assess how the EFT fit results are modified once only CP-conserving operators are considered. Fig.~\ref{fig:bounds_CPeven} compares the results of the baseline VBS+diboson fit with those of the same fit where the CP-odd operators have been set to zero, such that only the CP-even ones remain. In general the differences are quite small and, as expected, the fit without CP-violating operators leads to somewhat more stringent bounds. The only operator for which removing the CP-odd operators has a significant effect is $c_{\varphi B}$, where a difference of an order of magnitude in the 95\% CL bound is observed. The reason for this behaviour is that, as indicated in the correlation heat map of Fig.~\ref{fig:correcoeff}, $c_{\varphi B}$ and $c_{\varphi \widetilde{B}}$ are strongly anti-correlated, and thus it is in general rather challenging to disentangle them.
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{plots/smefit_plots/Coeffs_Bar_CP.pdf} \caption{\small Comparison of the results of the baseline fit with those of the same fit where the CP-odd operators have been set to zero, such that only the CP-even ones remain. \label{fig:bounds_CPeven}} \end{figure}
\section{Summary and outlook}
\label{sec:summary}
In this work, we have presented an exhaustive investigation of the effects of dimension-six SMEFT operators in the theoretical modelling of vector boson scattering processes. By exploiting the information provided by the most recent VBS measurements from ATLAS and CMS, several of which are based on the full Run II data, we have obtained bounds on the relevant SMEFT operators that contribute to this process. We have demonstrated the overall consistency of the constraints provided by VBS with those from diboson production, and have highlighted how VBS measurements provide a useful addition to global EFT interpretations of LHC data.
Using tailored projections, we have also estimated the improvements in the bounds on these dimension-six operators that can be expected from the VBS process with the legacy measurements of the HL-LHC, finding that these measurements will provide a remarkable sensitivity to several directions in the EFT parameter space. We emphasize that the goal of this work was not to achieve state-of-the-art bounds on all the dimension-six operators that modify VBS observables. Such an ambition can only be realised within a dedicated global EFT fit that includes all relevant sensitive observables. These analyses must include, among others, Higgs production and decay measurements from the LHC and electroweak precision observables from electron-positron colliders, which, by virtue of the electroweak gauge symmetry, constrain several of the same dimension-six operators that enter the description of VBS observables, as well as Drell-Yan distributions. For such an effort, some improvements in the theory calculations compared to this work will be required, in particular the use of exact, rather than approximate, NLO QCD effects in the EFT cross-sections using {\tt SMEFT@NLO}, as well as accounting for the quadratic corrections in the EFT expansion. Most of the previous EFT interpretations of VBS observables from the LHC have focused on dimension-eight operators, with the argument that these can modify the quartic gauge couplings while leaving unaffected the triple ones, which are purportedly well constrained by other processes. It would therefore be important to revisit these studies within a consistent EFT analysis that includes the effects of both dimension-six and dimension-eight operators up to $\mathcal{O}\left( \Lambda^{-4}\right)$. For instance, it would be important to quantify how the current bounds on dimension-eight operators are modified by the inclusion of the dimension-six ones. Since there is no cross-talk between the dimension-six and dimension-eight operators at this order in the EFT expansion, it would be possible to extend the present analysis by adding the various sources of quadratic contributions separately. Such a fully consistent $\mathcal{O}\left( \Lambda^{-4}\right)$ analysis, combined with future measurements from Run III and the HL-LHC, would unlock the ultimate potential of EFT interpretations of VBS data and represent one of the key legacy results from the LHC. Additional avenues for future research include the EFT interpretation of novel VBS observables, such as polarised scattering, as well as going beyond the SMEFT by considering other effective theories such as the HEFT or the Electroweak Chiral Lagrangian. In this respect, we point out that the fitting framework used in this work can be straightforwardly extended to other EFTs, since a fully general dependence of the theory predictions on the EFT coefficients is allowed. The first measurements of unfolded VBS cross-sections and differential distributions discussed in this work undoubtedly represent a milestone in the LHC program, with profound implications for our understanding of the gauge sector in the SM and its extensions. While current VBS measurements are still statistics-dominated and, for the time being, provide only a moderate pull in the EFT fit, we have demonstrated that they provide complementary information as compared to the more traditional diboson processes.
VBS is therefore poised to play a growing role in global EFT interpretations in the coming years, especially once high-statistics measurements become available.\\ \noindent {\bf \Large Acknowledgments}\\ \noindent We are grateful to Simone Alioli, Fabio Maltoni, Giampiero Passarino, Eleni Vryonidou, and Cen Zhang for discussions about the topics covered in this paper and for feedback on this manuscript. We thank our colleagues of the VBScan COST action for many useful discussions about VBS over the past years. Special thanks go to the {\tt HEPData} team, in particular Graeme Watt, and to the ATLAS and CMS analyzers that have provided assistance with the implementation of the VBS measurements: Claude Charlot, Roberto Covarelli, Guillelmo G\'omez-Ceballos, and Kristin Lohwasser. The work of J.~E., G.~M., and J.~R. is partially supported by the Netherlands Organization for Scientific Research (NWO). R.~G. acknowledges funding from the ERC Starting Grant REINVENT-714788 and from the Fondazione Cariplo and Regione Lombardia, grant 2017-2070, as well as the UK Science and Technology Facilities Council (STFC) grant ST/P001246/1. \section{Theoretical framework} \label{sec:eftth} \label{sec:Warsawbasis} In this section we introduce the dimension-six SMEFT operators that will be considered for the interpretation of the vector boson scattering and diboson measurements at the LHC. Restricting ourselves to dimension-six operators, we can express the SMEFT Lagrangian as \begin{equation} \label{eq:SMEFTlag} \mathcal{L}_{\rm SMEFT} = \mathcal{L}_{\rm SM} + \sum_{i=1}^{n_{\rm op}} \frac{c_{i}}{\Lambda^2} \ensuremath{\mathcal{O}}_i^{(6)} \, , \end{equation} where the $\mathcal{O}_i^{(6)}$ represent a complete basis of operators built upon the SM fields with mass dimension equal to six, and $c_i$ are their corresponding Wilson coefficients. These operators respect the fundamental symmetries of the SM, such as gauge and Lorentz invariance. In Eq.~(\ref{eq:SMEFTlag}), $\Lambda$ indicates the energy scale that determines the regime of validity of the EFT approximation. For instance, $\Lambda$ can be interpreted as the typical mass of the new heavy particles that arise in the ultraviolet (UV) completion of the SM. Note that, in a bottom-up phenomenological analysis, only the ratio $c_i/\Lambda^2$ can be determined, rather than the two parameters separately. In this work, we will focus on those operators that modify the interactions of the electroweak gauge bosons. These will involve the weak gauge field strength tensors \begin{equation} \label{eq:Wstrength} W^{I}_{\mu \nu } = \partial_\mu W^{I}_\nu - \partial_\nu W^I_\mu - g_2 \epsilon^{IJK} W^J_\mu W^K_\nu \, , \end{equation} \begin{equation} \label{eq:Bstrength} B_{\mu \nu } = \partial_\mu B_\nu - \partial_\nu B_\mu \, , \end{equation} as well as the SM covariant derivative, given by \begin{equation} D_\mu = \partial_\mu + i g_2 \frac{\sigma^{I}}{2} W^{I}_\mu + i g_{1} Y_f B_\mu \, , \label{eq:covderiviative} \end{equation} where $g_2$ and $g_1$ are the SU(2)$_L$ and U(1)$_Y$ gauge couplings, $\sigma^{I}$ are the Pauli matrices (SU(2)$_L$ generators), and $Y_f$ is the fermionic hypercharge. Here we neglect strong interaction effects, which play a limited role in the description of the VBS process, and set to zero the masses of all leptons and quarks except for the top quark.
Some of the relevant dimension-six operators for this analysis will also involve the Higgs doublet field, defined in the unitary gauge by \begin{equation} \label{eq:HHdoublet} \varphi = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v + h \end{pmatrix} \, , \end{equation} with $v=246$ GeV being the Higgs vacuum expectation value (vev) and $h$ representing the $m_h=125$ GeV Higgs boson. Here we will also consider CP-odd operators, which are constructed in terms of the dual field strength tensors, defined by \begin{equation} \label{eq:dual} \widetilde{X}_{\mu\nu } = \frac{1}{2} \epsilon_{\mu \nu \rho \sigma} X^{\rho \sigma} \, , \end{equation} and whose presence leads to CP-violating effects which are potentially observable in the electroweak sector~\cite{DasBakshi:2020ejz,Choudhury:1999fz,Biekotter:2020flu, Azatov:2019xxn,Banerjee:2020vtm}. There exist several bases that span the SMEFT operator space at dimension-six. In this work we adopt the Warsaw basis~\cite{Grzadkowski:2010es}, which contains 59 operators for one fermion generation, and consider only those operators that contain at least one electroweak gauge field. This means, in particular, that we neglect the contributions from four-fermion operators as well as from those that modify the Yukawa interactions and the Higgs self-coupling. \paragraph{Flavour assumptions.} In this work, we will assume that the operator structure is the same across the three fermionic families, the so-called SU(3)$^5$-symmetric model. In other words, we assume flavour universality of the UV-complete theory. In practice, this means that all Warsaw basis operators that contain fermion generation indices will be understood as diagonal and summed over generations, {\it e.g.}, \begin{equation} [c_{\varphi f}]_{ij} ( \varphi^{\dagger}\overleftrightarrow{D}_{\mu}\varphi )(\Bar{f}_i \gamma^\mu f_j) \longrightarrow c_{\varphi f} \sum_{i=1}^3 ( \varphi^{\dagger}\overleftrightarrow{D}_{\mu}\varphi )(\Bar{f}_i \gamma^\mu f_i) \, . \end{equation} Note that, as a consequence of this SU(3)$^5$-symmetric flavour structure, when comparing with constraints obtained in EFT fits based on more general flavour-specific operators, such as those that single out the top quark, the value of our coefficient corresponds to the average of the flavour-dependent coefficients in that analysis. \paragraph{Purely bosonic operators.} To begin, we define the purely bosonic operators that modify the gauge structure of the theory as compared to the SM. In Table~\ref{tab:gauge} we list the dimension-six operators constructed from bosonic fields that modify the interactions of the electroweak gauge bosons and which are considered in this work. For each operator, we indicate its definition in terms of the SM fields and also the notation conventions adopted both for the operator and for the Wilson coefficient. Note that, as mentioned above, we consider both CP-even and CP-odd operators. The only CP-even modifications of the triple and quartic gauge couplings arise from $\mathcal{O}_{W}$. In addition, we account for possible CP-odd contributions to the aTGC and aQGC from the $\lbrace \mathcal{O}_{\widetilde{W}} , \mathcal{O}_{ \varphi \widetilde{W}} , \mathcal{O}_{ \varphi \widetilde{B}} , \mathcal{O}_{ \varphi \widetilde{W}B} \rbrace $ operators. The remaining operators in this category modify the Higgs-gauge ($hVV$ and $hhVV$) vertices.
They appear in the processes either by means of Higgs decays (through the interference of $gg \to h \to 4 \ell / 2 \ell 2\nu$ with diboson production), or through the $t$-channel Higgs exchange contributions to the VBS cross-sections. Furthermore, the operators $\mathcal{O}_{ \varphi WB}$ and $\mathcal{O}_{\varphi D}$ also enter the definitions of the gauge masses and mixing angle in the SMEFT Lagrangian, and hence both depend on our choice of input scheme. \input{tables/table-gauge.tex} \paragraph{Two-fermion operators.} Another relevant class of dimension-six operators that modify the interactions of the electroweak gauge bosons are those composed of two fermion fields and two Higgs fields, where the gauge bosons enter via the covariant derivative. These operators describe new contact interactions involving fermions with gauge and Higgs bosons which are unrelated to the Yukawa couplings. They generate corrections to the $V\ell \ell$ and $V q \bar{q}$ vertices and can be constrained, among other processes, by the electroweak precision observables (EWPOs) measured at LEP~\cite{ALEPH:2005ab}. They also generate contact interactions of the form $h V f \bar{f}$ which affect specific Higgs boson production and decay processes. The two-fermion operators that will be considered in this work are listed in Table~\ref{tab:2fermion}, and consist of seven CP-even operators, each containing two Higgs doublets, a covariant derivative, and two fermionic fields. In the definition of these operators, we have introduced \begin{equation} \overleftrightarrow{D}_\mu \equiv D_\mu - \overleftarrow{D}_\mu \, , \end{equation} which is required to ensure that operators with fermionic neutral currents are Hermitian. All the operators listed in Table~\ref{tab:2fermion} are CP-even. \input{tables/table-2fermion.tex} \paragraph{Dipole operators.} These operators involve direct interactions between gauge bosons and fermions, rather than the indirect ones that proceed via the covariant derivative, such as the operators listed in Table~\ref{tab:2fermion}. They have a special Lorentz structure connecting same-helicity fermions. In general, they do not interfere with the SM, except for a few cases where the light Yukawa couplings are taken to be nonzero. Since our analysis is restricted to the $\mathcal{O}\left( \Lambda^{-2}\right)$ corrections to the VBS and diboson cross-sections and we neglect quark masses, we do not need to consider these operators here. \paragraph{Parameter shifts and EWPOs.} Some of the dimension-six SMEFT operators generate a contribution to the relevant electroweak parameters, \begin{equation} m_Z, m_W, G_F, \sin^2 \theta_W , \alpha_{\rm EW} \, , \end{equation} and depending on which input parameter scheme (IPS) one adopts, the expressions for $\lbrace g_1 , g_2 , v \rbrace $, and hence for the resulting SM Lagrangian and Feynman rules, will be different. The operators affecting these electroweak input parameters are closely connected with the EWPOs and are thus significantly constrained by the latter. In particular, the $c_{\varphi l}^{(3)}$ and $c_{l l}$ coefficients modify the definition of Fermi's constant $G_F$, while $c_{\varphi W B}$ and $ c_{\varphi D}$ enter the $Z$ mass and mixing angle.
They can be well constrained through the measurement of the muon lifetime and of the EW oblique parameters, respectively~\cite{Alonso:2013hga}: $c_{\varphi W B}$ directly affects the value of the $S$ parameter, whereas $c_{\varphi D}$ contributes to the $T$ parameter, which is related to the $\rho$ parameter~\cite{Ross:1975fq}. Several BSM and EFT fits of these EWPOs have been performed in recent years~\cite{Ciuchini2013,deBlas:2016ojx,Ciuchini:2014dea}, and furthermore various LHC analyses tackle the extraction of the same EWPOs from LHC data~\cite{A.Savin:2018bmz,Aaboud:2017svj,Khachatryan:2016yte,Aad:2015uau,Aaij:2015lka,Chatrchyan:2011ya}, mostly relying on Drell-Yan production and related processes. Here we choose not to account for these constraints in our study, and constrain the coefficients of the operators listed in Tables~\ref{tab:gauge} and~\ref{tab:2fermion} solely from the VBS and diboson measurements. In the future, once the VBS measurements are integrated in the global EFT analysis, one will be able to constrain these electroweak parameter shifts by including the LEP EWPOs, the LHC Drell-Yan data~\cite{Farina:2016rws,Franceschini:2017xkh,Alioli:2017nzr,Dawson:2018dxp,Ricci:2020xre,Alioli:2018ljm}, and all other measurements ({\it e.g.} Higgs production) sensitive to them. \paragraph{Overview of fitted degrees of freedom.} We summarise in Table~\ref{tab:table-operatorbasis} the degrees of freedom considered in the present work, categorised into purely bosonic and two-fermion operators. We also indicate the notation that will be used in some of the plots and tables of the following sections. We end up with $n_{\rm op}=16$ independent coefficients, of which 9 are purely bosonic and 7 are two-fermion operators. Of the purely bosonic operators, 5 are CP-even and 4 are CP-odd. Recall that we use symmetric flavour assumptions, and thus the operators involving quarks or leptons are summed over the three SM generations. \input{tables/table-operatorbasis.tex} \paragraph{Amplitudes and cross-sections.} The dimension-six operators that compose the SMEFT Lagrangian Eq.~(\ref{eq:SMEFTlag}) modify a generic SM cross-section as \begin{equation} \label{eq:crosssection} \sigma_{\rm SMEFT} = \sigma_{\rm SM} + \sum_i^{n_{\rm op}} \frac{c_i}{\Lambda^2}\sigma^{(\rm eft)}_i + \sum_{i,j}^{n_{\rm op}} \frac{c_i c_j}{\Lambda^4} \widetilde{\sigma}^{(\rm eft)}_{ij} \, , \end{equation} where $\sigma_{\rm SM}$ indicates the SM prediction and the Wilson coefficients are assumed to be real. The $\mathcal{O}(\Lambda^{-2})$ terms arise from EFT operators interfering with the SM amplitude and in most cases correspond to the dominant correction. For this reason, the cross-sections $\sigma^{(\rm eft)}_i$ are usually denoted as the SMEFT linear interference terms. The third term on the RHS of Eq.~(\ref{eq:crosssection}) contains the quadratic contribution arising from the square of the amplitudes involving dimension-six operators, and scales as $\mathcal{O}(\Lambda^{-4})$. These quadratic terms are of the same order as the interference of dimension-eight operators with the SM amplitudes, which also modifies the TGCs and QGCs. Given that we consider here only dimension-six operators, the consistent inclusion of $\mathcal{O}(\Lambda^{-4})$ corrections to VBS processes is left for future work and we restrict ourselves to the linear approximation.
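To make this parametrisation concrete, the following minimal Python sketch (our illustration: all numerical inputs are hypothetical placeholders rather than outputs of the fit) evaluates Eq.~(\ref{eq:crosssection}) truncated at the linear order adopted in this work. Since only the ratio $c_i/\Lambda^2$ is observable, results quoted at $\Lambda=1$ TeV translate to any other cutoff by a trivial rescaling of the coefficients.
\begin{verbatim}
# Minimal sketch of the linear-order SMEFT cross-section, Eq. (eq:crosssection).
# All numerical inputs below are illustrative placeholders, not fit results.
import numpy as np

def sigma_smeft(c, sigma_sm, sigma_lin, lam_tev=1.0):
    """Linear O(Lambda^-2) truncation of Eq. (eq:crosssection).

    c         : Wilson coefficients c_i (defined here at Lambda = 1 TeV)
    sigma_sm  : SM prediction
    sigma_lin : linear interference terms sigma_i^(eft)
    lam_tev   : cutoff in TeV; only c_i/Lambda^2 is observable, so other
                choices of Lambda amount to a rescaling of the c_i.
    """
    return sigma_sm + np.dot(c, sigma_lin) / lam_tev**2

c = np.array([0.5, -1.2])                            # hypothetical coefficients, n_op = 2
sigma_sm, sigma_lin = 10.0, np.array([0.3, -0.1])    # placeholder values (fb)
print(sigma_smeft(c, sigma_sm, sigma_lin))           # 10.27
\end{verbatim}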
We note that linear EFT interference effects due to CP-odd operators remain CP-odd, while squared CP-odd terms become CP-even and thus are difficult to disentangle from their CP-even counterparts~\cite{Englert:2019xhk}. For this reason, it is interesting to study CP-odd operators in processes for which the linear EFT terms are dominant, such as the high energy bins of differential distributions, or by looking at specific observables such as asymmetries. Separating the impact of CP-even and CP-odd operators has been studied mostly in the context of EFT analyses of the Higgs sector~\cite{Dolan:2014upa,Cirigliano:2019vfc,Englert:2019xhk,Bernlochner:2018opw,Biekotter:2020flu}. The SMEFT is defined to be valid for energies satisfying $E \ll \Lambda$. A lower bound on the value of $\Lambda$ is given by the highest energy scale of the data included in our fit, which as discussed in Sect.~\ref{sec:expdata} turns out to be around $E\simeq 3$ TeV. An upper bound on $\Lambda$ cannot be set from first principles and would require the observation of a hypothetical heavy resonance. In the rest of this paper, we will assume for simplicity $\Lambda=1$ TeV, with the caveat that results for any other value of $\Lambda$ can be obtained by a trivial re-scaling. \paragraph{Interplay between VBS and diboson production.} Gauge boson pair production has been extensively studied as a precision probe of the electroweak sector of the SM and its various extensions, first in the context of precision SM electroweak tests at LEP and more recently in the EFT framework, accounting for the corresponding LHC measurements~\cite{Grojean:2018dqj,Falkowski:2016cxu,Rahaman:2019mnz,Baglio:2017bfe,Baglio:2020oqu,Baglio:2019uty}. Since diboson production is a relatively clean process with large cross-sections~\cite{Campbell:2011bn}, fiducial cross-sections and differential distributions have been measured with high precision by ATLAS and CMS. Most of the dimension-six operators listed in Table~\ref{tab:table-operatorbasis} also modify the theoretical calculation of diboson cross-sections, and thus it would seem that VBS data might be redundant for EFT studies. While dimension-six EFT effects can indeed be well constrained by diboson production at the LHC~\cite{Grojean:2018dqj,Azatov:2017kzw}, here we will show that VBS measurements provide non-trivial, complementary information for many of these operators. Furthermore, the role of VBS measurements is only bound to increase as more data is accumulated, in particular at the HL-LHC. In VBS, only one CP-even operator in the Warsaw basis directly affects the triple and quartic gauge couplings, with three more operators contributing once CP-odd effects are allowed. Beyond these modifications of the TGCs and QGCs, the VBS process is also sensitive to several other dimension-six operators, given the large number of vertices and topologies contributing to its final state. This is illustrated in Fig.~\ref{fig:feynman_EFT1}, where we show representative diagrams for EFT corrections to quartic and triple gauge couplings as well as the $t$-channel Higgs exchange contribution. \begin{figure}[t] \centering \includegraphics[width=1.\textwidth]{plots/TGC_QGC_EFT.pdf} \caption{\small EFT corrections modifying the quartic (left panel) and triple (middle panel) gauge couplings in vector-boson scattering, as well as the $t$-channel Higgs exchange contribution (right panel) and the $V f \bar{f}$ interaction vertices.
In this work we consider only final states where the gauge bosons decay leptonically. } \label{fig:feynman_EFT1} \end{figure} In the case of $WW$ diboson production at LEP, the process is sensitive to the $ZWW$ and $\gamma WW$ triple gauge couplings at leading order in the EFT expansion, and thus the corresponding EFT parametrisation will include the modification of the TGC (through $c_W$). It will also modify the $e \bar{e} Z$ vertex and the corresponding IPS dependence, which could include $c_{\varphi WB}$, $c_{\varphi D}$ and $c_{\varphi l}^{(3)}$, and even contact terms of the form $e \bar{e} W W$, which generally do not interfere with the SM. Similar considerations apply to diboson production at hadron colliders, although now a new feature appears, namely the interference with Higgs production in gluon fusion followed by the $h \to VV$ decay. This correction induces a non-negligible sensitivity to the $c_{\varphi B}$ and $c_{\varphi W}$ coefficients in gauge boson pair production at the LHC. These features are illustrated in Fig.~\ref{fig:feynman_EFT2}. \begin{figure}[t] \centering \subfloat{ \includegraphics[width=0.44\textwidth]{plots/diboson_EFT.pdf}} \hspace{0.7cm} \subfloat{ \includegraphics[width=0.44\textwidth]{plots/ggF_dib_interf.pdf}} \caption{\small Same as Fig.~\ref{fig:feynman_EFT1} for two representative EFT diagrams contributing to diboson production: a pure diboson diagram (left) and another for which diboson production interferes with the $h\to VV$ process (right).} \label{fig:feynman_EFT2} \end{figure}
\section{Introduction} Topological phases of matter have been a topic of great interest in condensed matter physics since the discovery of the integer quantum Hall effect \cite{vonKlitzing:1986}. They are characterized by transport properties -- such as a quantized Hall conductivity -- that depend on the topological structure of the eigenstates \cite{Kohmoto:1985}, and not on the details of the microscopic Hamiltonian. As a result, such properties are remarkably robust against external perturbations. Integer quantum Hall phases, the first topological insulating phases to be discovered \cite{vonKlitzing:1986}, are realized by applying a large uniform magnetic field to a quasi-ideal two-dimensional electron gas, as formed in layered semiconductor structures. The presence of a uniform magnetic field is not, however, a necessary condition to produce quantum Hall states, as first realized by Haldane \cite{Haldane1988}. He proposed a remarkably simple model on a honeycomb lattice, with real nearest-neighbor (NN) hopping and \emph{complex} next-nearest-neighbor (NNN) hopping mimicking the Peierls phases experienced by charged particles in a magnetic field. Although the magnetic flux through an elementary cell of the honeycomb lattice is zero, a staggered magnetic field present within this cell locally breaks time-reversal symmetry. Haldane showed that this model supports phases that are equivalent to integer quantum Hall phases: they correspond to insulators with quantized Hall conductivities, $\sigma_H=\nu \, e^2/h$, where $e$ is the electron charge. In this manner, it is possible to generate a quantum Hall effect without a uniform external magnetic field. The integer $\nu =\pm1$ (depending on the particular values of the microscopic parameters) is a topological invariant -- the Chern number -- characteristic of the phase and robust with respect to small perturbations \cite{Thouless1982,Kohmoto:1985}. More recently, a broader notion of topological insulators has emerged, classifying all possible topological phases of non-interacting fermions in terms of their symmetries \cite{Hasan2010,Qi2011}. In this modern terminology, the Haldane model belongs to class A of Chern insulators, which are topologically equivalent to the standard quantum Hall states. The Haldane model has not been directly realized in solid-state systems, due to the somewhat artificial structure of the staggered magnetic field. Interestingly, ultracold atomic gases \cite{Lewenstein:2007,Bloch2008a} appear better suited to achieve this goal \cite{Liu:2010,Stanescu:2010}. In recent years, many proposals have been put forward to realize artificial magnetic fields for ultracold atoms (see \cite{Dalibard2011} for a review). Staggered fields are easier to implement than uniform ones \cite{Jaksch2003,Gerbier2010,Lim2008}, and have already been realized in a square optical lattice \cite{aidelsburger2011}. Building on these ideas, Alba and coworkers \cite{Alba2011} proposed a model very similar to Haldane's that could be realized with ultracold atoms. Their variant is based upon a state-dependent honeycomb optical lattice \cite{Bloch2008a}, where cold atoms in two different internal ``pseudospin'' states are localized at the two inequivalent sites of the elementary cell. Additionally, laser-induced transitions \cite{Jaksch2003,Ruostekoski:2002} between nearest-neighbor sites lead to pseudospin-dependent hopping matrix elements containing phase factors, schematically depicted in Fig.~\ref{figlattice}.
Furthermore, Alba and coworkers \cite{Alba2011} suggested a measurement based on spin-resolved time-of-flight (ToF) experiments to identify topological invariants. \\ The present work provides a systematic analysis of the model proposed in Ref.~\cite{Alba2011} and identifies the parameter regimes where Chern insulators emerge. The goals are: firstly, to serve as a detailed guide to possible experiments aiming at realizing such topological phases; and secondly, to discuss the subtle issue of identifying them through ToF methods. Ref.~\cite{Alba2011} focused on the very special case in which one of the NNN hopping amplitudes is zero, and we find that such a system is semi-metallic, not a Chern insulator. In this limit, where the bulk energy gap is \emph{closed}, the Hall conductivity is no longer simply given by a Chern number ($\sigma_H \ne \nu \, e^2/h$), and therefore this transport coefficient generally loses its topological stability. Yet, the ToF method seems to give a non-trivial signature in this regime, which is robust with respect to small variations of the model parameters. The subtlety is that the ToF method of Ref.~\cite{Alba2011} actually measures a winding number \cite{Qi:2006}, which only coincides with a topologically protected Hall conductivity \emph{when the energy gap is open}. If this condition is met, the ToF method of Ref.~\cite{Alba2011} then produces a reasonable experimental measure of the topologically invariant Chern number, and we indeed verify its robustness when varying the system parameters. Interestingly, if the bulk gap is closed, we find that the winding number measured from a ToF absorption image might still display a stable plateau when varying the microscopic parameters, under the condition that the Fermi energy is tuned exactly to the gap-closing point. In this work, such gapless phases associated with a non-trivial winding number will be referred to as \emph{topological semi-metals}. Absent in the original Haldane model, they constitute intriguing topological phases, which can be created and detected in the laser-coupled honeycomb lattice. Several types of band structures and topological orders will therefore be present in this work: (1) Chern insulating phases, i.e. gapped phases with non-trivial Chern numbers $\nu=\pm 1$; (2) topological semi-metals, i.e. gapless phases associated with a non-trivial winding number; (3) standard semi-metals, i.e. gapless phases with the two bands touching at the Dirac points, as in graphene \cite{Wallace1947,CastroNeto2009}, which are found at the transition between two topological phases. \\ This paper is structured as follows. In Sect. \ref{modelsection}, we introduce the model and discuss how the energy band topology can be characterized in terms of Chern numbers. We also discuss the magnetic flux configuration as a function of the model parameters, highlighting the time-reversal-symmetry-breaking regimes. Sect. \ref{topologicalphases} presents the main results, where the phase diagrams are investigated as a function of the microscopic parameters. In Sect. \ref{skyrmion}, we examine the signatures of the ToF method \cite{Alba2011}, and compare its results when applied to a Chern insulator and to a semi-metallic phase, i.e., when the topological bulk gap is absent. We summarize the results in Sect. \ref{conclusions}, and discuss an extension which implements the Kane-Mele model, leading to $\mathbb{Z}_2$ topological insulators \cite{Kane:2005}.
\section{The Model and the gauge structure} \label{modelsection} \subsection{The Hamiltonian} \begin{figure} \centering \includegraphics[width=1\columnwidth]{honeycomb.pdf} \caption{\label{figlattice} (a) Honeycomb lattice composed of two coupled triangular sublattices $A$ and $B$. The site positions in each sublattice are defined as $\bs{r}_{m_A}=m_1 \bs a_1 +m_2 \bs a_2 $ and $\bs{r}_{m_B}=m_1 \bs a_1 +m_2 \bs a_2 -\bs \delta_2$, with unit vectors $\bs a_1 = \bs \delta_1-\bs \delta_3$ and $\bs a_2 = \bs \delta_2-\bs \delta_3$ and with $m=(m_1,m_2)$ integer. The nearest-neighbour vectors are $\bs \delta_1= a/2 (1, \sqrt{3})$, $\bs \delta_2= a/2 (1, -\sqrt{3})$ and $\bs \delta_3= a(-1,0)$. We define $\bs a_3 = \bs a_1- \bs a_2= \bs \delta_1-\bs \delta_2$. The hopping factors between NNN and NN sites of the honeycomb lattice are indicated by $t_A$, $t_B$ and $t e^{i \phi}$, respectively, with $\phi\equiv \phi(m_A,m_B)$ given by Eq. \eqref{peierls}. The lattice spacing is $a\sqrt{3}$, and we set $a=1$ in the main text, which defines our unit of length. (b) Three-beam laser configuration giving rise to the desired spin-dependent hexagonal lattice, which we describe for $^{40}$K. In this scheme, the lasers are detuned between the D1 and D2 lines of the 4S-4P transition, whereby the state-independent (scalar) light shift vanishes. The remaining spin-dependent potential -- an effective Zeeman magnetic field -- is depicted in (c). The strength of the ``same-spin" hoppings $t_A$ and $t_B$ is governed by the choice of internal states: the pair $\left|f = 9/2, m_F= 7/2\right>$ and $\left|f = 7/2, m_F= 7/2\right>$ produces $t_A\approx t_B$, as the two states have opposite magnetic moments. In contrast, the choice $\left|f = 9/2, m_F= 9/2\right>$ and $\left|f = 7/2, m_F= 7/2\right>$ produces $t_A\neq t_B$. The effective Zeeman shift is plotted with a color scale where blue indicates the potential minima for pseudospin-up atoms, forming the $A$ sublattice, and red indicates the minima for pseudospin-down atoms, forming the $B$ sublattice. Not shown is an additional pair of Raman lasers, also in the ${\bf e}_x\!-\!{\bf e}_y$ plane, that couple the two sublattices (red and blue in (c)). } \end{figure} In the model introduced in Ref. \cite{Alba2011}, cold fermionic atoms are trapped in a honeycomb structure formed by two intertwined triangular optical lattices, whose sites are labeled by $A$ and $B$ respectively [cf. Fig. \ref{figlattice} (a)-(c)]. In the tight-binding regime -- applicable for sufficiently deep optical potentials $V_{A,B} (\bs x)$ -- atoms are only allowed to hop between neighboring sites of the two triangular sublattices, which correspond to next-nearest neighbors of the honeycomb lattice (denoted $\langle \langle n_{\tau}, m_{\tau} \rangle \rangle$, with $\tau=A,B$). The second-quantized Hamiltonian takes the form \begin{equation} \hat H_{\text{NNN}}= - t_A \sum_{\langle \langle n_A, m_A \rangle \rangle} \hat a^{\dagger}_{n_A} \hat a_{m_A} - t_B \sum_{\langle \langle n_B, m_B \rangle \rangle} \hat b^{\dagger}_{n_B} \hat b_{m_B}, \end{equation} where $\hat a_{m_A}$ ($\hat b_{m_B}$) is the field operator for annihilation of an atom at the lattice site $\bs r_{m_A}$ ($\bs r_{m_B}$) associated with the $A$ ($B$) sublattice, and where $t_{A,B}$ are the tunneling amplitudes.
Furthermore, the two sublattices are coupled through laser-assisted tunneling, where hopping is induced between neighboring sites of the honeycomb lattice by a laser coupling the two internal states associated with each sublattice. This corresponds to tunneling processes linking nearest-neighbor sites of the honeycomb lattice, denoted as $ \langle m_{A}, m_{B} \rangle $, which are described by the Hamiltonian \begin{equation} \hat H_{\text{NN}}= - t \sum_{\langle m_A, m_B \rangle} \left( e^{i \phi (m_A, m_B)} \hat a^{\dagger}_{m_A} \hat b_{m_B} + h.\,c. \right). \end{equation} Here, the phases $\phi (m_A, m_B)$ generated by the laser fields are the analogs of the Peierls phases familiar from condensed matter physics \cite{Luttinger1951,Hofstadter1976}, with $\bs r_{m_A}$ and $\bs r_{m_B}$ specifying the nearest-neighboring sites of the hexagonal lattice. Following the approach of Jaksch and Zoller \cite{Jaksch2003}, these phases can be expressed in terms of the momentum $\brec$ transferred by the laser-assisted tunneling as \begin{equation} \phi (m_A, m_B)= \brec \cdot (\bs r_{m_A} + \bs r_{m_B})/2 = - \phi (m_B,m_A), \label{peierls} \end{equation} so that the phases have opposite signs for $\bs r_A \rightarrow \bs r_B$ and $\bs r_B \rightarrow \bs r_A$ hoppings (cf. Fig. \ref{figfluxlattice}(a) and Refs. \cite{Dalibard2011,Gerbier2010}). Finally, the model also features an on-site \emph{staggered} potential, described by \begin{equation} \hat H_{\text{stag}}= - \varepsilon \sum_{m} \left( \hat a^{\dagger}_{m_A} \hat a_{m_A} - \hat b^{\dagger}_{m_B} \hat b_{m_B}\right), \end{equation} which explicitly breaks the inversion symmetry of the honeycomb lattice \cite{Haldane1988}. The total Hamiltonian, given by \begin{equation} \hat H_{\text{tot}}=\hat H_{\text{NN}}+\hat H_{\text{NNN}}+\hat H_{\text{stag}},\label{htot} \end{equation} is characterized by the hopping amplitudes ($t$, $t_A$ and $t_B$), the momentum transfer $\brec =(p_x,p_y)$, and the mismatch energy $\varepsilon$. To eliminate the explicit spatial dependence of the Hamiltonian \eqref{htot}, we perform the unitary transformation \begin{align} &\hat a^{\dagger}_{m_A} \rightarrow \tilde a^{\dagger}_{m_A}=\hat a^{\dagger}_{m_A} \exp(i \brec \cdot \bs r_{m_A} /2 ), \\ &\hat b^{\dagger}_{m_B} \rightarrow \tilde b^{\dagger}_{m_B}=\hat b^{\dagger}_{m_B} \exp(- i \brec \cdot \bs r_{m_B} /2 ), \nonumber \end{align} giving the transformed Hamiltonian \begin{align} \hat H_{\text{tot}}=& - t \sum_{\langle n_A, m_B \rangle} \bigl( \tilde a^{\dagger}_{n_A} \tilde b_{m_B} + \tilde b^{\dagger}_{m_B} \tilde a_{n_A} \bigr ) \nonumber \\ &- t_A \sum_{\langle \langle n_A, m_A \rangle \rangle} e^{i \tilde \phi (n_A, m_A)} \tilde a^{\dagger}_{n_A} \tilde a_{m_A} - t_B \sum_{\langle \langle n_B, m_B \rangle \rangle} e^{i \tilde \phi (n_B, m_B)} \tilde b^{\dagger}_{n_B} \tilde b_{m_B} \nonumber \\ & - \varepsilon \sum_{m} \bigl( \tilde a^{\dagger}_{m_A} \tilde a_{m_A} - \tilde b^{\dagger}_{m_B} \tilde b_{m_B} \bigr ), \label{hamnew} \end{align} with new Peierls phases given by \begin{align} &\tilde \phi (n_A, m_A)= \brec \cdot (\bs r_{m_A} - \bs r_{n_A})/2, \nonumber \\ &\tilde \phi (n_B, m_B)= \brec \cdot (\bs r_{n_B} - \bs r_{m_B})/2. \label{peierls2} \end{align} The transformed Hamiltonian \eqref{hamnew}, featuring complex hopping terms along the links connecting NNN sites, is similar to the Haldane model \cite{Haldane1988}, but with important differences highlighted in Sect. \ref{haldanesection}.
Since $\bs r_{n_{A,B}} - \bs r_{m_{A,B}}=\bs \delta_{\mu}- \bs \delta_{\nu}=\pm \bs a_{\lambda}$ are the primitive lattice vectors of the honeycomb lattice (see Fig. \ref{figlattice}), where $\mu,\nu,\lambda=1,2,3$, the phases $\tilde{\phi}$ in Eq. \eqref{peierls2} no longer depend on the spatial coordinates. Therefore, the Hamiltonian \eqref{hamnew} is invariant under discrete translations, $[\hat H_{\text{tot}}, \mathcal{T}_{1,2}]=0$ where $\mathcal{T}_{1,2} \psi (\bs r)=\psi (\bs r+\bs a_{1,2})$, allowing us to invoke Bloch's theorem and reduce the analysis to a unit cell formed by the two inequivalent sites $A$ and $B$. In momentum space, the Hamiltonian takes the form of a $2 \times 2$ matrix, \begin{equation} H(\bs k)= - \begin{pmatrix} \varepsilon + 2 t_A f (\bs k - \brec / 2) & t g (\bs k) \\ t g^{*} (\bs k) & - \varepsilon + 2 t_B f (\bs k + \brec / 2) \end{pmatrix} , \end{equation} where \begin{align} &f (\bs k)=\sum_{\nu=1}^{3}\cos \bigl ( \bs k \cdot \bs{a}_{\nu} \bigr), \quad g (\bs k)=\sum_{\nu=1}^{3} \exp (-i \bs k \cdot \bs{\delta}_{\nu}), \label{gfunction} \end{align} and $\bs k =(k_x,k_y)$ belongs to the first Brillouin zone (FBZ) of the system. We rewrite this Hamiltonian in the standard form \begin{equation} H(\bs k)= \epsilon (\bs k) \hat{1}+ \bs{d} (\bs k) \cdot \bs{\hat \sigma}, \label{hamtwo} \end{equation} with \begin{equation} \epsilon (\bs k)= - t_A f (\bs k - \brec / 2) - t_B f (\bs k + \brec / 2), \label{epsilon_k} \end{equation} where $\bs {\hat \sigma}$ is the vector of Pauli matrices, and $\bs d (\bs k)$ has real-valued Cartesian components defined by \begin{align} &d_x (\bs k)-id_y (\bs k)= - t g(\bs k), \nonumber \\ &d_z (\bs k)= - \varepsilon - t_A f (\bs k - \brec / 2) + t_B f (\bs k + \brec / 2).\label{sz} \end{align} The eigen-energies of the Hamiltonian (\ref{hamtwo}) are $E_{\pm} (\bs k)=\epsilon (\bs k)\pm d (\bs k)$, where we introduced the ``coupling strength" $d (\bs k)= \vert \bs d (\bs k) \vert$. Our Hamiltonian \eqref{hamtwo}-\eqref{sz} differs from the expression derived in Ref. \cite{Alba2011}, where different Peierls phases were used \footnote{In Ref. \cite{Alba2011}, Peierls phases were considered to be of the form $\phi (m_A, m_B)= \brec \cdot (\bs r_{m_A} - \bs r_{m_B})$ instead of Eq. \eqref{peierls}, cf. the Supplemental Material in Ref. \cite{Alba2011}. We note that the correct form \eqref{peierls}, used in the present work, corresponds to the synthetic Peierls phases that can be realized with cold atoms in optical lattices, following the method of Ref. \cite{Jaksch2003}.}. Both models are exactly equivalent when $t_{B} = 0$, in which case the system describes a semi-metal (cf.\ Sections \ref{topologicalphases} and \ref{skyrmion}). We now briefly describe how the energy spectrum changes with the parameters $(t_A,t_B, \varepsilon)$ of the Hamiltonian \eqref{hamtwo}-\eqref{sz}, in the absence of momentum transfer ($\brec = 0$). When $t_{A,B}=\varepsilon=0$, the band structure is that of graphene \cite{Wallace1947,CastroNeto2009}, namely, the spectrum is given by \begin{align} E_{\pm} (\bs k)= \pm \vert t g(\bs k)\vert , \quad t_{A,B}=\varepsilon=0 . \end{align} The two bands touch at zero energy at particular points $\bs K_{\pm}$ (the so-called Dirac points), where $g(\bs K_{\pm})=0$, and around which the spectrum is quasi-linear in momentum, $E_{\pm} (\bs k)\approx \pm v_F \vert \bs k \vert$.
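As a numerical cross-check of these expressions, the following minimal Python sketch (our illustration, assuming the lattice conventions of Fig.~\ref{figlattice} with $a=1$) constructs the coupling vector $\bs d(\bs k)$ of Eqs.~\eqref{epsilon_k}-\eqref{sz} and the two bands $E_{\pm}(\bs k)$, and verifies that $g(\bs K_{+})=0$ and that an NNN anisotropy splits the two bands at the Dirac point; the same functions are reused in the sketches below.
\begin{verbatim}
import numpy as np

# Lattice geometry (a = 1), cf. Fig. (figlattice)
d1 = 0.5*np.array([1.0,  np.sqrt(3.0)])
d2 = 0.5*np.array([1.0, -np.sqrt(3.0)])
d3 = np.array([-1.0, 0.0])
a1, a2 = d1 - d3, d2 - d3
a3 = a1 - a2

def f(k):   # NNN structure factor, Eq. (gfunction)
    return sum(np.cos(k @ a) for a in (a1, a2, a3))

def g(k):   # NN structure factor, Eq. (gfunction)
    return sum(np.exp(-1j*(k @ d)) for d in (d1, d2, d3))

def d_vector(k, t, tA, tB, eps, p):
    """Coupling vector d(k) of Eqs. (epsilon_k)-(sz)."""
    gk = g(k)
    return np.array([-t*gk.real,            # d_x - i d_y = -t g(k)
                      t*gk.imag,
                     -eps - tA*f(k - p/2) + tB*f(k + p/2)])

def bands(k, t=1.0, tA=0.3, tB=0.3, eps=0.0, p=np.zeros(2)):
    e = -tA*f(k - p/2) - tB*f(k + p/2)      # epsilon(k), Eq. (epsilon_k)
    d = np.linalg.norm(d_vector(k, t, tA, tB, eps, p))
    return e - d, e + d                     # E_-(k), E_+(k)

Kp = np.array([2*np.pi/3,  2*np.pi/(3*np.sqrt(3.0))])   # Dirac point K_+
print(abs(g(Kp)))                  # ~0: g vanishes at the Dirac points
print(bands(Kp, tA=0.3, tB=0.3))   # (0.9, 0.9): bands touch for t_A = t_B
print(bands(Kp, tA=0.3, tB=0.1))   # (0.3, 0.9): splitting ~ |t_A - t_B|
\end{verbatim}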
We will still use the term ``Dirac" points to denote $\bs K_\pm$, even when the gap is open (in the vicinity of these points the excitations describe massive Dirac fermions). For $\varepsilon \ne0$, a bulk gap $\Delta \propto \varepsilon$ opens at the Dirac points, where the gap width is defined as $\Delta = \text{min} (E_+) - \text{max} (E_-)$ \footnote{We set $\Delta =0$ when $ \text{max} (E_-) \ge \text{min} (E_+)$. This happens when the two bands touch at a Dirac point, $E_+ (\bs K_{D}) = E_{-} (\bs K_{D})$, but also when the bulk gap is indirectly closed, cf. Figs. \ref{figa7} (a)-(c). The properties of semi-metallic phases with $\Delta=0$ are discussed in Section \ref{semimetal}.}. The staggered potential is not, however, the only way to open a gap: the NNN couplings $t_A,t_B$ can also do so. For $t_{A,B} \ne 0$ and $\varepsilon=0$, the spectrum reads \begin{align} E_{\pm} (\bs k; \brec=0,\varepsilon=0)=D_{+}(\bs k) \pm \sqrt{ \vert t\, g(\bs k)\vert^2 +\left[ D_{-}(\bs k) \right]^2 }, \end{align} with $D_{\pm}(\bs k) = - (t_{A}\pm t_B) ~ f(\bs k)$. Next we note that $\vert g(\bs k) \vert^2 = 3 +2 f(\bs k)$, showing that a gap $\Delta \propto \vert t_A-t_B\vert$ opens at the Dirac points due to the NNN couplings. For finite momentum transfer $\brec \ne 0$, the energy spectrum \begin{align} E_{\pm} (\bs k)= \epsilon (\bs k; \brec , t_{A,B}) \pm \sqrt{ \vert t\, g(\bs k)\vert^2 + \left[d_z (\bs k; \brec, \varepsilon , t_{A,B}) \right]^2 } \end{align} leads to more complex spectral structures and phases, to be explored in Section \ref{topologicalphases}. \\ In the following, we study the phases of non-interacting fermions in an optical-lattice setup described by the Hamiltonian \eqref{hamtwo}-\eqref{sz}. Such a system forms a metal (or a semi-metal) when the gap is closed ($\Delta =0$), and an insulator when $\Delta > 0$. In the latter case, we set the Fermi energy $E_{\text{F}}$ in the middle of the bulk gap. This classification in terms of the band structure is not exhaustive: it must be supplemented by a description of the topological properties of the band structure. This is examined in the following Section \ref{chernsection}. In addition, the properties of some peculiar semi-metals are also explored in this work (cf. Section \ref{skyrmion}). \subsection{The Chern number}\label{chernsection} When the two-band spectrum $E (\bs k)$ exhibits an energy gap $\Delta$, one can define a topologically invariant Chern number \cite{nakahara}, which encodes the topological order of the system. As shown in Ref. \cite{Thouless1982}, the Chern number $\nu$ is equal to the transverse Hall conductivity, $\sigma_H= \nu$ in units of the conductivity quantum, provided the \emph{Fermi energy is located in the bulk gap}. The Chern number is given by the standard TKNN expression \cite{Thouless1982,Xiao2010} \begin{align} \nu&= \frac{i}{2 \pi} \int_{\mathbb{T}^2} \langle \partial_{k_x} u_{(-)}(\bs k) \vert \partial_{k_y} u_{(-)} (\bs k) \rangle - (k_x \leftrightarrow k_y) \txt{d}^2 \bs{k},\label{chern} \\ &= \frac{1}{2 \pi} \int_{\mathbb{T}^2} \bs 1_z \cdot (\nabla _{\bs k} \times \bs A (\bs k) ) \txt{d}^2 \bs{k} ,\label{chernone} \end{align} where $\vert u_{(-)} (\bs k) \rangle$ denotes the single-particle eigenstate associated with the lowest bulk band $E_{-} (\bs k)$.
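For a two-band Hamiltonian of the form \eqref{hamtwo}, Eq.~\eqref{chern} can also be evaluated numerically from the unit vector $\bs n(\bs k)=\bs d(\bs k)/d(\bs k)$, using $\nu=\frac{1}{4\pi}\int_{\mathbb{T}^2}\bs n\cdot(\partial_{k_x}\bs n\times\partial_{k_y}\bs n)\,\txt{d}^2\bs k$ for the lowest band; this is precisely the winding number discussed in Section \ref{skyrmion}, which coincides with $\nu$ whenever the gap is open. A minimal finite-difference sketch, reusing the functions defined above:
\begin{verbatim}
import numpy as np
# Reuses d_vector from the band-structure sketch above.

def chern_numerical(tA, tB, eps, p, n=240):
    # Discretised evaluation of Eq. (chern) via the unit vector n = d/|d|;
    # meaningful as a Chern number only when the bulk gap is open.
    b1 = (2*np.pi/3)*np.array([1.0,  np.sqrt(3.0)])
    b2 = (2*np.pi/3)*np.array([1.0, -np.sqrt(3.0)])
    nv = np.empty((n + 1, n + 1, 3))
    for i in range(n + 1):
        for j in range(n + 1):
            # k = u b2 + v b1: (b2, b1) is positively oriented in (kx, ky)
            d = d_vector((i/n)*b2 + (j/n)*b1, 1.0, tA, tB, eps, p)
            nv[i, j] = d / np.linalg.norm(d)
    du = nv[1:, :-1] - nv[:-1, :-1]       # finite differences along u, v
    dv = nv[:-1, 1:] - nv[:-1, :-1]
    area = np.einsum('ijk,ijk->ij', nv[:-1, :-1], np.cross(du, dv))
    return area.sum() / (4*np.pi)

Ky = 2*np.pi/(3*np.sqrt(3.0))
print(chern_numerical(0.3, 0.3, 0.0, np.array([0.0, 4*Ky])))  # ~ +1 (Chern insulator)
print(chern_numerical(0.3, 0.3, 0.5, np.zeros(2)))            # ~ 0 (trivial insulator)
\end{verbatim}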
The Berry connection -- or vector potential -- $\bs{A}(\bs k)$ is defined by \begin{align} \bs A (\bs k) &= i \langle u_{(-)} \vert \bs \nabla _{\bs k} \vert u_{(-)} \rangle.\label{A-pm} \end{align} This quantity, which defines the parallel transport of the eigenstates over the FBZ \cite{nakahara}, also determines the topological order of the system \cite{Kohmoto:1985}. The integration in Eq. \eqref{chernone} is taken over the FBZ, a two-torus denoted as $\mathbb{T}^2$, where the contribution due to any singularities of $\bs A(\bs k)$ -- to be discussed later on -- should be excluded. It is convenient to parametrize the ``coupling" vector $\bs d (\bs k)$ in terms of the spherical angles $\theta \equiv \theta (\bs k)$ and $\phi \equiv \phi (\bs k)$, defined as \begin{equation} \tan \phi = d_y (\bs k) / d_x (\bs k) , \qquad \cos \theta= d_z (\bs k) / d (\bs k),\label{defangles} \end{equation} where $\phi= \pi- \arg g(\bs k)$ for $t>0$. In what follows we shall assume that $t>0$, without loss of generality. In this representation, the Hamiltonian \eqref{hamtwo} takes the form \begin{equation} H (\bs k)= \epsilon (\bs k) \hat{1} + d (\bs k) \begin{pmatrix} \cos \theta & e^{- i \phi} \sin \theta \\ e^{i \phi} \sin \theta & - \cos \theta \end{pmatrix} . \label{hamcoupling} \end{equation} The lowest eigenstate of \eqref{hamcoupling} is given by \begin{equation} \qquad \vert u_{(-)} \rangle =\begin{pmatrix} -e^{-i \phi} \sin (\theta /2) \\ \cos (\theta /2) \end{pmatrix},\label{pm} \end{equation} and from Eqs. (\ref{A-pm})-(\ref{pm}), we obtain an explicit expression for the Berry connection, \begin{equation} \bs A (\bs k)=\frac{1}{2} (1- \cos \theta) \bs \nabla _{\bs k} \phi . \label{berryconnection} \end{equation} A crucial point to note is that the Berry connection \eqref{berryconnection} has singularities at the points in $\bs{k}$-space where $d_x (\bs k)= d_y (\bs k)=0$ and $d_z (\bs k)<0$. The condition $d_x (\bs k)= d_y (\bs k)=0$, which coincides with the zeros of the function $g (\bs k)$, is always fulfilled at the special points $\bs{K}_{\pm} = (\frac{2 \pi}{3}, \pm \frac{2 \pi}{3 \sqrt{3}})$, where we have set $a = 1$. The second condition, $d_z (\bs{K}_{\pm})<0$, is only satisfied for certain values of the model parameters ($t_{A,B}$, $\varepsilon$, $\brec$). In terms of the coupling vector $\bs d$, the singularity takes place at the ``South pole", where $\theta=\pi$ and $\phi$ is arbitrary, so that the state $\vert u_{(-)} \rangle$ is multivalued there. Note that this singularity can be removed locally by a gauge transformation, but not globally \cite{Wu1975}. Moreover, we find that the phase $\phi= \pi- \arg g(\bs k)$ yields opposite vorticities at the two inequivalent Dirac points, \begin{equation} v_{\pm}=\oint_{\gamma_{\pm}} \bs{\nabla} \phi (\bs k) \cdot \text{d} \bs k= \pm 2 \pi, \label{vortex} \end{equation} where $\gamma_{\pm}$ denotes closed loops around the two Dirac points $\bs K_{\pm}$. If these singularities were absent, the integrand in Eq. \eqref{chernone} would constitute an \emph{exact} differential form over the entire FBZ. In this trivial case, Stokes theorem would then ensure that the integral in Eq. \eqref{chernone} is zero, since this exact two-form is integrated over a \emph{closed} manifold \footnote{The FBZ is a two-dimensional torus $\mathbb{T}^2$, which is a closed manifold. See also \ref{analytical}.}. To account for these singularities, Stokes theorem can be applied to a contour avoiding them \cite{Kohmoto:1985,Hatsugai1993}.
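The vorticities in Eq.~\eqref{vortex} can be checked numerically by accumulating the (wrapped) increments of $\phi(\bs k)=\pi-\arg g(\bs k)$ along small loops encircling $\bs K_{\pm}$; a minimal sketch, reusing $g(\bs k)$ from the band-structure sketch:
\begin{verbatim}
import numpy as np
# Reuses g and Kp from the band-structure sketch; Km is the second Dirac point.

def vorticity(K, radius=0.1, n=400):
    # Winding of phi(k) = pi - arg g(k) along a loop around K, Eq. (vortex)
    th = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    ks = K + radius*np.stack([np.cos(th), np.sin(th)], axis=1)
    phi = np.pi - np.angle([g(k) for k in ks])
    dphi = np.diff(np.append(phi, phi[0]))          # close the loop
    dphi = (dphi + np.pi) % (2.0*np.pi) - np.pi     # wrap to (-pi, pi]
    return dphi.sum()

Km = np.array([2*np.pi/3, -2*np.pi/(3*np.sqrt(3.0))])
print(vorticity(Kp)/(2*np.pi), vorticity(Km)/(2*np.pi))   # +1.0 and -1.0
\end{verbatim}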
In particular, the Chern number \eqref{chernone} can be written as a sum of integrals performed around the excluded singularities, i.e. as contributions from small circles $\gamma_{\pm}$ of infinitesimal radius around the excluded Dirac points $\bs k=\bs K _{\pm}$ at which $\bs{A} (\bs k)$ is singular, \begin{equation} \nu = - \frac{1}{2 \pi} \sum_{\bs K _{-}, \bs K _{+}} \oint_{\gamma_{\pm}} \bs{A} (\bs k) \cdot \text{d} \bs k . \end{equation} Using Eq. \eqref{vortex} and taking into account the fact that $\cos \theta (\bs k)$ remains well-defined close to $\bs K _{\pm}$, we find the simple expression for the Chern number \begin{align} \nu&= \frac{1}{4 \pi} \left( \cos \left[ \theta \left(\bs K_{+}\right) \right] v_{+} ~+~ \cos \left[ \theta \left(\bs K_{-}\right) \right] v_{-} \right) \nonumber \\ &=\frac{1}{2} \Biggl( \frac{d_z (\bs K_{+})}{\vert d_z (\bs K_{+}) \vert} - \frac{d_z (\bs K_-)}{\vert d_z (\bs K_{-}) \vert} \Biggr ), \label{chern2} \end{align} which only involves the sign of the ``mass" term $d_z (\bs k)$ \eqref{sz} at the two inequivalent Dirac points $\bs K_{\pm}$. A detailed demonstration of Eq. \eqref{chern2}, which further highlights the role played by the singularities, is presented in \ref{analytical}. The important result in Eq. \eqref{chern2} shows that the Chern number $\nu$ can now be evaluated directly, without performing the integration over the FBZ in Eq. \eqref{chern}. From Eqs. \eqref{sz}-\eqref{chern2}, one can already deduce that non-trivial Chern numbers $\nu \ne 0$ can only be obtained when $d_z (\bs k)$ has opposite signs at the two inequivalent Dirac points $\bs K_{\pm}$, which can only be achieved for $\brec \ne 0$. In the following Section \ref{fluxsection}, we give a physical interpretation in terms of effective magnetic fluxes and time-reversal-symmetry breaking. We also comment on pathological time-reversal-symmetric configurations, which necessarily lead to a trivial topological order $\nu =0$. To conclude this Section, we note that a Chern insulator is also characterized by current-carrying edge states that propagate along the boundary of the system. This edge transport is guaranteed by the opening of a non-trivial bulk gap ($\Delta,\nu\ne0$, cf. Fig. \ref{figa7} (b)), and it leads to the quantization of the Hall conductivity {\it via} the bulk-edge correspondence \cite{Hatsugai1993}. The latter is observed through transport measurements in solid-state experiments. In the cold-atom framework, such measurements are not convenient, as they would require atomic reservoirs coupled to the optical lattice. However, alternative methods, based on Bragg spectroscopy \cite{stenger1999b,steinhauer2002a}, have been proposed to extract and image these topological edge states \cite{Goldman2012}. We will use the appearance of chiral edge states later in this paper to strengthen the identification of Chern insulators (Section \ref{anisotropysection}). They are obtained from the spectrum of Hamiltonian (\ref{htot}) in a finite geometry \cite{Hatsugai1993}, as explained in \ref{app:edge}. \subsection{Flux configurations and physical description of the model} \label{fluxsection} In this Section, we examine the effects of the Raman-induced phases in Eq. \eqref{peierls} from a less formal point of view, by associating effective ``fluxes" with these Peierls phases. First, one can evaluate the number of magnetic flux quanta penetrating each hexagonal plaquette $\hexagon$, which yields (cf. Fig.
\ref{figfluxlattice} (a)) \begin{align} 2 \pi \Phi(\hexagon) &= \sum_{\hexagon} \phi (n_A, m_B) =0. \end{align} Therefore, in the absence of NNN hopping (i.e. $t_{A,B}=0$), the system has a trivial flux configuration $\Phi=0$ and remains invariant under time reversal. \begin{figure} \centering \includegraphics[width=1\columnwidth]{lattice_flux.pdf} \caption{\label{figfluxlattice} (a) Laser-coupled honeycomb lattice, including the Peierls phases \eqref{peierls}, and the corresponding flux configuration. The local fluxes $\Phi_{1,2,3}$ are explicitly given in terms of the momentum recoil $\bs p$. Here, the basic reciprocal lattice vectors are $\bs b_1=2 \pi/3 (1, \sqrt{3})$ and $\bs b_2=2 \pi/3 (1, -\sqrt{3})$. (b) The Haldane model and its simpler flux configuration, entirely characterized by the phase $\phi_{\text{H}}$.} \end{figure} Importantly, when NNN hopping terms are introduced (i.e. $t_{A,B} \ne 0$), the triangular sub-plaquettes are penetrated by non-zero magnetic fluxes, explicitly breaking time-reversal symmetry and potentially leading to quantum Hall phases \cite{Haldane1988}. Considering the sub-plaquettes formed by the $A-B$ and $A-A$ hoppings, illustrated in Fig. \ref{figfluxlattice} (a), one finds that \begin{align} &\Phi_1=- \brec \cdot \bs a_3/4 \pi=(\rec_2 - \rec_1)/2 , \nonumber \\ &\Phi_2=- \brec \cdot \bs a_2/4 \pi=-\rec_2/2, \nonumber \\ &\Phi_3=\brec \cdot \bs a_1/4 \pi=\rec_1/2, \label{fluxeq} \end{align} where we expressed the recoil momentum $\brec=\rec_1 \bs b_1 + \rec_2 \bs b_2$ in terms of the basic reciprocal lattice vectors $\bs b_{1,2}$, for which $\bs b_{j}\cdot \bs a_{l}=2\pi \delta_{jl}$. The sub-plaquettes formed by the $A-B$ and $B-B$ hoppings have a similar flux structure. Thus, the space-dependent Peierls phases \eqref{peierls} produce a flux configuration characterized by three local fluxes $\Phi_{1,2,3}$, which is translationally invariant over the whole lattice (cf. Fig. \ref{figfluxlattice} (a)). We also note that $\sum_{\alpha}\Phi_{\alpha}=0$, which indicates that the total flux penetrating each hexagonal plaquette $\hexagon$ remains zero, as found above \cite{Haldane1988}. The system remains invariant under time reversal when $H(\{ \Phi_{1,2,3} \}) \equiv H (-\{ \Phi_{1,2,3} \})$, where $\{ \Phi_{1,2,3} \}$ represents the flux configuration stemming from a given $\brec$. Besides the obvious case $\brec=0$, we find from Eq. \eqref{fluxeq} that this occurs: \begin{itemize} \item if $\rec_1$ and $\rec_2$ are \emph{both integers}, \emph{i.e.} if $\brec$ is a vector of the reciprocal lattice; \item if one of the components $\rec_1, \rec_2$ or $\rec_2-\rec_1$ is an \emph{even integer}, and in particular, if $\brec$ is collinear with one of the basis vectors $\bs b_{1,2}$. For example, when $\rec_1=0$ (resp. $\rec_2=0$), one finds $\Phi_3=0$ and $\Phi_1=-\Phi_2$ (resp. $\Phi_2=0$ and $\Phi_1=-\Phi_3$). \end{itemize} In these pathological ``staggered flux" cases, the system remains invariant under time reversal and is therefore topologically trivial (note that the number of magnetic flux quanta $\Phi_{\alpha}$ is only defined modulo 1). One can verify that these singular time-reversal configurations also correspond to the condition \begin{equation} d_z (\bs K_{+})=d_z (\bs K_{-}), \quad \forall \, t_A,t_B. \label{condtr} \end{equation} As established in Eq.~(\ref{chern2}), the condition \eqref{condtr} naturally leads to a trivial Chern insulator ($\nu=0$) when $\Delta > 0$, as expected for a time-reversal-invariant system exhibiting a gap.
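In practice, Eq.~\eqref{chern2} and the time-reversal condition \eqref{condtr} are straightforward to evaluate numerically; a minimal sketch, reusing $\bs d(\bs k)$ and the Dirac points from the sketches of Section \ref{chernsection} (the recoil $\brec=(0,4K_y)$ used below anticipates the gapped configurations of Section \ref{topologicalphases}):
\begin{verbatim}
import numpy as np
# Reuses d_vector, Kp and Km from the previous sketches.

def chern_sign_formula(tA, tB, eps, p):
    # Eq. (chern2): nu = [sgn d_z(K+) - sgn d_z(K-)] / 2; a protected
    # Hall conductivity only when the bulk gap Delta is open.
    dzp = d_vector(Kp, 1.0, tA, tB, eps, p)[2]
    dzm = d_vector(Km, 1.0, tA, tB, eps, p)[2]
    return int(round(0.5*(np.sign(dzp) - np.sign(dzm))))

Ky = 2*np.pi/(3*np.sqrt(3.0))
print(chern_sign_formula(0.3, 0.3, 0.0, np.zeros(2)))            # 0: Eq. (condtr) holds
print(chern_sign_formula(0.3, 0.3, 0.0, np.array([0.0, 4*Ky])))  # +1, consistent with
                                                                 # the integral above
\end{verbatim}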
One can check that the condition \eqref{condtr} can be simply rewritten in terms of the vector $\brec = \rec_1 \bs b_1 + \rec_2 \bs b_2$ as \begin{equation} \sin(\pi\rec_2)-\sin(\pi\rec_1)+\sin\left[ \pi(\rec_2-\rec_1)\right]=0, \label{condtr2} \end{equation} whose solutions exactly reproduce the pathological cases listed above. When $\brec$ satisfies the condition \eqref{condtr2}, the system is necessarily a trivial insulator or a standard semi-metal, depending on the other parameters. In Section~\ref{deltap}, we explore other values of the momentum recoil $\brec$, for which trivial or non-trivial phases can be found depending on the specific values of the parameters $t_A,t_B,\varepsilon$. \\ \subsection{Comparison with the Haldane model} \label{haldanesection} We conclude this Section by comparing the laser-coupled honeycomb lattice \eqref{htot} with the original Haldane model (cf. \cite{Haldane1988} and Fig. \ref{figfluxlattice} (b)). In the latter, the hopping factor $t_1$ between NN sites of the honeycomb lattice is real, while NNN hoppings $t_2$ are multiplied by a constant phase factor $e^{ \pm i 2 \pi \phi_{\text{H}}}$ (the sign being determined by the orientation of the path). Thus, in the Haldane model, the three small triangular subplaquettes illustrated in Fig. \ref{figfluxlattice} (b) are all penetrated by the \emph{same} flux $\Phi_{1,2,3}=\phi_{\text{H}}$, whereas the large central triangular plaquette is penetrated by a flux $- 3 \phi_{\text{H}}$. This leads to a staggered magnetic field configuration, with a vanishing total flux penetrating the hexagonal unit cell, $\Phi(\hexagon)=0$. We stress that time-reversal symmetry is \emph{necessarily broken} in the Haldane model, for \emph{any} finite value of the phase $\phi_{\text{H}} \ne 0$. This important difference between the two models highlights the richness of the laser-coupled honeycomb lattice \eqref{hamtwo}-\eqref{sz}, where the flux configuration and the nature of the spectral gaps strongly depend on the orientation of the vector $\brec$ entering the Peierls phases. \section{Phase diagrams for topological insulating phases} \label{topologicalphases} In this section, we perform a systematic characterization of the phase diagram. We set the nearest-neighbor tunneling amplitude to $t =1$, thus effectively measuring all energies in units of $t$. The Cartesian components of the recoil momentum, $p_x$ and $p_y$, are conveniently measured in units of $K_x$ and $K_y$, the coordinates of the Dirac point ${\bs K}_{+}$, with $K_x = 2\pi/3$ and $K_y = 2\pi / (3\sqrt{3})$. Following the discussions in the preceding Section, we can expect three different phases: \begin{itemize} \item a semi-metal (energy gap $\Delta=0$), \item an insulator (energy gap $\Delta \neq0$) with trivial topology ($\nu=0$), \item a Chern insulator ($\Delta\neq0, \nu\neq0$). \end{itemize} At this point, we recall that the Chern number $\nu$ defined in Eq. \eqref{chern} characterizes the topological order of \emph{insulating} phases \cite{Kohmoto:1985}. The expression in Eq. \eqref{chern2} could also be formally computed for a semi-metal configuration ($\Delta =0$), but in this case the index $\nu$ cannot be associated with a robust and topologically protected Hall conductivity. This fact, which is crucial from the experimental detection point of view, is further elaborated in the next Section \ref{skyrmion}.
In this Section, where the focus is set on Chern insulators, we are therefore looking for wide regions in parameter space where both $\Delta$ and $\nu$ are non-zero. In Section \ref{deltap}, we consider how the system evolves as the recoil momentum $\brec$ is varied without a staggered potential ($\varepsilon=0$). We further examine the role of anisotropy in the tunneling energies ($t_A \neq t_B$) in Section \ref{anisotropysection}, and finally the role of a staggered potential ($\varepsilon\neq 0$) in Section \ref{staggeredsection}. \subsection{Recoil momentum}\label{deltap} We first investigate the effects of the Raman recoil momentum $\brec$. Here, the staggered potential is set to $\varepsilon = 0$. The phase diagrams shown in Fig.~\ref{figa1} illustrate the appearance of topological phases as a function of the Cartesian components $p_x$ and $p_y$, for several values of the tunneling rates $t_{A,B}$. The areas corresponding to non-trivial topological phases, characterized by the Chern numbers $\nu = \pm 1$, are indicated by blue and red colors, respectively. Green areas correspond to the trivial insulating phase $\nu=0$, and white areas signify the ``undesired" metallic regime ($\Delta \approx 0$). The size of the bulk gap $\Delta$ is simultaneously shown through the color intensity. Panel (a) shows the isotropic case with equal next-nearest-neighbor tunneling amplitudes set to $t_B = t_A = 0.3 t$. Here, non-trivial topological phases ($\nu= \pm 1$) are generally separated by semi-metallic or metallic phases, and these topological regions form triangular patterns. Panels (b) and (c) correspond to anisotropic cases where the hopping amplitude $t_B$ is reduced to $t_B = 0.2 t$ in panel (b) and set to zero in panel (c). As the anisotropy increases, the metallic regions and trivial insulating phases progressively replace the non-trivial islands. In the special case $t_B=0$, we find that all the regions that were non-trivial for $t_B > 0$ reduce to semi-metals: when $t_B=0$, no topological insulating phase is found (contrary to what the Skyrmion behavior of the vector $\bs d$ would suggest \cite{Alba2011}, see Section \ref{skyrmion}). We stress that the semi-metallic behavior of the special case $t_B=0$ is found for the entire parameter space (i.e. for all $t_A$, $\varepsilon$ and $\brec$), and likewise for the case $t_A=0$ and $t_B\ne0$. This subtle effect is highlighted in Fig. \ref{figa7} (c), presented in Section \ref{anisotropysection}, where the band structure $E=E(k_y)$ clearly shows the indirect gap closing for the case $t_B=0$. This energy spectrum suggests that a small perturbation could open the bulk gap and lead to a Chern insulator. However, in Section \ref{staggeredsection}, we show that the staggered potential does not open such a non-trivial gap in the case $t_B=0$. We therefore conclude that the condition $t_{A,B} \ne 0$ should be satisfied to generate a robust Chern insulator. The Hamiltonian is a periodic function of $\brec$, and the resulting periodicity of the phases is conspicuous in the phase diagrams illustrated in Fig.~\ref{figa1}. The central elementary cell (the ``resized" FBZ) is marked by a black hexagon \footnote{To be more precise, the arguments of the cosines and the complex exponentials in the Hamiltonian feature $\brec / 2$, thus the ``resized" FBZ is twice as large as the actual FBZ. The panels of Fig.~\ref{figa1} show rectangular regions containing exactly four Brillouin zones.} in all panels of Fig.~\ref{figa1}.
We find that the most favourable non-trivial topological insulating phases (i.e. phases protected by the largest bulk gaps, $\Delta \sim 2 t$) occur for $\brec \propto (\sin N \pi/3 , \cos N \pi/3 )$, where $N$ is an integer. Therefore, setting $p_x=0$ potentially leads to topological phases with large bulk gaps, which is the most interesting situation for an experimental realization (cf. Section \ref{skyrmion}). \begin{figure} \centering \includegraphics[width=1.\columnwidth]{pxpy.pdf} \caption{\label{figa1} Phase diagrams as a function of the recoil momentum components $p_x$ and $p_y$. In all the figures, we set $t=1$, $\varepsilon=0$ and $t_A=0.3 t$. In panel (a) we set $t_B = t_A = 0.3 t$, and in panel (b) $t_B = 0.2 t$. The extreme case $t_B=0$ is shown in (c). The white regions correspond to metallic phases (i.e. vanishing of the gap, $\Delta \approx 0$), while the blue and red regions correspond to topological phases with $\nu=\pm 1$. The green regions correspond to trivial insulating phases ($\nu=0$). The ``resized'' FBZ is indicated by a hexagon, which also serves to highlight the angle dependence with respect to the reciprocal lattice vectors. The size of the gaps is indicated by the intensity: the lightest shades denote areas where the gaps are $\Delta <0.1 t$ and the brightest areas correspond to $1.5t < \Delta < 2 t$.} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{tapy.pdf} \caption{\label{figa2} Phase diagram as a function of the tunneling amplitude $t_A$ and the recoil momentum component $p_y$. We set $t=1$, $\varepsilon=0$, $\brec=(0,p_y)$ and $t_A = t_B$. The color code is the same as in Fig. \ref{figa1}.} \end{figure} We now explore how the topological phases evolve as the laser recoil momentum $\bs{p}$ and the tunneling amplitudes $t_{A,B}$ are modified. As motivated above, we set $p_x=0$, and then compute the phase diagrams in the $p_y - t_A$ plane. First of all, we investigate the isotropic case $t_A=t_B$ (the effects of anisotropy will be discussed in Section \ref{anisotropysection}). The phase diagram presented in Fig.~\ref{figa2} indicates that in the realistic situation where $t_A \approx t_B$, the sizes of the topological gaps $\Delta$ are maximal for $t_A \approx t_B \approx 0.3 t$, where $\Delta \approx 2 t$ for $p_y \approx 4 K_y$ (see also Fig. \ref{figa1} (a)). Furthermore, this figure indicates that one should generally observe phase transitions between metallic and non-trivial topological phases as $p_y$ is varied. Importantly, we note that the system remains metallic ($\Delta=0$) when the ``natural" hoppings $t_{A,B}$ are \emph{larger} than the Raman-induced hopping $t$, in particular when $t_A \approx t_B \approx t$. In the following, we show that an anisotropy $t_A \ne t_B$, or the inclusion of a staggered potential $\varepsilon \ne 0$, can turn this metallic phase into a topological one. \subsection{Anisotropy} \label{anisotropysection} In Fig.~\ref{figa3}, we show the phase diagram in the $p_y - t_B$ plane for a large and fixed value of the tunneling rate, $t_A=t$. This important result shows that when $t_A \approx t$, a finite anisotropy $\vert t_A - t_B \vert \ne 0$ is necessary to open non-trivial topological gaps. This effect occurs for a relatively large range of the anisotropy, namely for $t_B \in ] 0 , t_A]$, and for specific values of the momentum $p_y$. For larger anisotropy, $\vert t_A - t_B \vert > t$, the topological phases are destroyed and only metallic and trivial insulating phases survive.
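The phase classification underlying these diagrams can be reproduced with a brute-force scan; the following minimal sketch (our illustration) estimates the indirect bulk gap $\Delta=\text{min}(E_+)-\text{max}(E_-)$ on a momentum grid, reusing the functions of Section \ref{modelsection}, for three representative anisotropies at $\brec=(0,4K_y)$ (cf. Fig.~\ref{figa7} below):
\begin{verbatim}
import numpy as np
# Reuses f and d_vector from the sketches of the previous Section.

def bulk_gap(tA, tB, eps, p, n=120):
    # Indirect gap Delta = min E_+ - max E_- on an n x n grid spanning the
    # reciprocal cell (b1, b2); set to zero when the two bands overlap.
    b1 = (2*np.pi/3)*np.array([1.0,  np.sqrt(3.0)])
    b2 = (2*np.pi/3)*np.array([1.0, -np.sqrt(3.0)])
    Ep, Em = [], []
    for x in np.linspace(0.0, 1.0, n, endpoint=False):
        for y in np.linspace(0.0, 1.0, n, endpoint=False):
            k = x*b1 + y*b2
            e = -tA*f(k - p/2) - tB*f(k + p/2)
            d = np.linalg.norm(d_vector(k, 1.0, tA, tB, eps, p))
            Ep.append(e + d); Em.append(e - d)
    return max(0.0, min(Ep) - max(Em))

Ky = 2*np.pi/(3*np.sqrt(3.0)); p = np.array([0.0, 4*Ky])
print(bulk_gap(1.0, 1.0, 0.0, p))   # ~0: semi-metal, cf. Fig. figa7(a)
print(bulk_gap(1.0, 0.3, 0.0, p))   # >0: Chern insulator, cf. Fig. figa7(b)
print(bulk_gap(1.0, 0.0, 0.0, p))   # ~0: indirect closing, cf. Fig. figa7(c)
\end{verbatim}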
Specific phase transitions between semi-metallic and Chern insulating phases, indicated in Fig.~\ref{figa3} by three successive dots, are further illustrated through the edge-state analysis in Fig.~\ref{figa7}. In panel \ref{figa7} (b), one indeed observes the presence of topological edge states within the bulk gap, which is the hallmark of a Chern insulator via the bulk-edge correspondence \cite{Hatsugai1993}. Finally, we note the robustness of the topological edge states within the semi-metal regime $\Delta =0$, in Figs. \ref{figa7} (a),(c), a fact which is further analyzed in Section \ref{skyrmion}. \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{anisotropy.pdf} \caption{\label{figa3} Phase diagram as a function of the tunneling amplitude $t_B$ and the recoil momentum component $p_y$, with $t_A=t$ fixed. Here, we set $t=1$, $\varepsilon=0$ and $p_x=0$. The color code is the same as in Fig. \ref{figa1}. The three dotted configurations (a)-(c) are further illustrated through band structures $E=E(k_y)$ in Figs. \ref{figa7} (a)-(c). } \end{figure} \begin{figure} \centering \includegraphics[width=1.02\columnwidth]{edges_new.pdf} \caption{\label{figa7} Energy spectra $E=E(k_y)$, as a function of the quasi-momentum $k_y$, for a cylindrical geometry with zigzag edges. The parameters in (a)-(c) correspond to the configurations labelled by dots in Fig. \ref{figa3}: namely, $\varepsilon=0$ and (a) $t_A=t_B=t$, (b) $t_B=0.3 t$, (c) $t_B=0$. In all the figures $t_A=t=1$, $\varepsilon=0$ and $p_y= 4 K_y$. When $\nu\ne 0$ and $\Delta \ne 0$, as in Fig. (b), gapless dispersion branches cross the bulk energy gap: they describe current-carrying edge states, which lead to a quantized Hall conductivity \cite{Hatsugai1993}. Figs. (a),(c) illustrate the peculiar situations where the gap indirectly closes, $\Delta = 0$, and where the winding number \eqref{qiformula} is non-trivial $w \ne 0$ (cf. Section \ref{semimetal}).} \end{figure} \subsection{Staggered potential} \label{staggeredsection} In this Section, we explore the effect of the staggered potential. In Fig.~\ref{figa4}, we show the phase diagram as a function of the staggered potential strength $\varepsilon$ and of the recoil momentum component $p_y$, for several configurations of the tunneling amplitudes $t_{A,B}$ (we set $p_x=0$). First, we show the case $t_A=t_B=t$ in Fig.~\ref{figa4} (a). In this situation, large metallic regions and small non-trivial islands are found in the phase diagram, which can already be anticipated from Fig.~\ref{figa2} for $\varepsilon=0$. Interestingly, in the totally symmetric case, where $t_A=t_B=t$, the topological phases vanish for $\varepsilon=0$, and they are thus separated along the $\varepsilon$ axis\footnote{Note that the vanishing of the topological insulating phases for $t_A=t_B=t$ and $\varepsilon=0$ can be visualized in Fig.~\ref{figa2}.}. These results indicate that the staggered potential is \emph{necessary} to induce topological phases in this situation where $t_A=t_B=t$. However, for large values of the staggered potential, a trivial phase with $\nu=0$ is always favored, in agreement with the general belief that such a staggered potential generically leads to trivial phases \cite{Haldane1988}. \begin{figure} \centering \includegraphics[width=\columnwidth]{stagabc.pdf} \caption{\label{figa4} Phase diagrams as a function of the staggered potential strength $\varepsilon$ and the recoil momentum component $p_y$. In all the figures, we set $t=1$ and $p_x=0$.
(a)-(b) The isotropic cases $t_A=t_B=t$ and $t_A=t_B=0.3t$. The extreme case where $t_B=0$ is shown in (c). The color code is the same as in Fig. \ref{figa1}.} \end{figure} For $t_A=t_B<t$, non-trivial Chern insulating phases can be formed both with and without the staggered potential. In Fig.~\ref{figa4} (b), we illustrate the effects of the staggered potential for the optimized values of the tunneling rates $t_A= t_B=0.3 t$. Here, one observes two topological phases with $\nu=\pm 1$, which are separated by a small metallic region. This result highlights that one should generally observe phase transitions between semi-metallic and non-trivial topological phases as $p_y$ is varied. On the other hand, varying the staggered potential to large values always favors the transition to a trivial phase with $\nu=0$. In the extreme case where $t_B=0$, shown in Fig.~\ref{figa4} (c), one finds the \emph{contour} of the phase diagram presented in Ref. \cite{Alba2011}. However, we stress that the two central regions featured in this diagram do not correspond to Chern insulating phases, as their corresponding bulk gap is \emph{closed}. This indirect gap closing is further illustrated in Fig. \ref{figa7} (c). In the next Section, we analyze this important point in more detail. \section{The winding number and the ToF measurement: Chern insulators, topological semi-metals and Skyrmions}\label{skyrmion} In the previous Section, we have identified the topological insulating phases that could be realized in our cold-atom system, when the parameters $(t_{A,B}, \varepsilon, \bs p)$ are tuned in the gapped regimes $\Delta > 0$. In these situations, the Chern number \eqref{chern} associated with the low-energy eigenstate $\vert u_{-} \rangle$ can be defined, and its experimental measurement would provide a clear manifestation of non-trivial topological order. However, contrary to solid-state experiments where the Chern number is directly evaluated through a Hall conductivity measurement \cite{vonKlitzing:1986}, it can only be observed indirectly in the cold-atom framework \cite{Alba2011,Hafezi:2007,Palmer:2008,umucalilar2008a,Bermudez:2010,Stanescu:2010,Price2012,Goldman2012}. In this Section, we analyze in detail the topological orders which could be detected through a ToF experiment \cite{Alba2011}, and further discuss the role played by the bulk gap $\Delta$ in this context. \\ First of all, let us note that the Hamiltonian \eqref{hamtwo} can be associated with a topological (Pontryagin) winding number \cite{Qi:2006,Girvin:1999,Konig:2008,Prada:2011}, \begin{align} w &= \frac{1}{4 \pi} \int_{\mathbb{T}^2} \bs n \cdot \biggl ( \partial_{k_x} \bs n \times \partial_{k_y} \bs n \biggr) \txt{d}^2 \bs{k}, \nonumber \\ &= \frac{1}{4 \pi} \int_{\mathbb{T}^2} \frac{\bs{d}}{d^3} \, \cdot \biggl ( \partial_{k_x} \bs{d} \times \partial_{k_y} \bs{d} \biggr) \txt{d}^2 \bs{k}, \label{qiformula} \end{align} which measures the number of times the unit vector $\bs n (\bs k)=\bs{d} (\bs k)/d (\bs k)$ covers the Bloch sphere $S^2$ as $\bs k$ evolves on the entire FBZ \cite{Qi:2006}. When $w \ne 0$, this leads to a Skyrmion configuration for the vector field $\bs{n}(\bs k)$. As will be discussed later in this Section and depicted in Fig. \ref{skyrmionfig}, the Skyrmion configuration corresponds to a situation where the unit vector $\bs{n} (\bs{k})$ entirely covers the Bloch sphere once, which for the present model implies that the vector $\bs{n} (\bs{k})$ points in opposite directions (i.e.
North and South poles) at the two inequivalent Dirac points, \begin{eqnarray} &w=+1 \longrightarrow \bs n (\bs K_{+})= + \bs{1}_z \text{ and } \bs n (\bs K_{-})= - \bs{1}_z, \nonumber \\ &w=-1 \longrightarrow \bs n (\bs K_{+})= - \bs{1}_z \text{ and } \bs n (\bs K_{-})= + \bs{1}_z. \label{poleswinding} \end{eqnarray} The winding number $w$ characterizes the map $\bs n (\bs k) : \mathbb{T}^2 \rightarrow S^2$ defined in Eq. \eqref{sz}, and therefore, it is not necessarily related to the spectrum or eigenstates of the Hamiltonian \eqref{hamtwo} -- contrary to the Chern number \eqref{chern}, which is a mathematical index associated with the state $\vert u_{-} \rangle$ \cite{nakahara}. \\ In this work, a \emph{topological semi-metal} denotes a gapless phase $\Delta =0$, characterized by a non-trivial winding number $w\ne 0$. The fate of the winding number $w$ and its corresponding Skyrmion pattern will be discussed in Subsection \ref{semimetal}, where these structures are shown to remain stable when $\Delta =0$, as long as the gap does not close at the Dirac points. In fact, when the gap is open $\Delta > 0$, the Chern number \eqref{chern} is exactly equal to the winding number \eqref{qiformula}, \footnote{The Chern number \eqref{chern} and its corresponding fibre bundle structure \cite{nakahara} could also be formally defined when the gap is indirectly closed, such as in Fig. 6 (c). Thus, the Chern number $\nu$ and the winding number $w$ are formally equivalent under the more general gap-opening condition \cite{Hatsugai:2005}: $E_-(\bs k) < E_{+} (\bs k)$ for all $\bs k \in \text{FBZ}$. In the present model, this condition reads $d_z(\bs K_{\pm}) \ne 0$.} \begin{equation} \nu = w , \label{chernwind2} \end{equation} as can be demonstrated using Eqs. \eqref{chernone},\eqref{A-pm} and \eqref{pm} (cf. also Refs. \cite{Qi:2006,Konig:2008,Prada:2011}). As a corollary, the result in Eq. \eqref{poleswinding} can be easily deduced from Eq. \eqref{chern2}. From the equivalence \eqref{chernwind2}, we observe that the Chern insulating phases discussed in the previous Sections are characterized by a non-trivial winding number $w \ne 0$, and therefore, they are also associated with a Skyrmion pattern. In summary, measuring the winding number $w$ in an experiment would make it possible to identify both Chern insulators ($\Delta >0$) and topological semi-metals ($\Delta = 0$).\\ As first observed in Ref. \cite{Alba2011}, the vector field $\bs n (\bs k)$ could be detected through a ToF absorption image. From such data, one could then evaluate the winding number $w$, using a discretized version of Eq. \eqref{qiformula}. This detection method is based on the fact that $\bs n (\bs k)$ can be expressed in terms of the momentum densities $\rho_{A,B} (\bs k)$ associated with the two spin species $A,B$ (cf. Fig. \ref{figlattice}). Defining the regions $\mathcal{K}_{(\pm)}=\{ \bs k : E^{(\pm)}(\bs k) < E_F \}$, we find that \begin{align} \rho_B (\bs k) - \rho_A (\bs k) &= + n_z (\bs k) & &\text{for $\bs k \in \mathcal{K}_{(-)}$ and $\bs k \notin \mathcal{K}_{(+)}$}, \nonumber \\ \rho_B (\bs k) - \rho_A (\bs k) &= - n_z (\bs k) & &\text{for $\bs k \in \mathcal{K}_{(+)}$ and $\bs k \notin \mathcal{K}_{(-)}$}, \nonumber \\ \rho_B (\bs k) - \rho_A (\bs k) &=0 & &\text{otherwise} \nonumber.
\end{align} Unfortunately, one cannot generally determine the regions $\mathcal{K}_{(\pm)}$ in an experiment, unless the Fermi energy is exactly located in a bulk gap (in which case $\mathcal{K}_{(-)}=\text{FBZ}$ and $\mathcal{K}_{(+)}=\emptyset$). Therefore, the vector field $\bs n (\bs k)$ can only be approximately reconstructed from the data when both bands $E_{\pm} (\bs k)$ are partially filled. In fact, if we apply the relation $\rho_B (\bs k) - \rho_A (\bs k) = n_z (\bs k)$ to every pixel of a ToF image \footnote{The other components $n_{x,y} (\bs k)$ of the vector field could be obtained through similar measurements, combined with a rotation of the atomic states \cite{Alba2011}. }, and discretize the expression \eqref{qiformula} to evaluate the winding number from this data, we would experimentally measure the following quantity \begin{align} w_{\text{ToF}}&= \frac{1}{4 \pi} \Biggl ( \sum_{ \mathcal{K}_{(-)}} - \sum_{ \mathcal{K}_{(+)}} \Biggr ) \sum_{\mu \ne \nu \ne \lambda} n_{\mu} (\bs k) \biggl ( n_{\nu} (\bs k + \bs e_x) n_{\lambda} (\bs k + \bs e_y) - n_{\nu} (\bs k + \bs e_y) n_{\lambda} (\bs k + \bs e_x) \nonumber \\ &+ n_{\nu} (\bs k + \bs e_y) n_{\lambda} (\bs k) - n_{\nu} (\bs k + \bs e_x) n_{\lambda} (\bs k) + n_{\nu} (\bs k) n_{\lambda} (\bs k + \bs e_x) - n_{\nu} (\bs k) n_{\lambda} (\bs k + \bs e_y) \biggr ),\label{dis2} \end{align} where $\mu,\nu,\lambda=x,y,z$, and where $\bs e_{x,y}$ are the two unit vectors defined on the discretized FBZ. When the Fermi energy is set within a bulk gap, only the first sum contributes $\sum_{ \mathcal{K}_{(-)}}=\sum_{\text{FBZ}} $, and the quantity $w_{\text{ToF}}$ converges towards the winding number $w$ as the resolution of the grid is increased (cf. \ref{app:finite}). When the gap is closed, and if the Fermi energy is tuned such that the bulk bands $E_{\pm} (\bs k)$ are only partially filled, the quantity $w_{\text{ToF}}$ will generally deviate from the quantized value $w$. Consequently, the assumption of a perfectly filled lowest band ($\mathcal{K}_{(-)}=\text{FBZ}$ and $\mathcal{K}_{(+)}=\emptyset$), as considered in the calculations of Ref. \cite{Alba2011}, is crucial in the case $\Delta =0$. However, we note that this condition would be difficult to fulfill in an experiment, due to experimental imperfections and finite temperatures. We now illustrate this discussion in Subsections \ref{chernwind}-\ref{semimetal}, where the signatures of Chern insulators and topological semi-metals are compared and discussed. Let us finally remark that the quantity $w_{\text{ToF}}$ defined in Eq. \eqref{dis2} is strictly equivalent to the discretized expression for the Hall conductivity $\sigma_H$, which is not necessarily quantized in the general case where the Fermi energy is not located in a bulk gap (cf. \ref{app:hall}). \subsection{The Chern insulators} \label{chernwind} When a spectral gap is opened, $\Delta >0$, the winding number \eqref{qiformula} is exactly equal to the Chern number \eqref{chernone}. This potentially gives rise to a Chern insulator, as illustrated in Figs. \ref{gaptopology} (a)-(b), where we compare how the energy gap $\Delta$, the winding number $w$ and the ToF measurement $w_{\text{ToF}}$ vary as a function of the recoil momentum $p_y$. As expected from the topological property of the Chern number, we find that the ranges where $\Delta >0$ and $\nu =w= \pm 1$ lead to the clear plateaus depicted by the observable $w_{\text{ToF}}(p_y) \approx \pm 1$ (cf. \ref{app:finite} for a discussion on finite size effects).
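For concreteness, we sketch below how the discretized winding number \eqref{dis2} could be evaluated numerically from reconstructed ToF data. The snippet (a minimal sketch with names of our choosing) assumes a completely filled lowest band, so that only the sum over $\mathcal{K}_{(-)}=\text{FBZ}$ contributes; in this case, the summand of Eq. \eqref{dis2} reduces, for a unit vector field, to $\bs n (\bs k) \cdot [\bs n (\bs k + \bs e_x) \times \bs n (\bs k + \bs e_y)]$, since the triple products involving $\bs n (\bs k)$ twice vanish.
\begin{verbatim}
#include <array>
#include <functional>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]};
}

// Discretized winding number on an Lx x Ly grid covering the FBZ
// (periodic in both directions). 'n' returns the unit vector n(k)
// reconstructed, pixel by pixel, from the ToF densities.
double windingNumberToF(int Lx, int Ly,
                        const std::function<Vec3(int, int)>& n) {
    const double pi = 3.14159265358979323846;
    double w = 0.0;
    for (int i = 0; i < Lx; ++i)
        for (int j = 0; j < Ly; ++j)
            w += dot(n(i, j), cross(n((i + 1) % Lx, j),
                                    n(i, (j + 1) % Ly)));
    return w / (4.0 * pi);
}
\end{verbatim}
As the grid is refined, this quantity converges towards the integer $w$; for partially filled bands, the two sums in Eq. \eqref{dis2} have to be restricted to the regions $\mathcal{K}_{(\mp)}$ as described above.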
We also demonstrate the robustness of these plateaus in Fig. \ref{gaptopology2} (a), where the quantity $w_{\text{ToF}}$ is computed as a function of the Fermi energy $E_{\text{F}}$, and where a plateau of width $\sim \Delta$ is observed for fixed values of the other parameters. Therefore, the ToF winding number $w_{\text{ToF}}$ shows a robust behavior, and exhibits a clear plateau when the Fermi energy lies in the band gap. In other words, the Chern insulating phase is characterized by a ``quantized'' winding number $w_{\text{ToF}}$, which is protected against finite changes of the parameters through the existence of a topological bulk gap \footnote{The plateaus depicted by $w_{\text{ToF}}$ are strictly equivalent to Hall conductivity plateaus (cf. \ref{app:hall}). However, since the Hall conductivity is not measured in cold-atom experiments, we choose to represent the observable quantity $w_{\text{ToF}}$ in our plots, rather than $\sigma_H$.}. \\ Let us stress the important fact that the Chern number $\nu$ in Eq. \eqref{chernone} no longer reflects the quantized Hall conductivity when the bulk gap is closed, in which case the system effectively describes a metallic phase. However, the topological order and Skyrmion patterns associated with the winding number $w$ survive even when the bulk gap is closed, as we explore in the next Subsection. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{gap_topology_5_new.pdf} \caption{The energy gap $\Delta$ and the discretized winding number $w_{\text{ToF}}$ as a function of $p_y$ for (a)-(b) $t_A=t_B=0.3 t$, $\varepsilon=0$ and (c)-(d) $t_A=0.5 t$, $t_B=0$, $\varepsilon=-0.5 t_A$. For all plots $p_x=0$. The discretized winding number $w_{\text{ToF}}$ has been computed from Eq.\eqref{dis2} using a $30 \times 90$ lattice and setting the Fermi energy: (b) $E_{\text{F}}=0$ (i.e. inside the gap); (d) $E_{\text{F}}=-0.25\,t$ (i.e. at the gap closing point). For comparison, purple dotted lines show the integral winding number $w$ defined in Eq. \eqref{qiformula}. The parameters in (c)-(d) are the same as in Ref. \cite{Alba2011}. In all the figures, a vertical dashed line shows the value $p_y=4 K_y$ used in Fig. \ref{gaptopology2}.} \label{gaptopology} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{gap_topology_4_new.pdf} \caption{The winding number $w_{\text{ToF}}$ as a function of the Fermi energy $E_{\text{F}}$ at zero temperature for $p_x=0,p_y=4 K_y$. The parameters in (a) are the same as in Figs.~\ref{gaptopology} (a)-(b), whereas the parameters in (b) are the same as in Figs.~\ref{gaptopology} (c)-(d). The computations were performed using Eq.\eqref{dis2}, on a $30 \times 90$ lattice. The dashed lines show the values (a) $E_{\text{F}}=0$, (b) $E_{\text{F}}=-0.25\,t$ used in Figs. \ref{gaptopology} (b) and (d), respectively.} \label{gaptopology2} \end{figure} \subsection{The topological semi-metals} \label{semimetal} First of all, we find that the ToF winding number $w_{\text{ToF}}$, given by Eq. \eqref{dis2}, can be robust \emph{even when the gap is closed}. This effect is illustrated in Figs. \ref{gaptopology} (c)-(d), where we compare how the energy gap $\Delta$, the winding number $w$ and the ToF winding number $w_{\text{ToF}}$ vary as a function of the recoil momentum $p_y$. From Figs. \ref{gaptopology} (c)-(d), we find that the winding number $w$ displays non-trivial plateaus $w=\pm 1$, in regions where the bulk energy gap is closed $\Delta = 0$. In Fig.
\ref{gaptopology} (d), we precisely set the Fermi energy at the gap closing point $E_{\text{F}}=-0.25 t$, which can be determined from the spectra in Figs. \ref{skyrmionfig}(b)-(e). In this specific configuration, the observable winding number $w_{\text{ToF}}(p_y)$ depicts plateaus, and it converges towards the quantized value $w_{\text{ToF}} \rightarrow w$ as the resolution of the grid is increased (cf. \ref{app:finite}). However, as the Fermi energy is tuned away from this ideal value, we find that the plateaus $w_{\text{ToF}}(p_y) \sim \pm 1$ progressively lose their robustness. This dramatic effect is illustrated in Fig. \ref{gaptopology2}(b), where $w_{\text{ToF}}$ is computed as a function of the Fermi energy. Here, in contrast with the Chern insulator case shown in Fig. \ref{gaptopology2}(a), the winding number $w_{\text{ToF}}$ is strongly parameter-dependent: it only reaches $w_{\text{ToF}} \approx w=+1$ at the specific Fermi energy $E_{\text{F}}=-0.25 t$ (cf. Fig. \ref{gaptopology2} (b)). Consequently, the robust behavior of these topological semi-metals will only be observed if the Fermi energy is precisely tuned at the gap closing point (such as in Fig. \ref{gaptopology} (d)). This important fact makes topological semi-metals more challenging to detect than Chern insulating phases. We point out that the phase diagrams and Skyrmion configurations presented in Ref. \cite{Alba2011}, and reproduced in Figs. \ref{gaptopology}(c)-(d), only feature trivial insulators and ``topological semi-metals'', since all the computations were performed for the peculiar configuration $t_B=0$, whose corresponding spectrum remains gapless in the ``non-trivial'' regions (cf. also Fig. \ref{skyrmionfig}). \\ \begin{figure}[!] \centering \includegraphics[width=1\columnwidth]{skyrmion_new_3_zoom.pdf} \caption{Topological phase transitions: (top) Energy spectra $E(k_y)$ and (bottom) Skyrmion configuration depicted by the vector $\bs n (\bs k)$ within the FBZ. The parameters are the same as in Fig. \ref{gaptopology} (c)-(d), namely $t_A=0.5 t$, $t_B=0$ and $\varepsilon= - 0.5 t_A$. Note that the bulk gap is closed $\Delta = 0$ in all the figures except in (a). The $x$ and $y$ components of the normalized vector field $\bs{n}(\bs k)=(d_x (\bs k) , d_y (\bs k) , d_z (\bs k))/ d(\bs k)$ are represented by red arrows for $d_z (\bs k) > 0$ and blue arrows for $d_z (\bs k) < 0$. The corresponding winding numbers $w=0, \pm 1$ are also indicated. The locations of the two Dirac points $\bs K_{\pm}$ are indicated by two vertical lines (top) and circles (bottom). A non-trivial winding number $w = \pm 1$ is clearly seen when the vector $\bs{n}(\bs k)$ has covered the whole Bloch sphere once. Namely, when the North ($\bs n=+ \bs{1_z}$) and South ($\bs n = -\bs{1_z}$) poles have been reached at the two inequivalent Dirac points $\bs K_{\pm}$ (cf. Eqs.\eqref{chern2}-\eqref{chernwind2}). Note that topological phase transitions $w \rightarrow w'$ occur through a gap closing point, which is accompanied by the vanishing of $d_z (\bs K_D)$ at the Dirac point $\bs K_D$.} \label{skyrmionfig} \end{figure} The topological semi-metal is an intriguing phase, and in this new context, an important question arises: if the topological winding number $w$ remains stable for $\Delta = 0$ (cf. Fig. \ref{gaptopology} (d)), under which conditions does this quantity change its value? We address this question by analyzing the energy spectrum $E(k_y)$ together with the Skyrmion pattern depicted by the vector $\bs n (\bs k)$ in Fig.
\ref{skyrmionfig}. Here, the parameters are the same as in Fig. \ref{gaptopology} (d) and $p_y$ is varied between $K_y$ and $8 K_y$, where transitions between $w=0 \leftrightarrow w=+1$, but also between $w=+1 \leftrightarrow w=-1$, are expected. From Fig. \ref{skyrmionfig}, we find that the winding number $w$, and its corresponding Skyrmion pattern, remain extremely stable as long as the energy bands do not touch at the Dirac points. When a direct gap closing occurs at the Dirac point $\bs K_D$, where $\bs K_D$ denotes $\bs K_{+}$ and/or $\bs K_-$, we observe a topological phase transition signaled by a change in the winding number $w$ (cf. Fig. \ref{skyrmionfig} (b) and (d)). In particular, we find that this direct gap closing is accompanied by the vanishing $d_z (\bs K_D)=0$ at the band-touching point $\bs K_D$. In fact, the gap-closing condition $d_z (\bs K_D)=0$ must necessarily be satisfied at the transition between different values of the winding number $w$, as can be deduced from the equivalence \eqref{chernwind2} and from the simple expression \eqref{chern2}. The topological phase transitions are clearly visible on the Skyrmion patterns of Fig. \ref{skyrmionfig}, where a non-trivial winding number $w = \pm 1$ emerges when the vector $\bs{n}(\bs k)$ has covered the whole Bloch sphere once. We remind the reader that this full covering of the Bloch sphere is achieved when the vector field $\bs{n}(\bs k)$ reaches the North ($\bs n=+ \bs{1}_z$) and South ($\bs n = -\bs{1}_z$) poles at the two inequivalent Dirac points $\bs K_{\pm}$. In Fig. \ref{skyrmionfig}, we observe a radical change in the behavior of $\bs n (\bs K_{\pm})$ as $p_y$ is varied. For example, for $p_y=K_y$ (Fig. \ref{skyrmionfig} (a)), the vector field $\bs n (\bs k)$ visits the North pole twice (i.e. at $\bs K_+$ \emph{and} $\bs K_-$) but never the South pole ($w=0$), while for $p_y= 4 K_y$ (Fig. \ref{skyrmionfig} (c)) the vector visits the entire Bloch sphere once ($w=1$). Between these two topologically different configurations, a direct gap closing occurs at $\bs k = \bs K_{-}$ for $p_y= 2 K_y$ (cf. Fig. \ref{skyrmionfig} (b)), a singular situation where the gapless phase is equivalent to a standard semi-metal \cite{Wallace1947,CastroNeto2009}. We note that transitions $w=0 \leftrightarrow w =\pm 1$ require a single gap closing point (cf. Fig. \ref{skyrmionfig} (b)), while transitions $w=+1 \leftrightarrow w =- 1$ involve two gap-closing points (cf. Fig. \ref{skyrmionfig} (d)). Therefore, in agreement with the equivalence \eqref{chernwind2}, we observe that the topological phase transitions between different topological semi-metals are of the same nature as the transitions between different Chern insulators, in the sense that both phenomena occur through direct gap closing (driven here by the control parameter $p_y$). \\ In summary, we conclude that the laser-coupled honeycomb lattice and the ToF method of Ref. \cite{Alba2011} offer the possibility of exploring the topological order of topological semi-metals, which survives in the absence of a band gap. However, we remind the reader that this detection scheme relies on the evaluation of the winding number $w_{\text{ToF}}$, through a ToF measurement of the vector field $\bs n (\bs k)$, which only converges towards the quantized value $w$ for a complete filling of the lowest energy band $E_- (\bs k)$.
Thus, the experimental detection of topological semi-metals would constitute a delicate task, in the sense that the Fermi energy should be finely tuned in order to maximize the filling of the lowest band (Fig. \ref{gaptopology} (d)). Let us stress that the winding number can only take three possible values, $w=0, \pm1$. Therefore, an experimental plateau $w_{\text{ToF}}(p_y) \sim \pm 1$, stemming from a slightly incomplete filling of the band $E_- (\bs k)$ and from finite size effects, would already provide an acceptable signature of non-trivial topological order. \\ We end this Section by observing that the transitions between topologically different semi-metals are driven by the laser recoil momentum $\brec$, and therefore, this interesting effect cannot be captured by the original Haldane model. \section{Conclusion}\label{conclusions} In this work, we explored the rich properties of the laser-coupled honeycomb lattice, which is described by the Hamiltonian \eqref{hamtwo}-\eqref{sz}. We demonstrated the existence of robust Chern insulators in this system, which can be reached in experimentally accessible regions of the large parameter space. In particular, we showed that the possibility of producing such non-trivial phases depends strongly on the laser coupling, through the orientation of the momentum transfer $\brec$ and the effective (laser-induced) tunneling amplitude $t$. We showed that it is important to finely tune the ratios $t_A /t_B$ and $t_{A,B}/t$ in order to open large and robust topological bulk gaps of the order $\Delta \sim 2 t$. We also discussed the role of the staggered potential $\varepsilon$, which was shown to be crucial in the fully symmetric regime $t=t_{A,B}$, and which could also be used to drive transitions between topological phases of different nature. Importantly, we addressed the question of detectability in the context of the quest for robust Chern insulators, and we stressed the importance of identifying regimes corresponding to large bulk gaps. We showed that an experimental measurement of the topological winding number \eqref{qiformula}-\eqref{dis2}, e.g. through a ToF experiment \cite{Alba2011}, yields a strong signature for two types of topological phases: the Chern insulating phase and the topological semi-metal (a semi-metal characterized by a non-trivial winding number). Importantly, we showed that the detection of the topological semi-metal would require a delicate tuning of the Fermi energy, which favors the search for Chern insulating phases from an experimental point of view. \\ The Chern insulator could alternatively be detected through the identification of chiral edge states, which are protected by the topological gap. A clear signature could be obtained, for example, using the shelving method described in Ref. \cite{Goldman2012}. From the spectra presented in Figs. \ref{figa7}(a),(c) and Fig. \ref{skyrmionfig}, we find that these topological edge states remain robust in the topological semi-metallic phase: the edge states can only disappear from the bulk gap through direct band-touching processes at the Dirac points. However, the experimental identification of these robust edge states in the semi-metallic regime, e.g. using the shelving method, remains an open question to be explored.
\\ Let us finally end this work by mentioning that this system could be directly extended to reproduce the spinful Kane-Mele model for $\mathbb{Z}_2$ topological insulators \cite{Kane:2005} (see also its generalizations \cite{Goldman:2012,Beugeling:2012,Shevtsov:2012,Goldman:2011,Guo:2009,Lan:2012}). In this case, each triangular sublattice should trap atoms in two internal atomic states (Zeeman sublevels), yielding a ``spin''-1/2 structure. These atoms should then be coupled independently by lasers in such a way that the tunneling operators, which are $2 \times 2$ matrices acting between NN sites $n_A$ and $m_B$, have the form $$U (n_A, m_B)= \exp (i \sigma_Z \brec \cdot (\bs r_{n_A} + \bs r_{m_B})/2),$$ where $\sigma_Z$ acts on the ``spins'' and $U (n_A, m_B)=U^{\dagger} (m_B,n_A)$. In this spinful honeycomb lattice configuration, non-trivial $\mathbb{Z}_2$ topological phases featuring helical edge states \cite{Kane:2005,Bernevig:2006,Wu:2006} should be reached in the non-trivial regions identified in Section \ref{topologicalphases}. Thus, the versatile laser-coupled honeycomb lattice is well suited for the exploration of two-dimensional topological phases with cold atoms \cite{Goldman:2010,Stanescu:2010,Beri:2011,Mazza:2012,Goldman2012,Buchhold:2012,Cocks:2012}. \paragraph*{Acknowledgments} We thank the Lithuanian Research Council, F.R.S-F.N.R.S (Belgium), DARPA (Optical lattice emulator project), the Emergences program (Ville de Paris and UPMC), the Carnegie Trust for the Universities of Scotland, EPSRC, and ERC (Manybo Starting Grant) for financial support. I.B.S. acknowledges the financial support of the NSF through the Physics Frontier Center at JQI, and of the ARO with funds from both the Atomtronics MURI and the DARPA OLE Program. The authors thank J. Dalibard, J. Beugnon, S. Nascimb\`ene and N. Bray-Ali for stimulating discussions.
\section{Introduction} \label{sec:intro} Many European countries impose temporary driving bans for heavy vehicles. Driving may be restricted during the night, on weekends, and on public holidays. Such bans may apply to the whole road network of a country or parts of it. When routing a heavy vehicle from a source to a destination, it is crucial to take these temporary driving bans into account. But it is not only about heavy vehicles. Temporary closures of bridges, tunnels, border crossings, mountain pass roads, or certain inner-city areas as well as closures due to roadworks may affect all road users alike. In case of road space rationing in cities, the driving restriction may depend on the license plate number. To sum up, temporary driving restrictions exist in different forms, and the closing and re-opening times of a road segment must be considered in the route planning. As a consequence of temporary driving restrictions, waiting times may be inevitable and even last for hours. During such waiting hours, the vehicle must be parked properly, and thus a suitable parking area has to be found. The driving time of the detour from and to such a parking area should also be incorporated in the route planning. Unfortunately, the underlying shortest (here: quickest) path problem becomes \textsl{NP}-hard if waiting is only allowed at dedicated locations~\cite{or-tnp-89}. This is because in this case, the so-called \emph{FIFO} (first in, first out) property is not satisfied, that is, the property that a driver cannot arrive earlier by departing later. Thus, our first research question is how we can consider dedicated waiting locations without making the underlying problem \textsl{NP}-hard. It is our aim to obtain a practical running time even for long-distance routes. In practice, we often find that small parking areas without any facilities like public toilets or restaurants cause the least detour. So an algorithm that looks for the shortest route, that is, a route with the shortest driving time, would select small parking areas in these cases, provided that waiting is necessary. But the longer the waiting time is, the more vital a secure and pleasant place for waiting becomes. So it may be important for the driver that the nearby facilities of a parking area and their quality are somehow taken into account as well. How to do this is our second research question. In our setting, a single-criterion objective is not practical. A driver may not always be in favor of the shortest route if that means spending a very long time waiting and arriving at the destination considerably later than on the quickest route, that is, a route with the earliest arrival at the destination. Conversely, a driver may not always be interested in a quickest route if that route means taking an unjustifiably long detour around temporarily closed road segments that could be avoided by waiting in a comfortable place. In other words, an early arrival at the destination (and thus low opportunity costs), little driving time (and thus low fuel costs), and pleasant waiting conditions (and thus high driver satisfaction) are competing criteria. Solutions can differ significantly with regard to these criteria. How to deal with this and find reasonable routes is the third research question.
In this paper, we answer these questions as follows: \begin{enumerate} \item We present a model in which waiting is allowed at any vertex and any edge at any time in the road graph, but waiting on edges and waiting on those vertices that do not correspond to parking areas is penalized. This is done by assigning a cost to time spent waiting there. Since driving comes at a price, too, we also assign a cost per time unit spent driving. As we will show, we can find a route with least costs in polynomial time if both cost parameters are set to the same value. \item We assume that the nearby facilities of a parking area and their quality can be expressed by some single rating number. To take account of this, we assign a waiting cost to every corresponding vertex as well. This cost is lower than the cost of waiting anywhere else in the road graph, and it is even lower the higher the rating of the parking area is. \item We return routes that are Pareto-optimal with regard to arrival time at the destination on the one hand and total costs on the other. Despite the potentially larger output, our algorithm still runs in polynomial time under the same condition as before. \end{enumerate} As our experiments reveal, many queries within Europe are answered within milliseconds. Except for some pathological cases, even more complex queries with four or more Pareto-optimal solutions are solved in less than a second. \subparagraph{Related Work} Many route planning problems are modeled as shortest path problems. To this day, the theoretically fastest known algorithm to find shortest paths on graphs with static non-negative edge weights is the algorithm of Dijkstra~\cite{d-ntpcg-59}. However, for many practical applications, it is not fast enough. One approach to speed up the computation is to reduce the search space of Dijkstra's algorithm by guiding the search towards the destination by means of estimates of the remaining distance to the destination. This approach is known as the A* algorithm~\cite{hnr-afbhd-68}. Since the advent of routing services, a lot of research has been done on efficient algorithms for routing in road networks. Routing services have to answer many queries on the same network. This can be used to speed up shortest path queries through precomputed auxiliary data. Many approaches exploit certain characteristics of road networks, for example the hierarchical structure (freeways are more important than rural roads). For an extensive overview, we refer to~\cite{bdgmpsww-rptn-16}. One particularly popular speed-up technique is Contraction Hierarchies~\cite{gssv-erlrn-12}. During preprocessing, additional shortcut edges are inserted into the graph, which skip over unimportant vertices. This preprocessing typically takes a few minutes. Then, shortest path queries can be answered in less than a millisecond. A natural approach to handle driving restrictions is to model them as time-dependent travel times~\cite{d-aassp-69}. For the blocked time, the travel time of the edge can be set to infinity. Time-dependent route planning has also received some attention and effective speed-up techniques are known~\cite{bgsv-mtdtt-13,bdpw-dtdrp-16,dn-crdtd-12,d-tdsr-11,ndls-bastd-12}. Variants of our problem have been studied in the literature. In~\cite{desaulniers2000shortest} a related problem is discussed where nodes (not edges) have time windows and waiting is associated with a cost.
In~\cite{pugliese2013survey} an overview is given of different exact approaches to solving shortest path problems with resource constraints. Time windows on nodes are a specific kind of constraint in this framework. More specialized models for routing applications have been proposed. The authors of~\cite{twb-rpbtd-18} study the problem of planning a single break, considering driving restrictions and provisions on driver breaks. They aim to find only the route with the earliest arrival. \subparagraph{Contribution} We present a novel model that helps answer our three research questions in the context of temporary driving restrictions and dedicated waiting locations. To the best of our knowledge, this is the first unifying approach that gives answers to all three research questions. Our theoretical analysis reveals that our model can be solved to optimality in polynomial time, given certain restrictions on the parameterization. The experimental evaluation of our implementation demonstrates a practical running time. \subparagraph{Outline} In \cref{sec:problem}, we give a formal definition of the routing problem at hand. In \cref{sec:algorithm}, we present an exact algorithm for this problem. In \cref{sec:analysis}, we analyze the complexity of the problem and show that our algorithm runs in polynomial time if the costs for driving are the same as for waiting anywhere else than at a dedicated waiting location. In \cref{sec:impl}, we describe techniques to speed up the computation. In \cref{sec:exp}, we present the main results of our experiments. Finally, we conclude in \cref{sec:conclusion}. \section{Problem} \label{sec:problem} A problem instance comprises a \emph{road graph with ban intervals on edges, driving costs and location-dependent waiting costs} (or \emph{road graph with ban intervals and costs} for short) as well as a set of \emph{queries}. The road graph is characterized by the following attributes: \begin{itemize} \item A set $V$ of $n$ vertices and a set $E$ of $m$ directed edges. \item A mapping $\Phi$ that maps each edge $e \in E$ to a sequence of disjoint time intervals, where the edge is considered to be \emph{closed} during each interval. Precisely, for any \emph{ban interval} $[t^{closed},t^{open}) \in \Phi(e)$ of an edge $e$, $t^{open}$ denotes the first point in time after $t^{closed}$ where the edge is open again. Here and in the following, all points in time are integers and the length of an interval is denoted by $|[t^{closed},t^{open})|$ and equals $t^{open} - t^{closed} > 0$. During such a time span, a vehicle on the corresponding road segment must not move. We denote the total number of ban intervals as $b$. \item A mapping $\delta: E \rightarrow \mathbb{N}$ that maps each edge $e:=(u,v)\in E$ to the time $\delta(e)$ that it takes to drive from $u$ to $v$, provided the edge is open. \item A mapping $\rho$ that maps each vertex to a rating in $\{0,1,\ldots,r\}$ with $r \le n$. Rating 0 means \emph{unrated}, that is, it is assumed that parking the vehicle there is highly difficult, dangerous, and not permitted. In contrast to an unrated location, we call a vertex $v$ with $\rho(v) > 0$ a \emph{parking location}. \item A parameter set of abstract costs, consisting of $d \in \mathbb{Q}_{\ge 0}$, the cost per unit of driving time, and $\cwait[i] \in \mathbb{Q}_{\ge 0}$ for all $i$ from 0 to $r$, the cost per time unit of waiting on a vertex with rating~$i$. Edges are always unrated, so waiting there costs $\cwait[0]$ per time unit. W.l.o.g.
$\cwait[i] < \cwait[i-1]$ holds for all $i$ between 1 and $r$, that is, we assume that waiting on vertices with a higher rating costs less than waiting on those with a lower rating. \end{itemize} A \emph{$u$-$v$-route} is a triple $(R, A,D)$ of three sequences of the same length $\ell:=|R|=|A|=|D|$. Here, $R$ is the sequence of vertices along the route. It describes a (not necessarily simple) \emph{path} in the graph that starts at $u$ and ends in $v$, that is, $e_i:=(R[i],R[i+1])\in E$ for all $1 \le i< \ell$ and $R[1]=u$ and $R[\ell]=v$. The other two sequences $A$ and $D$ denote the \emph{arrival times} at and the \emph{departure times} from the respective vertices, where $A[i] \le D[i]$ for all $1\le i \le \ell$ and $A[{i+1}] - D[i] \ge \delta(e_i)$ for all $1\le i < \ell$ holds. A query comprises a \emph{source} $\ensuremath{s} \in V$ and a \emph{destination} $\ensuremath{z} \in V$ as well as a \emph{planning horizon} $H$. The latter is defined as the time interval between an \emph{earliest departure time} $t^{min}$ from $\ensuremath{s}$ and a \emph{latest arrival time} $t^{max}$ at $\ensuremath{z}$. Waiting costs arise as soon as the planning horizon opens. For a given query, we look for \emph{feasible} $\ensuremath{s}$-$\ensuremath{z}$-routes. A route is feasible with respect to the planning horizon if $A[1]=t^{min}$ and $D[\ell]\le t^{max}$. In addition, ban intervals must be taken account of. Let $T_i:=[D[{i}],A[{i+1}])$ be the time interval in which the edge $e_i:=(R[i],R[{i+1}])$ of the route's path is traversed. A route is feasible with respect to the ban intervals if $\sum_{I \in \Phi(e_i)} | T_i \cap I | \le |T_i| - \delta(e_i)$ for all $1\le i < \ell$. Here, $\sum_{I \in \Phi(e_i)} | T_i \cap I |$ is the time during which the edge between $R[i]$ and $R[{i+1}]$ is closed while the edge is being traversed. Let \emph{travel time} include driving time and waiting time. The \emph{travel time costs} of a route are the sum of the waiting time costs and the driving time costs. So given a route of length $\ell$, the travel time costs are \[\sum_{i=1}^{\ell} \cwait[{\rho(R[i])}] \cdot \left(D[i] - A[i]\right) + \sum_{i=1}^{\ell-1} \Bigl( \cwait \cdot \left(A[{i+1}] - D[i] - \delta(e_i)\right) + d \cdot \delta(e_i) \Bigr), \] where we use $e_i:=(R[i],R[{i+1}])$. We say an $\ensuremath{s}$-$\ensuremath{z}$-route is \emph{Pareto-optimal} (or simply \emph{optimal}) if it is feasible and if, for every other feasible $\ensuremath{s}$-$\ensuremath{z}$-route, its travel time costs are lower, its arrival time at $\ensuremath{z}$ is earlier, or both criteria are equal. For a query, the objective is to find a maximal set of (Pareto-)optimal $\ensuremath{s}$-$\ensuremath{z}$-routes such that no two routes in the set have both the same arrival time at~$\ensuremath{z}$ and the same travel time costs. \section{Algorithm} \label{sec:algorithm} The algorithm maintains a priority queue. Each entry of the queue consists of a vertex and a point in time within the planning horizon as key. We say a vertex is \emph{visited} at a certain point in time whenever we remove the top entry from the queue, that is, an entry with the earliest time among the entries in the queue. At every vertex $v \in V$, we store a time-dependent function $\mathcal{C}_v: H \rightarrow \mathbb{Q}_{\ge 0}\cup \{\infty\}$. It maps a point in time $t$ within the planning horizon $H$ to an upper bound on the minimum travel time cost over all $\ensuremath{s}$-$v$-routes that end in $v$ at time $t$. We call this function the \emph{cost profile} of $v$ or, more generally, the \emph{label} of $v$.
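Anticipating the implementation described in \cref{sec:impl}, such a profile can be stored as a sorted list of linear pieces. The following minimal sketch (with names of our choosing, not our production code) illustrates this representation and its evaluation:
\begin{verbatim}
#include <limits>
#include <vector>

// One linear piece of a cost profile: valid from 'start' onward,
// beginning at cost 'cost' and growing by 'gradient' per time unit.
// The parent vertex is kept for route reconstruction.
struct Piece {
    int start;
    double cost;
    double gradient;
    int parent;
};

struct CostProfile {
    std::vector<Piece> pieces;  // sorted by 'start'

    // Evaluate the profile at time t; infinite before the first piece.
    double at(int t) const {
        const Piece* last = nullptr;
        for (const Piece& p : pieces) {
            if (p.start > t) break;
            last = &p;
        }
        if (last == nullptr)
            return std::numeric_limits<double>::infinity();
        return last->cost + last->gradient * (t - last->start);
    }
};
\end{verbatim}
A downward discontinuity is simply a piece whose initial cost lies below the value of the preceding piece; in practice, evaluation would use binary search rather than the linear scan shown here.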
The algorithm works in a \emph{label correcting} manner in the sense that a vertex may be visited multiple times, albeit at different times within the planning horizon. Before we describe the phases of the algorithm in greater detail, we introduce an auxiliary time-dependent function~$\mathcal{T}_e$ for every edge $e \in E$. It maps a time $t$ at the head $v$ of an edge $e:=(u,v)$ to the \emph{shortest travel time} that it takes to traverse the edge from $u$ to $v$ completely and be at $v$ at time $t$, possibly including waiting time. That is, for a time $t$ at $v$, $\mathcal{T}_e(t)$ is the minimum period~$p$ such that $p - \sum_{I \in \Phi(e)} | [t-p,t) \cap I | \ge \delta(e)$ holds if such a $p$ exists, and $\infty$ otherwise. In other words, $t - \mathcal{T}_e(t)$ is the latest departure time from $u$ in order not to arrive at $v$ later than at time $t$. An example is given in \cref{fig:exampleTravelTimeFunction}. \begin{figure} \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\linewidth}{!}{\input{Figures/exampleTravelTimeFunction.tikz} } \caption{Travel time function $\mathcal{T}_e$ of an edge $e$ with ban intervals (grey) and a driving time $\delta(e)$ of 3. The latest departure to be at $v$ at time $t$ is $t-\mathcal{T}_e(t)$.} \label{fig:exampleTravelTimeFunction} \end{subfigure} \hspace{1mm} \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\linewidth}{!}{\input{Figures/exampleCostProfileNeighbor.tikz}} \caption{Cost profile of vertex $v$ after linking, that is, after considering travel time (dashed) and waiting time at $v$ (solid).} \label{fig:exampleCostProfileNeighbor} \end{subfigure} \caption{Computing the cost profile of a vertex $v$. Let $v$ be adjacent to the source $\ensuremath{s}$ via an edge $e:=(\ensuremath{s},v)$ with three ban intervals and a driving time $\delta(e)$ of 3. The corresponding travel time function is given in \cref{fig:exampleTravelTimeFunction}. It is infinite between $0=t^{min}$ and $3=\delta(e)$. In \cref{fig:exampleCostProfileNeighbor}, we see the cost profile $\mathcal{C}_{v}$ after considering the travel time along the edge (dashed) and after considering waiting at $v$ (solid). Here, the assumed cost parameters are $\cwait[{\rho(\ensuremath{s})}] = 0$, $\cwait[{\rho(v)}] = 0.5$, and $d = \cwait = 2$, where $\cwait[{\rho(\ensuremath{s})}] = 0$ implies that the cost profile $\mathcal{C}_{\ensuremath{s}}$ at the source is 0 over the whole planning horizon. } \label{fig:exampleAlgorithm} \end{figure}
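To make the evaluation of $\mathcal{T}_e$ concrete, the following sketch (with names of our choosing, and an explicit horizon start $t^{min}$ in place of an unbounded backward search) scans the sorted ban intervals of $e$ backwards from $t$ and accumulates open time until the driving time $\delta(e)$ fits:
\begin{verbatim}
#include <climits>
#include <vector>

struct BanInterval {
    int closed;  // first point in time where the edge is closed
    int open;    // first point in time where the edge is open again
};

// Evaluate T_e(t): the minimum period p such that delta units of
// driving time fit into the open time within [t - p, t). Returns
// INT_MAX (infinity) if the departure would fall before tMin.
// The ban intervals are assumed to be sorted and disjoint.
int travelTime(const std::vector<BanInterval>& bans,
               int delta, int t, int tMin) {
    int remaining = delta;  // driving time still to be accommodated
    int cursor = t;         // scan position, moving backwards in time
    for (auto it = bans.rbegin(); it != bans.rend(); ++it) {
        if (it->closed >= cursor) continue;   // ban lies at or after cursor
        if (it->open < cursor) {              // open stretch [open, cursor)
            if (cursor - it->open >= remaining) break;  // driving fits here
            remaining -= cursor - it->open;
        }
        cursor = it->closed;                  // jump below the ban
    }
    int departure = cursor - remaining;       // latest feasible departure
    return departure >= tMin ? t - departure : INT_MAX;
}
\end{verbatim}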
In the initialization phase of the algorithm, we set $\mathcal{C}_\ensuremath{s}(t) := \cwait[{\rho(\ensuremath{s})}] \cdot (t-t^{min})$ for all $t \in H$. For every other $v\in V \setminus \{\ensuremath{s}\}$, we set $\mathcal{C}_v(t) := \infty$ for all $t \in H$. Furthermore, we insert the source $\ensuremath{s}$ with key $t^{min}$ into the priority queue. As long as the queue is not empty, we are in the main loop of the algorithm. In every iteration of the main loop, we remove the top entry from the queue. Let us suppose we visit a vertex $u$ at time $t^{visit} \ge t^{min}$. Then, we check for every edge $e:=(u,v)$ going out of $u$ whether we can improve the cost profile $\mathcal{C}_v$ of $v$. We do so in three steps. In the first step, we consider the travel time along the edge and set \begin{equation} \label{eqn:step1} \mathcal{C}'_v(t) := \mathcal{C}_u(t - \mathcal{T}_e(t)) + d \cdot \delta(e) + \cwait \cdot (\mathcal{T}_e(t) - \delta(e)) \end{equation} for all $t$ with $t^{visit} + \mathcal{T}_e(t) \le t \le t^{max}$. For all other $t\in H$ we set $\mathcal{C}'_v(t) := \infty$. In the second step, we consider waiting at $v$ at cost $\cwait[{\rho(v)}]$ per time unit and set \begin{equation} \label{eqn:step2} \mathcal{C}'_v(t) := \min \{\mathcal{C}'_v(t') + \cwait[{\rho(v)}] \cdot (t-t') \mid t^{min} \le t'\le t \} \end{equation} for all $t \in H$. An example of the first two steps is illustrated in \cref{fig:exampleCostProfileNeighbor}. Finally, in the third step, we compare $\mathcal{C}'_v$ and $\mathcal{C}_v$. Let $t^*$ be the earliest point in time such that $\mathcal{C}'_v(t^*)$ is less than $\mathcal{C}_v(t^*)$, provided such a time exists. If it exists, we set $\mathcal{C}_v(t)$ to the minimum of $\mathcal{C}_v(t)$ and $\mathcal{C}'_v(t)$ for all $t^* \le t \le t^{max}$. Furthermore, we insert vertex $v$ with key $t^*$ into the priority queue or decrease the key if $v$ is already contained. When the priority queue is empty, we enter the finalization phase of the algorithm. We say a time-cost-pair $(t,\mathcal{C}_\ensuremath{z}(t))$ with $t \in H$ and $\mathcal{C}_\ensuremath{z}(t) < \infty$ is Pareto-optimal if there is no time~$t'$ with $t^{min} \le t' < t$ and $\mathcal{C}_\ensuremath{z}(t') \le \mathcal{C}_\ensuremath{z}(t)$. In the finalization phase, we extract an $\ensuremath{s}$-$\ensuremath{z}$-route for every Pareto-optimal time-cost-pair. So let such a time-cost-pair $(t,\mathcal{C}_\ensuremath{z}(t))$ be given. In order to find a corresponding route $(R, A,D)$, we initially push $\ensuremath{z}$, $t$, and $t$ to the front of the (empty) sequences $R$, $A$, and $D$, respectively. The following is done iteratively until we reach the source, that is, until $R[1]=\ensuremath{s}$ holds. First, we look for an incoming edge $e:=(u,R[1])$ of $R[1]$ and a departure time $t$ from $u$ with \[\mathcal{C}_u(t) + d \cdot \delta(e) + \cwait \cdot (\mathcal{T}_e(A[1]) - \delta(e)) = \mathcal{C}_{R[1]}(A[1]) \] which must exist. We push $u$ and $t$ to the front of $R$ and $D$, respectively. Then, we push the earliest time $t\le D[1]$ such that \[\mathcal{C}_{R[1]}(t) + \cwait[{\rho({R[1]})}] \cdot (D[1]-t) = \mathcal{C}_{R[1]}(D[1]) \] holds to the front of the arrival time sequence $A$, and continue with the next iteration. This concludes the description of the finalization phase and thus the whole algorithm. For the correctness of the algorithm it is important that the upper bound $\mathcal{C}_v(t)$ on the minimum travel time cost is tight for all $t \le t^{visit}$ and all $v \in V$ whenever we visit a vertex at time $t^{visit}$. After the main loop, it is tight for every $t\in H$ and all $v \in V$, especially for $\ensuremath{z}$. This can be proven by induction on the time of visit. The time of visiting a vertex increases monotonically because whenever a vertex is inserted into the queue or its key is decreased, the (new) value of that key can only be later than the current time of visit.
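We remark that, although the minimization in the second step \eqref{eqn:step2} ranges over all earlier points in time, it can be evaluated in a single forward sweep by maintaining a running minimum of $\mathcal{C}'_v(t') - \cwait[{\rho(v)}] \cdot t'$. The sketch below (names are ours) shows this on a profile sampled at unit time steps; the equivalent operation can be performed directly on the linear pieces:
\begin{verbatim}
#include <algorithm>
#include <limits>
#include <vector>

// Second step on a profile sampled at unit time steps:
// cost[t] := min over t' <= t of cost[t'] + cWait * (t - t').
// Rewriting the minimand as cWait * t + (cost[t'] - cWait * t')
// turns the nested minimization into a single linear scan.
void relaxWaiting(std::vector<double>& cost, double cWait) {
    double best = std::numeric_limits<double>::infinity();
    for (std::size_t t = 0; t < cost.size(); ++t) {
        best = std::min(best, cost[t] - cWait * static_cast<double>(t));
        cost[t] = std::min(cost[t], best + cWait * static_cast<double>(t));
    }
}
\end{verbatim}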
\section{Analysis} \label{sec:analysis} In this section, we first show the intractability of the general problem. Then, we restrict the problem by requiring the driving cost $d$ to be equal to the unrated waiting cost $\cwait$, and prove that our algorithm solves the restricted problem in polynomial time. \subparagraph{Intractability of the General Problem} The first two theorems show the intractability of the general problem if $d \ne \cwait$. Parking locations are not used in the proofs, so even the simplified problem without parking locations is intractable if $d \ne \cwait$. \begin{restatable}{theorem}{nphard} \label{thm:np_hard} If $d < \cwait[0]$ then it is $\textsl{NP}$-complete to decide whether there is a feasible route with travel time costs less than or equal to a given threshold $k$. \end{restatable} We prove this theorem in Appendix~\ref{sec:appendixA} by reduction from PARTITION. \begin{restatable}{theorem}{exproutes} \label{thm:exp_routes} If $d > \cwait[0]$ then the number of Pareto-optimal routes can be exponential in the number of vertices. \end{restatable} Given a number of vertices, a graph with ban intervals can be constructed that has exponentially many routes, all of which are Pareto-optimal. The construction and the proof that all those routes are Pareto-optimal can be found in Appendix~\ref{sec:appendixB}. \subparagraph{Tractable Problem Variant} For the remaining analysis we assume $d = \cwait$. In the setting without parking locations, there is only one optimal solution, since the quickest solution also has the least cost. Hence, this setting is a single-criterion shortest path problem with time-dependent edge weights that fulfill the \emph{FIFO} property and can be solved in polynomial time with a time-dependent variant of Dijkstra's algorithm~\cite{d-aassp-69}; in this case, our algorithm also reduces to such a time-dependent Dijkstra variant and has polynomial running time. Now we turn to the setting $d = \cwait$ with parking locations and show that it is still tractable. Cost profiles are piecewise linear functions. An important aspect of our polynomial time proof is to count the non-differentiable points of the profiles. The running time of each profile operation of our algorithm is linear in the number of non-differentiable points of the involved profiles. These points are either \emph{convex}, \emph{concave}, or \emph{discontinuous}, meaning that a neighborhood around such a point exists in which the profile is convex, concave, or discontinuous, respectively. At a discontinuous point, a profile always jumps down. The non-differentiable points in the cost profiles are induced by the travel time functions. In our example of \cref{fig:exampleTravelTimeFunction}, the convex points are $\{4,8,11\}$, the concave points are $\{6,9,12\}$, and the discontinuous points are $\{10,13,15\}$. For a travel time function $\mathcal{T}_e$ of an edge $e$, we can assign a convex point $t$ to the beginning of a ban interval at $t$, a concave point $t$ to the end of a ban interval at $t$, and a discontinuous point $t$ to the end of a ban interval at $t-\mathcal{T}_e(t)$. From this initial assignment, we can derive a ban interval assignment of the convex or discontinuous points of cost profiles. We do not count the number of concave points of a cost profile because the gradient of every piece must be in $\{\cwait,\ldots,\cwait[r]\}$, so the number of consecutive concave points in a cost profile is bounded by $r$. Initially, a profile $\mathcal{C}_{v}$ of a vertex $v$ has no convex or discontinuous points. Such points may be introduced in the third step of an iteration of the algorithm when the auxiliary profile $\mathcal{C}'_{v}$ is merged into $\mathcal{C}_{v}$.
In the second step of an iteration, no new convex or discontinuous points can arise in $\mathcal{C}'_{v}$, so all such points must be created in $\mathcal{C}'_{v}$ in the first step. Since $d=\cwait$, $\mathcal{C}'_v(t)$ is set to $\mathcal{C}_u(t - \mathcal{T}_e(t)) + d \cdot \mathcal{T}_e(t)$ (compare \cref{eqn:step1}) for some edge $e=(u,v)$ in this step. If $t_v$ is a convex or discontinuous point of $\mathcal{C}'_v$, then $\mathcal{T}_e$ must be convex or discontinuous in the same point in time, or $\mathcal{C}_u$ must be convex or discontinuous in $t_u:=t_v - \mathcal{T}_e(t_v)$. In the former case, $t_v$ inherits the assignment of the same point in time in $\mathcal{T}_e$, whereas in the latter case, $t_v$ inherits the assignment of $t_u$ in $\mathcal{C}_u$. Since the cost profiles change during the algorithm, we do not only assign a ban interval to every convex or discontinuous point but also an iteration. Again, in the former case, $t_v$ is assigned the current iteration, whereas in the latter case, $t_v$ inherits the iteration assignment of $t_u$ in $\mathcal{C}_u$. \begin{lemma} \label{lemma:numberOfPointsInCostProfile} If $d = \cwait$ then a cost profile after iteration $i$ has at most $ib$ convex and at most $ib$ discontinuous points. \end{lemma} \begin{proof} In the following, we denote the state of the profile $\mathcal{C}_v$ after iteration $i$ by $\mathcal{C}^i_v$. Let $t_v$ be a convex or discontinuous point of $\mathcal{C}^i_{v}$ that is assigned both to an iteration $k$ and to a ban interval of some edge with head $x$. We can follow the inheritance relation until we finally reach a convex or discontinuous point $t_x$ in $\mathcal{C}^k_x$. By induction, we have $\mathcal{C}^i_v(t_v) = \mathcal{C}^k_x(t_x) + d \cdot (t_v - t_x)$. Now suppose there are two convex or two discontinuous points $t^1_v < t^2_v$ in the profile $\mathcal{C}^i_{v}$ that are assigned to the same ban interval and the same iteration $k$, so they can be traced back to the same point $t_x$ in $\mathcal{C}^k_x$. Then the previous observation implies that $\mathcal{C}^i_v(t^2_v) - \mathcal{C}^i_v(t^1_v) = d \cdot (t^2_v - t^1_v)$ holds, that is, the profile $\mathcal{C}^i_{v}$ must contain a piece with gradient $d$ that contains both $t^1_v$ and $t^2_v$. But then $t^2_v$ can neither be convex nor discontinuous. Hence, two convex or two discontinuous points must differ in their assigned ban interval or their assigned iteration, so there can be at most $ib$ convex and at most $ib$ discontinuous points, respectively. \end{proof} \begin{restatable}{lemma}{iterations} \label{lemma:iterations} If $d = \cwait[0]$ then the total number of iterations is at most $2n(b(r+1)+1)$. \end{restatable} As in the proof of Lemma~\ref{lemma:numberOfPointsInCostProfile}, we use the ban interval assignment of convex and discontinuous points. Every visit of a vertex can either be assigned to the start or end of a ban interval, or it can be assigned to a concave point of the final cost profile of the vertex. The detailed proof is omitted due to space limitations and can be found in Appendix~\ref{sec:poly_iterations}. \begin{theorem} If $d = \cwait[0]$ then the running time of the algorithm is polynomial. \end{theorem} \begin{proof} From Lemma~\ref{lemma:numberOfPointsInCostProfile} with the bound from Lemma~\ref{lemma:iterations}, it follows that the number of pieces of any profile that is constructed during the algorithm is polynomial.
\section{Implementation} \label{sec:impl} The past decade has seen a lot of research effort on the engineering of efficient route planning algorithms. This section describes the speed-up techniques we employ in our implementation and some implementation details. We store cost profiles as a sorted list of pieces. Each piece is represented as a triple: the point in time from which the piece is valid, the cost of reaching the vertex at the beginning of the piece, and the gradient of the piece. For each piece we also store a parent vertex. This allows us to efficiently reconstruct routes by traversing the parent pointers. We employ A* to guide the search toward the destination vertex. The queue is ordered by the original key plus an estimate of the remaining distance (here: driving time) to the destination. The estimate for vertex $u$ is denoted by $\pi_\ensuremath{z}(u)$. We use the exact shortest driving time to $\ensuremath{z}$ without driving restrictions as the potential, which is the best possible potential in our case. We efficiently extract these exact distances from a Contraction Hierarchy \cite{gssv-erlrn-12}, as described in \cite{strasser2019perfect}. Since our algorithm has to run until the queue is empty, we cannot immediately terminate when we reach the destination. However, we get a tentative cost profile at the destination, which allows for effective pruning. Additionally, we do not need to insert a vertex $u$ into the queue when $t^{visit} + \pi_\ensuremath{z}(u) > t^{max}$ holds, that is, when we cannot reach the destination from $u$ within the planning horizon. We employ pruning to avoid linking and merging when possible using the following rules: \begin{itemize} \item Consider a vertex $u$ that is visited at $t^{visit}$. Before relaxing any outgoing edges, we first check if $u$ can actually contribute to any optimal route to $\ensuremath{z}$. If $\mathcal{C}_u(t) + \pi_\ensuremath{z}(u) \cdot d > \mathcal{C}_\ensuremath{z}(t + \pi_\ensuremath{z}(u))$ for all $t$ with $t^{visit} \leq t < t^{max}$, then $u$ cannot contribute to an optimal route to $\ensuremath{z}$ and can thus be skipped. \item Let $\alpha(u) := \min\{t \mid \mathcal{C}_u(t) < \infty\}$ be the first point in time such that $u$ can be reached with finite costs and $\infty$ if no such point exists. For each vertex $u$, we maintain a lower bound $\beta(u) := \min_t\{\mathcal{C}_u(t)\}$ and an upper bound $\gamma(u) := \max_{t > \alpha(u)}\{\mathcal{C}_u(t)\}$ or $\infty$, if there are no finite costs. They can be updated efficiently during the merge operation. An edge $(u, v)$ only needs to be relaxed if $\beta(u) + \delta(u, v) \cdot d \leq \gamma(v)$ or $\alpha(u) + \delta(u, v) < \alpha(v)$; see the sketch after this list. \item When all of the pieces of the cost profile of a vertex $u$ share the same parent vertex $v$ and $\rho(u) = 0$, the edge $(u, v)$ back to the parent does not need to be relaxed, as loops can never be part of an optimal route unless they include waiting at a parking location. \end{itemize}
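The following sketch (hypothetical C++; the names and the interface are illustrative, not our production code) shows the bound bookkeeping behind the second pruning rule.
\begin{verbatim}
#include <limits>

constexpr double INF = std::numeric_limits<double>::infinity();

// Per-vertex pruning bounds: alpha is the earliest time with finite
// cost, beta the minimum cost, gamma the maximum finite cost after
// alpha (infinity if there are no finite costs).
struct Bounds {
    double alpha = INF;
    double beta  = INF;
    double gamma = INF;
};

// Edge (u, v) with driving time delta_uv has to be relaxed only if it
// may improve either the cost or the earliest arrival at v.
bool needsRelaxation(const Bounds& u, const Bounds& v,
                     double delta_uv, double d) {
    return u.beta + delta_uv * d <= v.gamma
        || u.alpha + delta_uv < v.alpha;
}
\end{verbatim}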
\section{Experimental Evaluation} \label{sec:exp} Our algorithm is implemented in C++14 and compiled with Visual C++. For the CH-potentials, we build upon the Contraction Hierarchy implementation of RoutingKit\footnote{\url{https://github.com/RoutingKit/RoutingKit}}~\cite{dsw-cch-15}. All experiments were conducted on a Windows 10 Pro machine with an Intel i7-7600 CPU with a base frequency of 3.4\,GHz and 32\,GB of DDR4 RAM. The implementation is single-threaded. Our experimental setup is taken from~\cite{b-rptrc-18}. We perform experiments on a road network used in production by PTV\footnote{\url{https://ptvgroup.com}}. The network is adapted from data by TomTom\footnote{\url{https://tomtom.com}}. It covers Austria, France, Germany, Italy, Liechtenstein, Luxembourg, and Switzerland. It has 21.9 million vertices and 47.6 million edges. We use travel times, driving bans, and road closures for a truck with a gross combined weight of 40~tons. Driving bans were derived from the current legislation of the respective countries. This includes Sunday driving bans in all countries, a late Saturday driving ban in Austria, and night driving bans in Austria, Liechtenstein, and Switzerland. Additionally, there is a Saturday driving ban in Italy during the summer holidays. The dataset also includes several local road closures in city centers. Parking locations were taken from data by Truck Parking Europe\footnote{\url{https://truckparkingeurope.com}}. There is a total of 15\,317 vertices classified as parking locations in our dataset. The dataset also contains the capacity of each parking location. We assign each parking location a rating between 1 and 5 depending on its capacity. Table~\ref{tab:parking_lots} shows the number of parking locations for each rating and our default waiting costs. We also evaluate different parameterizations. The waiting costs are calculated such that, for an hour of waiting, a detour of up to four minutes will be taken to get to a parking location rated better by one. For waiting at the source vertex of a query, we assign zero waiting costs regardless of the rating. \begin{table} \centering \caption{Rating and default waiting cost by capacity of parking locations. The driving cost is the same as the cost for waiting at unrated vertices.}\label{tab:parking_lots} \begin{tabular}{lrrrrrr} \toprule Capacity of parking locations & $\geq 80$ & $\geq 40$ & $\geq 15$ & $\geq 5$ & $\geq 1$ & - \\ \midrule Rating & 5 & 4 & 3 & 2 & 1 & 0 \\ Default waiting costs & 3 & 4 & 5 & 6 & 7 & 14 \\ Number of parking locations & 448 & 997 & 2\,664 & 5\,418 & 5\,748 & 21.9\,M \\ \bottomrule \end{tabular} \end{table} We generate two sets of source-destination pairs and combine them with different planning horizons. The first set is used to evaluate the practicality of our model. It is designed to make the algorithm cope with the night driving bans in Austria and Switzerland. We select 100 pairs of vertices. One vertex is randomly selected from the area around southern Germany. The other vertex is selected from the area around northern Italy. See Figure~\ref{fig:geofence_vis} for exact coordinates and a visualization. We store each pair in both directions. Hence, we have 200 vertex pairs in this set.
The planning horizon starts on Monday, 2018/7/2, at 18:00, with a length of one day (query set A1) or two days (A2). Figure~\ref{fig:result_example} depicts an example query from A1. The second set is generated by selecting 100 source vertices uniformly at random. From each source vertex, we run Dijkstra's algorithm without a specific target, ignoring any driving restrictions. Dijkstra's algorithm explores the graph by traversing vertices in increasing distance from the source vertex. We use the order in which vertices are settled to select destination vertices with different distances from the source. Every $2^i$th settled vertex with $i \in [12,24]$ is stored. We call $i$ the \emph{rank} of the query. This results in 1\,300 source-destination pairs. We combine these vertex pairs with four planning horizons: starting at Friday 2018/7/6, 06:00 for one day (denoted as query set B1) or two days (B2), and starting later that day at 18:00 for one day (B3) or two days (B4). \begin{figure} \centering \includegraphics[width=.5\textwidth]{Figures/compareSimple.PNG} \caption{ Optimal paths of an example query from northwestern Austria to northern Italy, slightly south of Milano. The source is indicated by a red marker, the destination by a yellow one. The other markers indicate the parking locations along the respective routes. The blue route in the east has the shortest driving time, around 10.5 hours, but the latest arrival. It schedules a waiting time of seven hours during the night driving ban at a parking location of rating 4 and afterwards takes the fastest route to the destination. The green route in the middle arrives an hour earlier at the destination, but its driving time is over two hours longer. This route includes three hours of waiting at a parking location of rating 5. The black route in the west takes 16 hours to drive, includes only a few minutes of waiting, and arrives six minutes before the green one. } \label{fig:result_example} \end{figure} \begin{table} \centering \caption{ Query statistics for different waiting cost parameters for query set A1. The first six columns show the waiting cost parameters. Waiting costs at the source are always set to zero. The waiting time columns depict the share of the time spent waiting at vertices with the respective rating, summed up over all routes. The routes column gives the average number of optimal routes per query. The arrival time deviation column contains the average difference between the earliest and latest arrival times among all optimal routes for all queries. Running times are also averaged.
}\label{tab:cost_params} \begin{tabular}{r@{\hskip4pt}r@{\hskip4pt}r@{\hskip4pt}r@{\hskip3pt}r@{\hskip2pt}rr@{\hskip5pt}r@{\hskip5pt}r@{\hskip5pt}r@{\hskip5pt}r@{\hskip5pt}r@{\hskip5pt}r@{\hskip5pt}r@{\hskip5pt}r@{\hskip5pt}r@{\hskip5pt}r} \toprule {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & Optimal & Arrival time & Running \\ {} & {} & {} & {} & {} & {} & \multicolumn{7}{c}{Waiting time by rating [\%]} & Routes & deviation & time \\ \cmidrule(lr){7-13} $\cwait[5]$ & $\cwait[4]$ & $\cwait[3]$ & $\cwait[2]$ & $\cwait[1]$ & $\cwait[0] = d$ & $\ensuremath{s}$ & 5 & 4 & 3 & 2 & 1 & 0 & [\#] & [h:mm] & [ms] \\ \midrule 1 & 10 & 50 & 100 & 1000 & 10000 & 59.4 & 2.5 & 5.8 & 23.3 & 3.1 & 2.8 & 3.1 & 3.02 & 2:21 & 364.1 \\ 1 & 2 & 4 & 8 & 16 & 128 & 62.2 & 3.5 & 6.5 & 19.8 & 2.1 & 2.8 & 3.1 & 3.02 & 2:20 & 412.3 \\ 1 & 2 & 4 & 8 & 16 & 32 & 70.8 & 6.0 & 5.1 & 12.1 & 1.0 & 1.9 & 3.1 & 2.96 & 2:20 & 435.4 \\ 3 & 4 & 5 & 6 & 7 & 14 & 79.3 & 6.2 & 3.1 & 4.6 & 1.5 & 2.0 & 3.3 & 2.86 & 2:17 & 529.4 \\ 16 & 24 & 28 & 30 & 31 & 32 & 85.2 & 4.6 & 1.1 & 3.3 & 1.1 & 1.1 & 3.6 & 2.71 & 2:14 & 742.2 \\ \bottomrule \end{tabular} \end{table} We first investigate whether allowing waiting everywhere (albeit penalized) may lead to unwanted results in practice. On the one hand, routes with many stops are impractical. Our experiments indicate that this is not the case: across all routes for A1, there is at most one additional stop scheduled (0.2 on average). On the other hand, let us call a route \emph{precarious} if waiting is scheduled at an unrated location (other than the source vertex). For 187 of the 200 queries of A1, there is no precarious route in the Pareto set. For the other 13 queries, the Pareto set always contains more than one route, and it is always only the quickest route in the Pareto set that is precarious. So filtering out such routes in a postprocessing step does not make a query infeasible. On average, the second quickest route in the Pareto set arrives 422\,s later than the quickest but precarious route (minimum 38\,s, maximum 877\,s). We also evaluate the influence of different waiting cost parameterizations on the performance and the results of our algorithm. Table~\ref{tab:cost_params} depicts the results. We observe that the parameterization has only a limited influence on the results of the algorithm. The average number of optimal routes and the arrival time deviation change very little even between the two most extreme configurations. Since waiting at the source vertex costs nothing, the majority of the waiting in all configurations is scheduled there. When waiting at parking locations is much cheaper than driving, less waiting time is scheduled at the source and more at parking locations. Moreover, clear differences between the costs lead to better running times because the cost profiles become less complex. \begin{table} \centering \caption{Query statistics for all six query sets. First, for all queries. Second, only for non-trivial queries. A query is denoted as trivial if there is exactly one optimal route which is also optimal when ignoring all driving restrictions. All numbers are averages unless reported otherwise. The arrival time deviation column contains the average difference between the earliest and latest arrival times among all optimal routes for all queries.
The routes column contains the number of optimal routes.}\label{tab:perf} \begin{tabular}{r@{\hskip4pt}r@{\hskip4pt}lrrrrrr} \toprule & & & Query & Optimal & Arrival time & \multicolumn{2}{c}{Running time} \\ \cmidrule(lr){7-8} & & & share & Routes & deviation & Avg. & Median \\ & Set & Planning horizon & [\%] & [\#] & [h:mm] & [ms] & [ms] \\ \midrule & A1 & Mon. 18:00, 1 day & 100.0 & 2.86 & 2:17 & 529.4 & 266.3 \\ & A2 & Mon. 18:00, 2 days & 100.0 & 3.54 & 3:19 & 648.1 & 405.6 \\ & B1 & Fri. 06:00, 1 day & 100.0 & 1.04 & 0:10 & 10.0 & 0.6 \\ & B2 & Fri. 06:00, 2 days & 100.0 & 1.08 & 0:16 & 79.5 & 0.7 \\ & B3 & Fri. 18:00, 1 day & 100.0 & 1.13 & 0:08 & 205.8 & 0.6 \\ & B4 & Fri. 18:00, 2 days & 100.0 & 1.32 & 0:20 & 1\,028.1 & 0.7 \\ \midrule \parbox[t]{3mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Only non-trivial}}} & A1 & Mon. 18:00, 1 day & 67.5 & 3.82 & 3:13 & 764.1 & 560.6 \\ & A2 & Mon. 18:00, 2 days & 72.0 & 4.53 & 4:37 & 899.2 & 655.0 \\ & B1 & Fri. 06:00, 1 day & 4.1 & 2.19 & 4:10 & 42.5 & 6.6 \\ & B2 & Fri. 06:00, 2 days & 4.8 & 2.76 & 5:43 & 1\,105.6 & 35.8 \\ & B3 & Fri. 18:00, 1 day & 9.2 & 2.73 & 1:25 & 1\,359.0 & 475.2 \\ & B4 & Fri. 18:00, 2 days & 11.6 & 3.79 & 2:51 & 5\,819.4 & 1\,947.2 \\ \bottomrule \end{tabular} \end{table} We next investigate the algorithm's performance for each of the different query sets. We report the same numbers limited to non-trivial queries. A query is denoted as \emph{trivial} if there is exactly one optimal route which is also optimal when ignoring all driving restrictions. Table~\ref{tab:perf} depicts the results. Clearly, the query set has a strong influence on the running time of the algorithm. Average running times range from ten milliseconds to one second when looking at all queries. However, median query times are significantly smaller. The reason for this is that our algorithm can answer trivial queries in a few milliseconds or less. Due to the perfect potentials, the algorithm only traverses the optimal path. Once the destination is reached, all other vertices in the queue are skipped because of the target pruning, and the algorithm terminates. Excluding trivial queries, we get a clearer picture of the algorithm's performance when solving the harder part of the problem. For the query sets B1 and B2, only 4\% to 5\% of the queries have to deal with driving restrictions. This is mostly due to closures of individual roads in certain cities rather than country-wide driving bans. When the planning horizon begins later, at 18:00 (B3 and B4), we get around twice as many non-trivial queries. These are primarily caused by the night driving bans in Austria and Switzerland. Road closures and country-wide driving bans lead to different optimal routes. When there is a road closure on the shortest path ignoring any driving restrictions, we often have two optimal routes: one that takes a (small) detour around the closure, and one that waits at the source until the closed road opens and then takes the slightly shorter path. Thus, we have two routes with very similar driving times but (often vastly) diverging arrival times. When dealing with night driving bans, we get more optimal routes with different trade-offs, as in the example of Figure~\ref{fig:result_example}. Increasing the length of the planning horizon to two days leads to more non-trivial queries, more optimal routes per query, and a greater deviation in arrival time. The reason is routes with a travel time longer than 24 hours, which were not feasible within the shorter planning horizon.
Even when we restrict ourselves to queries with non-trivial results, running times still vary depending on the query set. Average and median do not deviate as strongly as when considering all queries, but the distribution of running times is still skewed by a few long-running queries, especially on set B4. The reason for this is that the running time heavily depends on the types and lengths of the driving restrictions in the search space. The Saturday driving ban in Italy causes heavy outliers in B4 (but also in B2 and B3) when the destination lies in an area blocked for most of the planning horizon. This forces the algorithm to explore large parts of the graph until the driving ban is over. The worst of these queries took 49 seconds to answer. Nevertheless, when looking at query sets A1 and A2, we clearly see that the algorithm can answer queries affected by country-wide night driving bans in less than a second. \section{Conclusion} \label{sec:conclusion} We have introduced a variant of the shortest path problem where driving on edges may be forbidden at times, both driving and waiting entail costs, and the cost for waiting depends on the rating of the respective location. The objective is to find a Pareto set of both quickest paths and minimum cost paths in a road graph. We have presented an exact algorithm for this problem and shown that it runs in polynomial time if the cost for driving is the same as for waiting at an unrated location. With this algorithm, we can solve routing problems that arise in practice in the context of temporary driving bans for trucks as well as temporary closures of roads or even larger parts of the road network. Our experiments demonstrate that our implementation can answer queries with realistic driving restrictions in less than a second on average. There are a few slow outlier queries when the destination vertex lies in a blocked area. A promising angle to improve this could be to study bidirectional variants of our algorithm. We exploit Contraction Hierarchies to efficiently obtain good A* potentials. The algorithm can also be used in a dynamic (or live or online) scenario when combined with Customizable Contraction Hierarchies~\cite{dsw-cch-15}. A natural extension of our problem at hand is to consider time-dependent driving times or rules for truck drivers that enforce a break after a certain accumulated driving time. \input{temporary_road_closures.bbl}
\section{Introduction} \label{intro} Heavy quarks, mostly charm and bottom at RHIC energies, are produced via gluon fusion in the early stage of heavy-ion collisions. Therefore, they are good tools to study the evolution of the medium produced in heavy-ion collisions. Due to their large masses, heavy quark production is naturally a hard process, so it can be described by perturbative QCD calculations. These theoretical approaches can be tested by comparison to the measurement of heavy quark production in \mbox{$p$$+$$p$}\xspace collisions. The results in \mbox{$p$$+$$p$}\xspace collisions also provide a baseline in order to quantify nuclear effects in other collision systems. In a hot and dense medium, the ``dead cone effect'' predicts that bottom quarks will lose less energy than charm quarks ($R_{AA}^{c}<R_{AA}^{b}$) due to a limited range of gluon radiation~\cite{deadcone:2001}. Precise measurements in heavy-ion collisions will provide essential information about the energy loss mechanism of heavy quarks inside the produced medium. The production of heavy quarks can be modified at the final stage due to the hot and dense medium, as well as at the initial stage, before the heavy-ion collision. In order to distinguish the initial- and final-state modifications, \mbox{$d$$+$Au}\xspace collisions are used as a control experiment. The PHENIX experiment is well suited to measure leptons from semi-leptonic decays of open heavy-flavor hadrons. In the central arms at mid-rapidity ($|\eta|<0.35$) and the muon arms at forward rapidity ($1.2<|\eta|<2.2$), heavy-flavor electrons and muons are measured, respectively. A hadron cocktail method is used for background estimation at both rapidity regions, and a converter method is also used for the heavy-flavor electron analysis at mid-rapidity. For the baseline measurements, PHENIX measured heavy-flavor electrons and muons at mid- and forward rapidity, respectively~\cite{ppg065,ppg117}, and the resulting charm cross sections are consistent with FONLL calculations within uncertainties. \section{Heavy-ion results} \label{heavyion} \begin{figure} \centering \includegraphics[width=0.51\textwidth,clip]{Figure1_1.pdf} \includegraphics[width=0.48\textwidth,clip]{Figure1_2.pdf} \caption{Left: \mbox{$R_{AA}$}\xspace of heavy-flavor leptons as a function of \mbox{$p_T$}\xspace in central \mbox{Cu$+$Cu}\xspace collisions at mid- and forward rapidity regions. Right: Comparison of \mbox{$R_{AA}$}\xspace of heavy-flavor leptons as a function of \mbox{$\langle N_{\rm part}\rangle$}\xspace between \mbox{Au$+$Au}\xspace collisions at mid-rapidity and \mbox{Cu$+$Cu}\xspace collisions at forward rapidity.} \label{fig1} \end{figure} Previously, PHENIX measured heavy-flavor electrons in \mbox{Au$+$Au}\xspace collisions at mid-rapidity~\cite{ppg066}. In this measurement, a significant suppression is observed in the high-\mbox{$p_T$}\xspace region, and the values of \mbox{$R_{AA}$}\xspace at $\mbox{$p_T$}\xspace>5~{\rm GeV/c}$ are similar to those for $\pi^{0}$. Recently, PHENIX has presented heavy-flavor electron measurements in a smaller system, \mbox{Cu$+$Cu}\xspace collisions. The left panel in Fig.~\ref{fig1} shows \mbox{$R_{AA}$}\xspace of heavy-flavor leptons in central \mbox{Cu$+$Cu}\xspace collisions at mid- (squares) and forward (circles) rapidity regions~\cite{ppg150,ppg117}. At mid-rapidity, a small suppression is observed at $\mbox{$p_T$}\xspace>3~{\rm GeV/c}$, whereas a large suppression is seen at forward rapidity.
Two bands in the same figure represent pQCD calculations considering both collisional and radiative energy loss in the hot and dense medium as well as cold nuclear matter (CNM) effects~\cite{vitev:2009}. The theoretical predictions are qualitatively consistent with the data at both rapidity regions. \begin{figure} \centering \includegraphics[width=0.6\textwidth,clip]{Figure2.pdf} \caption{\mbox{$R_{dA}$}\xspace as a function of \mbox{$p_T$}\xspace of heavy-flavor electrons at mid-rapidity in the most central (top) and the most peripheral (bottom) centrality classes.} \label{fig2} \end{figure} The right panel in Fig.~\ref{fig1} shows a comparison of \mbox{$R_{AA}$}\xspace as a function of \mbox{$\langle N_{\rm part}\rangle$}\xspace between the \mbox{Cu$+$Cu}\xspace results at forward rapidity and the \mbox{Au$+$Au}\xspace results at mid-rapidity. As can be seen in this plot, it is interesting that \mbox{$R_{AA}$}\xspace in central \mbox{Cu$+$Cu}\xspace collisions at forward rapidity is comparable with that in central \mbox{Au$+$Au}\xspace collisions at mid-rapidity. Additional CNM effects at forward rapidity may contribute to the large suppression of heavy-flavor muons. \section{\mbox{$d$$+$Au}\xspace results} \label{dau} \begin{figure} \centering \includegraphics[width=1.0\textwidth,clip]{Figure4.pdf} \caption{\mbox{$R_{AA}$}\xspace of heavy-flavor electrons as a function of \mbox{$\langle N_{\rm coll}\rangle$}\xspace in various collision systems.} \label{fig4} \end{figure} In order to correctly interpret and better understand the results in heavy-ion collisions, the results in \mbox{$d$$+$Au}\xspace collisions, with minimal effects of the hot and dense medium, are needed. During the \mbox{$d$$+$Au}\xspace run in 2008, PHENIX collected a large event sample, and many interesting results for studying CNM effects have come out. Figure~\ref{fig2} shows \mbox{$R_{dA}$}\xspace as a function of \mbox{$p_T$}\xspace for heavy-flavor electrons in the most central (top) and most peripheral (bottom) centrality classes measured at mid-rapidity~\cite{ppg131}. In central \mbox{$d$$+$Au}\xspace collisions, heavy-flavor electron production is clearly enhanced in the moderate-\mbox{$p_T$}\xspace region relative to the scaled \mbox{$p$$+$$p$}\xspace results, whereas no overall modification is observed in the most peripheral centrality class. By comparing with the results in heavy-ion collisions, one can conclude that the suppression seen in central \mbox{Au$+$Au}\xspace collisions is due to the hot and dense medium. Figure~\ref{fig4} shows \mbox{$R_{AA}$}\xspace as a function of \mbox{$\langle N_{\rm coll}\rangle$}\xspace in \mbox{$d$$+$Au}\xspace, \mbox{Cu$+$Cu}\xspace, and \mbox{Au$+$Au}\xspace collisions for two different \mbox{$p_T$}\xspace ranges at mid-rapidity. Between the enhancement in \mbox{$d$$+$Au}\xspace collisions and the suppression in central \mbox{Au$+$Au}\xspace collisions, a smooth transition between cold and hot nuclear matter effects is seen in \mbox{Cu$+$Cu}\xspace collisions. Recently, heavy-flavor muons have been measured for various \mbox{$d$$+$Au}\xspace centrality classes at forward and backward rapidity regions~\cite{ppg153}. Figure~\ref{fig3} shows \mbox{$R_{dA}$}\xspace as a function of \mbox{$p_T$}\xspace for heavy-flavor muons in three centrality ranges, 60--88\% (top left), 0--20\% (top right), and 0--100\% (bottom), at forward (squares) and backward (circles) rapidity. In the most peripheral centrality class, \mbox{$R_{dA}$}\xspace is consistent with unity at both rapidity regions.
However, in the most central collisions a clear enhancement is observed at backward rapidity, the going direction of the Au nucleus (high $x$). At forward rapidity, the $d$-going direction (low $x$), heavy-flavor muons are suppressed relative to the scaled \mbox{$p$$+$$p$}\xspace results. From these comparisons, additional CNM effects beyond the nPDF modification may play an important role, depending on rapidity. \begin{figure} \centering \includegraphics[width=0.49\textwidth,clip]{Figure3_cent6088.pdf} \includegraphics[width=0.49\textwidth,clip]{Figure3_cent0020.pdf} \includegraphics[width=0.49\textwidth,clip]{Figure3_cent0100.pdf} \caption{\mbox{$R_{dA}$}\xspace of heavy-flavor muons as a function of \mbox{$p_T$}\xspace at forward and backward rapidity regions in the most peripheral (top left), the most central (top right), and the unbiased (bottom) \mbox{$d$$+$Au}\xspace collisions, compared with two theoretical predictions, PYTHIA$+$EPS09s nPDF and pQCD calculations.} \label{fig3} \end{figure} Theoretical calculations, PYTHIA with the EPS09s nPDF set~\cite{eps09s:2012}, considering only the modification of the nPDF, are plotted in the same panels. The prediction shows a qualitative agreement with the forward data, but it underestimates the enhancement seen at backward rapidity and the difference between forward and backward. Another theoretical approach, a pQCD calculation for the unbiased centrality class, is consistent with the forward data. This pQCD prediction considers CNM effects such as shadowing, \mbox{$p_T$}\xspace broadening, and energy loss. \section{Summary} \label{sum} PHENIX has measured leptons from open heavy-flavor decays in a variety of collision systems. The obtained results can be summarized as follows. \begin{itemize} \item A clear trend between cold and hot nuclear matter effects depending on the system size (\mbox{$\langle N_{\rm part}\rangle$}\xspace) is observed: a clear enhancement in central \mbox{$d$$+$Au}\xspace, a small suppression in central \mbox{Cu$+$Cu}\xspace, and a significant suppression in central \mbox{Au$+$Au}\xspace collisions. \item In \mbox{$d$$+$Au}\xspace collisions, an enhancement is observed at mid- and backward rapidity regions, whereas a suppression is seen at forward rapidity. The prediction from the EPS09s nPDF model shows a similar trend of the rapidity dependence, but it underestimates the difference between forward and backward rapidity seen in the data. \end{itemize} \bibliographystyle{elsarticle-num}
\section{I. Introduction} Rare-earth (RE) doped solids have gained considerable interest from the quantum communication~\cite{GisinRMP2011} and quantum information processing communities~\cite{Thiel2011} due to the exceptionally long coherence times of their spin~\cite{Sellars2015,Sellars2018} and optical degrees of freedom~\cite{Boettger2009}. Today, one of the most intensively studied RE-doped solids is yttrium orthosilicate (Y$_2$SiO$_5$ or YSO). Doped with Kramers ions, such as Nd$^{3+}$, Yb$^{3+}$ and Er$^{3+}$, these crystals are studied in view of different applications, such as (a) optical quantum memories, due to the presence of optical transitions inside the telecommunication bands~\cite{Lauritzen2010, Gisin2014, Tittel2015, Boettger2016, Goldner2016, Faraon2017}; (b) efficient microwave quantum memories, because of the long coherence times of electronic and nuclear spins~\cite{Probst2015, Morton2015, Morton2018}; (c) circuit QED, due to the large g-factor~\cite{Probst2013,Longdell2016,Saito2018}; and (d) microwave-to-optical frequency converters, due to the addressable transitions at optical, microwave and RF frequencies and the large g-factor~\cite{Longdell2018_2,Thiel2018}. Nevertheless, there is a major challenge in working with Kramers ions, and it is associated with their large unquenched electronic magnetic moment. Large RE magnetic moments exhibit rapid decoherence as a result of the increased coupling to phonons (a process known as spin-lattice relaxation)~\cite{Afzelius2008, Afzelius2017} and to other spins via magnetic dipolar interactions (spectral and instantaneous diffusion)~\cite{Boettger2006, Morton2018}. There are three ways to circumvent the detrimental effects of these decoherence processes. The first one relies on the so-called ZEFOZ technique, where an optical photon is mapped onto hyperfine transitions that are insensitive to magnetic field fluctuations~\cite{McAuslan2012}. For the case of the low-symmetry YSO crystal, such transitions may occur at zero field~\cite{Longdell2018} or at specific magnitudes and directions of the applied magnetic field~\cite{Afzelius2018}. In such cases, the electronic spin coherence time may reach the millisecond timescale. The second way is to freeze the electronic spin bath by applying a strong magnetic field of 7~T at a relatively low temperature of 1.5~K and to write coherent optical pulses into the nuclear spin states of the $^{167}$Er$^{3+}$ Kramers ion~\cite{Sellars2018}. By using this prescription, a record-long $T_2\simeq1.3~$s has recently been obtained. The third promising way involves the possibility of mapping optical photons onto nuclear spins of the crystal host, such as Y$^{3+}$ in the case of the YSO crystal~\cite{Thierry2018}. \section{II. Why cooling below 1 Kelvin?} In this article we present an experimental investigation of the optical coherence of a $^{167}$Er:Y$_2$SiO$_5$ crystal at sub-Kelvin temperatures. This temperature range is very challenging to work in, yet it is very attractive due to a few possible applications which are hardly accessible at conventional temperatures above 1.5~K: for instance, the direct interface between superconducting qubits and RE optical or spin degrees of freedom with a view to applications in microwave quantum memories~\cite{Kubo2011} and microwave-to-optical frequency converters~\cite{Obrien2013,Longdell2018_2}. At ultra-low temperatures, it is possible to attain nearly full polarization of the electronic spin bath, which in turn quenches the major sources of optical/spin decoherence, i.e.
spectral diffusion occurring in the presence of direct and indirect flip-flops of the surrounding electronic spins~\cite{Probst2015}. The dephasing time contains information about nearly all processes influencing the spins, and it is derived from the homogeneous linewidth $\Gamma_0$, the magnetic dipolar broadening, a.k.a. spectral diffusion, $\Gamma_{SD}$, and the characteristic spin flip-flop rate $W_{ff}$~\cite{Boettger2006}. All these parameters can be extracted by performing 3-pulse echo and 2-pulse echo experiments. Echo experiments with Er:YSO in the sub-Kelvin temperature range have already demonstrated a substantial slowing down of the spin-lattice relaxation process even at weak magnetic fields, where the spin-lattice relaxation time attains the timescale of seconds, $T_1^{SLR}\sim 1-10~$s~\cite{Probst2013,Tkalcec2014}. In our recent work we have demonstrated an increase of the optical coherence time of isotopically purified $^{166}$Er:$^7$LiYF$_4$ by nearly two orders of magnitude while cooling below 1.5~K at moderate fields~\cite{Kukharchyk2017}. LiYF$_4$ is, however, a challenging substrate: fluorine possesses a large nuclear magnetic moment, which is a source of magnetic noise and strongly limits the electronic coherence of erbium even below 1~K. In comparison, Y$_2$SiO$_5$ is a magnetically quiet crystal, in which it is in general easier to achieve longer coherence times. \section{III. Experimental setup} \begin{figure}[ht!] \includegraphics[width=1\columnwidth]{Fig1} \caption{(Color online) Sketch of the experimental setup. The setup consists of two parts: one for the optical vector network analysis (OVNA) and one for heterodyne echo detection. MZM stands for Mach-Zehnder intensity modulator. AOM is an acousto-optical modulator. DR is the dilution refrigerator. PD is a high-speed InGaAs photodetector. SG1 and SG2 are signal sources for the generation of the echo sequence and for heterodyne detection, respectively. RF-VNA is the radio-frequency vector network analyzer. The AOM and the signal generators are triggered by using a pulse generator. DOS-X is the digitizing oscilloscope. The experiment is controlled by using a PC.} \label{Setup} \end{figure} We investigate a single Er:YSO crystal doped with a 0.005\% atomic concentration of $^{167}$Er$^{3+}$ ions, grown by Scientific Materials (Bozeman, USA). The crystal has dimensions of $3\times4\times6$~mm and its faces are AR coated for transmission at a wavelength of 1539~nm. In the presented experiment, the orientation of the crystal in the magnetic field ($\theta=45^{\circ}$ and $\varphi=90^{\circ}$) allows for lifting the magnetic class degeneracy~\cite{Probst2013,Probst2015}. The optical pulses propagate along the magnetic field, and the polarization of the light is set along the $D_1$ axis of the crystal. The crystal is placed inside a copper sample holder, which is thermally anchored to the mixing chamber of the optical dilution refrigerator BF-LD-250 with a calibrated cooling power of $450~\mu$W at $T=0.1~$K; see Refs.~\cite{Probst2015,Kukharchyk2017} for details. A schematic of the experimental setup is outlined in Fig.~\ref{Setup}. The erbium-doped free-running fiber laser (NKT Photonics Adjustik E15) emits a continuous signal at a frequency which is about $6~$GHz below the observed ${}^4I_{15/2} \leftrightarrow {}^4I_{13/2}$ optical transition for site 2 at zero field ($\omega_0/2\pi = 194802~$GHz). The laser frequency is stabilized by using an optical wavemeter (High-Finesse WS6-200).
Optical vector network analysis (OVNA) is used for the transmission optical spectroscopy of the sample~\cite{Kukharchyk2018}. For that purpose, the signal generated by the RF vector network analyzer (RF-VNA) is used to create optical sidebands by means of a Mach-Zehnder intensity modulator (MZM). The power of the laser beam is adjusted by an acousto-optical modulator (AOM) and is set to $30~\mu$W during the spectroscopy experiments and to $5~\mu$W for the photon echo experiments. The light beam is tightly focused into the sample, yielding an estimated beam waist of $100~\mu$m. The transmitted signal is detected by a high-speed InGaAs photodetector (PD). The interference between the optical sidebands and the carrier signal produces a microwave signal at the excitation frequency. This signal is amplified, and its magnitude, group delay and phase are measured by the RF-VNA. Pulsed optical spectroscopy (2PE and 3PE) deploys a heterodyne method. An echo sequence is created by triggering the output of the RF signal generator SG1. Microwave pulses of suitable durations and delays create an optical sideband which is resonant with an erbium transition. Another triggering sequence opens the AOM only for the duration of the echo-generation pulses and the echo detection. The light power in the interacting sideband is estimated to be about $1~$mW. The interference between the carrier pulse and the echo signal is detected by a high-speed photoreceiver. The detected microwave pulse is amplified by using a low-noise amplifier (LNA) and a conventional $+26~$dB amplifier. The amplified microwave signal is mixed down to $30~$MHz with the help of the heterodyne source SG2. After band-pass filtering, the echo signal is finally detected by using a digitizing oscilloscope (DOS-X). The full experiment is controlled by using a PC and MATLAB scripts. \section{IV. Spectroscopy} \begin{figure}[ht!] \includegraphics[width=0.8\columnwidth]{Fig2} \caption{(Color online) (a) The OVNA group delay transmission spectrum of the Er:YSO sample measured at $T=12~$mK as a function of the applied magnetic field. Arrows on the right hand side of the plot show optical transitions between Zeeman levels with the same nuclear projection $m_I$. (b) The OVNA amplitude transmission spectrum measured at $B=280~$mT. The red arrow shows the transition of our interest, which is used for studying the optical coherence.} \label{Spectrum} \end{figure} Figure~\ref{Spectrum}(a) demonstrates the transmission spectrum measured at the base temperature of the dilution refrigerator, $T=12~$mK, as a function of the applied magnetic field. At weak fields, the magnitude response of the transmitted signal $\vert S_{21}(\Omega)\vert$ consists of many overlapping lines, and it is therefore difficult to assign each line to a particular optical transition. However, the measured group delay of the microwave signal yields a much cleaner spectrum, which is also insensitive to intensity fluctuations. The group delay signal $\delta \tau = \partial \arg \big(S_{21}(\Omega)\big) /\partial \Omega$ is automatically computed by the RF-VNA. The transmission spectrum consists of several groups of lines with high and small optical g-factors. All groups can be identified at magnetic fields above 0.2~T, where the electron Zeeman term prevails and the electronic spin becomes a good quantum number.
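As an illustration, a group-delay spectrum such as the one in Fig.~\ref{Spectrum}(a) can be computed from sampled complex transmission data as in the following sketch (illustrative C++; in our setup, the RF-VNA performs this computation internally, and we follow the sign convention of the definition given above):
\begin{verbatim}
#include <complex>
#include <vector>

// Sketch: group delay  d arg(S21) / d Omega  from sampled complex
// transmission data; dOmega is the angular-frequency step.
std::vector<double> groupDelay(const std::vector<std::complex<double>>& s21,
                               double dOmega) {
    const double PI = 3.14159265358979;
    std::vector<double> phase(s21.size());
    for (std::size_t i = 0; i < s21.size(); ++i)
        phase[i] = std::arg(s21[i]);
    for (std::size_t i = 1; i < phase.size(); ++i) {  // unwrap 2*pi jumps
        while (phase[i] - phase[i - 1] >  PI) phase[i] -= 2.0 * PI;
        while (phase[i] - phase[i - 1] < -PI) phase[i] += 2.0 * PI;
    }
    std::vector<double> tau(s21.empty() ? 0 : s21.size() - 1);
    for (std::size_t i = 0; i + 1 < phase.size(); ++i)
        tau[i] = (phase[i + 1] - phase[i]) / dOmega;  // finite difference
    return tau;
}
\end{verbatim}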
We focus our study of optical coherence on the group of lines with a small optical g-factor. There are 8 resolved lines in this multiplet, which are attributed to the transitions between optical Zeeman states with the same projection of the electronic spin, $m_S=-1/2$, but different nuclear spin projections $m_I=-7/2,\ldots,+7/2$. The optical transmission spectrum, i.e. the magnitude of the RF-VNA response, measured at $B=280~$mT is shown in Fig.~\ref{Spectrum}(b). The optical density of the observed transitions can be calculated by using a simple theoretical model and amounts to $\alpha L \sim 0.5-1$ for the central multiplet~\cite{Kukharchyk2018}. The full width of the optical inhomogeneous broadening is relatively small, $\Gamma_{opt}\simeq 200$~MHz. \section{V. Optical echo measurements} \begin{figure}[ht!] \includegraphics[width=0.8\columnwidth,clip]{Fig3} \caption{(Color online) Echo decay measured at 280~mT at temperatures of 0.012~K, 0.15~K and 0.4~K. The solid line is the fit to the function suggested by Mims.} \label{Echo_decay} \end{figure} Coherent optical spectroscopy of the present sample is carried out by two- and three-pulse echo (2PE and 3PE) experiments. The 8 lines of the central multiplet demonstrate similar optical coherence properties; therefore, in the following we discuss the results measured for the line in the middle, which is marked by the red arrow, see Fig.~\ref{Spectrum}(b). This line corresponds to the transition between Zeeman states with the nuclear magnetic number $m_I=1/2$. The length of the $\pi/2$-pulse is optimized to yield the maximum echo amplitude and is equal to 1.2$~\mu$s. An example of the heterodyned echo sequence for a pulse delay of $\tau=7~\mu$s is shown in the inset of Fig.~\ref{Echo_decay}. For each echo signal, we extract the pulse envelope and plot its intensity as a function of the delay between the excitation pulses, $\tau$. In order to avoid heating of the sample and to let the spins relax to their ground state, the echo sequence is repeated at an extremely slow rate of 2 measurements per minute. \section{VI. Model for the data processing} Figure~\ref{Echo_decay} displays the normalized echo decay measured at a magnetic field of 280~mT at temperatures of 0.012~K, 0.15~K and 0.4~K. The echo decays are non-exponential as a consequence of spectral diffusion. Initially, the data are processed with the well-known amplitude decay function suggested by W. Mims~\cite{Mims1968}: $A(\tau)=A_0 e^{-(2\tau/T_M)^x}$. However, a precise analysis requires extracting several parameters: the spectral diffusion linewidth $\Gamma_{SD}$, the effective linewidth $\Gamma_0$, the relaxation rate $R$ and the spin-lattice relaxation (SLR) time $T_1$. To extract all these parameters, we use the generalized formula proposed by B\"{o}ttger et al. \cite{Boettger2006}: \begin{eqnarray} I(t_{12},t_{23})& = & I_0 e^{-2t_{23}/T_1} \nonumber \\ & & e^{-4\pi t_{12} (\Gamma_0 + {1 \over 2} \Gamma_{SD}(Rt_{12}+1-e^{-Rt_{23}})) }, \label{eq_exp_fit} \end{eqnarray} where $t_{12}$ and $t_{23}$ are the first-to-second and second-to-third pulse delays, respectively (for the 2PE, $t_{23} = 0$); $R$ is the characteristic relaxation rate; $\Gamma_0$ is the homogeneous linewidth; $\Gamma_{SD}$ is the spectral diffusion (SD) linewidth; $T_1$ is the relaxation time of the excited state. The dephasing time $T_M$ is then derived as follows \cite{Boettger2006}: \begin{eqnarray} T_M& = & {2\Gamma_0 \over \Gamma_{SD} R}\left(\sqrt{1+{\Gamma_{SD}R \over \pi \Gamma_0^2}}-1\right).
\end{eqnarray} Measured via the stimulated echo decay, the relaxation time $T_1$ is very sensitive to the decoherence and spectral diffusion occurring during both the $t_{12}$ and $t_{23}$ delays. The model in Eq.~\ref{eq_exp_fit} developed by B\"{o}ttger et al. \cite{Boettger2006} aims to compensate for both cases, given that there is only one source of SD. It does not, however, include any contribution from non-equilibrium phonons or from more than one source of SD. Each decoherence process possesses a characteristic dependence on the magnetic field. The spectral diffusion reveals the contribution of the indirect flip-flop processes to the coherence and is typically described as \cite{Boettger2006} \begin{eqnarray} \Gamma_{SD} = \Gamma_{max}\,\textrm{sech}^2{g_{env}\mu_BB \over 2k_B T_{eff}}, \label{eq_Gsd} \end{eqnarray} where $\Gamma_{max}$ is the FWHM frequency broadening resulting from the magnetic dipole-dipole interactions, estimated to be $\Gamma_{max}\simeq 2\pi\times60$~kHz for our crystal orientation. $T_{eff}$ is the effective temperature of the crystal due to the low-temperature effects, which saturates at a minimal thermal point, cf.~\cite{Kukharchyk2018}. We find that it can be well described by the equation \begin{eqnarray} T_{eff}=T_{min}\bigg(1+\bigg({T \over T_{min}}\bigg)^2\bigg)^{1/2}, \end{eqnarray} where $T_{min}$ is the minimal attainable temperature. Given our thermal range, the relaxation rate $R$ is limited to the flip-flop and direct processes: $R= R_{d} + R_{ff}$. The flip-flop rate $R_{ff}$ is given by the resonant spin flips \cite{Abragam1970}: \begin{eqnarray} R_{ff}=W_{ff}\,\textrm{tanh}^2\bigg({\hbar\omega \over k_B T_{eff}}\bigg), \end{eqnarray} where $W_{ff}$ is the flip-flop coefficient and $\omega$ is the angular frequency of the transition between the spin Zeeman states. The direct process rate $R_d$ is, in the general case, composed of the direct process itself and the bottleneck effect, which shows up at temperatures below 1~K \cite{Abragam1970}: \begin{eqnarray} R_d = {1 \over \tau_{1d} + (1+b)\tau_{ph} }, \end{eqnarray} where $\tau_{1d}$ is the ion-phonon relaxation time, which is understood as the direct process in the absence of the bottleneck; $\tau_{ph}$ is the lifetime of the phonons set by the mean free path of a phonon; $b$ is the bottleneck coefficient. At low temperatures, $\tau_{ph}$ is known to be 500~ns \cite{Wild1998}, which also correlates with our estimate via the heat capacity, $C_p$, and the thermal conductivity, $\kappa$: $C_p r^2 / 4 \kappa\simeq 300$~ns. The estimate of the ion-phonon relaxation time gives $\tau_{1d}\sim10^{-5}$~s \cite{Abragam1970} for temperatures below 1~K. The bottleneck coefficient $b$ is given by: \begin{eqnarray} b={n_{at} \over \Sigma_{ph}}\, \textrm{tanh}^2\bigg({\hbar\omega \over k_B T_{eff}}\bigg)\sim 10^3, \end{eqnarray} where $n_{at}\simeq3.66\cdot10^{20}$~m$^{-3}$ is the density of erbium ions and $\Sigma_{ph}$ is the density of phonons which are resonant with the inhomogeneous spin linewidth $\Delta\omega$~\cite{Probst2013,Probst2015}. $\Sigma_{ph}$ is estimated as \cite{Abragam1970} \begin{eqnarray} \Sigma_{ph}={2\omega^2 \Delta\omega \over 2\pi \upsilon^3} = 1.86\cdot10^{17}~\textrm{m}^{-3}.
\end{eqnarray} The bottleneck relaxation dominates the direct process, $\tau_{1d} \ll (1+b)\tau_{ph}\simeq 0.5$~ms, and the equation simplifies to \begin{eqnarray} R_d &=& {1 \over b\tau_{ph} } \nonumber\\ &=&{3 \mu_B^2\Delta\nu \over \hbar^2\upsilon^3 n_{at}\tau_{ph}}g^2B^2\,\textrm{coth}^2\bigg({\hbar\omega \over k_B T_{eff}}\bigg). \end{eqnarray} The final equation for the relaxation rate reads \begin{eqnarray} R=W_{BN}\,g^2B^2\,\textrm{coth}^2\bigg({\hbar\omega \over k_B T_{eff}}\bigg)+W_{ff}\,\textrm{tanh}^2\bigg({\hbar\omega \over k_B T_{eff}}\bigg). \label{R_fit} \end{eqnarray} Taking the speed of sound in Y$_2$SiO$_5$ equal to $\upsilon\simeq 4$~km/s and $\Delta\omega\simeq2\pi\times30$~MHz, we estimate the bottleneck coefficient $W_{BN} \simeq 2\pi\times6$~kHz\,T$^{-2}$. The flip-flop coefficient is estimated to be $W_{ff}\simeq 2\pi\times8$~kHz \cite{Kukharchyk2017}. \section{VII. Experimental results} The dependences of $T_M$ on temperature and magnetic field are presented in Fig.~\ref{Tm_dep}. With increasing magnetic field up to 300~mT, we observe an increase of the coherence time by one order of magnitude: from 27~$\mu$s at 30~mT to 217~$\mu$s at 300~mT. Similarly, $T_M$ increases by one order of magnitude with decreasing temperature. The most dramatic increase of $T_M$ happens around 400~mK: from $(39\pm2)~\mu$s to $(146\pm7)~\mu$s. This is due to the fast polarization of the spins when the thermal energy drops below 10~GHz \cite{Takahashi2008}. Above 900~mK, $T_M$ remains almost constant at $\simeq (28\pm2)~\mu$s. At 50~mK, $T_M$ saturates, which is the result of the temperature saturation at the minimal value of $T_{min} \simeq 50$~mK. \begin{figure}[ht!] \includegraphics[width=\columnwidth]{Fig4} \caption{(Color online) (a) Decoherence rate $1/(\pi T_M)$ as a function of the magnetic field at $12~$mK. (b) Decoherence rate $1/(\pi T_M)$ as a function of temperature at a magnetic field of $280~$mT.} \label{Tm_dep} \end{figure} The behavior of the relaxation rate $R$ is shown in Fig.~\ref{RG0_dep}. Both the B-field and temperature dependences are fit to Eq.~\ref{R_fit}. The results correlate and yield the following values: $W_{ff}\simeq2\pi\times(10.0\pm0.2)$~kHz, $W_{BN}\simeq2\pi\times(7.5\pm0.2)$~kHz\,T$^{-2}$ and $T_{min}\simeq(74\pm9)$~mK, see Tab.~\ref{table}. The bottleneck rate is close to the estimated value $W_{BN}\simeq2\pi\times6$~kHz\,T$^{-2}$. The minimal temperature $T_{min}$ agrees with the observed saturation of $T_M$ below 100~mK, see Fig.~\ref{Tm_dep}. The flip-flop rate is similar to the calculated value $W_{ff}\simeq2\pi\times8$~kHz \cite{Abragam1970}. \begin{figure}[ht!] \includegraphics[width=\columnwidth]{Fig6} \caption{(Color online) (a) Dependence of the homogeneous linewidth $\Gamma_0$ and the relaxation rate $R$ on the magnetic field. (b) Dependence of the homogeneous linewidth $\Gamma_0$ and the relaxation rate $R$ on temperature at a magnetic field of 280~mT. The red line represents a fit of the data to Eq.~\ref{R_fit} and Eq.~\ref{G0_eq}.} \label{RG0_dep} \end{figure} The spectral diffusion is governed by the indirect flip-flops. The obtained SD amplitude $\Gamma_{max}$ is found to depend on the delay $t_{12}$, which has not been observed in experiments before~\cite{Boettger2006}. The theoretical rate $\Gamma_{max}$ does not include any dependence on the first-to-second pulse delay. The result of the fit of $\Gamma_{SD}$ to Eq.~\ref{eq_Gsd} is shown in Fig.~\ref{Gsd_dep}(a).
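For concreteness, the fit model defined by Eqs.~(\ref{eq_exp_fit}), (\ref{eq_Gsd}) and (\ref{R_fit}) can be summarized in the following sketch (illustrative C++; the identification $\hbar\omega = g\mu_B B$ and all parameter values are assumptions for illustration, not fit results):
\begin{verbatim}
#include <cmath>

const double MU_B = 9.274e-24;   // Bohr magneton, J/T
const double K_B  = 1.381e-23;   // Boltzmann constant, J/K
const double PI   = 3.14159265358979;

double sech(double x) { return 1.0 / std::cosh(x); }

// Effective temperature with saturation at T_min.
double T_eff(double T, double T_min) {
    return T_min * std::sqrt(1.0 + (T / T_min) * (T / T_min));
}

// Spectral diffusion linewidth, Eq. (eq_Gsd).
double gammaSD(double B, double T, double g_env,
               double gammaMax, double T_min) {
    double x = g_env * MU_B * B / (2.0 * K_B * T_eff(T, T_min));
    return gammaMax * sech(x) * sech(x);
}

// Relaxation rate, Eq. (R_fit); valid for B > 0.
double rate(double B, double T, double g,
            double W_BN, double W_ff, double T_min) {
    double x  = g * MU_B * B / (K_B * T_eff(T, T_min));
    double th = std::tanh(x);
    return W_BN * g * g * B * B / (th * th) + W_ff * th * th;
}

// Three-pulse echo intensity, Eq. (eq_exp_fit); t23 = 0 gives the 2PE decay.
double echoIntensity(double t12, double t23, double I0, double T1,
                     double gamma0, double gSD, double R) {
    double sd = gamma0 + 0.5 * gSD * (R * t12 + 1.0 - std::exp(-R * t23));
    return I0 * std::exp(-2.0 * t23 / T1) * std::exp(-4.0 * PI * t12 * sd);
}
\end{verbatim}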
The minimal attainable temperature $T_{min}$ is extracted from the $\Gamma_{max}$ dependence on the B-field for all $t_{12}$ delays, and it equals $T_{min}\simeq 170$~mK. $\Gamma_{max}$ decreases from $2\pi\times$58~kHz for $t_{12} = 10~\mu$s to $2\pi\times$26~kHz for $t_{12} = 30~\mu$s, see Tab.~\ref{table} and Fig.~\ref{Gsd_dep}(b). We relate this dependence of $\Gamma_{max}$ on $t_{12}$ to the non-equilibrium phonon (NQP) dynamics and to the existence of several sources of SD. As mentioned before, our fit equation includes only one source of SD and no contribution from NQP, which leads to the variation of the derived values of $\Gamma_{SD}$. The $\Gamma_{max}$ values nicely follow an exponential dependence on $t_{12}$, see Fig.~\ref{Gsd_dep}\,(b), with a characteristic time $\tau_{t12}\simeq25~\mu$s, which holds for both the temperature and the B-field dependences. Being of the order of the $t_{12}$ values used in the 3PE measurement, $\tau_{t12}$ leads to the observed variation of the SD and relaxation time $T_1$ values~\cite{Bai1992}. It allows us to access the timescales of the NQP dynamics, which we discuss later in detail. The temperature dependence yields $\Gamma_{max}\simeq2\pi\times22$~kHz for $t_{12} = 10~\mu$s to $\Gamma_{max}\simeq2\pi\times13$~kHz for $t_{12} = 30~\mu$s. The $\Gamma_{max}$ values obtained from the temperature dependence are $\sim2$ times smaller than the $\Gamma_{max}$ values obtained from the field dependence. Such a difference can be related to the differences of the NQP dynamics under changing magnetic field and temperature, and to the saturation of the effective temperature at $T_{min}$. From the exponential dependence of $\Gamma_{max}$ on $t_{12}$, we derive the maximal amplitude of $\Gamma_{max}(t_{12}\rightarrow0)\simeq 2\pi\times90$~kHz for the B-field dependence and $\Gamma_{max}(t_{12}\rightarrow0)\simeq 2\pi\times35$~kHz for the temperature dependence, which are within a 50$\%$ variation of the estimated value $\Gamma_{max}\simeq2\pi\times60$~kHz. Considering the fact that the relaxation rate $R$ is independent of the delay between the excitation pulses, we use its parameters obtained from the 3PE decays to extract the spectral diffusion from the 2PE decays, where it is obtained as the product $\Gamma_{SD}R$. This gives a minimal temperature of the spectral diffusion of $\simeq(163\pm79)$~mK and $\Gamma_{max}\simeq2\pi\times12$~kHz for the B-field dependence and $\Gamma_{max}\simeq2\pi\times9$~kHz for the temperature dependence. The 2PE experiment represents the limit of $t_{23}\rightarrow0$, which can be compared to the limit of $t_{12}\rightarrow 0$, considering that the pulse order is not relevant. The values of $T_{min}$ are in reasonable agreement between the 2PE and 3PE experiments. The $\Gamma_{max}$ values are lower in the 2PE experiment as compared to the 3PE experiment. If we put these $\Gamma_{max}$ values on the exponential dependence on the $t_{12}$ delay, see Fig.~\ref{Gsd_dep}\,(b), we find that for both the field and temperature dependences these values fall on a delay of $t_{12}\simeq37~\mu$s. This means that the 2PE cannot be effectively described by the 3PE with $t_{12}\rightarrow 0$, but rather with $t_{12,\rm mean}\simeq37~\mu$s. \begin{figure}[ht!] \includegraphics[width=\columnwidth]{Fig7} \caption{(Color online) (a) Spectral diffusion linewidth $\Gamma_{SD}$ as a function of the magnetic field for different $t_{12}$ values. Lines show the fits of the data to Eq.~\ref{eq_Gsd}. (b) $\Gamma_{max}$ values obtained from the fit at different $t_{12}$.
Red and black lines show the exponential fit of the $\Gamma_{max}$ dependence on $t_{12}$. Possible positions of $\Gamma_{max2PE}$ and $\Gamma_{maxh}$ on the exponential decay are shown with the ovals; the dashed line guides to the corresponding $t_{12}$ delay.} \label{Gsd_dep} \end{figure} The homogeneous linewidth $\Gamma_0$ contains contributions from dynamic broadening mechanisms which occur faster than the experimental timescale, such as the homogeneous linewidth $\Gamma_h$ given by the lifetime broadening and the single-ion linewidth, the optical broadening $\Delta \Gamma_h$ due to the spin flips of the ground state, and the fast spectral diffusion $\Gamma_{SDh}$ resulting from indirect spin flip-flops: \begin{eqnarray} \Gamma_0 = \Gamma_h + \Delta \Gamma_h + \Gamma_{SDh}, \label{G0_eq} \end{eqnarray} where $\Delta \Gamma_h$ is given by \cite{Boettger2006} \begin{eqnarray} \Delta \Gamma_h = {R \over 4\pi}\,e^{-{g_{g}\mu_BB \over 2k_B T}}\,\textrm{sech}{g_{env}\mu_BB \over 2k_B T}. \end{eqnarray} $\Gamma_{SDh}$ is the contribution from the spectral diffusion which cannot be resolved as part of $\Gamma_{SD}$ in Eq.~\ref{eq_exp_fit}, and it is thus described by the same dependence on the magnetic field and temperature as the spectral diffusion $\Gamma_{SD}$: \begin{eqnarray} \Gamma_{SDh} = \Gamma_{maxh}\, \textrm{sech}^2{g_{env}\mu_BB \over 2k_B T}. \end{eqnarray} The fit of Eq.~\ref{G0_eq} to the experimental data is shown in Fig.~\ref{RG0_dep}. The minimal temperature measured through $\Gamma_0$ is the same as for the relaxation rate $R$, $T_{min}\simeq(74\pm9)$~mK. $\Gamma_h$ equals $\simeq2\pi\times0.3$~kHz, which correlates with the $\Gamma_0$ values obtained by B\"{o}ttger et~al.~\cite{Boettger2006}: $2\pi\times$22~kHz at 4.2~K and $2\pi\times$1.3~kHz at 1.6~K for an erbium concentration of 0.005 at.$\%$. In both the magnetic field and temperature dependences, the change of the $\Gamma_0$ value fitted by Eq.~\ref{G0_eq} is to $\sim90\%$ due to the contribution from spectral diffusion and to $\sim10\%$ due to $\Delta\Gamma_h$. The amplitude of the spectral diffusion extracted from $\Gamma_0$ is different in the case of the dependence on the magnetic field, $\Gamma_{maxh}\simeq 2\pi\times(11.4\pm2.6)$~kHz, and in the case of the temperature dependence, $\Gamma_{maxh}\simeq 2\pi\times(3.3\pm0.5)$~kHz. Similarly to the above, we put the $\Gamma_{maxh}$ values onto the exponential dependence of $\Gamma_{max}$ on $t_{12}$, see Fig.~\ref{Gsd_dep}. Both $\Gamma_{maxh}$ values correspond to a delay of $t_{12}\simeq 55~\mu$s. Such a correlation in the $\Gamma_{max}$ values accessed via different parameters can also be linked to the $T_{min}$ values in the 2PE experiments. The minimal temperature ``seen'' by the SD is $\sim170$~mK in both the 2PE and 3PE experiments. $T_{min}$ ``seen'' by $\Gamma_0$ is approximately 2.5 times smaller, i.e. $\sim75$~mK. When we put both $\Gamma_{max2PE}$ and $\Gamma_{maxh}$ onto the same exponential dependence in Fig.~\ref{Gsd_dep}~(b), the difference in $T_{min}$ converts into a difference in the assigned ``effective'' $t_{12}$ values, with a longer $t_{12}$ for the lower $T_{min}$. The relaxation time $T_1$ is the characteristic optical relaxation time. It is derived by fitting the 3PE decays to Eq.~\ref{eq_exp_fit}. The relaxation time $T_1$ increases with the magnetic field and reaches $\sim$3500~$\mu$s at 300~mT for $t_{12} = 10~\mu$s, see Fig.~\ref{T1_dep}~(a). At low magnetic fields, i.e. 30-50~mT, $T_1 \simeq T_M \simeq 30~\mu$s.
At 12~mK, we see that $T_1$ depends on the delay $t_{12}$, similarly to the spectral diffusion. In the temperature dependence, we see that this variation is much less relevant at higher temperatures: the $t_{12}$ delay makes a significant difference to the $T_1$ values only below 0.5~K. Thus, below 0.5~K the $\pi/2$ rotation pulses thermally influence the decoherence mechanisms by creating NQP whose dynamics is on the timescale of the experiment. \begin{figure}[ht!] \includegraphics[width=\columnwidth]{Fig5} \caption{(Color online) (a)~SLR rate $1/(\pi T_1)$ as a function of the magnetic field at $12~$mK for different values of $t_{12}$. (b)~SLR rate $1/(\pi T_1)$ as a function of temperature at a magnetic field of $280~$mT for different values of $t_{12}$. The legend holds for both (a) and (b).} \label{T1_dep} \end{figure} \begin{table*}[t] \centering \caption{Derived experimental and theoretical values} \begin{tabular}{ |p{2.3cm}||p{1cm}|p{2cm}|p{2.7cm}|p{2cm}| p{3.8cm}| p{2cm}| } \hline & $t_{12}$, $\mu$s & $\sfrac{1}{2\pi}W_{ff}$, kHz & $\sfrac{1}{2\pi}W_{BN}$, kHz\,T$^{-2}$ & $T_{min}$, mK & $\sfrac{1}{2\pi}\Gamma_{max}$, kHz & $\sfrac{1}{2\pi}\Gamma_h$, Hz\\ \hline Rate $R$ & all & $10.0\pm0.2$ & $7.5\pm0.2$ & $74\pm9$ & - & -\\ $\Gamma_{SD}$ & 10 & - & - & $176\pm37$ & $57.8\pm6.0~/~21.6\pm3.9^{*}$& -\\ & 15 & - & - & $177\pm57$ & $51.6\pm7.0~/~22.9\pm2.1^{*}$& -\\ & 20 & - & - & $162\pm12$ & $37.8\pm1.0~/~18.6\pm1.1^{*}$& -\\ & 30 & - & - & $170\pm10$ & $26.2\pm3.0~/~12.9\pm0.7^{*}$& -\\ & 2PE & \textit{fixed at} $10$ & \textit{fixed at} $7.5$ & $163\pm79$ & $11.6\pm2.3~/~9.4\pm0.5^{*}$& -\\ $\Gamma_0$ & 2PE & \textit{fixed at} $10$ & \textit{fixed at} $7.5$ & $74\pm9$ & $8.4\pm1.6~/~5.2\pm0.5^{*}$& 300\\ \hline Our numerical estimation & - & $\sim8$& $\sim5.7$ & - & $\sim60$ & $\sim100$ \\ \hline Taken from \cite{Boettger2006} & all & - & - & $1.6\cdot10^3$ & 820 & 1300\\ \hline \end{tabular} \\ $^{*}$ Values differ for the magnetic field and temperature dependences, presented as\\ (value from magnetic field dependence)~/~(value from temperature dependence). \label{table} \end{table*} \section{VIII. Non-equilibrium phonons and Schottky anomaly} At temperatures below 1~K, we enter a peculiar experimental regime. Most of the decoherence processes are reduced or even removed. At the same time, the sensitivity of the atomic ensemble and of the host crystal to the excitation energy is increased. Let us consider in more detail how this changes the picture of the decoherence mechanisms. There are three systems to consider: the ensemble of erbium ions, the host crystal where these ions are trapped, and the boundary interface between the host and the cryostat. Before the first pulse arrives, all these elements are in thermal equilibrium, and a specific number of thermal phonons exists in the crystal. Upon the arrival of the first pulse, part of the pulse energy is absorbed by the atoms, and another part is directly absorbed into phononic excitations of the host crystal. When non-radiative emission occurs, a number of non-equilibrium phonons (NQP) is created. These phonons travel through the crystal, participate in collisions with thermal phonons as well as with other NQP, and leave the crystal through the boundary interface. Now we need to consider the specific thermal properties of the atoms and the substrate.
The heat capacity of the YSO crystal can be estimated from the Debye model \cite{Kittel2005,Pobell2007} or taken from tables \cite{Pobell2007}; it is $\sim3$ times smaller than that of SiO$_2$ and equals $c_{YSO}\simeq0.22\cdot T^3 {J \over m^3 K}$. We assume that the thermal conductivity of YSO is similar to that of SiO$_2$, which is $\kappa_{YSO}\simeq0.02\,T^2 {W \over m K}$ \cite{Pobell2007}. The heat capacity of a spin system at ultralow temperatures is described by the well-known Schottky anomaly equation \cite{Sears1964,Tucker1965}: $c_{spin} = (\Delta E/k_BT)^2 e^{(-\Delta E/k_BT)}/(1+e^{(-\Delta E/k_BT)})^2$, where $\Delta E$ is the Zeeman splitting energy of the spin system. The heat capacity $c_{spin}$ has a characteristic peak exactly in our working field-temperature range. The Schottky anomaly in $c_{spin}$ occurs between 50~mK and 500~mK for the magnetic field range $(50 - 300)$~mT, resulting in $c_{spin}\simeq c_{YSO}\cdot 10^3$. This means that the spin system can store 1000 times more energy than the crystal for the same change of temperature. Part of this energy will be non-radiatively transferred into the NQP. If the NQP lifetime in the crystal were much shorter than the coherence time and the thermal coupling to the cryostat were good, the phonons would leave the crystal quickly with no influence on the measured echo signal. Below 1~K, however, thermal boundary resistance usually reduces the heat transfer through the sample-cryostat contact, which is known as the Kapitza resistance \cite{Kapitza1941,Liberadzka2019}. The NQP cannot leave the crystal fast enough and block the paths for the spins to relax; moreover, they re-excite other spins. As a result, the phonon bottleneck effect appears, and the dynamics of the non-equilibrium phonons enters the dynamics of the spin system itself. We estimate the phonon lifetime, $\tau_{ph}$, and the phonon mean free path, $l_{ph}$, via $c_{YSO}$ and $\kappa_{YSO}$ \cite{Abragam1970,Auzel1997,Kittel2005}, and we find both inversely proportional to temperature: $l_{ph} \simeq 7 \cdot 10^{-6}$~m$\cdot$K$\,/\,T$ and $\tau_{ph} \simeq 2\cdot 10^{-9}$~s$\cdot$K$\,/\,T$. The mean free path of a phonon in YSO below 1~K, $10^{-4}$~m, is shorter than the crystal size, $2.5\cdot10^{-3}$~m, and much longer than the mean distance between the erbium ions, $\simeq 12$~nm. With these values, we have estimated the bottleneck coefficient $W_{BN}\simeq2\pi\times5.7$~kHz$\cdot$T$^{-2}$, which is similar to the experimental value. The phonon lifetime is then of the order of $\sim100$~ns, which is comparable to the previously accepted value of $\tau_{ph}\simeq500$~ns and is much shorter than the coherence time of the erbium ions. We do not know how large the thermal resistance of the boundary is. Based on the observation of the phonon "bottleneck" effect, we assume it to be rather large. Degradation of the thermal contact~\cite{Liberadzka2019} and the high density of the NQP lead to the dependence of the measured spectral diffusion and $T_1$ on the $t_{12}$ delay. $T_1$ is known to be sensitive to the final conditions of the spin-crystal system at the moment of arrival of the second pulse. For a shorter $t_{12}$ delay, a larger number of NQP is left inside the crystal; therefore, the $T_1$ values are smaller. For a longer $t_{12}$ delay, the effect of the first $\pi/2$ pulse is reduced and the $T_1$ values increase.
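The Schottky peak position and the kinetic-theory phonon estimates quoted above can be reproduced schematically with a few lines of Python (an illustrative sketch; the $g$-factor and the sound velocity $v$ are assumed round numbers, not values taken from this work):
\begin{verbatim}
import numpy as np

MU_B, K_B = 9.274e-24, 1.381e-23   # J/T, J/K

def schottky(T, B, g=6.0):
    """Dimensionless Schottky heat capacity per spin, c_spin/k_B,
    for a Zeeman splitting Delta_E = g*mu_B*B (g is an assumption)."""
    x = g * MU_B * B / (K_B * T)
    return x**2 * np.exp(-x) / (1.0 + np.exp(-x))**2

T = np.linspace(0.02, 1.0, 2000)            # kelvin
for B in (0.05, 0.1, 0.3):                  # tesla
    print(B, T[np.argmax(schottky(T, B))])  # peak lies near 0.08-0.5 K

# kinetic-theory estimates l_ph = 3*kappa/(c*v), tau_ph = l_ph/v,
# with an assumed sound velocity v; both scale as 1/T
v = 5.0e3                                   # m/s (assumption)
c_yso = 0.22 * T**3                         # J m^-3 K^-1 (text value)
kappa_yso = 0.02 * T**2                     # W m^-1 K^-1 (text value)
l_ph = 3.0 * kappa_yso / (c_yso * v)        # metres, proportional to 1/T
tau_ph = l_ph / v                           # seconds
\end{verbatim}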
Spectral diffusion is also affected: the $\Gamma_{SD}$ linewidth contains an accumulative effect from the NQP. $\Gamma_{SD}$ is larger for the shorter $t_{12}$ delay and decreases exponentially with $t_{12}$. We can attribute the characteristic time $\tau_{t12}$ of this exponential dependence to the lifetime of the NQP in the crystal~\cite{Bai1992}. This explains why we observe such a drastic influence of the $t_{12}$ delay on the experimental results: the relaxation timescale of the NQP is of the order of the delay between the pulses. The extension of the phonon lifetime by two orders of magnitude (compare $\tau_{t12}\simeq25~\mu$s with $\tau_{ph}\simeq0.5~\mu$s) is due to the strong phonon bottleneck effect and the high thermal boundary resistance. We can assess the contribution from the NQP as follows \cite{Graf1998}: \begin{eqnarray} \Gamma_{NQP}&=&{\sigma_0 \upsilon \Sigma_{ph} \over \pi}\simeq 10~kHz, \end{eqnarray} where $\sigma_0 \simeq 10~nm^2$ \cite{Graf1998} is the phonon scattering cross-section, $\upsilon$ is the sound velocity, and $\Sigma_{ph}$ is the density of the non-equilibrium phonons. $\Gamma_{NQP}$ is of the order of the bottleneck coefficient. The influence of NQP created by a laser pulse has already been considered in different crystals, including RE-doped Y$_2$SiO$_5$ \cite{Macfarlane1985,Bai1992,Altner1996}. The number of created NQP depends on the excitation energy, the resonant phonon frequencies, and the presence of other phonon effects. NQP lead to a faster disappearance of the echo envelope and thus to shorter measured coherence and relaxation times, which is what we observe in our data: the relaxation time $T_1$ is shorter for a shorter delay $t_{12}$, while the SD is larger. The NQP are tightly linked to the phonon bottleneck (PBN) effect, so that it is not possible to separate one from the other. \section{IX. Comparison of the Er:Y$_2$SiO$_5$ to the Er:LiYF$_4$ at mK} In our previous experiments with Er:LiYF$_4$ \cite{Kukharchyk2017}, the minimal temperature was much higher than in the current experiment. We find a correlation between the effective temperature and the thermal properties of the crystal. The heat capacity of the spin system is considered to be similar in both types of substrates, Y$_2$SiO$_5$ and LiYF$_4$. The heat capacities are similar for most of the common substrates: 0.79~J~g$^{-1}$~K$^{-1}$ for LiYF$_4$; 0.74~J~g$^{-1}$~K$^{-1}$ for SiO$_2$; 0.42~J~g$^{-1}$~K$^{-1}$ for YAlO$_3$; 0.625~J~g$^{-1}$~K$^{-1}$ for Y$_3$Al$_5$O$_{12}$ (values provided for $\sim$300~K) \cite{Weber2002}. The difference appears when comparing the thermal conductivity values. Most of the common optical substrates have similar values: (7.5 - 12.7)~W~m$^{-1}$~K$^{-1}$ for SiO$_2$; $\simeq$14~W~m$^{-1}$~K$^{-1}$ for Y$_3$Al$_5$O$_{12}$; 11~W~m$^{-1}$~K$^{-1}$ for YAlO$_3$ \cite{Weber2002}. The thermal conductivity of LiYF$_4$ equals $\simeq$6.3~W~m$^{-1}$~K$^{-1}$ \cite{Weber2002}, which is at least 2 times lower than that of SiO$_2$ and, similarly, Y$_2$SiO$_5$. Comparing the general experimental conditions, such as laser power and additional laser harmonics, the conditions were more favorable in the experiments with LiYF$_4$; nevertheless, the YSO crystal reached a lower minimal temperature during the experiment despite the high-power background laser harmonic. This leads to the concrete conclusion that in the experiments with LYF the minimal attainable temperature was higher as a result of the poorer thermal conductivity of the crystal itself. \section{X. Conclusions} In conclusion, we have presented a detailed 2PE and 3PE spectroscopy of a mono-isotopic $^{167}$Er:YSO crystal at moderate fields and temperatures below 1~K. In the echo experiments, we have studied the decoherence properties at low magnetic fields and sub-Kelvin temperatures.
The main sources of decoherence are the spectral diffusion, the non-equilibrium phonons, and the phonon bottleneck effect. The spectral diffusion linewidth and the relaxation rate are particularly sensitive to the dynamics of the non-equilibrium phonons, which occurs on the timescale of the experiment. As the limiting factor, we find the thermal properties of the crystal itself: its heat capacity is 100 times smaller than that of the spin system. Deterioration of the thermal interface below 0.5~K leads to a higher number of active phonon modes during the echo measurement, which limits the maximal achievable coherence time. It is thus not possible to achieve an increase in optical coherence by solely cooling a spin-doped crystal down to sub-Kelvin temperatures. We can thus conclude that in bulk samples it is much easier to freeze the spins by applying a magnetic field than by cooling down. \section{XI. Acknowledgement} This work is supported by the Saarland University, Land of Saarland and DFG through the grant INST 256/415-1, BU 2510/2-1. A.K. acknowledges support by the government assignment for FRC Kazan Scientific Center of RAS.
\section{Introduction} \label{s-intro} Since \cite{Bloch} we know that the generic point, considered as a zero-cycle, plays an important role in the study of algebraic cycles on a smooth projective variety $X$ over a field $k$, because it can be considered as a specialization of the diagonal carrying the motivic information at large. More precisely, let $k$ be an algebraically closed field, let $d$ be the dimension of $X$, and let $K=k(X)$ be the function field of $X$. Consider the pull-back homomorphism $$ \Phi :CH^d(X\times X)\to CH^d(X_K) $$ induced by the embedding of the generic point $\eta =\Spec (K)$ into $X$. The kernel of $\Phi $ is generated by correspondences supported on $Z\times X$, where $Z$ runs over Zariski closed subschemes of $X$ different from $X$ itself, see \cite{KMP}. Hence, various motivic effects, given originally in terms of correspondences, i.e. cycle classes in $CH^d(X\times X)$, can be expressed in terms of zero-cycles on $X_K$, modulo motives of varieties of dimension $<d$. Assume, for example, that $X$ is a surface of general type over an algebraically closed field $k$, and that the second Weil cohomology group $H^2(X)$ is algebraic. Let $\Delta _X$ be the diagonal on $X\times X$. Its specialization $$ P_{\eta }=\Phi (\Delta _X) $$ is the generic point $\eta $ viewed as a zero-cycle on $X_K$. Fix now a $k$-rational point $P_0$ on $X$. Let $\Omega $ be a universal domain containing $k$ and embed $K$ into $\Omega $ over $k$. In the paper we will show, see Corollary \ref{Bloch}, that if $P_{\eta }$ is rationally equivalent to $P_0$ on $X_{\Omega }$ then any point $P$ is rationally equivalent to any other point $Q$ on $X_{\Omega }$, i.e. Bloch's conjecture holds for $X_{\Omega }$. As Bloch's conjecture is equivalent to the finite-dimensionality of the motive $M(X_{\Omega })$, we see that the above specialization map $\Phi $ allows one to reformulate motivic effects at large in terms of a rational equivalence between two concrete points on $X_{\Omega }$. Certainly, it is still not easy to prove (or disprove) the rational equivalence between the above points $P_{\eta }$ and $P_0$. One of the problems here is the lack of rational curves on surfaces of general type with algebraic $H^2(X)$. However, we believe that any further progress towards Bloch's conjecture must involve an analysis of the possibility of an explicit rational deformation of $P_{\eta }$ into $P_0$ on the surface $X_{\Omega }$. \medskip The above picture can be generalized as follows. Let $X$ be a smooth projective variety of dimension $d$ over an algebraically closed field $k$. For any zero-cycle $Z=\sum _in_iP_i$ on $X$ one can define its transcendence degree as the maximum of the transcendence degrees of the residue fields $k(P_i)$. The transcendence degree of a zero-cycle class $\alpha \in CH^d(X)$ is the exact lower bound of the transcendence degrees of representatives of $\alpha $. Then the motive $M(X)$ is a direct summand of a direct sum of motives of varieties of dimensions $<d$, twisted by Lefschetz motives, if and only if the transcendence degree of any zero-cycle class $\alpha \in CH^d(X)$ is strictly smaller than $d$. A nice thing is that the last assertion is also equivalent to the fact that there exists a point $P$ of transcendence degree $d$ on $X_{\Omega }$, rationally equivalent to a zero-cycle on $X$ whose transcendence degree is strictly smaller than $d$.
More precisely, we prove the following theorem (see Theorem \ref{theor-maintrdeg} in the text below): \medskip \begin{itemize} \item[]{} {\it For any smooth projective variety $X$ of dimension $d$ over $k$ the following conditions are equivalent: \begin{enumerate} \item[]{(i)} the class of the diagonal $\Delta _X$ is balanced; \item[]{(ii)} the Chow motive of $X$ is a direct summand of a sum of motives of varieties of dimension strictly smaller than $d$; \item[]{(iii)} the transcendence degree of any zero-cycle class on $X_{\Omega }$ is strictly less than $d$; \item[]{(iv)} there exists a closed point on $X_{\Omega }$ whose transcendence degree is $d$ but the transcendence degree of its class modulo rational equivalence is strictly less than $d$. \end{enumerate} } \end{itemize} \medskip \bigskip {\sc Acknowledgements.} The first author was partially supported by the grants RFBR 08-01-00095, NSh-4713.2010.1 and MK-297.2009.1. Both authors are thankful to Artiom Brazovsky for the hospitality in his country-house in Zadomlya (Belarus), where the draft version of this paper was written in August 2010. \bigskip \bigskip \section{Some motivic lemma} Below we will use the notation from \cite{GG3}. In particular, all Chow groups will be with coefficients in $\QQ $, except for the cases when integral coefficients are indicated by the subscript $\ZZ $. The category of Chow motives $\CHM $ over a field $k$ will be contravariant. That is, if $X$ and $Y$ are two smooth projective varieties over $k$, and $X$ is decomposed into its connected components $X_j$, then the group of correspondences $$ \Corr ^m(X,Y) $$ of degree $m$ from $X$ to $Y$ is the direct sum of the Chow groups $$ CH^{e_j+m}(X_j\times Y)\; , $$ where $e_j$ is the dimension of $X_j$. The composition of two correspondences $f\in \Corr ^n(X,Y)$ and $g\in \Corr ^m(Y,Z)$ is the standard one, $$ g\circ f={p_{13}}_*(p_{12}^*(f)\cdot p_{23}^*(g))\; , $$ where the dot stands for the intersection of cycle classes in the sense of \cite{Fulton}. We also have a contravariant functor $M$ from the category of smooth projective varieties over $k$ to the category $\CHM $ sending any variety $X$ to its motive $$ M(X)=(X,\Delta _X,0)\; , $$ where $\Delta _X$ is the diagonal class of $X$, and any morphism $f:X\to Y$ to the class of the transpose of its graph $$ \Gamma _f^{\rm t}\in \Corr ^0(Y,X)\; . $$ The category of Chow motives $\CHM $ is tensor, with the tensor product induced by products of varieties. The unit motive $$ \uno =(\Spec (k),\Delta _{\Spec (k)},0) $$ and the Lefschetz motive $$ \Le =(\Spec (k),\Delta _{\Spec (k)},-1) $$ are related by the formula $$ M(\PR ^1)=\uno \oplus \Le \; . $$ For any positive integer $m$ let $\Le ^m$ be the $m$-fold tensor power of the Lefschetz motive $\Le $, let $\Le ^0=\uno $ and let $\Le ^{-m}=(\Le ^{-1})^{\otimes m}$, where $$ \Le ^{-1}=(\Spec (k),\Delta _{\Spec (k)},1)\; . $$ Further details on Chow motives can be found, for example, in \cite{GG3}. \bigskip The next notion we need is the notion of balancing. Let $X$ and $Y$ be two equi-dimensional varieties over $k$. Similarly to \cite{Barbieri Viale}, we say that a correspondence $\alpha\in CH^m(X,Y)$ is {\it balanced on the left} (respectively, {\it on the right}) if there exists an equi-dimensional Zariski closed subscheme $Z\subset X$ with $$ \dim (Z)<\dim (X) $$ (respectively, $Z\subset Y$ with $\dim (Z)<\dim (Y)$), and an algebraic cycle $\Gamma $ on $X\times Y$, such that $[\Gamma ]=\alpha $ in $CH^m(X,Y)$ and the support of $\Gamma $ is contained in $Z\times Y$ (respectively, in $X\times Z$).
The subscheme $Z$ will be called a {\it pan} of balancing. We say that $\alpha \in CH^m(X,Y)$ is {\it balanced} if $\alpha =\alpha _1+\alpha _2$, where $\alpha _1$ is balanced on the left, and $\alpha_2$ is balanced on the right. \bigskip Balancing was discovered in \cite{Bloch} and \cite{BlochSrinivas}. It is a motivic notion and can be restated in purely motivic terms: \bigskip \begin{lemma} \label{lemma-suppleft} Let $X$ and $Y$ be equidimensional smooth projective varieties over $k$, and let $\alpha \in CH^m(X,Y)$. Then $\alpha $ is balanced on the left if and only if there exists an equidimensional smooth projective variety $Z$ over $k$ with $$ \dim (Z)<\dim (X)\; , $$ such that $\alpha $ factors through $M(Z)$, that is, $\alpha$ is a composition $$ M(X)\lra M(Z)\lra M(Y)\otimes \Le ^{-m}\; . $$ Symmetrically, the correspondence $\alpha $ is balanced on the right if and only if there exists an equidimensional smooth projective variety $Z$ over $k$ with $$ n=\dim(Y)-\dim(Z)>0\; , $$ such that $\alpha $ is a composition $$ M(X)\lra M(Z)\otimes \Le ^{n-m}\lra M(Y)\otimes \Le^{-m}\; . $$ \end{lemma} \begin{pf} If $m=0$ and the closed subscheme $Z$ is smooth, then the lemma is obvious. Indeed, let $i:Z\hra X$ be the closed embedding, and let $\Gamma _i^{\rm t}$ be the transpose of the graph of the embedding $i$. If $\alpha $ is balanced on the left, then it can be considered as a correspondence of degree zero from $Z$ to $Y$. Therefore, the correspondence $\alpha $ from $X$ to $Y$ is the composition of the correspondence $\Gamma _i^{\rm t}$ with $\alpha $ viewed as a correspondence from $Z$ to $Y$. The detailed proof of the lemma, when $Z$ is not necessarily smooth and $m\neq 0$, is given in \cite{GG3}. \end{pf} In the next section we will introduce the transcendence degree of a zero-cycle on a smooth projective variety, and we will show how it is related to the balancing of the diagonal, and so to the above motivic factorizations from Lemma \ref{lemma-suppleft}. \section{Transcendence degree of zero-cycles} First we need to recall some well-known things from the theory of schemes. \medskip Let $k$ be a field, and let $X$ be an algebraic scheme over $k$. Let $k\subset K$ be a field extension. Recall that a $K$-point on $X$ is a morphism of schemes $P:\Spec (K)\to X$ over $\Spec (k)$. A subextension $k\subset L\subset K$ is a field of definition of the point $P$ if there exists a morphism $$ P_L : \Spec (L)\lra X\; , $$ such that the following diagram is commutative $$ \xymatrix{ \Spec (K) \ar[ddrr]^-{} \ar[rrrr]^-{P} & & & & X \\ \\ & & \Spec (L) \ar[rruu]_-{P_L} & & } $$ as a diagram over $\Spec (k)$. Let $\xi _P$ be the image of the unique point of $\Spec (K)$ under the morphism $P$, and let $k(\xi _P)$ be the residue field of the point $\xi _P$ on the scheme $X$. Then $k(\xi _P)$ is the minimal field of definition of the point $P$, i.e. the initial object in the category of fields of definition of the point $P$, because $k(\xi _P)\hra L$ and the above morphism $P_L$ factors through the natural morphism $\Spec (k(\xi _P))\to X$. By definition, the transcendence degree of the point $P$ over the ground field $k$ is the transcendence degree of the field $k(\xi _P)$ over $k$: $$ \trdeg (P/k)=\trdeg (k(\xi _P)/k)\; . $$ Thus, the transcendence degree $\trdeg (P/k)$ is the transcendence degree of the minimal field of definition of the point $P$ over the ground field $k$.
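For illustration (a standard example, added here and not part of the original text), let $k=\QQ $, $X=\PR ^1_{\QQ }$ and $K=\CC $, and consider the $\CC $-point $$ P:\Spec (\CC )\lra \PR ^1_{\QQ } $$ hitting the point with coordinate $[\pi :1]$. Its minimal field of definition is $k(\xi _P)=\QQ (\pi )$, so that $$ \trdeg (P/\QQ )=\trdeg (\QQ (\pi )/\QQ )=1=\dim (\PR ^1)\; . $$ By contrast, the $\CC $-point hitting $[\sqrt{2}:1]$ has the residue field $\QQ (\sqrt{2})$, which is algebraic over $\QQ $, whence its transcendence degree is $0$.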
Notice that if $k\subset L\subset K$ is a field subextension, then the point $P$ induces an $L$-point $Q_L$ of the variety $X\times _{\Spec (k)}\Spec (L)$, and one has a commutative diagram $$ \xymatrix{ \Spec (K) \ar[ddrr]^-{} \ar[rrrr]^-{Q_L} & & & & X \times _{\Spec (k)}\Spec (L) \ar[lldd]_-{} \\ \\ & & \Spec (L) & & } $$ Notice that the transcendence degree $\trdeg (Q_L/L)$ can be different from the transcendence degree $\trdeg (P/k)$. For example, $\trdeg (Q_{k(\xi _P)}/k(\xi _P))=0$. \medskip Let $Y$ be the Zariski closure of the schematic point $\xi _P$ in $X$. Then $Y$ is a closed irreducible subscheme in $X$ and $$ \trdeg (P/k)=\dim _k(Y)\; . $$ It follows, in particular, that $$ \trdeg (P/k)\leq \dim _k(X)\; . $$ \medskip Now we are going to introduce the notion of the transcendence degree of a zero-cycle on a variety. Let $\Omega $ be a universal domain containing $k$. Suppose $X$ is an equidimensional variety, and let $d$ be the dimension of $X$. \medskip \begin{definition} The {\rm transcendence degree} $\trdeg (\alpha /k)$ of a zero-cycle class $\alpha \in CH^d(X_{\Omega })$ over $k$ is the minimal natural number $n$, such that there exists a zero-cycle $$ Z=\sum _in_iP_i $$ on $X_{\Omega }$ representing the class $\alpha $ with the property $$ \trdeg (P_i/k)\leq n $$ for all $i$. \end{definition} \medskip The following properties of the transcendence degree for zero-cycles follow directly from the above definition. \begin{lemma} \label{lemma-propertiestrdeg} Let $X$ be an equidimensional variety over $k$ of dimension $d$. Then the following is true: \begin{enumerate} \item[]{(i)} for any element $\alpha\in CH^d(X_{\Omega })$ one has $$ \trdeg (\alpha /k)\le d; $$ \item[]{(ii)} for all elements $\alpha ,\beta \in CH^d(X_{\Omega })$ we have that $$ \trdeg((\alpha+\beta)/k)\le \max\{\trdeg(\alpha/k)\, ,\; \trdeg(\beta/k)\}; $$ \item[]{(iii)} given a field subextension $k\subset K\subset \Omega $ and an element $\beta \in CH^d(X_K)$, we have the inequality $$ \trdeg(\beta\,_{\Omega }/k)\le \trdeg(K/k)\; . $$ \end{enumerate} \end{lemma} \begin{remark} {\rm Not every cycle class $\alpha \in CH^d(X_{\Omega })$ is equal to $\beta\, _{\Omega }$ for some $\beta \in CH^d(X_K)$ and $K$ with $\trdeg (K/k)=\trdeg (\alpha /k)$. Let, for example, $X$ be a smooth projective curve of genus at least two. Then there exists a point $P$ of transcendence degree at least two on the Jacobian variety $\Jac (X)$ of $X$ over $k$. Let $\alpha $ be the cycle class in the Chow group $CH^1(X_{\Omega })_0$ of degree zero $0$-cycles on the curve $X$ corresponding to the point $P$ under the isomorphism $$ CH^1(X_{\Omega })_0=\Jac (X)_{\Omega }\; . $$ Then $\trdeg (\alpha /k)\leq 1$ because $\dim (X)=1$. Suppose now that $\alpha $ comes from an element $\beta \in CH^1(X_K)_0$ by means of the scalar extension from $K$ to $\Omega $, where $\trdeg (K/k)=1$. Since the isomorphism between the Chow group of degree zero $0$-cycles and the Jacobian commutes with scalar extensions of the ground field, the point $P$ must be defined over $K$, which is impossible as $\trdeg (P/k)\geq 2$. } \end{remark} \medskip We will also use the following fact. \begin{lemma}\label{lemma-motivictrdeg} Let $X$ and $Y$ be two smooth projective equidimensional varieties over $k$, let $d=\dim (X)$, $e=\dim (Y)$ and assume $e<d$. Let $\varphi $ be a correspondence of degree $d-e$ from $Y$ to $X$, that is, $\varphi$ is a morphism of Chow motives $$ M(Y)\otimes \Le ^{\otimes (d-e)}\lra M(X)\; . $$ Then for any element $\alpha \in CH^e(Y_{\Omega })$ one has $$ \trdeg ((\varphi _{\Omega })_*(\alpha )/k)\leq \trdeg (\alpha /k)\; .
$$ \end{lemma} \begin{pf} Let $\sum _in_iP_i$ be a zero-cycle on $Y_{\Omega }$, such that $$ \alpha =\left[\sum _in_iP_i\right]\; , $$ and $$ \trdeg (P_i/k)\le \trdeg (\alpha /k) $$ for all $i$. By the linearity of the push-forward homomorphism $(\varphi _{\Omega })_*$ and by Lemma~\ref{lemma-propertiestrdeg} (ii), it is enough to show that $$ \trdeg ((\varphi _{\Omega })_*[P_i]/k)\leq \trdeg (P_i/k) $$ holds true for all indices $i$. Let $P$ be one of the points $P_i$. By the definition of the transcendence degree of a point, there exists a field $K$, such that $$ \trdeg (K/k)=\trdeg (P/k)\; , $$ and a point $W\in Y(K)$, such that $$ W_{\Omega }=P\; . $$ Moreover, $$ (\varphi _{\Omega })_*[P]=((\varphi _K)_*[W])_{\Omega }\; . $$ Since $(\varphi _K)_*[W]\in CH^d(X_K)$, by Lemma~\ref{lemma-propertiestrdeg} (iii), we see that $$ \trdeg ((\varphi _{\Omega })_*[P]/k)\leq \trdeg (K/k)=\trdeg (P/k)\; , $$ which completes the proof. \end{pf} \begin{remark} {\rm Certainly, one can also define the notion of a transcendence degree for all closed irreducible subschemes in $X_{\Omega }$ and, respectively, for elements in the Chow groups $CH^p(X_{\Omega })$ of arbitrary codimension $p$. Moreover, the analogs of Lemma~\ref{lemma-propertiestrdeg} (ii), (iii) and Lemma~\ref{lemma-motivictrdeg} imply that the transcendence degree is also well-defined for elements in Chow groups of Chow motives over $k$, and that this transcendence degree does not increase under taking push-forwards with respect to morphisms between Chow motives over $k$. } \end{remark} \bigskip Now we are ready to prove our main statement. \medskip \begin{theorem} \label{theor-maintrdeg} Let $X$ be an irreducible smooth projective variety over $k$ of dimension $d$. The following conditions are equivalent: \begin{enumerate} \item[]{(i)} the class of the diagonal $\Delta _X$ is balanced in $CH^d(X\times X)$; \item[]{(ii)} the Chow motive $M(X)$ is isomorphic to a direct summand of the motive $$ M(Y_1)\oplus (M(Y_2)\otimes\Le^{d-e})\; , $$ where $Y_1$ and $Y_2$ are equidimensional smooth projective varieties over $k$ whose dimensions are strictly less than $d$, and $e$ is the dimension of the variety $Y_2$; \item[]{(iii)} any element $\alpha \in CH^d(X_{\Omega })$ satisfies $$ \trdeg (\alpha /k)<d\; ; $$ \item[]{(iv)} there exists a closed point $P\in X_{\Omega }$, such that $\trdeg (P/k)=d$ and $$ \trdeg ([P]/k)<d\; , $$ where $[P]$ is the class of the point $P$ in $CH^d(X_{\Omega })$. \end{enumerate} \end{theorem} \begin{pf} \bigskip \noindent $(i)\Rightarrow(ii)$ \medskip Suppose that $[\Delta _X]=\alpha _1+\alpha _2$, where $\alpha _1$ is balanced on the left and $\alpha _2$ is balanced on the right. By Lemma~\ref{lemma-suppleft}, there exist two equidimensional varieties $Y_1$ and $Y_2$ as in (ii), and factorizations $$ M(X)\to M(Y_1)\to M(X)\quad \hbox{of}\quad \alpha _1\; ,\qquad M(X)\to M(Y_2)\otimes \Le ^{d-e}\to M(X)\quad \hbox{of}\quad \alpha _2\; . $$ Put $M:=M(Y_1)\oplus (M(Y_2)\otimes\Le^{d-e})$. Since $[\Delta _X]=\alpha _1+\alpha _2$, the identity morphism from $M(X)$ to itself factors through $M$; thus, $M(X)$ is a direct summand in $M$. \bigskip \noindent $(ii)\Rightarrow(iii)$ \medskip Looking at the Chow groups of the motives involved in the decomposition $$ M(X)\lra M(Y_1)\oplus (M(Y_2)\otimes\Le^{d-e})\lra M(X) $$ we see that all elements in $CH^d(X_{\Omega })$ are push-forwards with respect to the morphism $$ M(Y_2)\otimes\Le^{d-e}\lra M(X)\; , $$ as $$ CH^d(M((Y_1)_{\Omega }))=CH^d((Y_1)_{\Omega })=0 $$ because $\dim (Y_1)<d$.
Then (iii) follows from Lemma~\ref{lemma-motivictrdeg} and Lemma~\ref{lemma-propertiestrdeg} (i). \bigskip $(iii)\Rightarrow(iv)$ \medskip A closed point $P$ on $X_{\Omega }$ with $\trdeg (P/k)=d$ always exists (take, for example, any closed point lying over the generic point of $X$), and by (iii) its class satisfies $\trdeg ([P]/k)<d$. \bigskip $(iv)\Rightarrow(i)$ \medskip Let $\sum _in_iP_i$ be a zero-cycle on $X_{\Omega }$, such that $$ [P]=\left[\sum _in_iP_i\right]\; , $$ and $$ \trdeg (P_i/k)<d\; . $$ By the definition of the transcendence degree, there are field extensions $$ K\subset \Omega \quad \hbox{and}\quad K_i\subset \Omega $$ over $k$, and points $$ W\in X(K)\; ,\; \; W_i\in X(K_i)\; , $$ such that $$ W_{\Omega }=P\; ,\; \; \; (W_i)_{\Omega }=P_i\; , $$ the fields $K$ and $K_i$ are finitely generated over $k$ with $$ \trdeg (K/k)=d\quad \hbox{and}\quad \trdeg(K_i/k)<d\; . $$ Let $L$ be the composite of the fields $K$ and $K_i$ in $\Omega $. As $$ [P]=\left[\sum _in_iP_i\right]\in CH^d(X_{\Omega }) $$ and all the involved Chow groups are with coefficients in $\QQ $, one has $$ [W_L]=\left[\sum _in_i(W_i)_L\right]\in CH^d(X_L)\; , $$ see \cite{Bloch}, page 1.21. Let now $V$ be a smooth irreducible quasi-projective variety over $k$, such that $$ k(V)=K\; . $$ Then we also have a rational dominant morphism $$ f:V{\dashrightarrow }X\; , $$ which coincides at the generic point with the morphism $W:\Spec(K)\to X$. Similarly, for each $i$, we have a smooth irreducible quasi-projective variety $V_i$ with $k(V_i)=K_i$, and a rational dominant morphism $$ f_i:V_i{\dashrightarrow }X $$ inducing the morphism $W_i:\Spec(K_i)\to X$ at the generic point. Shrinking the varieties $V$ and $V_i$ to Zariski open subsets, one can assume that the above morphisms $f$ and $f_i$ are all regular. We also need a smooth irreducible quasi-projective variety $Z$ over $k$ with dominant regular morphisms $g:Z\to V$ and $g_i:Z\to V_i$, such that the function field $k(Z)$ coincides with $L$. For any regular morphism $h$ let $\Gamma _h$ be the graph of $h$. Shrinking $Z$ to a non-empty Zariski open subset if necessary, we have that $$ [\Gamma _{fg}]=\left[\sum _in_i\Gamma _{f_ig_i}\right] $$ in the group $CH^d(Z\times X)$, because the analogous rational equivalence holds over the generic point of $Z$, which is $\Spec (L)$, see above. Notice that $$ \dim (V)=\trdeg (K/k)=d\; . $$ Let $T\subset Z$ be a generic $d$-dimensional multiple hyperplane section of $Z$. The scheme $T$ is irreducible by Bertini's theorem, and the restrictions $$ h:=g|_T:T\to V,\quad h_i:=(g_i)|_T:T\to V_i $$ are still dominant. By taking pull-backs in Chow groups with respect to the embedding $T\times X\to Z\times X$, we obtain $$ [\Gamma _{fh}]=\left[\sum _in_i\Gamma _{f_ih_i}\right] $$ in the group $CH^d(T\times X)$. Since $\dim (T)=d$ and the composition $fh:T\to X$ is dominant, we see that $fh$ is generically finite. Thus, shrinking $T$ to a non-empty open subset, we may assume that the morphism $fh$ is a finite surjective morphism from $T$ onto a non-empty open subset $U$ in $X$. Now we use push-forwards in Chow groups with respect to the finite morphism $$ fh\times \id_X:T\times X\lra U\times X\; . $$ From the above equality we obtain that $$ (fh\times \id _X)_*[\Gamma _{fh}]= (fh\times \id _X)_*\left[\sum _in_i\Gamma _{f_ih_i}\right] $$ in the group $CH^d(U\times X)$. Set-theoretically, $$ (fh\times \id _X)(\Gamma _{fh})=\Delta _X\cap (U\times X)\; . $$ The closure of $(fh\times \id _X)(\Gamma _{f_ih_i})$ in $X\times X$ is contained in $X\times \overline {f_i(V_i)}$, where $\overline {f_i(V_i)}$ is the Zariski closure of $f_i(V_i)$ in $X$.
Since $$ \dim (V_i)=\trdeg (K_i/k)<d $$ and all the Chow groups are with rational coefficients, we see that $\Delta _X$ is balanced. \end{pf} \medskip \noindent {\bf Remark.} The equivalence $(i)\Leftrightarrow(ii)$ was actually proved in \cite{GP2}, but we included it in the theorem for the convenience of the reader. \medskip \section{An example} An important point in Theorem \ref{theor-maintrdeg} is that (iv) implies (i). Let us illustrate this by an example. Let $X$ be a smooth projective surface over $\CC $, of general type and with $p_g=0$. Recall that Bloch's conjecture predicts that for any two closed points $P$ and $Q$ on $X_{\CC }$ the point $P$ is rationally equivalent to $Q$. This conjecture is a codimension $2$ case of the Bloch-Beilinson paradigm for algebraic cycles, and it is highly inaccessible. It is known for surfaces with Kodaira dimension $<2$, \cite{BKL}, for finite quotients of products of curves, \cite{Kimura}, and for surfaces of general type (which are not finite quotients of products of curves) in \cite{InoseMizukami}, \cite{Barlow} and \cite{Voisin}. Let now $k$ be the algebraic closure in $\CC $ of the minimal field of definition of the surface $X$, and let $K=k(X)$ be the function field of $X$ over $k$. Let $\eta =\Spec (K)$ be the generic point of $X$, and let $P_{\eta }$ be the corresponding $K$-rational closed point on $X_K$. Theorem \ref{theor-maintrdeg} implies the following corollary: \medskip \begin{corollary} \label{Bloch} Bloch's conjecture holds for $X$ if and only if there exist an embedding of $K$ into $\CC $ over $k$, and a $k$-rational point $P$ on $X$, such that the above closed $K$-rational point $P_{\eta }$ is rationally equivalent to $P$ on $X$ as a variety over $\CC $. \end{corollary} \medskip This can be made absolutely explicit in the case of Godeaux surfaces, for which Bloch's conjecture was proved by C.~Voisin in \cite{Voisin}. Namely, let $\mu _5$ be the group of $5$-th roots of unity in $\CC $, and let $\epsilon $ be a primitive root in it. The group $\mu _5$ acts on $\PR ^3$ by the rule $$ [x_0:x_1:x_2:x_3]\mapsto [x_0:\epsilon x_1:\epsilon ^2x_2:\epsilon ^3x_3]\; . $$ Let $f=f(x)$ be a $\mu _5$-invariant smooth quintic form in $\PR ^3$, and let $Y=Z(f)$ be the set of zeros of $f$ in $\PR ^3$. Since $f$ is $\mu _5$-invariant, the group $\mu _5$ acts on $Y$. Assume, in addition, that $Y$ does not contain the four fixed points of the action of $\mu _5$ on $\PR ^3$. Then the quotient surface $$ X=Y/\mu _5 $$ is non-singular, and it is called a Godeaux surface. It is well known that $p_g=q=0$ for such $X$, see \cite{Reid}. Take now two transcendental complex numbers which are algebraically independent over $\QQ $, say $e$ and $e^{\pi }$, see \cite{Nesterenko}. Let $\alpha $ be one of the zeros of the polynomial obtained by substituting the coordinates $e$ and $e^{\pi }$ into the affinized form $f$. Then $P_{\eta }$ can be represented as the class of the point $$ (e,e^{\pi },\alpha )\in \CC ^3 $$ under the quotient map $Y\to X$. Voisin's result then says that the point $P_{\eta }$ is rationally equivalent to a point in $X(\bar \QQ )$. The point of Corollary \ref{Bloch} is that the above rational equivalence between two single points on $X(\CC )$ is the only thing responsible for the vanishing of the whole Albanese kernel in this situation. We believe that this observation can be useful in approaching Bloch's conjecture in some concrete contexts, such as Mumford's fake projective plane, see \cite{Mumford}. Recall that such surfaces were recently classified in \cite{PrasadYeung}.
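To keep the Godeaux construction concrete, here is a standard choice of the invariant quintic (an illustration added to the text, not part of the original): $$ f(x)=x_0^5+x_1^5+x_2^5+x_3^5\; . $$ Under the above action each monomial $x_j^5$ is multiplied by $\epsilon ^{5j}=1$, so $f$ is $\mu _5$-invariant, and $Y=Z(f)$ is smooth since the partial derivatives $5x_j^4$ vanish simultaneously only at the origin. The four fixed points of $\mu _5$ on $\PR ^3$ are the coordinate points $[1:0:0:0],\ldots ,[0:0:0:1]$, and $f$ does not vanish at any of them, so the quotient $Y/\mu _5$ is indeed a Godeaux surface.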
\bigskip WARNING. It is tempting to try to find a rational curve through the points $P_{\eta }$ and $P_0$ on the Godeaux surface $X$ over $\CC $. The first problem is that $X$ is a surface of general type whose discrete invariants vanish, so that one can expect only a few rational curves on $X_{\CC }$. But this is not yet the main trouble. The main difficulty is that no rational curve can pass through $P_{\eta }$ at all. \medskip Indeed, let $X$ be a smooth projective surface over the ground subfield $k$ in $\Omega $. Let $P_{\eta }$ be a closed point of transcendence degree $2$ on $X_{\Omega }$. Suppose there exists a field subextension $k\subset K\subset \Omega $, a point $P:\Spec (K)\to X$ with $\trdeg (P/k)=2$, and a rational curve $C$ on $X_K$ passing through the point $P$. Let us show that $X$ is then uniruled. Without loss of generality one can assume that $K$ is finitely generated over $k$. Let $Y$ be an irreducible variety over $k$, such that $K$ is the function field of $Y$ over $k$. The rational curve $C\subset X_K$ induces a morphism $$ \phi :\PR ^1_K\lra X_K $$ which in turn induces a rational morphism $$ f:\PR ^1\times _kY\dashrightarrow X\times _kY\; , $$ such that $$ f\times _Y\Spec (K)=\phi \; . $$ The point $P$ gives a morphism $$ \Spec (K)\to \PR ^1\times _kK $$ over $K$. This corresponds to some rational section of the projection $$ \PR ^1\times Y\to Y\; . $$ The morphism $$ \Spec (K)\to \PR ^1_K\to X_K\to X $$ sends the unique point of $\Spec (K)$ into the generic point of $X$, because $\trdeg (P/k)=2$. Therefore, the composition $$ Y\dashrightarrow \PR ^1\times _kY\stackrel{f}{\dashrightarrow }X\times _kY \stackrel{p_X}{\dashrightarrow }X $$ is dominant, where $p_X$ is the projection onto $X$. It follows that the morphism $$ \PR ^1\times _kY\lra X $$ is dominant as well. Moreover, the induced map $$ \PR ^1_K\lra X_K $$ gives a birational isomorphism with its image. It follows that this image is a curve in $X_K$. Hence, the map $$ \PR ^1\times _kY\lra X $$ does not factor through the projection $\PR ^1\times Y\to Y$. Hence, for at least one point $y\in Y$ the induced map $$ \PR ^1_y\lra X $$ is not constant, and $X$ is uniruled by \cite[1.3.4]{Kollar}. \medskip Thus, if we could have a rational curve through $P_{\eta }$ on a smooth projective surface $X_{\CC }$ of general type with $p_g=0$, we would immediately get a contradiction, as such a surface is very far from being uniruled. \medskip This shows that in order to find a precise rational equivalence between $P_{\eta }$ and $P_0$ we need to find more than one curve of genus $>0$ on the Godeaux surface $X$, and rational functions on them, which will provide a suitable zero-pole cancellation of their principal divisors.
\section*{Proof of Theorem \ref{ThmMain} and Proposition \ref{PropInC}} We use the $\overline{\partial}$-Neumann problem in the proof of Theorem \ref{ThmMain}. Let $\Omega$ be a bounded pseudoconvex domain in $\mathbb{C}^n$ and let $\Box^{\Omega}=\overline{\partial}\dbar^*+\overline{\partial}^*\overline{\partial}$ be defined on square integrable $(0,1)$-forms, $L_{(0,1)}^{2}(\Omega),$ where $\overline{\partial}^*$ is the Hilbert space adjoint of $\overline{\partial}.$ Kohn \cite{Kohn63} and H\"ormander \cite{Hormander65} showed that (since $\Omega$ is a pseudoconvex domain) $\Box^{\Omega}$ has a solution operator, denoted by $N^{\Omega},$ on $L_{(0,1)}^{2}(\Omega).$ Kohn \cite{Kohn63} also showed that $P^{\Omega}=I-\overline{\partial}^* N^{\Omega}\overline{\partial}.$ Therefore, $H^{\Omega}_{\phi}(f)=\overline{\partial}^*N^{\Omega}(f\overline{\partial}\phi)$ for $f\in A^2(\Omega)$ and $\phi\in C^1(\overline{\Omega}).$ We note that $H^{\Omega}_{\phi}(f)$ is the canonical solution of $\overline{\partial} u=f\overline{\partial}\phi.$ That is, $H^{\Omega}_{\phi}(f)$ is the solution that is orthogonal to $A^2(\Omega)$ (or, equivalently, the solution with the smallest norm in $L^2(\Omega)$). We refer the reader to \cite{ChenShawBook,StraubeBook} and \cite{CuckovicSahutoglu09} (and references therein) for more information about the $\overline{\partial}$-Neumann problem and compactness of Hankel operators on Bergman spaces. We use a series of lemmas for the proof of Theorem \ref{ThmMain}. We note that the following lemma is an immediate corollary of \cite[Proposition V.2.3]{D`AngeloIneqBook} (see also \cite[Lemma 4.3]{StraubeBook}). \begin{lemma}\label{LemCompEstimate} Let $T:X\to Y$ be a linear operator between two Hilbert spaces $X$ and $Y$. Then $T$ is compact if and only if for every $\varepsilon>0$ there exists a compact operator $K_{\varepsilon}:X\to Y$ so that \[\|T(h)\|_Y\leq \varepsilon\|h\|_X+\|K_{\varepsilon}(h)\|_Y \text{ for } h\in X.\] \end{lemma} In the proof of Theorem \ref{ThmMain} we will need to apply Lemma \ref{LemCompEstimate} in the following set-up. \begin{lemma} \label{LemCompactH} Let $\Omega$ be a bounded pseudoconvex domain in $\mathbb{C}^n,$ $\phi\in C^1(\overline{\Omega}),$ and $X_{\phi}(\Omega)$ be the closure of $\{f\overline{\partial} \phi \in L^2_{(0,1)}(\Omega): f\in A^{2}(\Omega)\} $ in $L^2_{(0,1)}(\Omega).$ Then $H^{\Omega}_{\phi}$ is compact on $A^2(\Omega)$ if and only if for every $\varepsilon>0$ there exists a compact operator $K_{\varepsilon}:X_{\phi}(\Omega)\to L^{2}(\Omega)$ such that \begin{equation}\label{CompEst} \|\overline{\partial}^{*}N^{\Omega}(f\overline{\partial}\phi)\|\leq \varepsilon\|f\overline{\partial} \phi\|+\|K_{\varepsilon}(f\overline{\partial} \phi)\| \text{ for all } f\in A^{2}(\Omega).
\end{equation} \end{lemma} \begin{proof} Assume that $H^{\Omega}_{\phi}$ is compact on $A^{2}(\Omega).$ Then $\overline{\partial}^{*}N^{\Omega}$ is compact on a dense subset of $X_{\phi}(\Omega),$ which implies that it is compact on $X_{\phi}(\Omega).$ Then applying Lemma \ref{LemCompEstimate} with $T= \overline{\partial}^{*}N^{\Omega}$ and $X=X_{\phi}(\Omega)$ we get the following estimate: for every $\varepsilon>0$ there exists a compact operator $K_{\varepsilon}:X_{\phi}(\Omega) \to L^{2}(\Omega)$ so that \[\|\overline{\partial}^{*}N^{\Omega}(f\overline{\partial} \phi)\|\leq \varepsilon\|f\overline{\partial} \phi\| +\|K_{\varepsilon}(f\overline{\partial} \phi)\| \textrm{ for } f \in A^2(\Omega).\] On the other hand, if we assume that we have \eqref{CompEst}, then Lemma \ref{LemCompEstimate} implies that $\overline{\partial}^{*}N^{\Omega}$ is a compact operator on $X_{\phi}(\Omega).$ Hence, $H^{\Omega}_{\phi}$ is compact on $A^2(\Omega).$ This completes the proof of Lemma \ref{LemCompactH}. \end{proof} The following famous theorem of H\"{o}rmander \cite[Theorem 4.4.2]{HormanderBook} will be used. \begin{theorem*}[H\"{o}rmander] Let $\Omega$ be a pseudoconvex domain in $\mathbb{C}^{n}$ and $\psi$ be a continuous plurisubharmonic function on $\Omega$. Assume that $u =\sum_{j=1}^{n}u_{j}d\bar z_{j} \in L^{2}_{(0,1)}(\Omega,e^{-\psi})$ is such that $\overline{\partial} u=0$. Then there exists $f \in L^{2}(\Omega,e^{-\psi})$ such that $\overline{\partial} f=u$ and \[\int_{\Omega}\frac{|f(z)|^{2}}{(1+\sum_{j=1}^n|z_j|^{2})^{2}}e^{-\psi(z)}d\lambda(z) \leq \int_{\Omega}\sum_{j=1}^{n}|u_{j}(z)|^{2}e^{-\psi(z)}d\lambda(z)\] where $z=(z_1,\dots,z_n)\in \mathbb{C}^n.$ \end{theorem*} We include the following standard lemma and its proof for the convenience of the reader. \begin{lemma} \label{Lem2} Let $\Omega$ be a bounded pseudoconvex domain in $\mathbb{C}^n, B(p,r)$ be the ball centered at $p\in b\Omega$ with radius $r,$ and $\Omega(p,r)=B(p,r)\cap \Omega.$ For $\varepsilon>0$ and $0<\delta<r$ there exists a bounded operator $E_{\varepsilon,\delta}:A^2(\Omega(p,r))\to A^{2}(\Omega)$ such that \begin{equation*} \|f-E_{\varepsilon,\delta}(f)\|_{L^{2}(\Omega(p,r-\delta))} \leq \varepsilon \|f\|_{L^{2}(\Omega(p,r-\delta))} \text{ for } f\in A^{2}(\Omega(p,r)). \end{equation*} \end{lemma} The following proof uses H\"{o}rmander's Theorem in a similar fashion as the proof of \cite[Theorem VI.3]{JupiterThesis}, where Jupiter shows that a pseudoconvex domain in $\mathbb{C}^n$ is a Runge domain if and only if it is polynomially convex. \begin{proof}[Proof of Lemma \ref{Lem2}] The crucial step in the proof is constructing a sequence of weight functions that will allow us to get the desired norm estimates. To that end, let us choose positive numbers $\delta,r_1,$ and $r_2$ so that $0<r-\delta=r_{1}<r_{2}<r$ and define a function $\psi$ as \[\psi(z)=-r_{2}^{2}+\sum_{j=1}^n|z_j-p_j|^{2}\] where $z=(z_1,\ldots,z_n)\in \mathbb{C}^n.$ Furthermore, we choose a smooth cut-off function $\chi\in C^{\infty}_{0}(B(p,r))$ such that $\chi \equiv 1 $ in a neighborhood of $\overline{B(p,r_{2})}.$ We note that $\psi$ is a continuous plurisubharmonic function on $\mathbb{C}^{n}$ that satisfies the following crucial property: $\psi(z)<0$ for $z\in B(p,r_{2})$ and $\psi(z)>0$ for $z\in \mathbb{C}^{n}\setminus \overline{B(p,r_{2})}.$ Since $\psi$ is bounded on $\Omega,$ the Hilbert spaces $L^{2}(\Omega)$ and $L^{2}(\Omega, e^{-k\psi})$ are equal as sets for all $k.$
Then H\"{o}rmander's Theorem implies that for every $k$ there exists $u_{k}\in L^{2}(\Omega)$ such that $\overline{\partial} u_{k}=f\overline{\partial} \chi$ with \begin{align}\label{Eqn1} \int_{\Omega}|u_{k}(z)|^{2}e^{-k\psi(z)}d\lambda(z)\leq C\int_{\Omega}|f(z)|^{2}\sum_{j=1}^{n}\left|\frac{\partial \chi(z)}{\partial \bar z_{j}}\right|^{2} e^{-k\psi(z)} d\lambda(z) \end{align} where $C$ is a positive real number that depends only on $\Omega.$ We note that $\psi <- r_2^2+r_1^2<0$ on $B(p,r_{1})$ and $\psi$ is strictly positive on a neighborhood of the support of the $\overline{\partial} \chi$. Hence the right hand side of \eqref{Eqn1} goes to zero as $k$ goes to infinity and we have \begin{align}\nonumber \int_{\Omega\cap B(p,r_{1})}|u_{k}(z)|^{2} d\lambda(z) &\leq \int_{\Omega}|u_{k}(z)|^{2}e^{-k\psi(z)}d\lambda(z) \\ \label{Eqn2}&\leq C \int_{\Omega}|f(z)|^{2}\sum_{j=1}^{n}\left| \frac{\partial \chi(z)}{\partial \bar z_{j}}\right|^{2} e^{-k\psi(z)}d\lambda(z). \end{align} Then depending on $\varepsilon$ and $\delta$ (and using \eqref{Eqn2}) we can choose $C_{\varepsilon,\delta}>0$ and $k$ so that $\|u_{k}\|_{L^{2}(\Omega(p,r_1))} \leq \varepsilon \|f\|_{L^{2}(\Omega(p,r_1))}$ and $\|u_{k}\|_{L^{2}(\Omega)} \leq C_{\varepsilon,\delta}\|f\|_{L^{2}(\Omega(p,r))}.$ Therefore, we can define $E_{\varepsilon,\delta}$ as $E_{\varepsilon,\delta}(f)=\chi f-u_{k}.$ \end{proof} Now we are ready to prove Theorem \ref{ThmMain}. \begin{proof}[Proof of Theorem \ref{ThmMain}] To simplify the notation in this proof we will denote the norm $\|.\|_{L^2(U)}$ by $\|.\|$ and the operator $H^U_{R_U(\phi)}$ by $H^U_{\phi}.$ We note that $\langle .,.\rangle $ denotes the inner product on $U$ and $A\lesssim B$ means that $A\leq cB$ for some constant $c$ that is independent of the parameters of interest and its value can change at every appearance. For $f\in A^{2}(U)$ we have \begin{align*} \| H^{U}_{\phi} (f)\|^{2}=&\langle \overline{\partial}^{*}N^U(f\overline{\partial}\phi),\overline{\partial}^{*}N^U(f\overline{\partial} \phi) \rangle \\ =&\langle f\overline{\partial}\phi ,N^U\overline{\partial}\dbar^{*}N^U(f\overline{\partial}\phi) \rangle \\ =&\langle f\overline{\partial}\phi, N^U(f\overline{\partial}\phi)\rangle. 
\end{align*} In the last equality above we used the facts that $N^U(\overline{\partial}\dbar^{*}+\overline{\partial}^{*}\overline{\partial})=I$ and $\overline{\partial} N^U \overline{\partial}=0.$ Now we will construct a smooth bounded plurisubharmonic function $\psi_{\varepsilon}$ that has a large Hessian near the boundary of the ball $B(p,r).$ Let $\gamma:\mathbb{R}\to\mathbb{R}$ be a smooth, non-decreasing, convex function such that $-1\leq \gamma(t)\leq 0$ for $t\leq 0,\gamma(0)=0,$ and $\gamma'(0)\geq 2.$ Furthermore, let us define \[\rho_{\varepsilon}(z)=\frac{1}{\varepsilon}\left(-r^2+\sum_{j=1}^n|z_j-p_j|^2\right)\] for $r,\varepsilon>0$ and $\psi_{\varepsilon} (z)= \gamma(\rho_{\varepsilon}(z)).$ Then one can check that $\psi_{\varepsilon}$ is a smooth plurisubharmonic function on $\mathbb{C}^n$ such that $-1\leq\psi_{\varepsilon}(z) \leq 0$ for $z\in B(p,r).$ Also, by continuity, there exists $\delta>0$ such that \[\sum_{j,k=1}^n \frac{\partial^2 \psi_{\varepsilon} (z)}{\partial z_j\partial \overline z_k} w_j\overline w_k \geq \frac{1}{\varepsilon}\sum_{j=1}^n|w_j|^{2}\] for $z\in K=\overline{B(p,r)\setminus B(p,r-\delta)}$ and $(w_1,\ldots,w_n)\in \mathbb{C}^{n}.$ Then (ii) in \cite[Corollary 2.13]{StraubeBook} implies that \begin{align} \nonumber \frac{1}{e \varepsilon}\int_{K\cap U}|h(z)|^{2}d\lambda(z) &\leq \sum_{j,k=1}^n \int_{U}e^{\psi_{\varepsilon}(z)} \frac{\partial^2 \psi_{\varepsilon} (z)}{\partial z_j\partial \overline z_k} h_j(z)\overline{h_k(z)}d\lambda(z)\\ \label{EqnMain}& \leq \|\overline{\partial} h\|^{2}+\|\overline{\partial}^{*}h\|^{2} \end{align} for $ h=\sum_{j=1}^nh_jd\overline z_j \in Dom(\overline{\partial})\cap Dom(\overline{\partial}^{*}) \subset L^{2}_{(0,1)}(U).$ Let $\chi\in C^{\infty}(\overline{B(p,r)})$ be such that $\chi\equiv 1$ on a neighborhood of $bB(p,r),$ and $\chi\equiv 0$ on $B(p,r-\delta).$ Then \begin{align*} \|H^{U}_{\phi} (f)\|^{2}\leq & |\langle f\overline{\partial}\phi, \chi N^U(f\overline{\partial}\phi)\rangle| +|\langle f\overline{\partial}\phi, (1-\chi)N^U(f\overline{\partial}\phi)\rangle| \\ \leq& \|f\overline{\partial}\phi\| \|\chi N^U(f\overline{\partial}\phi)\|+|\langle (1-\chi)f\overline{\partial}\phi, N^U(f\overline{\partial}\phi)\rangle|. \end{align*} Then \eqref{EqnMain} implies that \begin{align*} \|\chi N^U(f\overline{\partial} \phi)\|^2 &\lesssim \varepsilon \left(\|\overline{\partial} N^U(f\overline{\partial} \phi)\|^2+\|\overline{\partial}^{*} N^U(f\overline{\partial} \phi)\|^2\right)\\ & \lesssim \varepsilon \|f\|^2 \end{align*} for $ f\in A^2(U).$ Let us denote $\chi_{1}=1-\chi$ and choose $\widetilde{\chi} \in C^{\infty}_{0}(B(p,r))$ such that $0 \leq \widetilde{\chi} \leq 1 $ and $\widetilde{\chi}\equiv 1$ on the support of $\chi_1.$ Then Lemma \ref{Lem2} implies that there exists a bounded operator $E_{\varepsilon,\delta}: A^{2}(U)\to A^{2}(\Omega)$ such that $\|\widetilde{\chi}(R_UE_{\varepsilon,\delta}(f)-f)\|\leq \varepsilon \|f\|.$ Since $\delta$ depends on $\varepsilon,$ in the following calculation we will use the notation $E_{\varepsilon}=E_{\varepsilon,\delta}$ and $F_{\varepsilon}=E_{\varepsilon}(f).$ Let $M_{\varepsilon}$ denote the norm of the operator $E_{\varepsilon}.$ We note that in the following inequalities $\overline{\partial}^*_{\Omega}$ and $\overline{\partial}^*$ denote the Hilbert space adjoints of $\overline{\partial}$ on $\Omega$ and on $U$, respectively.
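For completeness, the Hessian bound on $\psi_{\varepsilon}$ used in \eqref{EqnMain} follows from a direct computation (a sketch added here for the reader; it is not part of the original argument). Since $\partial^2 \rho_{\varepsilon}/\partial z_j\partial\overline z_k=\delta_{jk}/\varepsilon$ and $\gamma''\geq 0,$ the chain rule gives \[\sum_{j,k=1}^n \frac{\partial^2 \psi_{\varepsilon}}{\partial z_j\partial \overline z_k} w_j\overline w_k =\gamma''(\rho_{\varepsilon})\Big|\sum_{j=1}^n\frac{\partial \rho_{\varepsilon}}{\partial z_j}w_j\Big|^{2} +\frac{\gamma'(\rho_{\varepsilon})}{\varepsilon}\sum_{j=1}^n|w_j|^{2} \geq \frac{\gamma'(\rho_{\varepsilon})}{\varepsilon}\sum_{j=1}^n|w_j|^{2}.\] On the sphere $|z-p|=r$ we have $\rho_{\varepsilon}=0$ and $\gamma'(0)\geq 2,$ so by continuity $\gamma'(\rho_{\varepsilon})\geq 1$ on a closed shell $K=\overline{B(p,r)\setminus B(p,r-\delta)}$ for some $\delta>0,$ which yields the stated lower bound $\frac{1}{\varepsilon}\sum_{j}|w_j|^{2}.$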
A $(0,1)$-form $f$ is in the domain of $\overline{\partial}^*$ if there exists a square integrable function $g$ such that $\langle f,\overline{\partial} h\rangle=\langle g,h\rangle $ for all $h$ in the domain of $\overline{\partial}.$ Furthermore, if a $(0,1)$-form $f=\sum_{j=1}^nf_jd\overline{z}_j$ is in the domain of $\overline{\partial}^*$, then $\overline{\partial}^*f=-\sum_{j=1}^n \frac{\partial f_j}{\partial z_j}$ in the sense of distributions (see Chapter 4.2 in \cite{ChenShawBook} for more information). The fact that $\overline{\partial}^*N$ is a solution operator for $\overline{\partial}$ (that is, $\overline{\partial}\dbar^*Nf=f$ if $f$ is a $\overline{\partial}$-closed form) implies that $ F_{\varepsilon} \overline{\partial} \phi=\overline{\partial}(F_{\varepsilon}\phi)=\overline{\partial} \overline{\partial}^*_{\Omega}N^{\Omega}F_{\varepsilon} \overline{\partial}\phi.$ We will use this equality, as well as the Cauchy-Schwarz inequality, to pass from the first line to the second line below. \begin{align*} |\langle \chi_{1}(f\overline{\partial}\phi), N^U(f\overline{\partial}\phi) \rangle| &\leq |\langle \chi_{1} (f-F_{\varepsilon})\overline{\partial}\phi, N^U(f\overline{\partial}\phi)\rangle| +|\langle \chi_{1} F_{\varepsilon}\overline{\partial} \phi, N^U(f\overline{\partial}\phi)\rangle| \\ &\lesssim \|\chi_{1} (f-F_{\varepsilon})\|\|f\| +|\langle \chi_{1} \overline{\partial}\dbar^{*}_{\Omega}N^{\Omega} (F_{\varepsilon}\overline{\partial}\phi), N^U(f\overline{\partial}\phi)\rangle| \\ &\lesssim \|\widetilde{\chi} (f-F_{\varepsilon})\|\|f\|+ |\langle \overline{\partial}^{*}_{\Omega} N^{\Omega} (F_{\varepsilon}\overline{\partial}\phi),\overline{\partial}^{*} \chi_{1} N^U(f\overline{\partial}\phi)\rangle| \\ &\lesssim \varepsilon\|f\|^{2} + \widetilde{C}_{\varepsilon} \|\overline{\partial}^{*}_{\Omega}N^{\Omega}(F_{\varepsilon}\overline{\partial}\phi) \|_{L^{2}(\Omega)}\|f\|, \end{align*} where $\widetilde{C}_{\varepsilon}$ is a constant that is independent of $f.$ Now we will use the facts that $H_{\phi}^{\Omega}$ is compact on $A^2(\Omega)$ and $\|F_{\varepsilon}\|_{L^{2}(\Omega)}\leq M_{\varepsilon}\|f\|_{L^{2}(U)}.$ Lemma \ref{LemCompactH} implies that for any $\varepsilon'>0 $ there exists a compact operator $K_{\varepsilon'}$ on $X_{\phi}(\Omega)$ such that \[\|\overline{\partial}^{*}_{\Omega}N^{\Omega}(F_{\varepsilon}\overline{\partial}\phi) \|_{L^{2}(\Omega)} \lesssim \varepsilon' \|F_{\varepsilon}\|_{L^{2}(\Omega)}+\|K_{\varepsilon'} \Pi_{\overline{\partial}\phi} (F_{\varepsilon})\|_{L^{2}(\Omega)},\] where $\Pi_{\overline{\partial}\phi}:A^2(\Omega)\to X_{\phi}(\Omega)$ denotes the (bounded) multiplication operator by $\overline{\partial}\phi.$ That is, $\Pi_{\overline{\partial}\phi} h = h\overline{\partial}\phi$ for $h\in A^2(\Omega)$. Therefore, for $f\in A^2(U)$ we have the following inequality \begin{align*} \|H^{U}_{\phi} (f)\|^{2} \lesssim& \left( \varepsilon+\sqrt{\varepsilon} + \varepsilon'M_{\varepsilon}\widetilde{C}_{\varepsilon} \right)\|f\|^2+ \widetilde{C}_{\varepsilon} \| f\| \|K_{\varepsilon'}\Pi_{\overline{\partial}\phi}E_{\varepsilon}(f) \|_{L^2(\Omega)} \\ \leq &\left( \varepsilon+\sqrt{\varepsilon} + \varepsilon'M_{\varepsilon}\widetilde{C}_{\varepsilon}+ \varepsilon'\widetilde{C}_{\varepsilon}\right)\|f\|^2\\ &+ \left(\widetilde{C}_{\varepsilon} +\frac{\widetilde{C}_{\varepsilon}}{\varepsilon'}\right) \|K_{\varepsilon'}\Pi_{\overline{\partial}\phi}E_{\varepsilon}(f)\|^2_{L^2(\Omega)}.
\end{align*} For any $0<\varepsilon<1$ there exists $\varepsilon'>0$ so that $\varepsilon+\sqrt{\varepsilon} + \varepsilon'M_{\varepsilon}\widetilde{C}_{\varepsilon} \leq 2\sqrt{\varepsilon}.$ Then the above inequality, combined with the fact that $x^2+y^2\leq (x+y)^2$ for $x,y\geq 0,$ implies the following: for any $0<\varepsilon<1$ there exists a compact operator $K_{\varepsilon}=(\widetilde{C}_{\varepsilon}+\widetilde{C}_{\varepsilon}/ \varepsilon')^{1/2}K_{\varepsilon'}\Pi_{\overline{\partial}\phi}E_{\varepsilon}$ such that \[\|H^U_{\phi}(f)\| \lesssim \varepsilon^{1/4} \|f\| + \|K_{\varepsilon}(f)\| \text{ for } f\in A^2(U).\] Now Lemma \ref{LemCompEstimate} implies that $H^{U}_{\phi}$ is compact on $A^{2}(U).$ \end{proof} \begin{proof}[Proof of Proposition \ref{PropInC}] Since functions that are smooth up to the boundary of $\Omega$ are dense in $C(\overline{\Omega}),$ and the sequence $\{H^{\Omega}_{\psi_n}\}$ converges to $H^{\Omega}_{\psi}$ in the operator norm whenever $\{\psi_n\}$ converges to $\psi$ uniformly on $\overline{\Omega},$ it suffices to prove that $H^{\Omega}_{\psi}$ is compact whenever $\psi\in C^{\infty}(\overline{\Omega}).$ Let us define \[S_{\psi} (f)(z)= -\frac{1}{\pi }\int_{\Omega} \frac{\frac{\partial\psi}{\partial\overline\xi}(\xi) f(\xi)}{\xi-z}d\lambda(\xi)\] for $f\in A^2(\Omega)$ and $z\in \Omega.$ We will show that $H^{\Omega}_{\psi}$ is compact on $ A^2(\Omega)$ by showing that $S_{\psi}$ is a limit of compact operators (in the operator norm) and that $ S_{\psi}(f)$ solves $\overline{\partial} u =f\overline{\partial}\psi$ (because $H^{\Omega}_{\psi} =S_{\psi} - P^{\Omega}S_{\psi}$). To that end, for $\varepsilon>0$ let $\chi_{\varepsilon}$ be a smooth cut-off function on $\mathbb{R}$ such that $\chi_{\varepsilon}\equiv 1$ on a neighborhood of the origin and $\chi_{\varepsilon}(t)=0$ for $|t|\geq \varepsilon.$ Then $S_{\psi}=A^{\varepsilon}_{\psi}+B^{\varepsilon}_{\psi}$ where \begin{align*} A^{\varepsilon}_{\psi}(f)(z)&=-\frac{1}{\pi }\int_{\Omega}\frac{\chi_{\varepsilon}(|\xi-z|)\frac{\partial\psi}{\partial \overline\xi}(\xi) f(\xi)}{\xi-z}d\lambda(\xi)\\ B^{\varepsilon}_{\psi}(f)(z)&=-\frac{1}{\pi}\int_{\Omega} \frac{(1-\chi_{\varepsilon} (|\xi-z|)) \frac{\partial\psi}{\partial \overline\xi}(\xi) f(\xi)}{\xi-z}d\lambda(\xi). \end{align*} Then the operator $B^{\varepsilon}_{\psi}$ is Hilbert-Schmidt and, in particular, compact because the kernel \[-\frac{(1-\chi_{\varepsilon}(|\xi-z|))\frac{\partial\psi}{\partial \overline\xi}(\xi)}{\pi(\xi-z)}\] is square integrable on $\Omega\times \Omega.$ Next we will show that $A^{\varepsilon}_{\psi}$ has a small norm. Let $\widehat{f}$ denote the trivial extension of $f$. That is, $\widehat{f}=f$ on $\Omega$ and $\widehat{f}=0$ otherwise. Since $\frac{\partial\psi}{\partial \overline\xi}$ is continuous on $\overline{\Omega}$ and $\Omega$ is bounded, using polar coordinates, we get \[ |A^{\varepsilon}_{\psi}(f)(z)|\lesssim \int_{\mathbb{C}} \frac{|\chi_{\varepsilon}(|\xi|)\widehat{f}(z+\xi)|}{|\xi|} d\lambda(\xi) \lesssim \int_0^{2\pi}\int_0^{\varepsilon}|\widehat{f}(z+re^{i\theta})| dr d\theta.\] Then the Cauchy-Schwarz inequality together with Fubini's theorem yields \begin{align*} \|A^{\varepsilon}_{\psi}(f)\|^2\lesssim 2\pi \varepsilon \int_0^{2\pi}\int_0^{\varepsilon}\int_{\Omega}|\widehat{f}(z+re^{i\theta})|^2d\lambda(z) dr d\theta \leq 4\pi^2\varepsilon^2\|f\|^2.
\end{align*} Hence, $\|A^{\varepsilon}_{\psi}\|\lesssim \varepsilon$ and $S_{\psi}$ is a limit (in the operator norm) of the sequence $\{B^{1/k}_{\psi}\}$ of compact operators. Next we want to show that $\overline{\partial} S_{\psi}(f) =f\overline{\partial}\psi.$ Let $\{f_n\}$ be a sequence of functions that are smooth on $\overline{\Omega}$ and converge to $f$ in $L^2(\Omega).$ Then the Cauchy integral formula with remainder (see \cite[Theorem 2.1.2]{ChenShawBook}) shows that $\overline{\partial} S_{\psi}(f_n)=f_n\overline{\partial}\psi.$ On the other hand, $\{\overline{\partial} S_{\psi}(f_n)\}$ converges weakly to $\overline{\partial} S_{\psi}(f)$ and $\{f_n\overline{\partial}\psi\}$ converges to $f\overline{\partial}\psi$ in $L^2(\Omega).$ Therefore, $\overline{\partial} S_{\psi}(f) =f\overline{\partial}\psi $ for $f\in A^2(\Omega).$ \end{proof} \section*{Acknowledgement} I am indebted to Daniel Jupiter for bringing the proof of \cite[Theorem VI.3]{JupiterThesis} to my attention, to Mehmet \c{C}elik and Trieu Le for fruitful conversations, to my advisor Emil Straube for the proof of Proposition \ref{PropInC}, and to the referee for valuable comments.
\section{Introduction} Relativistic flows are likely to be encountered in accretion discs around compact objects like neutron stars and black holes, astrophysical jets, Gamma Ray Bursts (GRBs), etc. A fluid is said to be relativistic if the bulk speed of the plasma is comparable to the speed of light ($c$), or if its thermal energy is comparable to or greater than its rest energy --- a fancy way of saying that the random or thermal speed of the constituent particles of the fluid becomes comparable to the speed of light. This brings up the issue of the equation of state (EoS) of the fluid. The issue of the relativistic equation of state was raised long ago (Chandrasekhar 1938, hereafter C38; Taub 1948; Synge 1957, hereafter S57), and it was used for theoretical calculations too (Blumenthal \& Mathews 1976, hereafter BM76; Fukue 1987), although it did not become very popular in the community. Later, Falle \& Komissarov (1996) showed that the relativistic EoS of the form presented by C38 and S57 is computationally expensive, which led Mignone \etal (2005) to use the EoS of the form presented by BM76. Ryu \etal (2006, hereafter RCC06) proposed a new EoS, which is very close to the EoS by C38 and S57, better than the EoS proposed by BM76, and which can be efficiently implemented in a simulation code. However, these EoS are for single-species fluids. Chattopadhyay (2008; hereafter C08) and Chattopadhyay \& Ryu (2009; hereafter CR09) then extended the single-species EoS to a multi-species EoS, and analytically showed that the accretion solutions around black holes are indeed dependent on the composition of the flow. More interestingly, CR09 showed that electron-positron jets are the least relativistic when compared with flows containing protons. The most relativistic flow is the one with composition parameter (defined as the ratio of proton to electron number density) $\xi\sim0.24$. Later, Chattopadhyay \& Chakrabarti (2011; hereafter CC11) showed that fluids containing protons can produce accretion shocks, while an entirely leptonic fluid ($\xi=0$) is not able to form accretion shocks. Although it is difficult to envisage an accretion flow composed entirely of leptons, \ie electrons and positrons, from close to the horizon to infinity, our main intention is to show that --- (1) the fluid behaviour does depend on the composition, and (2) it is erroneous to assume that the most relativistic fluid is the lightest one, that is, the pair plasma. In this paper, therefore, we want to compute the radiation produced by matter accreting onto a black hole, and to show that the spectrum also depends on the composition of the fluid even if the outer boundary condition is the same. For that purpose we have developed a general relativistic Monte-Carlo code. \section{Governing Equations} We assume a non-rotating black hole described by the Schwarzschild metric \begin{eqnarray}\label{} ds^2 & = & -\left(1-\frac{2GM_B}{c^2r} \right)c^2dt^2 \\ \nonumber &+ & \left(1-\frac{2GM_B}{c^2r}\right)^{-1}dr^2 +r^2d{\theta}^2+r^2\sin^2{\theta}d\phi^2, \end{eqnarray} where $G$, $M_B$ and $c$ are the universal gravitational constant, the mass of the central black hole and the speed of light, respectively, and $t,~r,~\theta,~\phi$ are the usual four coordinates.
\begin{equation}\label{} T^{\mu \nu}_{; \nu}=-T^{\mu \nu}_{R;\nu} ~~~~ \mbox{and} ~~~~ (nu^{\nu})_{; \nu}=0, \end{equation} where the energy-momentum tensor of the fluid is $T^{\mu \nu}=(e+p)u^{\mu}u^{\nu}+pg^{\mu \nu}$, $T^{\mu \nu}_R$ is the radiation stress tensor, and $n$ is the total particle number density of the fluid. The fluid equations are first solved assuming there is no radiation, and the solution is then fed into the general relativistic Monte Carlo code. The Monte Carlo code estimates the bremsstrahlung emission from each grid point; the emitted photons then travel and impinge on electrons at other locations. If the energy of a photon is less than the kinetic or thermal energy of the electron, inverse-Comptonization takes place, \ie the photon takes energy away from the electron; if the photon energy is higher, the opposite happens. The energy density of the fluid is given by (see C08, CR09) \begin{eqnarray} e &=& n_{e^-}m_ec^2f \\ f &=& (2-\xi)\left[1+\Theta\left(\frac{9\Theta+3}{3\Theta+2}\right)\right] \\ \nonumber &+& \xi\left[\frac{1}{\eta}+\Theta\left(\frac{9\Theta+3/\eta}{3\Theta+2/\eta} \right)\right], \end{eqnarray} where $\xi=n_{p^+}/n_{e^-}$ is the composition parameter, $n_{e^-}$ is the electron number density, the mass density is $\rho=n_{e^-}m_e\left\{ 2-\xi \left(1-1/\eta \right)\right\}$, $\eta=m_e/m_{p^+}$, $\Theta=kT/m_ec^2$, and the pressure is $p=2n_{e^-}kT$. For radial and adiabatic flow in steady state, the equation of motion simplifies to the form given in CR09. The initial configuration of the fluid is a semi-analytical solution for adiabatic radial inflow with Bernoulli parameter ${\cal E}=0.001$ in units of $c^2$, assuming $M_B=10M_{\odot}$ and a particle flux rate ${\dot N}=1.724\times 10^{42}$~s$^{-1}$. This value of ${\dot N}$ is the particle flux corresponding to an accretion rate of $0.1{\dot M}_{\rm Edd}$ for an electron-proton (\ep) fluid, \ie a fluid with $\xi=1$. All types of fluid start with these values of ${\dot N}$ and ${\cal E}$.
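As an illustration, the EoS above can be evaluated as in the following Python sketch (this is not the simulation code; the CGS values of the constants and the function name are our own choices):
\begin{verbatim}
# Illustrative sketch of the multi-species EoS, Eqs. (3)-(4);
# not the simulation code.  CGS units; Theta = kT / (m_e c^2).

M_E, M_P, C = 9.109e-28, 1.673e-24, 2.998e10   # g, g, cm/s
ETA = M_E / M_P                                # eta = m_e / m_p

def eos(n_e, xi, theta):
    """Return energy density e, mass density rho and pressure p."""
    f = (2.0 - xi) * (1.0 + theta * (9.0 * theta + 3.0)
                      / (3.0 * theta + 2.0)) \
        + xi * (1.0 / ETA + theta * (9.0 * theta + 3.0 / ETA)
                / (3.0 * theta + 2.0 / ETA))
    e = n_e * M_E * C**2 * f                          # Eq. (3)
    rho = n_e * M_E * (2.0 - xi * (1.0 - 1.0 / ETA))  # mass density
    p = 2.0 * n_e * theta * M_E * C**2                # p = 2 n_{e-} kT
    return e, rho, p

# e-p fluid (xi = 1) at Theta = 1 and n_e = 1e18 cm^-3:
print(eos(1e18, 1.0, 1.0))
\end{verbatim}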
In Figs. 1a and 1b, the total lepton number density $(n_{e^-}-n_{e^+})$ (long dashed), the temperature $T$ (dashed) and the radial three-velocity $v$ (solid) are plotted against $r$ in log-log scale, for two kinds of fluid: $\xi=0$ or \ee (a) and $\xi=1$ or \ep (b). It is to be noted that the positron number density ($n_{e^+}$) is different for these two fluids: for $\xi=0$, $n_{e^+}=n_{e^-}$ and $n_{p^+}=0$, while for $\xi=1$, $n_{e^+}=0$ and $n_{p^+}=n_{e^-}$. \begin{figure}[h!] \vspace{0.0cm} \centering \includegraphics[height=4.0cm]{fig1a.eps} \includegraphics[height=4.0cm]{fig1b.eps} \caption{ Total lepton number density $(n_{e^-}-n_{e^+})$ (long dashed), temperature $T$ (dashed) and radial three-velocity $v$ (solid), plotted against $r$ in log-log scale, for two kinds of fluid: $\xi=0$ or \ee (a) and $\xi=1$ or \ep (b). Both fluids start with ${\dot N}=1.724\times 10^{42}$~s$^{-1}$ and ${\cal E}=0.001$ in units of $c^2$.} \end{figure} Starting with these initial values of the fluid, we calculate the bremsstrahlung seed photon distribution. The spectral forms of the bremsstrahlung emission from $e^{\pm}-p^+$, $e^{\pm}$ and \ee pairs are (Gould 1980) \begin{eqnarray} \left(\frac{dE}{dVdtd\epsilon}\right)_{e^{\pm}-p^+}&=&\frac{16}{3}{\sqrt{\frac{2}{\pi}}} \alpha^3\lambda^2c(2-\xi)\xi n^2_{e^-}\Theta^{-1/2} \\ \nonumber & & e^{-a}K_0(a)\left[1+\frac{\epsilon}{4m_ec^2}f(a) \right], \\ \left(\frac{dE}{dVdtd\epsilon}\right)_{e^{\pm}}&=&\frac{128}{15}\alpha^3\lambda^2[1+(1-\xi)^2]n^2_{e^-} \\ \nonumber && {\sqrt{\frac{kT}{m_e\pi}}}ae^{-a}K_0(a)\Psi(a), \\ \left(\frac{dE}{dVdtd\epsilon}\right)_{e^--e^+}&=&\frac{128}{15}\alpha^3\lambda^2(1-\xi)n^2_{e^-} \\ \nonumber && {\sqrt{\frac{kT}{m_e\pi}}}ae^{-a}K_0(a)\Psi(a), \end{eqnarray} where $a=\epsilon/2kT$, $\epsilon=h\nu$, $\alpha=e^2/\hbar c$, $\lambda=\hbar/m_ec$, $f(a)=1/4a+3K_1(a)/K_0(a)+a[1-K_2(a)/K_0(a)]$, and $\Psi(a)=3/4a+K_1(a)/K_0(a)+c_2e^{-a}/16K_0(a)$. The $K$'s are modified Bessel functions of the second kind. There are at least three reference frames involved: the local comoving frame, the local rest frame and the Schwarzschild coordinate frame. The three-velocities are measured in the local fixed frame, and the cooling rates are calculated in the comoving frame. The photons, however, move in a curved geometry and follow the geodesic equations. The coordinate transformations between the locally fixed frame ($x^{\hat \beta}$) and the coordinate frame ($x^{\mu}$) are given by (Park 2006) \begin{eqnarray*} \frac{\partial}{\partial {\hat t}}& =& \frac{1}{(1-2/r)^{1/2}}\frac{\partial}{\partial t} \\ \frac{\partial}{\partial {\hat r}}& =& (1-2/r)^{1/2}\frac{\partial}{\partial r} \\ \frac{\partial}{\partial {\hat \theta}}& =& \frac{1}{r}\frac{\partial}{\partial \theta} \\ \frac{\partial}{\partial {\hat \phi}}& =& \frac{1}{r\sin\theta}\frac{\partial}{\partial \phi} \end{eqnarray*} and the transformation between the local fixed frame ($x^{\hat \beta}$) and the comoving frame ($x^{\hat \alpha}_{co}$) is the Lorentz transformation \begin{eqnarray*} \frac{\partial}{\partial x^{\hat \alpha}_{co}}=\Lambda^{\hat \beta}_{\hat \alpha}(v)\frac{\partial}{\partial x^{\hat \beta}}, \end{eqnarray*} where ${\hat \alpha},~{\hat \beta}$ denote tetrad indices and $\Lambda^{\hat \beta}_{\hat \alpha}(v)$ are the components of the Lorentz transformation. Photons in strong gravity move along curved trajectories, following the geodesic equations given by Weinberg (1972), which can be written as \begin{eqnarray} && \frac{d^2r}{dt^2}-\frac{3}{2}\frac{1}{r(r-1)}\left(\frac{dr}{dt}\right)^2-(r-1)\left(\frac{d\theta}{dt}\right)^2 \\ \nonumber && -(r-1)\sin^2\theta \left(\frac{d\phi}{dt}\right)^2+\frac{1}{2r(r-1)}=0 \\ && \frac{d^2\theta}{dt^2}+\frac{2r-3}{r(r-1)}\frac{dr}{dt}\frac{d\theta}{dt}-\sin\theta \cos\theta \left(\frac{d\phi}{dt}\right)^2 =0 \\ && \frac{d^2\phi}{dt^2}+\frac{2r-3}{r(r-1)}\frac{dr}{dt}\frac{d\phi}{dt}+2 \cot\theta \frac{d\theta}{dt} \frac{d\phi}{dt}=0 \end{eqnarray} with $r$ in units of $r_g=2GM_B/c^2$ and $t$ in units of $r_g/c$. Moreover, the optical depth in curved space for a medium moving in the radial direction is modified to \begin{eqnarray} d\tau=\sigma\gamma \{(2-\xi)n_{e^-}\}(1-vn_r){\sqrt{(1-1/r)}}dt, \end{eqnarray} where $\gamma$, $\sigma$ and $n_r$ are the Lorentz factor, the Klein-Nishina cross-section and the direction cosine of the photon, respectively. Equations 5--11 are implemented in the Monte-Carlo code of Ghosh \etal (2011) and Garain \etal (2012) to make the code general relativistic. Therefore, not only does the fluid move in curved space, but the photons do so as well.
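To illustrate the photon transport, the geodesic equations (8)--(10) can be integrated in the equatorial plane ($\theta=\pi/2$) as in the sketch below (not the Monte-Carlo code itself). The emission radius and launch angle are arbitrary choices of ours, and the initial rates are fixed by the null condition of the Schwarzschild metric:
\begin{verbatim}
# Schematic equatorial photon geodesic integration, Eqs. (8)-(10);
# r in units of r_g = 2 G M_B / c^2, t in units of r_g / c.
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(t, y):
    r, rdot, phi, phidot = y
    rddot = (1.5 / (r * (r - 1.0))) * rdot**2 \
            + (r - 1.0) * phidot**2 - 0.5 / (r * (r - 1.0))
    phiddot = -(2.0 * r - 3.0) / (r * (r - 1.0)) * rdot * phidot
    return [rdot, rddot, phidot, phiddot]

r0, psi = 10.0, np.deg2rad(60.0)   # emission radius, launch angle
y0 = [r0,
      (1.0 - 1.0 / r0) * np.cos(psi),              # dr/dt (null)
      0.0,
      np.sqrt(1.0 - 1.0 / r0) * np.sin(psi) / r0]  # dphi/dt (null)

def hit_horizon(t, y):             # stop the ray near the horizon
    return y[0] - 1.01
hit_horizon.terminal = True

sol = solve_ivp(geodesic_rhs, (0.0, 200.0), y0,
                events=hit_horizon, max_step=0.1)
print(sol.y[0, -1], sol.y[2, -1])  # final r and phi
\end{verbatim}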
The total cooling/heating of the fluid is $L=B+C$, where $L$ is the net cooling term, $B$ is the total bremsstrahlung loss and $C$ is the Comptonization term; $C$ is negative if there is heating and positive if there is cooling. $C$ has been calculated following Pozdnyakov \etal (1983). If $\delta E$ is the energy radiated per unit volume from a grid cell ($C$, $B$ or $L$), the new energy density is $e_{n}=e_{old}-\delta E$, which results in a new temperature, since $e_n=n_{e^-}m_ec^2f(\xi,\Theta_{\rm n})$ leads to \begin{eqnarray} a_1\Theta^3_{\rm n}+a_2\Theta^2_{\rm n}+a_3\Theta_{\rm n}+a_4=0, \end{eqnarray} where $a_1=54\eta$, $a_2=9[2(\eta+2)-\eta(2-\xi+\xi/\eta)(e_{n}-\rho c^2)/(\rho c^2)-\xi(1-\eta)]$, $a_3=6[2-(2-\xi+\xi/\eta)(e_{n}-\rho c^2)(1+\eta)/(\rho c^2)]$, and $a_4=-4(2-\xi+\xi/\eta)(e_{n}-\rho c^2)/(\rho c^2)$. The code updates the non-dimensional temperature of the fluid, $\Theta_{\rm n}$, iterating until it converges. \section{Result and Discussion} \begin{figure}[h!] \vspace{0.0cm} \centering \includegraphics[height=9.5cm,angle=-90]{fig2.eps} \caption{Two photons emerging from the outer edge of the spherically accreting cloud at S: one escapes without scattering (blue), the other (magenta) scatters off an electron, changes direction and is absorbed by the black hole (at $+$). The arrows show the velocity field of the spherical accretion.} \end{figure} In Fig. 2, we show the trajectories of two photons originating from the same location S. One photon escapes without scattering (blue), while the other scatters (magenta), changes direction and is captured by the black hole at the centre (location $+$). The arrowheads show the velocity field, \ie $v$ as shown in Fig. 1a. In the Schwarzschild metric, the trajectories of the photons, which follow the geodesic equations, are curved. It is to be noted that the seed photons generated depend on $\xi$, as well as on the temperature and other relevant quantities. If the fluid is \ep ($\xi=1$), the contribution of Eq. (7) is zero, while for an \ee fluid ($\xi=0$) the contribution of Eq. (5) is zero. \begin{figure}[h!] \vspace{0.0cm} \centering \includegraphics[height=8.5cm,angle=-90]{fig3.eps} \caption{Convergence of the temperature of the \ee fluid, \ie the fluid with $\xi=0$.} \end{figure} In Fig. 3, we show the convergence of the temperature, \ie the calculation of the new temperature at each iteration. The initial temperature (red) decreases as bremsstrahlung and Comptonization are taken into account. At large distances from the black hole the radiative loss is small, but at $r\lsim 10$ Schwarzschild radii (\ie $r_g=2GM_B/c^2$), cooling becomes important. However, close to the horizon the temperature rises, since the cooling time scale is much larger than the infall time scale at those distances. The temperature converges after a few iterations, and the final spectrum is obtained from the converged temperature (black).
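The temperature update amounts to finding the physical root of the cubic above; schematically (for illustration only, assuming a positive thermal energy $e_n-\rho c^2>0$):
\begin{verbatim}
# Illustrative solver for Eq. (12); returns the new dimensionless
# temperature Theta_n (the smallest positive real root is taken
# if several occur).
import numpy as np

def new_temperature(e_new, rho, xi, eta, c=2.998e10):
    x = (e_new - rho * c**2) / (rho * c**2)  # thermal / rest energy
    g = 2.0 - xi + xi / eta
    a1 = 54.0 * eta
    a2 = 9.0 * (2.0 * (eta + 2.0) - eta * g * x - xi * (1.0 - eta))
    a3 = 6.0 * (2.0 - g * x * (1.0 + eta))
    a4 = -4.0 * g * x
    roots = np.roots([a1, a2, a3, a4])
    real = roots[np.isreal(roots)].real
    return real[real > 0.0].min()
\end{verbatim}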
\begin{figure}[h!] \vspace{0.0cm} \centering \includegraphics[height=8.5cm,angle=-90]{fig4.eps} \caption{Comparison of spectra (power in arbitrary units) from three different fluids: $\xi=0$ (red), $\xi=0.5$ (blue), and $\xi=1$ (magenta), for the same outer boundary condition (see text).} \end{figure} In Fig. 4, we present the combined spectra due to bremsstrahlung and Comptonization for fluids of different composition, \eg \ee or $\xi=0$ (red), \ep or $\xi=1$ (magenta), and $\xi=0.5$ (blue). Clearly, the \ee fluid, which was the coldest and the least relativistic, is also the least luminous and the least energetic in terms of the spectrum. This is to be expected, since the temperature of the \ee fluid is much lower than that of fluids containing protons; therefore, inverse-Comptonization produces much less energetic photons. In this paper, we have compared flows starting with the same outer boundary condition, namely, at $r_{\rm out}=500~r_g$, ${\cal E}=0.001~c^2$ and ${\dot N}= 1.724\times 10^{42}$~s$^{-1}$. We have not injected flows with the same accretion rate, because in that case the number density of particles at the outer boundary for the \ee fluid would be much higher than for fluids with $\xi=0.5$ and the \ep fluid. \section{Conclusion} It has been shown earlier (C08, CR09, CC11) that the solution of relativistic, transonic, and adiabatic accretion depends on the composition of the plasma. More interestingly, contrary to expectation, the \ee fluid was found to be the least relativistic fluid, and we conjectured that a purely leptonic flow will be less luminous and of lower energy. Since the \ee flow is the least relativistic and the slowest, its density should be higher; this increases the optical depth, and therefore the flow should be less luminous. Moreover, the temperature of the \ee fluid is very low too, so higher energies will not be available through inverse-Comptonization. Chattopadhyay \etal (2013) showed that this is indeed true for a toy model of radiation, where the seed photons were mono-energetic and artificially injected. In this paper, we present preliminary solutions of the total spectra from radial accretion onto black holes by considering realistic seed photons (bremsstrahlung) and the Comptonization of those photons, and we vindicate the conclusions of Chattopadhyay \etal (2013). It may be noted, though, that an \ee fluid cannot completely describe an accretion flow from infinity to the horizon; it is shown here as an extreme case, the realistic cases being the \ep and $\xi=0.5$ fluids. In view of the results of CC11, the extension of these methods to accretion discs, in the presence of all kinds of cooling processes, is very interesting; we are working on it, and the results will be reported elsewhere. \textbf{Acknowledgements} We acknowledge the Central Department of Physics, Tribhuvan University, for providing various support during the conference. \\
\section{Method} \label{sec:method} \paragraph{Setup.} We consider an NLP task (sentiment analysis) consisting of data $\mathcal{X}$ and labels $\mathcal{Y}$ (positive, negative). There exist two different distributions, called the source domain $\mathcal{D_S}$ and the target domain $\mathcal{D_T}$, over $\mathcal{X} \times \mathcal{Y}$. Unsupervised domain adaptation (\textsc{uda}{}) considers a model $\mathcal{C}$ that receives labeled input samples $\mathcal{X_S}: (x_s, y_s)_{s=1}^{n_s} \sim \mathcal{D}_\mathcal{S}$ and unlabeled input $\mathcal{X_T}: (x_t)_{t=1}^{n_t} \sim \mathcal{D}_\mathcal{T}$. The goal of \textsc{uda}{} is to learn the model $\mathcal{C}$ such that it performs well on the NLP task in the target domain $\mathcal{D_T}$. The popular approach in \textsc{uda}{} is to learn representations that are invariant across the two domains while still having sufficient power to perform well in the source domain \cite{dann, dsn}. The theory of domain divergence \cite{ben2010theory} shows that the error in the target domain is bounded by the error in the source domain plus the divergence between the domains. An unsupervised domain adaptation method thus consists of two components: the reduction of the divergence measure and a classifier for the source domain. A new classifier must be learned for every pair of source-target domains, and the method fine-tunes a large number of parameters. \textsc{UDApter}{} makes unsupervised domain adaptation more parameter-efficient (cf. \Cref{sec:two-step-method}, \Cref{sec:joint-method}) using adapters. We follow the framework proposed by \citet{pmlr-v97-houlsby19a}, where small bottleneck layers are added to the transformer layers and only the adapter parameters are fine-tuned while the other parameters are kept frozen, and propose the following. \subsection{Two-Step Domain and Task Adapters} \label{sec:two-step-method} \paragraph{Domain Adapters.} To learn domain-invariant representations, we first train a domain adapter. The adapter architecture follows the work of \citet{pfeiffer-etal-2021-adapterfusion}: a simple down-projection followed by an up-projection. In a transformer layer $l$, let $h_l$ be the hidden representation of the \textbf{Add \& Norm} layer and let $r_l$ be the representation of the \textbf{Feed-Forward} layer (\Cref{fig:domain-adapter}); the adapter then makes the following transformation and calculates a new hidden representation. \begin{equation} dom_{l} = W_{up} \cdot f(W_{down} \cdot h_l) + r_l \end{equation} \noindent where $f$ is a nonlinear function (e.g., \textsc{ReLU}), $W_{down} \in \mathbb{R}^{h \times d}$ projects the hidden representations down to a lower dimension, $W_{up} \in \mathbb{R}^{d \times h}$ projects them back to a higher dimension, and $d \ll h$. We pass a sample from the source domain $(x_s^{src}) \sim \mathcal{D_S}$ and one from the target $(x_t^{trg}) \sim \mathcal{D_T}$ through the adapters in layer $l$ and obtain their adapter representations $dom^{src}_l$ and $dom^{trg}_l$, respectively. We then reduce the divergence between these representations. \begin{equation} \Delta_l = div(dom_l^{src}, dom_l^{trg}) \end{equation} Here $div(\cdot)$ is a divergence function such as correlation alignment (CORAL) \cite{coral}, central moment discrepancy (CMD) \cite{cmd} or multi-kernel maximum mean discrepancy (MK-MMD) \cite{mmd, dsn}. In this work, we use MK-MMD for all of our experiments, since it performed best\footnote{CMD and CORAL also perform similarly to MK-MMD}. Similar ideas are used to adapt representations in computer vision models \cite{deepadaptationnetworks, deepcoral}. The final divergence loss considers all $L$ layers. \begin{equation} \mathcal{L}_{div} = \sum_{l=1}^{L} \Delta_l \end{equation}
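To make the computation concrete, the following is a minimal PyTorch-style sketch of the bottleneck adapter of Eq. (1) and the layer-wise divergence of Eq. (2). It is an illustration, not our released implementation: a single Gaussian-kernel MMD stands in for MK-MMD, representations are assumed to be pooled to one vector per example, and all names are our own.
\begin{verbatim}
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """dom_l = W_up . f(W_down . h_l) + r_l   (Eq. 1)."""
    def __init__(self, hidden_size, reduction_factor=16):
        super().__init__()
        d = hidden_size // reduction_factor   # bottleneck dim, d << h
        self.down = nn.Linear(hidden_size, d)
        self.up = nn.Linear(d, hidden_size)
        self.act = nn.ReLU()

    def forward(self, h, residual):
        return self.up(self.act(self.down(h))) + residual

def gaussian_mmd(x, y, sigma=1.0):
    """Biased MMD^2 estimate; a stand-in for MK-MMD."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# layer-wise divergence between source and target outputs (Eq. 2)
adapter = BottleneckAdapter(hidden_size=768)
h_src, r_src = torch.randn(8, 768), torch.randn(8, 768)
h_trg, r_trg = torch.randn(8, 768), torch.randn(8, 768)
delta_l = gaussian_mmd(adapter(h_src, r_src), adapter(h_trg, r_trg))
\end{verbatim}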
\paragraph{Task Adapters.} Task adapters are stacked on top of frozen domain adapters. We pass the representations $dom_l$ from the previous step together with the supervised data from the source domain $(x_{s}^{src}, y_s^{src}) \sim \mathcal{D_S}$. Task adapters have the same architecture as domain adapters and perform the following: \begin{equation} task_l = W_{up} \cdot f(W_{down} \cdot dom_l^{src}) + r_l \end{equation} The goal of these task adapters is to learn representations that are task-specific. Only the task adapters are updated when training on the end task (sentiment classification, natural language inference); all other parameters, including the domain adapters, are frozen. The regular cross-entropy loss is minimized during the training of task adapters: \begin{equation} \mathcal{L}_{task} = softmax\_ce(W_{task} \cdot h_L) \end{equation} Here $h_L$ is the hidden representation of the last layer of the transformer, $W_{task} \in \mathbb{R}^{h \times |\mathcal{Y}|}$ where $|\mathcal{Y}|$ is the number of classes, and $softmax\_ce$ is the softmax followed by cross-entropy. This two-step process deconstructs \textsc{uda}{} methods into a domain adapter and a task adapter. This affords composability, where task adapters can be reused for different pairs of domains (\Cref{sec:further-analysis}). However, domain and task representations can also be learned jointly, as explored in the next section. \paragraph{Training Process.} Given a source-target domain adaptation scenario, we first train the domain adapter and save its weights. We then stack the task adapter on the domain adapter and train it using the supervised data from the source domain. When training the task adapter, the domain adapter is frozen. During inference, we stack the domain and task adapters. \subsection{Joint Domain Task Adapters} \label{sec:joint-method} This method adds a single adapter that performs the reduction of the divergence measure and learns task representations jointly. For a given supervised sample from the source domain $(x_s^{src}, y_s^{src}) \sim \mathcal{D_S}$ and an unsupervised sample $(x_t^{trg}) \sim \mathcal{D_T}$, let $h_l^{src}, h_l^{trg}$ be the hidden representations of the adapters for $x_s^{src}$ and $x_t^{trg}$ at layer $l$. We reduce the following joint loss: \begin{equation} \mathcal{L} = \lambda \cdot \mathcal{L}_{task} + (1-\lambda) \cdot \mathcal{L}_{div} \end{equation} Here $\mathcal{L}_{task}$ is the task loss on the supervised source-domain samples and $\lambda$ is the adaptation factor. Reducing the divergence along with the cross-entropy loss beyond a certain point makes training unstable and does not contribute to increased performance. Following \citet{dann}, we suppress the noisy signal from the divergence function as training progresses and gradually change $\lambda$ from 0 to 1 to reduce the contribution of the divergence loss, using the following schedule, where $p$ is the fraction of training completed ($\gamma=10$ for all of our experiments): \begin{equation} \lambda = \frac{2}{1+\exp{(-\gamma \cdot p)}} - 1 \end{equation} Similar methods have been proposed to adapt models to other domains by \citet{deepadaptationnetworks} and \citet{wu-etal-2022-learning}.
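For illustration, the adaptation-factor schedule and the joint objective can be written as the following sketch (assumed code, not our released implementation; \texttt{p} is the fraction of training completed):
\begin{verbatim}
import math

def adaptation_factor(p, gamma=10.0):
    """lambda = 2 / (1 + exp(-gamma * p)) - 1, rising from 0 to 1."""
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0

def joint_loss(task_loss, div_loss, step, total_steps):
    """L = lambda * L_task + (1 - lambda) * L_div."""
    lam = adaptation_factor(step / total_steps)
    return lam * task_loss + (1.0 - lam) * div_loss
\end{verbatim}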
Compared to the two-step process introduced earlier (\Cref{sec:two-step-method}), the losses must be carefully balanced to obtain optimal results, and this method does not offer composability (\Cref{sec:further-analysis}). \section{Discussion} This work shows that domain adaptation in NLP can be made more efficient using adapters. We use adapter fine-tuning{} \cite{pmlr-v97-houlsby19a} and the stacking of adapters proposed earlier for a cross-lingual setting \cite{pfeiffer-etal-2020-mad}, and apply them to unsupervised domain adaptation. The approach we have discussed will make domain adaptation more practical for real-world use cases, making adaptation faster and cheaper. However, in this work, we have used \texttt{bert-base-uncased}{} for all of our methods; using other backbone transformer models is part of our future work. We deal only with classification and natural language inference tasks. Adapters have previously been used for machine translation \cite{bapna-firat-2019-simple} and other generation tasks \cite{zhang-etal-2022-continual}; we still need to explore our domain adaptation methods for generation tasks. In this work, we reduce the divergence between the marginal distributions of the two domains. Previous works such as \citet{coregularized-domain-adaptation} show that reducing only the marginal distributions is not sufficient and that aligning the label distributions is necessary. However, NLP works generally do not consider this, and it would require further investigation by the community. \section{Literature Review} \paragraph{Parameter Efficient Fine-tuning Methods.} Adapters \cite{pmlr-v97-houlsby19a} are task-specific modules added to frozen transformer layers, with only the adapter parameters updated. Their plug-and-play characteristics and the avoidance of catastrophic forgetting have resulted in their use for NLP tasks: machine translation \cite{bapna-firat-2019-simple}, named entity recognition \cite{pfeiffer-etal-2020-mad}, etc. Recently, \citet{he-etal-2021-effectiveness} have shown that they are efficient in scenarios where there is minimal supervised data. However, they neither test their performance under domain shift nor propose methods to improve adapter fine-tuning{}. Closely related to our method is the work of \citet{ngo-trung-etal-2021-unsupervised}, who learn a shared-private representation per layer, similar to \textsc{dsn}{} \cite{dsn}. Their method requires balancing multiple loss functions, compared to our simpler two-step domain adaptation method. The stacking of adapters has been used before by \citet{pfeiffer-etal-2020-mad} for cross-lingual tasks: learning a language adapter first and stacking a task adapter on top. However, one language adapter is learned per language, large amounts of unsupervised data are assumed to be available in all the languages, and supervised data are required to learn a task, which is not applicable to domain adaptation. Compared to these methods, we make domain adaptation more efficient using the principles of unsupervised domain adaptation. \paragraph{Unsupervised Domain Adaptation (\textsc{uda}{}).} Existing \textsc{uda}{} approaches can be categorized into model-centric, data-centric, and hybrid. \textit{Model-centric} approaches, which augment the feature space or alter the loss function, architecture, or model parameters \cite{blitzer-etal-2006-domain,10.1145/1772690.1772767, dann}, have been popular.
A popular \textit{model-centric} approach is to use adversarial training between the domain and the task classifier \cite{dann} to extract domain-invariant information; \textsc{dsn}{} \cite{dsn} in addition preserves domain-specific information. These works involve training a large number of parameters and require careful balancing of multiple loss functions. Our methods build on top of these works and make them more parameter-efficient. Large-scale transformers pretrained on domain-specific corpora have become the norm: biomedical \cite{10.1093/bioinformatics/btz682}, scientific publications \cite{beltagy-etal-2019-scibert}, among others. Another alternative is to continue pretraining generic models on domain-specific data: domain-adaptive pretraining \cite{gururangan-etal-2020-dont}. Both solutions are expensive, since a huge model has to be stored for every domain, while using adapters requires storing only a small number of parameters for every domain pair and can be quickly adapted to new domains. \section{Experiments} \renewcommand{\arraystretch}{1.2} \begin{table*}[t!] \centering \footnotesize \resizebox{\linewidth}{!}{% \begin{tabular}{l|c|@{\hskip 0.4in}cc|ccccc} \hline & \colorbox{red!30}{\textbf{Fully Supervised}} & \multicolumn{2}{@{\hskip -0.3in}c@{\hskip 0.1in}|}{\colorbox{orange}{\textbf{Unsupervised Domain Adaptation}}} & \multicolumn{5}{c}{\colorbox{blue!25}{\textbf{Adapter Based}}} \\ Src \textrightarrow Trg & \fireemoji & \textsc{dann}{} & \textsc{dsn}{} & \dann{}-\adapteremoji{} & \dann{}-\adapteremoji-\textsc{mc}{} & \textsc{task}-\adapteremoji{}{} & \textsc{ts-dt}-\adapteremoji{}{} & \textsc{joint-dt}-\adapteremoji{}{} \\ [0.5ex] \hline \textsc{a}{} \textrightarrow{} \textsc{ba}{} & 87.52 \scriptsize(1.96) & 85.57 \scriptsize(3.72) & 89.90 \scriptsize(0.26) & 86.46 \scriptsize(0.26) & \textbf{88.74 \scriptsize (0.64)} & 87.03 \scriptsize(0.26) & 88.24 \scriptsize(0.76) & \textbf{88.74 \scriptsize (0.13)} \\ \textsc{a}{} \textrightarrow{} \textsc{bo}{} & 86.67 \scriptsize(1.06) & 36.48 \scriptsize(0.45) & 84.47 \scriptsize(0.99) & 78.41 \scriptsize(1.14) & 83.36 \scriptsize (0.43) & 84.15 \scriptsize(1.10) & 84.22 \scriptsize(0.76) & \textbf{84.96 \scriptsize (0.28)} \\ \textsc{a}{} \textrightarrow{} \textsc{c}{} & 91.62 \scriptsize(0.37) & 57.51 \scriptsize(13.32) & 88.56 \scriptsize(0.81) & 87.31 \scriptsize(0.39) & 88.75 \scriptsize (0.69) & \textbf{89.67 \scriptsize(0.32)} & 88.76 \scriptsize(1.32) & 89.39 \scriptsize (0.23) \\ \textsc{a}{} \textrightarrow{} \textsc{mr}{} & 82.08 \scriptsize(0.78) & 35.23 \scriptsize(1.99) & 78.08 \scriptsize(0.46) & 75.54 \scriptsize(0.63) & 76.60 \scriptsize (1.06) & 76.63 \scriptsize(0.92) & 77.39 \scriptsize(0.13) & \textbf{77.63 \scriptsize (0.71)} \\ \textsc{ba}{} \textrightarrow{} \textsc{a}{} & 89.12 \scriptsize(0.38) & 77.52 \scriptsize(11.25) & 87.46 \scriptsize(1.83) & 87.72 \scriptsize(1.85) & 88.47 \scriptsize (0.72) & 88.33 \scriptsize(1.10) & 89.55 \scriptsize(0.10) & \textbf{89.70 \scriptsize (0.23)} \\ \textsc{ba}{} \textrightarrow{} \textsc{bo}{} & 86.67 \scriptsize(1.06) & 43.45 \scriptsize(8.96) & 82.19 \scriptsize(3.70) & 82.89 \scriptsize(3.08) & 83.86 \scriptsize (0.41) & 84.61 \scriptsize(0.39) & 84.38 \scriptsize(0.61) & \textbf{85.01 \scriptsize (0.60)} \\ \textsc{ba}{} \textrightarrow{} \textsc{c}{} & 91.62 \scriptsize(0.37) & 47.58 \scriptsize(7.65) & 89.68 \scriptsize(0.71) & 86.63 \scriptsize(0.53) & 88.73 \scriptsize (0.42) & \textbf{90.63 \scriptsize(0.33)} & 87.46 \scriptsize(0.88) & 88.64 \scriptsize (0.30) \\ \textsc{ba}{}
\textrightarrow{} \textsc{mr}{} & 82.08 \scriptsize(0.78) & 50.63 \scriptsize(7.43) & 77.88 \scriptsize(0.38) & 74.48 \scriptsize(1.79) & 78.07 \scriptsize (0.34) & 78.74 \scriptsize(0.35) & \textbf{79.42 \scriptsize(0.44)} & 78.44 \scriptsize (0.70)\\ \textsc{bo}{} \textrightarrow{} \textsc{a}{} & 89.12 \scriptsize(0.38) & 37.40 \scriptsize(1.90) & 88.20 \scriptsize(0.51) & 85.90 \scriptsize(0.12) & 85.91 \scriptsize (0.25) & 85.03 \scriptsize(0.36) & 84.79 \scriptsize(0.75) & \textbf{87.46 \scriptsize (0.27)} \\ \textsc{bo}{} \textrightarrow{} \textsc{ba}{} & 87.52 \scriptsize(1.96) & 54.33 \scriptsize(12.49) & 88.56 \scriptsize(0.44) & 82.06 \scriptsize(1.15) & 84.27 \scriptsize (0.11) & 86.50 \scriptsize(0.39) & \textbf{86.84 \scriptsize(0.48)} & 86.41 \scriptsize (0.79)\\ \textsc{bo}{} \textrightarrow{} \textsc{c}{} & 91.62 \scriptsize(0.37) & 39.43 \scriptsize(0.49) & 88.58 \scriptsize(1.01) & 86.94 \scriptsize(0.83) & 87.40 \scriptsize (0.44) & 88.44 \scriptsize(0.53) & 87.86 \scriptsize(0.61) & \textbf{88.53 \scriptsize (0.43)} \\ \textsc{bo}{} \textrightarrow{} \textsc{mr}{} & 82.08 \scriptsize(0.78) & 54.23 \scriptsize(13.94) & 79.07 \scriptsize(1.01) & 76.19 \scriptsize(0.89) & 79.44 \scriptsize (0.86) & 79.44 \scriptsize(0.95) & \textbf{80.52 \scriptsize(0.61)} & 78.91 \scriptsize (0.38)\\ \textsc{c}{}{} \textrightarrow{} \textsc{a}{} & 89.12 \scriptsize(0.38) & 60.93 \scriptsize(3.78) & 89.76 \scriptsize(0.76) & 87.02 \scriptsize(1.86)& 86.63 \scriptsize (0.29) & 87.74 \scriptsize(1.18) & 88.53 \scriptsize{(0.42)} & \textbf{88.92 \scriptsize (0.44)} \\ \textsc{c}{} \textrightarrow{} \textsc{ba}{} & 87.52 \scriptsize(1.96) & 77.29 \scriptsize(3.61) & 89.42 \scriptsize(0.70) & 88.10 \scriptsize(1.13) & 89.14 \scriptsize (0.30) & 81.71 \scriptsize(2.72) & \textbf{89.72 \scriptsize(0.43)} & 89.32 \scriptsize (0.42) \\ \textsc{c}{} \textrightarrow{} \textsc{bo}{} & 86.67 \scriptsize(1.06) & 38.21 \scriptsize(1.40) & 85.56 \scriptsize(0.62) & 81.18 \scriptsize(2.07) & 83.61 \scriptsize (0.67) & 80.55 \scriptsize(0.81) & 84.14 \scriptsize(0.52) & \textbf{85.42 \scriptsize (0.70)} \\ \textsc{c}{} \textrightarrow{} \textsc{mr}{} & 82.08 \scriptsize(0.78) & 35.08 \scriptsize(1.94) & 76.13 \scriptsize(0.54) & 64.99 \scriptsize(5.91) & \textbf{74.22 \scriptsize (0.31)} & 69.53 \scriptsize(1.24) & 73.22 \scriptsize(0.48) & 73.50 \scriptsize (0.84) \\ \textsc{mr}{} \textrightarrow{} \textsc{a}{} & 89.12 \scriptsize(0.38) & 37.07 \scriptsize(4.16) & 82.64 \scriptsize(2.17) & 81.05 \scriptsize(1.15) & 79.56 \scriptsize (0.53) & 82.45 \scriptsize(1.43) & 81.93 \scriptsize(0.47) & \textbf{84.41 \scriptsize (0.43)} \\ \textsc{mr}{} \textrightarrow{} \textsc{ba}{} & 87.52 \scriptsize(1.96) & 38.76 \scriptsize(4.17) & 80.59 \scriptsize(2.18) & 77.95 \scriptsize(1.46) & 79.33 \scriptsize (0.43) & 81.70 \scriptsize(1.22) & 84.28 \scriptsize(0.41) & \textbf{84.91 \scriptsize (0.36)} \\ \textsc{mr}{} \textrightarrow{} \textsc{bo}{} & 86.67 \scriptsize(1.06) & 42.07 \scriptsize(4.86) & 85.13 \scriptsize(0.83) & 82.83 \scriptsize(0.62) & \textbf{84.90 \scriptsize(1.29)} & \textbf{84.90 \scriptsize (0.23)} & 84.47 \scriptsize(0.80) & 84.45 \scriptsize (0.31) \\ \textsc{mr}{} \textrightarrow{} \textsc{c}{} & 91.62 \scriptsize(0.37) & 36.92 \scriptsize(1.86) & 86.56 \scriptsize(0.63) & 84.58 \scriptsize(0.46) & 82.53 \scriptsize (0.92) & 86.68 \scriptsize(0.65) & 86.25 \scriptsize(0.38) & \textbf{88.37 \scriptsize (0.11)} \\ [0.5ex] \hline Avg & 87.40 \scriptsize(0.91) & 49.28 \scriptsize(5.47) & 84.92 
\scriptsize(1.03) & 81.91 \scriptsize (1.37) & 83.68 \scriptsize(0.50) & 83.72 \scriptsize(0.88) & 84.60 \scriptsize(0.57) & \textbf{85.16 \scriptsize (0.43)} \\ \hline \end{tabular} % } \caption{\label{tab:amazon_results}F1 scores on the \textsc{amazon}{} dataset. We report the mean and standard deviation of 3 runs. The five domains are Apparel (\textsc{a}{}), Baby (\textsc{ba}{}), Books (\textsc{bo}{}), Camera\_Photo (\textsc{c}{}) and Movie Reviews (\textsc{mr}{}). On average, our method outperforms all baselines and is competitive with fully fine-tuned unsupervised domain adaptation methods.} \end{table*} \subsection{Datasets} We evaluate our approach on two representative datasets with different tasks, both in English. \Cref{tab:dataset} shows the details of the datasets. Every dataset has 5 domains, and we pair each domain with every other domain, which results in 20 domain adaptation scenarios per dataset and 120 experiments per method, totalling over 1.9K experiments. \noindent \paragraph{\textsc{amazon}:} The Multi-Domain Sentiment Analysis dataset \cite{blitzer-etal-2007-biographies} contains Amazon product reviews for five different types of products (domains): Apparel (\textsc{a}{}), Baby (\textsc{ba}{}), Books (\textsc{bo}), Camera\_Photo (\textsc{c}{}), and Movie Reviews (\textsc{mr}{}). Each review is labeled as positive or negative. We follow the setup of \citet{ramesh-kashyap-etal-2021-domain}. \noindent \paragraph{\textsc{mnli}{}:} The Multi-Genre Natural Language Inference (MNLI) corpus \cite{williams-etal-2018-broad} contains hypothesis--premise pairs covering a variety of genres: Travel (\textsc{tr}{}), Fiction (\textsc{f}{}), Telephone (\textsc{te}{}), Government (\textsc{g}{}), and Slate (\textsc{s}{}). Each pair of sentences is labeled Entailment, Neutral, or Contradiction. The training and validation sets are created from the original train set by sampling 90\% and 10\% of the examples, respectively. We use the MNLI-matched validation set as our test set. \subsection{Baseline Methods} \noindent \paragraph{Fully supervised.} \textit{Fine-tune (\fireemoji)}: Fine-tunes a language model using labeled data from the target domain. Serves as an upper bound on performance. \noindent \paragraph{Unsupervised Domain Adaptation (\textsc{uda}{}).} \textit{Domain Adversarial Neural Networks} (\textsc{dann}{}): An unsupervised domain adaptation method \cite{dann} that learns domain-invariant information by minimizing the task loss and maximizing the domain confusion loss with the help of gradient reversal layers. \textit{Domain Separation Networks} (\textsc{dsn}{}): \cite{dsn} improves on \textsc{dann}{} with additional losses that preserve domain-specific information along with the extraction of domain-invariant information. \texttt{bert-base-uncased}{} serves as the feature extractor for both methods. \noindent \paragraph{Adapter Based.} \textit{\textsc{dann}{} Adapter} (\dann{}-\adapteremoji{}): Similar to \textsc{dann}{}, but we insert trainable adapter modules into every layer of a PLM. \textit{\textsc{dann}{} Adapter with Multiple Classifiers} (\dann{}-\adapteremoji-\textsc{mc}{}): Unlike \textsc{dann}{}-\adapteremoji, which involves a single task and domain classifier, here a task and a domain classifier are added to each of the last 3 layers of a PLM. The representations of the last layers of a PLM are domain-variant \cite{ramesh-kashyap-etal-2021-analyzing}, and this model obtains domain-invariant information\footnote{We tried adding classifiers incrementally to the last few layers.
Adding it to the last 3 layers performed the best.} \textit{Task Adapter} (\textsc{task}-\adapteremoji{}{}): Adapter fine-tuning \cite{pfeiffer-etal-2020-adapterhub}, where adapters are fine-tuned on the labeled source domain and tested on the target domain. \textit{Two-step Domain and Task Adapter} (\textsc{ts-dt}-\adapteremoji{}{}): This work, where we first train a domain adapter that reduces the probabilistic divergence between the two domains and then fine-tune a task adapter by stacking. \textit{Joint Domain Task Adapter} (\textsc{joint-dt}-\adapteremoji{}): This work, where we train a single adapter that reduces the domain divergence and the task loss jointly. For all adapter-based experiments, the PLM is frozen and only the adapter modules are trained. Since we use adapters, we only consider other adapter-based baselines and omit methods such as prompt tuning \cite{lester-etal-2021-power}. Also, \citet{Zhang2021UnsupervisedDA} target multi-domain adaptation and use data from all the domains during training, unlike our method, so it is not a fair comparison. \paragraph{Implementation Details and Evaluation.} For our experiments, we use \texttt{bert-base-uncased}{} \cite{devlin-etal-2019-bert}, available in the HuggingFace Transformers library \cite{wolf-etal-2020-transformers}, as our backbone. Adapter implementations are from AdapterHub \cite{pfeiffer-etal-2020-adapterhub}. We follow \citet{pfeiffer-etal-2021-adapterfusion} and add only one bottleneck layer after the feed-forward layer. We use the AdamW optimizer with a learning rate of $1e-4$ for all adapter-based training and $2e-5$ otherwise. Only for the smaller \textsc{amazon}{} dataset we use an adapter reduction factor of 32; for all other adapter-based experiments and datasets, we use the default reduction factor of 16. We performed experiments with three different seeds and report the mean and standard deviation of the F1 scores. For \textsc{dann}{} we use 0.04 as our $\lambda$, and for \textsc{dsn}{} we use 0.1, 0.1, and 0.3 as the weights of the three losses: reconstruction, similarity, and difference, respectively. We avoid extensive hyperparameter tuning per domain adaptation scenario for efficiency. \renewcommand{\arraystretch}{1.2} \begin{table*}[t!]
\centering \footnotesize \resizebox{\linewidth}{!}{% \begin{tabular}{l|c|@{\hskip 0.4in}cc|ccccc} \hline & \colorbox{red!30}{\textbf{Fully Supervised}} & \multicolumn{2}{@{\hskip -0.3in}c@{\hskip 0.1in}|}{\colorbox{orange}{\textbf{Unsupervised Domain Adaptation}}} & \multicolumn{5}{c}{\colorbox{blue!25}{\textbf{Adapter Based}}} \\ Src \textrightarrow Trg & \fireemoji & \textsc{dann}{} & \textsc{dsn}{} & \dann{}-\adapteremoji{} & \dann{}-\adapteremoji-\textsc{mc}{} & \textsc{task}-\adapteremoji{}{} & \textsc{ts-dt}-\adapteremoji{}{} & \textsc{joint-dt}-\adapteremoji{}{} \\ [0.5ex] \hline \textsc{f}{} \textrightarrow \textsc{s}{} & 74.09 \scriptsize(0.40) & 73.68 \scriptsize(0.21) & 72.36 \scriptsize(0.17) & 70.96 \scriptsize(0.03) & 62.40 \scriptsize(4.79) & 72.36 \scriptsize(0.36) & \textbf{73.46 \scriptsize(0.34)} & 72.30 \scriptsize(0.26) \\ \textsc{f}{} \textrightarrow \textsc{g}{} & 82.19 \scriptsize(0.12) & 79.17 \scriptsize(0.25) & 79.79 \scriptsize(0.21) & 78.73 \scriptsize(0.43) & 77.23 \scriptsize(0.33) & 79.00 \scriptsize(0.46) & 78.65 \scriptsize(0.25) & \textbf{79.79 \scriptsize(0.22)} \\ \textsc{f}{} \textrightarrow \textsc{te}{} & 78.41 \scriptsize(0.66) & 73.72 \scriptsize(0.81) & 75.07 \scriptsize(0.32) & 70.89 \scriptsize(0.74) & 71.68 \scriptsize(0.59) & 70.83 \scriptsize(0.54) & \textbf{73.05 \scriptsize(0.70)} & 71.59 \scriptsize(0.78) \\ \textsc{f} \textrightarrow \textsc{tr}{} & 81.81 \scriptsize(0.20) & 76.99 \scriptsize(0.19) & 76.82 \scriptsize(0.50) & 74.42 \scriptsize(0.18) & 75.09 \scriptsize(0.05) & 75.85 \scriptsize(0.19) & 76.75 \scriptsize(0.80) & \textbf{77.07 \scriptsize(0.26)} \\ \textsc{s}{} \textrightarrow \textsc{f}{} & 78.59 \scriptsize(0.34) & 75.91 \scriptsize(0.23) & 76.62 \scriptsize(0.38) & 73.89 \scriptsize(0.61) & 73.47 \scriptsize(0.28) & 75.25 \scriptsize(0.19) & \textbf{75.52 \scriptsize(0.89)} & 75.35 \scriptsize(0.56) \\ \textsc{s}{} \textrightarrow \textsc{g}{} & 82.19 \scriptsize(0.12) & 80.91 \scriptsize(0.46) & 81.27 \scriptsize(0.23) & 79.99 \scriptsize(0.36) & 79.16 \scriptsize(0.10) & 80.76 \scriptsize(0.40) & \textbf{81.65 \scriptsize(0.11)} & 80.94 \scriptsize(0.30) \\ \textsc{s}{} \textrightarrow \textsc{te}{} & 78.41 \scriptsize(0.66) & 74.32 \scriptsize(0.57) & 74.27 \scriptsize(0.48) & 72.29 \scriptsize(0.57) & 71.89 \scriptsize(0.07) & 72.66 \scriptsize(0.79) & \textbf{74.09 \scriptsize(0.30)} & 73.38 \scriptsize(0.63) \\ \textsc{s}{} \textrightarrow \textsc{tr}{} & 81.81 \scriptsize(0.20) & 76.81 \scriptsize(0.35) & 78.17 \scriptsize(0.20) & 75.58 \scriptsize(0.54) & 75.77 \scriptsize(0.39) & 76.16 \scriptsize(0.22) & \textbf{77.31 \scriptsize(0.60)} & 77.16 \scriptsize(0.18) \\ \textsc{g}{} \textrightarrow \textsc{f}{} & 78.59 \scriptsize(0.34) & 73.41 \scriptsize(0.73) & 72.62 \scriptsize(0.37) & 71.57 \scriptsize(0.68) & 70.34 \scriptsize(0.73) & 72.66 \scriptsize(0.31) & 72.66 \scriptsize(0.56) & \textbf{73.56 \scriptsize(0.23)} \\ \textsc{g}{} \textrightarrow \textsc{s}{} & 74.09 \scriptsize(0.40) & 72.51 \scriptsize(0.10) & 71.93 \scriptsize(0.25) & 70.17 \scriptsize(0.64) & 69.49 \scriptsize(0.40) & 71.11 \scriptsize(0.38) & 71.14 \scriptsize(0.21) & \textbf{71.36 \scriptsize(0.04)} \\ \textsc{g}{} \textrightarrow \textsc{te}{} & 78.41 \scriptsize(0.66) & 71.52 \scriptsize(0.13) & 72.90 \scriptsize(0.39) & 69.45 \scriptsize(0.96) & 68.67 \scriptsize(0.17) & 71.40 \scriptsize(0.30) & 71.53 \scriptsize(1.04) & \textbf{71.99 \scriptsize(0.67)} \\ \textsc{g}{} \textrightarrow \textsc{tr}{} & 81.81 \scriptsize(0.20) & 
77.42 \scriptsize(0.54) & 77.80 \scriptsize(0.42) & 74.35 \scriptsize(0.22) & 74.04 \scriptsize(0.51) & 76.29 \scriptsize(0.10) & 76.16 \scriptsize(0.34) & \textbf{76.79 \scriptsize(0.59)} \\ \textsc{te}{} \textrightarrow \textsc{f}{} & 78.59 \scriptsize(0.34) & 75.07 \scriptsize(0.08) & 75.17 \scriptsize(0.35) & 72.24 \scriptsize(0.59) & 71.49 \scriptsize(0.45) & \textbf{74.48 \scriptsize(0.33)} & 73.34 \scriptsize(0.41) & 73.89 \scriptsize(0.12) \\ \textsc{te}{} \textrightarrow \textsc{s}{} & 74.09 \scriptsize(0.40) & 71.65 \scriptsize(0.50) & 72.16 \scriptsize(0.23) & 69.09 \scriptsize(1.79) & 69.25 \scriptsize(0.31) & 70.94 \scriptsize(0.16) & 70.94 \scriptsize(0.55) & \textbf{71.41 \scriptsize(0.19)} \\ \textsc{te}{} \textrightarrow \textsc{g}{} & 82.19 \scriptsize(0.12) & 78.57 \scriptsize(0.60) & 79.24 \scriptsize(0.31) & 77.80 \scriptsize(0.27) & 76.65 \scriptsize(0.20) & 79.24 \scriptsize(0.35) & 79.65 \scriptsize(0.60) & \textbf{79.78 \scriptsize(0.64)} \\ \textsc{te}{} \textrightarrow \textsc{tr}{} & 81.81 \scriptsize(0.20) & 75.72 \scriptsize(0.37) & 77.29 \scriptsize(0.61) & 74.67 \scriptsize(0.50) & 74.08 \scriptsize(0.25) & 75.27 \scriptsize(0.83) & \textbf{76.11 \scriptsize(0.91)} & 75.95 \scriptsize(0.50) \\ \textsc{tr}{} \textrightarrow \textsc{f}{} & 78.59 \scriptsize(0.34) & 73.22 \scriptsize(0.92) & 72.44 \scriptsize(0.50) & 70.27 \scriptsize(0.45) & 69.08 \scriptsize(0.64) & 72.20 \scriptsize(0.49) & 73.12 \scriptsize(0.08) & \textbf{73.13 \scriptsize(0.22)} \\ \textsc{tr}{} \textrightarrow \textsc{s}{} & 74.09 \scriptsize(0.40) & 70.76 \scriptsize(0.72) & 70.97 \scriptsize(0.26) & 68.35 \scriptsize(0.62) & 67.23 \scriptsize(0.39) & 70.28 \scriptsize(0.37) & 70.67 \scriptsize(0.50) & \textbf{71.28 \scriptsize(0.38)} \\ \textsc{tr}{} \textrightarrow \textsc{g}{} & 82.19 \scriptsize(0.12) & 80.91 \scriptsize(0.28) & 81.67 \scriptsize(0.37) & 79.25 \scriptsize(0.34) & 78.77 \scriptsize(0.32) & 81.26 \scriptsize(0.37) & 81.11 \scriptsize(0.42) & \textbf{81.55 \scriptsize(0.16)} \\ \textsc{tr}{} \textrightarrow \textsc{te}{} & 78.41 \scriptsize(0.66) & 70.41 \scriptsize(1.63) & 71.98 \scriptsize(0.50) & 69.33 \scriptsize(0.41) & 69.45 \scriptsize(0.39) & 70.98 \scriptsize(0.11) & 70.95 \scriptsize(0.19) & \textbf{71.42 \scriptsize(0.12)} \\[0.5ex] \hline Avg & 79.02 \scriptsize(0.34) & 75.13 \scriptsize(0.48) & 75.53 \scriptsize(0.35) & 73.16 \scriptsize(0.55) & 72.26 \scriptsize(0.57) & 74.45 \scriptsize(0.40) & \textbf{74.89 \scriptsize(0.49)} & \textbf{74.98 \scriptsize(0.35)} \\ \hline \end{tabular} % } \caption{\label{tab:mnli_results}F1 scores for \textsc{mnli}{} dataset. We report mean and standard deviation of 3 runs. The five domains are Fiction (\textsc{f}{}), Slate (\textsc{s}{}), Government (\textsc{g}{}), Telephone (\textsc{te}{}), and Travel (\textsc{tr}{}). On average, our method performs better than all baselines. } \end{table*} \subsection{Results} From \Cref{tab:amazon_results} and \Cref{tab:mnli_results} our methods \textsc{ts-dt}-\adapteremoji{}{} and \textsc{joint-dt}-\adapteremoji{}{} perform well in both \textsc{amazon}{} and \textsc{mnli}{}. We find that fine-tuning the task adapter (\textsc{task}-\adapteremoji{}{}) is a strong baseline and, compared to it, we perform well in 17/20 domain adaptation scenarios in \textsc{amazon}{} (largest increase of 8 points for \textsc{c}{} \textrightarrow{} \textsc{ba}{} ) and 19/20 domain adaptation scenarios in \textsc{mnli}{} (largest increase of 2.2 for \textsc{f}{} \textrightarrow{} \textsc{te}{}). 
One possible explanation for the scenarios in which our method achieves the largest increase is the proximity of the two domains. The overlap in vocabularies (\Cref{fig:vocab_overlap} in the Appendix) between \textsc{c}{} \textrightarrow \textsc{ba}{} in \textsc{amazon}{} and \textsc{f}{} \textrightarrow{} \textsc{te}{} in \textsc{mnli}{} is high, and our method takes advantage of learning domain-invariant information that can be used for efficient domain transfer. Our methods for learning domain-invariant information are thus necessary to achieve good domain adaptation. \paragraph{\textsc{UDApter}{} is comparable to \textsc{uda}{} methods.} Compared to \textsc{uda}{} methods, in which all parameters of the backbone model are fine-tuned, we perform close to them on average. \textsc{joint-dt}-\adapteremoji{}{} performs better than \textsc{dsn}{} by 0.2\% on \textsc{amazon}{}, and we are within 0.85\% of \textsc{dsn}{} on \textsc{mnli}{}. Training \textsc{dann}{} is highly unstable and produces varied results, especially for \textsc{amazon}{}, which has a small number of examples in each domain. Our adapter method achieves better results than \textsc{dann}{} with minimal modification of the hyperparameters. \paragraph{Replacing \textsc{uda}{} Feature Extractors with Adapter Versions is insufficient.} \textit{Given that fully fine-tuned \textsc{uda}{} methods perform well, can we freeze the feature extractors of \textsc{uda}{} methods, fine-tune{} only adapters, and still perform effective domain adaptation?} We compare our methods with \dann{}-\adapteremoji{} and \dann{}-\adapteremoji-\textsc{mc}{} and outperform them both on \textsc{amazon}{} and on \textsc{mnli}{}. This is in line with the finding of \citet{karouzos-etal-2021-udalm} that although domain adversarial training brings domain representations closer, it introduces distortion in the semantic space, reducing model performance. This shows that simply replacing feature extractors with their adapter versions in existing \textsc{uda}{} methods is not an effective strategy. \paragraph{Gap to Full Fine-Tuning.} Fine-tuning a PLM with supervised data in the target domain is the upper bound on performance for domain adaptation. The gap to full fine-tuning{} is greater when more data are available (3.15 in \textsc{amazon}{} and 4.13 in \textsc{mnli}{}). This is not surprising, as supervised fine-tuning{} works better with more data. However, while adapters perform close to full fine-tuning in supervised scenarios \cite{towards-a-unified-view-of-parameter-efficient-transfer-learning}, there is still a large gap between domain adaptation and full fine-tuning. \subsection{Further Analysis} \label{sec:further-analysis} \begin{figure*}[t!] \centering \subfloat[ \label{fig:rf_ablation_amazon}]{\includegraphics[width=0.53\textwidth]{figures/RF_ablation_amazon.png}} \subfloat[\label{fig:rf_ablation_mnli}]{\includegraphics[width=0.53\textwidth]{figures/RF_ablation_MNLI.png}} \caption{(a) Performance for \textsc{amazon}{} on the \textsc{c}{} \textrightarrow{} \textsc{ba}{} domain adaptation scenario for different reduction factors. (b) Performance for \textsc{mnli}{} on the \textsc{s}{} \textrightarrow{} \textsc{tr}{} scenario for different reduction factors.} \label{fig:ablation_rf} \end{figure*} \paragraph{Adapter Reduction Factor.} The bottleneck size ($d$) of the adapters plays an important role in the final performance of the model. We show the performance of the models at various reduction factors in \Cref{fig:ablation_rf}.
For \textsc{joint-dt}-\adapteremoji{}{}, smaller reduction factors generally perform well on both \textsc{amazon}{} and \textsc{mnli}{}, with performance dropping at larger reduction factors. This shows that the \textsc{joint-dt}-\adapteremoji{}{} method requires a greater number of parameters to reduce the divergence and learn task representations together. Since \textsc{ts-dt}-\adapteremoji{}{} adds two adapters, it adds more parameters at the same reduction factor than \textsc{joint-dt}-\adapteremoji{}{}; as a result, we find that as the data scale up, relatively low reduction factors work well. \begin{figure}[ht!] \begin{tabular}{l} \includegraphics[trim = 5mm 0mm 15mm 0mm, clip, width=.5\textwidth, right]{figures/skip_layers_amazon.png} \vspace{-0.5cm}\\ \includegraphics[trim = 5mm 0mm 15mm 0mm, clip, width=.5\textwidth, right]{figures/skip_layers_MNLI.png} \\ \end{tabular} \caption{Difference in performance when adapters are removed from certain layers (mentioned inside the cells) for the \textsc{amazon}{} dataset (top) and the \textsc{mnli}{} dataset (bottom). Performance drops if adapters are removed from certain layers.} \label{fig:skip_layer_rf} \end{figure} \paragraph{Removing adapters from contiguous layer spans.} Not all adapters are equal: removing adapters from the first few layers still preserves performance (\Cref{fig:skip_layer_rf}). For \textsc{joint-dt}-\adapteremoji{}{} and \textsc{ts-dt}-\adapteremoji{}{}, the F1 slowly decreases as we remove more adapters. However, we obtain comparable performance after removing the adapters from layers 1-6. This suggests that adapters are most effective when added to the higher layers, where the divergence between domains is greater than in the lower layers \cite{ramesh-kashyap-etal-2021-analyzing}. Thus we can further reduce the number of parameters needed for domain adaptation. \begin{figure*}[t!] \centering \footnotesize \resizebox{\linewidth}{!}{% \begin{tabular}{c@{}c@{}c@{}c@{}c@{}} \includegraphics[width=\textwidth]{figures/pretrained_slate_travel/layer_1.png}& \includegraphics[width=\textwidth]{figures/pretrained_slate_travel/layer_2.png}& \includegraphics[width=\textwidth]{figures/pretrained_slate_travel/layer_11.png}& \includegraphics[width=\textwidth]{figures/pretrained_slate_travel/layer_12.png} \\ \includegraphics[width=\textwidth]{figures/domain_adapter_slate_travel/layer_1.png}& \includegraphics[width=\textwidth]{figures/domain_adapter_slate_travel/layer_2.png}& \includegraphics[width=\textwidth]{figures/domain_adapter_slate_travel/layer_11.png}& \includegraphics[width=\textwidth]{figures/domain_adapter_slate_travel/layer_12.png}\\ \end{tabular} % } \caption{(top) t-SNE plots of the representations from \texttt{bert-base-uncased}{}: the lower layers are domain-invariant while the higher layers are domain-variant. (bottom) t-SNE plots from the domain adapter trained on the \textsc{s}{} \textrightarrow{} \textsc{tr}{} pair: with domain adapters, even the higher layers are domain-invariant.} \label{fig:domain-adapter-tsne} \end{figure*} \paragraph{t-SNE plots.} The t-SNE \cite{JMLR:v9:vandermaaten08a} plots from domain adapters are shown in \Cref{fig:domain-adapter-tsne} for the \textsc{mnli}{} dataset. In the pretrained model, the lower layers have low divergence and the data from the two domains are interspersed, whereas the higher layers have high divergence. Our method effectively reduces the divergence in the higher layers.
\paragraph{Composability.} We test the composability of our two-step method \textsc{ts-dt}-\adapteremoji{}{}. We reuse the task adapter trained for \textsc{c}{} \textrightarrow{} \textsc{ba}{}, replace the domain adapter with the one trained for \textsc{c}{} \textrightarrow{} \textsc{mr}{}, and perform inference on the \textsc{c}{} \textrightarrow{} \textsc{mr}{} dataset. The original F1 on the \textsc{c}{} \textrightarrow{} \textsc{mr}{} dataset was 73.22; after composing its domain adapter with the task adapter trained on \textsc{c}{} \textrightarrow{} \textsc{ba}{}, the F1 score is 72.66 -- a minimal performance loss. This shows the composability of \textsc{ts-dt}-\adapteremoji{}{}. \section{Limitations} Our work has several limitations. We have experimented with only one type of parameter-efficient method, adapter fine-tuning{}. Several alternative parameter-efficient methods, such as LoRA \cite{lora}, BitFit \cite{bitfit}, and other unifying paradigms \cite{towards-a-unified-view-of-parameter-efficient-transfer-learning}, have been proposed recently; these methods are modular and can easily be substituted for adapters. Another major limitation of our work is that we do not explore whether different tasks can be learned over a given pair of domains. For example, for a pair of domains such as \textsc{news} and \textsc{twitter}, it would be ideal if we learned one domain adapter and reused it for different applications such as sentiment analysis and named entity recognition, among others. We are limited by the availability of data for such scenarios, and this remains potential future work. \section{Introduction} Fine-tuning pretrained language models (PLMs) is the predominant method for improving performance on NLP tasks such as sentiment analysis, natural language inference, and other language understanding tasks \cite{wang-etal-2018-glue}. However, fine-tuning{} forces us to modify all the parameters of the model and store one copy of the model per task; given the large size of current PLMs, this can be expensive. Furthermore, fine-tuning{} needs large-scale data to be effective and is unstable across different seeds \cite{han-etal-2021-robust}. A new approach to alleviate this is parameter-efficient fine-tuning{} -- freezing the PLM parameters and fine-tuning{} only a small fraction of the parameters. Fine-tuning with adapters \cite{pmlr-v97-houlsby19a} is one of these methods, in which small additional layers are tuned within each PLM layer. Fine-tuning with adapters has many advantages: performance comparable to full fine-tuning{} \cite{towards-a-unified-view-of-parameter-efficient-transfer-learning}, and robustness to different seeds and adversarial examples \cite{han-etal-2021-robust}. Unsupervised domain adaptation (\textsc{uda}{}) aims to adapt models to new domains and considers situations where labeled data are available only in the source domain and unlabeled data are available in the target domain. \textsc{uda}{} methods in general have two components: the first reduces the divergence between the source and target domains, and the second reduces the loss corresponding to a particular task \cite{ramesh-kashyap-etal-2021-domain}. However, they fine-tune{} a large number of parameters and are susceptible to catastrophic forgetting. Adapters \cite{pmlr-v97-houlsby19a} can help solve these problems; however, the benefits of adapter fine-tuning{} for domain adaptation have been mostly overlooked. \textit{How well can adapter fine-tuning perform across different domains?
Can we make domain adaptation more efficient?} In this work, we answer these questions and propose models to perform domain adaptation using adapters. Adapters are known to perform well in low-resource scenarios where a small amount of supervised data is available in a new domain or language \cite{he-etal-2021-effectiveness, pfeiffer-etal-2020-mad}. In this work, using the principles of \textsc{uda}{}, we propose to make domain adaptation more effective using unsupervised data from the target domain. We introduce two methods that we collectively call the \textbf{U}nsupervised \textbf{D}omain \textbf{A}daptation method using ada\textbf{pters} (\textsc{UDApter}{}). The first method is a two-step process: first, we learn \textit{domain adapters}, using a divergence measure to bring two probabilistic distributions closer together. This helps us learn representations that are independent of the domain from which they come. Second, we use the learned domain-invariant information as input to another task adapter that learns to perform an NLP task using labeled data from the source domain. We combine the two adapters by stacking them. The second method adds a single adapter without stacking, where we simultaneously reduce the divergence between domains and learn the task in the source domain. Domain Adversarial Neural Networks (\textsc{dann}{}) and Domain Separation Networks (\textsc{dsn}{}) are the most common methods for unsupervised domain adaptation in NLP \cite{ramesh-kashyap-etal-2021-domain}. We compare our proposed methods with these strong baselines, which fine-tune all model parameters, on the Amazon \cite{blitzer-etal-2007-biographies} and MNLI \cite{williams-etal-2018-broad} datasets, consisting of five domains each. \textsc{UDApter}{} performs better than all baselines and achieves competitive performance compared to \textsc{uda}{} methods while fine-tuning only a fraction of the parameters. In an era where large resources are spent to further pretrain language models on large amounts of unsupervised data to achieve domain adaptation \cite{gururangan-etal-2020-dont}, it is necessary to provide cheaper, faster solutions. \section{Conclusion} In this work, we propose \textsc{UDApter}{} to make unsupervised domain adaptation more parameter-efficient. Our methods outperform other strong baselines, and we show that we can perform better than just training a task adapter on supervised data. We perform competitively with other \textsc{uda}{} methods at a fraction of the parameters and outperform them when there is limited data -- a more practical scenario. Future work should explore other parameter-efficient methods such as prefix-tuning \cite{li-liang-2021-prefix} for domain adaptation. NLP should also consider other avenues, such as continuous adaptation to new domains and adaptation to new domains when no data are available. \section{Acknowledgments} This research is supported by the SRG grant id: T1SRIS19149 and the Ministry of Education, Singapore, under its AcRF Tier-2 grant (Project no. T2MOE2008, and Grantor reference no. MOET2EP20220-0017). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore. \input{sections/limitations.tex} \bibliographystyle{acl_natbib}
Our methods outperform other strong baselines, and we show that we can perform better than just training a task adapter on supervised data. We perform competitively to other \textsc{uda}{} methods at a fraction of the parameters and outperform them when there is limited data -- a more practical scenario. Future work should explore other parameter-efficient methods such as prefix-tuning \cite{li-liang-2021-prefix} for domain adaptation. NLP should also consider other avenues, such as continuous adaptation to new domains and adaptation to new domains when there are no data available. \section{Acknowledgments} This research is supported by the SRG grant id: T1SRIS19149 and the Ministry of Education, Singapore, under its AcRF Tier-2 grant (Project no. T2MOE2008, and Grantor reference no. MOET2EP20220-0017). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore. \input{sections/limitations.tex} \bibliographystyle{acl_natbib} \section{Method} \label{sec:method} \paragraph{Setup.} We consider an NLP task (sentiment analysis) consisting of data $\mathcal{X}$ and labels $\mathcal{Y}$ (positive, negative). There exist two different distributions, called the source domain $\mathcal{D_S}$ and the target domain $\mathcal{D_T}$ over $\mathcal{X} \times \mathcal{Y}$. Unsupervised domain adaptation (\textsc{uda}{}) consists of a model $\mathcal{C}$ that receives labeled input samples $\mathcal{X_S}: (x_s, y_s)_{s=1}^{n_s} \sim \mathcal{D}_\mathcal{S}$ and unlabeled input $\mathcal{X_T}: (x_t)_{t=1}^{n_t} \sim \mathcal{D}_\mathcal{T}$. The goal of \textsc{uda}{} is to learn a model $\mathcal{C}$ such that we perform well in the NLP task for the target domain $\mathcal{D_T}$. The popular method in \textsc{uda}{} is to learn representations that are invariant in the input domain and still have sufficient power to perform well in the source domain \cite{dann, dsn}. Then, according to the theory of domain divergence \cite{ben2010theory} shows that the error in the target domain is bounded by the error in the source domain and the divergence. The unsupervised domain adaptation method thus consists of two components: the reduction of the divergence measure and a classifier for the source domain. A new classifier must be learned for every pair of source-target domains, and the method fine-tunes a large number of parameters. \textsc{UDApter}{} makes unsupervised domain adaptation more parameter efficient (cf. \Cref{sec:two-step-method}, \Cref{sec:joint-method}) using adapters. We follow the framework proposed by \citet{pmlr-v97-houlsby19a} where small bottleneck layers are added to the transformer layers, fine-tuning{} only the adapter parameters while keeping the other parameters frozen, and propose the following. \subsection{Two-Step Domain and Task Adapters} \label{sec:two-step-method} \paragraph{Domain Adapters.} To learn domain-invariant representations, we first train a domain adapter. The adapter architecture follows the work of \citet{pfeiffer-etal-2021-adapterfusion}, which consists of a simple down-projection followed by an up-projection. In a transformer layer $l$, let $h_l$ be the hidden representation of the layer \textbf{Add \& Norm} and let $r_l$ be the representation of the layer \textbf{Feed-Forward} (\Cref{fig:domain-adapter}), then the adapter makes the following transformation and calculates a new hidden representation. 
We pass a sample from the source domain $(x_s^{src}) \sim \mathcal{D_S}$ and one from the target $(x_t^{trg}) \sim \mathcal{D_T}$ through the adapters in layer $l$ and obtain their adapter representations $dom_l^{src}$ and $dom_l^{trg}$, respectively. We then reduce the divergence between these representations: \begin{equation} \Delta_l = div(dom_l^{src}, dom_l^{trg}) \end{equation} Here $div(\cdot)$ is a divergence function such as correlation alignment (CORAL) \cite{coral}, central moment discrepancy (CMD) \cite{cmd}, or the multi-kernel maximum mean discrepancy (MK-MMD) \cite{mmd, dsn}. In this work, we use MK-MMD for all of our experiments, since it performed best\footnote{CMD and CORAL perform similarly to MK-MMD.}. Similar ideas are used to adapt representations in computer vision models \cite{deepadaptationnetworks, deepcoral}. The final divergence loss considers all $L$ layers. \begin{equation} \mathcal{L}_{div} = \sum_{l=1}^{L} \Delta_l \end{equation} \paragraph{Task Adapters.} Task adapters are stacked on top of frozen domain adapters. We pass the supervised data from the source domain $(x_{s}^{src}, y_s^{src}) \sim \mathcal{D_S}$ through the stack, so that the task adapter receives the representations $dom_l$ from the previous step. Task adapters have the same architecture as domain adapters and perform the following: \begin{equation} task_l = W_{up} \cdot f(W_{down} \cdot dom_l^{src}) + r_l \end{equation} The goal of these task adapters is to learn representations that are task-specific. Only task adapters are updated when training on the end task (sentiment classification, natural language inference); all other parameters, including the domain adapters, are frozen. The regular cross-entropy loss is minimized when training the task adapters: \begin{equation} \mathcal{L}_{task} = softmax\_ce(W_{task} \cdot h_L) \end{equation} $h_L$ is the hidden representation of the last transformer layer, $W_{task} \in \mathbb{R}^{h \times |\mathcal{Y}|}$, where $|\mathcal{Y}|$ is the number of classes, and $softmax\_ce$ denotes the softmax followed by the cross-entropy. This two-step process deconstructs \textsc{uda}{} methods into a domain adapter and a task adapter. This affords composability, where task adapters can be reused for different pairs of domains (\Cref{sec:further-analysis}). However, domain and task representations can also be learned jointly, as explored in the next section. \paragraph{Training Process.} Given a source-target domain adaptation scenario, we first train the domain adapter and save its weights. We then stack a task adapter on top of the domain adapter and train it using the supervised data from the source domain; the domain adapter is kept frozen during this step. During inference, we stack the domain and task adapters.
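To illustrate the divergence term, the following sketch computes a multi-kernel MMD between batches of source and target adapter representations and sums it over layers. We use a fixed sum of Gaussian kernels with illustrative bandwidths; a full MK-MMD additionally optimizes the kernel weights, so this is a simplification rather than our exact implementation:
\begin{verbatim}
import torch

def mk_mmd2(x, y, sigmas=(1.0, 2.0, 4.0)):
    # Biased MMD^2 estimate between batches x [n, h] and y [m, h],
    # using a fixed sum of Gaussian kernels with bandwidths `sigmas`.
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)
    return (kernel(x, x).mean() + kernel(y, y).mean()
            - 2.0 * kernel(x, y).mean())

# L_div sums the per-layer divergences Delta_l, assuming dom_src[l]
# and dom_trg[l] hold the layer-l adapter representations:
# loss_div = sum(mk_mmd2(dom_src[l], dom_trg[l]) for l in range(L))
\end{verbatim}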
\subsection{Joint Domain Task Adapters} \label{sec:joint-method} This method adds a single adapter that reduces the divergence measure and learns task representations jointly. For a given supervised sample from the source domain $(x_s^{src}, y_s^{src}) \sim \mathcal{D_S}$ and an unsupervised sample $(x_t^{trg}) \sim \mathcal{D_T}$, let $h_l^{src}, h_l^{trg}$ be the hidden representations of the adapters for $x_s^{src}$ and $x_t^{trg}$ in layer $l$. We minimize the following joint loss: \begin{equation} \mathcal{L} = \lambda \cdot \mathcal{L}_{task} + (1-\lambda) \cdot \mathcal{L}_{div} \end{equation} Here $\mathcal{L}_{task}$ is the task loss on the supervised source-domain samples and $\lambda$ is the adaptation factor. Reducing the divergence along with the cross-entropy loss beyond a certain point makes training unstable and does not contribute to increased performance. Following \citet{dann}, we suppress the noisy signal from the divergence function as training progresses and gradually change $\lambda$ from 0 to 1 to reduce the contribution of the divergence loss, using the following schedule, where $p$ is the fraction of training completed ($\gamma=10$ for all of our experiments): \begin{equation} \lambda = \frac{2}{1+\exp{(-\gamma \cdot p)}} - 1 \end{equation} Similar methods have been proposed to adapt models to other domains by \citet{deepadaptationnetworks} and \citet{wu-etal-2022-learning}. Compared to the two-step process introduced earlier (\Cref{sec:two-step-method}), we need to properly balance the losses to obtain optimal results, and this method does not offer composability (\Cref{sec:further-analysis}).
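For concreteness, the adaptation schedule and the joint objective above can be written in a few lines; this is an illustrative sketch, not the exact training loop:
\begin{verbatim}
import math

def adaptation_factor(p: float, gamma: float = 10.0) -> float:
    # lambda = 2 / (1 + exp(-gamma * p)) - 1 rises from 0 (at p = 0)
    # towards 1; e.g. adaptation_factor(0.3) ~ 0.905 for gamma = 10,
    # so the divergence weight (1 - lambda) decays quickly.
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0

# joint objective: loss = lam * loss_task + (1.0 - lam) * loss_div
\end{verbatim}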
\section{Discussion} This work shows that domain adaptation in NLP can be made more efficient using adapters. We build on adapter fine-tuning{} \cite{pmlr-v97-houlsby19a} and on the stacking of adapters previously proposed for the cross-lingual setting \cite{pfeiffer-etal-2020-mad}, and apply them to unsupervised domain adaptation. The approach we have discussed will make domain adaptation more practical for real-world use cases, making adaptation faster and cheaper. However, in this work, we have used \texttt{bert-base-uncased}{} for all of our methods; using other backbone transformer models is part of our future work. We deal only with classification and natural language inference tasks. Adapters have previously been used for machine translation \cite{bapna-firat-2019-simple} and other generation tasks \cite{zhang-etal-2022-continual}. We need to explore our domain adaptation methods for generation tasks as well. In this work, we reduce the divergence between the marginal distributions of the two domains. Previous works such as \citet{coregularized-domain-adaptation} show that reducing only the marginal divergence is not sufficient and that aligning the label distributions is necessary. However, NLP works generally do not consider this, and it requires further investigation by the community. \section{Literature Review} \paragraph{Parameter Efficient Fine-tuning Methods.} Adapters \cite{pmlr-v97-houlsby19a} are task-specific modules added to frozen transformer layers, with only the adapter parameters updated. Their plug-and-play characteristics and the avoidance of catastrophic forgetting have resulted in their use for NLP tasks: machine translation \cite{bapna-firat-2019-simple}, named entity recognition \cite{pfeiffer-etal-2020-mad}, etc. Recently, \citet{he-etal-2021-effectiveness} have shown that they are efficient in scenarios where there is minimal supervised data. However, they neither test their performance under domain shift nor propose methods to improve adapter fine-tuning{}. Closely related to our method is the work of \citet{ngo-trung-etal-2021-unsupervised}, who learn a shared-private representation per layer, similar to \textsc{dsn}{} \cite{dsn}. Their method requires balancing multiple loss functions, compared to our simpler two-step domain adaptation method. The stacking of adapters has been used before by \citet{pfeiffer-etal-2020-mad} for cross-lingual tasks: learning a language adapter first and stacking a task adapter on it. However, their approach learns one language adapter per language, assumes large amounts of unsupervised data to be available in all the languages, and requires supervised data to be available to learn a task, which does not carry over to domain adaptation. Compared to other methods, we make domain adaptation more efficient using the principles of unsupervised domain adaptation. \paragraph{Unsupervised Domain Adaptation (\textsc{uda}{}).} Existing \textsc{uda}{} approaches can be categorized into model-centric, data-centric, and hybrid. \textit{Model-centric} approaches, which augment the feature space or alter the loss function, architecture, or model parameters \cite{blitzer-etal-2006-domain,10.1145/1772690.1772767, dann}, have been popular. A common model-centric approach is to use adversarial training between the domain and the task classifier \cite{dann} to extract domain-invariant information; \citet{dsn} in addition preserve domain-specific information. These works involve training a large number of parameters and require careful balancing of multiple loss functions. Our methods build on top of these works and make unsupervised domain adaptation more parameter-efficient. Large-scale transformers pretrained on domain-specific corpora have become the norm: biomedical \cite{10.1093/bioinformatics/btz682}, scientific publications \cite{beltagy-etal-2019-scibert}, among others. Another alternative is to continue pretraining generic models on domain-specific data: domain-adaptive pretraining \cite{gururangan-etal-2020-dont}. Both solutions are expensive, since a huge model has to be stored for every domain; adapters, in contrast, require storing only a small number of parameters for every domain pair and can quickly be adapted to new domains. \section{Experiments} \renewcommand{\arraystretch}{1.2} \begin{table*}[t!]
\centering \footnotesize \resizebox{\linewidth}{!}{% \begin{tabular}{l|c|@{\hskip 0.4in}cc|ccccc} \hline & \colorbox{red!30}{\textbf{Fully Supervised}} & \multicolumn{2}{@{\hskip -0.3in}c@{\hskip 0.1in}|}{\colorbox{orange}{\textbf{Unsupervised Domain Adaptation}}} & \multicolumn{5}{c}{\colorbox{blue!25}{\textbf{Adapter Based}}} \\ Src \textrightarrow Trg & \fireemoji & \textsc{dann}{} & \textsc{dsn}{} & \dann{}-\adapteremoji{} & \dann{}-\adapteremoji-\textsc{mc}{} & \textsc{task}-\adapteremoji{}{} & \textsc{ts-dt}-\adapteremoji{}{} & \textsc{joint-dt}-\adapteremoji{}{} \\ [0.5ex] \hline \textsc{a}{} \textrightarrow{} \textsc{ba}{} & 87.52 \scriptsize(1.96) & 85.57 \scriptsize(3.72) & 89.90 \scriptsize(0.26) & 86.46 \scriptsize(0.26) & \textbf{88.74 \scriptsize (0.64)} & 87.03 \scriptsize(0.26) & 88.24 \scriptsize(0.76) & \textbf{88.74 \scriptsize (0.13)} \\ \textsc{a}{} \textrightarrow{} \textsc{bo}{} & 86.67 \scriptsize(1.06) & 36.48 \scriptsize(0.45) & 84.47 \scriptsize(0.99) & 78.41 \scriptsize(1.14) & 83.36 \scriptsize (0.43) & 84.15 \scriptsize(1.10) & 84.22 \scriptsize(0.76) & \textbf{84.96 \scriptsize (0.28)} \\ \textsc{a}{} \textrightarrow{} \textsc{c}{} & 91.62 \scriptsize(0.37) & 57.51 \scriptsize(13.32) & 88.56 \scriptsize(0.81) & 87.31 \scriptsize(0.39) & 88.75 \scriptsize (0.69) & \textbf{89.67 \scriptsize(0.32)} & 88.76 \scriptsize(1.32) & 89.39 \scriptsize (0.23) \\ \textsc{a}{} \textrightarrow{} \textsc{mr}{} & 82.08 \scriptsize(0.78) & 35.23 \scriptsize(1.99) & 78.08 \scriptsize(0.46) & 75.54 \scriptsize(0.63) & 76.60 \scriptsize (1.06) & 76.63 \scriptsize(0.92) & 77.39 \scriptsize(0.13) & \textbf{77.63 \scriptsize (0.71)} \\ \textsc{ba}{} \textrightarrow{} \textsc{a}{} & 89.12 \scriptsize(0.38) & 77.52 \scriptsize(11.25) & 87.46 \scriptsize(1.83) & 87.72 \scriptsize(1.85) & 88.47 \scriptsize (0.72) & 88.33 \scriptsize(1.10) & 89.55 \scriptsize(0.10) & \textbf{89.70 \scriptsize (0.23)} \\ \textsc{ba}{} \textrightarrow{} \textsc{bo}{} & 86.67 \scriptsize(1.06) & 43.45 \scriptsize(8.96) & 82.19 \scriptsize(3.70) & 82.89 \scriptsize(3.08) & 83.86 \scriptsize (0.41) & 84.61 \scriptsize(0.39) & 84.38 \scriptsize(0.61) & \textbf{85.01 \scriptsize (0.60)} \\ \textsc{ba}{} \textrightarrow{} \textsc{c}{} & 91.62 \scriptsize(0.37) & 47.58 \scriptsize(7.65) & 89.68 \scriptsize(0.71) & 86.63 \scriptsize(0.53) & 88.73 \scriptsize (0.42) & \textbf{90.63 \scriptsize(0.33)} & 87.46 \scriptsize(0.88) & 88.64 \scriptsize (0.30) \\ \textsc{ba}{} \textrightarrow{} \textsc{mr}{} & 82.08 \scriptsize(0.78) & 50.63 \scriptsize(7.43) & 77.88 \scriptsize(0.38) & 74.48 \scriptsize(1.79) & 78.07 \scriptsize (0.34) & 78.74 \scriptsize(0.35) & \textbf{79.42 \scriptsize(0.44)} & 78.44 \scriptsize (0.70)\\ \textsc{bo}{} \textrightarrow{} \textsc{a}{} & 89.12 \scriptsize(0.38) & 37.40 \scriptsize(1.90) & 88.20 \scriptsize(0.51) & 85.90 \scriptsize(0.12) & 85.91 \scriptsize (0.25) & 85.03 \scriptsize(0.36) & 84.79 \scriptsize(0.75) & \textbf{87.46 \scriptsize (0.27)} \\ \textsc{bo}{} \textrightarrow{} \textsc{ba}{} & 87.52 \scriptsize(1.96) & 54.33 \scriptsize(12.49) & 88.56 \scriptsize(0.44) & 82.06 \scriptsize(1.15) & 84.27 \scriptsize (0.11) & 86.50 \scriptsize(0.39) & \textbf{86.84 \scriptsize(0.48)} & 86.41 \scriptsize (0.79)\\ \textsc{bo}{} \textrightarrow{} \textsc{c}{} & 91.62 \scriptsize(0.37) & 39.43 \scriptsize(0.49) & 88.58 \scriptsize(1.01) & 86.94 \scriptsize(0.83) & 87.40 \scriptsize (0.44) & 88.44 \scriptsize(0.53) & 87.86 \scriptsize(0.61) & \textbf{88.53 \scriptsize (0.43)} \\
\textsc{bo}{} \textrightarrow{} \textsc{mr}{} & 82.08 \scriptsize(0.78) & 54.23 \scriptsize(13.94) & 79.07 \scriptsize(1.01) & 76.19 \scriptsize(0.89) & 79.44 \scriptsize (0.86) & 79.44 \scriptsize(0.95) & \textbf{80.52 \scriptsize(0.61)} & 78.91 \scriptsize (0.38)\\ \textsc{c}{} \textrightarrow{} \textsc{a}{} & 89.12 \scriptsize(0.38) & 60.93 \scriptsize(3.78) & 89.76 \scriptsize(0.76) & 87.02 \scriptsize(1.86) & 86.63 \scriptsize (0.29) & 87.74 \scriptsize(1.18) & 88.53 \scriptsize(0.42) & \textbf{88.92 \scriptsize (0.44)} \\ \textsc{c}{} \textrightarrow{} \textsc{ba}{} & 87.52 \scriptsize(1.96) & 77.29 \scriptsize(3.61) & 89.42 \scriptsize(0.70) & 88.10 \scriptsize(1.13) & 89.14 \scriptsize (0.30) & 81.71 \scriptsize(2.72) & \textbf{89.72 \scriptsize(0.43)} & 89.32 \scriptsize (0.42) \\ \textsc{c}{} \textrightarrow{} \textsc{bo}{} & 86.67 \scriptsize(1.06) & 38.21 \scriptsize(1.40) & 85.56 \scriptsize(0.62) & 81.18 \scriptsize(2.07) & 83.61 \scriptsize (0.67) & 80.55 \scriptsize(0.81) & 84.14 \scriptsize(0.52) & \textbf{85.42 \scriptsize (0.70)} \\ \textsc{c}{} \textrightarrow{} \textsc{mr}{} & 82.08 \scriptsize(0.78) & 35.08 \scriptsize(1.94) & 76.13 \scriptsize(0.54) & 64.99 \scriptsize(5.91) & \textbf{74.22 \scriptsize (0.31)} & 69.53 \scriptsize(1.24) & 73.22 \scriptsize(0.48) & 73.50 \scriptsize (0.84) \\ \textsc{mr}{} \textrightarrow{} \textsc{a}{} & 89.12 \scriptsize(0.38) & 37.07 \scriptsize(4.16) & 82.64 \scriptsize(2.17) & 81.05 \scriptsize(1.15) & 79.56 \scriptsize (0.53) & 82.45 \scriptsize(1.43) & 81.93 \scriptsize(0.47) & \textbf{84.41 \scriptsize (0.43)} \\ \textsc{mr}{} \textrightarrow{} \textsc{ba}{} & 87.52 \scriptsize(1.96) & 38.76 \scriptsize(4.17) & 80.59 \scriptsize(2.18) & 77.95 \scriptsize(1.46) & 79.33 \scriptsize (0.43) & 81.70 \scriptsize(1.22) & 84.28 \scriptsize(0.41) & \textbf{84.91 \scriptsize (0.36)} \\ \textsc{mr}{} \textrightarrow{} \textsc{bo}{} & 86.67 \scriptsize(1.06) & 42.07 \scriptsize(4.86) & 85.13 \scriptsize(0.83) & 82.83 \scriptsize(0.62) & \textbf{84.90 \scriptsize(1.29)} & \textbf{84.90 \scriptsize (0.23)} & 84.47 \scriptsize(0.80) & 84.45 \scriptsize (0.31) \\ \textsc{mr}{} \textrightarrow{} \textsc{c}{} & 91.62 \scriptsize(0.37) & 36.92 \scriptsize(1.86) & 86.56 \scriptsize(0.63) & 84.58 \scriptsize(0.46) & 82.53 \scriptsize (0.92) & 86.68 \scriptsize(0.65) & 86.25 \scriptsize(0.38) & \textbf{88.37 \scriptsize (0.11)} \\ [0.5ex] \hline Avg & 87.40 \scriptsize(0.91) & 49.28 \scriptsize(5.47) & 84.92 \scriptsize(1.03) & 81.91 \scriptsize (1.37) & 83.68 \scriptsize(0.50) & 83.72 \scriptsize(0.88) & 84.60 \scriptsize(0.57) & \textbf{85.16 \scriptsize (0.43)} \\ \hline \end{tabular} % } \caption{\label{tab:amazon_results}F1 scores for the \textsc{amazon}{} dataset. We report the mean and standard deviation of 3 runs. The five domains are Apparel (\textsc{a}{}), Baby (\textsc{ba}{}), Books (\textsc{bo}{}), Camera\_Photo (\textsc{c}{}), and Movie Reviews (\textsc{mr}{}). On average, our method outperforms all baselines. Our methods are competitive with fully fine-tuned unsupervised domain adaptation methods.} \end{table*} \subsection{Datasets} We evaluate our approach on two representative datasets with different tasks, both in English. \Cref{tab:dataset} shows the details of the datasets. Every dataset has 5 domains, and we pair each domain with every other domain, which results in 20 domain adaptation scenarios per dataset and, with three random seeds, 120 experiments per method (2 datasets $\times$ 20 scenarios $\times$ 3 seeds), totalling over 1.9K experiments.
\noindent \paragraph{\textsc{amazon}:} The Multi-Domain Sentiment Analysis dataset \cite{blitzer-etal-2007-biographies}, which contains Amazon product reviews for five different types of products (domains): Apparel (\textsc{a}{}), Baby (\textsc{ba}{}), Books (\textsc{bo}{}), Camera\_Photo (\textsc{c}{}), and Movie Reviews (\textsc{mr}{}). Each review is labeled as positive or negative. We follow the setup in \citet{ramesh-kashyap-etal-2021-domain}. \noindent \paragraph{\textsc{mnli}{}:} The Multigenre Natural Language Inference (MNLI) corpus \cite{williams-etal-2018-broad} contains hypothesis--premise pairs covering a variety of genres: Travel (\textsc{tr}{}), Fiction (\textsc{f}{}), Telephone (\textsc{te}{}), Government (\textsc{g}{}), and Slate (\textsc{s}{}). Each pair of sentences is labeled Entailment, Neutral, or Contradiction. The training and validation sets are created from the original train set by sampling 90\% and 10\% of the examples, respectively. We use the MNLI-matched validation set as our test set. \subsection{Baseline Methods} \noindent \paragraph{Fully supervised.} \textit{Fine-tune (\fireemoji)}: Fine-tunes a language model using labeled data from the target domain; this serves as an upper bound on performance. \noindent \paragraph{Unsupervised Domain Adaptation (\textsc{uda}{}).} \textit{Domain Adversarial Neural Networks} (\textsc{dann}{}): An unsupervised domain adaptation method \cite{dann} that learns domain-invariant information by minimizing the task loss and maximizing the domain confusion loss with the help of gradient reversal layers. \textit{Domain Separation Networks} (\textsc{dsn}{}): \cite{dsn} improves upon \textsc{dann}{}, with additional losses to preserve domain-specific information along with the extraction of domain-invariant information. \texttt{bert-base-uncased}{} serves as the feature extractor for both methods. \noindent \paragraph{Adapter Based.} \textit{\textsc{dann}{} Adapter} (\dann{}-\adapteremoji{}): Similar to \textsc{dann}{}, but we insert trainable adapter modules into every layer of a PLM. \textit{\textsc{dann}{} Adapter with Multiple Classifiers} (\dann{}-\adapteremoji-\textsc{mc}{}): Unlike \textsc{dann}{}-\adapteremoji, which involves a single task and domain classifier, here a task and a domain classifier are added to each of the last 3 layers of a PLM. The representations of the last layers of a PLM are domain-variant \cite{ramesh-kashyap-etal-2021-analyzing}, and this model obtains domain-invariant information\footnote{We tried adding classifiers incrementally to the last few layers. Adding them to the last 3 layers performed best.}. \textit{Task Adapter} (\textsc{task}-\adapteremoji{}{}): Adapter fine-tuning \cite{pfeiffer-etal-2020-adapterhub}, where adapters are fine-tuned on the labeled source domain and tested in the target domain. \textit{Two-Step Domain and Task Adapter} (\textsc{ts-dt}-\adapteremoji{}{}): This work, where we first train a domain adapter that reduces the probabilistic divergence between two domains and then fine-tune a task adapter by stacking. \textit{Joint Domain Task Adapter} (\textsc{joint-dt}-\adapteremoji{}): This work, where we train a single adapter that reduces the domain and task losses jointly. For all adapter-based experiments, the PLM is frozen and only the adapter modules are trained. Since we use adapters, we only consider other adapter-based baselines and omit methods such as prompt tuning \cite{lester-etal-2021-power}.
Also, \citet{Zhang2021UnsupervisedDA} target multi-domain adaptation and use data from all the domains during training, unlike our method, so a direct comparison would not be fair. \paragraph{Implementation Details and Evaluation.} For our experiments, we use \texttt{bert-base-uncased}{} \cite{devlin-etal-2019-bert} available in the HuggingFace Transformers library \cite{wolf-etal-2020-transformers} as our backbone. Adapter implementations are from AdapterHub \cite{pfeiffer-etal-2020-adapterhub}. We follow \citet{pfeiffer-etal-2021-adapterfusion} and add only one bottleneck layer after the feed-forward layer. We use the AdamW optimizer with a learning rate of $1e-4$ for all adapter-based training and $2e-5$ otherwise. Only for the smaller \textsc{amazon}{} dataset do we use an adapter reduction factor of 32; for all other adapter-based experiments and datasets, we use the default reduction factor of 16. We performed experiments with three different seeds and report the mean and standard deviation of the F1 scores. For \textsc{dann}{} we use 0.04 as our $\lambda$, and for \textsc{dsn}{} we use 0.1, 0.1, and 0.3 as the weights for the three losses: reconstruction, similarity, and difference, respectively. We avoid extensive hyperparameter tuning per domain adaptation scenario for efficiency. \renewcommand{\arraystretch}{1.2} \begin{table*}[t!] \centering \footnotesize \resizebox{\linewidth}{!}{% \begin{tabular}{l|c|@{\hskip 0.4in}cc|ccccc} \hline & \colorbox{red!30}{\textbf{Fully Supervised}} & \multicolumn{2}{@{\hskip -0.3in}c@{\hskip 0.1in}|}{\colorbox{orange}{\textbf{Unsupervised Domain Adaptation}}} & \multicolumn{5}{c}{\colorbox{blue!25}{\textbf{Adapter Based}}} \\ Src \textrightarrow Trg & \fireemoji & \textsc{dann}{} & \textsc{dsn}{} & \dann{}-\adapteremoji{} & \dann{}-\adapteremoji-\textsc{mc}{} & \textsc{task}-\adapteremoji{}{} & \textsc{ts-dt}-\adapteremoji{}{} & \textsc{joint-dt}-\adapteremoji{}{} \\ [0.5ex] \hline \textsc{f}{} \textrightarrow \textsc{s}{} & 74.09 \scriptsize(0.40) & 73.68 \scriptsize(0.21) & 72.36 \scriptsize(0.17) & 70.96 \scriptsize(0.03) & 62.40 \scriptsize(4.79) & 72.36 \scriptsize(0.36) & \textbf{73.46 \scriptsize(0.34)} & 72.30 \scriptsize(0.26) \\ \textsc{f}{} \textrightarrow \textsc{g}{} & 82.19 \scriptsize(0.12) & 79.17 \scriptsize(0.25) & 79.79 \scriptsize(0.21) & 78.73 \scriptsize(0.43) & 77.23 \scriptsize(0.33) & 79.00 \scriptsize(0.46) & 78.65 \scriptsize(0.25) & \textbf{79.79 \scriptsize(0.22)} \\ \textsc{f}{} \textrightarrow \textsc{te}{} & 78.41 \scriptsize(0.66) & 73.72 \scriptsize(0.81) & 75.07 \scriptsize(0.32) & 70.89 \scriptsize(0.74) & 71.68 \scriptsize(0.59) & 70.83 \scriptsize(0.54) & \textbf{73.05 \scriptsize(0.70)} & 71.59 \scriptsize(0.78) \\ \textsc{f}{} \textrightarrow \textsc{tr}{} & 81.81 \scriptsize(0.20) & 76.99 \scriptsize(0.19) & 76.82 \scriptsize(0.50) & 74.42 \scriptsize(0.18) & 75.09 \scriptsize(0.05) & 75.85 \scriptsize(0.19) & 76.75 \scriptsize(0.80) & \textbf{77.07 \scriptsize(0.26)} \\ \textsc{s}{} \textrightarrow \textsc{f}{} & 78.59 \scriptsize(0.34) & 75.91 \scriptsize(0.23) & 76.62 \scriptsize(0.38) & 73.89 \scriptsize(0.61) & 73.47 \scriptsize(0.28) & 75.25 \scriptsize(0.19) & \textbf{75.52 \scriptsize(0.89)} & 75.35 \scriptsize(0.56) \\ \textsc{s}{} \textrightarrow \textsc{g}{} & 82.19 \scriptsize(0.12) & 80.91 \scriptsize(0.46) & 81.27 \scriptsize(0.23) & 79.99 \scriptsize(0.36) & 79.16 \scriptsize(0.10) & 80.76 \scriptsize(0.40) & \textbf{81.65 \scriptsize(0.11)} & 80.94 \scriptsize(0.30) \\ \textsc{s}{}
\textrightarrow \textsc{te}{} & 78.41 \scriptsize(0.66) & 74.32 \scriptsize(0.57) & 74.27 \scriptsize(0.48) & 72.29 \scriptsize(0.57) & 71.89 \scriptsize(0.07) & 72.66 \scriptsize(0.79) & \textbf{74.09 \scriptsize(0.30)} & 73.38 \scriptsize(0.63) \\ \textsc{s}{} \textrightarrow \textsc{tr}{} & 81.81 \scriptsize(0.20) & 76.81 \scriptsize(0.35) & 78.17 \scriptsize(0.20) & 75.58 \scriptsize(0.54) & 75.77 \scriptsize(0.39) & 76.16 \scriptsize(0.22) & \textbf{77.31 \scriptsize(0.60)} & 77.16 \scriptsize(0.18) \\ \textsc{g}{} \textrightarrow \textsc{f}{} & 78.59 \scriptsize(0.34) & 73.41 \scriptsize(0.73) & 72.62 \scriptsize(0.37) & 71.57 \scriptsize(0.68) & 70.34 \scriptsize(0.73) & 72.66 \scriptsize(0.31) & 72.66 \scriptsize(0.56) & \textbf{73.56 \scriptsize(0.23)} \\ \textsc{g}{} \textrightarrow \textsc{s}{} & 74.09 \scriptsize(0.40) & 72.51 \scriptsize(0.10) & 71.93 \scriptsize(0.25) & 70.17 \scriptsize(0.64) & 69.49 \scriptsize(0.40) & 71.11 \scriptsize(0.38) & 71.14 \scriptsize(0.21) & \textbf{71.36 \scriptsize(0.04)} \\ \textsc{g}{} \textrightarrow \textsc{te}{} & 78.41 \scriptsize(0.66) & 71.52 \scriptsize(0.13) & 72.90 \scriptsize(0.39) & 69.45 \scriptsize(0.96) & 68.67 \scriptsize(0.17) & 71.40 \scriptsize(0.30) & 71.53 \scriptsize(1.04) & \textbf{71.99 \scriptsize(0.67)} \\ \textsc{g}{} \textrightarrow \textsc{tr}{} & 81.81 \scriptsize(0.20) & 77.42 \scriptsize(0.54) & 77.80 \scriptsize(0.42) & 74.35 \scriptsize(0.22) & 74.04 \scriptsize(0.51) & 76.29 \scriptsize(0.10) & 76.16 \scriptsize(0.34) & \textbf{76.79 \scriptsize(0.59)} \\ \textsc{te}{} \textrightarrow \textsc{f}{} & 78.59 \scriptsize(0.34) & 75.07 \scriptsize(0.08) & 75.17 \scriptsize(0.35) & 72.24 \scriptsize(0.59) & 71.49 \scriptsize(0.45) & \textbf{74.48 \scriptsize(0.33)} & 73.34 \scriptsize(0.41) & 73.89 \scriptsize(0.12) \\ \textsc{te}{} \textrightarrow \textsc{s}{} & 74.09 \scriptsize(0.40) & 71.65 \scriptsize(0.50) & 72.16 \scriptsize(0.23) & 69.09 \scriptsize(1.79) & 69.25 \scriptsize(0.31) & 70.94 \scriptsize(0.16) & 70.94 \scriptsize(0.55) & \textbf{71.41 \scriptsize(0.19)} \\ \textsc{te}{} \textrightarrow \textsc{g}{} & 82.19 \scriptsize(0.12) & 78.57 \scriptsize(0.60) & 79.24 \scriptsize(0.31) & 77.80 \scriptsize(0.27) & 76.65 \scriptsize(0.20) & 79.24 \scriptsize(0.35) & 79.65 \scriptsize(0.60) & \textbf{79.78 \scriptsize(0.64)} \\ \textsc{te}{} \textrightarrow \textsc{tr}{} & 81.81 \scriptsize(0.20) & 75.72 \scriptsize(0.37) & 77.29 \scriptsize(0.61) & 74.67 \scriptsize(0.50) & 74.08 \scriptsize(0.25) & 75.27 \scriptsize(0.83) & \textbf{76.11 \scriptsize(0.91)} & 75.95 \scriptsize(0.50) \\ \textsc{tr}{} \textrightarrow \textsc{f}{} & 78.59 \scriptsize(0.34) & 73.22 \scriptsize(0.92) & 72.44 \scriptsize(0.50) & 70.27 \scriptsize(0.45) & 69.08 \scriptsize(0.64) & 72.20 \scriptsize(0.49) & 73.12 \scriptsize(0.08) & \textbf{73.13 \scriptsize(0.22)} \\ \textsc{tr}{} \textrightarrow \textsc{s}{} & 74.09 \scriptsize(0.40) & 70.76 \scriptsize(0.72) & 70.97 \scriptsize(0.26) & 68.35 \scriptsize(0.62) & 67.23 \scriptsize(0.39) & 70.28 \scriptsize(0.37) & 70.67 \scriptsize(0.50) & \textbf{71.28 \scriptsize(0.38)} \\ \textsc{tr}{} \textrightarrow \textsc{g}{} & 82.19 \scriptsize(0.12) & 80.91 \scriptsize(0.28) & 81.67 \scriptsize(0.37) & 79.25 \scriptsize(0.34) & 78.77 \scriptsize(0.32) & 81.26 \scriptsize(0.37) & 81.11 \scriptsize(0.42) & \textbf{81.55 \scriptsize(0.16)} \\ \textsc{tr}{} \textrightarrow \textsc{te}{} & 78.41 \scriptsize(0.66) & 70.41 \scriptsize(1.63) & 71.98 \scriptsize(0.50) & 69.33 \scriptsize(0.41) & 
69.45 \scriptsize(0.39) & 70.98 \scriptsize(0.11) & 70.95 \scriptsize(0.19) & \textbf{71.42 \scriptsize(0.12)} \\[0.5ex] \hline Avg & 79.02 \scriptsize(0.34) & 75.13 \scriptsize(0.48) & 75.53 \scriptsize(0.35) & 73.16 \scriptsize(0.55) & 72.26 \scriptsize(0.57) & 74.45 \scriptsize(0.40) & \textbf{74.89 \scriptsize(0.49)} & \textbf{74.98 \scriptsize(0.35)} \\ \hline \end{tabular} % } \caption{\label{tab:mnli_results}F1 scores for the \textsc{mnli}{} dataset. We report the mean and standard deviation of 3 runs. The five domains are Fiction (\textsc{f}{}), Slate (\textsc{s}{}), Government (\textsc{g}{}), Telephone (\textsc{te}{}), and Travel (\textsc{tr}{}). On average, our method performs better than all baselines.} \end{table*} \subsection{Results} From \Cref{tab:amazon_results} and \Cref{tab:mnli_results}, we see that our methods \textsc{ts-dt}-\adapteremoji{}{} and \textsc{joint-dt}-\adapteremoji{}{} perform well on both \textsc{amazon}{} and \textsc{mnli}{}. We find that fine-tuning the task adapter (\textsc{task}-\adapteremoji{}{}) is a strong baseline; compared to it, we perform better in 17/20 domain adaptation scenarios in \textsc{amazon}{} (largest increase of 8 points for \textsc{c}{} \textrightarrow{} \textsc{ba}{}) and in 19/20 domain adaptation scenarios in \textsc{mnli}{} (largest increase of 2.2 for \textsc{f}{} \textrightarrow{} \textsc{te}{}). One possible explanation for the scenarios with the largest increase is the proximity of the two domains. The overlap in vocabularies (\Cref{fig:vocab_overlap} in the Appendix) between \textsc{c}{} \textrightarrow \textsc{ba}{} in \textsc{amazon}{} and \textsc{f}{} \textrightarrow{} \textsc{te}{} in \textsc{mnli}{} is high, and our method takes advantage of this by learning domain-invariant information that can be used for efficient domain transfer. This suggests that learning domain-invariant information is necessary to achieve good domain adaptation. \paragraph{\textsc{UDApter}{} is comparable to \textsc{uda}{} methods.} Compared to \textsc{uda}{} methods in which all parameters of the backbone model are fine-tuned, we perform close on average: \textsc{joint-dt}-\adapteremoji{}{} performs better than \textsc{dsn}{} by 0.2\% in \textsc{amazon}{}, and we are within 0.85\% of \textsc{dsn}{} in \textsc{mnli}{}. Training \textsc{dann}{} is highly unstable and produces varied results, especially for \textsc{amazon}{} with its small number of examples in each domain. Our adapter method achieves better results than \textsc{dann}{} with minimal modification of the hyperparameters. \paragraph{Replacing \textsc{uda}{} Feature Extractors with Adapter Versions is insufficient.} \textit{Given that fully fine-tuned \textsc{uda}{} methods perform well, can we freeze the feature extractors of \textsc{uda}{} methods, fine-tune{} only adapters, and still perform effective domain adaptation?} We compare our methods with \dann{}-\adapteremoji{} and \dann{}-\adapteremoji-\textsc{mc}{} and outperform them both in \textsc{amazon}{} and \textsc{mnli}{}. This is in line with the finding of \citet{karouzos-etal-2021-udalm} that although domain adversarial training brings domain representations closer, it introduces distortions in the semantic space, reducing model performance. This shows that simply replacing feature extractors with their adapter versions in existing \textsc{uda}{} methods is not an effective strategy. \paragraph{Gap to Full Fine-Tuning.} Fine-tuning a PLM with supervised data in the target domain is the upper bound performance for domain adaptation.
The gap from full fine-tuning{} is greater when more data are available (3.15 in \textsc{amazon}{} and 4.13 in \textsc{mnli}{}). This is not surprising, as supervised fine-tuning{} works better with more data. However, while adapters perform close to full fine-tuning{} in supervised scenarios \cite{towards-a-unified-view-of-parameter-efficient-transfer-learning}, there is still a large gap between domain adaptation and full fine-tuning{}. \subsection{Further Analysis} \label{sec:further-analysis} \begin{figure*}[t!] \centering \subfloat[ \label{fig:rf_ablation_amazon}]{\includegraphics[width=0.53\textwidth]{figures/RF_ablation_amazon.png}} \subfloat[\label{fig:rf_ablation_mnli}]{\includegraphics[width=0.53\textwidth]{figures/RF_ablation_MNLI.png}} \caption{(a) Performance for \textsc{amazon}{} on the \textsc{c}{} \textrightarrow{} \textsc{ba}{} domain adaptation scenario for different reduction factors. (b) Performance for \textsc{mnli}{} on the \textsc{s}{} \textrightarrow{} \textsc{tr}{} scenario for different reduction factors.} \label{fig:ablation_rf} \end{figure*} \paragraph{Adapter Reduction Factor.} The bottleneck size ($d$) of the adapters plays an important role in the final performance of the model. We show the performance of the models at various reduction factors in \Cref{fig:ablation_rf}. For \textsc{joint-dt}-\adapteremoji{}{}, smaller reduction factors generally perform well in both \textsc{amazon}{} and \textsc{mnli}{}, with performance degrading for larger reduction factors. This shows that the \textsc{joint-dt}-\adapteremoji{}{} method requires a greater number of parameters to reduce the divergence and learn task representations together. Since \textsc{ts-dt}-\adapteremoji{}{} adds two adapters, it increases the number of parameters added for the same reduction factor compared to \textsc{joint-dt}-\adapteremoji{}{}. As a result, we find that as the data scale up, relatively low reduction factors work well. \begin{figure}[ht!] \begin{tabular}{l} \includegraphics[trim = 5mm 0mm 15mm 0mm, clip, width=.5\textwidth, right]{figures/skip_layers_amazon.png} \vspace{-0.5cm}\\ \includegraphics[trim = 5mm 0mm 15mm 0mm, clip, width=.5\textwidth, right]{figures/skip_layers_MNLI.png} \\ \end{tabular} \caption{Difference in performance when adapters are removed from certain layers (mentioned inside the cells) for the \textsc{amazon}{} dataset (top) and the \textsc{mnli}{} dataset (bottom). Performance drops if adapters are removed from certain layers.} \label{fig:skip_layer_rf} \end{figure} \paragraph{Removing adapters from contiguous layer spans.} Not all adapters are equal. Removing adapters from the first few layers still preserves performance (\Cref{fig:skip_layer_rf}). For \textsc{joint-dt}-\adapteremoji{}{} and \textsc{ts-dt}-\adapteremoji{}{}, the F1 slowly decreases as we remove adapters from more layers. However, we obtain comparable performance even after removing the adapters from layers 1-6. This suggests that adapters are effective when added to the higher layers, where the divergence between domains is greater than in the lower layers \cite{ramesh-kashyap-etal-2021-analyzing}. Thus we can further reduce the number of parameters needed for domain adaptation. \begin{figure*}[t!]
\centering \footnotesize \resizebox{\linewidth}{!}{% \begin{tabular}{c@{}c@{}c@{}c@{}c@{}} \includegraphics[width=\textwidth]{figures/pretrained_slate_travel/layer_1.png}& \includegraphics[width=\textwidth]{figures/pretrained_slate_travel/layer_2.png}& \includegraphics[width=\textwidth]{figures/pretrained_slate_travel/layer_11.png}& \includegraphics[width=\textwidth]{figures/pretrained_slate_travel/layer_12.png} \\ \includegraphics[width=\textwidth]{figures/domain_adapter_slate_travel/layer_1.png}& \includegraphics[width=\textwidth]{figures/domain_adapter_slate_travel/layer_2.png}& \includegraphics[width=\textwidth]{figures/domain_adapter_slate_travel/layer_11.png}& \includegraphics[width=\textwidth]{figures/domain_adapter_slate_travel/layer_12.png}\\ \end{tabular} % } \caption{(top) t-SNE plots of the representations from \texttt{bert-base-uncased}{}: the lower layers are domain-invariant while the higher layers are domain-variant. (bottom) t-SNE plots from the domain adapter trained on the \textsc{s}{} \textrightarrow{} \textsc{tr}{} domain pair: we reduce the divergence using domain adapters, so that even the higher layers are domain-invariant.} \label{fig:domain-adapter-tsne} \end{figure*} \paragraph{t-SNE plots.} The t-SNE \cite{JMLR:v9:vandermaaten08a} plots from domain adapters are shown in \Cref{fig:domain-adapter-tsne} for the \textsc{mnli}{} dataset. The lower layers have low divergence and the data from the two domains are interspersed, whereas the higher layers have high divergence. Our method effectively reduces the divergence in the higher layers. \paragraph{Composability.} We test the composability of our two-step method \textsc{ts-dt}-\adapteremoji{}{}. We reuse the task adapter trained for \textsc{c}{} \textrightarrow{} \textsc{ba}{}, replace the domain adapter with the domain adapter of \textsc{c}{} \textrightarrow{} \textsc{mr}{}, and perform inference on the \textsc{c}{} \textrightarrow{} \textsc{mr}{} dataset. The original F1 on the \textsc{c}{} \textrightarrow{} \textsc{mr}{} dataset was 73.22; after composing the domain adapter with a task adapter from a different domain pair, the F1 score is 72.66 -- a minimal performance loss. This shows the composability of \textsc{ts-dt}-\adapteremoji{}{}. \section{Limitations} Our work has several limitations. We have experimented with only one type of parameter-efficient method, namely adapter fine-tuning{}. Several alternative parameter-efficient methods, such as LoRA \cite{lora}, BitFit \cite{bitfit}, and other unifying paradigms \cite{towards-a-unified-view-of-parameter-efficient-transfer-learning}, have been proposed in recent times. These methods are modular and can easily be substituted for adapters. Another major limitation of our work is that we could not explore whether we can learn different tasks over a given pair of domains. For example, for a given pair of domains such as \textsc{news} and \textsc{twitter}, it would be ideal if we learned a domain adapter once and reused it for different applications such as sentiment analysis and named entity recognition, among others. We are limited by the availability of data for such scenarios, and this would be potential future work.
1,941,325,220,935
arxiv
\section{Introduction}\label{sec:intro} In many open quantum system approaches, the microscopic model underlying the environment consists of an infinite number of harmonic oscillators linearly coupled to system degrees of freedom \cite{weiss2008,may2011,mukamel1995}. The flexibility of this particular model is owed to the fact that both the spectrum and the frequency-dependent coupling of the environmental modes can be adjusted to reproduce features observed in experiments, e.g.\ to describe the effect of polar solvents on dyes \cite{may2011,mukamel1995,fleming1996} or to treat vibrational modes of molecules \cite{may2011,roden2012}. The spectrum and the frequency-dependent coupling can be encoded in a quantity called the spectral density (SD), which is a real function of frequency. The influence of the environment on the system is eventually determined by the bath correlation function (BCF) \cite{may2011,weiss2008,diosi1998,roden2011,suess2014}, which quantifies temporal correlations of environmental degrees of freedom. The temperature of the environment is incorporated in the BCF via the initial state of the environment. The partitioning of the total system into explicitly treated degrees of freedom and a remainder that is treated by means of a BCF is not fixed a priori, and different choices might be expedient from different (computational) points of view \cite{roden2012}. That is, in the simulation of molecules embedded in an environment, one might explicitly include one or more strongly coupled (important) vibrational modes in the system part and treat the remaining part effectively. Alternatively, one could consider only the electronic degrees of freedom of the molecules as the system part, which in turn is coupled to a highly structured environment. The very same considerations apply in the case of some electronic degrees of freedom being coupled to an imperfect (lossy) cavity \cite{luoma2014,shabani2014}. While an unstructured environment is more easily handled in simulations, the rapid growth of the Hilbert space associated with the explicitly treated environmental modes makes the second choice the favorable one. The effect of different system-environment partitionings has already been discussed in the literature \cite{garg1985,hughes2009,martinazzo2011,roden2012}; however, the discussion mostly focused on the SD. In this work, we study how the BCF transforms under different system-environment partitionings. In particular, we examine the effect of two different initial states, a factorizing and a correlated one, on the transformed BCF. We consider an exemplary model consisting of a harmonic bath coupled to a single mode, called a ``pseudomode'' (PM). (See \rref{roden2012} for a discussion of how a PM relates to a vibrational mode.) This PM we then couple to another harmonic oscillator that acts as a system, thus allowing us to study the influence of the different BCFs on the system dynamics. For strong coupling between the PM and both the system oscillator and an ohmic bath, we find pronounced differences in the dynamics of the mean occupation number of the system oscillator, thus stressing the importance of taking heed of the initial state of the composite system in this regime. The paper is structured as follows: In Sec.~\ref{sec:analytic} we introduce the microscopic model on which our discussion is based. We outline the procedure according to which BCFs transform and state analytic formulae for the case of a single PM coupled to a harmonic bath.
In Sec.~\ref{sec:numerics}, we evaluate the transformed BCFs numerically and discuss some examples, highlighting the regimes in which notable differences are induced by the different initial states. Finally, we summarize our findings in Sec.~\ref{sec:conclusion}. Details of the calculations are given in three appendices: In Appendix~\ref{sec:app_SD_BCF} we review the definitions of BCF and SD on the basis of the microscopic model introduced in the main text. In Appendix~\ref{sec:app_bath_trafos}, we explain how the PM and bath operators can be transformed into a basis in which the combined Hamiltonian of PM and bath is diagonal. Lastly, we review in Appendix~\ref{sec:app_heisenberg_eqns} an alternative derivation of the transformed BCFs on the grounds of the Heisenberg equations of motion and state a numerical recipe to solve the occurring integro-differential equation. \section{Model system and analytic transformations}\label{sec:analytic} In this section, we detail the model Hamiltonian and the framework necessary to perform the analytic transformation of the BCF presented at the end of the section. To that end, we first review the standard model of a system linearly coupled to an environment of independent oscillators (Sec.~\ref{subsec:standard_model}) and introduce the model Hamiltonian we consider in this work (Sec.~\ref{subsec:model_H}). Subsequently, we discuss two particular ways to partition this Hamiltonian into a system and an environment part (Sec.~\ref{subsec:sys_env_partitioning}) and explain how the transformed BCF can be calculated (Sec.~\ref{subsec:BCF_calculation}). Finally, we specify two initial environment states and present the corresponding transformed BCFs (Sec.~\ref{subsec:initial_states}). \subsection{General properties of a system linearly coupled to an environment of independent oscillators}\label{subsec:standard_model} We start our discussion by reviewing the standard model of a system linearly coupled to an environment of independent oscillators, which will allow us later on to point out differences in the transformed BCFs most clearly. In the standard model, the total Hamiltonian is partitioned into three parts, \begin{equation}\label{eq:H_tot} \mathcal{H}_\mathrm{tot} = \mathcal{H}_\mathrm{S} + \mathcal{H}_\mathrm{S-E} + \mathcal{H}_\mathrm{E}, \end{equation} where $\mathcal{H}_\mathrm{S}$ denotes the Hamiltonian of the system, containing the degrees of freedom one is interested in, and \begin{equation}\label{eq:H_bath_diagonal_c} \mathcal{H}_\mathrm{E}=\sum_{\mu} \tilde{\omega}_{\mu}c^{\dagger}_{\mu}c_{\mu} \end{equation} the Hamiltonian of the environment. Here, $c_{\mu}$ is the annihilation operator of an environment mode with frequency $\tilde{\omega}_{\mu}$. An environment of the form of $\mathcal{H}_\mathrm{E}$ we call \emph{diagonal}, meaning that the oscillators are uncoupled. The Hamiltonian accounting for the interactions between system and environment reads \begin{equation}\label{eq:H_sys_env_coupling} \mathcal{H}_\mathrm{S-E} = \left(L_\mathrm{S} \sum_{\mu} k_{\mu}c^\dagger_{\mu} + \mathrm{H.c.}\right). \end{equation} The coupling Hamiltonian $\mathcal{H}_\mathrm{S-E}$ couples the environment modes $c_\mu$ via $L_\mathrm{S}$ linearly to the system, with strength $k_{\mu}$. It is convenient to introduce the so-called spectral density encoding this frequency-dependent coupling as ($\omega > 0$) \begin{equation}\label{eq:SD_def_posfreq} J(\omega) = \sum_\mu{|k_\mu|^2\delta(\omega-\tilde{\omega}_\mu)}.
\end{equation} Note that we set $\hbar=k_\mathrm{B}=1$ throughout this work. The relevant quantity typically entering open quantum system approaches like Redfield \cite{weiss2008,may2011}, Caldeira-Leggett \cite{weiss2008,breuer2002} and Non-Markovian Quantum State Diffusion (NMQSD) \cite{strunz1996,diosi1998} is the bath correlation function, given --- for Hermitian $L_\mathrm{S}$ --- by \begin{equation} \label{eq:alpha_general} \alpha(t,t') = \mathrm{Tr}_\mathrm{E}\left\{\left(C(t) + C^\dagger(t)\right)\left(C(t') + C^\dagger(t')\right)\hat{\rho}_\mathrm{E}(0)\right\}, \end{equation} with $C(t)$ defined as \begin{equation}\label{eq:C_op_timeevol} C(t) = e^{i\mathcal{H}_\mathrm{E} t}\sum_\mu\left(k_\mu^* c_\mu\right)e^{-i\mathcal{H}_\mathrm{E} t} \equiv \sum_\mu k_\mu^* c_\mu(t). \end{equation} To obtain \eref{eq:alpha_general} in the given form, the total initial state is taken to be \begin{equation}\label{eq:rho_0_tot_factor} \hat{\rho}_\mathrm{tot}(0) = \hat{\rho}_\mathrm{S}(0)\otimes\hat{\rho}_\mathrm{E}(0). \end{equation} This implies that no correlations between system and environment exist before the interaction between system and environment is `turned on'. This assumption, which is typically introduced for ease of computation, establishes the significance of the system-environment partitioning. For a non-Hermitian system operator $L_\mathrm{S}$, the BCF is no longer given by \eref{eq:alpha_general}. Rather, two correlation functions are required \cite{ritschel2015}, reading \begin{subequations} \begin{align} \alpha_1(t,t') &= \mathrm{Tr}_\mathrm{E}\left\{C(t)C^\dagger(t')\hat{\rho}_\mathrm{E}(0)\right\} \,\mathrm{and} \label{eq:2BCF_microscopical_1}\\ \alpha_2(t,t') &= \mathrm{Tr}_\mathrm{E}\left\{C^\dagger(t)C(t')\hat{\rho}_\mathrm{E}(0)\right\}.\label{eq:2BCF_microscopical_2} \end{align} \end{subequations} If the BCF is stationary, i.e., if $\alpha(t,t')$ is a function of the time difference only, $\alpha(t,t')=\alpha(t-t',0)$, it is convenient to write $\alpha(\tau)\equiv\alpha(\tau,0)$, with $\tau=t-t'$. Note that, in general, the stationarity of the BCF depends on the initial state of the environment. If the initial state of the diagonal environment $\hat{\rho}_\mathrm{E}(0)$, which enters \eref{eq:alpha_general}, is a thermal state (and if $L_\mathrm{S}$ is Hermitian), the BCF of the environment is of the `standard' form \begin{equation}\label{eq:BCF_coth_def} \alpha(\tau) = \int_0^\infty d\omega J(\omega)\left(\coth\left(\frac{\omega}{2T}\right)\cos(\omega \tau)-i\sin(\omega \tau)\right), \end{equation} with $\tau=t-t'$. In \eref{eq:BCF_coth_def}, $T$ is the temperature of the environment and $J(\omega)$ the SD. For a detailed review of SD and BCF in the standard case, see Appendix~\ref{sec:app_SD_BCF}.
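As a numerical illustration of \eref{eq:BCF_coth_def}, the standard BCF follows from simple quadrature once a SD is specified. The sketch below assumes an ohmic SD with an exponential cutoff, $J(\omega)=\eta\,\omega\,e^{-\omega/\Lambda}$; this particular cutoff form, the grid, and the parameter values are our choice for illustration only.
\begin{verbatim}
import numpy as np

def alpha_standard(tau, T, eta=0.25, Lam=1.0,
                   n_omega=4000, omega_max=40.0):
    # alpha(tau) = int dw J(w) [coth(w/2T) cos(w tau) - i sin(w tau)]
    # for an (assumed) ohmic SD J(w) = eta * w * exp(-w / Lam).
    w = np.linspace(1e-8, omega_max * Lam, n_omega)  # avoid w = 0
    J = eta * w * np.exp(-w / Lam)
    coth = 1.0 / np.tanh(w / (2.0 * T))
    return np.trapz(J * (coth * np.cos(w * tau)
                         - 1j * np.sin(w * tau)), w)
\end{verbatim}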
\subsection{Model Hamiltonian with PM}\label{subsec:model_H} We now introduce the total Hamiltonian on which we focus in this work; it is of the form \begin{equation}\label{eq:H_tot_detail} \mathcal{H}_\mathrm{tot} = \mathcal{H}_\mathrm{rel} + \mathcal{H}_\mathrm{rel-PM}+\mathcal{H}_\mathrm{PM} + \mathcal{H}_\mathrm{PM-B} + \mathcal{H}_\mathrm{B}. \end{equation} In \eref{eq:H_tot_detail}, $\mathcal{H}_\mathrm{rel}$ contains the relevant degrees of freedom we are interested in. This relevant part is linearly coupled to the PM via the Hamiltonian \begin{equation}\label{eq:H_system_PM_coupling} \mathcal{H}_\mathrm{rel-PM} = \left(g^* b L^\dagger + \mathrm{H.c.}\right), \end{equation} where the Hamiltonian of the PM reads \begin{equation}\label{eq:H_PM_bath_def} \mathcal{H}_\mathrm{PM} = \Omega b^\dagger b. \end{equation} Here, $L$ is some operator in the Hilbert space of the relevant Hamiltonian $\mathcal{H}_\mathrm{rel}$, $g$ a coupling constant quantifying the strength of the coupling, and $\Omega$ the frequency of the PM with annihilation operator $b$. In addition, the PM is coupled to a diagonal bath \begin{equation}\label{eq:H_B} \mathcal{H}_\mathrm{B} = \sum_\lambda{\omega_\lambda a_\lambda^\dagger a_\lambda}, \end{equation} where $\omega_\lambda$ are the frequencies belonging to the bath modes $\lambda$ with annihilation operators $a_\lambda$. The coupling Hamiltonian $\mathcal{H}_\mathrm{PM-B}$ is taken to be bilinear, \begin{equation}\label{eq:H_PM_bath_coupling} \mathcal{H}_\mathrm{PM-B} = \sum_\lambda{\left(\kappa_\lambda^* a_\lambda b^\dagger + \mathrm{H.c.}\right)}, \end{equation} with $\kappa_\lambda$ being the coupling constants quantifying the coupling between PM and bath modes. The generalization of our discussion to several PMs is straightforward in many cases of interest (e.g., for single linear chains of PMs \cite{martinazzo2011a,woods2014} and multiple linear chains \cite{huh2014}), and will be addressed at the end of the section. \subsection{System-environment partitioning}\label{subsec:sys_env_partitioning} We now consider two particular examples of assigning the PM to the different parts of the total Hamiltonian, illustrated in \fref{fig:bath_illustration}. This leads to different choices of the system Hamiltonian $\mathcal{H}_\mathrm{S}$, the environment Hamiltonian $\mathcal{H}_\mathrm{E}$, and the coupling between them. We denote the two different ways of partitioning by SI and SII for the system and EI and EII for the environment, respectively. \subsubsection{PM in the system}\label{subsec:PM_system} The first partitioning [(I) in \fref{fig:bath_illustration}] is to take the PM as part of the system, which amounts to setting \begin{align} \mathcal{H}_\mathrm{SI} & = \mathcal{H}_\mathrm{rel} + \mathcal{H}_\mathrm{rel-PM} + \mathcal{H}_\mathrm{PM},\label{eq:H_Stilde}\\ \mathcal{H}_{\mathrm{EI}} & = \mathcal{H}_\mathrm{B}, \end{align} and \begin{equation} \mathcal{H}_{\mathrm{SI}-\mathrm{EI}}=\mathcal{H}_\mathrm{PM-B}. \end{equation} Note that the system now contains, beside the relevant degrees of freedom, also the PM, and that the environment is in the standard form of \eref{eq:H_tot}. \subsubsection{PM in the environment}\label{subsec:PM_bath} The second partitioning is illustrated in panel (II) in \fref{fig:bath_illustration}. Here, the system is given by \begin{equation} \mathcal{H}_\mathrm{SII}=\mathcal{H}_\mathrm{rel}, \end{equation} while the environment $\mathrm{EII}$ contains both PM and bath, \begin{equation}\label{eq:H_Btilde} \mathcal{H}_{\mathrm{EII}} = \mathcal{H}_\mathrm{PM} + \mathcal{H}_\mathrm{PM-B} + \mathcal{H}_\mathrm{B}. \end{equation} Accordingly, the coupling between system and environment is given by \begin{equation} \mathcal{H}_{\mathrm{SII}-\mathrm{EII}} = \mathcal{H}_\mathrm{rel-PM}. \end{equation} Note that the resulting Hamiltonian is \emph{not} in the standard form of \eref{eq:H_tot}, as the environment is not diagonal. It can, however, be diagonalized by a simple transformation, as detailed in Appendix~\ref{sec:app_bath_trafos}.
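Since $\mathcal{H}_{\mathrm{EII}}$ is quadratic, this diagonalization amounts to diagonalizing the one-excitation coupling matrix of PM and bath. A sketch with a discretized bath follows; the discretization, the ohmic form of the couplings, and all parameter values are ours, for illustration only.
\begin{verbatim}
import numpy as np

# Index 0 is the PM, 1..N the discretized bath modes.
N, Omega = 300, 1.5
omega = np.linspace(1e-3, 10.0, N)               # bath frequencies
dw = np.diff(omega, prepend=0.0)
kappa = np.sqrt(0.25 * omega * np.exp(-omega) * dw)  # kappa^2 ~ J(w) dw

M = np.diag(np.concatenate(([Omega], omega)))    # one-excitation matrix
M[0, 1:] = kappa
M[1:, 0] = kappa

tilde_omega, S = np.linalg.eigh(M)  # eigenfrequencies, transformation S
# b = sum_mu S[0, mu] c_mu and H_EII = sum_mu tilde_omega[mu] c_mu^+ c_mu
\end{verbatim}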
\subsection{Calculation of the BCF}\label{subsec:BCF_calculation} In terms of the SD, it is known how to transform between different ways of system-environment partitioning \cite{garg1985,garraway1997,martinazzo2011,roden2011,roden2012}. For this procedure, it is sufficient to know the total Hamiltonian, as the SD is fully encoded in $\mathcal{H}_\mathrm{tot}$. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{system-bath-pic.pdf} \caption{Illustration of two different ways of performing a system-environment partitioning in the presence of a PM linearly coupled to both system and bath. In (I), the system part SI consists of the relevant system degrees of freedom linearly coupled to the PM, which in turn is coupled to an (unstructured) environment EI, whereas in (II) the system SII directly couples to a (structured) environment EII including the PM.} \label{fig:bath_illustration} \end{figure} The BCF, however, depends on the environment state, denoted by $\hat{\rho}_\mathrm{EI}(0)$ and $\hat{\rho}_\mathrm{EII}(0)$, respectively, for the two settings (I) and (II) in \fref{fig:bath_illustration}. Note that to obtain a total initial state of the form \eref{eq:rho_0_tot_factor}, we need $\hat{\rho}_\mathrm{tot}(0) = \hat{\rho}_\mathrm{SI}(0)\otimes\hat{\rho}_\mathrm{EI}(0)$ in setting (I), whereas $\hat{\rho}_\mathrm{tot}(0) = \hat{\rho}_\mathrm{SII}(0)\otimes\hat{\rho}_\mathrm{EII}(0)$ in setting (II). Microscopically, the BCF is the (two-time) correlation function of the environment operators in the system-environment coupling. In setting (I), the environment is diagonal and we can therefore directly use \eref{eq:alpha_general}. For setting (II), in contrast, the environment EII is not diagonal. Nonetheless, we can, similarly to \eref{eq:C_op_timeevol}, write down the time evolution of the environment coupling operator, \begin{equation} B(t) = g^*\,e^{i\mathcal{H}_{\mathrm{EII}} t}\,b\,e^{-i\mathcal{H}_{\mathrm{EII}} t} \equiv g^*\, b(t), \end{equation} whose time dependence arises via transformation into the interaction picture with respect to $\mathcal{H}_{\mathrm{EII}}$. For a Hermitian system operator, $L=L^\dagger$, the coupling $\mathcal{H}_\mathrm{rel-PM}$ can be written as $\mathcal{H}_\mathrm{rel-PM} = L (g^* b + g b^\dagger)$, which has a Hermitian environment coupling operator. In this case, the BCF is given by \begin{align} \alpha(t,t') &= \mathrm{Tr}_{\mathrm{EII}}\left\{\left(B(t) + B^\dagger(t)\right)\left(B(t') + B^\dagger(t')\right)\hat{\rho}_{\mathrm{EII}}(0)\right\}\nonumber\\ &\equiv \left\langle\left(B(t) + B^\dagger(t)\right)\left(B(t') + B^\dagger(t')\right)\right\rangle_\mathrm{EII} \label{eq:BCF_microscopical}, \end{align} where $\hat{\rho}_{\mathrm{EII}}(0)$ denotes the initial density operator of the environment and the subscript $\mathrm{EII}$ of the trace indicates that the trace is taken over the environmental degrees of freedom. To evaluate the BCF \pref{eq:BCF_microscopical}, it is convenient to take advantage of the existence of a linear transformation between the PM operator $b$ and the operators in which the Hamiltonian $\mathcal{H}_{\mathrm{EII}}$ and the initial state, respectively, are diagonal (cf.\ Appendix~\ref{sec:app_bath_trafos}). Specifically, we first transform the PM operator into the basis in which the environment Hamiltonian $\mathcal{H}_{\mathrm{EII}}$ is diagonal, $b(t) = [S\bar{c}(t)]_0$, by means of the transformation matrix $S$.
The time evolution of the annihilation operators $c_\mu$ of the diagonal Hamiltonian $\mathcal{H}_{\mathrm{EII}}$, however, is simply given by $c_\mu(t) = e^{-i\tilde{\omega}_\mu t} c_\mu$. Subsequently, the operators $c_\mu$ are transformed into the basis in which the initial state is diagonal, if necessary, and the BCF is evaluated. \subsection{Choice of initial states of the environment}\label{subsec:initial_states} As discussed in the previous section, the total initial state in case (I) is typically taken to be $\hat{\rho}_\mathrm{tot}(0) = \hat{\rho}_\mathrm{SI}(0)\otimes\hat{\rho}_\mathrm{EI}(0)$. When moving the PM from the system part to the environment part, i.e., going from (I) to (II), one could thus reason that the initial state of the environment EII should be given by $\hat{\rho}_\mathrm{EII}(0) = \hat{\rho}_\mathrm{PM}(0)\otimes\hat{\rho}_\mathrm{EI}(0)$. Conversely, if one considers the PM to be part of the environment EII from the very beginning, there is no reason why the PM should be uncorrelated with EI. From this point of view, a correlated initial state between PM and EI seems more `natural'. To clarify the implications of these two choices, we employ both initial states when evaluating the BCF in setting (II) in the following. \begin{figure*}[tb] \centering \includegraphics[width=0.8\paperwidth]{BCF_comparison.pdf} \caption{(Color online) Bath correlation functions $\alpha(t,t')/|g|^2$ for different coupling strengths $\eta$ and different reference times $t'$. The left column [panels (a),(c)] shows the BCF for weak PM-bath coupling, $\eta=0.25$, whereas the right column [panels (b),(d)] shows a strong coupling regime, $\eta=1.0$. In the first row [panels (a)-(b)] the reference time of the BCF is set to $t'=0$; in the second row [panels (c)-(d)], $t'=32.5\,\Lambda^{-1}$. Solid blue lines indicate the real part of the BCF with factorizing initial state [\eref{eq:rho_0_factor}], dashed green lines the real part of the BCF with diagonal initial state [\eref{eq:rho_0_diag}]. Dotted lines show the corresponding imaginary parts. The insets in panels (c)-(d) provide a detail of the short-time dynamics, while the inset in (a) shows the SD $J_\mathrm{EI}(\omega)$ (dashed red line) with the position of the PM indicated by a solid vertical line. The BCFs were calculated using $T=46\,\Lambda$ and $\Omega=1.5\,\Lambda$.} \label{fig:numerical_example_BCF} \end{figure*} \subsubsection{Factorizing initial state between PM and EI} The first initial state we consider is the one typically associated with \textit{factorizing} initial conditions between PM and bath EI, \begin{equation}\label{eq:rho_0_factor} \hat{\rho}_{\mathrm{EII}}^\mathfrak{F}(0) = \frac{1}{Z} e^{-\beta\left(\Omega b^\dagger b + \sum_\lambda{\omega_\lambda a_\lambda^\dagger a_\lambda}\right)}, \end{equation} where $\beta=1/T$ is the inverse temperature and the partition function $Z$ is defined such that $\mathrm{Tr}_{\mathrm{EII}}\{\hat{\rho}_{\mathrm{EII}}^\mathfrak{F}(0)\} = 1$. This particular initial state is widely used, owing to its convenient properties in analytic calculations.
The physical assumption implied is that during thermal equilibration with an ambient heat bath no correlations are built up between PM and bath EI; this reasoning only holds in the limit of vanishing coupling between PM and bath, since only in this limit do two coupled systems equilibrate independently (i.e., to their respective canonical states) \cite{kubo1991,chaudhry2013,yang2014,iles-smith2014} (cf.\ also \rrefs{kampen2004,ambegaokar2007}). Evaluating the BCF \eref{eq:BCF_microscopical} with the factorizing initial state, we find (see Appendix~\ref{sec:app_bath_trafos} for details) \begin{align} \alpha(t,t') &= |g|^2\sum_{\mu,\nu,\eta} \left[S_{0\mu}^* S_{0\nu} S_{\eta\mu} S_{\eta\nu}^* e^{i(\tilde{\omega}_\mu t -\tilde{\omega}_\nu t')} n(\omega_\eta) \right.\nonumber\\ &\quad\left. +\,S_{0\mu} S_{0\nu}^* S_{\eta\mu}^* S_{\eta\nu} e^{-i(\tilde{\omega}_\mu t - \tilde{\omega}_\nu t')} \left(n(\omega_\eta)+1\right)\right].\label{eq:BCF_singlePM_factor_num} \end{align} Here, $n(\omega)$ is the mean occupation number of an environment oscillator with frequency $\omega$, \begin{equation} n(\omega) = \frac{1}{e^{\beta\omega}-1}. \end{equation} Note that \eref{eq:BCF_singlePM_factor_num} is not of the form \eref{eq:BCF_coth_def}; in fact, we cannot even write $\alpha(t,t') = \alpha(t-t',0)$. \subsubsection{Thermal (correlated) state of PM and EI} The second initial state we consider is the \textit{diagonal} initial state, by which we denote the canonical state of the environment $\mathrm{EII}$. This state arises for increased coupling between PM and bath EI, since with increasing coupling the equilibrium state of PM and bath is given by the thermal state of the joint PM-bath environment, which no longer factorizes into a PM and a bath part. Introducing the creation (annihilation) operators of the eigenmodes of the joint PM-bath system $c_\mu^\dagger$ ($c_\mu$), the global thermal state reads \begin{equation}\label{eq:rho_0_diag} \hat{\rho}_{\mathrm{EII}}^\mathfrak{D}(0) = \frac{1}{Z} e^{-\beta\sum_\mu{\tilde{\omega}_\mu c_\mu^\dagger c_\mu}}. \end{equation} Here, the superscript $\mathfrak{D}$ denotes a diagonal initial PM-bath state, implying that at $t=0$ PM and bath have jointly evolved towards a thermal state whose occupations depend on the eigenenergies $\tilde{\omega}_\mu$ of the composite system. As before, $Z$ is defined such that $\mathrm{Tr}_{\mathrm{EII}}\{\hat{\rho}_{\mathrm{EII}}^\mathfrak{D}(0)\} = 1$. For the diagonal initial state the BCF reads ($\tau=t-t'$) \begin{equation}\label{eq:BCF_singlePM_diagonal_num} \alpha(\tau) = |g|^2\sum_\mu |S_{0\mu}|^2 \left[e^{i\tilde{\omega}_\mu\tau} n(\tilde{\omega}_\mu) +e^{-i\tilde{\omega}_\mu\tau} \left(n(\tilde{\omega}_\mu)+1\right)\right]. \end{equation} Note that \eref{eq:BCF_singlePM_diagonal_num} is of the same form as in the standard case [cf.\ \eref{eq:standard_BCF_explicit}] and can hence be written in the standard form \eref{eq:BCF_coth_def} with a transformed SD. \subsubsection{Discussion} Equations \pref{eq:BCF_singlePM_factor_num} and \pref{eq:BCF_singlePM_diagonal_num} allow for several observations. Firstly, the BCF is composed of the time evolution of the eigenmodes of the PM-bath environment, weighted by the populations of the eigenmodes in the initial state. Secondly, we explicitly see that in the case of a diagonal initial state we obtain a stationary BCF, whereas in the case of factorizing initial conditions the BCF is non-stationary (at small times).
This is to be expected, since for a PM-bath environment in thermal equilibrium the expectation value of any number operator (e.g.\ $b^\dagger b$) should not depend on time --- which is exactly what we observe if we set $\tau=0$ in \eref{eq:BCF_singlePM_diagonal_num}. (As $\alpha(t,t')\propto \langle b^\dagger(t)b(t') + b(t)b^\dagger(t')\rangle_\mathrm{EII}$, the stationarity of $\langle b^\dagger(t) b(t) \rangle_\mathrm{EII}$ can be directly read off from the BCF.) The procedure outlined above can be generalized straightforwardly to, e.g., linearly coupled chains of PMs whose last member may additionally be coupled to a diagonal harmonic bath \cite{martinazzo2011,woods2014}, directly coupled PMs with independent baths \cite{ritschel2014}, or a combination of both \cite{huh2014}. As the BCF of the system is determined by the correlation function of the PM operator directly coupled to the system, we simply need to adjust $\mathcal{H}_{\mathrm{EII}}$ in the above treatment; the calculation of the BCF then proceeds in exactly the same manner as detailed above. Since the neglect of initial correlations can lead to noticeable differences in the dynamics \cite{hakim1985,pollak2008,chaudhry2013}, depending on the parameters of the underlying Hamiltonian, we now turn to the discussion of numerical examples. \section{Numerical examples}\label{sec:numerics} \subsection{Evaluation of the transformed BCFs}\label{subsec:num_eval_BCF} In our numerical examples, we take as spectral density for EI an ohmic SD with exponential cutoff ($\omega > 0$), \begin{equation}\label{eq:SD_ohmic_cutoff} J_\mathrm{EI}(\omega) = \eta \omega e^{-\omega/\Lambda}, \end{equation} where $\Lambda$ is the cutoff frequency and $\eta$ a scaling for the overall coupling strength. For numerical purposes, we sample $J_\mathrm{EI}(\omega)$ at discrete frequencies $\omega_\lambda$. We obtain the couplings $\kappa_\lambda$ of \eref{eq:H_PM_bath_coupling} from the quadrature rule \cite{roden2012} \begin{equation} \kappa_\lambda = \sqrt{J_\mathrm{EI}(\omega_\lambda)\Delta\omega_\lambda}, \end{equation} with $\Delta\omega_\lambda = (\omega_{\lambda+1}-\omega_{\lambda-1})/2$ for $\lambda=2,\dotsc,N-1$; $\Delta\omega_1=\omega_2-\omega_1$ and $\Delta\omega_N=\omega_N-\omega_{N-1}$. \begin{figure*}[tb] \centering \includegraphics[width=0.8\paperwidth]{BCF_FT_SD_comparison_part.pdf} \caption{(Color online) Spectral density $J_\mathrm{EII}(\omega)$ (black, solid line) for (a) $\eta=0.25$ and (b) $\eta=1.0$ obtained from Fourier transformation of the BCF (for details, see text). As the BCFs for diagonal and factorizing initial state (evaluated at $t_\mathrm{cm} = 130\,\Lambda^{-1}$) coincide, they yield identical SDs. In both panels, the red, dashed line indicates the original SD of the ohmic bath, which has been scaled by the factor denoted in red. All other parameters are as in \fref{fig:numerical_example_BCF}.} \label{fig:numerical_example_SD} \end{figure*} The sampling range is chosen such that the full SD is covered. For the particular cases shown, we use $N=4000$ bath oscillators for the numerical discretization, with equidistantly spaced frequencies, starting from $0.002\,\Lambda$. For the sake of clarity of presentation, we choose the PM frequency close to the maximum of the ohmic SD, setting $\Omega=1.5\,\Lambda$. This choice renders the coupling between PM and bath strongly dependent on the overall scaling of the SD, which is quantified by $\eta$.
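For concreteness, the following minimal Python sketch implements this procedure: sampling of $J_\mathrm{EI}$, diagonalization of the one-particle matrix of $\mathcal{H}_\mathrm{EII}$ introduced in Sec.~\ref{subsec:sys_env_partitioning}, and evaluation of the BCFs \esref{eq:BCF_singlePM_factor_num}{eq:BCF_singlePM_diagonal_num}. The variable names, the grid cutoff of $20\,\Lambda$ and the reduced number of modes are illustrative choices of this sketch and not part of the procedure itself (units such that $\Lambda=1$).
\begin{verbatim}
import numpy as np

# Parameters (units: Lambda = 1).  The paper uses N = 4000 modes; we
# reduce N here only to keep the example light.  The grid cutoff of
# 20*Lambda (an assumption of this sketch) covers essentially all of
# the ohmic SD.
N, eta, Omega, T, g = 1000, 1.0, 1.5, 46.0, 1.0
beta = 1.0 / T

omega = np.linspace(0.002, 20.0, N)       # equidistant bath frequencies
J_EI = eta * omega * np.exp(-omega)       # ohmic SD with exponential cutoff
dw = np.gradient(omega)                   # (w_{l+1} - w_{l-1})/2 in the bulk,
                                          # one-sided at the endpoints
kappa = np.sqrt(J_EI * dw)                # quadrature couplings kappa_lambda

# One-particle matrix of H_EII (PM in slot 0, bath modes in slots 1..N).
h = np.diag(np.concatenate(([Omega], omega)))
h[0, 1:] = kappa
h[1:, 0] = kappa
wt, S = np.linalg.eigh(h)                 # H_EII = sum_mu wt_mu c_mu^+ c_mu

def n(w):
    """Bose-Einstein occupation number."""
    return 1.0 / np.expm1(beta * w)

def bcf_diagonal(tau):
    """BCF for the diagonal (correlated) initial state; stationary."""
    w0 = S[0, :] ** 2                     # |S_{0 mu}|^2 (S is real here)
    return abs(g) ** 2 * np.sum(w0 * (np.exp(1j * wt * tau) * n(wt)
                                      + np.exp(-1j * wt * tau) * (n(wt) + 1)))

def bcf_factorizing(t, tp):
    """BCF for the factorizing initial state; depends on t and t'."""
    occ = n(np.concatenate(([Omega], omega)))  # occupations, product basis
    P = S @ (S[0, :] * np.exp(1j * wt * t))    # resums the S-factors
    Pp = S @ (S[0, :] * np.exp(1j * wt * tp))
    return abs(g) ** 2 * np.sum(occ * P * Pp.conj()
                                + (occ + 1) * P.conj() * Pp)

# Transformed SD of EII for the diagonal state: a sum of delta peaks,
# approximated here by a histogram (weights -> spectral density).
bins = np.linspace(0.0, 6.0, 241)
J_EII, _ = np.histogram(wt, bins=bins, weights=(abs(g) * S[0, :]) ** 2)
J_EII /= np.diff(bins)
\end{verbatim}
Since $h$ is diagonalized only once, each evaluation of the diagonal-state BCF costs $O(N)$, whereas the factorizing-state BCF requires one matrix-vector product per time argument. The last lines read off the transformed SD corresponding to the diagonal-state BCF, $J_\mathrm{EII}(\omega)=|g|^2\sum_\mu |S_{0\mu}|^2\,\delta(\omega-\tilde{\omega}_\mu)$ in the SD convention used here, as a histogram; this anticipates the discussion in Sec.~\ref{subsec:num_SDs}.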
Note that the Hamiltonian $\mathcal{H}_\mathrm{EII}$ is positive for all parameters employed in the numerical calculations discussed in this section, and that the finite recurrence time of the BCF is large enough to observe complete decay of the BCF. We now evaluate the BCFs \esref{eq:BCF_singlePM_factor_num}{eq:BCF_singlePM_diagonal_num} with the described numerical procedure for different couplings $\eta$ and times $t'$. Using the SD of \eref{eq:SD_ohmic_cutoff}, this yields the BCFs displayed in \fref{fig:numerical_example_BCF}. There, the left column [panels (a),(c)] corresponds to relatively weak coupling $\eta=0.25$ whereas in the right column [panels (b),(d)] the PM is relatively strongly coupled to the bath EI, $\eta=1.0$. Furthermore, the first row [panels (a),(b)] shows the BCF evaluated at $t'=0$ while the second row [panels (c),(d)] shows the BCF evaluated at $t'=32.5\,\Lambda^{-1}$. As can be seen from \frefp{fig:numerical_example_BCF}{(a) and (b)}, at $t'=0$ pronounced differences emerge between the two initial conditions (blue vs. green) as the coupling $\eta$ is increased. On the one hand, the damping of the BCF is increased in the presence of strong coupling, $\eta=1.0$ (note that the overall time of equilibration of the BCF increases as well), which results in different dynamics for the two different initial states. On the other hand, the initial values of the BCFs, $\alpha(0,0)$, change, highlighting the growing differences between the initial states, which manifest themselves in the dynamics of the BCF. Considering the BCFs evaluated at $t'=32.5\,\Lambda^{-1}$, shown in \frefp{fig:numerical_example_BCF}{(c) and (d)}, we observe that the BCFs obtained from different initial states look very similar, with the strongest difference being a transient equilibration dynamics present for factorizing initial conditions, which becomes more noticeable in the strong coupling case. The differences found between the different initial states for $t=t'=0$ have almost vanished for $t=t'=32.5\,\Lambda^{-1}$ [cf.\ panels (a, c) and (b, d), respectively], because equilibration has already taken place by $t=32.5\,\Lambda^{-1}$. The non-stationarity of the BCF for factorizing initial conditions is related to what is called ``initial slippage'' when a system is coupled to a Markovian environment. In such systems, a non-Markovian feature can be present at short times because Markovian dynamics of the total system requires system-environment correlations that are not present initially if factorizing initial conditions are employed \cite{suarez1992,weiss2008}. Hence, slippage of initial conditions can remedy non-Markovian dynamics introduced by an initially uncorrelated system-environment state. In the same manner, the BCFs displayed in \fref{fig:numerical_example_BCF} need some time to equilibrate for factorizing initial conditions before reaching `stationarity' \cite{hakim1985}. \subsection{Corresponding SDs}\label{subsec:num_SDs} The SD, as defined in \eref{eq:SD_def_posfreq}, is fully determined by the total Hamiltonian. Consequently, it should not depend on the initial state of the environment. For a diagonal environment, however, the SD can be extracted from the BCF owing to the relation \pref{eq:BCF_coth_def} between BCF and SD.
That is, for the diagonal initial state, the SD can be obtained by Fourier transforming \eref{eq:BCF_singlePM_diagonal_num} with respect to the time difference $\tau$ and dividing by the factor $2\pi(n(\omega)+1)$ [cf.\ \eref{eq:BCF_j}]. For the factorizing initial state, we can rewrite \eref{eq:BCF_singlePM_factor_num} as a function of the center of mass coordinate $t_\mathrm{cm}=(t+t')/2$ and the time difference $\tau=t-t'$ and perform a Fourier transformation with respect to $\tau$. The spectral densities corresponding to the parameters of \fref{fig:numerical_example_BCF} are shown in \fref{fig:numerical_example_SD}. For large times $t_\mathrm{cm}$, $t_\mathrm{cm} \gtrsim 120\,\Lambda^{-1}$, the difference between the Fourier transforms obtained from the BCFs using a diagonal and a factorizing initial state, respectively, vanishes. For this reason, we show only a single SD in \fref{fig:numerical_example_SD}. The SDs obtained from the BCFs perfectly agree with the analytical result of \rref{roden2012}, which has been obtained by direct transformation of the SD. (Note that another convention for the SD is used in \rref{roden2012}; that SD corresponds to $\omega^2 J(\omega)$ in our convention. For a discussion of the advantage of the convention employed in this paper, see \rref{ritschel2014}.) As shown in \fref{fig:numerical_example_SD}, for weak coupling $\eta=0.25$ the SD exhibits only a single peak at approximately the PM frequency $\Omega$, with a width proportional to the coupling $\eta$. For large coupling $\eta=1.0$, the single peak is split into two and the coupling strength at the PM frequency is reduced. This illustrates that for large PM-bath coupling, the bath properties indeed become essential for a system coupled to the PM. \subsection{Corresponding system dynamics}\label{subsec:num_sys_dyn} We now turn to analyzing the effect of the features seen in \fref{fig:numerical_example_BCF} on a system observable. To that end we specify the system operator $L$ ($L^\dagger$) in \eref{eq:H_system_PM_coupling} as the annihilation (creation) operator $d$ ($d^\dagger$) of a harmonic oscillator with frequency $\Omega_\mathrm{sys}$, with the associated system Hamiltonian reading \begin{equation} \mathcal{H}_\mathrm{SII} = \Omega_\mathrm{sys} d^\dagger d. \end{equation} This particular set of non-Hermitian coupling operators allows us to conveniently evaluate the dynamics of the total system via diagonalization, as outlined in Sec.~\ref{subsec:BCF_calculation}. The corresponding correlation functions $\alpha_1(t,t')$ and $\alpha_2(t,t')$ defined in \esref{eq:2BCF_microscopical_1}{eq:2BCF_microscopical_2} can be easily read off from \esref{eq:BCF_singlePM_factor_num}{eq:BCF_singlePM_diagonal_num}. Setting $\eta=1.0$ and $\Omega_\mathrm{sys} = 0.46\,\Lambda$ in the above, and requiring $n_\mathrm{sys}(0) \equiv \langle d^\dagger(0)d(0)\rangle_\mathrm{EII} = 0$, we obtain \fref{fig:numerical_example_nsys} for two different system-PM couplings, $g/\Lambda=0.3$ and $g/\Lambda=0.08$. Note that only $\alpha_1(t,t')$ is shown in \fref{fig:numerical_example_nsys} since for the parameters chosen, $\alpha_2(t,t')$ is indistinguishable from $\alpha_1(t,t')$ on the scale of the figure. The reason for choosing $\Omega_\mathrm{sys}$ relatively small is that the steady-state value of $n_\mathrm{sys}(t)$ decreases with increasing $\Omega_\mathrm{sys}$, such that absolute differences in $n_\mathrm{sys}(t)$ are suppressed for large system frequencies.
Likewise, we have to choose $\eta$ large in order to ensure that the system dynamics is affected by the bath, whose influence is mediated via the PM. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{nsys_comparison_part_onecol.pdf} \caption{(Color online) Bath correlation function $\alpha_1(t,0)/|g|^2$ (solid lines) and system mean occupation number $n_\mathrm{sys}(t)\equiv \langle d^\dagger(t)d(t)\rangle_\mathrm{EII}$ (dashed lines) for different initial conditions and different coupling strengths $g$. In the upper panel (a), $g=0.3\,\Lambda$, whereas in the lower panel (b), $g=0.08\,\Lambda$. Thin blue lines indicate the BCF with factorizing initial state [\eref{eq:rho_0_factor}], thick green lines indicate the BCF with diagonal initial state [\eref{eq:rho_0_diag}]. Only the real parts of the BCFs are shown. The inset shows the PM mean occupation number $n_\mathrm{PM}(t)\equiv \langle b^\dagger(t)b(t)\rangle_\mathrm{EII}$ for $g=0.3\,\Lambda$ (dashed line) and $g=0.08\,\Lambda$ (solid line) for both initial states. Except for $\eta=1.0$ and $\Omega_\mathrm{sys} = 0.46\,\Lambda$, the parameters of \fref{fig:numerical_example_BCF} have been used. Note that $\alpha_1(t,0)$ is approximately $\alpha(t,0)/2$ in \fref{fig:numerical_example_BCF}.} \label{fig:numerical_example_nsys} \end{figure} For strong coupling of the PM to both system [$g=0.3\,\Lambda$, \frefp{fig:numerical_example_nsys}{(a)}] and bath, we observe a marked difference in the mean occupation number $n_\mathrm{sys}(t)$ between the results obtained with the two initial states. This difference highlights that in the case of a strongly coupled PM the initial state of the environment becomes important for the transient system dynamics. In contrast, the equilibrium values of both $n_\mathrm{sys}(t)$ and $n_\mathrm{PM}(t)$ are independent of the initial state \cite{hakim1985}. If we decrease the coupling $g$ to the system mode (or, similarly, the coupling $\eta$ to the bath oscillators), the difference in the dynamics decreases [cf.\ \frefp{fig:numerical_example_nsys}{(b)}] and the appropriate choice of initial conditions of the environment becomes less important. The same applies when choosing $\Omega_\mathrm{sys}$ large, as this results in a lower steady-state value of $n_\mathrm{sys}(t)$ and consequently smaller overall deviations. Besides, the time it takes for the system to equilibrate increases as $g$ is decreased [cf.\ \frefp{fig:numerical_example_nsys}{(b)}], as the equilibration of the system only proceeds via interaction with the PM. Conversely, the PM equilibrates faster for small system-PM coupling $g$ (cf.\ inset), since in this case the strong PM-bath coupling dominates the equilibration dynamics of the PM. This again illustrates that for small system-PM coupling, it is indeed valid to assume a factorized initial environment state, since the differences induced by the BCF rapidly vanish from the system's point of view. For low bath temperature the differences in the dynamics persist; however, they become hardly observable because the steady-state values (as well as the initial values) are significantly smaller than at high bath temperature. Hence, at low temperature, absolute deviations are reduced while relative deviations are preserved. Our numerical simulations show pronounced differences between the two initial environment states in the transient dynamics of a system that couples strongly to an ohmic bath via a PM.
Thus, any scheme sensitive to the transient behavior of the BCF crucially depends on the choice of initial conditions of the total system. \section{Conclusions}\label{sec:conclusion} We have analytically and numerically studied the BCF resulting from effectively treating a harmonic oscillator (PM) linearly coupled to a harmonic bath as part of the bath, for a factorizing and a correlated initial state between PM and bath, respectively. We outlined the procedure to analytically derive the transformed BCF and discussed concrete examples for regimes in which the differences in the BCFs arising from different initial states manifest themselves in different dynamics of the system, which we take to be a harmonic oscillator linearly coupled to the PM. This establishes a simple framework to evaluate transformed BCFs of PMs coupled to a harmonic bath. In particular, we find that in the case of a correlated (diagonal) initial state the BCF features all the properties typically assumed for a BCF (i.e., stationarity, detailed balance and the relation \eref{eq:BCF_j} between the Fourier transform of the BCF and the SD). By contrast, for a factorizing initial state these properties do not apply, owing to the non-stationarity of the BCF in this case; they are only recovered after a transient equilibration dynamics, which can itself induce different system dynamics. The significance of the BCF lies in the fact that it quantifies the effect of the environment --- including environment correlations and temperature --- on the system degrees of freedom. Hence, our analysis complements the investigation of bath transformations (e.g.\ the mapping of a structured bath consisting of many harmonic oscillators to a linear chain of oscillators \cite{martinazzo2011a,woods2014,huh2014}), which focused on the SD. Our findings highlight that (i) the differences between the BCFs for the different initial states chosen (factorizing and diagonal initial state) are negligible for small PM-bath or PM-system coupling, respectively, as expected, yet (ii) these differences can have a strong impact on the system dynamics if the PM is strongly coupled to both system and bath. Therefore, in the case of a strongly coupled PM, an appropriate initial state of the environment has to be used for the transformation of the BCF when considering a system's dynamics at finite temperature. This emphasizes the relevance of accounting for correlations in strongly coupled systems, which is not specific to our particular system \cite{breuer2002,morozov2012,iles-smith2014}. The question of which initial state is appropriate cannot be answered a priori, but has to be decided in view of the specific case. \acknowledgments We thank Gerhard Ritschel for useful discussions.
\section{Introduction} The context and the notations are those of \cite{FK}. Let $\Omega$ be an irreducible symmetric cone of rank $r$ in a vector space $V$ of dimension $n$, endowed with an inner product $(\cdot|\cdot)$ for which $\Omega$ is self-dual. We recall that $\Omega$ induces in $V$ a structure of Euclidean Jordan algebra with identity $\mathbf e$ such that $$\overline \Omega=\{x^2: x\in V\}.$$ Let $\{c_1,\cdots,c_r\}$ be a fixed Jordan frame in $V$ and $$V=\underset{1\le i\le j\le r } {\oplus} V_{i,j}$$ be its associated Peirce decomposition of $V$. We denote by $$\Delta_1(x),\cdots,\Delta_r(x)$$ the principal minors of $x\in V$ with respect to the fixed Jordan frame $\{c_1,\cdots,c_r\}$. More precisely, $\Delta_k(x), \hskip 2truemm k=1,\cdots,r$ is the determinant of the projection $P_kx$ of $x$, in the Jordan subalgebra $V^{(k)}=\oplus_{1\le i\le j\le k }V_{i,j}$. We have $\Delta_k(x)>0$, $k=1,\cdots,r$ when $x\in \Omega,$ and the determinant $\Delta$ of the Jordan algebra is given by $\Delta=\Delta_r.$ The generalized power function on $\Omega$ is defined as $$\Delta_{\bf s} (x)=\Delta_1^{s_1-s_2}(x)\Delta_2^{s_2-s_3}(x)\cdots\Delta_r^{s_r}(x), \quad x\in \Omega, \hskip 2truemm {\s=(s_1,\cdots,s_r)}\in\C^r.$$ We adopt the following standard notations: $$ n_k = 2(k-1)\frac{\frac nr -1}{r-1} \quad {\rm and}\quad m_k = 2(r-k)\frac{\frac nr -1}{r-1}.$$ For $\s=(s_1,\cdots,s_r) \in \mathbb R^r$ and $\rho$ real, the notation $\s+\rho$ will stand for the vector whose coordinates are $s_k + \rho, \hskip 2truemm k=1,\cdots,r.$ For $1\leq p\leq\infty$ and $1\leq q <\infty,$ let $L^{p,\,\,q}_\textbf s$ denote the mixed norm Lebesgue space consisting of measurable functions $F$ on $T_\Omega$ such that $$\|F\|_{L^{p,\,\,q}_{\s}}=\left(\int_\Omega\|F(\cdot+iy)\|^q_p\Delta_{\s-\frac{n}{r}}(y)dy\right)^\frac{1}{q}<\infty$$ where $$\|F(\cdot+iy)\|_p=\left(\int_{V}|F(x+iy)|^pdx\right)^\frac{1}{p}$$ (with the obvious modification if $p=\infty$). The \textit{mixed norm weighted Bergman space} $A_\s^{p,q}$ is the (closed) subspace of $L^{p,\,\,q}_\s$ consisting of holomorphic functions. Following \cite{DD}, $A_\s^{p,q}$ is non-trivial if and only if $s_k>\frac {n_k}2, \hskip 2truemm k=1,\cdots, r.$ When $p=q$, we write $L^{p,\,\,q}_\s=L^{p}_\s$ and $A^{p,\,\,q}_\s=A^{p}_\s$, which are respectively the usual weighted Lebesgue space and the usual weighted Bergman space. Moreover, when $p=q=2$ the orthogonal projector $P_\s$ from the Hilbert space $L^2_\s$ onto its closed subspace $A^2_\s$ is called \textit{weighted Bergman projector}. It is well known that $P_\s$ is the integral operator on $L_\s^2$ given by the formula $$P_\s F(z)=\int_{T_\Omega}B_\s (z, u+iv)F(u+iv)\Delta_{\s-\frac{n}{r}}(v)dudv,$$ where $$B_\s (z,u+iv)=d_\s \Delta_{-\s-\frac{n}{r}}(\frac{z-u+iv}{i}) $$ is the reproducing kernel of $A^2_\s,$ called \textit{weighted Bergman kernel} of $T_\Omega$. Precisely, $\Delta_{-\s-\frac{n}{r}}(\frac{x+iy }{i})$ is the holomorphic determination of the $(-\s-\frac{n}{r})$-power which reduces to the function $\Delta_{-\s-\frac{n}{r}}(y )$ when $x =0$. Our first result is an atomic decomposition theorem for functions in mixed norm weighted Bergman spaces on tube domains over symmetric cones.
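For orientation, we describe what these objects reduce to in the rank-one case $n=r=1$, where $\Omega=(0,\infty)$ and $T_\Omega$ is the upper half-plane. There $\Delta_{\s}(y)=y^{s}$, so that $$\|F\|^q_{L^{p,\,\,q}_{s}}=\int_0^\infty\left(\int_{\mathbb R}|F(x+iy)|^pdx\right)^\frac{q}{p}y^{s-1}dy \quad {\rm and} \quad B_s(z,u+iv)=d_s\left(\frac{z-u+iv}{i}\right)^{-s-1},$$ while the non-triviality condition $s_k>\frac{n_k}{2}$ reduces to $s>0$ (recall that $n_1=0$). This is precisely the setting of \cite{RT}.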
This first result generalizes the one of \cite{BBGNPR} for usual weighted Bergman spaces on tube domains over symmetric cones and the one of \cite{RT} for mixed norm weighted Bergman spaces on the upper half-plane (the case $n=r=1$). \newtheorem*{thA}{\bf Theorem A} \begin{thA}{\it Let $\textbf s$ be a vector of $\R^r$ such that $s_k > \frac {n_k}2, \hskip 1truemm k=1,\cdots,r.$ Assume that $P_\s$ extends to a bounded operator on $L_\textbf s^{p,q}$. Then there is a sequence of points $\{z_{l,j}=x_{l,j}+iy_j\}_{l\in\Z, \hskip 1truemm j\in\N}$ in $T_\Omega$ and a positive constant $C$ such that the following assertions hold. \begin{enumerate}[\upshape (i)] \item For every sequence $\{\lambda_{l,j}\}_{l\in\Z, \hskip 1truemm j\in\N}$ such that $$\sum_j\left(\sum_l|\lambda_{l,j}|^p\right)^\frac{q}{p}\Delta_{\textbf s+\frac{nq}{rp}}(y_j)<\infty,$$ the series $$\sum_{l,j}\lambda_{l,j}\Delta_{\textbf s+\frac{nq}{rp}}(y_j)B_\s(z,z_{l,j})$$ is convergent in $A_\textbf s^{p,q}$. Moreover, its sum $F$ satisfies the inequality $$\|F\|_{A_\textbf s^{p,q}}^q\leq C \sum_j\left(\sum_l|\lambda_{l,j}|^p\right)^\frac{q}{p}\Delta_{\textbf s+\frac{nq}{rp}}(y_j).$$ \item Every function $F\in A_\textbf s^{p,q}$ may be written as $$F(z)=\sum_{l,j}\lambda_{l,j}\Delta_{\textbf s+\frac{nq}{rp}}(y_j)B_\textbf s (z,z_{l,j}),$$ with $$\sum_j\left(\sum_l|\lambda_{l,j}|^p\right)^\frac{q}{p}\Delta_{\textbf s+\frac{nq}{rp}}(y_j)\leq C \|F\|_{A_\textbf s^{p,q}}^q.$$ \end{enumerate} } \end{thA} Our second result is an interpolation theorem between mixed norm weighted Bergman spaces. It generalizes the result of \cite{BGN1} for usual weighted Bergman spaces. We adopt the following notation. $$q_{\s}=\min_{1\leq k\leq r}\left(1+\frac{s_k-\frac{n_k}{2}}{\frac {m_k}2}\right).$$ \newtheorem*{thB}{\bf Theorem B} \begin{thB}{\it \begin{enumerate}[\upshape (1)] \item Let $\s_0, \s_1\in \mathbb R^r$ be such that $(s_0)_k, (s_1)_k > \frac nr -1,\quad k=1,\cdots,r$ and let $1\leq p_0, p_1\leq \infty$, $1\leq q_0, \hskip 1truemm q_1<\infty$ be such that $1\leq q_i < q_{{\s}_i}$ for every $i=0, 1.$ Then for every $\theta \in (0, 1),$ we have $$[A_{{\s}_0}^{p_0,q_0},\,\,A_{{\s}_1}^{p_1,q_1}]_\theta=A_{\s}^{p,q}$$ with equivalent norms, where $\frac{1}{p}=\frac{1-\theta}{p_0}+\frac{\theta}{p_1}, \frac{1}{q}=\frac{1-\theta}{q_0}+\frac{\theta}{q_1}$ and $\frac{\s}{q}=\frac{(1-\theta){\s}_0}{q_0}+\frac{\theta {\s}_1}{q_1}$.
\item Let $\s \in \mathbb R^r$ be such that $s_k > \frac {n_k}2, \hskip 2truemm k=1,\cdots, r.$ Assume that $P_{\s}$ extends to a bounded operator on $L^{p_i, q_i}_{\s}, \hskip 2truemm i=0, 1$ for $1\leq p_0, p_1 \leq \infty$ and $1 <q_0, q_1 < \infty.$ Then for every $\theta \in (0, 1),$ we have $$[A_{\s}^{p_0,q_0},\,\,A_{\s}^{p_1,q_1}]_\theta=A_{\s}^{p,q}$$ with equivalent norms, where $\frac{1}{p}=\frac{1-\theta}{p_0}+\frac{\theta}{p_1}$ and $\frac{1}{q}=\frac{1-\theta}{q_0}+\frac{\theta}{q_1}.$ \item Let ${\s} \in \mathbb R^r$ be such that $s_k > \frac nr -1, \hskip 2truemm k=1,\cdots, r,$ let $1\leq p_0 < p_1\leq\infty$ and let $q_0, q_1$ be such that $1\leq q_0 < q_{\s}\leq q_1.$ We assume that $P_{\s}$ extends to a bounded operator on $L^{p_1, q_1}_{\s}.$ Then for some values of $\theta\in (0,1)$, we have $$[A_{\s}^{p_0,q_0},\,\,A_{\s}^{p_1,q_1}]_\theta=A_{\s}^{p,q}$$ with equivalent norms, where $\frac{1}{p}=\frac{1-\theta}{p_0}+\frac{\theta}{p_1}$ and $\frac{1}{q}=\frac{1-\theta}{q_0}+\frac{\theta}{q_1}.$ \end{enumerate} } \end{thB} We recall sufficient conditions on $p, q$ and $\s$ under which the weighted Bergman projector $P_{\s}$ extends to a bounded operator on $L^{p, q}_{\s}.$ We adopt the following notations. $$p_{\s} =1+\min \limits_k \frac {s_k +\frac nr}{(m_k - s_k)_+}$$ and $$q_\s(p)=\min\{p,p'\}q_\s.$$ \begin{thm}[\cite{DD1} and \cite{NT}] The weighted Bergman projector $P_{\s}$ extends to a bounded operator on $L^{p, q}_{\s}$ whenever $\frac 1{q_{\s} (p)}<\frac 1{q}<1-\frac 1{q_{\s} (p)}$ in the following two cases: \begin{enumerate} \item[\rm (i)] $s_j > \frac {n_j}2, \hskip 2truemm j=1,\cdots r$ and $1\leq p < p_{\s}$ \cite{DD1}; \item[\rm (ii)] $s_j > \frac nr -1, \hskip 2truemm j=1,\cdots r$ and $1\leq p \leq \infty$ \cite{NT}. \end{enumerate} \end{thm} We now restrict attention to tube domains over Lorentz cones ($r=2$). For real $\s = (s, s),$ this boundedness problem was recently completely solved in that setting (cf. \cite{BGN} for a combination of results from \cite{BD} and \cite{BBGR}; cf. also \cite{BN} for the unweighted case $\s=(\frac nr,\cdots,\frac nr)$). For vectorial $\s = (s_1, s_2),$ Theorem 1.1 was also extended in \cite{BGN} to other values of $p$ and $q.$ The plan of this paper is as follows. In Section 2, we review some preliminaries and useful results about symmetric cones and tube domains over symmetric cones. In Section 3, we study atomic decomposition of mixed norm Bergman spaces and we prove a more precise statement of Theorem A. In Section 4, we study interpolation via the complex method between mixed norm weighted Bergman spaces and we prove Theorem B. In particular we give a more precise statement of assertion (3) of this theorem (Theorem 4.6) and we ask an open question. A final remark will point out a connection between the two main theorems of the paper (Theorem A and Theorem B).\\ \indent For $\s=(s,\cdots, s)$ real, Theorem A and Theorem B were presented in the PhD dissertation of the second author \cite{G}. \section{Preliminaries} Materials of this section are essentially from \cite{FK}. We give some definitions and useful results.\\ Let $\Omega$ be an irreducible symmetric cone of rank $r$ in a real vector space $V$ of dimension $n$ endowed with the structure of a Euclidean Jordan algebra with identity $\textbf e$. In particular, $\Omega$ is self-dual with respect to the inner product $$(x|y)=\textbf{tr}(xy)$$ on $V$.
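A standard example to keep in mind throughout is the following (cf. \cite{FK}). Let $V=\mathrm{Sym}(m,\mathbb R)$ be the space of real symmetric $m\times m$ matrices, endowed with the Jordan product $x\circ y=\frac12(xy+yx)$; then $n=\frac{m(m+1)}{2}$, $r=m$, the inner product is the trace form $(x|y)=\mathrm{tr}(xy)$, and $\Omega$ is the cone of positive definite symmetric matrices. The Jordan frame $\{c_1,\cdots,c_r\}$ may be taken to consist of the diagonal matrix units, and $\Delta_k(x)$ is then the $k$-th leading principal minor of $x$. In this case $$\frac{n}{r}=\frac{m+1}{2}, \qquad n_k=k-1, \qquad m_k=m-k,$$ so that the condition $s_k>\frac{n_k}{2}$ reads $s_k>\frac{k-1}{2}$, which is Gindikin's classical condition.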
\subsection{Group action} Let $G(\Omega)$ be the group of linear transformations of the cone $\Omega$ and $G$ its identity component. By definition, the subgroup $G$ of $G(\Omega)$ is a semi-simple Lie group which acts transitively on $\Omega.$ This gives the identification $\Omega \sim G/K$, where $K:=\{g\in G:\,\,\,\,g\cdot \mathbf e=\mathbf e\}$ is a maximal compact subgroup of $G$. More precisely, $$K=G\cap O(V),$$ where $O(V)$ is the orthogonal group in $V.$ Furthermore, there is a solvable subgroup $T$ of $G$ acting simply transitively on $\Omega$. That is, every $y\in\Omega$ can be written uniquely as $y=t\cdot \mathbf e$, for some $t\in T.$ Let $\{c_1,\cdots,c_r\}$ be a fixed Jordan frame in $V$ (that is, a complete system of orthogonal primitive idempotents) and $$V=\underset{1\leq i\leq j\leq r}{\oplus}V_{i,j}$$ be its associated Peirce decomposition of $V$, where $$ \left\{ \begin{array}{ll} V_{i,i}=\R c_i\\ V_{i,j}=\{x\in V:\,\,c_i x=c_j x=\frac{1}{2}x\}\,\,\textrm{if}\,\,i<j. \end{array} \right. $$ We have $\mathbf e = \sum \limits_{1\leq i\leq r} c_i.$ Then the solvable Lie group $T$ factors as the semidirect product $T=NA=AN$ of a nilpotent subgroup $N$ consisting of lower triangular matrices, and an abelian subgroup $A$ consisting of diagonal matrices. The latter takes the explicit form $$A=\{P(a):\,\,\,\, a=\sum_{i=1}^r a_ic_i,\,\,\,a_i>0\},$$ where $P$ is the quadratic representation of $V$. This also leads to the Iwasawa and Cartan decompositions of the semisimple Lie group $G:$ $$G=NAK \quad {\rm and} \quad G=KAK.$$ Still following \cite{FK}, we shall denote by $\Delta_1(x),\cdots,\Delta_r(x)$ the principal minors of $x\in V$, with respect to the fixed Jordan frame $\{c_1,\cdots,c_r\}$. These functions are invariant under the group $N$, $$\Delta_k(nx)=\Delta_k(x),$$ where $n\in N$, $x\in V$, $k=1,\cdots,r$, and satisfy a homogeneity relation under $A$, $$\Delta_k(P(a)x)=a_1^2\cdots a^2_k\Delta_k(x),$$ if $a=a_1c_1+\cdots+a_rc_r$. The determinant function $\Delta(y)=\Delta_r(y)$ is also invariant under $K$, and moreover satisfies the formula \begin{equation}\label{Delta} \Delta(gy)=\Delta(ge)\Delta(y)=Det^\frac{r}{n}(g)\Delta(y),\,\,\forall g\in G,\,\,\forall y\in\Omega \end{equation} where $Det$ is the usual determinant of linear mappings. It follows from this formula that the measure $\frac{d\xi}{\Delta^{\frac{n}{r}}(\xi)}$ is $G$-invariant on $\Omega.$ Finally, we recall the following version of Sylvester's theorem: $$\Omega=\{x\in\R^n:\,\,\,\,\Delta_k(x)>0, \hskip 2truemm k=1,\cdots,r\}.$$ \subsection{Geometric properties} With the identification $\Omega\sim G/K$, the cone can be regarded as a Riemannian manifold with the $G$-invariant metric defined by $$<\xi,\eta>_y:=(t^{-1}\xi|t^{-1}\eta)$$ if $y=t\cdot \textbf e$ with $t\in T$ and $\xi$ and $\eta$ are tangent vectors at $y\in\Omega$. We shall denote by $d_\Omega$ the corresponding invariant distance, and by $B_\delta(\xi)$ the associated ball centered at $\xi$ with radius $\delta$. Note that for each $g\in G$, the invariance of $d_\Omega$ implies that $B_\delta(g\xi)=gB_\delta(\xi)$. We also note that \begin{itemize} \item on compact sets of $\mathbb R^n$ contained in $\Omega,$ the invariant distance $d_\Omega$ is equivalent to the Euclidean distance in $\mathbb R^n;$ \item the associated balls $B_\delta$ in $\Omega$ are relatively compact in $\Omega$. \end{itemize} We also need the following crucial invariance properties of $d_\Omega$ and $\Delta_k$, obtained in \cite{BBGNPR, BBGR}. \begin{lemma} Let $\delta_0>0$.
Then there is a constant $\gamma\geq 1$ depending only on $\delta_0$ and $\Omega$ such that for every $0<\delta\leq\delta_0$ and for $\xi$, $\xi'\in\Omega$ satisfying $d_\Omega(\xi,\xi')\leq \delta$ we have $$\frac{1}{\gamma}\leq\frac{\Delta_k(\xi)}{\Delta_k(\xi')}\leq\gamma,\,\,\forall k=1,\cdots,r.$$ \end{lemma} \begin{lemma} Let $\delta_0>0$ be fixed. Then there exist two constants $\eta_1>\eta_2>0$, depending only on $\delta_0$ and $\Omega$, such that for every $0<\delta\leq\delta_0$ we have $$\{|\xi-e|<\eta_2\delta\}\subset B_\delta(e)\subset\{|\xi-e|<\eta_1\delta\}.$$ \end{lemma} The next lemma is an easy consequence of the previous one. \begin{lemma}\label{lemma5} Let $\delta_0=1$. Then there is a positive constant $\gamma$ such that for every $\delta \in (0, 1)$ with $\eta_1 \delta < 1,$ we have $$B_\delta(\xi)\subset\{y\in\Omega:\,\,\,\,y-\gamma\xi\in\Omega\}$$ for all $\xi\in\Omega$. \end{lemma} \subsection{Gamma function in $\Omega$} The generalized gamma function in $\Omega$ is defined in terms of the generalized power functions by $$\Gamma_\Omega(\s)=\int_\Omega e^{-(\xi|e)}\Delta_{\textbf s}(\xi)\frac{d\xi}{\Delta^\frac{n}{r}(\xi)} \quad \quad (\s =(s_1,\cdots,s_r)\in \mathbb C^r).$$ This integral is known to converge absolutely if and only if $\Re e \hskip 1truemm s_k >\frac {n_k}2, \hskip 2truemm k=1,\cdots, r.$ In this case, $$\Gamma_\Omega(\s)=(2\pi)^\frac{n-r}{2}\prod_{k=1}^r\Gamma \left (s_k-\frac{n_k}2 \right ),$$ where $\Gamma$ is the classical gamma function. We shall denote $\Gamma_\Omega(\s)=\Gamma_\Omega (s)$ when $\s=(s,\cdots,s)$. In view of \cite{FK}, the Laplace transform of a generalized power function is given for all $y\in\Omega$ by $$\int_\Omega e^{-(\xi|y)}\Delta_{\textbf s}(\xi)\frac{d\xi}{\Delta^\frac{n}{r}(\xi)}=\Gamma_\Omega(\s)\Delta_\s(y^{-1})$$ for each $\s\in\C^r$ such that $\Re e \hskip 1truemm s_k >\frac {n_k}2$ for all $k=1,\cdots,r$. We recall that $y^{-1}=t^{*-1}\cdot \textbf e$ whenever $y=t\cdot \textbf e$ with $t\in T.$ Here $t^*$ denotes the adjoint of the transformation $t\in T$ with respect to the inner product $(\cdot|\cdot).$ \\ The power function $\Delta_{\textbf s}(y^{-1})$ can be expressed in terms of the rotated Jordan frame $\{c_r,\cdots,c_1\}$. Indeed, if we denote by $\Delta^*_k$, $k=1,\cdots,r$, the principal minors with respect to the rotated Jordan frame $\{c_r,\cdots,c_1\}$, then $$\Delta_{\textbf s}(y^{-1})=[\Delta^*_{\s^*}(y)]^{-1},\,\,\forall \s=(s_1,\cdots,s_r)\in\C^r.$$ Here $\s^*=(s_r,\cdots,s_1)$. \subsection{Bergman distance on the tube domain $T_\Omega$} Following \cite{BBGNPR}, we define a matrix function $\{g_{j,k}\}_{1\leq j,k\leq n}$ on $T_\Omega$ by $$g_{j,k}(z)=\frac{\partial^2}{\partial z_j\partial \bar z_k}\log B(z,z)$$ where $B$ is the unweighted Bergman kernel of $T_\Omega,$ i.e.\ $B=B_\s$ with $\s =(\frac nr,\cdots, \frac nr)$. The map $$T_\Omega\ni z\mapsto \H_z$$ with $$\H_z(u,v)=\sum_{j,k=1}^n g_{j,k}(z)u_j\bar v_k,\,\,\,u=(u_1,\cdots,u_n),\,\,v=(v_1,\cdots,v_n)\in\C^n$$ defines a Hermitian metric on $\C^n$, called the Bergman metric. The Bergman length of a smooth path $\gamma:[0,1]\to T_\Omega$ is given by $$l(\gamma)=\int_0^1\{\H_{\gamma(t)}(\overset{.}{\gamma}(t),\overset{.}{\gamma}(t))\}^\frac{1}{2}dt$$ and the Bergman distance $d(z_1,z_2)$ between two points $z_1$, $z_2$ of $T_\Omega$ is $$d(z_1,z_2)=\inf_{\gamma}l(\gamma)$$ where the infimum is taken over all smooth paths $\gamma:[0,1]\to T_\Omega$ such that $\gamma(0)=z_1$ and $\gamma(1)=z_2$.
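In the rank-one case this construction recovers a familiar object: the unweighted Bergman kernel of the upper half-plane satisfies $B(z,z)=c\,y^{-2}$ with $y=\Im m\, z$, so that $$g_{1,1}(z)=\frac{\partial^2}{\partial z\partial \bar z}\left(-2\log y\right)=\frac{1}{2y^2},$$ and the Bergman distance $d$ is a constant multiple of the hyperbolic distance. In particular, the Bergman balls $\mathbf{B}_\eta(z)$ are then comparable to the Euclidean boxes $\{x'+iy':\,|x'-x|<c\eta y,\ |\log (y'/y)|<c\eta\}$, which is the one-dimensional model for Lemma \ref{inclusion} below.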
It is well known that the Bergman distance $d$ is equivalent to the Euclidean distance on the compact sets of $\C^n$ contained in $T_\Omega$, and that the Bergman balls in $T_\Omega$ are relatively compact in $T_\Omega$. Next, we again denote by $\R^n$ the group of translations by vectors in $\R^n$. Then the group $\R^n\times T$ acts simply transitively on $T_\Omega$ and the Bergman distance $d$ is invariant under the automorphisms of $\R^n\times T$. \subsection{A Whitney decomposition of the tube domain $T_\Omega$} In the sequel, the Bergman ball in $T_\Omega$ with centre at $z$ and radius $\eta$ will be denoted $\mathbf{B}_\eta(z).$ \begin{lemma}\label{inclusion} There exists a constant $R>1$ such that for all $\eta\in (0,1)$ and $z_0=x_0+iy_0\in T_\Omega$, the following inclusions hold: $$\{x+iy\in T_\Omega:\,\,\|g^{-1}(x-x_0)\|<\frac{\eta}{R} \hskip 2truemm \textrm{and}\,\,y\in B_\frac{\eta}{R}(y_0)\}\subset\mathbf{B}_\eta(z_0), $$ $$ \mathbf{B}_\eta(z_0)\subset \{x+iy\in T_\Omega:\,\,\|g^{-1}(x-x_0)\|< {R\eta}\,\,\textrm{and}\,\,y\in B_ {R\eta} (y_0)\}$$ where $g$ is the element of $T$ satisfying $g\cdot e=y_0$. \end{lemma} \begin{proof} From the invariance under translations and automorphisms of $T$ we have that $$g^{-1}(x-x_0)+ig^{-1}y\in {\mathbf{B}_\eta(ie)}$$ for all $x+iy\in \mathbf{B}_\eta(z_0)$. We recall that the Bergman distance $d$ and the Euclidean distance $d_{Eucl}$ are equivalent on compact sets of $\C^n$ contained in $T_\Omega.$ So there exists a constant $R>1$ such that $$\frac{1}{R}d(X+iY,ie)<d_{Eucl}(X+iY,ie)<Rd(X+iY,ie)$$ for all $X+iY\in\overline{\mathbf{B}_1(ie)}$. The proof of the lemma follows from the equivalence $$d_{Eucl}(X_1+iY_1,X_2+iY_2)\thickapprox\max(\|X_1-X_2\|,\|Y_1-Y_2\|)$$ and the equivalence given in Lemma 2.2 between $d_\Omega$ and the Euclidean distance $||\cdot||$ in $\mathbb R^n$ on compact sets of $\mathbb R^n$ contained in $\Omega.$ \end{proof} The starting point of our analysis is the following Whitney decomposition of the cone $\Omega$, which was obtained e.g. in \cite{BBGR, BBGNPR}. \begin{lemma}\label{lemma7} There is a positive integer $N$ such that given $\delta\in (0,1)$, one can find a sequence of points $\{y_j\}_{j=1,2,\cdots}$ in $\Omega$ with the following property: \begin{enumerate}[\upshape (i)] \item the balls $B_{\frac{\delta}{2}}(y_j)$ are pairwise disjoint; \item the balls $B_\delta(y_j)$ cover $\Omega$; \item each point of $\Omega$ belongs to at most $N$ of the balls $B_\delta(y_j)$. \end{enumerate} \end{lemma} \begin{defin} The sequence $\{y_j\}$ is called a $\delta$-{\it lattice of} $\Omega.$ \end{defin} Our goal is to obtain an atomic decomposition theorem for holomorphic functions in $A_\s^{p,q}$ spaces. To this end, we need to derive a suitable version of the classical Whitney decomposition of $\R^n$. Let $\{y_j\}$ be a $\delta$-{\it lattice of} $\Omega$ and let $g_j \in T$ be such that $g_j \cdot \textbf e = y_j.$ Let $R>1$ be a constant as in Lemma \ref{inclusion}. We adopt the following notations: $$I_{l,j}=\{x\in\R^n:\,\,\,\|g_j^{-1}(x-x_{l,j})\|<\frac{\delta}{R}\}$$ $$I'_{l,j}=\{x\in\R^n:\,\,\,\|g_j^{-1}(x-x_{l,j})\|<\frac{\delta}{2R}\}$$ where $\{x_{l,j}\}$ is a sequence in $\R^n$ to be determined. From Lemma \ref{inclusion} we have immediately the following.
\begin{rem}\label{remark1} For the constant $R>1$ of Lemma \ref{inclusion}, the following inclusion holds: $$I_{l,j}+iB_{\frac \delta R} (y_j)\subset \textbf{B}_{\delta}(x_{l,j}+iy_j).$$ \end{rem} \begin{lemma}\label{lemma8} Let $\delta \in (0, 1).$ There exist a constant $R>1,$ a positive integer $N$ and a sequence of points $\{x_{l,j}\}_{l\in\Z,\hskip 1truemm j\in\N}$ in $\R^n$ such that the following hold. \begin{enumerate}[\upshape (i)] \item $\{I_{l,j}\}_l$ form a cover of $\R^n$; \item $\{I'_{l,j}\}_l$ are pairwise disjoint; \item for each $j,$ every point of $\R^n$ belongs to at most $N$ balls $I_{l,j}$. \end{enumerate} \end{lemma} \begin{proof} Fix $j$ in $\N$ and define the collection $\mathcal A_j$ of sets in $\mathbb R^n$ by $$\mathcal{A}_{j}=\left\{A\subset\mathbb{R}^{n}:\,\,\forall\, x,\,\,y\in A\,\,\textrm{distinct}\,\,\, \|g_{j}^{-1}(x-y)\|\geq \frac{\delta}{R}\right\}.$$ Clearly the collection $\mathcal{A}_{j}$ is non-empty. Indeed, the two-point set $\{\frac{\delta}{R} y_{j},\, 0_{\R^n}\}$ is a member of $\mathcal{A}_{j}$. Furthermore, the collection $\mathcal A_j$ is partially ordered with respect to inclusion.\\ Let $\mathcal C$ be a totally ordered subcollection of $\mathcal A_j.$ We set $F=\underset{A\in \mathcal C}{\cup} A.$ Given two distinct elements $x, y$ of $F,$ there are two members $A_1$ and $A_2$ of $\mathcal C$ such that $x\in A_1$ and $y\in A_2.$ But either $A_1 \subset A_2$ or $A_2 \subset A_1.$ So we have either $x, y \in A_1$ or $x, y \in A_2.$ Hence $||g_j^{-1} (x-y)|| \geq \frac \delta R.$ This shows that $F$ is a member of $\mathcal A_j.$ In other words, the collection $\mathcal A_j$ is inductive. An application of Zorn's lemma then gives that the collection $\mathcal A_j$ has a maximal member $E_j.$ We write $E_j=\{x_{l,j}\}_{l\in L_{j}}.$ To prove assertion (ii), consider $l$ and $k$ such that $l\neq k$ and assume that $I'_{l,j}\cap I'_{k,j}$ contains at least one element $x$. Then $$\|g_{j}^{-1}(x_{l,j}-x_{k,j})\|\leq \|g_{j}^{-1}(x_{l,j}-x)\| + \|g_{j}^{-1}(x-x_{k,j})\|<\frac{\delta}{2R}+ \frac{\delta}{2R}=\frac \delta R.$$ This would contradict the property that $\{x_{l,j},\,\,x_{k,j}\}$ is a subset of ${E}_j,$ which is a member of $\mathcal A_j$. For assertion (i), let us suppose $\underset{l\in L_{j}}{\cup} I_{l,j}\neq \mathbb{R}^{n}$. Then there exists $\xi_{j}\in\mathbb{R}^{n}$ such that $\xi_{j}\notin\underset{l\in L_{j}}{\cup}I_{l,j}$. Clearly the set $E_{j}\cup\{\xi_{j} \}$ is a member of $\mathcal{A}_{j}$. This would contradict the maximality of $E_j$ in $\mathcal A_j$. This completes the proof of assertion (i).
To prove assertion (iii), we fix $j.$ Given $x\in \mathbb R^n,$ it follows from assertion (i) that there exists a subset $L_j (x)$ of $L_j$ such that $$x\in\underset{l\in L_j(x)}{\cap}I_{l,j}.$$ We will show that there is a positive integer $N$ independent of $\delta$ such that $Card \hskip 2truemm L_j (x)\leq N$ for all $j\in \mathbb N$ and $x\in \mathbb R^n.$ It follows from Lemma 2.4 and Lemma 2.5 that for every $l\in L_{j},$ $${\mathbf B}_{\frac \delta {2R^2}} (x_{l, j} +iy_j) \subset I'_{l, j} \times B_{\frac \delta {2R}} (y_j).$$ So the balls ${\mathbf B}_{\frac \delta {2R^2}} (x_{l, j} +iy_j), \hskip 2truemm l\in L_{j},$ are pairwise disjoint, since the sets $I'_{l,j}$ are. Moreover, for every $l\in L_j(x),$ we have $$ \begin{array}{clcr} {\mathbf B}_{\frac \delta {2R^2}} (x_{l, j} +iy_j) &\subset \{\xi +i\sigma \in T_\Omega: \|g_{j}^{-1}(\xi-x)\| <\frac{3\delta}{2R} \hskip 2truemm {\rm and} \hskip 2truemm \sigma \in B_{\frac {3\delta} {2R}} (y_j)\}\\ & \subset {\mathbf B}_{\frac {3\delta}2} (x+iy_j). \end{array} $$ For the first inclusion, we applied the triangle inequality. We obtain $$\underset{l\in L_{j}(x)}{\cup}\mathbf{B}_{\frac{\delta}{2R^2}}(x_{l,j}+iy_j)\subset\mathbf{B}_{ \frac{3\delta}{2} }(x+iy_j).$$ We call $m$ the invariant measure on $T_\Omega$ given by $$dm(\xi+i\sigma)={\Delta^{-\frac{2n}{r}}(\sigma)}d\xi d\sigma.$$ We conclude that $$Card \hskip 1truemm L_j (x) \leq \frac {m\left (\mathbf B_{\frac {3\delta}2} (i\mathbf e)\right)}{m\left (\mathbf B_{\frac {\delta}{2R^2}} (i\textbf e)\right )},$$ because $$m\left (\underset{l\in L_{j}(x)}{\cup}\mathbf{B}_{\frac{\delta}{2R^2}}(x_{l,j}+iy_j)\right )=Card\hskip 1truemm L_j (x) \times m\left (\mathbf{B}_{\frac{\delta}{2R^2}}(i\mathbf e)\right ) \leq m\left ({\mathbf B}_{\frac {3\delta}{2}} (i\mathbf e)\right ).$$ We finally prove that the collection $\{I_{l, j}\}_{l, j}$ is countable. It suffices to show that for each $j,$ the collection $\{I_{l, j}\}_{l\in L_j}$ is countable. We fix $j.$ To every set $I'_{l, j},$ we assign a point of $\mathbb Q^n$ belonging to $I'_{l, j}.$ Since the sets $I'_{l, j}$ are pairwise disjoint, this defines a one-to-one correspondence from the collection $\{I'_{l, j}\}_{l\in L_j}$ to a subset of $\mathbb Q^n.$ This shows that the collection $\{I'_{l, j}\}_{l\in L_j}$, and hence the collection $\{I_{l, j}\}_{l\in L_j}$ of the same cardinality, is at most countable. Moreover, the collection $\{I_{l, j}\}_{l\in L_j}$ is infinite: this is elementary, since each $I_{l,j}$ is bounded while $\bigcup \limits_{l\in L_j} I_{l, j}=\mathbb R^n$ is unbounded. The proof of the lemma is complete. \end{proof} \begin{rem} We just proved in Lemma \ref{lemma8} that for each $j=1, 2,\cdots,$ the index set $L_j$ is countable. In analogy with the one-dimensional case \cite{RT}, we took $L_j = \mathbb Z$ in the statement of Lemma \ref{lemma8} and in the statement of Theorem A. \end{rem} \subsection{A $\delta$-lattice in $T_\Omega$} \begin{defin} The sequence $\{z_{l,j}=x_{l,j} +iy_j\}_{l\in \mathbb Z, j\in \mathbb N}$ defined in Lemma \ref{lemma8} will be called a $\delta$-{\it lattice \hskip 1truemm in} $T_\Omega$. \end{defin} We have the following lemma. \begin{lemma} Let $\{z_{l,j}=x_{l,j}+iy_j\}_{l\in \mathbb Z, j\in \mathbb N}$ be a $\delta$-lattice in $T_\Omega$. There exists a positive constant $C=C(\delta, R)$ such that for all $l\in \mathbb Z$, $j\in \mathbb N,$ the following hold.
\begin{enumerate}[\upshape (a)] \item $$\int_{I_{l,j}}dx\leq C\Delta^\frac{n}{r}(y_j).$$ \item $$\int_{\R^n}\sum_{l\in L_j}\chi_{\{x\in I_{l,j}:\,\,d(x+iy,w)<1\}}(x)dx\leq C\Delta^\frac{n}{r}(y_j),\,\,\forall y\in B_\delta(y_j),\ \forall w\in T_\Omega .$$ \end{enumerate} \end{lemma} \begin{proof} We denote by $Det$ the usual determinant of an endomorphism of $\mathbb R^n.$ \begin{enumerate} \item[(a)] We set $u=g_j^{-1}(x-x_{l,j}).$ Then \begin{align*} \int_{I_{l,j}}dx&=\int_{\|u\|< \frac{\delta}{R}} Det\hskip 1truemm (g_j)du\\ &=\Delta^\frac{n}{r}(y_j)\int_{\|u\|<\frac{\delta}{R}}du=C\Delta^\frac{n}{r}(y_j). \end{align*} For the second equality, we applied formula (\ref{Delta}). This proves assertion (a). \item[(b)] By assertion (iii) of Lemma \ref{lemma8}, we have $$\sum_{l\in L_j}\chi_{I_{l,j}} (x) \leq N$$ for every $j.$ Then \begin{align*} \int_{\R^n}\sum_{l\in L_j}\chi_{\{x\in I_{l,j}:\,\,d(x+iy,w)<1\}}(x)dx&\leq N\int_{\{x\in \mathbb R^n: \hskip 1truemm x+iy \in \textbf B_1 (w)\}} dx. \end{align*} We set $w=u+iv.$ By Lemma 2.4, we have the implication $$x+iy\in {\textbf B}_1 (w) \Rightarrow ||g^{-1} (x-u)||<R \hskip 2truemm {\rm and} \hskip 2truemm y\in B_R (v),$$ with $\hskip 2truemm g\cdot \textbf e = v.$ So $$\int_{\{x\in \mathbb R^n: \hskip 2truemm x+iy \in \textbf B_1 (w)\}} dx \leq \int_{\{x\in \mathbb R^n: \hskip 2truemm ||g^{-1} (x-u)||<R\}} dx = CDet\hskip 1truemm (g) = C\Delta^{\frac nr} (v).$$ But $d_\Omega (y, y_j) < \delta$ and $d_\Omega (y, v) < R.$ This implies that $d_\Omega(v, y_j) < \delta + R.$ Hence $\Delta^{\frac nr} (v) \leq C \Delta^{\frac nr} (y_j)$ by Lemma 2.1. This gives assertion (b). \end{enumerate} \end{proof} \section{Atomic decomposition} \subsection{The sampling theorem} We first record the following lemma (see e.g. \cite{BBGNPR}). \begin{lemma}\label{lemma10} Let $1\leq p < \infty.$ Given $\delta\in (0,1)$, there exists a positive constant $C$ such that, for each holomorphic function $F$ in $T_\Omega$ we have \begin{enumerate}[\upshape (i)] \item $|F(z)|^p\leq C\delta^{-2n}\int_{\textbf {B}_\delta(z)}|F(u+iv)|^p\frac{du\,dv}{\Delta^\frac{2n}{r}(v)}$; \item if $d (z,\zeta)<\delta$ then $$|F(z)-F(\zeta)|^p\leq C\delta^{p}\int_{\textbf B_1(z)}|F(u+iv)|^p\frac{du\,dv}{\Delta^\frac{2n}{r}(v)}.$$ \end{enumerate} \end{lemma} For the second lemma, the reader should refer to \cite{BBGR}, Lemma 4.5. \begin{lemma}\label{lemma11} Suppose $\delta\in(0,1)$ and $1\leq p,q<\infty$. There exists a positive constant $C$ such that \begin{equation} \|F(\cdot+iy)\|^q_p\leq C\int_{B_\delta(y)}\|F(\cdot+iv)\|^q_p\frac{dv}{\Delta^\frac{n}{r}(v)} \end{equation} for every holomorphic function $F$ on $T_\Omega$ and every $y\in\Omega$. \end{lemma} The following is our sampling theorem. \begin{thm}\label{sampling} Let $\delta \in (0, 1)$ satisfy the assumption of Lemma 2.3 and let $\{z_{l,j}=x_{l,j}+iy_j\}_{l\in\Z, \hskip 1truemm j\in\N}$ be a $\delta$-lattice in $T_\Omega$.
Let $1\leq p,q<\infty$ and let $\textbf s \in \mathbb R^r$ be such that $s_k > \frac {n_k}2, \hskip 2truemm k=1,\cdots,r.$ There exists a positive constant $C_\delta = C_\delta (\s, p, q)$ such that for every $F\in A_\s^{p,q}$, we have \begin{equation} \sum_j\left(\sum_l|F(z_{l,j})|^p\right)^\frac{q}{p}\Delta_{\s+\frac{nq}{rp}}(y_j)\leq C_\delta\|F\|_{A_\s^{p,q}}^q. \end{equation} Moreover, if $\delta$ is small enough, the converse inequality \begin{equation}\label{eqq} \|F\|_{A_\s^{p,q}}^q\leq C_\delta \sum_j\left(\sum_l|F(z_{l,j})|^p\right)^\frac{q}{p}\Delta_{\s+\frac{nq}{rp}}(y_j) \end{equation} is also valid. \end{thm} \begin{proof} From Lemma \ref{lemma10} we have \begin{equation}\label{equation5} |F(z_{l,j})|^{p}\leq C\delta ^{-2n}\int_{{\textbf B}_ \frac{\delta}{2R^2}(z_{l,j})}|F(u+iv)|^p\,\frac{du\,dv}{\Delta^{\frac{2n}{r}}(v)}. \end{equation} It follows from the inclusion $\textbf{B}_ \frac{\delta}{2R^2}(z_{l,j})\subset\left\{u+iv:\,\,u\in I'_{l,j},\,\,v\in B_\frac{\delta}{2R}(y_j)\right\}$ that \begin{equation}\label{equation6} |F(z_{l,j})|^{p}\leq C\delta ^{-2n}\int _{I'_{l,j} }\,du\int_{ B_\frac{\delta}{2R}(y_j) }|F(u+iv)|^p\,\frac{dv}{\Delta^\frac{2n}{r}(v)}. \end{equation} From the equivalence of $\Delta(v)$ and $\Delta(y_j)$ whenever $v\in B_\frac{\delta}{2R}(y_j),$ we obtain that \begin{equation}\label{equation6'} |F(z_{l,j})|^{p}\leq \frac{C\delta ^{-2n}}{\Delta^\frac{2n}{r}(y_j)}\int _{I'_{l,j} }\,du\int_{ B_\frac{\delta}{2R}(y_j) }|F(u+iv)|^p\,dv. \end{equation} Next, a successive application of Lemma \ref{lemma8}, Lemma 2.3 and the non-increasing property of the function $\Omega\ni v\mapsto\|F(\cdot+iv)\|_p^p$ gives the existence of a positive constant $\gamma$ such that \begin{align*} \sum \limits_{l\in L_j}|F(z_{l,j})|^{p}&\leq \frac{C\delta ^{-2n}}{\Delta^\frac{2n}{r}(y_j)}\int _{\R^n }\,du\int_{ B_\frac{\delta}{2R}(y_j) }|F(u+iv)|^p\,dv\\ &=\frac{C\delta ^{-2n}}{\Delta^\frac{2n}{r}(y_j)}\int_{ B_\frac{\delta}{2R}(y_j) }\|F(\cdot+iv)\|_p^p\,dv\\ &\leq \frac{C\delta ^{-2n}}{\Delta^\frac{2n}{r}(y_j)}\int_{ B_\frac{\delta}{2R}(y_j) }\|F(\cdot+i\gamma y_j)\|_p^p\,dv\\ &\leq \frac{C\delta ^{-2n}}{\Delta^\frac{n}{r}(y_j)}\|F(\cdot+i\gamma y_j)\|_p^p. \end{align*} Finally, we obtain \begin{equation}\label{equation7} \sum_j\left(\sum_l|F(z_{l,j})|^p\right)^\frac{q}{p}\Delta_{\s+\frac{nq}{rp}}(y_j)\leq C_\delta^{\frac qp} \sum_j\|F(\cdot+i\gamma y_j)\|_p^q\Delta_\s (y_j). \end{equation} We define the holomorphic function $F_\gamma$ by $$F_\gamma(x+iy)=F(\gamma(x+iy)).$$ By Lemma \ref{lemma11}, we get \begin{equation}\label{equation8} \|F(\cdot+i\gamma y_j)\|_p^q =\gamma^{\frac {nq}p}\|F_{\gamma}(\cdot +iy_{j})\|_{p}^{q}\leq C\gamma^{\frac {nq}p}\int _{ B_\frac{\delta}{2R^2}(y_j) }\|F_{\gamma}(\cdot +iy)\|_{p}^{q}\frac{dy}{\Delta^{\frac{n}{r}}(y)}. \end{equation} It follows from (\ref{equation8}), Lemma \ref{lemma7} and the equivalence of $\Delta(y)$ and $\Delta(y_j)$ whenever $y\in B_\frac{\delta}{2R^2}(y_j)$ that \begin{eqnarray*} \underset{j}{\sum}\|F (\cdot +i\gamma y_{j})\|_{p}^{q}\Delta_{\s}(y_{j})&\leq C\gamma^{\frac {nq}p}\int _{\Omega}\|F_\gamma (\cdot +iy) \|_{p}^{q}\Delta_{\s -\frac{n}{r}}(y)\,dy\\ &=C\int_\Omega ||F(\cdot +i\gamma y)||_p^q \Delta_{\s -\frac nr} (y)dy. \end{eqnarray*} Moreover, taking $v=\gamma y$ we obtain \begin{equation}\label{equation9} \underset{j}{\sum}\|F(\cdot +i\gamma y_{j})\|_{p}^{q} \Delta_{\s}(y_{j}) \leq C(\gamma, \s, p, q)\int _{\Omega}\|F(\cdot +iv)\|_{p}^{q}\Delta_{\s -\frac{n}{r}}(v)\,dv.
\end{equation} So the estimate (3.2) is a direct consequence of (\ref{equation7}) and (\ref{equation9}). Conversely, a successive application of Lemma \ref{lemma8}, the triangle inequality and assertion (a) of Lemma 2.11 gives \begin{eqnarray*} \|F(\cdot+iy)\|_p^p &\leq C_p\left \{\underset{l\in L_j}{\sum}\int _{I_{l,j}}|F(x+iy)-F(z_{l,j})|^{p}\,dx+ \underset{l\in L_j}{\sum}|F(z_{l,j})|^p\int_{I_{l,j}}\,dx\right \}\\ &\leq C_p\left \{\underset{l\in L_j}{\sum}\int _{I_{l,j}}|F(x+iy)-F(z_{l,j})|^{p}\,dx+ \underset{l\in L_j}{\sum}|F(z_{l,j})|^p\Delta^{\frac nr}(y_j)\right \} \end{eqnarray*} for all $y\in \Omega$. In the sequel, for fixed $y \in \Omega,$ we set $$K_j(w)=\int_{\R^n}\sum_{l\in L_j}\chi_{\{x\in I_{l,j}:\,\,d(x+iy,w)<1\}}(x)dx$$ and we write $$ N_{p,q}(F) = \int_{y\in\Omega}\underset{j\in\mathbb{N}}{\sum}\chi_{ B_\delta(y_j)}(y)\times$$ $$\left(\int_{v\in\Omega}\int_{\mathbb{R}^{n}} K_j(u+iv) |F(u+iv)|^{p}\chi_{\{d_\Omega (y, v) < R\}}\frac{du\,dv}{\Delta (v)^{\frac{2n}{r}}} \right)^{\frac{q}{p}}\Delta_{\s-\frac{n}{r}}(y)dy. $$ Using assertion (ii) of Lemma 3.1, we obtain easily that \begin{align*} \|F\|_{A_{\s}^{p,q}}^{q}&\leq \int_{\underset{j}{\cup}B_\delta(y_j)}\|F(\cdot +iy)\|_{p}^{q}\Delta _{\s -\frac{n}{r}}(y)dy\\ &\leq C_{p,q}\delta ^{q} N_{p,q}(F)+C_{p,q}\underset{j} {\sum} \left( \underset{l}{\sum}|F(z_{l,j})|^{p}\right)^{\frac{q}{p}}\Delta _{ {\s}+\frac{nq}{rp}}(y_{j}). \end{align*} To prove (\ref{eqq}) it suffices to establish the following inequality: $$N_{p,q}(F)\leq C \|F\|_{A_{\s}^{p,q}}^{q}.$$ To this end, first observe that by assertion (b) of Lemma 2.11, we have $$K_j(w) \leq C\Delta^{\frac nr} (y_j), \quad \forall y\in B_\delta (y_j), \hskip 2truemm \forall w\in T_\Omega.$$ Now by Lemma 2.1, we have the equivalence $\Delta(v)\sim\Delta(y_j)\sim \Delta(y)$ whenever $v\in B_R (y)$ and $y\in B_\delta (y_j)$, with equivalence constants independent of $\delta.$ This combined with an application of assertion (iii) of Lemma \ref{lemma7} gives that $$ N_{p,q}(F)\leq CN\int_\Omega \left(\int_{d(v,y)<R}\|F(\cdot+iv)\|_p^p\frac{dv}{\Delta^\frac{n}{r}(v)}\right)^\frac{q}{p}\Delta_{\s-\frac{n}{r}}(y)dy. $$ Next, from the non-increasing property of the mapping $v\in\Omega\mapsto\|F(\cdot+iv)\|_p,$ Lemma \ref{lemma5} and the $G$-invariance of the measure $\frac {dv}{\Delta^{\frac nr} (v)}$ on $\Omega,$ there exists a positive constant $\gamma$ independent of $\delta$ such that \begin{equation*} N_{p,q}(F)\leq CN\int_\Omega\|F(\cdot+i\gamma y)\|_p^q \Delta_{\s-\frac{n}{r}}(y)dy. \end{equation*} Finally, substituting $t=\gamma y$ on the right-hand side of the previous inequality, we obtain $$N_{p,q}(F)\leq C (\gamma)\|F\|_{A_{\s}^{p,q}}^{q}.$$ \end{proof} \subsection{Proof of Theorem A} We can now prove the atomic decomposition theorem (Theorem A). Here is its more precise statement. \begin{thm}\label{thA'} Let $\delta \in (0, 1)$ and let $\{z_{l,j}=x_{l,j}+iy_j\}_{l\in\Z, \hskip 1truemm j\in\N}$ be a $\delta$-lattice in $T_\Omega.$ Let $\textbf s$ be a vector of $\R^r$ such that $s_k > \frac {n_k}2, \hskip 1truemm k=1,\cdots,r.$ Assume that $P_\s$ extends to a bounded operator on $L_\textbf s^{p,q}$. Then there exists a positive constant $C$ such that the following two assertions hold.
\begin{enumerate}[\upshape (i)] \item For every sequence $\{\lambda_{l,j}\}_{l\in\Z, \hskip 1truemm j\in\N}$ such that $$\sum_j\left(\sum_l|\lambda_{l,j}|^p\right)^\frac{q}{p}\Delta_{\textbf s+\frac{nq}{rp}}(y_j)<\infty,$$ the series $$\sum_{l,j}\lambda_{l,j}\Delta_{\textbf s+\frac{nq}{rp}}(y_j)B_\s(z,z_{l,j})$$ is convergent in $A_\textbf s^{p,q}$. Moreover, its sum $F$ satisfies the inequality $$\|F\|_{A_\textbf s^{p,q}}^q\leq C_\delta \sum_j\left(\sum_l|\lambda_{l,j}|^p\right)^\frac{q}{p}\Delta_{\textbf s+\frac{nq}{rp}}(y_j).$$ \item For $\delta$ small enough, every function $F\in A_\textbf s^{p,q}$ may be written as $$F(z)=\sum_{l,j}\lambda_{l,j}\Delta_{\textbf s+\frac{nq}{rp}}(y_j)B_\textbf s (z,z_{l,j}),$$ with $$\sum_j\left(\sum_l|\lambda_{l,j}|^p\right)^\frac{q}{p}\Delta_{\textbf s+\frac{nq}{rp}}(y_j)\leq C_\delta \|F\|_{A_\textbf s^{p,q}}^q.$$ \end{enumerate} \end{thm} \begin{proof}[Proof of Theorem \ref{thA'}] Let $p\in [1, \infty], \hskip 2truemm q\in(1,\infty),$ and call $p'$ and $q'$ their conjugate exponents, i.e.~$\frac{1}{p}+\frac{1}{p'}=1$ and $\frac{1}{q}+\frac{1}{q'}=1$. Let $\s \in \mathbb R^r$ be such that $s_k > \frac {n_k}2, \hskip 2truemm k=1,\cdots, r.$ Recall that (cf. \cite{DD}) if $P_\s:L^{p',q'}_\s\to A^{p',q'}_\s$ is bounded, then the dual space of $A^{p',q'}_\s$ identifies with $A^{p,q}_\s$ with respect to the pairing $$<F, G>_{\textbf s} = \int_{T_\Omega} F(x+iy)\overline {G(x+iy)}\Delta_{\textbf s - \frac nr} (y)\,dx\,dy.$$ Denote by $l^{p,q}_\s$ the space of complex sequences $\{\lambda_{l,j}\}_{l\in\Z, \hskip 1truemm j\in\N}$ such that $$||\{\lambda_{l,j}\}||_{l^{p,q}_\s} = \left (\sum_j \left (\sum_l|\lambda_{l,j}|^p\right )^\frac{q}{p}\Delta_{\s+\frac{nq}{rp}}(y_j)\right )^{\frac 1q}<\infty.$$ We have the duality $l^{p,q}_\s =(l_{\s}^{p' ,q' })'$ with respect to the pairing $$<\lambda,\,\,\mu>_{l_{\s}^{p',q'},\,\,l_{\s}^{p,q}} =\underset{l,j}{\sum}\lambda_{l,j} \overline{\mu}_{l,j}\Delta_{\s+\frac{n}{r}}(y_j).$$ Then from the first part of the sampling theorem, the operator \begin{eqnarray*} R:&A_{\s}^{p',q' }&\to l_{\s}^{p' ,q' }\\ &F&\mapsto RF=\{F(z_{l,j})\}_{l\in\Z,\,j\in\mathbb{N}} \end{eqnarray*} is bounded. So the adjoint operator $R^*$ of $R$ is also a bounded operator from $l^{p,q}_\s$ to $A^{p,q}_\s$. Its explicit formula is $$R^*(\{\lambda _{l,j}\})(z)=\sum _{l,j}\lambda _{l,j}\Delta_{\s +\frac{n}{r}} (y_{j})B_{\s}(z, z_{l,j}).$$ This completes the proof of assertion (i). From the second part of the sampling theorem, if $\delta$ is small enough, the adjoint operator $R^*:l^{p,q}_\s\to A^{p,q}_\s$ of $R$ is onto. We denote by $\mathcal{N}$ the subspace of $l^{p,q}_\s$ consisting of all sequences $\{\lambda_{l,j}\}_{l\in\Z,\,j\in\mathbb{N}}$ such that the mapping $$z\mapsto\sum_{l,j}\lambda_{l,j} \Delta_{\s+\frac{n}{r}}(y_j)B_\s (z,z_{l,j})$$ vanishes identically. Then the linear operator \begin{eqnarray*} \varphi:l_{\textbf s}^{p,q}/\mathcal{N}&\to& A_{\textbf s}^{p,q}\\ \{\lambda _{l,j}\}&\mapsto &\sum _{l,j}\lambda _{l,j}B_{\s}(z,z_{l,j})\Delta_{\s +\frac{n}{r}} (y_{j}) \end{eqnarray*} is a bounded isomorphism from the Banach quotient space $l_{\s}^{p,q}/\mathcal{N}$ to $A_{\textbf s}^{p,q}$. Since $\varphi$ is a bounded bijection between Banach spaces, its inverse $\varphi^{-1}$ is continuous by the open mapping theorem. This gives assertion (ii). \end{proof} \section{Interpolation} In this section we determine the interpolation space via the complex method between two mixed norm weighted Bergman spaces.
\subsection{Interpolation via the complex method between Banach spaces} Throughout this section we denote by $S$ the open strip in the complex plane defined by $$S=\{z=x+iy\in\C:\,\,\,0<x<1\}.$$ Its closure $\overline S$ is $$\overline S=\{z=x+iy\in\C:\,\,\,0\leq x\leq 1\}.$$ Let $X_0$ and $X_1$ be two compatible Banach spaces, i.e.~both are continuously embedded in a common Hausdorff topological vector space. Then $X_0+X_1$ becomes a Banach space with the norm $$\|f\|_{X_0+X_1}=\inf \hskip 1truemm \left \{\|f_0\|_{X_0}+\|f_1\|_{X_1}:\,\,f=f_0 +f_1, \hskip 1truemm f_0\in X_0, \hskip 1truemm f_1\in X_1\right \}.$$ We will denote by $\mathcal{F}(X_0,X_1)$ the space of analytic mappings \begin{eqnarray*} f:&\overline {S }&\rightarrow X_0+X_1\\ &\zeta &\mapsto f_\zeta \end{eqnarray*} with the following properties: \begin{enumerate} [\upshape (1)] \item $f$ is bounded and continuous on $\overline S;$ \item $f$ is analytic in $S;$ \item For $k=0, 1$ the function $y\mapsto f_{k+iy}$ is bounded and continuous from the real line into $X_k$. \end{enumerate} The space $\mathcal{F}(X_0,X_1)$ is a Banach space with the following norm: $$\|f\|_{\mathcal{F}}=\max \hskip 2truemm \left (\sup_{\Re e\hskip 1truemm \zeta =0}\|f_\zeta\|_{X_0},\sup_{\Re e\hskip 1truemm \zeta =1}\|f_\zeta\|_{X_1}\right ).$$ If $\theta\in (0,1)$, the complex interpolation space $[X_0,X_1]_\theta$ is the subspace of $X_0+X_1$ consisting of all $g$ such that $g=f_\theta$ for some $f\in \mathcal{F}(X_0,X_1)$. The space $[X_0,X_1]_\theta$ is a Banach space with the following norm: $$\|g\|_\theta=\inf \{\|f\|_{\mathcal{F}(X_0,X_1)}: g=f_\theta\}.$$ Referring to \cite{BL} and \cite{SW} (cf. also \cite{Z}), the complex method of interpolation is functorial in the following sense: if $Y_0$ and $Y_1$ denote two other compatible Banach spaces of measurable functions on $T_\Omega$ and $$T:\,\,X_0+X_1\to Y_0+Y_1$$ is a linear operator such that $T$ maps $X_0$ boundedly into $Y_0$ and $X_1$ boundedly into $Y_1$, then $T$ maps $[X_0,X_1]_\theta$ boundedly into $[Y_0,Y_1]_\theta$ for each $\theta\in (0,1)$. See \cite{BL} for more information about complex interpolation.\\ A classical example of interpolation via the complex method concerns $L^{p,q}$ spaces with a change of measures. We state it in our setting of a tube domain $T_\Omega$ over a symmetric cone $\Omega.$ \begin{thm}\label{mixed} \cite{C, SW1} Let $1\leq p_0, p_1, q_0, q_1 \leq \infty.$ Given two positive measurable functions (weights) $\omega_0, \hskip 2truemm \omega_1$ on $\Omega,$ then for every $\theta \in (0, 1),$ we have $$\left [L^{q_0}\big ((\Omega, \omega_0 (y)dy); L^{p_0} (\mathbb R^n, dx)\big ), L^{q_1}\big ((\Omega, \omega_1 (y)dy); L^{p_1} (\mathbb R^n, dx)\big )\right ]_\theta$$ $$=L^{q}\big ((\Omega, \omega (y)dy); L^p (\mathbb R^n, dx)\big )$$ with equal norms, provided that $$\frac{1}{p}=\frac{1-\theta}{p_0}+\frac{\theta}{p_1}, \qquad \frac{1}{q}=\frac{1-\theta}{q_0}+\frac{\theta}{q_1}, \qquad \omega^\frac{1}{q}=\omega_0^\frac{1-\theta}{q_0}\omega_1^\frac{\theta}{q_1}.$$ \end{thm} We finally record the Wolff reiteration theorem \cite{W, JNP}. \begin{thm}\label{W} Let $A_1, A_2, A_3, A_4$ be compatible Banach spaces.
Suppose that $[A_1, A_3]_\theta = A_2$ and $[A_2, A_4]_\varphi = A_3$ for some $\theta, \varphi \in (0, 1).$ Then $$[A_1, A_4]_\xi = A_2, \hskip 2truemm [A_1, A_4]_\psi = A_3$$ with $\xi = \frac {\theta \varphi}{1-\theta +\theta \varphi}, \hskip 2truemm \psi = \frac {\varphi}{1-\theta +\theta \varphi}.$ \end{thm} \subsection{Preliminary results on tube domains over symmetric cones} We recall the following notations given in the introduction: $$n_k = \frac {2(\frac nr -1)(k-1)}{r-1}$$ and $$m_k = \frac {2(\frac nr -1)(r-k)}{r-1}$$ for every $k=1,\cdots,r.$ We recall the following two results (\cite{NT, BGN}). \begin{lemma} Let $\s, \t\in \mathbb R^r$ be such that $s_k, t_k > \frac {n_k}2, \hskip 2truemm k=1, \cdots, r.$ Then the subspace $A^{2, 2}_{\t}\cap A^{p, q}_{\s}$ is dense in the weighted Bergman space $A^{p, q}_{\s}$ for all $1\leq p \leq \infty$ and $1\leq q <\infty.$ \end{lemma} \begin{cor}\label{cor} Let $\s \in \mathbb R^r$ be such that $s_k > \frac {n_k}2, \hskip 2truemm k=1, \cdots, r.$ Assume that $\t\in \mathbb R^r$ and $1\leq p, q <\infty$ are such that $P_{\t}$ extends to a bounded operator on $L_{\s}^{p,q}.$ Then $P_{\t}$ is the identity on $A_{\s}^{p,q};$ in particular $P_{\t} (L_{\s}^{p,q})=A_{\s}^{p,q}.$ \end{cor} The following theorem was proved in \cite{BGN}. \begin{thm}\label{main3} Let $\s, \t \in \mathbb R^r$ and $1\leq p, q\leq \infty$. Then the positive Bergman operator $P^+_\t$ defined by $$P^+_\t f(\xi+i\tau)=d_\t\int_\Omega\left(\left|\Delta_{-\t-\frac{n}{r}}\left (\frac{\cdot+i(\tau+v)}{i}\right )\right|*f(\cdot+iv)\right)(\xi)\Delta_{\t-\frac{n}{r}}(v)\,dv$$ is bounded on $L_{\s}^{p,q}$ when $t_k >\frac {n}r -1, \hskip 2truemm k=1,\cdots,r$ and $$\max \limits_{1\leq k \leq r} \left (1, \frac {s_k - \frac {n_k}2 +\frac {m_k}2}{t_k - \frac {n_k}2}\right ) < q< 1+ \min \limits_{1\leq k \leq r}\left (1, \frac {s_k - \frac {n_k}2}{\frac {m_k}2}\right ) \quad {\rm if} \hskip 2truemm q>1$$ $$\left [{\rm resp.} \hskip 2truemm s_k > \frac {n_k}2 \quad {\rm and} \quad t_k - s_k > \frac {m_k}2 \quad {\rm if} \quad q=1 \right].$$ In this case, $P_\t$ extends to a bounded operator from $L_{\s}^{p,q}$ onto $A_{\s}^{p,q}.$ \end{thm} \begin{proof} For $q>1,$ the first part of this theorem is just the case $\alpha = 0$ in Theorem 3.8 of \cite{BGN} for symmetric cones. The case $q=1$ is an easy exercise (cf. e.g. Theorem II.7 of \cite{BT}). The proof of the surjectivity of $P_\t$ uses Corollary \ref{cor}. \end{proof} \subsection{Proof of Theorem B} (1) We adopt the following notations: $$||g||_\theta = ||g||_{\left [L^{p_0, q_0}_{\s_0}, \hskip 1truemm L^{p_1, q_1}_{\s_1}\right ]_\theta}$$ and $$||g||^{anal}_\theta = ||g||_{\left [A^{p_0, q_0}_{\s_0}, \hskip 1truemm A^{p_1, q_1}_{\s_1}\right ]_\theta}.$$ It suffices to show the existence of a positive constant $C$ such that the following two estimates are valid: \begin{equation}\label{embed1} ||g||^{anal}_\theta \leq C||g||_{A^{p, q}_{\s}} \quad \quad \forall g\in A^{p, q}_{\s}, \end{equation} \begin{equation}\label{embed2} ||g||_{A^{p, q}_{\s}}\leq C||g||^{anal}_\theta \quad \quad \forall g\in \left [A^{p_0, q_0}_{\s_0}, A^{p_1, q_1}_{\s_1}\right ]_\theta. \end{equation} We first prove the estimate (\ref{embed1}).
By Theorem \ref{mixed}, we have $$[L^{p_0, q_0}_{\s_0}, L^{p_1, q_1}_{\s_1}]_\theta=L^{p, q}_{\s}$$ with equivalent norms, provided that $$\frac{1}{p}=\frac{1-\theta}{p_0}+\frac{\theta}{p_1}, \qquad \frac{1}{q}=\frac{1-\theta}{q_0}+\frac{\theta}{q_1}, \qquad \frac{\s}{q}=\frac{(1-\theta)\s_0}{q_0}+\frac{\theta \s_1}{q_1}.$$ In particular, for every $g\in L^{p, q}_{\s},$ we have \begin{equation}\label{lebesgue} ||g||_{L^{p, q}_{\s}} \simeq ||g||_\theta = \inf \left \{||f||_{\mathcal F\left (L^{p_0, q_0}_{\s_0}, \hskip 1truemm L^{p_1, q_1}_{\s_1}\right )}: g=f_\theta\right \}. \end{equation} By Theorem \ref{main3}, for $\t$ large (i.e.~each $t_k, \hskip 2truemm k=1,\cdots,r$ is large), the weighted Bergman projector $P_{\t}$ extends to a bounded operator from $L^{p_i, q_i}_{\s_i}$ onto $A^{p_i, q_i}_{\s_i}, \hskip 2truemm i=0, 1$ and hence from $L^{p, q}_{\s}$ onto $A^{p, q}_{\s}.$ Then by Corollary \ref{cor}, for every $g\in A^{p_i, q_i}_{\s_i}, \hskip 2truemm i=0, 1$ and for every $g\in A^{p, q}_{\s},$ we have $P_{\t} g = g.$\\ Now let $g\in A^{p, q}_{\s}.$ For $f\in \mathcal F(L^{p_0, q_0}_{\s_0}, L^{p_1, q_1}_{\s_1}),$ we define the mapping $$ P_{\t}\circ f: \hskip 2truemm \overline {S} \rightarrow A^{p_0, q_0}_{\s_0}+ A^{p_1, q_1}_{\s_1} $$ by $(P_{\t}\circ f)_\zeta = P_{\t}\circ f_\zeta.$ Then $P_{\t} \circ f \in \mathcal F(A^{p_0, q_0}_{\s_0}, A^{p_1, q_1}_{\s_1})$ and if $f_\theta = g,$ we have $(P_{\t} \circ f )_\theta = P_{\t} \circ f_\theta = P_{\t} g = g.$ So $$ \begin{array}{clcr} ||g||^{anal}_\theta &:= \inf \hskip 1truemm \{||\varphi||_{\mathcal F\left (A^{p_0, q_0}_{\s_0}, \hskip 1truemm A^{p_1, q_1}_{\s_1}\right )}: g=\varphi_\theta\}\\ &\leq \left \Vert P_\t \circ f \right \Vert_{\mathcal F\left (A^{p_0, q_0}_{\s_0}, \hskip 1truemm A^{p_1, q_1}_{\s_1}\right )}\\ &:= \max \hskip 1truemm \left \{\sup \limits_{\Re e \hskip 1truemm \zeta =0} ||(P_\t \circ f)_\zeta||_{A^{p_0, q_0}_{\s_0}}, \sup \limits_{\Re e \hskip 1truemm\zeta =1} ||(P_\t \circ f)_\zeta||_{A^{p_1, q_1}_{\s_1}}\right \} \end{array} $$ for every $f\in \mathcal F(L^{p_0, q_0}_{\s_0}, L^{p_1, q_1}_{\s_1})$ such that $f_\theta = g.$ By Theorem \ref{main3}, we get $$||g||^{anal}_\theta \leq C_\t \inf \hskip 1truemm \{||f||_{\mathcal F(L^{p_0, q_0}_{\s_0}, \hskip 1truemm L^{p_1, q_1}_{\s_1})}: f_\theta =g\} \sim C_\t ||g||_{L^{p, q}_{\s}}.$$ This proves the estimate (\ref{embed1}). We next prove the estimate (\ref{embed2}). Let $g\in [A^{p_0, q_0}_{\s_0}, A^{p_1, q_1}_{\s_1}]_\theta.$ We first suppose that $||g||^{anal}_\theta =0,$ i.e.
$g=0$ in the Banach space $\left [A^{p_0, q_0}_{\s_0}, A^{p_1, q_1}_{\s_1}\right ]_\theta.$ We notice that \begin{equation}\label{notice} ||\varphi||_{\mathcal F(A^{p_0, q_0}_{\s_0}, \hskip 1truemm A^{p_1, q_1}_{\s_1})}=||\varphi||_{\mathcal F\left (L^{p_0, q_0}_{\s_0}, \hskip 1truemm L^{p_1, q_1}_{\s_1}\right )} \end{equation} for all $\varphi \in \mathcal F\left (A^{p_0, q_0}_{\s_0}, A^{p_1, q_1}_{\s_1}\right ).$ This implies that $$||g||_{\left [L^{p_0, q_0}_{\s_0}, \hskip 1truemm L^{p_1, q_1}_{\s_1}\right ]_\theta}\leq ||g||_{\left [A^{p_0, q_0}_{\s_0}, \hskip 1truemm A^{p_1, q_1}_{\s_1}\right ]_\theta}$$ and hence $||g||_{\left [L^{p_0, q_0}_{\s_0}, L^{p_1, q_1}_{\s_1}\right ]_\theta} =0.$ By the estimate (\ref{lebesgue}), we obtain $||g||_{L^{p, q}_{\s}}=0.$ We next suppose that $0< ||g||^{anal}_\theta < \infty.$ There exists $\varphi \in \mathcal F(A^{p_0, q_0}_{\s_0}, A^{p_1, q_1}_{\s_1})$ such that $g=\varphi_\theta$ and $||\varphi||_{\mathcal F\left (A^{p_0, q_0}_{\s_0}, A^{p_1, q_1}_{\s_1}\right )}\leq 2||g||^{anal}_\theta.$ By (\ref{lebesgue}) and (\ref{notice}), we obtain: $$||g||_{A^{p, q}_{\s}}=||g||_{L^{p, q}_{\s}}\lesssim ||g||_{\left [L^{p_0, q_0}_{\s_0}, L^{p_1, q_1}_{\s_1}\right ]_\theta} \lesssim ||\varphi||_{\mathcal F\left (A^{p_0, q_0}_{\s_0}, A^{p_1, q_1}_{\s_1}\right )} \leq 2||g||^{anal}_\theta.$$ This proves the estimate (\ref{embed2}). \vskip 2truemm (2) In this assertion, we have $\s_0=\s_1=\s.$ The weighted Bergman projector $P_{\s}$ extends to a bounded operator from $L^{p_i, q_i}_{\s}$ onto $A^{p_i, q_i}_{\s}, \hskip 2truemm i=0, 1$ and hence from $L^{p, q}_{\s}$ onto $A^{p, q}_{\s}.$ Then by Corollary \ref{cor}, for every $g\in A^{p, q}_{\s},$ we have $P_\s g = g.$ The proof of assertion (2) is the same as the proof of assertion (1) with $\t=\s$ in the present case. More precisely, for the proof of the estimate (\ref{embed1}), we replace the mapping $$ P_{\t}\circ f: \hskip 2truemm \overline {S} \rightarrow A^{p_0, q_0}_{\s_0}+ A^{p_1, q_1}_{\s_1} $$ with $f\in \mathcal F(L^{p_0, q_0}_{\s_0}, L^{p_1, q_1}_{\s_1}),$ by the mapping $$ P_{\s}\circ f: \hskip 2truemm \overline {S} \rightarrow A^{p_0, q_0}_{\s}+ A^{p_1, q_1}_{\s} $$ with $f\in \mathcal F(L^{p_0, q_0}_{\s}, L^{p_1, q_1}_{\s}).$ The proof of the estimate (\ref{embed2}) remains the same. \vskip 2truemm (3) We are going to prove the following more precise statement.
\begin{thm}\label{th46} Let $\s \in \mathbb R^r$ be such that $s_k> \frac nr -1, \hskip 2truemm k=1,\cdots, r.$ Let $1\leq p_0, p_1\leq\infty$ and let $q_0, q_1$ be such that $1\leq q_0 < q_{\s}\leq q_1 <\infty.$ Assume that $P_{\s}$ extends to a bounded operator on $L^{p_1, q_1}_{\s}.$ Let $\theta, \varphi \in (0, 1)$ be related by the equation $$(\star) \quad \quad \frac 1{2} = \frac {1-\theta}{q_0}+\theta \left (\frac {1-\varphi}{2}+\frac {\varphi} {q_1}\right )$$ and assume that $$(\star \star) \quad \quad\varphi <\frac {\frac 12 - \frac 1{q_{\s}}}{\frac 12 - \frac 1{q_1}}.$$ Then for $\xi = \frac {\theta \varphi}{1-\theta +\theta \varphi}, \hskip 2truemm \psi = \frac {\varphi}{1-\theta +\theta \varphi},$ we have $$[A_{\s}^{p_0,q_0}, A_{\s}^{p_1,q_1}]_\xi=A_{\s}^{p_2, 2} \quad {\rm and} \quad [A_{\s}^{p_0,q_0}, A_{\s}^{p_1,q_1}]_\psi=A_{\s}^{p_3, q_3}$$ with equivalent norms, with $$(\star \star \star) \quad \quad \frac 1{p_2}=\frac {1-\xi}{p_0}+\frac {\xi}{p_1}$$ and $$(\star \star \star \star) \quad \left \{ \begin{array}{clcr} \frac 1{p_3} &= \frac {1-\varphi}{p_2}+\frac \varphi {p_1}\\ \frac 1{q_3} &= \frac {1-\varphi}{2}+\frac {\varphi} {q_1} \end{array} \right . . $$ \end{thm} \begin{proof} We apply the Wolff reiteration theorem (Theorem \ref{W}) with $A_1 = A^{p_0, q_0}_\s, \hskip 2truemm A_2 = A^{p_2, 2}_\s, \hskip 2truemm A_3 = A^{p_3, q_3}_\s$ and $A_4 = A^{p_1, q_1}_\s.$ On the one hand, we observe that $q_\s >2$ and hence the couple $(p_2, 2)$ satisfies the condition $$\frac 1{q_{\s} (p_2)} <\frac 12 < 1 - \frac 1{q_{\s} (p_2)}$$ of Theorem 1.1. So $P_\s$ extends to a bounded operator on $L^{p_2, 2}_{\s}$; by assumption, $P_{\s}$ also extends to a bounded operator on $L^{p_1, q_1}_{\s}.$ We next apply assertion (2) of Theorem B to get the identity $[A_2, A_4]_\varphi = A_3$ with $p_3$ and $q_3$ defined by the system $(\star \star \star \star).$\\ On the other hand, the condition $(\star \star)$ and the definition of $q_3$ given by the second equality of $(\star \star \star \star)$ imply that $1<q_3<q_{\s}.$ We recall that $1<q_0<q_{\s}.$ Then by assertion (1) of Theorem B, we obtain the identity $[A_1, A_3]_\theta = A_2$ with $$ \left \{ \begin{array}{clcr} \frac 1{p_2} &= \frac {1-\theta}{p_0}+\frac \theta {p_3}\\ \frac 1{2} &= \frac {1-\theta}{q_0}+\frac {\theta} {q_3} \end{array} \right . $$ The latter identity and the second identity of $(\star \star \star \star)$ give the relation $(\star).$ The former identity and the first identity of $(\star \star \star \star)$ give the relation $(\star \star \star).$ \end{proof} \vskip 2truemm {\bf Question.} Can Theorem \ref{th46}, and consequently Theorem B, be extended to other values of the interpolation parameters $p_0, q_0, \s_0, p_1, q_1, \s_1$? \vskip 2truemm {\bf Final remark.} We recall that $g\in [A_{{\s}_0}^{p_0,q_0}, A_{{\s}_1}^{p_1,q_1}]_\theta$ if there exists a mapping $f\in \mathcal F (A_{{\s}_0}^{p_0,q_0}, A_{{\s}_1}^{p_1,q_1})$ such that $f_\theta = g.$ For ${\s}_0 = {\s}_1$ real and $p_i = q_i, \hskip 2truemm i=0,1,$ an explicit construction was presented in \cite{BGN1} for such a mapping $f$ in terms of an analytic family of operators and the atomic decomposition of the relevant (usual) Bergman spaces and this construction was generalized in \cite{G} to mixed norm Bergman spaces associated to the same scalar parameter $\s=(\nu,\cdots,\nu)$.
It may be interesting to extend this construction to mixed norm Bergman spaces $A_{{\s}_0}^{p_0,q_0}, A_{{\s}_1}^{p_1,q_1}$ associated to more general vectors ${\s}_0, {\s}_1 \in \mathbb R^r.$ \bibliographystyle{plain}
\section{Introduction} In the determination of the parton distribution functions (PDFs) of the proton from fitting to the available deep-inelastic and related hard-scattering data, a long-standing question is the extent to which the limitations of a fixed form of the input parameterisations affect the best fit and the uncertainty of the resulting PDFs. It is certainly the case that various groups performing `global' PDF analyses have had to introduce new parameters to facilitate a good-quality fit to some new data, either because they probe a new PDF combination or a new kinematic range, or simply because new data are much more precise than previous measurements. This has resulted in most groups performing fits to a wide variety of data sets \cite{Martin:2009iq,Lai:2010vv,Alekhin:2012ig,JimenezDelgado:2008hf} using about 4--6 free parameters for each type of parton.\footnote{The HERAPDF fit \cite{Aaron:2009aa} uses fewer free parameters in their study. However, in that analysis the effect of adding extra parameters is included as part of the additional ``parameterisation'' uncertainty.} The NNPDF group \cite{Ball:2012cx} circumvents this issue by using effectively an extremely large and flexible parameterisation, but in order to avoid fitting all the fluctuations in data, they must split data into training and validation sets and have algorithms which determine the methods of both convergence and `stopping'. This means they do not have an easily identifiable `best fit' and it is very difficult to compare the sources of their PDF uncertainty to those for other groups. Indeed, there has been clear sensitivity to their convergence and `stopping' algorithms, though this has been quite small since the set in \cite{Ball:2010de}. It is hypothesised that the lack of parameter flexibility is part of the reason for the need for a `tolerance', or the use of `$\Delta \chi^2 > 1$', to obtain uncertainties in the MSTW (and CTEQ) fits. However, studies so far suggest that, while this is probably a component, it is not the whole, or even the dominant, reason for this inflation of $\Delta \chi^2$ \cite{Pumplin:2009bb,Watt:2012tq} (see, for example, a discussion of relative uncertainties between sets in \cite{DeRoeck:2011na}). So far, there has been surprisingly little investigation of the effect of changing the form of the input parameterisations on the uncertainties of PDF sets based on a best fit and an expansion about this central PDF set. There are studies by Pumplin \cite{Pumplin:2009bb}, by Glazov, Moch and Radescu \cite{Glazov:2010bw}, and one sentence in the MSTW conference proceedings \cite{Thorne:2010kj}. In this article we investigate the effect of extending the MSTW parameterisation of the input PDFs by changing the interpolating polynomial $(1+\epsilon x^{0.5}+\gamma x)$, which was introduced for separate up and down valence and for sea quarks in \cite{Martin:1994kn}, to a term including up to an $n^{\rm th}$-order Chebyshev polynomial. First, we investigate the most appropriate order of polynomial to use such that sufficient flexibility is achieved, but not so much that one is in danger of fitting fluctuations in the data. To study this, general functions of a suitable shape are generated, and pseudo-data scattered about them are fitted. We conclude that about a $4^{\rm th}$-order polynomial should generally be adequate. We then try using this type of polynomial in fits to real data, first for the two valence distributions, but also additionally for the sea distribution.
We see a significant improvement in the $\chi^2$ for the best global fit at both stages, but the only significant change in the PDFs is for the $u_V$ distribution for $x<0.03$ at high $Q^2\sim 10^4~{\rm GeV}^2$, or slightly higher $x$ at low $Q^2$. At present, and for at least the short-term future, we will have to continue to include the existing deep-inelastic scattering and Drell--Yan data from deuteron targets to achieve a determination of the PDFs of the different quark flavours, particularly at moderate and large values of $x$. It is therefore important to repeat our previous study of improving the nonperturbative corrections to the deuteron structure functions \cite{Thorne:2010kj} from the default \cite{Badelek:1994qg} first used in \cite{Martin:1994kk}, but now using the extended Chebyshev parametric forms of the input PDFs. The results are rather more successful than when using our standard PDF parameterisation, with the deuteron correction being rather similar to that expected from various models, with little variation when different assumptions are made. The change in deuteron corrections is found to change the $d_V$ distribution to a fairly significant extent. For the Chebyshev input parameterisations, without and with the additional freedom in deuteron corrections, we demonstrate that a suitable set of PDF uncertainty eigenvectors can be found, using 23 orthogonal directions in parameter space, rather than the 20 used in the standard MSTW2008 PDF set. Since it is the valence quark PDFs that are affected, we examine in detail the dependence of the lepton charge asymmetry (which results from $W^{\pm}$ production) on the quark decomposition. We show that the precise combination, and the $x$ range, of the PDFs probed is very dependent on the lepton rapidity and on the lepton $p_T$ cut applied to the data. The lepton charge asymmetry is particularly sensitive to the small-$x$ valence PDFs, and this sensitivity is seen to increase rather dramatically as the $p_T$ cut is raised. The predictions obtained from the new PDFs (using Chebyshev input parameterisations) are compared with both the Tevatron lepton asymmetry data \cite{Abazov:2008qv} that were not used in the MSTW2008 fit, and the recent LHC data, namely the ATLAS lepton rapidity data \cite{Aad:2011dm} and the CMS lepton asymmetry data \cite{Chatrchyan:2012xt}. In all cases the default MSTW2008 PDFs are not optimum for these lepton asymmetry data sets, whereas the new `Chebyshev' PDFs give much improved predictions even though they are obtained from a fit to {\it exactly} the same global data set as used to determine the MSTW2008 PDFs. Although the lepton charge asymmetry, which has extreme sensitivity to the low $x$ valence--sea decomposition, is better fit by the change in the PDFs, we check that the predicted values of the $W^{\pm},~Z$, Higgs, etc. total cross sections are essentially {\it unaltered}. To be precise, they are only changed by amounts far smaller than those due to the PDF uncertainty. \section{Parameterising Input PDFs with Chebyshev Polynomials} \begin{figure}[htb!] \centering \includegraphics[width=1.00\textwidth,clip]{Fig19a} \caption{Behaviour of Chebyshev polynomials $T_i[y(x)]$ of order $i=0$ to 5 as a function of $x$ for different choices of the expansion variable $y(x)$. The order of the polynomial increases as the structure extends to smaller $x$ values.
The order of the polynomial also increases across the visible spectrum (i.e.~dark blue to red).} \label{fig:Fig19} \end{figure} In a recent previous study \cite{Watt:2012tq}, an investigation of the uncertainty of our PDFs using Monte Carlo generated data replicas was performed, as opposed to the use of perturbations about the best fit as was done in the MSTW2008 analysis. Little change was seen when the full 28 MSTW PDF parameters were left free compared to the 20 used in eigenvector generation. To be precise, the uncertainty using $\Delta \chi^2=1$ was compared using the two approaches and a significant difference was seen only for the $u_V$ distribution for $x<0.03$ at the input scale $Q_0^2=1~{\rm GeV}^2$, where the uncertainty band expands a little, and for $d_V$ in some $x$ regions. It was particularly reassuring that there is little change in the uncertainty on the gluon distribution despite the number of free parameters being extended from 4 to 7. Since it is difficult to apply our previously used `dynamical tolerance' technique~\cite{Martin:2009iq} for the uncertainty determination to this Monte Carlo method, and since there was little change in the results, it was concluded that the eigenvector approach was justified and would continue to be used in our PDF analyses.\footnote{It was, however, shown how an arbitrary number of Monte Carlo sets of PDFs could be generated starting with the eigenvector definition.} Nevertheless, there was some evidence that an extended parameterisation might lead to some differences in the PDFs of the valence quarks. Hence, we start by investigating this hypothesis. For valence and sea quarks the default MSTW parameterisation for the input at $Q_0^2=1~{\rm GeV}^2$ was taken to be \begin{equation} xf(x,Q_0^2) = A (1-x)^{\eta}x^{\delta}(1+\epsilon x^{0.5}+\gamma x). \label{eq:MSTWparam} \end{equation} The $(1-x)$ power, $\eta$, allows a smooth interpolation to zero as $x \to 1$ and is inspired by number counting rules. The single small-$x$ power, $\delta$, is inspired by the behaviour predicted by Regge theory at small $x$. We found long ago, first at NNLO \cite{Martin:2000gq}, and also with improved data at NLO \cite{Martin:2001es}, that two terms with different small $x$ powers were needed for the gluon distribution to give the best fit. For the gluon the parameterisation is \begin{equation} xg(x,Q_0^2) = A_g (1-x)^{\eta_g}x^{\delta_g}(1+\epsilon_g x^{0.5}+\gamma_g x) +A_{g'}(1-x)^{\eta_{g'}}x^{\delta_{g'}}. \label{eq:MSTWgluon} \end{equation} The input parameterisations for some other distributions, $\bar d - \bar u$ and $s - \bar s$, take slightly different forms, but these are not very precisely determined, and we will not consider changes to these in this article. Similarly, as previously, $s + \bar s$ is taken to be the same as the sea parameterisation except for the normalisation and $(1-x)$ power, which are left free. The polynomials, interpolating between the high-$x$ and low-$x$ limits, have no real motivation other than the separation of half-integer powers being again inspired by Regge theory, and the two free parameters seeming to be sufficient to obtain an optimum fit. An investigation of introducing either an extra parameter of the form $ax^2$ or $ax^{0.25}$ into the valence quark parameterisation was reported very briefly in \cite{Thorne:2010kj} since neither had a significant effect on the fit quality -- at best they gave $\Delta \chi^2=-4$.
However, the introduction of an $ax^2$ term did change the small-$x$ $u_V$ distribution a little outside its uncertainty, and hence, as with the Monte Carlo study, suggests the uncertainty on this PDF, in the range $x < 0.03$, is underestimated. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{DashNLIF3Fig34c-40} \includegraphics[width=0.48\textwidth]{DashNLIF4Fig34c-40} \caption{Two examples of the fractional deviation between fitted function and true function for fits with increasing highest order of Chebyshev polynomials for valence-like distributions. The dash length decreases as the highest order of the polynomial increases. The order of the polynomial also increases across the visible spectrum (i.e.~dark blue to red).} \label{fig:Fig33pdf} \end{figure} Here we undertake a much more systematic study. As a basis for the interpolating polynomial we decide to use Chebyshev polynomials (though we looked at, and will mention briefly, other possibilities). So we write \begin{equation} xf(x,Q_0^2) = A(1-x)^{\eta}x^{\delta}\left(1+\sum_{i=1}^n a_i T_{i}(y(x))\right), \label{eq:generalparam} \end{equation} where $y$ is a function of $x$ to be specified. We keep the same form of the $(1-x)$ and $x$ powers in the high- and low-$x$ limits. One of the main motivations for the choice of Chebyshev polynomials is that not only do the endpoints of the polynomials at $y = \pm 1$ have magnitude 1, but so does each maximum and minimum between the endpoints. Other choices, such as Legendre polynomials, have maxima and minima with smaller magnitudes, so they have smaller variations in magnitude away from the endpoints. There is still a choice to make regarding the argument $y$ of the polynomial. We need $y=1$ at the lower limit of $x$, i.e.~$x=0$, and $y=-1$ at the other limit $x=1$, but there are many choices which could satisfy this. In practice the PDFs are constrained by data over a range of roughly $0.0001<x<1$, so we want a choice such that the polynomials vary throughout the whole of this range. The form of the first few polynomials is shown for various choices in Fig.~\ref{fig:Fig19}. Clearly $y=1-2x$ is too concentrated at high $x$ and $y=1-2x^{0.25}$ extends to too low $x$. An alternative of $y=\cos(\pi x)$ is very concentrated at high $x$. We choose $y=1-2\sqrt{x}$ as a convenient definition. This is the same choice as in the study reported in \cite{Pumplin:2009bb}. It is slightly different from the choice in \cite{Glazov:2010bw}, which used logarithmic dependence rather than powers of $x$, but the results are similar. A polynomial in $y \equiv 1-2\sqrt{x}$ also has the feature that it is equivalent to a polynomial in $\sqrt{x}$, the same as the default MSTW parameterisation, though for an $n^{\rm th}$-order Chebyshev polynomial the maximum power of $x$ is $x^{n/2}$. The half integer separation of terms is consistent with the Regge physics motivation of the MSTW parameterisation. Pumplin \cite{Pumplin:2009bb} has explained clearly why a parameterisation like (\ref{eq:generalparam}) is advantageous. Most previous parameterisations, including MSTW, have been based on interpolating functions like those in (\ref{eq:MSTWparam}) with a small number of parameters, $\epsilon,\gamma,...$. If the number of parameters is increased to allow more flexibility, the resulting fit becomes unstable, with parameters taking large values and with strong cancellations between the corresponding terms.
On the other hand, the parameters, $a_i$, in the Chebyshev form (\ref{eq:generalparam}) are much more convenient, and well-behaved, for fitting to the data. The requirement of smoothness in the input PDFs forces the values of the parameters $a_i$ to be reasonably small at large order $i$. The Chebyshev polynomials of increasingly large order, $n$, model the behaviour of the input distribution at an increasingly fine scale in $x$. However, it is still an open question as to how many parameters are needed to model a parton distribution to sufficient accuracy without also starting to fit fluctuations. So far the standard technique is to impose some artificial restriction such as a requirement on smoothness of the function. In order to test how many parameters are indeed needed for a sufficiently good fit, we generate pseudo-data for a valence quark input PDF, say $u_V$, scattered around a function with the general shape of a valence quark distribution obtained from a very large order polynomial $f(x)$ with smoothness constraints applied in order to stop it developing kinks. The function is constrained to give an integrated total of two valence quarks and the fits are all constrained in the same manner. The 1000 pseudo-data points are distributed evenly in $\ln(1/x)$ with their percentage error held constant at $3\%$, i.e.~${\rm error}_i = 0.03f(x_i)$, and with random scatter about the exact function according to the size of the uncertainty. We then find the best fit for this pseudo-data using a parameterisation of the form (\ref{eq:generalparam}) with increasing highest order of the Chebyshev polynomials. This procedure was repeated for a variety of different choices of starting function. Two typical results are shown in Fig.~\ref{fig:Fig33pdf}, which shows the percentage deviation of the fit function from the full function for two different starting functions. The highest order of the polynomial increases across the visible spectrum (i.e.~dark blue to red). Just using one term in the polynomial, i.e.~$1+a_1T_1(y)$, can give deviations of about $10\%$ over a wide range, but with $2$ terms in the polynomial this reduces to mainly a $\leq 2\%$ deviation (in most cases -- sometimes 2 polynomials can still give significantly larger deviations $\sim 5-10\%$). For $4$ terms there is generally $\leq 1\%$ deviation except at very high $x$. This does not improve very significantly with further terms added. This accuracy should be compared to the uncertainty in the MSTW2008 input PDFs. For valence quarks the 1 sigma uncertainty is at best just lower than $2\%$ for $u_V(x,Q_0^2)$ near $x=0.1$. For $d_V(x,Q_0^2)$ it is nearly always $> 3\%$.
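The essence of this pseudo-data exercise is easy to reproduce. The following sketch (in Python, using numpy and scipy) generates 1000 points evenly spaced in $\ln(1/x)$ with constant $3\%$ errors and fits the form (\ref{eq:generalparam}) with increasing highest order of the Chebyshev polynomials; the `true' function, its parameter values and the random seed are purely illustrative stand-ins for the smoothness-constrained high-order polynomial used above, and for brevity the valence number sum rule is not imposed. \begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def xf(x, pars, n):
    # x f(x) = A (1-x)^eta x^delta [1 + sum_{i=1}^n a_i T_i(1-2 sqrt(x))]
    A, eta, delta = pars[0], pars[1], pars[2]
    coeffs = np.concatenate(([1.0], pars[3:3 + n]))  # T_0 coefficient fixed to 1
    poly = np.polynomial.chebyshev.chebval(1.0 - 2.0*np.sqrt(x), coeffs)
    return A * (1.0 - x)**eta * x**delta * poly

# 1000 pseudo-data points, evenly spaced in ln(1/x), constant 3% errors
x = np.logspace(-4.0, np.log10(0.9), 1000)
truth = xf(x, [2.0, 3.5, 0.8, 0.3, -0.1, 0.05, -0.02], 4)  # illustrative shape
err = 0.03 * truth
data = truth + err * rng.standard_normal(x.size)

for n in range(1, 7):  # increasing highest order of the Chebyshev polynomial
    start = [2.0, 3.0, 0.5] + [0.0]*n
    fit = least_squares(lambda p, n=n: (xf(x, p, n) - data)/err, start)
    print(n, "chi2/N =", np.sum(fit.fun**2)/x.size)
\end{verbatim} With a smooth underlying function of this kind one sees the pattern described above: the $\chi^2$ per point drops dramatically in going from one to two terms and approaches unity by about four terms, beyond which additional terms risk fitting the random scatter.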
In this example, for two terms onwards there is a fairly random distribution. } \label{fig:Fig34pdf} \end{figure} In Fig.~\ref{fig:Fig34pdf} we see the $\chi^2$ distribution for increasing highest order of Chebyshev polynomials. This reflects the deviation from the original function, i.e.~for one term in the polynomial there are many points with high $\chi^2$, but the distribution becomes roughly as expected even with just two terms. There is no obvious structure as a function of $x$ in the figure and extremely few $\chi^2$ values, if any, take values greater than 10. The total $\chi^2$ improves dramatically when going from 1 to 2 terms in the polynomial. After this it decreases by a few units with each additional polynomial up to $6^{\rm th}$ order, though this is difficult to appreciate from the plots. In some cases after $n$ terms in the polynomial we start fitting noise, i.e.~the $\chi^2$ becomes lower than that for the true function. The number of polynomials required for this varies, but in the most extreme case this happened with just 2. Somewhere between 4--6 is more common, but this feature is not always present when using 6 terms. Fits were also performed using Legendre polynomials. Since a term including up to the $n^{\rm th}$-order Legendre polynomial is just a re-expression of a term including up to the $n^{\rm th}$-order Chebyshev polynomial, the fit quality is the same within numerical accuracy. However, the coefficients are more correlated in the case of the Legendre polynomials, presumably because each term has less variation with $x$ in this case. We have also checked that variation of the size of the error on the pseudo-data leads to no significant difference in the results, other than the values of the $\chi^2$, which becomes higher for poor fits as the error decreases and {\it vice versa}. For $3\%$ uncertainty, once the typical deviation is within a percent or so, the $\chi^2$ becomes close to one per point. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth,clip]{DashSea1bFig34c-40} \includegraphics[width=0.48\textwidth,clip]{DashSea3bFig34c-40} \caption{Two examples of the fractional deviation between fitted function and true function for fits with increasing highest order of Chebyshev polynomials for sea-like distributions. The dash length decreases as the highest order of the polynomial increases. The order of the polynomial also increases across the visible spectrum (i.e.~dark blue to red).} \label{fig:SeaFig34} \end{figure} The exercise is repeated for a sea quark type of distribution, i.e.~one falling more quickly at high $x$ with increasing $x$, and growing far more quickly at small $x$ with decreasing $x$, and for which there is no strong sum rule constraint. For this type of function the convergence to a very good fit is a little slower. Again one term in the polynomial gives a very poor fit and deviations $\sim 10\%$. This time the addition of another term reduces the deviation to 3--4$\%$ at most and 4 polynomials usually result in a deviation $\leq 2\%$ (except at high $x$). For deviations largely guaranteed to be $< 1\%$, 6 terms in the polynomial are usually required. The results for two examples are shown in Fig.~\ref{fig:SeaFig34}. The greater difficulty in obtaining the excellent description in this case is presumably due both to the lack of the sum rule constraint in the function and to the much wider variation in values for the PDF than the valence quark case.
However, we should note that in the case of the sea quark distribution (or gluon distribution) the minimum 1 sigma uncertainty in the MSTW2008 input PDFs is $\sim 5\%$. Hence, the deviations with 4 parameters are again much smaller than the intrinsic uncertainty in the function, and 4 parameters are very likely to be more than sufficient. In Fig.~\ref{fig:SeaFig33} we see the $\chi^2$ distribution for increasing highest order of Chebyshev polynomials. Again we see that the distribution becomes essentially random with 3--4 terms in the polynomial, but this happens a little more slowly than in the case of a valence-like distribution. The total $\chi^2$ values suggest that there is little evidence for over-fitting of the sea distribution until we get to at least 6 terms in the Chebyshev polynomial. \begin{figure}[htb!] \centering \includegraphics[width=0.32\textwidth,clip]{Sea2bFig33a} \includegraphics[width=0.32\textwidth,clip]{Sea2bFig33b} \includegraphics[width=0.32\textwidth,clip]{Sea2bFig33c}\\ \includegraphics[width=0.32\textwidth,clip]{Sea2bFig33d} \includegraphics[width=0.32\textwidth,clip]{Sea2bFig33e} \includegraphics[width=0.32\textwidth,clip]{Sea2bFig33f} \caption{Distribution in $\chi^2$ values versus $x$ for fits to a sea quark type of distribution with increasing highest order of Chebyshev polynomials, going from $1^{\rm st}$ (top left) from left to right for the first row, then from left to right for the second row until the $6^{\rm th}$ order. Note that the vertical axis is not the same in all plots as the number of points with very large $\chi^2$ decreases with the highest order of the polynomial. In this example, for four terms onwards there is a fairly random distribution. There is distinct structure for one term, and for two and even three terms a cluster of badly fit points at high $x$.} \label{fig:SeaFig33} \end{figure} \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth,clip]{DashUpV1Fig34c2-40} \includegraphics[width=0.48\textwidth,clip]{DashUpV2Fig34c-40} \caption{Two examples of the fractional deviation between fitted function and true function for fits with increasing highest order of Chebyshev polynomials for valence-like distributions with 1000 pseudo-data between $0.01< x < 0.68$. The dash length decreases as the highest order of the polynomial increases. The order of the polynomial also increases across the visible spectrum (i.e.~dark blue to red).} \label{fig:valrestrict} \end{figure} \begin{figure}[htb!] \centering \includegraphics[width=0.24\textwidth,clip]{UpV1-Fig33a} \includegraphics[width=0.24\textwidth,clip]{UpV1-Fig33b} \includegraphics[width=0.24\textwidth,clip]{UpV1-Fig33c} \includegraphics[width=0.24\textwidth,clip]{UpV1-Fig33d} \caption{Distribution in $\chi^2$ values versus $x$ for fits to a valence quark type of distribution with 1000 pseudo-data between $0.01< x < 0.68$ for increasing highest order of Chebyshev polynomials, going from $1^{\rm st}$ to $4^{\rm th}$ order from left to right. Note that the vertical axis is not the same in all plots as the number of points with very large $\chi^2$ decreases with the highest order of the polynomial.
In this example, there is structure for one, two and to some extent three terms, and only with four terms is there a fairly random distribution of $\chi^2$ values.} \label{fig:valrestrictchi} \end{figure} We also investigate the case of fitting pseudo-data for a valence-like distribution, but where the pseudo-data are generated only for $0.01<x<0.68$, rather than for all $x$ down to $x=10^{-4}$. This is a more realistic situation, since valence quarks are only constrained by data in roughly this region. In this case two terms in the polynomial often give a very poor fit in some $x$ regions within the fit range, with deviations $\sim 10\%$. The addition of another term reduces the deviation to 4--5$\%$ at most and 4 polynomials usually result in a deviation $\leq 1\%$, except at very high $x$ and much lower $x$ than the range of data. The results for two examples are shown in Fig.~\ref{fig:valrestrict}. The addition of more than 4 terms improves the comparison to the true function for the range of $x$ containing data, but can increase the deviation in the very small-$x$ regime. It can also often lead to over-fitting of the data points, which presumably contributes to the variation in the very low-$x$ range. The $\chi^2$ distribution as a function of $x$ is shown in Fig.~\ref{fig:valrestrictchi}. As in the other examples, a uniform distribution sets in when there are 4 terms in the polynomial. Hence, again four terms seem sufficient, but there is more evidence that more than four terms corresponds to over-fitting in this case. \begin{figure}[htb!] \centering \includegraphics[width=0.6\textwidth,clip]{DashUpV1bFig34c-40} \caption{The deviation between fitted function and true function for fits with increasing highest order of Chebyshev polynomials for a valence-like distribution with 100 pseudo-data between $0.01< x < 0.68$. The dash length decreases as the highest order of the polynomial increases. The order of the polynomial also increases across the visible spectrum (i.e.~dark blue to red).} \label{fig:valrestrict100} \end{figure} Finally we try fitting to the same type of pseudo-data, i.e.~for a valence-like distribution with points $0.01<x<0.68$, but with only 100 pseudo-data points rather than 1000. The results are shown in Fig.~\ref{fig:valrestrict100}. In this case the convergence is quicker: only three terms in the polynomial give a deviation from the true function of $1\%$ or less in the range of $x$ where points exist. In this case there is then no further improvement with extra parameters. For 100 points the match to the true function is already as good as the representation of the true function by 100 data points allows. Overall we see that the quality of the fit obtained with a given number of terms in the Chebyshev polynomial depends on the shape of the PDF, on whether there is a constraint on the function being fitted, on the $x$ range of the data representing the function, and on the number of data points. However, it always seems to be the case that 4 terms are sufficient to get an accuracy considerably better than the uncertainty on the MSTW2008 input distributions. It also seems to be the case that having many more than 4 terms leads to a distinct danger of over-fitting. Also, if there are relatively few data points then it is impossible to get a very accurate representation of the true function, and again too many terms in the polynomial can lead to over-fitting and instability in extrapolation to ranges of $x$ outside the data constraint.
This is relevant for PDFs with relatively weak data constraints, such as $s - \bar s$, $\bar d -\bar u$ and possibly $s + \bar s$. \section{Impact of Extended Parameterisations on PDF fits} \begin{figure}[htb!] \centering \vspace{-6cm} \includegraphics[width=0.95\textwidth,clip]{mstwcpdiff1} \vspace{-6cm} \caption{The change in the valence PDFs extracted from the MSTW2008 type fit using Chebyshev polynomials for the valence quarks only (MSTW2008CPv) and for valence and sea quarks (MSTW2008CP) compared to the original MSTW2008 PDFs at NLO with their $68\%$ uncertainties given by the dot-dashed lines.} \label{fig:Fig38} \end{figure} Having studied the effects of fitting different `extended' input PDF parameterisations to pseudo-data, we now investigate their effect in the real case of fitting to experimental data. The experimental data points are scattered over a wide range of $Q^2$ values, so both the evolution and the input distributions are required to be correct. Also, the data points are for structure functions, and other related high-energy scattering data, which, in general, not only depend on complicated combinations of PDF flavours, but are also related to them via the convolution with perturbative coefficient functions for the specific process. We perform the fits at next-to-leading order (NLO) in QCD perturbation theory, though MSTW also produce PDFs at leading order (LO) and next-to-next-to-leading order\footnote{At NNLO it is necessary to make some approximations in modelling unknown coefficient functions for some processes.} (NNLO), and we will discuss NNLO results, which are very similar, later. Here, we perform fits to exactly the same data as used for the MSTW2008 PDF analysis \cite{Martin:2009iq}, and adopt all the same theory decisions, e.g. heavy flavour schemes, nuclear target corrections, etc., but now with extended input PDF parameterisations. \begin{figure}[htb!] \centering \vspace{-6cm} \includegraphics[width=0.95\textwidth,clip]{mstwcpdiff2} \vspace{-6cm} \caption{The change in the sea and gluon PDFs extracted from the MSTW2008 type fit using Chebyshev polynomials for the valence quarks only (MSTW2008CPv) and for valence and sea quarks (MSTW2008CP) compared to the original MSTW2008 PDFs at NLO with their $68\%$ uncertainties shown by the dot-dashed lines. The gluon is shown at $Q^2=5~{\rm GeV}^2$ rather than the input scale of $Q^2=1~{\rm GeV}^2$ as the fact it goes negative at small $x$ at the latter $Q^2$ makes a ratio plot unclear.} \label{fig:Fig39} \end{figure} To begin, we apply an extended parameterisation with Chebyshev polynomials of highest order $n=4$ only for the valence quark PDFs: $u_V$ and $d_V$. The resulting improvement to the global fit is quite minor, corresponding to $\Delta \chi^2 =-4$ compared to a total of 2543 for 2699 data points in the MSTW2008 fit. There is a large improvement in the description of the BCDMS structure function data \cite{Benvenuti:1989rh}, but a deterioration in the fit to NMC structure function data \cite{Arneodo:1996qe}. This is very similar to the results obtained previously by adding just an $ax^2$ term to the valence parameterisations \cite{Thorne:2010kj}, and indeed this is expected, since Chebyshev polynomials in $(1-2\sqrt{x})$ with highest order $n=4$ add not only an $x^2$ term but also an $x^{1.5}$ term, as the short symbolic check below makes explicit. As in this previous study, the significant change is in $u_V(x)$ for $x \leq 0.03$ at $Q^2=10^4~{\rm GeV}^2$. However, it is a larger change than previously.
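The statement about which powers of $x$ the Chebyshev term introduces can be verified symbolically. A minimal sketch (in Python, using sympy; the recurrence is the standard one for Chebyshev polynomials of the first kind): \begin{verbatim}
import sympy as sp

x = sp.symbols('x', positive=True)
y = 1 - 2*sp.sqrt(x)

# T_0 = 1, T_1 = y, T_{i+1} = 2 y T_i - T_{i-1}
T = [sp.Integer(1), y]
for i in range(2, 5):
    T.append(sp.expand(2*y*T[-1] - T[-2]))

for i, t in enumerate(T):
    print(f"T_{i}(1 - 2*sqrt(x)) =", t)
# T_2 = 1 - 8*sqrt(x) + 8*x, while T_3 and T_4 contain terms up to
# x**(3/2) and x**2 respectively, i.e. half-integer powers up to x**(n/2).
\end{verbatim}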
The comparison to the MSTW2008 PDFs at $Q_0^2=1~{\rm GeV}^2$ and at $Q^2=10^4~{\rm GeV}^2$ with uncertainty bands is shown in Figs.~\ref{fig:Fig38} and~\ref{fig:Fig39}. A fit is also attempted using $y=(1-2x^{0.25})$ as the argument of the Chebyshev polynomials, which overlaps with using an extra $x^{0.25}$ term in the parameterisation as tried before. As in the previous study \cite{Thorne:2010kj}, this results in an improvement in the fit of less than one unit in $\chi^2$, and much less change in PDFs. Hence, the use of $y=(1-2\sqrt{x})$ receives further justification. We then also applied the extended Chebyshev interpolating polynomial to the sea distribution. For the sea, the MSTW2008 parameterisation took exactly the same form as for the valence quarks, i.e.~as in (\ref{eq:MSTWparam}), so the extended parameterisation also has the same form as that for the valence quark PDFs. The only difference is that there is no number sum rule directly constraining one parameter, unlike the valence quark PDFs where the normalisation is constrained. We apply the extended parameterisation for the sea distribution by replacing $(1+\epsilon x^{0.5}+\gamma x)$ by a term including Chebyshev polynomials with highest order $n=4$. We also tried extending the parameterisation for the gluon distribution. For the MSTW2008 gluon distribution we had the more flexible parameterisation in (\ref{eq:MSTWgluon}) where, in practice, the second term chooses a negative normalisation. The normalisation of the first term is set by the momentum sum rule for the PDFs, so in practice there are 7 free parameters. We replaced the polynomial in the first term by one including Chebyshev polynomials with highest order $n=4$, but both the quality of the fit and the resulting PDFs were essentially unchanged. This, together with the fact that the gluon already has 7 free parameters, suggests that the extended Chebyshev polynomial parameterisation is not necessary. However, using Chebyshev polynomials is a more efficient way of expressing the polynomial in the first term of (\ref{eq:MSTWgluon}), so we replaced the previous form by an entirely equivalent term using Chebyshev polynomials with highest order $n=2$. The resulting global fit for the parameterisation with the extended sea quark distribution, and the formally modified, but equivalent, gluon distribution, has $\Delta \chi^2 =-29$. The improvement is mainly for BCDMS structure function data and E866 Drell--Yan \cite{Webb:2003bj} and Drell--Yan asymmetry data \cite{Towell:2001nh}. Again there is a deterioration in the description of the NMC structure function data, but this time a slight improvement in Tevatron lepton asymmetry \cite{Acosta:2005ud,Abazov:2007pm} and $Z$ rapidity data \cite{Abazov:2007jy,Aaltonen:2010zza}. Details will be shown in Section 4. The PDFs are again shown in Figs.~\ref{fig:Fig38} and~\ref{fig:Fig39}, and compared to the MSTW2008 PDFs and their uncertainty. The change in $u_V$ is similar to the previous case. There is more change in $d_V$ this time, but generally within the $68\%$ uncertainty band. The sea quarks at input show some differences, sometimes a little outside the $68\%$ confidence level uncertainty, but this is essentially washed out by evolution for $Q^2=10^4~{\rm GeV}^2$. All other PDFs show changes that are very small compared to the MSTW2008 uncertainty.
In all our new fits we let $\alpha_S(M_Z^2)$ be a free parameter and in all cases it changes by only $0.0002$ or less, i.e.~a change much smaller than the uncertainty of $+0.0012$ or $-0.0015$ at NLO~\cite{Martin:2009bu}. The set of PDFs, with Chebyshev polynomials of highest order $n=4$ applied to the two valence quarks and sea, and Chebyshev polynomials of highest order $n=2$ to the gluon, is denoted by `MSTW2008CP' below. As well as studying the central values, we also investigate the uncertainties of the new PDFs. The standard MSTW2008 PDFs have 28 free parameters in the best fit, but 8 are held fixed when determining the uncertainty eigenvectors because there is too much correlation or anticorrelation between some of the parameters when all are left free. This freedom would have resulted in the 8 extra potential eigenvectors having an extremely non-quadratic behaviour in $\Delta \chi^2$; in general behaving quadratically only in the immediate vicinity of the minimum, with $\Delta \chi^2$ then increasing alarmingly away from it. These are examples of the cases where, within a limited range of parameter values, the fit quality changes extremely little with changes of some parameters. However, the actual PDFs tend to remain very similar as the parameters vary in these cases; it is simply a matter of the redundancy in the parameterisation, which allows almost identical PDFs with noticeably different parameter values. In the MSTW2008 analysis, all PDFs, except the sea (and $s-\bar s$ which only has two free parameters), have the small-$x$ and high-$x$ powers $\delta$ and $\eta$ free in the eigenvector determination, the small-$x$ power being replaced by the normalisation for the sea. For the gluon the 4 free parameters in the eigenvector determination are the two $\eta$ and $\delta$ values in (\ref{eq:MSTWgluon}). For valence and sea quarks we then also let the coefficient of the $x^{0.5}$ term in the polynomial, $\epsilon$, be free. For the determination of the uncertainties of the MSTW2008CP PDFs, we decide to impose more consistency between the PDFs, and for the sea quarks we let the parameter $\delta$ be free in the eigenvector determination, rather than the normalisation. For all the PDFs, other than valence quarks and the light sea, we make the same choices as usual. For the valence and sea quark PDFs we let the coefficients of the first and third Chebyshev polynomials, $a_1$ and $a_3$, be free in the eigenvector determination. Hence we have one more free parameter for each of these PDFs and consequently $23$ rather than 20 eigenvectors. Despite having one extra free parameter, the uncertainty on sea quarks for $x\sim 0.001$--$0.01$ at $Q^2=10^4~{\rm GeV}^2$ becomes a little smaller, but becomes noticeably larger for very low $x$. This is undoubtedly due to the exchange of the normalisation for the small-$x$ power as a free parameter. The most significant change in the uncertainty, however, is in the same PDF as for the change in the central value, i.e.~the small-$x$ $u_V$ distribution. The uncertainty starts to become larger with decreasing $x$ at a higher value of $x$, i.e.~about $x=0.01$. Indeed, it is markedly larger between $0.001$ and $0.01$ at $Q^2=10^4~{\rm GeV}^2$, where there was a rather tight `neck' in the MSTW2008 uncertainty of $u_V$ near $x=0.003$, and is significantly larger at very small $x$ values. Hence, in the new MSTW2008CP analysis, the increase in uncertainty in $u_V$ is more in line with where the data constraint finishes, i.e.~about $x=0.01$.
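For orientation, the mechanics of the eigenvector construction are standard: the Hessian of $\chi^2$ with respect to the free parameters is diagonalised, and the best fit is displaced along each eigenvector until $\Delta\chi^2$ reaches the chosen tolerance. The following sketch (in Python; the finite-difference step, the fixed tolerance and the {\tt chi2} function are illustrative assumptions -- the MSTW analyses instead use the `dynamical tolerance' of \cite{Martin:2009iq}) shows the construction: \begin{verbatim}
import numpy as np

def hessian(chi2, p0, h=1e-4):
    """Finite-difference Hessian of chi2 at the best-fit parameters p0."""
    n = len(p0)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            def f(si, sj):
                q = np.array(p0, float)
                q[i] += si*h
                q[j] += sj*h
                return chi2(q)
            H[i, j] = (f(1, 1) - f(1, -1) - f(-1, 1) + f(-1, -1))/(4*h*h)
    return H

def eigenvector_sets(chi2, p0, tol=1.0):
    """Displaced parameter sets p0 +/- t_k e_k giving Delta chi2 = tol.
    Assumes a positive-definite Hessian at the minimum."""
    w, V = np.linalg.eigh(hessian(chi2, p0))
    t = np.sqrt(2.0*tol/w)   # Delta chi2 ~ 0.5 * t_k^2 * w_k along e_k
    return [(np.array(p0) + t[k]*V[:, k], np.array(p0) - t[k]*V[:, k])
            for k in range(len(p0))]
\end{verbatim} Given the non-quadratic behaviour discussed above, in practice one would scan $\Delta\chi^2$ explicitly along each eigenvector direction rather than rely on the quadratic approximation used in this sketch.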
All 23 eigenvectors have reasonably quadratic behaviour, the worst being comparable to the worst for MSTW2008. The worst is in fact the eigenvector composed largely of the first term in the Chebyshev polynomial for $d_V$. This is explained by the fact that this parameter is highly correlated with both the small-$x$ power and the third term in the Chebyshev polynomial for the same PDF. \section{Fitting PDFs with Deuteron Corrections Applied} \begin{figure}[htb!] \centering \vspace{-1.5cm} \includegraphics[width=0.48\textwidth,clip]{deutcorrparam} \includegraphics[width=0.48\textwidth,clip]{deutcorraparam} \vspace{-1.2cm} \caption{The deuteron corrections applied to structure functions in previous versions of the MSTW2008 type fit (left) and in fits where the extended parameterisation for the PDFs is used (right).} \label{fig:deutcorrparam} \end{figure} At present, and for at least the short-term future, it is necessary to include the deep-inelastic scattering data on deuteron targets in the global parton analyses, in order to separate the $u$ and $d$ PDFs at moderate and large $x$ values. Studies of the PDFs obtained with collider data only \cite{Ball:2012cx,Watt:2012tq} show much bigger uncertainties for some PDF flavours. Unfortunately, the deuteron measurements are subject to nuclear corrections. With the increased precision and variety of data (especially with the advent of the decay lepton charge asymmetry measurements from $W^\pm$ production at the Tevatron and LHC) it is necessary to study the effect of the deuteron corrections in some detail. In previous PDF determinations we have included the nuclear corrections from \cite{deFlorian:2003qf} for neutrino structure functions taken with either lead or iron targets, see the discussion in section 7.3 of \cite{Martin:2009iq}, but only included a fixed shadowing correction for small $x$ for deuteron structure functions. In the PDF analysis of \cite{Alekhin:2012ig}, a specific deuteron correction, with associated uncertainty, was applied. Most of the other groups include no deuteron corrections (the issue does not arise in \cite{Aaron:2009aa}), although there have been recent specific investigations (see e.g. \cite{Accardi:2011fa,Brady:2011hb} in the context of the CTEQ-JLab fits, though there are no corrections in \cite{Lai:2010vv}) which suggest the effect is not insignificant, especially at high $x$. As we will explain below, we base our study in this article on the assumption that there need not be deuteron corrections, but allow the fit to choose them if required, with some uncertainty determined by the fit quality. Hence, at the very least we allow some new degree of PDF uncertainty to be associated with the possibility of deuteron corrections. The best fit chooses some small, but significant, corrections at high $x$. \subsection{The parameterisation of the deuteron corrections} In the default MSTW2008 PDF analysis, deuteron structure functions were corrected only for shadowing at small values of $x$ \cite{Badelek:1994qg} with a negative correction starting a little above $x=0.1$ and becoming as much as $-1.5\%$ near $x=0.01$. In \cite{Thorne:2010kj} we presented a much more detailed study. So, first, we briefly summarise the results of this investigation. Basically, we studied the effect of allowing the deuteron corrections to be described by forms with 4 free parameters, which were varied with no $\chi^2$ penalty.
It was found that the quality of the global fit could improve by a very large amount with the optimum deuteron corrections. It was particularly clear that the comparison to the Tevatron lepton asymmetry data used in the fit improved a great deal, as did the predictions for more recent versions of these data. The deuteron corrections required were of the expected general form, see e.g. \cite{Melnitchouk:1994rv,Kulagin:2010gd,Accardi:2011fa}, with a large positive correction at very high $x$ and a dip for $x \sim 0.5$. However, the dip was somewhat larger than expected, the correction remained negative near $x=0.1$ where it is likely to be positive due to antishadowing effects, and if anything the fit preferred positive corrections near $x=0.03$ rather than the negative shadowing corrections. (Indeed, if the shadowing corrections applied in the MSTW2008 analysis were to be simply removed, and no deuteron corrections applied, then the fit would improve slightly.) Hence, the adopted corrections seemed unsatisfactory. Given that the extended `Chebyshev' parameterisation discussed so far automatically allows improvement in the global fit, and has by far the most effect on valence quarks and the light sea, it seems natural to investigate the question of deuteron corrections in this context.

The deuteron corrections applied previously \cite{Thorne:2010kj} were of the form \begin{equation} F^d(x,Q^2) = {\rm c}(x)(F^p(x,Q^2) + F^n(x,Q^2))/2, \end{equation} where $F^n(x,Q^2)$ is obtained from $F^p(x,Q^2)$ just by swapping up and down quarks, and antiquarks, i.e.~isospin symmetry is assumed. The correction factor ${\rm c}(x)$ is taken to be $Q^2$ independent for simplicity and is of the form \begin{eqnarray} {\rm c}(x)&=& (1+0.01N_{\rm c})(1+0.01{\rm c}_1\ln^2(x_p/x)), \qquad x< x_p,\\ {\rm c}(x)&=& (1+0.01N_{\rm c})(1+0.01{\rm c}_2\ln^2(x/x_p)+0.01{\rm c_3}\ln^{20}(x/x_p)), \qquad x>x_p. \label{eq:deutcorr} \end{eqnarray} Here $x_p$ is a ``pivot point'', at which the normalisation is set to $(1+0.01N_{\rm c})$. For $x<x_p$ there is the freedom to increase or decrease smoothly. The same is true above $x=x_p$, but the very large power is also added to allow the expected rapid change of the correction as $x \to 1$ due to Fermi motion. In previous studies $x_p$ was chosen to be 0.08, but here we set $x_p=0.05$. If there is shadowing at low $x$, and also a dip at high (but not too high) $x$, then $x_p$ is where the correction would take its maximum value, expected to be determined by antishadowing corrections. Thus the 4 free parameters describing the deuteron correction, ${\rm c}(x)$, are the ${\rm c}_i$ and $N_{\rm c}$. We do not apply the corrections to the E866 data on Drell--Yan asymmetry \cite{Towell:2001nh}, and this could be improved in future. However, in the region of the majority of (and most precise) data the correction is very small. Very naively, the unconstrained deuteron correction can simply allow the deuteron structure function data to be fit as well as possible, while other data sensitive to the separation between up and down quarks determine the PDFs. However, there are other constraints, such as sum rules, and in practice the many different types of structure function and other data in a global fit, all depending on different combinations of flavours, make the situation more complicated. In principle, extremely precise collider data will make the fit to deuteron data a more-or-less direct fit of deuteron corrections, but this is not yet the case with present data.
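To illustrate (\ref{eq:deutcorr}) concretely, the short sketch below evaluates the correction factor ${\rm c}(x)$ for $x_p=0.05$, using the parameter values of the 3-parameter fit quoted later in Table~\ref{tab:deut}. The helper name \texttt{c\_deut} is ours, and the script is purely illustrative rather than part of the fitting code.

\begin{verbatim}
# Sketch: evaluate the deuteron correction factor c(x) defined above,
# with pivot point x_p = 0.05. The parameters are those of the
# 3-parameter fit (N_c = 0.070, c_1 = 0, c_2 = -0.608, c_3 = 0.0336e-6);
# they are inputs here, not the result of a fit performed by this code.
import numpy as np

def c_deut(x, Nc, c1, c2, c3, xp=0.05):
    norm = 1.0 + 0.01 * Nc
    below = 1.0 + 0.01 * c1 * np.log(xp / x)**2                # x < x_p
    above = (1.0 + 0.01 * c2 * np.log(x / xp)**2
                 + 0.01 * c3 * np.log(x / xp)**20)             # x > x_p
    return norm * np.where(x < xp, below, above)

x = np.array([0.01, 0.05, 0.3, 0.5, 0.75])
print(c_deut(x, Nc=0.070, c1=0.0, c2=-0.608, c3=0.0336e-6))
# Gives a dip of roughly 0.975 near x = 0.5 and a large positive
# correction at x = 0.75, consistent with the discussion in the text.
\end{verbatim}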
The deuteron correction \cite{Badelek:1994qg} for the default MSTW2008 fit is shown in the left of Fig.~\ref{fig:deutcorrparam}. It is negative, i.e.~the total correction factor is $<1$ below about $x=0.2$, and becomes larger in magnitude as $x$ decreases. The correction factor for the best fit in our previous study \cite{Thorne:2010kj} is also shown. As explained, it is negative everywhere, except at very high $x$, which seems unlikely. This gives an improvement in $\chi^2$ of $\sim 80$ compared to our usual global fit. If the normalisation at $x_p$ was fixed to be 1.005 the correction factor obtained had the expected type of shape, i.e.~it turned below 1 at the lowest $x$ and dipped to a minimum near $x=0.6$. However, this resulted in a fit with $\chi^2$ 30 units higher than for the free deuteron correction, and, as seen, the dip is now $-5\%$, which is much lower than shadowing models tend to predict. The shape is very different from the correction with all parameters left free. Fixing the normalisation to 1.0025, and setting ${\rm c}_2$ so that the dip is more like $-3\%$, results in a further deterioration of the fit quality by $\Delta\chi^2=5$. This is not particularly significant, but there seemed to be a tension between the best fit and the expected shape of the correction.

\subsection{Deuteron corrections for `Chebyshev' parameterisations}

We now repeat the exercise using our extended `Chebyshev' parameterisation for the valence quarks and sea distribution. The quality of the fit is 86 units of $\chi^2$ better than that for the MSTW2008 fit, i.e.~a little better than when we used the standard PDF parameterisation in our previous study \cite{Thorne:2010kj}, and 56 units better than the fit with the same parameterisation and fixed deuteron corrections. There are very large improvements in the fit to (i) the BCDMS deuteron structure function data, (ii) the E866 Drell--Yan asymmetry data and (iii) the Tevatron lepton asymmetry data. The last of these improves from $\chi^2/N_{\rm pts.}=55/32$ in the MSTW2008 fit to $\chi^2/N_{\rm pts.}=27/32$. The deuteron correction is shown in the right of Fig.~\ref{fig:deutcorrparam}. As in our previous study, when all parameters are left free the normalisation is smaller than 1, but this time by only $0.5\%$, rather than over $1.5\%$. There is also still a tendency for the correction to turn up, rather than down, at the lowest $x$. Since this last feature is unexpected, we investigate it by fixing ${\rm c}_1=0$, so there is no turn up, but no turn down either. This only changes the fit quality by $2$--$3$ units (with only a couple of data points really preferring the turn-up), and the normalisation is now 1.0007. We also try setting the normalisation exactly to unity; the fit quality and deuteron correction are almost unchanged. The parameters for the different fits are shown in Table~\ref{tab:deut}. In all three of these fits, the dip reaches a minimum value of about 0.975 at $x=0.5$. Hence, the deuteron correction is stable as the number of parameters held fixed changes, as is the quality of the fit. Moreover, with the exception of the slight tendency to prefer an upturn near $x=0.01$, the shape is as predicted by standard models \cite{Melnitchouk:1994rv,Kulagin:2010gd,Accardi:2011fa}.\footnote{We may compare our correction with that obtained in a study \cite{Owens:2012bv} which appeared after the submission of our paper.
Our result lies between the smaller (CJ12min) and middle (CJ12mid) estimates of the corrections in \cite{Owens:2012bv}; the larger corrections (CJ12max) are disfavoured by the fit quality. Hence, there is complete compatibility.}

\begin{table} \centering \begin{tabular}{c|c|c|c|c} \hline\hline PDF set & $N_{\rm c}$ & ${\rm c}_1$ & ${\rm c}_2$ & ${\rm c}_3\times 10^{6}$ \\ \hline 4 parameters & -0.490 & 0.349 & -0.444 & 0.0340 \\ 3 parameters & 0.070 & 0.000 & -0.608 & 0.0336 \\ 2 parameters & 0.000 & 0.000 & -0.573 & 0.0334 \\ \hline\hline \end{tabular} \caption{The values of the parameters for the deuteron correction in the various fits.} \label{tab:deut} \end{table}

\subsection{The `MSTW2008CPdeut' fit}

We take the fit with ${\rm c}_1=0$ as the preferred fit since, while it produces no small-$x$ shadowing, it produces no enhancement either. It may be that the expected shadowing is not significant in practice until $x$ falls below 0.01. Also, it may be expected that the value in the region $x=0.05$ is larger than the 1.0007 found, but this is far from certain. Using the PDFs from this fit, which we call `MSTW2008CPdeut', we find that the prediction for the higher luminosity D{\O} lepton asymmetry data \cite{Abazov:2008qv}, integrated over all $p_T$ greater than $25~{\rm GeV}$, gives a fit of quality $\chi^2/N_{\rm pts.}=28/12$. Although this number seems high, it is close to the best that seems possible given the fluctuations in these measurements, and close to the best that we have obtained even when fitting these data with a very high weight. We do not get a good fit to the data split into two different $p_T$ bins, but this seems to be a problem found by other PDF fitting groups \cite{Lai:2010vv,Ball:2010gb}. The value of $\chi^2$ for each data set is shown for the MSTW2008, MSTW2008CP and MSTW2008CPdeut PDFs in Table~\ref{tab:chisquared}.

The valence quarks resulting from the fit with deuteron corrections (MSTW2008CPdeut) are shown compared to the corresponding MSTW2008 PDFs in Fig.~\ref{fig:deutPDF}. For all PDFs other than the valence quarks there is negligible change compared to those with just the extended parameterisation (MSTW2008CP), i.e.~only the sea distribution changes at all significantly compared to MSTW2008, and in a similar way. The $u_V$ distribution also shows little further change. All the significant change is in $d_V$. This increases by a little more than the $68\%$ confidence-level uncertainty at most, near $x=0.6$, and decreases by the $90\%$ confidence-level uncertainty at $x$ near 0.03, the precise value of $x$ decreasing with increasing $Q^2$. This is as one might expect, since the deuteron correction is now larger than the MSTW2008 default at low $x$, so the down quark can be smaller, whereas the major increase at high $x$ is in the same position as the minimum of the dip in the deuteron correction. Part of the effect is also due to the sum rule -- increases at some $x$ values must be compensated by decreases elsewhere. Examining the details of the improvement in the fit quality when including deuteron corrections, we find that a large amount comes from the BCDMS data in the highest $x$ bin at $x=0.75$, the positive deuteron correction at this $x$ being preferred. This effect is largest at lowest $Q^2$, and hence lowest $W^2$, so the improvement in $\chi^2$ diminishes as the $W^2$ cut is raised. However, the data extend out to high $Q^2$, so a cut in $W^2$ of $30~{\rm GeV}^2$ is needed to remove the improvement completely.
It is also the case that the improvement in the fit quality at this $x$ value is essentially all due to the large deuteron correction rather than to PDF changes. In general the data for which the fit improves with the free deuteron correction are spread over a reasonably wide range of $Q^2$ and $W^2$, and over all $x$ values between $x=0.01$ and $x=0.75$. We perform a fit within the same framework as MSTW2008CPdeut but with cuts in $Q^2$ of $4~{\rm GeV}^2$ and in $W^2$ of $20~{\rm GeV}^2$, i.e.~equivalent to a fit in \cite{Thorne:2011kq}. With these cuts the $\chi^2$ for the fit with free deuteron corrections and 4 Chebyshev polynomials for the quark input parameterisation is 60 units better than our standard fit, rather than the 86 units found with the standard cuts. The improvement is concentrated in the same data sets, i.e.~the BCDMS deuteron structure function data and the Tevatron asymmetry data. However, in the former the improvement is reduced due to the loss of some of the most sensitive points, including some at $x=0.75$. For the Tevatron asymmetry data the raised $Q^2$ cut already improves the standard fit, since it removes some of the constraining data on the up and down quark separation in the region $x\sim 0.05$. The form of the deuteron correction with the raised cuts is very similar to that with the full cuts. The ratio of the PDFs in the two fits with raised cuts is similar to that with the usual cuts, but the largest differences are reduced to about $60$--$70\%$ of their previous size. There is no real evidence that the change between our standard PDFs and those with the extended parameterisation and free deuteron corrections is altered by more conservative cuts, other than that the fairly significant loss of data reduces the change, simply by allowing the fits to the remaining data to be slightly more self-consistent even without the extended parameterisation or modified deuteron corrections.

\begin{figure}[htb!] \centering \vspace{-6cm} \includegraphics[width=0.95\textwidth,clip]{mstwcpdeutdiff} \vspace{-6cm} \caption{The change in the valence quark PDFs extracted from the MSTW2008 type fit using Chebyshev polynomials and deuteron corrections (MSTW2008CPdeut) compared to the original MSTW2008 PDFs at NLO, with their $68\%$ uncertainties shown using dot-dashed lines.} \label{fig:deutPDF} \end{figure}

\begin{figure}[htb!]
\centering \vspace{-2cm} \includegraphics[width=0.6\textwidth,clip]{uvdvunc} \vspace{-2cm} \caption{The change in the $68\%$ uncertainty bands of the valence quark PDFs extracted from the MSTW2008 type fit using Chebyshev polynomials and deuteron corrections (MSTW2008CPdeut) compared to the original MSTW2008 PDFs at NLO.} \label{fig:uvdvunc} \end{figure}

\begin{table}
\centering
{\footnotesize
\begin{tabular}{l|c|c|c}
\hline \hline
Data set & MSTW08 & MSTWCP & MSTWCPdeut \\
\hline
BCDMS $\mu p$ $F_2$ & 182 / 163 & 173 / 163 & 177 / 163 \\
BCDMS $\mu d$ $F_2$ & 190 / 151 & 168 / 151 & 143 / 151 \\
NMC $\mu p$ $F_2$ & 121 / 123 & 123 / 123 & 120 / 123 \\
NMC $\mu d$ $F_2$ & 102 / 123 & 101 / 123 & 103 / 123 \\
NMC $\mu n/\mu p$ & 130 / 148 & 143 / 148 & 143 / 148 \\
E665 $\mu p$ $F_2$ & 57 / 53 & 55 / 53 & 53 / 53 \\
E665 $\mu d$ $F_2$ & 53 / 53 & 58 / 53 & 57 / 53 \\
SLAC $ep$ $F_2$ & 30 / 37 & 31 / 37 & 31 / 37 \\
SLAC $ed$ $F_2$ & 30 / 38 & 31 / 38 & 31 / 38 \\
NMC/BCDMS/SLAC $F_L$ & 38 / 31 & 39 / 31 & 39 / 31 \\
\hline
E866/NuSea $pp$ DY & 228 / 184 & 224 / 184 & 221 / 184 \\
E866/NuSea $pd/pp$ DY & 14 / 15 & 10 / 15 & 7 / 15 \\
\hline
NuTeV $\nu N$ $F_2$ & 49 / 53 & 49 / 53 & 54 / 53 \\
CHORUS $\nu N$ $F_2$ & 26 / 42 & 25 / 42 & 26 / 42 \\
NuTeV $\nu N$ $xF_3$ & 40 / 45 & 48 / 45 & 45 / 45 \\
CHORUS $\nu N$ $xF_3$ & 31 / 33 & 34 / 33 & 32 / 33 \\
CCFR $\nu N\to \mu\mu X$ & 66 / 86 & 65 / 86 & 64 / 86 \\
NuTeV $\nu N\to \mu\mu X$ & 39 / 40 & 38 / 40 & 39 / 40 \\
\hline
H1 MB 99 $e^+p$ NC & 9 / 8 & 8 / 8 & 8 / 8 \\
H1 MB 97 $e^+p$ NC & 42 / 64 & 40 / 64 & 40 / 64 \\
H1 low $Q^2$ 96--97 $e^+p$ NC & 44 / 80 & 44 / 80 & 44 / 80 \\
H1 high $Q^2$ 98--99 $e^-p$ NC & 122 / 126 & 121 / 126 & 120 / 126 \\
H1 high $Q^2$ 99--00 $e^+p$ NC & 131 / 147 & 129 / 147 & 129 / 147 \\
ZEUS SVX 95 $e^+p$ NC & 35 / 30 & 35 / 30 & 35 / 30 \\
ZEUS 96--97 $e^+p$ NC & 86 / 144 & 87 / 144 & 87 / 144 \\
ZEUS 98--99 $e^-p$ NC & 54 / 92 & 53 / 92 & 53 / 92 \\
ZEUS 99--00 $e^+p$ NC & 63 / 90 & 62 / 90 & 61 / 90 \\
H1 99--00 $e^+p$ CC & 29 / 28 & 28 / 28 & 31 / 28 \\
ZEUS 99--00 $e^+p$ CC & 38 / 30 & 38 / 30 & 35 / 30 \\
H1/ZEUS ep $F_2^{\rm charm}$ & 107 / 83 & 108 / 83 & 108 / 83 \\
H1 99--00 $e^+p$ incl.~jets & 19 / 24 & 19 / 24 & 19 / 24 \\
ZEUS 96--97 $e^+p$ incl.~jets & 30 / 30 & 29 / 30 & 29 / 30 \\
ZEUS 98--00 $e^\pm p$ incl.~jets & 17 / 30 & 16 / 30 & 16 / 30 \\
\hline
D{\O} II $p\bar{p}$ incl.~jets & 114 / 110 & 117 / 110 & 113 / 110 \\
CDF II $p\bar{p}$ incl.~jets & 56 / 76 & 57 / 76 & 56 / 76 \\
CDF II $W\to l\nu$ asym. & 29 / 22 & 26 / 22 & 18 / 22 \\
D{\O} II $W\to l\nu$ asym. & 25 / 10 & 20 / 10 & 9 / 10 \\
D{\O} II $Z$ rap. & 19 / 28 & 18 / 28 & 17 / 28 \\
CDF II $Z$ rap. & 49 / 29 & 45 / 29 & 52 / 29 \\
\hline
All data sets & \textbf{2543 / 2699} & \textbf{2513 / 2699} & \textbf{2457 / 2699} \\
\hline \hline
\end{tabular}
}
\caption{The values of $\chi^2 / N_{\rm pts.}$ for the data sets included in the global fits. The complete references, details of corrections to data, kinematic cuts applied and definitions of $\chi^2$ are contained in~\cite{Martin:2009iq}.}
\label{tab:chisquared}
\end{table}

\subsection{Allowing for uncertainties on deuteron corrections}

We also generate uncertainty eigenvector sets whilst applying deuteron corrections. Doing this with the deuteron corrections fixed at the position of the best fit would be straightforward, but would not account for the uncertainty in the deuteron corrections themselves.
Since our best fit is of roughly the form one would expect for these corrections, and since there is no solid basis on which to judge quite how much variation in deuteron corrections is allowed, we choose to simply let the parameters in the deuteron correction go free with no penalty. This is then very similar to our procedure for heavy nuclear corrections, necessary for including neutrino deep-inelastic scattering data in the MSTW2008 analysis,\footnote{We note that an NNPDF study of fits to DIS data alone found only a small change in the PDFs, relative to their uncertainties, when nuclear corrections were added to the default fit, in which they are omitted \cite{Ball:2009mk}.} where we take a set of corrections obtained from a global fit to nuclear data \cite{deFlorian:2003qf}, but multiply by a function, similar in form to (\ref{eq:deutcorr}), which allows variations away from the default form with no penalty. In that case, in practice, the variations are small, i.e.~our fit is very compatible with the determined nuclear corrections, and the uncertainty in the nuclear corrections implied by the fit quality is a few percent, which seems entirely reasonable. Here we are doing exactly the same thing, except that we have no starting deuteron correction, other than implicitly zero correction, to act as a template. Since deuteron corrections are expected to be small, and some groups use zero correction (as have we, as default, i.e.~in the MSTW2008 fit and previously, at high $x$), using no correction as the template seems reasonable.

There is, however, a complication. There are 4 free parameters in our deuteron correction, whereas we have 3 for our nuclear correction function, and the deuteron structure functions are closely related to the down quark distribution. Hence, there are strong correlations between the 4 deuteron correction parameters, and between them and the parameters for $d_V$. It is impossible to get a stable perturbation about the best fit in terms of eigenvectors while letting all 4 deuteron parameters go free. However, we did not even let all parameters go free in the best fit, choosing to fix ${\rm c}_1$ in order to avoid an unlikely tendency for the correction to grow at the smallest $x$ values. Letting it go free when determining eigenvectors would be inconsistent. Hence we let $N_{\rm c}$, ${\rm c}_2$ and ${\rm c}_3$ go free when obtaining the eigenvectors. The freedom in the normalisation lets the correction at low $x$ vary, and there is not much distance between the lowest $x$ data points and the pivot point at which the normalisation is set. We attempt to construct 23 eigenvectors by letting the same parameters as before go free. However, this results in some severely non-quadratic behaviour. The worst case is the eigenvector consisting largely of the first term in the Chebyshev polynomial for $d_V$. As for the fit without deuteron corrections, this parameter is highly correlated with both the small-$x$ power and the third term in the Chebyshev polynomial for the same PDF, but the freedom in the deuteron corrections makes this correlation, and its effects, worse. This is no longer an acceptable eigenvector. Even fixing one more of the deuteron parameters does not help much. Since the main problem is the correlation between the first Chebyshev polynomial and the small-$x$ power, and higher-order polynomials are less influential at small $x$, we instead try letting the second and third Chebyshev polynomials have the free parameters for the valence quarks and sea when finding eigenvectors.
This does indeed reduce the correlation between the parameters for $d_V$. It increases the correlation between the parameters for $u_V$, but this does not seem to translate into particularly bad behaviour of the eigenvectors, and this change provides 23 orthogonal eigenvectors, with none having worse non-quadratic behaviour than any of the 20 in the MSTW2008 fit. Hence, we have a preliminary set of uncertainty eigenvectors incorporating both the extended `Chebyshev' parameterisation and the uncertainties due to deuteron corrections. When the fit with deuteron corrections (MSTW2008CPdeut) is compared to the fit using just the extended parameterisation (MSTW2008CP), the $u_V$ uncertainty increases a little more quickly below $x=0.05$. However, the effect for $d_V$ is more dramatic -- it is more than twice as uncertain as that for MSTW2008 for $x \sim 0.4$, and also about twice as uncertain for $x=0.01$--$0.05$, but not near $x=0.1$, where the data constraints are strongest. This expanded uncertainty means that the MSTW2008 $d_V$ distribution is either within, or just outside, the one-sigma uncertainty band for $d_V$ obtained in the fit with deuteron corrections, except for $x < 0.0005$. The uncertainty for the valence quarks for both the MSTW2008CP and MSTW2008CPdeut sets is compared to that for the standard MSTW2008 set in Fig.~\ref{fig:uvdvunc}. For other PDFs the change is less significant. For $u_V-d_V$ the uncertainty for MSTW2008CPdeut is more than twice as big as for MSTW2008 for $x\sim 0.4$, and $30$--$50\%$ bigger for $x =0.05$--$0.01$, but only slightly larger near $x=0.1$. The change in the central value and uncertainty for $u_V-d_V$ has important implications for the description of the lepton charge asymmetry, as we shall discuss in detail in Section 5.

\subsection{Variation of number of parameters in the fits}

So far in the previous two subsections we have considered the results of using 4 Chebyshev polynomials in the expressions for the valence quarks and sea (and 2 in the gluon), as suggested by the results in Section 2. We also consider fits where we reduce the number to 3 (two being equivalent to the default MSTW fits) and increase to 5 and 6 (with 3 and 4 respectively for the gluon). First we consider the improvement in going from MSTW2008 to sets without modified deuteron corrections. A fit with 3 Chebyshev polynomials improves the $\chi^2$ by only 8 units, i.e.~a further 21 units are gained from the 3 extra parameters in going to the fit with 4 Chebyshev polynomials described in Section 3. Much of the change from MSTW2008 in the $u_V$ (and $u_V-d_V$) distribution is already achieved with 3 Chebyshev polynomials, but the change in extending to 4 can be larger than the uncertainty on the $u_V$ (and $u_V-d_V$) distributions in the MSTW2008CP set. Increasing to 5 Chebyshev polynomials, the fit quality improves by 15 units for 4 extra parameters. The change in some distributions is significant, as expected being proportionally greatest in $u_V-d_V$, though this time largely due to changes in $d_V$ rather than $u_V$. The change can be a little greater than the uncertainty for a small range of $x$ near 0.01, but is generally much smaller, especially at higher $x$. When using 6 Chebyshev polynomials there is a further improvement in $\chi^2$ of 7 units for 4 parameters. Again the change in distributions is most significant for $u_V-d_V$, but is smaller than when increasing from 4 to 5 polynomials, and is actually such as to reduce the largest changes in the 5 Chebyshev polynomial case.
There is no particularly obvious case of lack of smoothness in the PDFs with 5 or 6 polynomials, although there is no penalty imposed to ensure this. Hence, we conclude that 4 Chebyshev polynomials for valence and sea quarks is certainly preferable to 3. It is arguable that 5 may be ideal rather than 4, but the changes from 4 onwards are very rarely large compared to the PDF uncertainty.

The case with deuteron corrections included is a little more complicated, due to the interplay between the $d_V$ distribution and the deuteron correction. As stated above, there is only a little improvement in the fit quality with 4 Chebyshev polynomials compared to the default MSTW parameterisation, but the deuteron correction is much more sensible in the former case. If we compare the fit quality obtained with a ``sensible'' deuteron correction, the improvement in $\chi^2$ is about 20 units when using 4 Chebyshev polynomials; if instead we use 3, we obtain only 7--8 units of this. The PDFs make a large fraction of the change from the MSTW2008 set to the MSTW2008CPdeut sets, but, as in the previous case, the difference can be bigger than the uncertainty in the MSTW2008CPdeut PDFs. With 5 Chebyshev polynomials the improvement in $\chi^2$ is only 3 for 4 extra parameters. The change in PDFs is proportionally biggest for $u_V-d_V$, and in this case is always within the uncertainty. With 6 Chebyshev polynomials there is an improvement in $\chi^2$ of 10 units for 4 parameters. However, the further change in PDFs is small, except for the valence quarks; it is largest for $x$ values below about $x=0.02$ at $Q^2=10^4~{\rm GeV}^2$, where for $d_V$ it can be comparable to the uncertainty. There is no clear sign of lack of smoothness or stability in the PDFs. However, for 6 Chebyshev polynomials our variation away from the default nuclear corrections \cite{deFlorian:2003qf} in the best fit becomes large and unrealistic (increasing above 1, and increasing as $x$ decreases near $x=0.01$). Indeed, with a large number of parameters the position of the best fit with free deuteron corrections becomes more difficult to find. Our best fit is found using the Levenberg--Marquardt method, which combines the advantages of the inverse-Hessian method and the steepest descent method for minimisation, as described in Section 5.2 of \cite{Martin:2009iq}. It is possible that a local rather than global minimum is found, but this can be investigated by starting the fit from different places in parameter space.\footnote{There is also in principle some sensitivity to the numerical value below which the improvement in $\chi^2$ with successive iterations must fall for the fit to stop. This is normally taken as 0.1, but further reductions generally lead to changes very much smaller than the PDF uncertainty, as has been checked in this study.} This is usually not an issue, but difficulty in finding the best possible fit has been noticed when jet data are removed from the global fit and $\alpha_S(M_Z^2)$ is left free, as the relationship between $\alpha_S(M_Z^2)$ and the gluon then allows many fits of similar quality. A similar problem is found when 5 or 6 Chebyshev polynomials are used and deuteron corrections are also left free, along with (as usual in the MSTW fit) a parameterisation multiplying the nuclear corrections. (Since much of the nuclear target data is for $xF_3(x,Q^2)$, there is a lot of correlation between nuclear corrections and valence quarks.) For 6 Chebyshev polynomials a rather extreme nuclear correction is ultimately preferred.
Hence, for the fits with free deuteron corrections we start to lose some stability with enough free parameters, and beyond 4 Chebyshev polynomials for valence and sea quarks we do not see changes in the PDFs that are at all incompatible with the uncertainties, or improvements in $\chi^2$ that are large given the extra degrees of freedom. Hence, 4 Chebyshev polynomials seems an optimal choice. For the fits with the fixed (default) deuteron corrections it is arguable that 5 may be a better choice, but even in this case moving to 6 moves the biggest change in PDFs back towards the situation with 4. In all cases the changes in PDFs when moving from 2 to 4 polynomials for each quark distribution are very much larger than subsequent changes. In fact all PDF versions with 4, 5 or 6 Chebyshev polynomials, either with or without free deuteron corrections, are essentially consistent with the MSTW2008CPdeut set and its uncertainties (with some systematic differences between the default deuteron correction and free deuteron correction cases for $d_V$). Hence, we conclude that 4 Chebyshev polynomials seems a sensible number at present, and certainly captures the main features of the change away from our standard parameterisation. However, 4 polynomials do not seem to be quite as precise in the real PDF fits as in the fits of a function to pseudo-data in Section 2, presumably because the relationship between the real data and a specific PDF, especially the low-$x$ valence quarks, is rather less direct than in the idealised case of Section 2. We will check in future updates whether a slightly larger number (more than 5 seems very unlikely) might be preferable with the expanded data set. It is possible that the more direct constraints from new data sets can reduce, rather than expand, the optimum number.

\section{Lepton Asymmetry at the LHC and PDF Sensitivity}

The measurements of the lepton charge asymmetry, from $W^\pm \to \ell^\pm \nu$ production and decay, at the Tevatron and the LHC, probe novel combinations of PDFs. In the next section we shall investigate the effects of the extended `Chebyshev' parameterisation, and of the deuteron corrections, on the MSTW predictions for the observed asymmetries. However, first, to gain insight into the use of these data in PDF analyses, we explore the predicted behaviour of the lepton charge asymmetry at the LHC, based on the LO and zero-width expressions for $W$ production and decay, using MSTW2008 NLO PDFs. The NLO and NNLO corrections \cite{Melnikov:2006kv,Catani:2010en} do not change the general picture significantly, though they do change the precise values. In particular, we explain how the PDFs result in the interesting dependence of the asymmetry, shown in Fig.~\ref{fig:varptmin}, on the experimental minimum $p_T$ cut applied to the transverse momentum of the decay lepton.

\begin{figure}[htb!] \centering \vspace{-1.5cm} \includegraphics[width=0.55\textwidth,clip]{lasy_varptmin2_revised} \vspace{-1.5cm} \caption{The dependence of the asymmetry on the lepton minimum $p_T$ cut. The asymmetry is calculated at leading order and zero width using MSTW2008 NLO PDFs.} \label{fig:varptmin} \end{figure}

We begin by considering the $W$ charge asymmetry, defined by \begin{equation} A_W(y_W) = \frac{{\rm d}\sigma(W^+)/{\rm d} y_W - {\rm d}\sigma(W^-)/{\rm d} y_W}{{\rm d}\sigma(W^+)/{\rm d} y_W + {\rm d}\sigma(W^-)/{\rm d} y_W}, \end{equation} where $y_W$ is the rapidity of the $W$ boson.
At leading order, assuming $u$, $d$ quark and antiquark contributions only, taking the CKM matrix to be diagonal, and writing $u=u_V+\bar q$, $d=d_V+\bar q$, $\bar u = \bar d \equiv \bar q$ (an approximation which becomes more accurate at very small $x$), we have \begin{equation}\label{eq:wasymm} A_W(y_W) \approx \frac{u_V(x_1)\bar q(x_2)+ \bar q(x_1) u_V(x_2) - d_V(x_1)\bar q(x_2) -\bar q(x_1)d_V(x_2)} {u_V(x_1)\bar q(x_2)+ \bar q(x_1) u_V(x_2) + d_V(x_1)\bar q(x_2) +\bar q(x_1)d_V(x_2) + 4\bar q(x_1) \bar q(x_2)}, \end{equation} where $x_{1,2}=(M_W/\sqrt{s})\exp(\pm y_W)$. Contributions from $c,s$ quark scattering can be approximately taken into account by $4\bar q(x_1)\bar q(x_2) \to 4(1+\delta ) \bar q(x_1)\bar q(x_2)$ in the denominator. Two important limits are $y_W = 0$, for which $x_1 = x_2 = x_0 \equiv M_W/\sqrt{s}$, and $y_W \to y_W^{\rm max} = -\log(x_0)$, for which $x_1 \to 1$ and $x_2 \to x_0^2$. Thus \begin{equation}\label{eq:wasymmapprox} A_W(0) \approx \frac{u_V(x_0) - d_V(x_0)} {u_V(x_0) + d_V(x_0) + 2\bar q(x_0)} > 0, \qquad A_W(y_W^{\rm max}) = 1 . \end{equation} In practice, it is usually the lepton charge asymmetry which is measured, defined in a similar way as \begin{equation} \label{eq:leptonasymm} A(y_\ell) = \frac{{\rm d}\sigma(\ell^+)/{\rm d}y_{\ell}-{\rm d}\sigma(\ell^-) /{\rm d}y_{\ell}}{{\rm d}\sigma(\ell^+)/{\rm d}y_{\ell}+{\rm d}\sigma(\ell^-)/{\rm d}y_{\ell}}, \end{equation} where $y_{\ell}$ is the (pseudo)rapidity of the charged lepton.\footnote{For massless leptons, the pseudorapidity $\eta_\ell$ is equal to the rapidity $y_\ell$.} If we define $\theta^*$ to be the emission angle of the charged lepton, relative to the proton beam with positive longitudinal momentum, in the $W$ rest frame, then $\cos^2\theta^* = 1 - 4p_T^2/M_W^2$, where $p_T$ is the lepton transverse momentum. The rapidities are related by \begin{equation} y_\ell = y_{W} + y^*, \quad y^* = \frac{1}{2}\ln\left(\frac{1+\cos\theta^*}{1-\cos\theta^*}\right). \end{equation} The leading-order parton momentum fractions are then \begin{equation} x_{1,2} = x_0 \exp(\pm y_W) = x_0 \exp(\pm y_\ell) \kappa^{\pm 1} ,\quad \kappa = \left( \frac{1+\vert\cos\theta^*\vert}{ 1-\vert\cos\theta^*\vert } \right)^{1/2} > 1, \end{equation} i.e.~for a given $p_T$ in $0 \leq p_T \leq M_W/2$, there are two solutions, corresponding to positive or negative $\cos\theta^*$, or equivalently positive or negative $y^*$. Neglecting overall factors, the analogue of the numerator of \eqref{eq:wasymm} for the lepton asymmetry \eqref{eq:leptonasymm} can be approximated by \begin{eqnarray} \label{eq:leptonasymmapprox} && \left( u_V(x_1^+)\bar q(x_2^+) -\bar q(x_1^+)d_V(x_2^+) + u_V(x_1^-)\bar q(x_2^-) -\bar q(x_1^-)d_V(x_2^-) \right) (1-\cos\theta^*)^2 \nonumber \\ &+& \left( \bar q(x_1^+) u_V(x_2^+) - d_V(x_1^+)\bar q(x_2^+) + \bar q(x_1^-) u_V(x_2^-) - d_V(x_1^-)\bar q(x_2^-) \right) (1+\cos\theta^*)^2 , \end{eqnarray} where \begin{eqnarray} x_1^+ = x_0 \exp( + y_\ell) \kappa & > & x_1^- = x_0 \exp( + y_\ell) \kappa^{-1} \nonumber \\ x_2^+ = x_0 \exp( - y_\ell) \kappa^{-1} & < & x_2^- = x_0 \exp( - y_\ell) \kappa . \end{eqnarray} The explicit $\theta^*$-dependent terms in \eqref{eq:leptonasymmapprox} originate in the $V\pm A$ structure of the $W$ couplings to fermions. Table~\ref{tab:kappa} lists the values of the various quantities that enter the expression for the lepton asymmetry as functions of $p_T$.
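Since all of the $\theta^*$-dependent quantities follow directly from $\cos^2\theta^* = 1-4p_T^2/M_W^2$, the entries of Table~\ref{tab:kappa} can be cross-checked with the short script below, which also reproduces the limiting $y_\ell$ values quoted after the table; the value $M_W=80.4~{\rm GeV}$ and the script itself are our illustrative additions.

\begin{verbatim}
# Sketch: reproduce the theta*-dependent quantities tabulated below from
# cos^2(theta*) = 1 - 4 pT^2 / M_W^2; M_W = 80.4 GeV is assumed here.
import numpy as np

MW = 80.4  # GeV
for pT in [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, MW / 2.0]:
    c = np.sqrt(max(1.0 - 4.0 * pT**2 / MW**2, 0.0))    # |cos(theta*)|
    ystar = 0.5 * np.log((1 + c) / (1 - c)) if c < 1.0 else np.inf
    kappa = np.exp(ystar)    # kappa = ((1+c)/(1-c))**0.5 = exp(|y*|)
    print(f"pT={pT:5.1f} |cos|={c:.3f} |y*|={ystar:5.2f} "
          f"(1+c)^2={(1 + c)**2:.2f} (1-c)^2={(1 - c)**2:.5f} k={kappa:.2f}")

# Limiting lepton rapidities at sqrt(s) = 7 TeV for pT = 20 GeV (kappa = 3.75):
x0 = MW / 7000.0                                      # x0 = M_W / sqrt(s)
print(np.log(1.0 / (3.75 * x0)), np.log(3.75 / x0))   # ~3.14 and ~5.79
\end{verbatim}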
\begin{table} \centering \begin{tabular}{c|c|c|c|c|c} \hline\hline $p_T$ (GeV) & $\vert\cos\theta^*\vert$ & $\vert y^*\vert $ & $(1+\cos\theta^*)^2$ & $(1-\cos\theta^*)^2$ & $\kappa$ \\ \hline 0 & 1.000 & $\infty$ & 4.00 & 0.00000 & $\infty$ \\ 5 & 0.992 & 2.77 & 3.97 & 0.00006 & 16.0 \\ 10 & 0.969 & 2.07 & 3.88 & 0.00099 & 7.91 \\ 15 & 0.928 & 1.64 & 3.72 & 0.00522 & 5.17 \\ 20 & 0.867 & 1.32 & 3.49 & 0.01757 & 3.75 \\ 25 & 0.783 & 1.05 & 3.18 & 0.04705 & 2.87 \\ 30 & 0.666 & 0.80 & 2.77 & 0.11181 & 2.23 \\ 35 & 0.492 & 0.54 & 2.23 & 0.25820 & 1.71 \\ $M_W/2$ & 0.000 & 0.00 & 1.00 & 1.00000 & 1.00 \\ \hline\hline \end{tabular} \caption{The dependence of the various $\theta^*$-dependent quantities on the lepton $p_T$.} \label{tab:kappa} \end{table}

Note that whether one or both $x$ solutions are physical depends on the values of $y_\ell$ and $\kappa$ (i.e.~$p_T$). For the range of lepton $p_T$ accessible to experiment at the LHC, there will always be two solutions for sufficiently small $y_\ell$. Then as $y_\ell$ increases, the {\lq$+$\rq} solution disappears first, for $x_1^+ > 1$, i.e.~$y_\ell > -\log(x_0\kappa)$, and both solutions disappear for $x_1^- > 1$, i.e.~$y_\ell > -\log(x_0/\kappa)$. For example, for $p_T = 20$~GeV at $\sqrt{s} = 7$~TeV, these limiting values are $y_\ell = 3.14$ and $5.79$ respectively. The $x_1^{\pm}$ and $x_2^{\pm}$ values as a function of $y_\ell$ for three different values of lepton $p_T$ are shown in Fig.~\ref{fig:x_varptmin}.

\begin{figure}[htb!] \centering \vspace{-1.5cm} \includegraphics[width=0.55\textwidth,clip]{x_varptmin_2012_revised} \vspace{-1.5cm} \caption{Dependence of $x_1^\pm$ and $x_2^\pm$ on $y_\ell$, for lepton $p_T = M_W/2$, 30~GeV and 20~GeV, at the 7~TeV LHC. For $p_T = M_W/2$ we have $x_1^+=x_1^-$ and $x_2^+=x_2^-$. } \label{fig:x_varptmin} \end{figure}

For small or moderate $y_\ell$, the $x_i^+$ and $x_i^-$ solutions in \eqref{eq:leptonasymmapprox} give comparable contributions. In particular, for $y_\ell = 0$, \begin{eqnarray} x_1^+ = x_2^- = x_0 \kappa \equiv X \\ x_1^- = x_2^+ = x_0 / \kappa \equiv x \end{eqnarray} with $X/x = \kappa^2 \geq 1$. For small $p_T$, therefore, $X \gg x$, and as long as $X$ is not too close to $1$ we may expect $V(X)\bar q(x) > V(x) \bar q(X)$, where $V(x)$ denotes either valence quark, in which case (\ref{eq:leptonasymmapprox}) becomes approximately \begin{equation} \left( u_V(X) - d_V(X) \right)\; \bar q(x)\; 4 \cos\theta^* , \end{equation} which, in turn, leads to \begin{equation} A_\ell(0) \approx \frac{u_V(X) - d_V(X)} {u_V(X) + d_V(X) + 2\bar q(X)} . \label{eq:Alep0} \end{equation} Since $u_V(X) - d_V(X)$ increases with increasing $X$ at small $X$, this explains, at least qualitatively, why the lepton asymmetry grows with decreasing $p_{T{\rm min}}$, see Fig.~\ref{fig:varptmin}. As $y_\ell$ increases away from 0, $x_1^+ \to 1$ and the $x_{1,2}^-$ contributions start to dominate. Furthermore, since in this region $x_1^- \gg x_2^-$, the terms with $V(x_1^-) \bar q(x_2^-)$ are the most important. Thus \begin{equation} \label{eq:leptonasymmapprox2} A_\ell(y_\ell)\approx \frac{ u_V(x_1^-) (1 -\cos\theta^*)^2 - d_V(x_1^-) (1+\cos\theta^*)^2 }{ u_V(x_1^-) (1 -\cos\theta^*)^2 + d_V(x_1^-) (1+\cos\theta^*)^2 } .
\end{equation} According to Table~\ref{tab:kappa}, for $p_T \to M_W/2$, $\cos\theta^* \to 0$ and $A_\ell(y_\ell) \to A_W(y_\ell)$, because then the $(1 \pm \cos\theta^*)^2$ terms in (\ref{eq:leptonasymmapprox2}) are on the same footing --- the asymmetry is driven by the $u_V > d_V$ inequality, which is valid at all $x$, and the lepton asymmetry is always positive. However, for small or moderate $p_T$, $(1+\cos\theta^*)^2 \gg (1-\cos\theta^*)^2$, and so the term proportional to $d_V$ dominates and the asymmetry is negative. Now $d_V(x)$ decreases faster at large $x$ than $u_V(x)$, and so at some point at large $y_\ell$ the approximation \begin{equation} d_V(x_1^-)\bar q(x_2^-) (1+\cos\theta^*)^2 \gg u_V(x_1^-)\bar q(x_2^-) (1-\cos\theta^*)^2 \end{equation} breaks down, i.e.~the $V\pm A$ unfavoured forward $u\bar d \to \ell^+ \nu_\ell$ scattering process will eventually dominate. Evidently this will happen at the $y_\ell$ value for which \begin{equation} u_V(x_1^-) / d_V(x_1^-) \sim (1+\cos\theta^*)^2/ (1-\cos\theta^*)^2 = \kappa^4. \end{equation} For example, for $p_T=30~{\rm GeV}$ we have $\kappa=2.23$ from Table~\ref{tab:kappa}, so the crossover occurs where $u_V/d_V \sim \kappa^4 \approx 25$. The larger the lepton $p_T$ (recall that large $p_T$ means small $\kappa$), the earlier (in terms of increasing $y_\ell$) this will happen, as confirmed by Fig.~\ref{fig:varptmin}. In principle LHCb data should be sensitive to this, but a very significant increase in statistics is needed compared to the present data~\cite{Aaij:2012vn}, for which the MSTW2008 PDFs give a good prediction. When this is achieved, data in bins of different $p_T$ cut will be illuminating.

In summary, the behaviour of the lepton asymmetry shown in Fig.~\ref{fig:varptmin} can now be understood in terms of a fairly complex interplay of PDF and $V\pm A$ effects. For small $y_\ell$, the asymmetry is sensitive to the combination $u_V-d_V$ at values of $x$ between $M_W/\sqrt{s}$ and $\kappa(p_{T{\rm min}})M_W/\sqrt{s}$, see (\ref{eq:Alep0}), where $p_{T{\rm min}}$ is the minimum observed $p_T$ of the lepton and where values of $\kappa(p_T)$ are shown in Table \ref{tab:kappa}. This results in a fairly clear decrease in the asymmetry as the minimum lepton $p_T$ cut is increased. At high $y_\ell > 3$ the asymmetry is even more sensitive to the minimum lepton $p_T$ cut. In this region the valence $u$ and $d$ quarks are being sampled at very high $x$, see Fig.~\ref{fig:x_varptmin}.

\section{Predictions for the LHC Using Modified PDFs}

\begin{figure}[htb!] \centering \vspace{-1cm} \includegraphics[width=0.7\textwidth,clip]{WZ-Eigenvectors} \vspace{-1cm} \caption{The variation in the quality of the fit to ATLAS vector boson production as a function of rapidity for different MSTW2008 eigenvectors. Eigenvectors 9, 14 and 18 are mainly associated with the gluon, $d_V$ and $u_V$, respectively.} \label{fig:WZ-Eig} \end{figure}

The MSTW2008 set of PDFs is found to give very good predictions for the vast majority of relevant measurements made at the LHC to date, see, for example, \cite{Ball:2012wy}. However, an exception is the charged lepton asymmetry arising from $W^\pm$ production, where it is clear that this set of PDFs does not give the optimum description~\cite{Aad:2011dm,Chatrchyan:2012xt,Watt:2012tq,Chatrchyan:2011jz}. In this section we investigate the reason for this deficiency. We will show that the extended `Chebyshev' parameterisation and, to a lesser extent, the improvement in the treatment of the shadowing corrections to the deuteron data, completely remove the problem, without a deterioration in the excellent description of the other data.
\subsection{Preparatory study of LHC $W,Z$ rapidity distributions}

First, however, let us describe a recent quantitative study \cite{Watt:2012tq} of the CMS \cite{Chatrchyan:2011jz} and ATLAS \cite{Aad:2011dm} measurements of the {\it charged lepton asymmetry}, with cuts on the charged lepton $p_T$ of $25~{\rm GeV}$ and $20~{\rm GeV}$, respectively, and a cut on missing energy of $25~{\rm GeV}$ in the latter case. In that paper \cite{Watt:2012tq}, a {\it reweighting} technique, originally introduced in \cite{Ball:2010gb} (modifying an earlier proposal in \cite{Giele:1998gw}), was used, with a slightly different method of application, to estimate the effect these new asymmetry data sets would have on both the central value and uncertainty of the MSTW2008 PDF set. The reweighted PDFs were able to turn an initial $\chi^2$ of 2--3 per asymmetry data point into just over 1 per point. As is clear from the previous section, the major PDF sensitivity is in the $u_V-d_V$ distribution. When these new data sets were included, $u_V-d_V$ was found to change most at $x=0.02$ for $Q^2=10^4~{\rm GeV}^2$, increasing by about $5\%$, which is similar to the size of its uncertainty. Due to the sum rule, this also resulted in a reduction for $x<0.001$, a region where there is no data constraint. The reduction is also about the size of the uncertainty. After the reweighting, the uncertainty reduced to about $60$--$70\%$ of its original size near $x=0.02$. The data from the higher luminosity CMS measurement \cite{Chatrchyan:2012xt}, obtained with a higher minimum lepton $p_T$ cut of $35~{\rm GeV}$, were not studied in \cite{Watt:2012tq}, but it is clear from the comparison in \cite{Chatrchyan:2012xt} that their description using MSTW2008 PDFs is worse than for the data with the lower $p_T$ cut.

Here we extend this previous study somewhat. We begin by comparing, at NLO for simplicity, the fit quality to the {\it full} ATLAS lepton rapidity data from \cite{Aad:2011dm}, of which the asymmetry measurements are only a subset, and for which information on the size and shape in rapidity of the cross section is lost when taking the ratio of the difference to the sum of $W^+$ and $W^-$ cross sections. Comparing the full data set to the MSTW2008 PDFs at NLO (without higher-order electroweak corrections), using APPLgrid \cite{Carli:2010rw} (which uses the MCFM \cite{Campbell:2002tg} code), we calculate $\chi^2/N_{\rm pts.} =60/30$, where the 30 data points are 11 each from $W^{\pm}$ production and 8 from $Z$ production. As for the asymmetry data, it is clear that MSTW2008 PDFs do not provide the optimum fit quality, but we note that the PDF sets of all other groups (for which APPLgrid can easily be used to calculate cross section predictions) seem also to obtain $\chi^2/N_{\rm pts.}$ noticeably more than 1 per point, the best being the CT10 PDF set, with about 1.1 per point.\footnote{We note that the particularly good description by CT10 is probably due to the larger strange distribution in their PDF set than in the others. A study by ATLAS has shown that their data prefer a large strange distribution, in fact seemingly one which is the same size as the $\bar u$ and $\bar d$ distributions even at low $Q^2$ \cite{Aad:2012sb}. We do indeed see some small improvement in fit quality associated with the eigenvector most associated with the strange PDF normalisation, but rather less than for the three eigenvectors 9, 14, 18 mentioned below.
This means that only a marginal improvement can be obtained by changing the strange distribution by one standard deviation. We have confirmed this by making a more thorough study. Moreover, a study by the NNPDF group has reached a similar conclusion \cite{Ball:2012cx}.} In order to investigate the manner in which the description of these data could be improved, we look at the fit quality obtained using each of the uncertainty eigenvectors. This is shown in Fig.~\ref{fig:WZ-Eig}. The fit quality to the ATLAS data improves markedly, i.e.~by about $0.2 \times 30= 6$ units, in one direction for eigenvector 9, which is mainly associated with the gluon distribution; this alters the common shape and normalisation of all three $(W^\pm,Z)$ rapidity distributions, via the dependence of the quarks at high $Q^2$ on the gluon due to evolution. The value of $\chi^2$ improves by slightly less for eigenvectors 14 and 18. These are associated with $d_V$ and $u_V$ respectively; variation in these affects the asymmetry. Since we underestimate the asymmetry, it is clear that the eigenvector directions leading to the improvement decrease $d_V$ or increase $u_V$ in the region of $x=0.02$. Hence it follows that a fit to these data sets would move the PDFs in these directions for $u_V$ and $d_V$ (and also modify the gluon distribution to some extent). This conclusion is verified by reweighting the PDFs according to the prescription in \cite{Watt:2012tq}, with asymmetric PDF uncertainties, using the full data on the $W$ and $Z$ rapidity distributions. The result for the difference in the valence quarks, $u_V(x)-d_V(x)$ at $Q^2=10^4~{\rm GeV}^2$, is shown in the upper panel of Fig.~\ref{fig:WZval}, compared to the MSTW2008 PDFs. The study uses 1000 random PDF sets. After reweighting, the effective number of sets is $N_{\rm eff}=190$. The small fraction ($\sim 20\%$) of effective PDFs arising in the reweighting procedure shows that there is a significant variation in the fit quality for different random sets; some give enhanced quality compared to the central MSTW2008 set, but many give rather worse quality. There is a distinct tendency for $u_V-d_V$ to increase at $x \approx 0.02$, and correspondingly decrease at lower $x$. However, the effect is less pronounced than that seen in \cite{Watt:2012tq} when using only the asymmetry data, with the change in the average value being less than the uncertainty, even at $x=0.02$. This slightly smaller change has two origins. Firstly, the small systematic uncertainties are treated as entirely uncorrelated in the asymmetry, but some history of the correlation persists in the full treatment of the separate $W^+$ and $W^-$ rapidity distributions. For example, one correlated systematic moves the $W^+$ and $W^-$ rapidity distributions up and down in opposite directions, independent of rapidity, and clearly contributes to some correlation in the asymmetry. Maintaining this information allows for a slightly easier fit to the asymmetry. Secondly, and more importantly, if only asymmetry data are used, all PDFs which carry a high weight must improve the comparison for the asymmetry. If the full data are used, some higher-weight PDFs produce better fits due to an improvement in the overall shape of all distributions with rapidity, or an improvement in the consistency of the $W$ and $Z$ data, so not all high-weight PDFs have an increase in $u_V-d_V$ at $x\sim 0.02$. This shows that comparing to asymmetry data alone can exaggerate their effect and importance.
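To make the reweighting step concrete, the sketch below implements replica reweighting assuming the weight formula of \cite{Ball:2010gb}, $w_k \propto (\chi^2_k)^{(n-1)/2}\exp(-\chi^2_k/2)$ for $n$ data points, together with the usual Shannon-entropy estimate of the effective number of sets; \cite{Watt:2012tq} applies this with a slightly different method, so the code, and in particular its toy inputs, is illustrative only.

\begin{verbatim}
# Sketch of PDF reweighting, assuming the weight formula of Ball et al.:
#   w_k  propto  (chi2_k)^((n-1)/2) * exp(-chi2_k / 2)
# for n data points, with N_eff from the Shannon entropy of the weights.
import numpy as np

def reweight(chi2, n_pts):
    """Weights for each replica, normalised so that sum(w) = N."""
    logw = 0.5 * (n_pts - 1) * np.log(chi2) - 0.5 * chi2
    logw -= logw.max()                   # avoid overflow when exponentiating
    w = np.exp(logw)
    return w * len(w) / w.sum()

def n_eff(w):
    """Effective number of replicas after reweighting."""
    return np.exp(np.mean(w * np.log(len(w) / np.maximum(w, 1e-300))))

rng = np.random.default_rng(1)
chi2 = rng.chisquare(df=30, size=1000) + 15.0   # toy chi^2 values, 30 points
w = reweight(chi2, n_pts=30)
obs = rng.normal(size=1000)                     # toy PDF-dependent observable
print(n_eff(w), np.average(obs, weights=w))     # N_eff and reweighted mean
\end{verbatim}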
In the lower panel of Fig.~\ref{fig:WZval} we also show the effect of the new data on the gluon distribution. The change is not large, but is not entirely insignificant, and an improvement in the shape of the rapidity distributions does require a modification of the gluon distribution. After reweighting, the fit quality improves to $\chi^2/N_{\rm pts.}=48/30$.

\begin{figure}[htb!] \centering \vspace{-1cm} \includegraphics[width=0.7\textwidth,clip]{uv-dv}\\ \vspace{-1.5cm} \includegraphics[width=0.7\textwidth,clip]{WZgluon} \vspace{-1cm} \caption{The effect of reweighting on the $u_V(x)-d_V(x)$ distribution (top) and $g(x)$ distribution (bottom) at $Q^2=10^4~{\rm GeV}^2$. The hatched (red) band shows the average value and standard deviation of 1000 randomly generated sets of MSTW2008 PDFs and the continuous (green) band the same PDFs after reweighting according to the fit quality for the ATLAS data on $W,Z$ rapidity.} \label{fig:WZval} \end{figure}

\subsection{LHC $W,Z$ rapidity distributions from modified MSTW PDFs}

We now come to an important result: the description of the ATLAS $W^\pm,Z$ rapidity data at NLO using the two modified PDF sets extracted earlier in this paper, namely the PDF set based on the Chebyshev polynomial parameterisation (MSTW2008CP), and the set `MSTW2008CPdeut' which in addition includes the improved deuteron corrections. The change in $u_V$ in MSTW2008CP is actually rather similar to that for eigenvector 18, though bigger, and the further change in MSTW2008CPdeut is similar to eigenvector 14. As one would expect, this does lead to an improvement in the comparison to the data. The $\chi^2/N_{\rm pts.}$ improves from $60/30$ to $49/30$ for MSTW2008CP, and to $46/30$ for MSTW2008CPdeut. This is as good as any other PDF set at NLO, except for CT10, which has a much larger strange quark distribution than that of other global analyses (as discussed in the previous footnote). For the ATLAS asymmetry data the $\chi^2/N_{\rm pts.}$ improves from $30/11$ to $15/11$ for MSTW2008CP, and to $9/11$ for MSTW2008CPdeut.

It is particularly informative to study how the combination $u_V-d_V$ is changed in the new fits with the extended parameterisations. Fig.~\ref{fig:uVminusdV} compares the values of $u_V-d_V$ obtained in the MSTW2008CP and MSTW2008CPdeut fits with those of the original MSTW2008 analysis. Indeed, $u_V-d_V$ increases dramatically in the region $x \sim 0.01$--$0.06$ which is probed by the $W^\pm,~Z$ rapidity distributions at the LHC. We emphasise that exactly the same data sets are used in all three analyses. Although some Tevatron lepton asymmetry data were included in the original MSTW2008 analysis, it is remarkable that the extended parameterisations make such a sizeable change to $u_V-d_V$ for $x \sim 0.01$--$0.06$ and improve the overall fit to the data, as well as giving a good description of the LHC lepton asymmetry data (which were not included in the fits). Moreover, the changes in $u_V,~d_V$ are very small in the region $x > 0.05$, where they are constrained by other types of data in the fits. An exception is the approximately 5$\%$ increase in $d_V$ for $x \sim 0.5$ when the deuteron corrections are included. For $x\lapproxeq 0.01$, where there are no data constraints, and which has little impact on the LHC lepton asymmetry, the changes can be extremely large -- ranging from an increase of up to 50$\%$ at $x=0.005$ to a significant decrease below $x \sim 0.0005$.

\begin{figure}[htb!]
\centering \vspace{-1.5cm} \includegraphics[width=0.7\textwidth,clip]{valencediff} \vspace{-1.5cm} \caption{The combination $u_V-d_V$ obtained in the new MSTW2008CP and MSTW2008CPdeut fits compared with that of the original MSTW2008 analysis. All three analyses fit to exactly the same data sets.} \label{fig:uVminusdV} \end{figure}

The large change in $u_V$ and $d_V$ in the extended `Chebyshev' parameterisations in the region $x\lapproxeq 0.01$ unconstrained by data has an interesting consequence. The small-$x$ behaviour of the input valence distributions in the original NLO MSTW2008 parameterisation was controlled by the parametric forms \begin{eqnarray} xu_V &\propto& x^{\delta_u} \quad {\rm with} \quad \delta_u=0.29\pm 0.02, \\ xd_V &\propto& x^{\delta_d} \quad {\rm with} \quad \delta_d=0.97\pm 0.11, \end{eqnarray} at $Q_0^2=1~{\rm GeV}^2$. On the other hand, with the `Chebyshev' parameterisations we have $\delta_u=1.00$ and $\delta_d=0.70$ for the MSTW2008CP fit, and $\delta_u=0.70$ and $\delta_d=0.65$ for the MSTW2008CPdeut fit, where in each case the uncertainties are a little bigger than for the MSTW2008 set. This is much more in line with the Regge expectation that the two powers should be the same, particularly in the case of MSTW2008CPdeut, where the difference is easily consistent with zero within uncertainties. The powers are also fairly close to the Regge expectation $\delta \sim 0.5$--$0.6$. It seems as though the standard parameterisation for the MSTW2008 valence quarks, combined with the constraint from a variety of data, pushes the small-$x$ valence quarks in a direction somewhat at odds with the LHC asymmetry data. A weaker constraint from other data, or an equally restrictive but different parameterisation, possibly with the small-$x$ behaviour of $d_V$ and $u_V$ tied together more closely, might have provided a better prediction. However, the extended parameterisation introduced here seems to be a preferable approach, describing all data sensitive to $u_V$ and $d_V$ well and automatically making the small-$x$ forms of $u_V$ and $d_V$ more similar.

\begin{figure}[htb!] \centering \vspace{-1.5cm} \includegraphics[width=0.6\textwidth,clip]{lasy_varptmin_18May2012_zoom_revised} \vspace{-1.5cm} \caption{The variation in the prediction for lepton asymmetry data calculated at LO and zero width using the original and the two modified MSTW2008 PDFs. The sets of curves correspond to different choices of the minimum $p_T$ cut (shown in GeV) applied to the observed charged lepton from the $W$ decay.} \label{fig:lasymod} \end{figure}

Let us discuss the features of Fig.~\ref{fig:uVminusdV}, and the description of the charged lepton asymmetry, in a little more detail. The change in the $u_V$ and $d_V$ distributions in MSTW2008CP and MSTW2008CPdeut seems ideal for improving the comparison to the ATLAS lepton rapidity data, and for removing the shortcomings in the description of these data by the MSTW2008 set of PDFs. However, it can be noted that the apparent difficulties in the MSTW description of the LHC lepton asymmetry data are very much correlated with the minimum $p_T$ cut applied to the final state lepton. The comparisons with unpublished data obtained with a $p_T=20~{\rm GeV}$ cut seemed perfectly good \cite{Santaolalla:2012ia,ATLASnote}, but those for data obtained with a higher $p_T=35~{\rm GeV}$ cut were much worse \cite{Chatrchyan:2012xt}.
From the study in the previous section, we see that changes in the minimum $p_T$ cut on the data change the $x$ range probed in $u_V-d_V$. As $p_{T{\rm min}}$ decreases, the $x$ region expands to include lower and lower values of $x$. Hence, the change that MSTW2008CP and MSTW2008CPdeut make to the lepton asymmetry results is very dependent on the choice of $p_{T{\rm min}}$ of the corresponding data. This is shown explicitly in Fig.~\ref{fig:lasymod} (where, as in the previous section, the curves are made using LO formulae and NLO PDFs). We see that the use of MSTW2008CP PDFs increases the lepton asymmetry at low rapidity, more so for a higher minimum $p_T$ cut. That is, the proportional change near $y_\ell=0$ for $p_T=35~{\rm GeV}$ is about $15\%$, whereas for $p_T=20~{\rm GeV}$ it is only about $5\%$. The asymmetry at low rapidity increases slightly further when using MSTW2008CPdeut, but by only a small amount compared to the change resulting from using MSTW2008CP PDFs. Hence, the majority of the change is obtained from the extension of the PDF input parameterisation, with only a minor further change when flexible deuteron corrections are also included. We note from the figure that for the highest $p_T$ cuts the asymmetry for $y_\ell \gapproxeq 2.5$ tends to decrease for MSTW2008CPdeut, due to the larger value of $d_V$ for $x\sim 0.4$.

\begin{figure}[htb!] \centering \vspace{-9cm} \hspace{1.3cm} \includegraphics[width=0.8\textwidth,clip]{compareatlas} \caption{The improvement in the fit quality of the ATLAS lepton asymmetry data for $p_T > 20$~GeV (and missing transverse energy $\not\mathrel{E}_T^\nu>25$~GeV)~\cite{Aad:2011dm}, in going from the original MSTW2008 $\to$ MSTW2008CP $\to$ MSTW2008CPdeut sets of partons. All three parton sets are obtained by fitting to exactly the same (pre-LHC) data set.} \label{fig:ATLAScomp} \end{figure}

\begin{figure}[htb!] \centering \vspace{-9cm} \hspace{1.3cm} \includegraphics[width=0.8\textwidth,clip]{comparecms35} \caption{The improvement in the fit quality of the CMS lepton asymmetry data for $p_{T} > 35$ GeV~\cite{Chatrchyan:2012xt}, in going from the original MSTW2008 $\to$ MSTW2008CP $\to$ MSTW2008CPdeut sets of partons. All three parton sets are obtained by fitting to exactly the same (pre-LHC) data set.} \label{fig:CMS35comp} \end{figure}

The detailed NLO comparisons to the ATLAS asymmetry data for $p_T > 20$~GeV (and missing transverse energy $\not\mathrel{E}_T^\nu>25$~GeV) and the CMS electron asymmetry data for $p_T > 35 ~{\rm GeV}$ (made using NLO $K$-factors \cite{Watt:2012tq} computed with \textsc{DYNNLO}~\cite{Catani:2009sm}) are shown in Fig.~\ref{fig:ATLAScomp} and Fig.~\ref{fig:CMS35comp}. The CMS data are most sensitive to the valence quark difference at small $x$, and are predicted worst by the MSTW2008 PDFs. One sees that the initial poor $\chi^2$ per point of 5.3 for MSTW2008 is reduced to 1.5 for MSTW2008CP, and to slightly less than 1 for MSTW2008CPdeut, where all experimental uncertainties are simply added in quadrature. The last could hardly be a better description of the data. The improvement is similar for the ATLAS data, but the standard MSTW2008 PDFs do not give such a poor prediction in this case. Note that these are all predictions (or, more precisely, ``postdictions''); neither these data, nor indeed any other LHC data, have been used in order to extract the PDFs.
The main reason for the small extra change coming from the MSTW2008CPdeut PDFs is simply the removal of the significant small-$x$ shadowing deuteron correction in the default MSTW2008 extraction -- recall that our freely determined deuteron correction is extremely small for $x\sim 0.02$. The uncertainty band for the prediction for MSTW2008CP is very similar to that for MSTW2008. The extended parameterisation has changed the average value of $u_V-d_V$ far more than it has affected the nominal uncertainty. The uncertainty band for MSTW2008CPdeut is a little bigger, reflecting the extra uncertainty introduced by having a varying deuteron correction. We also examine the effect of the $W$ and $Z$ rapidity data on the MSTW2008CP and MSTW2008CPdeut PDFs by looking at the eigenvector sensitivity and using the {\it reweighting procedure}. The change in $\chi^2$ for each of the eigenvectors of the MSTW2008CP set is shown in Fig.~\ref{fig:WZ-EigCP}. The dominant eigenvector, number 12, is still mainly to do with the gluon, but some variation of the strange quark is mixed in. The situation is very similar for the MSTW2008CPdeut set. Hence, we still obtain a small effect on the gluon distribution, similar to that for MSTW2008 shown in the lower plot of Fig.~\ref{fig:WZval}. However, even with the modified sets there are still some small changes required for the $u_V-d_V$ distribution, as shown in Fig.~\ref{fig:WZvalCPdeut}. After reweighting, the fit quality improves to $\chi^2/N_{\rm pts.}=39.5/30$ for MSTW2008CP and $\chi^2/N_{\rm pts.}=38.5/30$ for MSTW2008CPdeut. Note, however, that the effective number of PDFs is far greater than in the case of MSTW2008, showing the increased compatibility of the data and the PDFs. Even though some of the eigenvectors that show an improved fit to the data are those with a larger strange quark at small $x$, the weight carried by the random PDFs with a larger strange fraction is not such as to show a clear increase in the strange fraction after reweighting. Essentially, the direct constraint from the dimuon data overwhelms any pull from the ATLAS data. \begin{figure}[htb!] \centering \vspace{-1cm} \includegraphics[width=0.7\textwidth,clip]{WZ-Eigenvectors-proj} \vspace{-1cm} \caption{The variation in the quality of the fit to ATLAS vector boson $(W,Z)$ production as a function of rapidity for different MSTW2008CP eigenvectors.} \label{fig:WZ-EigCP} \end{figure} \begin{figure}[htb!] \centering \vspace{-1cm} \includegraphics[width=0.7\textwidth,clip]{uv-dvCP}\\ \vspace{-1.5cm} \includegraphics[width=0.7\textwidth,clip]{uv-dvCPdeut} \vspace{-1cm} \caption{The effect of reweighting on the $u_V(x)-d_V(x)$ distribution at $Q^2=10^4~{\rm GeV}^2$. The hatched (red) band shows the average value and standard deviation of 1000 randomly generated sets of MSTW2008CP (top) and MSTW2008CPdeut (bottom) PDFs and the continuous (green) band the same PDFs after reweighting according to the fit quality for the ATLAS data on $W,Z$ rapidity.} \label{fig:WZvalCPdeut} \end{figure} We have also calculated the $\chi^2$ for the fits with 3, 5 and 6 Chebyshev polynomials. For 3 polynomials the $\chi^2/N_{\rm pts.}$ is a little above $54/30$ both with and without free deuteron corrections, noticeably worse than for 4 polynomials, where the corresponding numbers were $46/30$ and $49/30$ respectively. For 5 and 6 polynomials the $\chi^2$ is always in the low 40s.
Hence this is an improvement on the results with 4 polynomials, but is not as good as the result obtained from reweighting using MSTW2008CP and MSTW2008CPdeut, where the effective number of PDFs remains large. Hence, the additional improvement in the prediction using 5 or 6 polynomials can also be comfortably obtained by minor variations in the PDFs with 4 polynomials, variations which are consistent with the size of the uncertainties on these PDF sets. This lends further weight to the conclusion that the use of 5 or 6 polynomials adds little value compared to the use of 4. Hence the MSTW2008CP and MSTW2008CPdeut PDFs result in a big change in the high-$p_T$ cut, low-rapidity lepton asymmetry at the LHC. However, as we have shown, this quantity is extremely sensitive specifically to $u_V -d_V$ for $x \sim M_W/\sqrt{s}$, the PDF combination that changes by far the most. \subsection{Predictions for other LHC data using modified MSTW PDFs} What about the predictions for other LHC observables? Apart from $u_V,d_V$ at low $x$, other PDFs have changed little, especially the gluon distribution, which hardly changes at all from MSTW2008 compared to the size of its uncertainties. Similarly, in the new fits $\alpha_S(M_Z^2)$ is left free, but only experiences a tiny change. Hence, we would expect little variation in most cross-section predictions, as compared to those of the MSTW2008 PDFs. This is verified in Table \ref{tab:sigma}, where we show the percentage variation in predictions for various standard cross sections compared to those using MSTW2008 PDFs (the Higgs boson predictions are for $M_H=125~{\rm GeV}$). We see that there is extreme stability in the total cross-section predictions. All the changes lie inside the uncertainties, and in most cases are very much smaller than the uncertainty. Even $\sigma(W^+)/\sigma(W^-)$ changes by barely more than $1\%$, reflecting the fact that the largest change in the asymmetry is for a small region of phase space, i.e.~high-$p_T$ and low-$y_\ell$, and, moreover, is rather small compared to the individual cross sections. Hence, the excellent agreement of predictions using MSTW2008 PDFs with the measurements of $W$ and $Z$ total cross sections by ATLAS \cite{Aad:2011dm} and CMS \cite{CMS:2011aa} is not altered. The change is also displayed in Fig.~\ref{fig:sigmaCPdeut}, for a continuous range of LHC collider energy. This illustrates the same results. There is a more noticeable change of $\sim 1.5\%$ in the $t\bar t$ cross section for the lowest LHC energies, but this is where the gluon distribution is probed at relatively high $x$, where the uncertainties are largest. The PDF uncertainty on $\sigma_{t \bar t}$ for $7~{\rm TeV}$ is $2.9\%$, or $3.9\%$ when the $\alpha_S(M_Z^2)$ uncertainty is also included. As smaller $x$ values are probed at higher energy the uncertainty reduces, e.g.~to $3.1\%$ at $14~{\rm TeV}$, and the difference between the MSTW2008 and MSTW2008CPdeut predictions also reduces to less than $1\%$. We also show in Fig.~\ref{fig:luminosities} the parton luminosities for the MSTW2008CP and MSTW2008CPdeut sets compared to those of MSTW2008. These are consistent with the cross-section results, i.e.~there is little change in the luminosities except a tendency for the quark--antiquark luminosity to be slightly higher at the largest $\sqrt{\hat s}$ and the gluon--gluon luminosity to be slightly lower in the same region for the MSTW2008CPdeut set.
This is because of the increase of the down quark at high $x$ in this set, and a corresponding small decrease in the gluon luminosity coming from the fit to inclusive jet data. However, these changes are comfortably smaller than the PDF uncertainty. \begin{table} \begin{center} \begin{tabular}{|l|l|l|l|} \hline & {CP} & {CPdeut} & unc. \\ \hline $\!\! W\,\, {\rm Tevatron}\,\,(1.96~{\rm TeV})$ & +0.6 & +0.1 & 1.8 \\ $\!\! Z \,\,{\rm Tevatron}\,\,(1.96~{\rm TeV})$ & +0.8 & +0.7 & 1.9 \\ $\!\! W^+ \,\,{\rm LHC}\,\, (7~{\rm TeV})$ & +0.7 & +0.3 & 2.2\\ $\!\! W^- \,\,{\rm LHC}\,\, (7~{\rm TeV})$ & $-0.7$ & $-0.4$ & 2.2\\ $\!\! Z \,\,{\rm LHC}\,\, (7~{\rm TeV})$ & +0.0 & $-0.1$ & 2.2\\ $\!\! W^+ \,\,{\rm LHC}\,\, (14~{\rm TeV})$ & +0.6 & +0.3 & 2.4\\ $\!\! W^- \,\,{\rm LHC}\,\, (14~{\rm TeV})$ & $-0.6$ & $-0.5$ & 2.4 \\ $\!\! Z \,\,{\rm LHC}\,\, (14~{\rm TeV})$ & +0.1 & $-0.1$ & 2.4\\ $\!\! {\rm Higgs} \,\,{\rm Tevatron}$ & $-0.5$ & $-1.8$ & 5.1\\ $\!\!{\rm Higgs} \,\,{\rm LHC}\,\,(7~{\rm TeV})$ & +0.2 & $-0.1$ & 3.3 \\ $\!\!{\rm Higgs} \,\,{\rm LHC}\,\,(14~{\rm TeV})$ & +0.1 & +0.1 & 3.1\\ $\!\! t\bar t \,\,{\rm Tevatron}$ & $+0.5$ & $-0.6$ & 3.2\\ $\!\! t\bar t\,\,{\rm LHC}\,\,(7~{\rm TeV})$ & $-0.4$ & $-1.8$ & 3.9 \\ $\!\! t\bar t\,\,{\rm LHC}\,\,(14~{\rm TeV})$ & $-0.2$ & $-0.8$ & 3.1\\ \hline \end{tabular} \end{center} \caption{The percentage change of various cross sections due to the modifications of the MSTW2008 PDFs. CP denotes the fit with the Chebyshev polynomial input parameterisation, and CPdeut denotes the fit with the deuteron corrections included in addition. To demonstrate the small changes in the cross sections, we also show, in the final column, the symmetrized PDF $+\,\,\alpha_S(M_Z^2)$ percentage uncertainties for the MSTW2008 PDFs.} \label{tab:sigma} \end{table} \begin{figure}[htb!] \centering \vspace{-2cm} \includegraphics[width=0.6\textwidth,clip]{deltasigmaCPdeut_revised} \vspace{-2cm} \caption{The variation in the prediction for various cross sections as a function of energy for the MSTW2008CPdeut PDFs compared to the values obtained from the original MSTW2008 PDFs.} \label{fig:sigmaCPdeut} \end{figure} \begin{figure}[htb!] \centering \vspace{-1cm} \includegraphics[width=0.495\textwidth,clip]{ratiogglumiCPdeut_68cl} \includegraphics[width=0.495\textwidth,clip]{ratioqqbarlumiCPdeut_68cl}\\ \vspace{-0.1cm} \includegraphics[width=0.495\textwidth,clip]{ratioJJlumiCPdeut_68cl} \vspace{-0.2cm} \caption{The parton luminosities for the MSTW2008, the MSTW2008CP and the MSTW2008CPdeut PDFs for production of a final state of particular invariant mass $\sqrt{\hat s}$ at the LHC for $\sqrt{s} = 8~{\rm TeV}$. The top left plot is for the gluon--gluon luminosity, the top right plot for the quark--antiquark luminosity (summed over flavours) and the lower plot for $g+4/9 \sum_q (q + \bar q)$, which is relevant for some quantities that can be initiated by both gluons and quarks, e.g. inclusive jets.} \label{fig:luminosities} \end{figure} \begin{figure}[htb!] \centering \vspace{-1cm} \includegraphics[width=0.7\textwidth,clip]{inclusive-chisquare}\\ \vspace{-1.2cm} \includegraphics[width=0.7\textwidth,clip]{dijet-chisquare} \vspace{-1cm} \caption{The variation in the quality of the fit to ATLAS inclusive jet production (top) and dijet production (bottom) as a function of rapidity for different MSTW2008 eigenvectors.} \label{fig:inclusive-Eig} \end{figure} \begin{figure}[htb!] 
\centering \vspace{-1cm} \includegraphics[width=0.7\textwidth,clip]{inclusive04}\\ \vspace{-1.5cm} \includegraphics[width=0.7\textwidth,clip]{inclusive06} \vspace{-1cm} \caption{The effect of reweighting on the $g(x)$ distribution at $Q^2=10^4~{\rm GeV}^2$. The hatched (red) band shows the average value and standard deviation of 1000 randomly generated sets of MSTW2008 PDFs and the continuous (green) band the same PDFs after reweighting according to the fit quality for the ATLAS data on inclusive jet production using $R=0.4$ (top) and $R=0.6$ (bottom).} \label{fig:inclusive04} \end{figure} Finally, we consider the description of LHC jet data. The $\chi^2$ is defined as \begin{equation} \chi^2 =\sum_{i=1}^{N_{\rm pts.}} \left(\frac{\hat{D}_{i}-T_{i}}{\sigma_{i}^{\rm uncorr.}}\right)^2 + \sum_{k=1}^{N_{\rm corr.}}r_{k}^2, \label{eq:chi2def} \end{equation} where $\hat{D}_{i} \equiv D_{i} - \sum_{k=1}^{N_{\rm corr.}}r_{k}\, \sigma_{k,i}^{\rm corr.}$ are the data points, allowed to shift by the correlated systematic errors in order to give the best fit, $\sigma_{k,i}^{\rm corr.}$ is an absolute correlated uncertainty, and the normalisation is treated in the same way as the other correlated uncertainties. The same definition is used for the comparison to $W,Z$ data, but the precise definition is less important in that case. The $\chi^2$ per point for the ATLAS inclusive jet data \cite{Aad:2011fc} is $\chi^2/N_{\rm pts.}=70/90$ for jet radius $R=0.4$ and $\chi^2/N_{\rm pts.}=71/90$ for $R=0.6$. The variation as a function of the MSTW eigenvectors for the two different choices of the jet radius is shown in the upper plot of Fig.~\ref{fig:inclusive-Eig}. (The precise value of $\chi^2$ depends on the detailed manner in which the correlated systematic uncertainties are treated.) The calculation is made at NLO using APPLgrid (which uses the NLOjet++ \cite{Nagy:2001fj,Nagy:2003tz} code) with a renormalisation and factorisation scale choice of $p_T^{\rm jet, max}$ in each rapidity bin, though extremely similar results are obtained using FastNLO \cite{Kluge:2006xs}. The $\chi^2$ for the MSTW2008 central set is very good, at least as good as for the PDF sets of other groups, though the data do not discriminate strongly. Similarly, the variation in $\chi^2$ between MSTW2008 eigenvectors is very small: at the absolute most about 2 units in $\chi^2$ for the 90 data points -- though this variation is rather smaller than the few-unit difference observed between different PDF groups. Hence, there is even less possibility of these data improving the knowledge of a single set of PDFs than there is for them to discriminate between different sets. As one might expect, the $\chi^2$ using MSTW2008CP and MSTW2008CPdeut is hardly changed from that for MSTW2008. In the lower plot of Fig.~\ref{fig:inclusive-Eig} we see that exactly the same picture holds for the ATLAS dijet data (which corresponds to a different analysis of basically the same data set). Here, the scale choice is multiplied by a factor of $\exp(0.3y^{\star})$, where $y^{\star}$ is half the rapidity separation, to avoid instabilities at high rapidity. In this case the $\chi^2$ values are larger but similar between the different PDF groups, and at the very most there is a $10\%$ variation in $\chi^2$ value between different eigenvectors, and in most cases much less. In Fig.~\ref{fig:inclusive04} we see the effect of reweighting when using the MSTW2008 PDFs.
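For orientation, both the profiled $\chi^2$ of Eq.~(\ref{eq:chi2def}) and the reweighting step can be sketched in a few lines. This is an illustrative sketch only: the exponential weight convention for the random sets and the entropy-based definition of the effective number of sets are assumptions made here for definiteness, and all arrays are placeholders.

\begin{verbatim}
# Illustrative sketch of Eq. (chi2def) and of the reweighting step.
import numpy as np

def chi2_profiled(data, theory, sigma_uncorr, sigma_corr):
    """chi^2 with the nuisance parameters r_k minimised analytically.
    sigma_corr[k, i]: absolute correlated uncertainty k on point i."""
    d = (data - theory) / sigma_uncorr
    s = sigma_corr / sigma_uncorr            # scaled sources, shape (K, N)
    a = np.eye(s.shape[0]) + s @ s.T         # (I + S S^T) r = S d
    r = np.linalg.solve(a, s @ d)            # best-fit shifts r_k
    return np.sum((d - r @ s)**2) + np.sum(r**2)

def reweight(chi2, observable):
    """chi2[k]: fit quality of random set k; observable[k]: its value."""
    logw = -0.5 * chi2                       # assumed weight convention
    w = np.exp(logw - logw.max())            # stabilised exponential
    w /= w.sum()                             # normalised weights
    mean = np.sum(w * observable)
    std = np.sqrt(np.sum(w * (observable - mean)**2))
    n_eff = np.exp(-np.sum(w * np.log(w + 1e-300)))  # exp(Shannon entropy)
    return mean, std, n_eff
\end{verbatim}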
The small variation between the random PDF sets is illustrated by the very high effective number $N_{\rm eff}$ of PDFs; in both cases the change in the average value is well within uncertainties and the decrease in the standard deviation is minimal. With reweighting the fit quality improves by only one or two units in $\chi^2$. Similarly, the MSTW2008CP and MSTW2008CPdeut sets give a $\chi^2$ within a couple of units of MSTW2008 for both values of $R$. Future data based on higher luminosity, and with a better determination of systematic uncertainties, will be more constraining and discriminating. \section{PDFs at NNLO} We have repeated the study of extending the parameterisation in the valence and sea quarks, and of investigating deuteron corrections, using NNLO PDFs extracted from an NNLO fit to data. We do not go into details here as all the conclusions are qualitatively similar. The input PDFs at NNLO are not exactly the same shape as at NLO due to higher-order corrections, mainly in the coefficient functions. However, as we will discuss, the relative change in the PDFs from the extended parameterisation and deuteron corrections is altered rather little by this. To be a little more precise, the extended parameterisation leads to an NNLO fit with $\Delta \chi^2 = -37$ compared to the standard NNLO fit, in comparison to $-29$ for the NLO fit. The changes in PDFs are completely analogous, i.e.~the $u_V(x)$ distribution increases for $x \sim 0.01$ and decreases at very small $x$, but there is little change in other PDFs. Despite the slightly larger fit improvement at NNLO, the change in $u_V(x)$ is slightly smaller than at NLO. When deuteron corrections are allowed to be free, they take a form very similar to the NLO case, though with the normalisation marginally smaller than 1 and the precise value changing slightly if the pivot point $x_p$ is varied. If $x_p=0.03$ is used instead of $x_p=0.05$ the fit is only three units worse, but the deuteron correction is very similar to that at NLO, with normalisation marginally above 1. The fit has $\Delta \chi^2=-82$ compared to our default fit, similar to the value of $-86$ at NLO. As at NLO, the only additional change in the PDFs is a further slight change in $u_V(x)$, together with a change in $d_V(x)$ similar to that at NLO. Again the changes are qualitatively the same as those at NLO, but slightly smaller. The changes to predicted cross sections (such as those shown in Table \ref{tab:sigma} and Fig. \ref{fig:sigmaCPdeut}) using the modified NNLO PDFs are very similar to those at NLO, perhaps a little smaller in general, and as at NLO are much smaller than the PDF uncertainties. The changes in the modified NNLO PDFs automatically significantly reduce the deficiencies that the MSTW2008 PDFs have with the LHC asymmetry. The effect from the PDFs is a little smaller than at NLO, due to the smaller change in the PDFs. However, this difference is well within uncertainties, and is, for example, no longer present for the $x_p=0.03$ fit mentioned above. (Similarly, the small-$x$ powers $\delta$ for the up and down valence distributions are in better agreement for $x_p=0.03$ -- the values of $\delta$ being very sensitive to small changes in detail.) We also note that the NNLO cross section corrections themselves automatically improve the description of data very slightly. As at NLO, the best fit to the total $W,Z$ rapidity data can still be improved, and at NNLO the fit to the general shape in rapidity is a little worse than at NLO.
While the modified PDFs essentially cure the problem with the asymmetry, detailed changes in the gluon and sea are required for the best possible fit, as shown by the reweighting exercise at NLO described above. \section{Conclusions} In this paper we have performed global PDF analyses in which the standard MSTW input parameterisations of the PDFs have been made more flexible by replacing the usual $(1+\epsilon x^{0.5} + \gamma x)$ factors in the valence, sea and gluon distributions by Chebyshev polynomial forms $(1+\sum a_iT_i(y))$, with $y=1-2\sqrt{x}$. A Chebyshev form has the advantage that the parameters $a_i$ are well-behaved and, compared to the coefficients in our standard parameterisation, are rather small, with moduli usually $\leq 1$. We demonstrated that about four Chebyshev polynomials are sufficient for high precision and used this number in the valence and sea distributions. However, the gluon distribution, which already had seven parameters, did not require extra free parameters. Hence, only two Chebyshev polynomials were used in this case, which is equivalent to the usual $(1+\epsilon x^{0.5} + \gamma x)$ factor. To explore the effects of using these more flexible input forms we fit to exactly the same data set as was used for the MSTW2008 analysis \cite{Martin:2009iq}. The resulting parton set was called MSTW2008CP. We found some improvement in the fit to the data, but the only significant PDF change was in the valence up-quark distribution, $u_V$, at small $x$. Although $\alpha_S(M_Z^2)$ was left free in the fit, its change is tiny. The use of Chebyshev forms allows a more consistent determination of the uncertainties of the PDFs. In our best fit we have 6 more free parameters than in the original MSTW2008 fit. In the determination of the uncertainties we allow one extra parameter to be free for the $u_V,~d_V$ and sea quark PDFs when evaluating the uncertainty eigenvectors. Hence, in the determination of the uncertainties of the MSTW2008CP PDFs we have 23 eigenvectors, rather than the 20 eigenvectors of the MSTW2008 analysis. Despite the extra eigenvectors, the uncertainties were found to be similar to those of the MSTW2008 partons, though they tend to be larger, particularly for the valence quarks at small $x$. The most significant change is in $u_V$, which now has a more realistic uncertainty, without the artificial `neck' for $x\sim 0.003$ at $Q^2=10^4~{\rm GeV}^2$. We also performed a detailed investigation of the nonperturbative corrections to be applied when fitting to the data obtained from deuteron targets. It will be important to continue to include these deuteron data in global PDF analyses for the relatively short term, in order to separate the $u_V$ and $d_V$ PDFs. The deuteron correction factor was parametrised in terms of 4 variables, and various MSTW2008CP global fits were performed allowing some, or all, of these variables to be free. We found that large improvements could be obtained in the description of the deuteron data, and also in the Tevatron charged lepton asymmetry data, as compared to the MSTW2008 analysis. The MSTW2008 fit had a fixed deuteron correction imposed, and then only at small $x$. Using the results from the present study, we have adopted the best, and most realistic, deuteron correction. The corresponding parton set is called MSTW2008CPdeut. The most significant change, in comparison to MSTW2008CP, is in the $d_V$ PDF, and in the uncertainties of both valence PDFs. Again the change in $\alpha_S(M_Z^2)$ is insignificant.
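To make the `Chebyshev' input form concrete, the factor $(1+\sum a_iT_i(y))$ with $y=1-2\sqrt{x}$ can be evaluated as below. This is an illustrative sketch only: the MSTW-like prefactor $A\,x^{\delta}(1-x)^{\eta}$ and all numerical values are placeholders, not fitted parameters.

\begin{verbatim}
# Illustrative evaluation of the 'Chebyshev' input factor,
# (1 + sum_i a_i T_i(y)) with y = 1 - 2 sqrt(x), multiplying an
# assumed MSTW-like prefactor A x^delta (1-x)^eta.
import numpy as np
from numpy.polynomial.chebyshev import chebval

def xf(x, A, delta, eta, a):
    y = 1.0 - 2.0 * np.sqrt(x)
    # chebval evaluates sum_i c_i T_i(y); prepend 1 for the constant term
    cheb = chebval(y, np.concatenate(([1.0], a)))
    return A * x**delta * (1.0 - x)**eta * cheb

x = np.logspace(-4, -0.01, 5)              # sample points in x
print(xf(x, A=1.0, delta=0.7, eta=3.5,     # placeholder parameters
         a=[0.1, -0.05, 0.02, 0.01]))      # four Chebyshev coefficients
\end{verbatim}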
In summary, the main changes to the MSTW2008 PDFs obtained in the CP and CPdeut `Chebyshev' fits are in $u_V$ and $d_V$ for $x \lapproxeq 0.03$ at high $Q^2\sim 10^4~{\rm GeV}^2$, or slightly higher $x$ at low $Q^2$: a region where there are few or weak constraints on the valence PDFs from the data used in these fits. There is also an approximately 5$\%$ increase in $d_V$ for $x \sim 0.5$ in the CPdeut fit. We have drawn attention to one type of measurement that is particularly sensitive to $u_V$ and $d_V$ in the small $x$ region. That is the charged lepton asymmetry from $W^\pm$ production and decay, which probes a little more deeply into the small $x$ region as the collider energy is increased. For a 7 TeV collider, the probed region is $0.01 \lapproxeq x \lapproxeq 0.05$. The PDF combination sampled by the asymmetry is ($u_V-d_V$) at low lepton rapidities. However, we showed that the description of the asymmetry data has a more intricate dependence on the PDFs at larger rapidities, and also depends sensitively on the experimental minimum $p_T$ cut applied to the observed lepton. In Fig. \ref{fig:uVminusdV} we plotted ($u_V-d_V$) at the scale $Q^2 = 10^4 ~{\rm GeV}^2$ approximately relevant for $W,Z$ production, obtained from the original MSTW2008 analysis together with the behaviour coming from the CP and CPdeut PDF sets; all three sets were fitted to exactly the same (pre-LHC) data. In Figs. \ref{fig:ATLAScomp} and \ref{fig:CMS35comp} we showed the predictions for the ATLAS and CMS asymmetry measurements obtained from the three sets of PDFs. It is remarkable that the `Chebyshev' sets, and, in particular, the MSTW2008CPdeut set, give such excellent descriptions of these data, which were not well predicted at low rapidities by the original MSTW2008 set. Note that the improvement is due to using a more flexible and more physically suitable (`Chebyshev') parameterisation of the input PDFs and, to a lesser extent, to taking more care with the deuteron corrections. We emphasise again that the main changes to the MSTW2008 PDFs are in the valence distributions in the small $x$ region, which is barely probed by the existing data. The lepton asymmetry at higher LHC energies will sample this region more fully, and at smaller $x$ values. It is not surprising, therefore, that we found that the predictions of the original MSTW2008 PDFs are essentially unchanged for all other observables. In all the cases that we investigated, the changes in the cross sections, compared to those of MSTW2008, were much smaller than the PDF uncertainties. This was even the case for the total $\sigma(W^+)/\sigma(W^-)$ ratio, since each cross section only obtains a small contribution from the high-$p_T$ cut, low-$y_\ell$ region, and moreover the change in the PDFs does not alter the individual differential cross sections by very much, but does so in a manner which is maximised in the asymmetry measurement. Hence, for the overwhelming majority of processes at the LHC the MSTW2008 PDFs give essentially the same results as the PDFs resulting from the investigations in this article. From the results of this paper it is clear that a full update of the MSTW2008 PDFs would benefit from an extended parameterisation similar to the form presented, and a modification of the deuteron corrections of some sort, together with some account of the uncertainty associated with these. Moreover, it would require other features, most particularly the inclusion of new data, including the LHC data considered in this article.
However, from the preliminary studies of including new data already undertaken in \cite{Thorne:2010kj,Watt:2012tq} and in this article, it is clear that the individual improvements considered so far give no sign of very significant changes (and no obvious sign that changes from individual improvements accumulate), other than in the form of the valence quarks at small $x$. The precise results of a full update, however, await a much more extensive study than the more specific one which is the focus of this article. \section*{Acknowledgements} We would like to thank Stefano Forte and Jon Pumplin for discussion on some of the issues in this article. The work of R.S.T. is supported partly by the London Centre for Terauniverse Studies (LCTS), using funding from the European Research Council via the Advanced Investigator Grant 267352.
\section{Background} \label{sec:back} \subsection{Deep Neural Networks} In this work, we focus on deep learning models, e.g., deep neural networks (DNNs) for classification. For simplicity, we introduce a conceptual deep neural network (DNN) as an example in Fig.~\ref{fig:dnn}, and remark that our approach is applicable to the state-of-the-art DNNs used in our experiments, such as ResNet \cite{resnet} and VGG \cite{vgg}. \paragraph*{DNN} \label{subsec:DNN} A DNN classifier is a function $f:X\to Y$, which maps an input $x\in X$ (often preprocessed into a vector) to a label $y\in Y$. As shown in Fig.~\ref{fig:dnn}, a DNN $f$ often contains an input layer, multiple hidden layers and an output layer. We use $\theta$ to denote the parameters of $f$, which assign a weight to each edge between connected neurons. Given an input $x$, we can obtain the output of each neuron on $x$, i.e., $f(x,ne)$, by calculating the weighted sum of the outputs of all the neurons in the previous layer and then applying an activation function $\phi$ (e.g., Sigmoid, hyperbolic tangent (tanh), or rectified linear unit (relu)). Given a dataset $D=\{(x_i,y_i)\}_{i=1}^n$, a DNN is often trained by solving the following optimization problem: \begin{equation} \min_\theta\frac{1}{n}\sum_{i=1}^n \mathcal{J}(f_\theta(x_i),y_i), \end{equation} where $\mathcal{J}$ is a loss function which calculates the loss by comparing the model output $f_\theta(x_i)$ with the ground-truth label $y_i$. The most commonly used loss function for multi-class classification tasks is the categorical cross-entropy. The DNN is then trained by computing the gradient of the loss w.r.t.~$\theta$ for each sample in $D$ and updating $\theta$ accordingly. \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{dnn.pdf} \caption{An example DNN to predict cat or dog.} \label{fig:dnn} \end{figure} \subsection{Deep Learning Testing} Most existing deep learning testing works are based on neuron coverage \cite{deepxplore} or its variants \cite{deepgauge}. Simply speaking, a neuron $ne$ is covered if there exists at least one test case $x$ for which $f(x,ne)$ is larger than a threshold, i.e., the neuron is activated. We omit the details of other variants and briefly introduce the following testing methods as representatives, with pointers to further details. \vspace{1mm} \noindent\textbf{DeepXplore} \cite{deepxplore} is the first testing work for DNNs. It proposed the first testing metric, i.e., neuron coverage, together with a differential testing framework to generate test cases that improve the neuron coverage. \vspace{1mm} \noindent\textbf{DeepHunter} \cite{deephunter} is a fuzzing framework which randomly selects seeds to fuzz, guided by the multi-granularity neuron coverage metrics defined in \cite{deepgauge}. \vspace{1mm} \noindent\textbf{ADAPT} \cite{adapt} is another recent work which adopts multiple adaptive strategies to generate test cases that improve the multi-granularity neuron coverage metrics defined in \cite{deepgauge}. \vspace{1mm} \noindent\textbf{Adversarial Attacks} Besides the above testing methods, traditional adversarial attacks like FGSM \cite{fgsm}, JSMA \cite{jsma}, C\&W \cite{cw} and PGD \cite{pgd} are also used to generate test cases in multiple works. \subsection{Problem definition} Unlike existing coverage-guided testing works, our goal is to design a robustness-oriented testing framework to improve the DL model robustness by testing.
Two key problems are to be answered: 1) how can we design testing metrics which are strongly correlated with model robustness? and 2) how can we automatically generate test cases favoring the proposed testing metrics? \section{Conclusion} \label{sec:con} In this work, we propose a novel robustness-oriented testing framework RobOT for deep learning systems towards improving model robustness against adversarial examples. The core of RobOT is a metric called FOL to quantify both the value of each test case in improving model robustness (often via retraining) and the convergence quality of the model robustness improvement. We also propose to utilize the proposed metric to automatically fuzz for more valuable test cases to improve model robustness. We implemented RobOT as a self-contained open-source toolkit. Our experiments on multiple benchmark datasets verify the effectiveness and efficiency of RobOT in improving DL model robustness, i.e., a 67.02\% increase in adversarial robustness, which is 50.65\% higher than that achieved by the state-of-the-art work DeepGini. \section{Experimental Evaluation} \label{sec:exp} We have implemented RobOT as a self-contained toolkit with about 4k lines of Python code. The source code and all the experiment details are available at \cite{code}. In the following, we evaluate RobOT through multiple experiments. \subsection{Experiment Settings} \paragraph{Datasets and Models} We adopt four widely used image classification benchmark datasets for the evaluation. We summarize the details of the datasets and models used in Tab.~\ref{tb:ds}. \paragraph{Test Case Generation} We adopt two kinds of adversarial attacks and three kinds of coverage-guided testing approaches to generate test cases for the following evaluation. We summarize all the configurations of the test case generation algorithms in Tab.~\ref{tb:tg}. \paragraph{Test Case Selection Baseline} We adopt the most recent work DeepGini \cite{deepgini} as the baseline test case selection strategy. DeepGini calculates a Gini index for each test case according to the output probability distribution of the model. A test case with a larger Gini index is considered more valuable for improving model robustness. \paragraph{Robustness Evaluation} We adopt Def.~\ref{def:empirical-robustness} to empirically evaluate a model's robustness. In practice, we compose a validation set of adversarial examples $D_v$ for each dataset by combining the adversarial examples generated using both FGSM and PGD (10000 each). The attack parameters are the same as in Tab.~\ref{tb:tg}. We then evaluate a model's robustness by calculating its accuracy on $D_v$.
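For concreteness, the empirical robustness evaluation of Def.~\ref{def:empirical-robustness} amounts to measuring accuracy on $D_v$. Below is a minimal sketch, assuming a Keras-style classifier with inputs normalized to $[0,1]$ and one-hot labels; FGSM is implemented directly from its standard definition, and the PGD half of $D_v$ would be constructed analogously.

\begin{verbatim}
# Minimal sketch: build FGSM adversarial examples and measure the
# empirical robustness (accuracy on the adversarial validation set).
import numpy as np
import tensorflow as tf

def fgsm(model, x, y, eps):
    """x' = clip(x + eps * sign(grad_x J)), the standard FGSM step."""
    loss_fn = tf.keras.losses.CategoricalCrossentropy()
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))      # model outputs probabilities
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

def empirical_robustness(model, x_adv, y):
    """Accuracy of the model on the adversarial validation set D_v."""
    pred = np.argmax(model.predict(x_adv), axis=1)
    return np.mean(pred == np.argmax(y, axis=1))
\end{verbatim}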
\begin{table}[] \centering \caption{Datasets and models.} \begin{tabular}{|l|llll|} \hline Dataset & Training & Testing & Model & Accuracy \\ \hline MNIST & 60000 & 10000 & LeNet-5 & 99.02\% \\ Fashion-MNIST & 60000 & 10000 & LeNet-5 & 90.70\% \\ SVHN & 73257 & 26032 & LeNet-5 & 88.84\% \\ CIFAR10 & 50000 & 10000 & ResNet-20 & 90.39\% \\ \hline \end{tabular} \label{tb:ds} \end{table} \begin{table}[] \centering \caption{Test case generation details.} \scalebox{0.74}{ \begin{tabular}{|l|l|llll|} \hline Testing Method & Parameter & MNIST & SVHN & Fashion-MNIST & CIFAR10 \\ \hline FGSM & Step size & 0.3 & 0.03 & 0.03 & 0.01 \\ PGD & Steps & 10 & 10 & 10 & 10 \\ & Step size & 0.3/6 & 0.03/6 & 0.3/6 & 0.01/6 \\ DeepXplore & Relu threshold & 0.5 & 0.5 & 0.5 & 0.5 \\ DLFuzz/ADAPT & Time per seed & 10 s & 10 s & 10 s & 20 s \\ & Relu threshold & 0.5 & 0.5 & 0.5 & 0.5 \\ \hline \end{tabular} } \label{tb:tg} \end{table} \subsection{Research Questions} \vspace{1mm} \noindent\textbf{RQ1: What is the correlation between our FOL metric and model robustness?} To answer this question, we first select three models with different robustness levels for each dataset. The first model (Model 1) is the original trained model. The second model (Model 2) is a robustness-enhanced model which is retrained\footnote{Retraining in this work takes 10 (40 for CIFAR10) additional epochs starting from the original model.} by augmenting the training data with 5\% of the generated test cases and is more robust than Model 1. The third model (Model 3) is a robustness-enhanced model which is retrained by augmenting 10\% of the generated test cases and is the most robust. Then, for each model, we conduct adversarial attacks to obtain the same number (10000 for FGSM and 10000 for PGD) of adversarial examples. \begin{figure*}[t] \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_fol_distribution.pdf} \label{fig:fol:mnist} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_fol_distribution.pdf} \label{fig:fol:fashion} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_fol_distribution.pdf} \label{fig:fol:svhn} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_fol_distribution.pdf} \label{fig:fol:cifar} \end{subfigure}% \caption{FOL distribution of adversarial examples for models with different robustness.} \label{fig:fol} \end{figure*} We show the FOL distribution of the adversarial examples for different models in Fig. \ref{fig:fol}. We observe that there is a strong correlation between the FOL distribution of adversarial examples and the model robustness. Specifically, \emph{the adversarial examples of a more robust model have smaller FOL values.} This is clearly evidenced by Fig. \ref{fig:fol}, i.e., for every dataset, the probability density is concentrated around zero for Model 3 (the most robust model), while it extends to increasingly large FOL values for Model 2 and Model 1 (with Model 1 larger than Model 2). The underlying reason is that a more robust model in general has a \emph{flatter} loss surface and thus smaller FOL values (since FOL is based on the loss gradient).
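The FOL values plotted in Fig.~\ref{fig:fol} can be sketched as the norm of the loss gradient at each input, consistent with the gradient-based description above; the exact definition of FOL is given in Section~\ref{sec:metrics}. A minimal TensorFlow sketch:

\begin{verbatim}
# Sketch: one FOL value per (adversarial) input, computed as the
# L2 norm of the loss gradient w.r.t. the input. Keras-style model
# and one-hot labels are assumptions of this sketch.
import tensorflow as tf

def fol_values(model, x, y):
    # SUM reduction keeps per-sample gradients exact
    loss_fn = tf.keras.losses.CategoricalCrossentropy(
        reduction=tf.keras.losses.Reduction.SUM)
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, x)            # d loss / d x
    flat = tf.reshape(grads, (x.shape[0], -1))
    return tf.norm(flat, axis=1).numpy()      # one FOL per input
\end{verbatim}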
\begin{figure}[t] \centering \includegraphics[width=.33\textwidth]{figs/fgsm-pgd_fol_distribution.pdf} \caption{FOL distribution of adversarial examples from FGSM and PGD for CIFAR10 model.} \label{fig:fol:com} \end{figure} In addition, we also observe that adversarial examples crafted by stronger attacks have smaller FOL values. Fig. \ref{fig:fol:com} shows the FOL distribution of adversarial examples from attacking the CIFAR10 model with FGSM and PGD respectively. We observe that adversarial examples from PGD have significantly smaller FOL values than those from FGSM. The reason is that stronger attacks like PGD generate adversarial examples that have better loss convergence quality and induce higher loss. We thus have the following answer to RQ1: \begin{framed} \noindent \emph{Answer to RQ1: FOL is strongly correlated with model robustness. A more robust model has smaller FOL values for its adversarial examples. }\end{framed} \vspace{1mm} \noindent\textbf{RQ2: How effective is our FOL metric for test case selection?} To answer the question, we first generate a large set of test cases using different methods, and then adopt different test case selection strategies (i.e., BE-ST, KM-ST and DeepGini) to select subsets of test cases of the same size for retraining the model. A selection strategy is considered more effective if the model retrained with the selected test cases is more robust. For a more fine-grained analysis, we distinguish two kinds of test case generation algorithms that are both used in the literature, i.e., adversarial attacks and neuron coverage-guided algorithms. For adversarial attacks, we adopt FGSM (weak) and PGD (strong) attacks to generate a combined set of test cases. For DeepXplore, DLFuzz and ADAPT, we generate a set of test cases for each of them. The parameters used are consistent with Tab. \ref{tb:tg}. For each set of test cases, we use the BE-ST, KM-ST and DeepGini strategies respectively to select $x$ (ranging from 1 to 10) percent of them, obtain a retrained model, and evaluate its robustness.
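Each selection strategy reduces to scoring candidate test cases and keeping a fixed budget. The sketch below shows a simplified top-FOL stand-in for this step; it is not the paper's exact BE-ST or KM-ST strategy (those are defined in Section~\ref{sec:meth} and differ in which FOL values they favour).

\begin{verbatim}
# Simplified stand-in for FOL-guided selection: rank candidates by
# FOL score and keep the top x percent as the retraining budget.
import numpy as np

def select_by_fol(x_cand, y_cand, fol, percent):
    budget = int(len(fol) * percent / 100.0)
    order = np.argsort(fol)[::-1]   # descending FOL
    keep = order[:budget]
    return x_cand[keep], y_cand[keep]
\end{verbatim}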
\begin{figure*}[t] \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_attack_robustness.pdf} \label{fig:ts:mnist-attack} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_attack_robustness.pdf} \label{fig:ts:fashion-attack} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_attack_robustness.pdf} \label{fig:ts:svhn-attack} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_attack_robustness.pdf} \label{fig:ts:cifar-attack} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_xplore_robustness.pdf} \label{fig:ts:mnist-xplore} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_xplore_robustness.pdf} \label{fig:ts:fashion-xplore} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_xplore_robustness.pdf} \label{fig:ts:svhn-xplore} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_xplore_robustness.pdf} \label{fig:ts:cifar-xplore} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_dlfuzz_robustness.pdf} \label{fig:ts:mnist-dlfuzz} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_dlfuzz_robustness.pdf} \label{fig:ts:fashion-dlfuzz} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_dlfuzz_robustness.pdf} \label{fig:ts:svhn-dlfuzz} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_dlfuzz_robustness.pdf} \label{fig:ts:cifar-dlfuzz} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_adapt_robustness.pdf} \label{fig:ts:mnist-adapt} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_adapt_robustness.pdf} \label{fig:ts:fashion-adapt} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_adapt_robustness.pdf} \label{fig:ts:svhn-adapt} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_adapt_robustness.pdf} \label{fig:ts:cifar-adapt} \end{subfigure}% \caption{Test case selection and robustness improvement with different strategies.} \label{fig:ts} \end{figure*} Fig. \ref{fig:ts} shows the results. We observe that for all the strategies, the retrained models obtain improved resilience to adversarial examples to some extent. Besides, the model's robustness steadily improves as we augment more test cases (from 1\% to 10\%) for retraining. However, we also observe that in almost all cases (with a single exception), our FOL-guided strategies (both BE-ST and KM-ST) perform significantly better than DeepGini, achieving 30.48\%, 84.62\%, 54.91\% and 35.92\% more robustness improvement on average for the four different sets of test cases. The reason is that the FOL-guided strategies select test cases with higher and more diverse loss than DeepGini (as shown previously in Fig.~\ref{fig:loss:st}), which correlates better with model robustness. Meanwhile, we observe that the retrained models maintain high accuracy on the test set as well (as summarized in Tab. \ref{tb:acc}).
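The retraining step used throughout this RQ is plain data augmentation followed by fine-tuning. A minimal Keras-style sketch follows; the epoch counts mirror the footnote above, while everything else is a placeholder.

\begin{verbatim}
# Sketch of the retraining step: augment the original training data
# with the selected test cases and fine-tune for a few more epochs
# (10 here; 40 for CIFAR10, as noted earlier).
import numpy as np

def retrain(model, x_train, y_train, x_sel, y_sel, epochs=10):
    x_aug = np.concatenate([x_train, x_sel])
    y_aug = np.concatenate([y_train, y_sel])
    model.fit(x_aug, y_aug, batch_size=128, epochs=epochs, shuffle=True)
    return model
\end{verbatim}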
Besides, we observe that different test case generation algorithms obtain different robustness improvements. Among DeepXplore, DLFuzz and ADAPT, ADAPT and DLFuzz have the highest (53.39\% on average) and lowest (31.18\% on average) robustness improvements respectively, while DeepXplore is in the middle (48.36\% on average). Adversarial attacks often achieve higher robustness improvement than all three neuron coverage-guided fuzzing algorithms for simpler datasets such as MNIST, Fashion-MNIST and SVHN. This casts doubt on the usefulness of the test cases generated by neuron coverage-guided fuzzing algorithms in improving model robustness and is consistent with \cite{limit,misleading,meaningless}. We further conduct experiments to evaluate how robust the models retrained with adversarial examples from the attacks (Fig. \ref{fig:ts}) are against adversarial examples generated in a different way, i.e., by the testing algorithms. We summarize the result in Tab. \ref{tb:trans}. We observe that the robustness drops noticeably (which is especially the case for CIFAR10), i.e., by 18.64\%, 26.41\% and 23.09\% on average for DeepXplore, DLFuzz and ADAPT respectively (compared to the results in Fig. \ref{fig:ts}). Nevertheless, our test case selection strategies still outperform DeepGini in all cases. This shows that adversarial examples from adversarial attacks alone are insufficient. It is necessary to improve the diversity of test cases for retraining from a perspective that is well correlated with model robustness. \begin{table}[] \centering \caption{Test accuracy of the models before and after retraining with 10 percent of the test cases generated using adversarial attacks.} \begin{tabular}{|l|ll|} \hline Dataset & Original & Retrained \\ \hline MNIST & 99.02\% & 98.95\% \\ Fashion-MNIST & 90.70\% & 90.63\% \\ SVHN & 88.84\% & 87.13\% \\ CIFAR10 & 90.39\% & 90.13\% \\ \hline \end{tabular} \label{tb:acc} \end{table} \begin{table*}[] \caption{Robustness performance of models (retrained using adversarial examples from attack algorithms) against test cases generated by DL testing tools.} \centering \scalebox{0.85}{ \begin{tabular}{|l|llll|llll|llll|} \hline & \multicolumn{4}{c|}{DeepXplore} & \multicolumn{4}{c|}{DLFuzz} & \multicolumn{4}{c|}{ADAPT} \\ \cline{2-13} Dataset & BE-ST & KM-ST & DeepGini & Average & BE-ST & KM-ST & DeepGini & Average & BE-ST & KM-ST & DeepGini & Average \\ \hline MNIST & 86.12\% & 80.56\% & 73.74\% & \textbf{80.14\%} & 76.39\% & 74.73\% & 65.59\% & \textbf{72.24\%} & 82.60\% & 75.68\% & 70.36\% & \textbf{76.21\%} \\ Fashion-MNIST & 51.57\% & 47.97\% & 34.14\% & \textbf{44.56\%} & 38.15\% & 35.44\% & 27.16\% & \textbf{33.58\%} & 50.55\% & 47.50\% & 31.92\% & \textbf{43.32\%} \\ SVHN & 37.10\% & 38.29\% & 27.26\% & \textbf{34.55\%} & 32.83\% & 34.83\% & 25.34\% & \textbf{31.00\%} & 25.71\% & 28.51\% & 19.15\% & \textbf{24.46\%} \\ CIFAR10 & 25.25\% & 20.16\% & 12.92\% & \textbf{19.44\%} & 18.28\% & 14.20\% & 9.31\% & \textbf{13.93\%} & 22.37\% & 18.48\% & 12.08\% & \textbf{17.64\%} \\ \hline Average & \textbf{50.01\%} & \textbf{46.75\%} & \textbf{37.01\%} & & \textbf{41.41\%} & \textbf{39.8\%} & \textbf{31.85\%} & & \textbf{45.31\%} & \textbf{42.54\%} & \textbf{33.36\%} & \\ \hline \end{tabular} } \label{tb:trans} \end{table*} \begin{table*}[] \centering \caption{Comparison of FOL-Fuzz and ADAPT.
$a/b$: $a$ is the result of FOL-Fuzz and $b$ is the result of ADAPT.} \begin{tabular}{|l|ll|ll|ll|} \hline & 5 min & & 10 min & & 20 min & \\ \cline{2-7} Dataset & \# Test case & Robustness$\uparrow$ & \# Test case & Robustness$\uparrow$ & \# Test case & Robustness$\uparrow$ \\ \hline MNIST & 1692/2125 & 33.62\%/18.73\% & 3472/4521 & 48.04\%/36.46\% & 7226/8943 & 68.02\%/54.38\% \\ Fashion-MNIST & 4294/5485 & 40.75\%/6.74\% & 8906/10433 & 53.88\%/14.94\% & 18527/21872 & 69.03\%/27.24\% \\ SVHN & 6236/8401 & 24.25\%/21.3\% & 12465/17429 & 30.42\%/27.52\% & 24864/33692 & 39.99\%/34.51\% \\ CIFAR10 & 1029/1911 & 18.62\%/17.03\% & 2006/3722 & 22.07\%/18.12\% & 4050/6947 & 27.36\%/20.54\% \\ \hline Average & \textbf{3313/4480} & \textbf{29.31\%/15.95\%} & \textbf{6712/9026} & \textbf{38.6\%/24.26\%} & \textbf{13667/17864} & \textbf{51.1\%/34.17\%} \\ \hline \end{tabular} \label{tb:fuzz} \end{table*} We thus have the following answer to RQ2: \begin{framed} \noindent \emph{Answer to RQ2: FOL guided test case selection is able to select more valuable test cases to improve the model robustness by retraining. }\end{framed} \vspace{1mm} \noindent\textbf{RQ3: How effective and efficient is our FOL guided fuzzing algorithm?} To answer the question, we compare our FOL-guided fuzzing algorithm (FOL-Fuzz) with the state-of-the-art neuron coverage-guided fuzzing algorithm ADAPT as follows. We run FOL-Fuzz and ADAPT for the same period of time (i.e., 5 minutes, 10 minutes and 20 minutes) to generate test cases. Then we retrain the model with the test cases to compare their robustness improvement. The hyper-parameters for FOL-Fuzz are set as follows: $\xi=10^{-18},\ k=5,\ \lambda=1,\ iters=3,\ learning\_rate=0.1$. The parameters for ADAPT are consistent with Tab. \ref{tb:tg}. Tab. \ref{tb:fuzz} shows the results. We observe that within the same time limit, ADAPT generates somewhat more adversarial examples, i.e., 10457 on average compared to 7897 for FOL-Fuzz. A closer look reveals that ADAPT tends to generate a lot of test cases around a seed in order to improve the neuron coverage metrics. However, not all of these tests are useful for improving model robustness. On the other hand, FOL-Fuzz is able to discover more valuable test cases. We observe that retraining with the test cases from FOL-Fuzz (although fewer than from ADAPT) improves the model's robustness significantly more than ADAPT, i.e., by 39.67\% compared to 24.79\% on average. We thus have the following answer to RQ3: \begin{framed} \noindent \emph{Answer to RQ3: FOL-Fuzz is able to efficiently generate more valuable test cases to improve the model robustness. }\end{framed} \subsection{Threats to Validity} First, our experiments are based on a limited set of test subjects in terms of datasets, types of adversarial attacks and neuron coverage-guided test case generation algorithms. Although we included a strong adversarial attack (PGD) and the state-of-the-art coverage-guided generation algorithm ADAPT, it might be interesting to investigate other attacks like C\&W \cite{cw} and JSMA \cite{jsma}, and fuzzing algorithms like DeepHunter \cite{deephunter}. Second, we adopt an empirical approach to evaluate the model robustness, whose result may differ depending on the kinds of attacks used. So far, how to efficiently measure the robustness of DL models remains an open problem. We do not use a more rigorous robustness metric like CLEVER \cite{clever} because it is input-specific and costly to calculate (e.g., hours for one input).
Third, our testing framework requires a robustness requirement as input, which could be application-specific and model-dependent. In practice, users could adjust the requirement dynamically. \section{Introduction} Deep learning (DL) \cite{dl} has been the core driving force behind the unprecedented breakthroughs in solving many challenging real-world problems such as object recognition \cite{cv} and natural language processing \cite{nlp}. Despite the success, deep learning systems are known to be vulnerable to adversarial examples (or attacks), which are slightly perturbed inputs that are imperceptibly different from normal inputs to human observers but can easily fool state-of-the-art DL systems into making incorrect decisions \cite{fgsm,cw,ma2018characterizing,jiang2019black,wu2020skip}. This not only compromises the reliability and robustness of DL systems, but also raises security concerns about their deployment in safety-critical applications such as face recognition \cite{face}, malware detection \cite{md}, medical diagnosis \cite{finlayson2019adversarial,ma2020understanding} and autonomous driving \cite{av,duan2020adversarial}. Notable efforts have been made in the software engineering community to mitigate the threats of adversarial examples and to improve the robustness of DL systems in the presence of adversarial examples \cite{deeppoly,deepxplore,wang2019adversarial}. Among them, formal verification aims to prove that no adversarial examples exist in the neighborhood of a given input. Substantial progress has been made using approaches like abstract interpretation \cite{deeppoly,yang2020improving} and reachability analysis \cite{star}. However, formal verification techniques are in general expensive and only scale to limited model structures and properties (e.g., local robustness \cite{huang2017safety}). Another popular line of work is deep learning testing, which aims to generate test cases that can expose the vulnerabilities of DL models. The test cases can then be used to improve the model robustness by retraining the model; \emph{however, this should not be taken for granted}, as recent studies have shown that test cases generated based on existing testing metrics have limited correlation with model robustness and with the robustness improvement after retraining~\cite{limit,meaningless}. In this work, we highlight and tackle the problem of effectively generating test cases for improving the adversarial robustness of DL models. \begin{figure*}[t] \centering \includegraphics[width=0.65\textwidth]{framework.pdf} \caption{Overview of RobOT testing framework.} \label{fig:frame} \end{figure*} There are two key elements when it comes to testing DL systems. The first element is the testing metric used to evaluate the quality of a test case or a test suite. Multiple testing metrics, including neuron coverage~\cite{deepxplore}, multi-granularity neuron coverage~\cite{deepgauge} and surprise adequacy~\cite{surprise}, have been proposed. The common idea is to explore as much as possible the diversity of a certain subspace defined at different abstraction levels, e.g., neuron activation~\cite{deepxplore}, neuron activation pattern~\cite{deepgauge}, neuron activation conditions~\cite{concolic}, and neuron activation vector~\cite{surprise}. The second key element is the method adopted for test case generation, which is often done by manipulating a given seed input with the guidance of the testing metric.
Existing test case generation techniques such as DeepXplore~\cite{deepxplore}, DeepConcolic~\cite{concolic}, DeepHunter~\cite{deephunter} and ADAPT~\cite{adapt} are mostly designed to improve the neuron coverage metrics of the test cases. While existing testing approaches are helpful in exposing vulnerabilities of DL systems to some extent, recent studies have found that neuron coverage metrics are not useful for improving model robustness~\cite{limit,misleading,meaningless}. As a consequence, unlike in the case of traditional program testing (where the program is surely improved after fixing bugs revealed through testing), one may not improve the robustness of the DL system after testing. In this work, we address the above-mentioned limitations of existing DL testing approaches by proposing a novel DL testing framework called RobOT (i.e., \textit{Rob}ustness-\textit{O}riented \textit{T}esting), which integrates DL (re)training with testing. As illustrated in Fig.~\ref{fig:frame}, RobOT distinguishes itself from existing neuron coverage-guided testing works in the following important aspects. First, RobOT is robustness-oriented. RobOT takes a user-defined requirement on the model robustness as input and integrates the retraining process into the testing pipeline. RobOT iteratively improves the model robustness by generating test cases based on a testing metric and retraining the model. Second, in RobOT, we propose a novel set of lightweight metrics that are strongly correlated with model robustness. The metrics can quantitatively measure the relevance of each test case for model retraining, and are designed to favor test cases that can significantly improve model robustness, which is in contrast to existing coverage metrics that have little correlation with model robustness. Furthermore, the proposed metrics can in turn provide strong evidence of the model robustness after testing. The output of RobOT is an enhanced model that satisfies the robustness requirement. In a nutshell, we make the following contributions. \begin{itemize} \item We propose a robustness-oriented testing (RobOT) framework for DL systems. RobOT provides an end-to-end solution for improving the robustness of DL systems against adversarial examples. \item We propose a new set of lightweight testing metrics that quantify the importance of each test case with respect to the model's robustness, which are shown to be stronger indicators of the model's robustness than existing metrics. \item We implement in RobOT a set of fuzzing strategies guided by the proposed metrics to automatically generate high-quality test cases for improving the model robustness. \end{itemize} RobOT is publicly available as an open-source self-contained toolkit \cite{code}. Experiments on four benchmark datasets confirm the effectiveness of RobOT in improving model robustness. Specifically, RobOT achieves 50.65\% more robustness improvement on average compared to the state-of-the-art work DeepGini~\cite{deepgini}. \section{The RobOT Framework} \label{sec:meth} In this section, we present RobOT, a novel robustness-oriented framework for testing and re-training DL systems. The overall framework of RobOT is shown in Figure~\ref{fig:frame}. We assume that \emph{a requirement on the model robustness} (Section \ref{sec:robustness}) is provided a priori for quality assurance purposes.
Note that the requirement is likely application-specific, i.e., different applications may have different requirements on the level of robustness. RobOT integrates the DL (re)training into the testing framework. It starts from the initial training dataset $D_0$, and trains an initial DNN $f_0$ in the standard way. Then, it applies a fuzzing algorithm (see Section~\ref{sec:test-gen}) which is guided by our proposed testing metrics (see Section~\ref{sec:metrics}) to generate a new set of test cases $D_{t}$ for retraining the model $f_0$ to improve its adversarial robustness. The retraining step distinguishes RobOT from existing DL testing works and it places a specific requirement on how the test cases in $D_{t}$ are generated and selected, i.e., the test cases must be helpful in improving $f_0$'s robustness after retraining. We discuss how the test cases are generated in the rest of this section. At each iteration, RobOT generates a test suite $D_{t}$ and retrains the model to obtain $f_{n}$. Afterwards, it checks whether the robustness of the new model $f_n$ is satisfactory using an independent adversarial validation dataset $D_v$, subject to an acceptable degradation of the model's accuracy on normal/non-adversarial data. If the answer is yes, it terminates and outputs the final model $f_{n}$; otherwise, RobOT continues until the model robustness is satisfactory or a predefined testing budget is reached. In the following, we illustrate each component of RobOT in detail. \subsection{DL Robustness: A Formal Definition} \label{sec:robustness} Although many DL testing works in the literature claim a \emph{potential} improvement in the DL model robustness by retraining with the generated test suite, such a conjecture is often not rigorously examined. This is partially due to the ambiguous definition of robustness. For instance, the evaluations of \cite{deepxplore,deeptest,concolic,deephunter} are based on accuracy, in particular empirical accuracy on the validation set~\cite{testing-survey}, rather than robustness. In RobOT, we focus on improving the model \emph{robustness} (without sacrificing accuracy significantly), and we begin by defining robustness. \begin{definition} \textbf{\textit{Global Robustness (GR)}} Given an input region $R$, a DL model $f:R\to Y$ is $(\sigma,\epsilon)$-globally-robust iff $\forall x_1,x_2\in R, ||x_1-x_2||_p\le\sigma \Rightarrow\ ||f(x_1)-f(x_2)||\le\epsilon$. $\hfill \square$ \end{definition} Global robustness is theoretically sound, yet extremely challenging to test or verify~\cite{katz2017towards}. To mitigate the complexity, multiple attempts have been made to constrain robustness to a local input space, such as Local Robustness~\cite{huang2017safety}, CLEVER~\cite{clever} and the Lipschitz Constant~\cite{xu2012robustness}. These local versions of robustness are, however, not ideal either: they have been shown to have their own limitations~\cite{lc_lim,katz2017towards}. For instance, CLEVER relies on extreme value theory, making it extremely costly to calculate. In RobOT, we adopt a practical empirical definition of robustness, which has been commonly used for model robustness evaluation in the machine learning literature \cite{pgd,pgd-con,zhang2019theoretically,wang2019improving,carlini2019evaluating}.
\begin{definition} \label{def:empirical-robustness}\textbf{\textit{Empirical Robustness (ER)}} Given a DL model $f:X\to Y$ and a validation dataset $D_v$, we define its empirical robustness $\mu:(f,D_v,ATT)\to [0,1]$ as $\gamma$, where $ATT$ denotes a given type of adversarial attack and $\gamma$ is the accuracy of $f$ on the adversarial examples obtained by conducting $ATT$ on $\langle D_v,f\rangle$. $\hfill \square$ \end{definition} Intuitively, Def.~\ref{def:empirical-robustness} evaluates a model's robustness using its accuracy on the adversarial examples crafted from a validation set $D_v$. Such an empirical view of DL robustness is testing-friendly and it allows RobOT to efficiently compare the robustness of the models before and after testing and retraining. Definition~\ref{def:empirical-robustness} is also practical, as it connects the DL robustness with many existing adversarial attacks (such as~\cite{fgsm,pgd,cw}) as a part of the definition. In particular, for the evaluation of RobOT in Section \ref{sec:exp}, we use two popular attacks, i.e., FGSM \cite{fgsm} and PGD (Projected Gradient Descent) \cite{pgd}, as $ATT$. \subsection{RobOT DL Testing: A General View} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{testing} \caption{Comparison between traditional and deep learning system quality assurance by testing.} \label{fig:compar} \end{figure} We first compare and highlight the difference between testing traditional software and deep learning systems in Fig.~\ref{fig:compar}. While many testing methods (like random testing~\cite{randoop}, symbolic execution~\cite{klee}, concolic testing~\cite{wang2018towards} and fuzzing \cite{fuzz}) can be applied to identify vulnerabilities or bugs in both traditional software and DL systems, the workflow differs for the two after testing is done, i.e., the quality of traditional software is enhanced by patching the found bugs, whereas deep learning systems are improved via retraining. Arguably, the ultimate goal of testing is to improve the system's quality. Such improvement is guaranteed by patching bugs identified through testing in traditional software (assuming regression bugs are not frequent), i.e., the usefulness of a bug-revealing test for traditional software requires no justification. This is not obvious for DL systems, i.e., the usefulness of a test case can only be judged by taking the retraining step into account. Nevertheless, the retraining phase is largely overlooked so far in the deep learning testing literature. Based on the Empirical Robustness definition in Def.~\ref{def:empirical-robustness}, in Alg.~\ref{alg:main}, we present the high-level algorithmic design of RobOT for the workflow of DL testing in Figure \ref{fig:compar}. The initial trained model $f_0$ is given as an input to the algorithm and the testing and retraining iterations in RobOT are conducted within the main loop (Lines 2-6). The loop continues until the user-provided empirical robustness requirement is satisfied (Line 2). RobOT aims to bridge the gap between DL testing and retraining. Let $T$ (Line 3) denote a fuzzing algorithm to generate test cases (guided by certain metrics). The objective of robustness-oriented testing is to improve the model robustness by testing. Formally, given a deep learning model $f$, the goal of RobOT at each iteration is to improve the following: \begin{equation} ER\left(\argmin_\theta\frac{1}{n}\sum_{i=1}^n\mathcal{J}(f(\theta,x_i),y_i)\right),\quad (x_i,y_i)\in D\cup T(f,D). \end{equation} Intuitively, the testing metric should be designed in such a way that after retraining with the generated test cases, the model robustness is improved. This objective directly links the testing metric to the model robustness.
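To make this loop concrete, the following Python sketch illustrates how ER can be computed per Def.~\ref{def:empirical-robustness}. It is a minimal sketch rather than the actual implementation of our toolkit; the \texttt{attack} callable (e.g., an FGSM or PGD routine) and the Keras-style \texttt{model.predict} interface are illustrative assumptions.
\begin{verbatim}
import numpy as np

def empirical_robustness(model, x_val, y_val, attack):
    """Empirical Robustness: accuracy of `model` on adversarial
    examples crafted from the validation set by `attack`."""
    x_adv = attack(model, x_val, y_val)      # e.g., FGSM or PGD
    preds = np.argmax(model.predict(x_adv), axis=1)
    return float(np.mean(preds == y_val))    # a value in [0, 1]
\end{verbatim}
Checking this value against the user-provided requirement $r$ realizes the loop condition in Line 2 of Alg.~\ref{alg:main}.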
In the remainder of this section, we realize the method in Line 3 by answering the question: \emph{how should we design testing metrics that are strongly correlated with the model robustness, and how can we generate test cases guided by the proposed metrics?} \begin{algorithm}[t] \caption{RobOT($f_0,D,D_v,r,t$)} \label{alg:main} \begin{algorithmic}[1] \State $f=f_0$ \While{$ER(f,D_v,t)<r$} \State $D_t \leftarrow T(f, D)$ \State $D \leftarrow D \cup D_t$ \State Update $f$ by retraining the model with $D$ \EndWhile \State \Return $f$ \end{algorithmic} \end{algorithm} \subsection{Robustness-Oriented Testing Metrics} \label{sec:metrics} Our goal is to design testing metrics which are strongly correlated with model robustness. We note that there have been some efforts in the machine learning community to modify the standard training procedure in order to obtain a more robust model. For instance, the most effective and successful approach so far is robust training, which incorporates an adversary in the training process so that the trained model is robust by minimizing the loss of adversarial examples in the first place \cite{pgd}: \begin{equation} \min_\theta\frac{1}{n}\sum_{i=1}^n\max_{||x'_i-x_i||_p\le\epsilon}\mathcal{J}(f(\theta,x'_i),y_i). \end{equation} At the heart of robust training is to identify a \emph{strong} (ideally worst-case) adversarial example $x'$ around\footnote{An $\epsilon$-ball defined according to a certain $L_p$ norm.} a normal example $x$ and train the model so that the loss on the strong adversarial example is minimized. Robust training has shown encouraging results in training more robust models~\cite{pgd,pgd-con}. This inspires us to consider deep learning testing analogously in terms of how we generate test cases (around a normal example) and retrain the model with the test cases to improve the model robustness. The key implication is that when we design robustness-oriented testing metrics to guide testing, we should evaluate the usefulness of a test case from a loss-oriented perspective. Let $x_0$ be the seed for testing. We assume that a test case $x^t$ is generated in the $\epsilon$-ball neighborhood of $x_0$, i.e., $\{x\mid ||x-x_0||_p\le\epsilon\}$, using either a testing method or an adversarial attack. The main intuition is that a test case which induces a higher loss is a stronger adversarial example, which is consequently more helpful in training robust models~\cite{pgd}. Based on this intuition, we propose two levels of testing metrics on top of the loss as follows. \paragraph{Zero-Order Loss (ZOL)} The first metric directly calculates the loss of a test case with respect to the DL model. Formally, given a test case $x^t$ (generated from seed $x$) and a DL model $f$, the loss of $x^t$ on $f$ is defined as: \begin{equation} ZOL(x^t,f)=\mathcal{J}(f(\theta,x^t),y), \end{equation} where $y$ is the ground-truth label of $x$. For test cases generated from the same seed, we prefer test cases with higher loss, which are more helpful in improving the model robustness via retraining.
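As a minimal illustration (assuming a Keras-style classifier with softmax outputs and a categorical cross-entropy loss; the helper below is not part of RobOT's actual API), ZOL amounts to the cross-entropy loss of a test case under its seed's ground-truth label:
\begin{verbatim}
import numpy as np

def zol(model, x_t, y_seed):
    """Zero-Order Loss: cross-entropy loss of test case x_t
    w.r.t. the ground-truth label y_seed of its seed."""
    probs = model.predict(x_t[None, ...])[0]  # softmax probabilities
    return -np.log(probs[y_seed] + 1e-12)     # categorical cross-entropy
\end{verbatim}
Ranking test cases generated from the same seed by this value directly implements the above preference for higher-loss test cases.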
\paragraph{First-Order Loss (FOL)} The loss of generated test cases can be quite different for different seeds. In general, it is easier to generate test cases with high loss around seeds on which the model does not generalize well. Thus, ZOL is unable to measure the value of test cases in a unified way. To address this problem, we propose a more fine-grained metric which measures to what degree a test case has achieved the highest loss in the seed's neighborhood. The intuition is that, given a seed input, the loss around it often first increases and eventually converges if we follow the gradient direction to modify the seed~\cite{pgd}. Thus, a criterion that measures how well the loss has converged can serve as the testing metric. A test case with better convergence quality corresponds to a higher loss than its neighbors. Next, we introduce the First-Order Stationary Condition (FOSC) to measure the loss convergence quality of the generated test cases. Formally, given a seed input $x_0$, its neighborhood area $\mathcal{X}=\{x\mid ||x-x_0||_p\le\epsilon\}$, and a test case $x^t$, the FOSC value of $x^t$ is calculated as: \begin{equation} \label{eq:fosc} c(x^t)=\max_{x\in \mathcal{X}}\langle x-x^t,\nabla_x f(\theta,x^t)\rangle. \end{equation} In~\cite{pgd-con}, it is proved that the above problem has the following closed-form solution if we take the $\infty$-norm for $\mathcal{X}$: \begin{equation} \label{eq:fosc:sol1} c(x^t)=\epsilon||\nabla_x f(\theta,x^t)||_1-\langle x^t-x_0,\nabla_x f(\theta,x^t)\rangle. \end{equation} However, many existing DL testing works generate test cases in an $L_2$ norm neighborhood, for which the above $L_\infty$ closed-form solution does not apply. We thus solve the formulation in Eq.~\ref{eq:fosc} with the $L_2$ norm and obtain the following solution: \begin{equation} \label{eq:fosc:sol2} c(x^t)=\epsilon||\nabla_x f(\theta,x^t)||_2. \end{equation} \begin{proof} By the Cauchy--Schwarz inequality, \begin{align*} |\langle x-x^t,\nabla_x f(\theta,x^t)\rangle|^2&\le \langle x-x^t,x-x^t\rangle \cdot \langle \nabla_x f(\theta,x^t),\nabla_x f(\theta,x^t)\rangle\\ &\le\epsilon^2\cdot ||\nabla_x f(\theta,x^t)||_2^2. \end{align*} Since there exists $x\in\mathcal{X}$ such that $x-x^t$ and $\nabla_x f(\theta,x^t)$ are in the same direction, we have \begin{align*} \max_{x\in\mathcal{X}}|\langle x-x^t,\nabla_x f(\theta,x^t)\rangle|=\epsilon\cdot ||\nabla_x f(\theta,x^t)||_2. \end{align*} \end{proof} Note that FOSC (in both Eq.~\ref{eq:fosc:sol1} and Eq.~\ref{eq:fosc:sol2}) is cheap to calculate: its main cost is a one-time gradient computation, which is easy to obtain in all DL frameworks. The FOSC value represents the first-order loss (FOL) of a given test case. The loss of a test case converges and achieves the highest value if its FOSC value equals zero. Thus, a smaller FOSC value means a better convergence quality and a higher loss.
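Eq.~\ref{eq:fosc:sol2} reduces FOL to a single input-gradient norm. The sketch below computes it with TensorFlow~2 and a sparse categorical cross-entropy loss; both are assumptions for illustration, not a prescription of our implementation.
\begin{verbatim}
import tensorflow as tf

def fol_l2(model, x_t, y_seed, eps):
    """First-Order Loss under an L2 ball:
    FOL(x_t) = eps * ||grad_x J(f(x_t), y_seed)||_2."""
    x = tf.convert_to_tensor(x_t[None, ...], dtype=tf.float32)
    y = tf.constant([y_seed])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)       # the one-time gradient computation
    return eps * float(tf.norm(grad))   # L2 norm of the flattened gradient
\end{verbatim}
Since the cost is one backward pass per test case, FOL can be computed cheaply for an entire test suite.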
\paragraph{Comparison with Neuron Coverage Metrics} Compared to neuron coverage metrics, our proposed loss-based metrics have the following main differences. First, both ZOL and FOL are strongly correlated with the adversarial strength of the generated test cases and the model robustness. Thus, our metrics can serve as strong indicators of the model's robustness after retraining. Second, our metrics are able to measure the value of each test case for retraining, which helps us select valuable test cases from a large pool and thus reduce the retraining cost. \subsection{FOL Guided Test Case Selection} In the following, we show the usefulness of the proposed metric through an important application, i.e., selecting test cases from a massive number of candidates. Note that, due to the limitation of ZOL described above, we use the FOL metric by default hereafter. Test case selection is crucial for improving the model robustness with a limited retraining budget. The key to test case selection is to quantitatively measure the value of each test case, which so far remains an open challenge. Prior work like DeepGini calculates a Gini index of a test case from the model's output probability distribution~\cite{deepgini}. DeepGini's intuition is to favor test cases with the most uncertainty (e.g., a flatter output distribution) under the current model's prediction. Compared to DeepGini, FOL contains more fine-grained information at the loss level and is strongly correlated with model robustness. Given a set of test cases $D^t$, we introduce two strategies based on FOL to select a smaller set $D^s\subset D^t$ for retraining the model as follows. Let $D^t=[x_1,x_2,\cdots,x_m]$ be a list ranked in descending order by FOL value, i.e., $FOL(x_i)\ge FOL(x_{i+1})$ for $i\in[1,m-1]$. \begin{algorithm}[t] \caption{KM-ST($D^t,k,n$)} \label{alg:kmst} \begin{algorithmic}[1] \State $D^s=\emptyset$ \State Let $max$ and $min$ be the maximum and minimum FOL values respectively \State Equally divide the range $[min,max]$ into $k$ sections $KR=[R_1,R_2,\cdots,R_k]$ \For{each FOL range $r\in KR$} \State Randomly select $n/k$ samples $D^r$ from $D^t$ whose FOL values are in $r$ \State $D^s=D^s\cup D^r$ \EndFor \State \Return $D^s$ \end{algorithmic} \end{algorithm} \paragraph{K-Multisection Strategy (KM-ST)} The idea of KM-ST is to uniformly sample the FOL space of $D^t$. Algo.~\ref{alg:kmst} shows the details. Assume we need to select $n$ test cases from $D^t$. We equally divide the range of FOL into $k$ sections (line 3). Then, for each range, we randomly select the same number of test cases (line 5). \paragraph{Bi-End Strategy (BE-ST)} The idea of BE-ST is to form $D^s$ by equally combining test cases with small and large FOL values. This strategy mixes test cases of strong and weak adversarial strength, which is inspired by a recent work on improving standard robust training~\cite{khoury2019adversarial}. Given the ranked $D^t$, we simply take an equal number of test cases from the two ends of the list to compose $D^s$. \begin{figure*}[t] \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_loss_st.pdf} \label{fig:st:mnist} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_loss_st.pdf} \label{fig:st:fashion} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_loss_st.pdf} \label{fig:st:svhn} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_loss_st.pdf} \label{fig:st:cifar} \end{subfigure} \caption{Loss of selected test cases for different datasets using different strategies.} \label{fig:loss:st} \end{figure*} Figure~\ref{fig:loss:st} shows the loss map of the test cases selected by the different strategies. We observe that BE-ST prefers test cases of higher loss, KM-ST uniformly samples the loss space, while DeepGini often prefers test cases with lower loss.
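The two strategies can be sketched as follows over a pool of test cases with precomputed FOL values (NumPy-based and illustrative; the function names are ours, not the toolkit's API):
\begin{verbatim}
import numpy as np

def km_st(fols, n, k):
    """K-Multisection: draw n/k test case indices from each of
    k equal-width FOL ranges to cover the FOL space uniformly."""
    edges = np.linspace(fols.min(), fols.max(), k + 1)
    picked = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((fols >= lo) & (fols <= hi))[0]
        take = min(n // k, len(idx))   # a range may hold fewer cases
        picked.extend(np.random.choice(idx, take, replace=False))
    return np.array(picked)

def be_st(fols, n):
    """Bi-End: take indices from both ends of the FOL ranking,
    mixing weak and strong adversarial strength."""
    order = np.argsort(fols)           # ascending FOL
    return np.concatenate([order[:n // 2], order[-(n - n // 2):]])
\end{verbatim}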
\subsection{FOL Guided Fuzzing} \label{sec:test-gen} Next, we introduce a simple yet efficient fuzzing strategy to generate test cases based on FOL. Since we have no prior knowledge of the FOL distribution, we are not able to design a fuzzing strategy for KM-ST. Instead, we design a fuzzing algorithm for the BE-ST strategy. The idea is to greedily search for test cases in two directions, i.e., with either small or large FOL values. Algo.~\ref{alg:fuzz} presents the details. The inputs include the model $f$, the list of seeds to fuzz $seeds\_list$, the fuzzing region $\epsilon$, the threshold on small FOL values $\xi$, the number of labels to optimize $k$, a hyper-parameter $\lambda$ on how much we favor FOL during fuzzing, and lastly the maximum number of iterations to fuzz a seed $iters$. For each seed in the list, we maintain a list of seeds $s\_list$ (line 3). After obtaining a seed $x$ from $s\_list$ (line 5), we iteratively add perturbations to it (lines 8--28) in a way guided by FOL. We set the following objective for optimization (line 9): \begin{equation} \label{eq:obj} obj=\sum_{i=2}^k P(c_i)-P(c_1)+\lambda \cdot FOL(x'), \end{equation} where $c_i$ is the label with the $i^{\text{th}}$ largest softmax probability of $f$ ($c_1$ with the maximum), $P(c)$ is the softmax output of label $c$, and $k$ is a hyper-parameter. The idea is to guide the perturbation towards changing the original label (i.e., generating an adversarial example) whilst increasing the FOL value. We then obtain the gradient of the objective (line 10) and calculate the perturbation from the gradient by multiplying a learning rate and a randomized coefficient (between 0.5 and 1.5) to avoid duplicate perturbations (line 11). We run two kinds of checks to achieve the BE-ST strategy, at lines 15 and 22 respectively. If the FOL value of the perturbed sample $x'$ is either increasing (line 15) or smaller than a threshold (line 22), we add $x'$ to the seed list (lines 17 and 23). Furthermore, we add $x'$ to the fuzzing result if it satisfies the check and has a different label from the original seed $x$ (lines 19 and 25). Note that compared to neuron coverage guided fuzzing algorithms, which need to profile and update neuron coverage information~\cite{adapt,deephunter}, our FOL guided fuzzing algorithm is much more lightweight: its main cost is one gradient calculation per step.
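As a complementary sketch of one perturbation step (Algorithm~\ref{alg:fuzz} gives the full pseudocode), the objective of Eq.~\ref{eq:obj} can be ascended as below in TensorFlow~2. The names are illustrative, and the $\lambda\cdot FOL(x')$ term is elided for brevity, as noted in the comments.
\begin{verbatim}
import numpy as np
import tensorflow as tf

def fuzz_step(model, x, k=5, lr=0.1):
    """One FOL-Fuzz step: ascend the gradient of
    obj = sum_{i=2..k} P(c_i) - P(c_1), scaled by a random
    coefficient in [0.5, 1.5] to avoid duplicate perturbations."""
    xv = tf.Variable(x[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        probs = model(xv)[0]
        top = tf.math.top_k(probs, k=k)   # softmax values, descending
        obj = tf.reduce_sum(top.values[1:]) - top.values[0]
        # the lambda * FOL(x') term of the objective is elided here
    grad = tape.gradient(obj, xv)
    coeff = np.random.uniform(0.5, 1.5)   # randomized coefficient
    return (xv + lr * coeff * grad).numpy()[0]
\end{verbatim}
The FOL and distance checks at lines 15 and 22 of the algorithm then decide whether the perturbed input is kept as a new seed and/or reported as a test case.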
\begin{algorithm}[t] \caption{FOL-Fuzz($f,seeds\_list,\epsilon,\xi,k,\lambda,iters$)} \label{alg:fuzz} \begin{algorithmic}[1] \State Let $fuzz\_result=\emptyset$ \For{$seed\in seeds\_list$} \State Maintain a list $s\_list=[seed]$ \While{$s\_list$ is not empty} \State Obtain a seed $x=s\_list.pop()$ \State Obtain the label of the seed $c_1=f(x)$ \State Let $x'=x$ and $FOL_m=0$ \For{$iter=0$ to $iters$} \State Set optimization objective $obj$ using Eq.~\ref{eq:obj} \State Obtain $grads = \nabla_{x'}\, obj$ \State Obtain $perb = processing(grads)$ \State Let $x' = x' + perb$ \State Let $c'= f(x')$ \State Let $dis = Dist(x', x)$ \If{$FOL(x')\ge FOL_m$ and $dis\le\epsilon$} \State $FOL_m=FOL(x')$ \State $s\_list.append(x')$ \If{$c'\neq c_1$} \State $fuzz\_result.append(x')$ \EndIf \EndIf \If{$FOL(x')<\xi$ and $dis\le\epsilon$} \State $s\_list.append(x')$ \If{$c'\neq c_1$} \State $fuzz\_result.append(x')$ \EndIf \EndIf \EndFor \EndWhile \EndFor \State \Return $fuzz\_result$ \end{algorithmic} \end{algorithm} \section{Related works} \label{sec:rel} This work is mainly related to the following lines of work on building more robust deep learning systems. \paragraph{Deep Learning Testing} Extensive DL testing work has focused on designing testing metrics to expose the vulnerabilities of DL systems, including neuron coverage \cite{deepxplore}, multi-granularity neuron coverage \cite{deepgauge}, neuron activation conditions \cite{concolic} and surprise adequacy~\cite{surprise}. Along with the testing metrics, many test case generation algorithms have been proposed, including gradient-guided perturbation~\cite{deepxplore,adf}, black-box generation~\cite{feature_guided_test} and metric-guided fuzzing~\cite{deephunter,adapt,dlfuzz}. However, these testing works lack rigorous evaluation of their usefulness in improving the model robustness (although most of them claim so) and have been shown to be ineffective in multiple recent works \cite{limit,misleading,meaningless}. Multiple metrics have also been proposed in the machine learning community to quantify the robustness of DL models \cite{xu2012robustness,weng2018towards,brendel2019accurate,clever}. However, most of them evaluate local robustness and are hard to calculate, which makes them unsuitable as direct testing guidance. Our work bridges the gap by proposing the FOL metric, which is strongly correlated with model robustness, and by integrating retraining into the testing pipeline for better quality assurance. \paragraph{Adversarial Training} The key idea of adversarial training is to improve the robustness of DL models by considering adversarial examples in the training phase. There are plenty of works on conducting adversarial attacks on DL models (which we cannot cover exhaustively) to generate adversarial examples, such as FGSM~\cite{fgsm}, PGD \cite{pgd} and C\&W~\cite{cw}. Adversarial training in general may overfit to the specific kinds of attacks which generate the adversarial examples for training~\cite{pgd} and thus cannot guarantee robustness against new kinds of attacks. Later, robust training \cite{pgd} was proposed to train robust models by solving a saddle point problem described in Sec.~\ref{sec:meth}. DL testing complements these works by generating more diverse adversarial examples. \section*{Acknowledgments} This work was supported by the National Key R\&D Program of China (Grant No. 2020YFB2010900). This work was also supported by the NSFC Program (Grant No.
62061130220, 61833015 and 62088101), the Guangdong Science and Technology Department (Grant No. 2018B010107004) and the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No.: AISG-RP-2019-012). \bibliographystyle{plain} \section{Background} \label{sec:back} \subsection{Deep Neural Networks} In this work, we focus on deep learning models, e.g., deep neural networks (DNNs) for classification. We introduce a conceptual deep neural network (DNN) as an example in Fig.~\ref{fig:dnn} for simplicity, and remark that our approach is applicable to the state-of-the-art DNNs used in our experiments, like ResNet \cite{resnet}, VGG \cite{vgg}, etc. \paragraph*{DNN} \label{subsec:DNN} A DNN classifier is a function $f:X\to Y$, which maps an input $x\in X$ (often preprocessed into a vector) to a label $y\in Y$. As shown in Fig.~\ref{fig:dnn}, a DNN $f$ often contains an input layer, multiple hidden layers and an output layer. We use $\theta$ to denote the parameters of $f$, which assign weights to each connected edge between neurons. Given an input $x$, we can obtain the output of each neuron on $x$, i.e., $f(x,ne)$, by calculating the weighted sum of the outputs of all the neurons in its previous layer and then applying an activation function $\phi$ (e.g., sigmoid, hyperbolic tangent (tanh), or rectified linear unit (ReLU)). Given a dataset $D=\{(x_i,y_i)\}_{i=1}^n$, a DNN is often trained by solving the following optimization problem: \begin{equation} \min_\theta\frac{1}{n}\sum_{i=1}^n \mathcal{J}(f_\theta(x_i),y_i), \end{equation} where $\mathcal{J}$ is a loss function which calculates a loss by comparing the model output $f_\theta(x_i)$ with the ground-truth label $y_i$. The most commonly used loss function for multi-class classification tasks is the categorical cross-entropy. The DNN is then trained by computing the gradient of the loss w.r.t. $\theta$ for each sample in $D$ and updating $\theta$ accordingly. \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{dnn.pdf} \caption{An example DNN to predict cat or dog.} \label{fig:dnn} \end{figure} \subsection{Deep Learning Testing} Most existing deep learning testing works are based on neuron coverage \cite{deepxplore} or its variants \cite{deepgauge}. Simply speaking, a neuron $ne$ is covered if there exists at least one test case $x$ for which $f(x,ne)$ is larger than a threshold, i.e., the neuron is activated. We omit the details of the other variants and briefly introduce the following testing methods as representatives, with pointers for more details. \vspace{1mm} \noindent\textbf{DeepXplore} \cite{deepxplore} is the first testing work for DNNs. DeepXplore proposed the first testing metric, i.e., neuron coverage, and a differential testing framework to generate test cases that improve neuron coverage. \vspace{1mm} \noindent\textbf{DeepHunter} \cite{deephunter} is a fuzzing framework which randomly selects seeds to fuzz, guided by the multi-granularity neuron coverage metrics defined in \cite{deepgauge}. \vspace{1mm} \noindent\textbf{ADAPT} \cite{adapt} is another recent work which adopts multiple adaptive strategies to generate test cases that improve the multi-granularity neuron coverage metrics defined in \cite{deepgauge}. \vspace{1mm} \noindent\textbf{Adversarial Attacks} Besides the above testing methods, traditional adversarial attacks like FGSM \cite{fgsm}, JSMA \cite{jsma}, C\&W \cite{cw} and PGD \cite{pgd} are also used to generate test cases in multiple works.
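For concreteness, the single-step FGSM attack mentioned above can be sketched as follows; this is the textbook formulation in TensorFlow~2 under illustrative assumptions (e.g., inputs normalized to $[0,1]$), not the exact code of any of the tools above.
\begin{verbatim}
import tensorflow as tf

def fgsm(model, x, y, eps):
    """FGSM: x_adv = x + eps * sign(grad_x J(f(x), y)),
    a one-step adversarial perturbation of seed x."""
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)
    y = tf.constant([y])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    x_adv = x + eps * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)[0].numpy()  # keep valid range
\end{verbatim}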
\subsection{Problem definition} Unlike existing coverage-guided testing works, our goal is to design a robustness-oriented testing framework to improve the DL model robustness by testing. Two key problems are to be answered: 1) how can we design testing metrics which are strongly correlated with model robustness? 2) how can we automatically generate test cases favoring the proposed testing metrics? \section{Conclusion} \label{sec:con} In this work, we propose a novel robustness-oriented testing framework RobOT for deep learning systems towards improving model robustness against adversarial examples. The core of RobOT is a lightweight metric called FOL, which quantifies both the value of each test case in improving model robustness (often via retraining) and its loss convergence quality. We also propose to utilize the proposed metric to automatically fuzz for more valuable test cases to improve model robustness. We implemented RobOT as a self-contained open-source toolkit. Our experiments on multiple benchmark datasets verify the effectiveness and efficiency of RobOT in improving DL model robustness, i.e., with a 67.02\% increase in adversarial robustness on average, which is 50.65\% higher than that of the state-of-the-art work DeepGini.
\section{Introduction} Deep learning (DL) \cite{dl} has been the core driving force behind the unprecedented breakthroughs in solving many challenging real-world problems such as object recognition \cite{cv} and natural language processing \cite{nlp}. Despite the success, deep learning systems are known to be vulnerable to adversarial examples (or attacks), which are slightly perturbed inputs that are imperceptibly different from normal inputs to human observers but can easily fool state-of-the-art DL systems into making incorrect decisions \cite{fgsm,cw,ma2018characterizing,jiang2019black,wu2020skip}. This not only compromises the reliability and robustness of DL systems, but also raises security concerns on their deployment in safety-critical applications such as face recognition \cite{face}, malware detection \cite{md}, medical diagnosis \cite{finlayson2019adversarial,ma2020understanding} and autonomous driving \cite{av,duan2020adversarial}. Notable efforts have been made in the software engineering community to mitigate the threats of adversarial examples and to improve the robustness of DL systems in the presence of adversarial examples \cite{deeppoly,deepxplore,wang2019adversarial}. Among them, formal verification aims to prove that no adversarial examples exist in the neighborhood of a given input. Substantial progress has been made using approaches like abstract interpretation \cite{deeppoly,yang2020improving} and reachability analysis \cite{star}. However, formal verification techniques are in general expensive and only scale to limited model structures and properties (e.g., local robustness \cite{huang2017safety}). Another popular line of work is deep learning testing, which aims to generate test cases that can expose the vulnerabilities of DL models. The test cases can then be used to improve the model robustness by retraining the model; \emph{however, this should not be taken for granted}, as recent studies have shown that test cases generated based on existing testing metrics have limited correlation with model robustness and robustness improvement after retraining~\cite{limit,meaningless}. In this work, we highlight and tackle the problem of effectively generating test cases for improving the adversarial robustness of DL models. \begin{figure*}[t] \centering \includegraphics[width=0.65\textwidth]{framework.pdf} \caption{Overview of RobOT testing framework.} \label{fig:frame} \end{figure*} There are two key elements when it comes to testing DL systems. The first element is the testing metric used to evaluate the quality of a test case or a test suite. Multiple testing metrics, including neuron coverage~\cite{deepxplore}, multi-granularity neuron coverage~\cite{deepgauge} and surprise adequacy~\cite{surprise}, have been proposed. The common idea is to explore as much diversity as possible of a certain subspace defined at different abstraction levels, e.g., neuron activation~\cite{deepxplore}, neuron activation pattern~\cite{deepgauge}, neuron activation conditions~\cite{concolic}, and neuron activation vector~\cite{surprise}. The second key element is the method adopted for test case generation, which is often done by manipulating a given seed input with the guidance of the testing metric. Existing test case generation techniques such as DeepXplore~\cite{deepxplore}, DeepConcolic~\cite{concolic}, DeepHunter~\cite{deephunter} and ADAPT~\cite{adapt} are mostly designed to improve the neuron coverage metrics of the test cases.
While existing testing approaches are helpful in exposing vulnerabilities of DL systems to some extent, recent studies have found that neuron coverage metrics are not useful for improving model robustness~\cite{limit,misleading,meaningless}. As a consequence, unlike in the case of traditional program testing (where the program is surely improved after fixing bugs revealed through testing), one may not improve the robustness of the DL system after testing. In this work, we address the above-mentioned limitations of existing DL testing approaches by proposing a novel DL testing framework called RobOT (i.e., \textit{Rob}ustness-\textit{O}riented \textit{T}esting), which integrates the DL (re)training with the testing. As illustrated in Fig.~\ref{fig:frame}, RobOT distinguishes itself from existing neuron coverage guided testing works in the following important aspects. First, RobOT is robustness-oriented. RobOT takes a user-defined requirement on the model robustness as input and integrates the retraining process into the testing pipeline. RobOT iteratively improves the model robustness by generating test cases based on a testing metric and retraining the model. Second, in RobOT, we propose a novel set of lightweight metrics that are strongly correlated with model robustness. The metrics can quantitatively measure the relevance of each test case for model retraining, and are designed to favor test cases that can significantly improve model robustness, which is in contrast to existing coverage metrics that have little correlation with model robustness. Furthermore, the proposed metrics can in turn provide strong evidence on the model robustness after testing. The output of RobOT is an enhanced model that satisfies the robustness requirement. In a nutshell, we make the following contributions. \begin{itemize} \item We propose a robustness-oriented testing (RobOT) framework for DL systems. RobOT provides an end-to-end solution for improving the robustness of DL systems against adversarial examples. \item We propose a new set of lightweight testing metrics that quantify the importance of each test case with respect to the model's robustness, which are shown to be stronger indicators of the model's robustness than existing metrics. \item We implement in RobOT a set of fuzzing strategies guided by the proposed metrics to automatically generate high-quality test cases for improving the model robustness. \end{itemize} RobOT is publicly available as an open-source self-contained toolkit \cite{code}. Experiments on four benchmark datasets confirm the effectiveness of RobOT in improving model robustness. Specifically, RobOT achieves 50.65\% more robustness improvement on average compared to the state-of-the-art work DeepGini~\cite{deepgini}. \section{Experimental Evaluation} \label{sec:exp} We have implemented RobOT as a self-contained toolkit with about 4k lines of Python code. The source code and all the experiment details are available at \cite{code}. In the following, we evaluate RobOT through multiple experiments. \subsection{Experiment Settings} \paragraph{Datasets and Models} We adopt four widely used image classification benchmark datasets for the evaluation. We summarize the details of the datasets and models used in Tab.~\ref{tb:ds}. \paragraph{Test Case Generation} We adopt two kinds of adversarial attacks and three kinds of coverage-guided testing approaches to generate test cases for the evaluation in the following.
We summarize all the configurations of the test case generation algorithms in Tab.~\ref{tb:tg}. \paragraph{Test Case Selection Baseline} We adopt the most recent work DeepGini \cite{deepgini} as the baseline test case selection strategy. DeepGini calculates a Gini index for each test case according to the output probability distribution of the model. A test case with a larger Gini index is considered more valuable for improving model robustness. \paragraph{Robustness Evaluation} We adopt Def.~\ref{def:empirical-robustness} to empirically evaluate a model's robustness. In practice, we compose a validation set of adversarial examples $D_v$ for each dataset by combining the adversarial examples generated using both FGSM and PGD (10000 each). The attack parameters are the same as in Tab.~\ref{tb:tg}. We then evaluate a model's robustness by calculating its accuracy on $D_v$. \begin{table}[] \centering \caption{Datasets and models.} \begin{tabular}{|l|llll|} \hline Dataset & Training & Testing & Model & Accuracy \\ \hline MNIST & 60000 & 10000 & LeNet-5 & 99.02\% \\ Fashion-MNIST & 60000 & 10000 & LeNet-5 & 90.70\% \\ SVHN & 73257 & 26032 & LeNet-5 & 88.84\% \\ CIFAR10 & 50000 & 10000 & ResNet-20 & 90.39\% \\ \hline \end{tabular} \label{tb:ds} \end{table} \begin{table}[] \centering \caption{Test case generation details.} \scalebox{0.74}{ \begin{tabular}{|l|l|llll|} \hline Testing Method & Parameter & MNIST & SVHN & Fashion-MNIST & CIFAR10 \\ \hline FGSM & Step size & 0.3 & 0.03 & 0.03 & 0.01 \\ PGD & Steps & 10 & 10 & 10 & 10 \\ & Step size & 0.3/6 & 0.03/6 & 0.3/6 & 0.01/6 \\ DeepXplore & Relu threshold & 0.5 & 0.5 & 0.5 & 0.5 \\ DLFuzz/ADAPT & Time per seed & 10 s & 10 s & 10 s & 20 s \\ & Relu threshold & 0.5 & 0.5 & 0.5 & 0.5 \\ \hline \end{tabular} } \label{tb:tg} \end{table} \subsection{Research Questions} \vspace{1mm} \noindent\textbf{RQ1: What is the correlation between our FOL metric and model robustness?} To answer this question, we first select three models with different robustness levels for each dataset. The first model (Model 1) is the original trained model. The second model (Model 2) is a robustness-enhanced model which is retrained\footnote{Retraining in this work takes 10 (40 for CIFAR10) additional epochs based on the original model.} by augmenting 5\% of the generated test cases and is more robust than Model 1. The third model (Model 3) is a robustness-enhanced model which is retrained by augmenting 10\% of the generated test cases and is the most robust. Then, for each model, we conduct adversarial attacks to obtain the same number (10000 for FGSM and 10000 for PGD) of adversarial examples. \begin{figure*}[t] \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_fol_distribution.pdf} \label{fig:fol:mnist} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_fol_distribution.pdf} \label{fig:fol:fashion} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_fol_distribution.pdf} \label{fig:fol:svhn} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_fol_distribution.pdf} \label{fig:fol:cifar} \end{subfigure} \caption{FOL distribution of adversarial examples for models with different robustness.} \label{fig:fol} \end{figure*} We show the FOL distribution of the adversarial examples for different models in Fig.~\ref{fig:fol}.
We observe that there is a strong correlation between the FOL distribution of adversarial examples and the model robustness. Specifically, \emph{the adversarial examples of a more robust model have smaller FOL values.} This is clearly evidenced by Fig.~\ref{fig:fol}, i.e., for every dataset, the probability density is concentrated around zero for Model 3 (the most robust model), while it expands steadily to larger FOL values for Model 2 and Model 1 (with Model 1 larger than Model 2). The underlying reason is that a more robust model in general has a flatter loss distribution and thus smaller FOL values (since FOL is based on the loss gradient). \begin{figure}[t] \centering \includegraphics[width=.33\textwidth]{figs/fgsm-pgd_fol_distribution.pdf} \caption{FOL distribution of adversarial examples from FGSM and PGD for the CIFAR10 model.} \label{fig:fol:com} \end{figure} In addition, we also observe that adversarial examples crafted by stronger attacks have smaller FOL values. Fig.~\ref{fig:fol:com} shows the FOL distribution of adversarial examples from attacking the CIFAR10 model with FGSM and PGD respectively. We observe that adversarial examples from PGD have significantly smaller FOL values than those from FGSM. The reason is that stronger attacks like PGD generate adversarial examples with better loss convergence quality, which induce higher loss. We thus have the following answer to RQ1: \begin{framed} \noindent \emph{Answer to RQ1: FOL is strongly correlated with model robustness. A more robust model has smaller FOL values on its adversarial examples.} \end{framed} \vspace{1mm} \noindent\textbf{RQ2: How effective is our FOL metric for test case selection?} To answer the question, we first generate a large set of test cases using different methods, and then adopt different test case selection strategies (i.e., BE-ST, KM-ST and DeepGini) to select a subset of test cases of the same size to retrain the model. A selection strategy is considered more effective if the model retrained with the selected test cases is more robust. We distinguish two different kinds of test case generation algorithms which are both used in the literature, i.e., adversarial attacks and neuron coverage-guided algorithms, for more fine-grained analysis. For adversarial attacks, we adopt FGSM (weak) and PGD (strong) attacks to generate a combined set of test cases. For DeepXplore, DLFuzz and ADAPT, we generate a set of test cases for each of them. The parameters used are consistent with Tab.~\ref{tb:tg}. For each set of test cases, we use the BE-ST, KM-ST and DeepGini strategies respectively to select $x$ percent (with $x$ ranging from 1 to 10) of them, obtain a retrained model and evaluate its robustness.
\begin{figure*}[t] \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_attack_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_attack_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_attack_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_attack_robustness.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_xplore_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_xplore_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_xplore_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_xplore_robustness.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_dlfuzz_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_dlfuzz_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_dlfuzz_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_dlfuzz_robustness.pdf} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/mnist_adapt_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_adapt_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_adapt_robustness.pdf} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_adapt_robustness.pdf} \end{subfigure} \caption{Test case selection and robustness improvement with different strategies.} \label{fig:ts} \end{figure*} Fig.~\ref{fig:ts} shows the results. We observe that for all the strategies, the retrained models show improved resilience to adversarial examples to some extent. Besides, the model robustness steadily improves as we augment more test cases (from 1\% to 10\%) for retraining. However, we also observe that in almost all cases (except one), our FOL guided strategies (both BE-ST and KM-ST) perform significantly better than DeepGini, achieving 30.48\%, 84.62\%, 54.91\% and 35.92\% more robustness improvement on average for the four different sets of test cases. The reason is that FOL selects test cases which have higher and more diverse loss than DeepGini (as shown in Fig.~\ref{fig:loss:st} previously), and such test cases are better correlated with model robustness. Meanwhile, we observe that the retrained models maintain high accuracy on the test set as well (as summarized in Tab.~\ref{tb:acc}).
Besides, we observe that different test case generation algorithms yield different robustness improvements. Among DeepXplore, DLFuzz and ADAPT, ADAPT and DLFuzz achieve the highest (53.39\% on average) and the lowest (31.18\% on average) robustness improvement respectively, while DeepXplore is in the middle (48.36\% on average). Adversarial attacks often achieve higher robustness improvement than all three neuron coverage-guided fuzzing algorithms for simpler datasets such as MNIST, Fashion-MNIST and SVHN. This casts doubt on the usefulness of the test cases generated by neuron coverage-guided fuzzing algorithms in improving model robustness, and is consistent with \cite{limit,misleading,meaningless}. We further conduct experiments to evaluate how robust the models retrained with adversarial examples from the attacks (Fig.~\ref{fig:ts}) are against the test cases generated by the different testing algorithms. We summarize the results in Tab.~\ref{tb:trans}. We observe that the robustness drops noticeably (which is especially the case for CIFAR10), i.e., by 18.64\%, 26.41\% and 23.09\% on average for DeepXplore, DLFuzz, and ADAPT respectively (compared to the results in Fig.~\ref{fig:ts}). Nevertheless, our test case selection strategies still outperform DeepGini in all cases. This shows that adversarial examples from adversarial attacks alone are insufficient; it is necessary to improve the diversity of test cases for retraining from a perspective that is well correlated with model robustness. \begin{table}[] \centering \caption{Test accuracy of the models before and after retraining with 10 percent of the test cases generated using adversarial attacks.} \begin{tabular}{|l|ll|} \hline Dataset & Original & Retrained \\ \hline MNIST & 99.02\% & 98.95\% \\ Fashion-MNIST & 90.70\% & 90.63\% \\ SVHN & 88.84\% & 87.13\% \\ CIFAR10 & 90.39\% & 90.13\% \\ \hline \end{tabular} \label{tb:acc} \end{table} \begin{table*}[] \caption{Robustness performance of models (retrained using adversarial examples from attack algorithms) against test cases generated by DL testing tools.} \centering \scalebox{0.85}{ \begin{tabular}{|l|llll|llll|llll|} \hline & \multicolumn{4}{c|}{DeepXplore} & \multicolumn{4}{c|}{DLFuzz} & \multicolumn{4}{c|}{ADAPT} \\ \cline{2-13} Dataset & BE-ST & KM-ST & DeepGini & Average & BE-ST & KM-ST & DeepGini & Average & BE-ST & KM-ST & DeepGini & Average \\ \hline MNIST & 86.12\% & 80.56\% & 73.74\% & \textbf{80.14\%} & 76.39\% & 74.73\% & 65.59\% & \textbf{72.24\%} & 82.60\% & 75.68\% & 70.36\% & \textbf{76.21\%} \\ Fashion-MNIST & 51.57\% & 47.97\% & 34.14\% & \textbf{44.56\%} & 38.15\% & 35.44\% & 27.16\% & \textbf{33.58\%} & 50.55\% & 47.50\% & 31.92\% & \textbf{43.32\%} \\ SVHN & 37.10\% & 38.29\% & 27.26\% & \textbf{34.55\%} & 32.83\% & 34.83\% & 25.34\% & \textbf{31.00\%} & 25.71\% & 28.51\% & 19.15\% & \textbf{24.46\%} \\ CIFAR10 & 25.25\% & 20.16\% & 12.92\% & \textbf{19.44\%} & 18.28\% & 14.20\% & 9.31\% & \textbf{13.93\%} & 22.37\% & 18.48\% & 12.08\% & \textbf{17.64\%} \\ \hline Average & \textbf{50.01\%} & \textbf{46.75\%} & \textbf{37.01\%} & & \textbf{41.41\%} & \textbf{39.8\%} & \textbf{31.85\%} & & \textbf{45.31\%} & \textbf{42.54\%} & \textbf{33.36\%} & \\ \hline \end{tabular} } \label{tb:trans} \end{table*} \begin{table*}[] \centering \caption{Comparison of FOL-Fuzz and ADAPT.
$a/b$: $a$ is the result of FOL-Fuzz and $b$ is the result of ADAPT.} \begin{tabular}{|l|ll|ll|ll|} \hline & \multicolumn{2}{c|}{5 min} & \multicolumn{2}{c|}{10 min} & \multicolumn{2}{c|}{20 min} \\ \cline{2-7} Dataset & \# Test case & Robustness$\uparrow$ & \# Test case & Robustness$\uparrow$ & \# Test case & Robustness$\uparrow$ \\ \hline MNIST & 1692/2125 & 33.62\%/18.73\% & 3472/4521 & 48.04\%/36.46\% & 7226/8943 & 68.02\%/54.38\% \\ Fashion-MNIST & 4294/5485 & 40.75\%/6.74\% & 8906/10433 & 53.88\%/14.94\% & 18527/21872 & 69.03\%/27.24\% \\ SVHN & 6236/8401 & 24.25\%/21.3\% & 12465/17429 & 30.42\%/27.52\% & 24864/33692 & 39.99\%/34.51\% \\ CIFAR10 & 1029/1911 & 18.62\%/17.03\% & 2006/3722 & 22.07\%/18.12\% & 4050/6947 & 27.36\%/20.54\% \\ \hline Average & \textbf{3313/4480} & \textbf{29.31\%/15.95\%} & \textbf{6712/9026} & \textbf{38.6\%/24.26\%} & \textbf{13667/17864} & \textbf{51.1\%/34.17\%} \\ \hline \end{tabular} \label{tb:fuzz} \end{table*} We thus have the following answer to RQ2: \begin{framed} \noindent \emph{Answer to RQ2: FOL guided test case selection is able to select more valuable test cases for improving the model robustness by retraining.} \end{framed} \vspace{1mm} \noindent\textbf{RQ3: How effective and efficient is our FOL guided fuzzing algorithm?} To answer the question, we compare our FOL guided fuzzing algorithm (FOL-Fuzz) with the state-of-the-art neuron coverage-guided fuzzing algorithm ADAPT as follows. We run FOL-Fuzz and ADAPT for the same period of time (i.e., 5, 10 and 20 minutes) to generate test cases. Then we retrain the model with the test cases to compare the robustness improvement. The hyper-parameters for FOL-Fuzz are set as follows: $\xi=10^{-18},\ k=5,\ \lambda=1,\ iters=3,\ learning\_rate=0.1$. The parameters for ADAPT are consistent with Tab.~\ref{tb:tg}. Tab.~\ref{tb:fuzz} shows the results. We observe that within the same time limit, ADAPT generates more adversarial examples, i.e., 10457 compared to 7897 for FOL-Fuzz on average. A closer look reveals that ADAPT tends to generate many test cases around a seed to improve the neuron coverage metrics. However, not all these tests are meaningful for improving model robustness. On the other hand, FOL-Fuzz is able to discover more valuable test cases. We observe that retraining the model with the FOL-Fuzz test cases (although fewer than ADAPT's) improves the model's robustness significantly more than ADAPT, i.e., 39.67\% compared to 24.79\% on average. We thus have the following answer to RQ3: \begin{framed} \noindent \emph{Answer to RQ3: FOL-Fuzz is able to efficiently generate more valuable test cases to improve the model robustness.} \end{framed} \subsection{Threats to Validity} First, our experiments are based on a limited set of test subjects in terms of datasets, types of adversarial attacks and neuron coverage-guided test case generation algorithms. Although we included a strong adversarial attack like PGD and the state-of-the-art coverage-guided generation algorithm ADAPT, it might be interesting to investigate other attacks like C\&W \cite{cw} and JSMA \cite{jsma}, and fuzzing algorithms like DeepHunter \cite{deephunter}. Second, we adopt an empirical approach to evaluate the model robustness, whose results may differ depending on the kinds of attacks used. It remains an open problem how to efficiently measure the robustness of DL models. We do not use a more rigorous robustness metric like CLEVER \cite{clever} because it is input-specific and costly to calculate (e.g., hours for one input).
Third, our testing framework requires a robustness requirement as input, which could be application-specific and is relevant to the model as well. In practice, users could adjust the requirement dynamically.
For instance, CLEVER relies on the extreme value theory, making it extremely costly to calculate. In RobOT, we adopt a practical empirical definition of robustness, which has been commonly used for model robustness evaluation in the machine learning literature \cite{pgd,pgd-con,zhang2019theoretically,wang2019improving,carlini2019evaluating}. \begin{definition} \label{def:empirical-robustness}\textbf{\textit{Empirical Robustness (ER)}} Given a DL model $f:X\to Y$ and a validation dataset $D_v$, we define its empirical robustness $\mu:(f,D_v,ATT)\to [0,1]$ as $\gamma$, where $ATT$ denotes a given type of adversarial attack and $\gamma$ is the accuracy of $f$ on the adversarial examples obtained by conducting $ATT$ on $\langle D_v,f\rangle$. $\hfill \square$ \end{definition} Intuitively, Def. \ref{def:empirical-robustness} evaluates a model's robustness using its accuracy on the adversarial examples crafted from a validation set $D_v$. Such an empirical view of DL robustness is testing-friendly and it facilitates RobOT to efficiently compare the robustness of the models before and after testing and retraining. Definition~\ref{def:empirical-robustness} is also practical, as it connects the DL robustness with many existing adversarial attacks (such as~\cite{fgsm,pgd,cw}) as a part of the definition. In particular, for the evaluation of RobOT in Section \ref{sec:exp}, we use two popular attacks, i.e., FGSM \cite{fgsm} and PGD (Projected Gradient Descent) \cite{pgd} as $ATT$. \subsection{RobOT DL Testing: A General View} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{testing} \caption{Comparison between traditional and deep learning system quality assurance by testing.} \label{fig:compar} \end{figure} We first compare and highlight the difference between testing traditional software and deep learning systems in Fig.~\ref{fig:compar}. While many testing methods (like random testing~\cite{randoop}, symbolic execution~\cite{klee}, concolic testing~\cite{wang2018towards} and fuzzing \cite{fuzz}) can be applied to identify vulnerabilities or bugs for both the traditional software and the DL systems, the workflow differs for the two after testing is done, i.e., the quality of traditional software is enhanced by patching the found bugs, whereas deep learning systems are improved via retraining. Arguably, the ultimate goal of testing is to improve the system's quality. Such improvement is guaranteed by patching bugs identified through testing in traditional software (assuming regression bugs are not frequent), i.e., the usefulness of a bug-revealing test for traditional software requires no justification. It is not obvious for DL systems, i.e., the usefulness of a test case can only be judged by taking into account the retraining step. Nevertheless, the retraining phase is largely overlooked so far in the deep learning testing literature. Based on the Empirical Robustness definition in Def.~\ref{def:empirical-robustness}, in Alg.~\ref{alg:main}, we present the high level algorithmic design of RobOT for the workflow of DL testing in Figure \ref{fig:compar}. The initial trained model $f_0$ is given as an input in the algorithm and the testing and retraining iterations in RobOT are conducted within the main loop (Lines 2-6). The loop continues until the user-provided empirical robustness requirement is satisfied (Line 2). RobOT aims to bridge the gap between the DL testing and retraining. Let $T$ (Line 3) denote a fuzzing algorithm to generate test cases (guided by certain metrics). 
\subsection{RobOT DL Testing: A General View} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{testing} \caption{Comparison between traditional and deep learning system quality assurance by testing.} \label{fig:compar} \end{figure} We first compare and highlight the difference between testing traditional software and deep learning systems in Fig.~\ref{fig:compar}. While many testing methods (like random testing~\cite{randoop}, symbolic execution~\cite{klee}, concolic testing~\cite{wang2018towards} and fuzzing \cite{fuzz}) can be applied to identify vulnerabilities or bugs in both traditional software and DL systems, the workflow differs for the two after testing is done, i.e., the quality of traditional software is enhanced by patching the found bugs, whereas deep learning systems are improved via retraining. Arguably, the ultimate goal of testing is to improve the system's quality. Such improvement is guaranteed by patching bugs identified through testing in traditional software (assuming regression bugs are not frequent), i.e., the usefulness of a bug-revealing test for traditional software requires no justification. This is not obvious for DL systems, i.e., the usefulness of a test case can only be judged by taking the retraining step into account. Nevertheless, the retraining phase is largely overlooked so far in the deep learning testing literature. Based on the Empirical Robustness definition in Def.~\ref{def:empirical-robustness}, in Alg.~\ref{alg:main}, we present the high-level algorithmic design of RobOT for the workflow of DL testing in Figure \ref{fig:compar}. The initial trained model $f_0$ is given as an input to the algorithm and the testing and retraining iterations of RobOT are conducted within the main loop (Lines 2-6). The loop continues until the user-provided empirical robustness requirement is satisfied (Line 2). RobOT aims to bridge the gap between DL testing and retraining. Let $T$ (Line 3) denote a fuzzing algorithm to generate test cases (guided by certain metrics). The objective of robustness-oriented testing is to improve the model robustness by testing. Formally, given a deep learning model $f$, the goal of RobOT at each iteration is to improve the following: \begin{equation} ER\Big(\argmin_\theta\frac{1}{n}\sum_{(x_i,y_i)\in D\cup T(f,D)}\mathcal{J}(\theta,x_i,y_i)\Big). \end{equation} Intuitively, the testing metric should be designed in such a way that after retraining with the generated test cases, the model robustness is improved. This objective directly links the testing metric to the model robustness. In the remainder of this section, we realize the method in Line 3 by answering the question: \emph{how should we design test metrics that are strongly correlated with the model robustness and how can we generate test cases guided by the proposed metrics?} \begin{algorithm}[t] \caption{RobOT($f_0,D,D_v,r,t$)} \label{alg:main} \begin{algorithmic}[1] \State $f=f_0$ \While{$ER(f,D_v,t)<r$} \State $D_t \leftarrow T(f, D)$ \State $D \leftarrow D \cup D_t$ \State Update $f$ by retraining the model with $D$ \EndWhile \State \Return $f$ \end{algorithmic} \end{algorithm} \subsection{Robustness-Oriented Testing Metrics} \label{sec:metrics} Our goal is to design testing metrics which are strongly correlated with model robustness. We note that there have been some efforts in the machine learning community to modify the standard training procedure in order to obtain a more robust model. The most effective and successful approach so far is robust training, which incorporates an adversary in the training process so that the trained model can be robust by minimizing the loss of adversarial examples in the first place \cite{pgd}: \begin{equation} \min_\theta\frac{1}{n}\sum_{i=1}^n\max_{||x'_i-x_i||_p\le\epsilon}\mathcal{J}(f(\theta,x'_i),y_i). \end{equation} At the heart of robust training is to identify a \emph{strong} (ideally worst-case) adversarial example $x'$ around\footnote{An $\epsilon$-ball defined according to a certain $L_p$ norm.} a normal example $x$ and train the model so that the loss on the strong adversarial example is minimized. Robust training has shown encouraging results in training more robust models~\cite{pgd,pgd-con}. This inspires us to consider deep learning testing analogously in terms of how we generate test cases (around a normal example) and retrain the model with the test cases to improve the model robustness. The key implication is that when we design robustness-oriented testing metrics to guide testing, we should evaluate the usefulness of a test case from a loss-oriented perspective. Let $x_0$ be the seed for testing. We assume that a test case $x^t$ is generated in the neighborhood $\epsilon$-ball around $x_0$, i.e., $\{x\mid ||x-x_0||_p\le\epsilon\}$, using either a testing method or an adversarial attack. The main intuition is that a test case which induces a higher loss is a stronger adversarial example, which is consequently more helpful in training robust models~\cite{pgd}. Based on this intuition, we propose two levels of testing metrics on top of the loss as follows. \paragraph{Zero-Order Loss (ZOL)} The first metric directly calculates the loss of a test case with respect to the DL model. Formally, given a test case $x^t$ (generated from seed $x$) and a DL model $f$, the loss of $x^t$ on $f$ is defined as: \begin{equation} ZOL(x^t,f)=\mathcal{J}(f(\theta,x^t),y), \end{equation} where $y$ is the ground-truth label of $x$.
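In a DL framework, ZOL is a single forward pass; a minimal sketch (assuming a TensorFlow/Keras-style classifier with softmax outputs, purely for illustration):
\begin{verbatim}
# Sketch of ZOL: the plain loss J of a test case x_t, where y is the
# ground-truth label of its seed (TensorFlow/Keras-style, assumed API).
import tensorflow as tf

def zol(model, x_t, y):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    return float(loss_fn(tf.constant([y]), model(x_t[None, ...])))
\end{verbatim}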
For test cases generated from the same seed, we prefer test cases with higher loss, which are more helpful in improving the model robustness via retraining. \paragraph{First-Order Loss (FOL)} The loss of generated test cases can be quite different for different seeds. In general, it is easier to generate test cases with high loss around seeds on which the model generalizes poorly. Thus, ZOL is unable to measure the value of test cases in a unified way. To address this problem, we propose a more fine-grained metric which measures to what degree a test case has achieved the highest loss in the seed's neighborhood. The intuition is that, given a seed input, the loss around it often first increases and eventually converges if we follow the gradient direction to modify the seed~\cite{pgd}. Thus, a criterion which measures how well the loss has converged can serve as the testing metric. A test case with better convergence quality corresponds to a higher loss than its neighbors. Next, we introduce the First-Order Stationary Condition (FOSC) to provide a measurement of the loss convergence quality of the generated test cases. Formally, given a seed input $x_0$, its neighborhood area $\mathcal{X}=\{x\mid ||x-x_0||_p\le\epsilon\}$, and a test case $x^t$, the FOSC value of $x^t$ is calculated as: \begin{equation} \label{eq:fosc} c(x^t)= \max_{x\in \mathcal{X}}\langle x-x^t,\nabla_x f(\theta,x^t)\rangle. \end{equation} In~\cite{pgd-con}, it is proved that the above problem has the following closed-form solution if we take the $L_\infty$ norm for $\mathcal{X}$: \begin{equation} \label{eq:fosc:sol1} c(x^t)=\epsilon||\nabla_x f(\theta,x^t)||_1-\langle x^t-x_0,\nabla_x f(\theta,x^t)\rangle. \end{equation} However, many existing DL testing works generate test cases from the $L_2$ norm neighborhood, which makes the above closed-form solution for $L_\infty$ inapplicable. We thus solve the formulation in Eq.~\ref{eq:fosc} with the $L_2$ norm and obtain the following solution: \begin{equation} \label{eq:fosc:sol2} c(x^t)=\epsilon||\nabla_x f(\theta,x^t)||_2. \end{equation} \begin{proof} By the Cauchy–Schwarz inequality: \begin{align*} |\langle x-x^t,\nabla_x f(\theta,x^t)\rangle|^2\le\\ \langle x-x^t,x-x^t\rangle \cdot \langle \nabla_x f(\theta,x^t),\nabla_x f(\theta,x^t)\rangle\le\\ \epsilon^2\cdot (||\nabla_x f(\theta,x^t)||_2)^2 \end{align*} Since there must exist $x\in\mathcal{X}$ such that $x-x^t$ and $\nabla_x f(\theta,x^t)$ are in the same direction, we thus have: \begin{align*} \max|\langle x-x^t,\nabla_x f(\theta,x^t)\rangle|^2=\epsilon^2\cdot(||\nabla_x f(\theta,x^t)||_2)^2 \end{align*} Thus, \begin{align*} \max|\langle x-x^t,\nabla_x f(\theta,x^t)\rangle|=\epsilon\cdot ||\nabla_x f(\theta,x^t)||_2 \end{align*} \end{proof} Note that FOSC (in both Eq.~\ref{eq:fosc:sol1} and Eq.~\ref{eq:fosc:sol2}) is cheap to calculate: its main cost is a one-time gradient computation, which is readily supported by all DL frameworks. The FOSC value represents the first-order loss of a given test case. The loss of a test case converges and achieves its highest value if the FOSC value equals zero. Thus, a smaller FOSC value means a better convergence quality and a higher loss.
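Computationally, Eq.~\ref{eq:fosc:sol2} is a single backward pass; the following sketch (again assuming a TensorFlow/Keras-style model, for illustration only) computes the $L_2$ FOL value of a test case:
\begin{verbatim}
# Sketch of FOL in the L2 setting (Eq. (12)):
# c(x_t) = epsilon * || grad_x J(f(theta, x_t), y) ||_2.
import tensorflow as tf

def fol_l2(model, x_t, y, epsilon):
    x = tf.convert_to_tensor(x_t[None, ...])
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            tf.constant([y]), model(x))
    grad = tape.gradient(loss, x)         # one-time gradient computation
    return epsilon * float(tf.norm(tf.reshape(grad, [-1])))
\end{verbatim}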
\paragraph{Comparison with Neuron Coverage Metrics} Compared to neuron coverage metrics, our proposed loss-based metrics have the following main differences. First, both ZOL and FOL are strongly correlated with the adversarial strength of the generated test cases and the model robustness. Thus, our metrics can serve as strong indicators of the model's robustness after retraining. Meanwhile, our metrics are also able to measure the value of each test case in retraining, which helps us select valuable test cases from a large number of candidates to reduce the retraining cost. \subsection{FOL Guided Test Case Selection} In the following, we show the usefulness of the proposed metric through an important application, i.e., test case selection from a massive number of test cases. Note that by default we use the FOL metric hereafter, due to the limitation of ZOL described above. Test case selection is crucial for improving the model robustness with a limited retraining budget. The key to test case selection is to quantitatively measure the value of each test case. So far this problem remains an open challenge. Prior work like DeepGini has proposed to calculate a Gini index of a test case from the model's output probability distribution~\cite{deepgini}. DeepGini's intuition is to favor those test cases with the most uncertainty (e.g., a flatter distribution) under the current model's prediction. Compared to DeepGini, FOL contains fine-grained information at the loss level and is strongly correlated with model robustness. Given a set of test cases $D^t$, we introduce two strategies based on FOL to select a smaller set $D^s\subset D^t$ for retraining the model as follows. Let $D^t=[x_1,x_2,\cdots,x_m]$ be a list ranked in descending order by FOL value, i.e., $FOL(x_i)\ge FOL(x_{i+1})$ for $i\in[1,m-1]$. \begin{algorithm}[t] \caption{KM-ST($D^t,k,n$)} \label{alg:kmst} \begin{algorithmic}[1] \State $D^s=\emptyset$ \State Let $max$ and $min$ be the maximum and minimum FOL values respectively \State Equally divide the range $[min,max]$ into $k$ sections $KR=[R_1,R_2,\cdots,R_k]$ \For{Each FOL range $r\in[R_1,R_2,\cdots,R_k]$} \State Randomly select $n/k$ samples $D^r$ from $D^t$ whose FOL values are in $r$ \State $D^s=D^s\cup D^r$ \EndFor \State \Return $D^s$ \end{algorithmic} \end{algorithm} \paragraph{K-Multisection Strategy (KM-ST)} The idea of KM-ST is to uniformly sample the FOL space of $D^t$. Algo.~\ref{alg:kmst} shows the details. Assume we need to select $n$ test cases from $D^t$. We equally divide the range of FOL into $k$ sections ($KR$) at line 3. Then for each range $r\in KR$, we randomly select the same number of test cases at line 5. \paragraph{Bi-End Strategy (BE-ST)} The idea of BE-ST is to form $D^s$ by equally combining test cases with small and large FOL values. This strategy mixes test cases of strong and weak adversarial strength, which is inspired by a recent work on improving standard robust training~\cite{khoury2019adversarial}. Given a ranked $D^t$, we simply take an equal number of test cases from the two ends of the list to compose $D^s$; both strategies are sketched below.
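A minimal sketch of the two strategies over precomputed FOL values (NumPy-based, illustrative only; ties and empty ranges are handled naively):
\begin{verbatim}
# Sketches of BE-ST and KM-ST over precomputed FOL values.
import numpy as np

def be_st(fol_values, n):
    # Bi-End: take n/2 indices from each end of the FOL ranking.
    order = np.argsort(fol_values)        # ascending FOL
    return np.concatenate([order[:n // 2], order[-(n - n // 2):]])

def km_st(fol_values, n, k):
    # K-Multisection: sample n/k indices from each of k equal FOL ranges.
    fol = np.asarray(fol_values)
    edges = np.linspace(fol.min(), fol.max(), k + 1)
    picks = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((fol >= lo) & (fol <= hi))[0]
        picks.append(np.random.choice(idx, min(n // k, len(idx)),
                                      replace=False))
    return np.concatenate(picks)
\end{verbatim}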
\begin{figure*}[t] \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height= 1.45in]{figs/mnist_loss_st.pdf} \label{fig:st:mnist} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/fashion_loss_st.pdf} \label{fig:st:fashion} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/svhn_loss_st.pdf} \label{fig:st:svhn} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[height=1.45in]{figs/cifar_loss_st.pdf} \label{fig:st:cifar} \end{subfigure}% \caption{Loss of selected test cases for different datasets using different strategies.} \label{fig:loss:st} \end{figure*} Figure~\ref{fig:loss:st} shows the loss map of the test cases selected by the different strategies. We observe that BE-ST prefers test cases of higher loss, KM-ST uniformly samples the loss space, while DeepGini often prefers test cases with lower loss. \subsection{FOL Guided Fuzzing} \label{sec:test-gen} Next, we introduce a simple yet efficient fuzzing strategy to generate test cases based on FOL. Note that since we have no prior knowledge of the FOL distribution, we are not able to design a fuzzing strategy for KM-ST. Instead, we design a fuzzing algorithm for the BE-ST strategy. The idea is to greedily search for test cases in two directions, i.e., with either small or large FOL values. Algo.~\ref{alg:fuzz} presents the details. The inputs include the model $f$, the list of seeds to fuzz $seeds\_list$, the fuzzing region $\epsilon$, the threshold $\xi$ on small FOL values, the number of labels to optimize $k$, a hyper-parameter $\lambda$ controlling how much we favor FOL during fuzzing, and lastly the maximum number of iterations $iters$ to fuzz each seed. For each seed in the list, we maintain a list of seeds $s\_list$ at line 3. After obtaining a seed $x$ from $s\_list$ (line 5), we iteratively add perturbations to it from line 8 to line 28 in a way guided by FOL. We set the following objective for optimization (line 9): \begin{equation} \label{eq:obj} obj=\sum_{i=2}^k P(c_i)-P(c_1)+\lambda \cdot FOL(x'), \end{equation} where $c_i$ is the label with the $i^{\text{th}}$ largest softmax probability of $f$ ($c_1$ with the maximum), $P(c)$ is the softmax output for label $c$ and $k$ is a hyper-parameter. The idea is to guide the perturbation towards changing the original label (i.e., generating an adversarial example) whilst increasing the FOL value. We then obtain the gradient of the objective (line 10) and calculate the perturbation from the gradient by multiplying it with a learning rate and a randomized coefficient (0.5 to 1.5) to avoid duplicate perturbations (line 11). We run two kinds of checks to achieve the BE-ST strategy at line 15 and line 22 respectively. If the FOL value of the new sample $x'$ after perturbation is either increasing (line 15) or smaller than the threshold (line 22), we add $x'$ to the seed list (line 17 and line 23). Furthermore, we add $x'$ to the fuzzing result if it satisfies the check and has a different label from the original seed $x$ (line 19 and line 25). Note that compared to neuron coverage guided fuzzing algorithms, which need to profile and update neuron coverage information~\cite{adapt,deephunter}, our FOL guided fuzzing algorithm is much more lightweight: its main cost is to calculate a gradient at each step.
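For concreteness, the objective of Eq.~\ref{eq:obj} can be evaluated as follows (Keras-style API assumed, for illustration only; the FOL value of the candidate would be computed as in the earlier sketch):
\begin{verbatim}
# Sketch of the fuzzing objective (Eq. (13)):
# obj = sum_{i=2..k} P(c_i) - P(c_1) + lambda * FOL(x').
import tensorflow as tf

def fuzz_objective(model, x_prime, fol_value, lam, k):
    probs = model(x_prime[None, ...])[0]          # softmax outputs
    top = tf.sort(probs, direction='DESCENDING')  # P(c_1) >= P(c_2) >= ...
    return tf.reduce_sum(top[1:k]) - top[0] + lam * fol_value
\end{verbatim}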
\begin{algorithm}[t] \caption{FOL-Fuzz($f,seeds\_list,\epsilon,\xi,k,\lambda,iters$)} \label{alg:fuzz} \begin{algorithmic}[1] \State Let $fuzz\_result=\emptyset$ \For{$seed\in seeds\_list$} \State Maintain a list $s\_list=[seed]$ \While{$s\_list$ is not empty} \State Obtain a seed $x=s\_list.pop()$ \State Obtain the label of the seed $c_1=f(x)$ \State Let $x'=x$ and $FOL_m=FOL(x)$ \For{$iter=0$ to $iters$} \State Set optimization objective $obj$ using Eq.~\ref{eq:obj} \State Obtain $grads = \nabla_{x'}\, obj$ \State Obtain $perb = processing(grads)$ \State Let $x' = x' + perb$ \State Let $c'= f(x')$ \State Let $dis = Dist(x', x)$ \If{$FOL(x')\ge FOL_m$ and $dis\le\epsilon$} \State $FOL_m=FOL(x')$ \State $s\_list.append(x')$ \If{$c'\neq c_1$} \State $fuzz\_result.append(x')$ \EndIf \EndIf \If{$FOL(x')<\xi$ and $dis\le\epsilon$} \State $s\_list.append(x')$ \If{$c'\neq c_1$} \State $fuzz\_result.append(x')$ \EndIf \EndIf \EndFor \EndWhile \EndFor \State \Return $fuzz\_result$ \end{algorithmic} \end{algorithm} \section*{Acknowledgments} This work was supported by the National Key R\&D Program of China (Grant No. 2020YFB2010900). This work was also supported by the NSFC Program (Grant No. 62061130220, 61833015 and 62088101), the Guangdong Science and Technology Department (Grant No. 2018B010107004) and the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No.: AISG-RP-2019-012). \bibliographystyle{plain}
\section{Introduction} \medskip \subsection{Arctic phenomenon} Geometrically constrained two-dimensional statistical models are known to display the so-called arctic phenomenon in the presence of suitable boundary conditions. This includes ``free fermion'' dimer models, where typically dimers choose a preferred crystalline orientation near boundaries while they tend to be disordered (liquid-like) away from the boundaries: the arctic phenomenon is the formation of a sharp phase boundary as the domain is scaled by a large overall factor, the so-called arctic curve separating frozen crystalline from disordered liquid phases. The first observed instance of this phenomenon is the celebrated arctic circle arising in the domino tilings of the Aztec diamond \cite{JPS}, and a general theory was developed for dimers \cite{KO2,KOS}. The free fermion character of these models can be visualized in their formulation in terms of non-intersecting lattice paths, i.e. families of paths with fixed ends, subject to the condition that they share no vertex (i.e. avoid each other); these models can consequently be expressed in terms of free lattice fermions. A manifestation of the free fermion character of these models is that their arctic curves are always analytic, and usually algebraic at ``rational'' values of the interaction parameters, such as in the uniformly weighted cases. Beyond free fermions, the archetypical model for paths allowed to interact by ``kissing'', i.e. sharing a vertex at which they bounce against each other, is the Six Vertex (6V) model. The families of paths describing the model are called osculating paths. With so-called Domain Wall Boundary Conditions (DWBC), the 6V model exhibits an arctic phenomenon in its disordered phase, which was predicted via non-rigorous methods~\cite{CP2010,CNP}, the most recent of which is the Tangent Method introduced by Colomo and Sportiello \cite{COSPO}. The new feature arising from these studies is that the arctic curves are generically {\it no longer analytic}, but rather {\it piecewise analytic}. For instance, the arctic curve for large Alternating Sign Matrices (uniformly weighted 6V-DWBC) is made of four pieces of different ellipses, as predicted in \cite{CP2010} and later proved in \cite{Aggar}. The Tangent Method was validated recently in a number of situations, mostly in free fermion settings \cite{CPS,DFLAP,PRarctic,DFGUI,DFG2,DFG3,CorKeat}. However, a simple transformation using the integrability of the models allowed one to deduce from the 6V results the arctic curves for another model of osculating paths: the Twenty Vertex (20V) model with DWBC1,2 \cite{BDFG}. The 20V model is the triangular lattice version of the 6V model: in one formulation, the configurations of the model are orientation assignments of all edges of the lattice, in such a way that the {\it ice rule} is obeyed at each vertex, namely that there be an equal number of edges pointing towards and away from it (2+2 for 6V, 3+3 for 20V). In \cite{DFGUI}, four possible variations around DWBC were considered for the 20V model, denoted DWBC1,2,3,4. In the present paper, we will concentrate on the 20V-DWBC3 model on a quadrangle, which was recently shown to have the same number of configurations as domino tilings of the Aztec Triangle of suitable size \cite{DF20V}.
The proof again uses the integrability of the model to relate its partition function to that of the 6V model with another type of DWBC, called U-turn boundary conditions, considered by Kuperberg in \cite{kuperberg2002symmetry}, and whose partition function has a nice determinantal form \cite{tsu,kuperberg2002symmetry}. In this paper, we set ourselves the task of deriving the arctic curves for the U-turn 6V model, and, as by-products, those of the 20V-DWBC3 model and of the Domino Tilings of the Aztec Triangle. \subsection{Arctic curves and the Tangent Method} The systems we are considering in this note are all described in terms of osculating or non-intersecting paths, and are expected to display an arctic curve phenomenon. The rough idea behind the Tangent Method is as follows. The $n$ paths describing the model's configurations have fixed starting and endpoints, and form a ``soup'' whose boundary tends to a subset of the arctic curve. Indeed, this boundary is a solid/liquid separation between an empty phase and one with disordered path configurations. Consider the outermost path forming that boundary: if we displace the endpoint of this path to a point, say $L$, outside of the original domain, the path will have to detach itself from the soup formed by the other paths and continue to its endpoint through mostly empty space, where it is most likely to follow a geodesic (a straight line in all cases of this paper, due to a general argument of \cite{DFLAP}). This geodesic is expected to be {\it tangent} to the arctic curve in the large $n$ limit. The corresponding path is therefore used as a probe into the arctic curve: the geodesic is determined by the point $L$ and the point $K$ at which it exits the original domain\footnote{Both points $K$ and $L$ scale linearly with the size $n$, so that a thermodynamic limit can be reached.}. The partition function $\Sigma_{n,L}$ of the new model is now a sum over the possible positions of $K$ of the product of two partition functions: (1) $Z_{n,K}$, the partition function of the $n$ osculating/non-intersecting paths on the original domain, in which the outer path is conditioned to end at point $K$ instead of its original endpoint; (2) $Y_{K,L}$, the partition function of a single path subject to the same weighting, in otherwise empty space, from the point $K$ to the new endpoint $L$. The quantity $\Sigma_{n,L}=\sum_K Z_{n,K}\, Y_{K,L}$ is dominated at large $n$ by contributions from the most likely exit point $K(n,L)$. The arctic curve is then recovered as the envelope of the family of geodesics through $L$ and $K(n,L)$ for varying $L$ (in rescaled coordinates). We see that the crucial ingredient in this method is the refined partition function $Z_{n,K}$ in which the outer path is conditioned to exit the domain at point $K$, or rather its normalized version, the ``refined one-point function'' $H_{n,K}=Z_{n,K}/Z_n$, where we have divided by the original partition function $Z_n$. Computing exactly the leading large $n,K,L$ asymptotics of $H_{n,K}$ and $Y_{K,L}$ leads to the determination of $K(n,L)$ by solving a steepest descent problem, and eventually to the arctic curve.
After revisiting the case of the 6V model for pedagogical purposes in Section \ref{6vsec}, we will apply the Tangent Method in Section \ref{6vpsec} to the case of the 6V' model on the $(2n-1)\times n$ rectangular grid (a simplified version of the U-turn 6V model), in Section \ref{20vsec} to the case of the 20V model with DWBC3 on the quadrangle $\mathcal Q_n$, and finally in Section \ref{DTsec} to the domino tilings of the Aztec triangle $\mathcal T_n$. Note that the Tangent Method was previously applied in \cite{PRarctic} to a particular ``free fermion'' case of the U-turn 6V model, where the arctic curve is a half-circle: the results of Section \ref{6vpsec} extend this to arbitrary values of the parameters. \subsection{Outline of the paper and main results} The paper is organized as follows. In Section \ref{secmodels} we define the four models studied in this paper. These include: the 6V model with Domain Wall Boundary Conditions (DWBC), the 6V model with U-turn Boundary Conditions and the related 6V' model, the 20V model with DWBC3 of Ref.~\cite{DF20V}, and finally the Domino Tiling problem of the Aztec Triangle introduced and studied in Refs.~\cite{DFGUI,DF20V}. We show in particular that all models are described by families of weighted osculating/non-intersecting paths. In Section \ref{sectan}, we describe the Tangent Method in general and how it applies to the determination of the arctic curves of our models. The next sections are all organized in a similar way, and treat the various models. For each case, we first derive compact relations obeyed by the partition function and one-point function of the model, allowing for the extraction of asymptotic results. The latter are used to apply the Tangent Method, and finally obtain the arctic curves of the model. While Section \ref{6vsec} revisits the known case of the 6V model with DWBC as a pedagogical warmup, the remaining Sections provide new results: Section \ref{6vpsec} is about the 6V' model, Section \ref{20vsec} the 20V model with DWBC3, and Section \ref{DTsec} the Domino Tilings of the Aztec Triangle. We obtain arctic curves in all cases: Theorems \ref{6VNEthm}, \ref{6VpNEthm}, \ref{20VNEthm} and \ref{DTthm} cover respectively the cases of 6V, 6V', 20V and Domino Tilings. We gather a few concluding remarks in Section \ref{seconc}. \medskip \noindent{\bf Acknowledgments.} We are thankful to G.A.P. Ribeiro for bringing Refs.~\cite{RIBKOR,PRarctic} to our attention. We acknowledge support from the Morris and Gertrude Fine endowment, the NSF grant DMS18-02044, and the NSF RTG Grant DMS19-37241. \bigskip \section{Models, Paths and the Tangent Method}\label{modelsec} \subsection{The models}\label{secmodels} In this paper we consider four different models: three vertex models with particular Domain-Wall-type boundary conditions (6 Vertex on an $n\times n$ square grid, 6 Vertex with U-turn boundaries on a $(2n-1)\times n$ rectangular grid, and 20 Vertex on the quadrangle $\mathcal Q_n$), and one model of domino tilings of the Aztec triangle $\mathcal T_n$. \subsubsection{6V model with DWBC} \begin{figure} \begin{center} \includegraphics[width=14cm]{six.eps} \end{center} \caption{\small The 6 vertex environments obeying the ice rule on the square lattice (top) and their osculating path reformulation (bottom).
We have indicated the corresponding types a, b, c.} \label{fig:six} \end{figure} \begin{figure} \begin{center} \includegraphics[width=14cm]{oscsixv.eps} \end{center} \caption{\small (a) A sample configuration of the 6V model with DWBC (for $n=9$) and (b) its reformulation in terms of osculating paths.} \label{fig:oscsixv} \end{figure} The 6V model is the archetype of an integrable ice-type model on the two-dimensional square lattice. Its configurations are obtained by orienting the edges of the lattice (with arrows) in such a way that each vertex has exactly two entering and two outgoing arrows (the so-called ``ice rule''). This gives rise to the ${4\choose 2}=6$ local environments of Fig.~\ref{fig:six} (top row), traditionally called a, b, c types. Here we consider the 6V model on an $n\times n$ square grid with fixed Domain Wall Boundary Conditions (DWBC), i.e. the $2n$ horizontal boundary arrows ($n$ on the West (W) and $n$ on the East (E) boundaries) pointing towards the square domain and the $2n$ vertical ones ($n$ on the North (N) and $n$ on the South (S) boundaries) pointing outwards (see Fig.~\ref{fig:oscsixv} (a) for an illustration). Finally, the configurations are weighted by the product of local vertex weights over the domain\footnote{We restrict throughout the paper to the Disordered regime, in which all weights are trigonometric.}, parameterized by real ``spectral parameters'' $u,v$ attached to the horizontal and vertical lines that intersect at the vertex, taking the following values in the so-called Disordered regime, which we consider in this paper: \begin{equation}\label{6vweights} a=\rho\sin(u-v+\eta) ,\qquad b=\rho\sin(u-v-\eta),\qquad c=\rho\sin(2\eta)\end{equation} for a, b, c type vertices respectively. The overall fixed factor $\rho>0$ emphasizes the projective nature of the weights and the homogeneity of the partition function (the weighted sum over configurations), from which $\rho^{n^2}$ factors out. Positivity of the weights imposes the condition: \begin{equation}\label{domain6v} \eta<u-v < \pi-\eta , \qquad 0<\eta<\frac{\pi}{2} .\end{equation} In the following we shall consider the homogeneous partition function $Z_n^{6V}[u,v]\equiv Z_n^{6V}[u-v]$ in which all horizontal spectral parameters take the value $u$ and all vertical ones the value $v$, so that the weights are uniformly defined by \eqref{6vweights}; note that both the weights and the partition function only depend on the difference $u-v$. As stressed in \cite{CP2009}, the partition function enjoys the following crucial symmetry property: \begin{equation}\label{sym6v} Z_n^{6V}[\pi-(u-v)]=Z_n^{6V}[u-v] . \end{equation} This is a consequence of the reflection symmetry of the weights: indeed, the DWBC are unchanged if we reflect the domain w.r.t., say, a horizontal line. However, such a reflection interchanges vertices of types $a \leftrightarrow b$ while keeping $c$-type environments unchanged. The same result is independently obtained by keeping the original setting, but applying the transformation $(u-v)\to \pi-(u-v)$, which leaves the domain \eqref{domain6v} invariant, and under which the weights $a$ and $b$ are interchanged while $c$ remains invariant; \eqref{sym6v} follows.
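The symmetry \eqref{sym6v} is also easily checked by brute force for small $n$: the following sketch (with our own, purely illustrative conventions for which continuation class is called $a$ or $b$) enumerates all ice configurations with DWBC and verifies the equivalent statement that $Z_n^{6V}$, viewed as a polynomial in the weights, is invariant under $a\leftrightarrow b$.
\begin{verbatim}
# Brute-force check of (sym6v): enumerate DWBC ice configurations on an
# n x n grid and verify the a <-> b symmetry of the partition function.
from itertools import product
import sympy as sp

a, b, c = sp.symbols('a b c')

def Z_6V_DWBC(n):
    Z = 0
    n_h = n * (n - 1)   # internal horizontal edges (True = points east)
    n_v = n * (n - 1)   # internal vertical edges   (True = points north)
    for bits in product([True, False], repeat=n_h + n_v):
        hb, vb = bits[:n_h], bits[n_h:]
        # h[i][j], j=0..n: edge west of vertex (i,j); DWBC: W in, E in.
        h = [[True] + list(hb[i*(n-1):(i+1)*(n-1)]) + [False]
             for i in range(n)]
        # v[i][j], i=0..n: edge north of vertex (i,j); DWBC: N out, S out.
        v = ([[True] * n]
             + [list(vb[i*n:(i+1)*n]) for i in range(n - 1)]
             + [[False] * n])
        w, ok = 1, True
        for i in range(n):
            for j in range(n):
                incoming = (h[i][j] + (not h[i][j+1])
                            + (not v[i][j]) + v[i+1][j])
                if incoming != 2:           # ice rule: 2 in, 2 out
                    ok = False
                    break
                if h[i][j] != h[i][j+1]:    # horizontal arrows flip: c-type
                    w *= c
                elif h[i][j] == v[i][j]:    # one continuation class -> "a"
                    w *= a
                else:                       # the other class -> "b"
                    w *= b
            if not ok:
                break
        if ok:
            Z += w
    return sp.expand(Z)

for n in (1, 2, 3):
    Z = Z_6V_DWBC(n)
    assert sp.expand(Z - Z.subs({a: b, b: a}, simultaneous=True)) == 0
\end{verbatim}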
This model was extensively studied \cite{Lieb,DWBC,IKdet}, and turned out to play a crucial role in Kuperberg's proof of the Alternating Sign Matrix (ASM) conjecture \cite{kuperberg1996another,Bressoud}. The enumeration of ASMs is realized at the ``ASM point'' where all weights are equal to $1$, namely: \begin{equation}\label{asmpoint} \eta=\frac{\pi}{6},\qquad u-v=\frac{\pi}{2},\qquad \rho=\frac{1}{\cos(\eta)} \end{equation} while the refined enumeration (with a factor $\tau$ per entry $-1$ in the ASM) is provided by picking \begin{equation}\label{tauasmpoint}u-v=\frac{\pi}{2}, \qquad \rho=\frac{1}{\cos(\eta)}, \qquad \tau=4\sin^2(\eta), \qquad 0<\eta<\frac{\pi}{2} \end{equation} namely $(a,b,c)=(1,1,\sqrt{\tau})$, with the particular cases of $1,2,3$-enumeration corresponding to the choices $\eta=\frac{\pi}{6},\frac{\pi}{4},\frac{\pi}{3}$ respectively. The ``20V-DWBC1,2 point'' is another interesting combinatorial point, which corresponds to the identification of the number of 20V DWBC1,2 configurations in terms of 6V DWBC \cite{DFGUI}, with the choice: \begin{equation}\label{20vpoint}\eta=\frac{\pi}{8},\qquad u-v=\frac{5\pi}{8}, \qquad \rho=\sqrt{2} \end{equation} corresponding to weights $(a,b,c)=(1,\sqrt{2},1)$. More recently, the thermodynamic free energy of the model was obtained in \cite{KORZIN,PZ6V,BleFok}, and the arctic curves were derived using various semi-rigorous methods, such as the Tangent Method in \cite{COSPO,CPS}, and further used in \cite{BDFG} to determine the arctic curves of the 20V DWBC1,2 models. The configurations of the model can be rephrased in terms of families of osculating paths as follows. Pick a base orientation of arrows, say to the right and down, and mark all the edges of any given configuration that respect the base orientation. Note that all the W and S boundary edges are marked, while the N and E ones are not. The marked edges can be combined into paths, say starting at the W boundary and ending at the S one, with right and down steps only, that are non-intersecting but may kiss/osculate by pairs at fully marked vertices: the corresponding six local configurations are depicted on the second row of Fig.~\ref{fig:six}. The osculating path formulation is well adapted to the Tangent Method, as we shall see below. For illustration, we have represented in Fig.~\ref{fig:oscsixv} a sample 6V DWBC configuration both in the arrow (a) and osculating path (b) formulations. \subsubsection{6V model with U-turn boundary and 6V' model}\label{gen6vpsec} \begin{figure} \begin{center} \includegraphics[width=16cm]{6vUp.eps} \end{center} \caption{\small (a) U-turn boundary 6V model: each U-turn (marked by a black dot along the W boundary) transmits the arrow orientation through the dot. (b) When all $u_i=-\theta-\eta$, the weights $y_d=0$, hence all arrows go up through the U-turns, which may be cut out as shown. The bottom row becomes trivially fixed to the same b-type vertex. (c) The 6V' model is finally obtained by cutting out the trivial b-type vertices.} \label{fig:6vUtop} \end{figure} Kuperberg considered different symmetry classes of ASMs, which in turn correspond to different variations around the 6V-DWBC model \cite{kuperberg2002symmetry}. In particular, he found a remarkable connection between Vertically Symmetric ASMs (VSASMs) and the 6V model with so-called U-turn boundary conditions (6V-U), also considered independently by Tsuchiya \cite{tsu}.
The 6V-U model is defined on a rectangular grid of the square lattice of size $2n\times n$, with the usual DWBC along the N,S boundaries (each with $n$ outgoing vertical arrows) and the E boundary (with $2n$ entering horizontal arrows), while the W boundary has U-turns connecting the $2n$ horizontal boundary edges (which we label $0,1,2,...,2n-1$ from bottom to top) by $n$ consecutive pairs $(2i,2i+1)$, $i=0,1,2,...,n-1$ (see Fig.~\ref{fig:6vUtop} (a) for an illustration). Each U-turn transmits the arrow orientation through the marked dot. The horizontal lines connected by a U-turn receive horizontal spectral parameters $-u_i$ (even label $2i$) and $u_i$ (odd label $2i+1$), while vertical spectral parameters are denoted by $v_i$, $i=1,2,...,n$ from left to right. As before, we consider the Disordered regime, with trigonometric weights depending on horizontal and vertical spectral parameters as in the case of the 6V-DWBC model. The local weights, say at the intersection of a horizontal line with spectral parameter $u$ and a vertical line with spectral parameter $v$, are: \begin{equation}\label{6voddweights} a_o=\rho_o\sin(u-v+\eta) ,\ b_o=\rho_o\sin(u-v-\eta),\ c_o=\rho_o\sin(2\eta)\end{equation} on odd rows, while we must apply the transformation $u\to -u$ on even rows, resulting in: \begin{equation}\label{6vevenweights} a_e=\rho_e\sin(\eta-u-v) ,\ b_e=\rho_e\sin(-u-v-\eta),\ c_e=\rho_e\sin(2\eta)\end{equation} where the overall constant factors $\rho_o,\rho_e>0$ emphasize the projective character of the weights. Finally, U-turns receive weights: \begin{equation}\label{uweights} y_u(u)=-\sin(u-\theta+\eta) \qquad y_d(u)= \sin(u+\theta+\eta) \end{equation} according to whether the transmitted arrow goes up or down. We must further constrain $u,v,\theta$ so that the weights of configurations of the 6V-U model are positive. A natural choice is $0<\theta<\frac{\pi}{2}$ and the following domain for the $u,v,\eta$ parameters: \begin{equation}\label{domain6} \eta<u-v<\pi-\eta,\quad \eta-\pi <u+v<-\eta, \quad 0<\eta<\frac{\pi}{2} \end{equation} For the purpose of this paper, we will consider the uniform case, where all horizontal odd spectral parameters are equal, with value $u_i=u$ for all $i$, and all vertical ones are equal, with value $v_j=v$ for all $j$, so that the weights of odd/even rows are given by \eqref{6voddweights} and \eqref{6vevenweights} respectively. Moreover, we pick $\theta=-u-\eta$, thus enforcing that at each U-turn the arrows go up\footnote{This choice simplifies the model by fixing the orientations of all arrows along the W boundary. However, we argue that the thermodynamics of the model are insensitive to that choice. For instance, the thermodynamic free energy, a bulk quantity, is independent of the choice of $\theta$ (see Remark \ref{thetarem} below). So is the one-point function (see Remark \ref{thetaonerem} below). As a consequence, the arctic curves of the U-turn 6V and of the 6V' models are expected to be identical.}. Dividing each U-turn into two horizontal edges, we now obtain arrows that alternate in/out along the W boundary (as shown in Fig.~\ref{fig:6vUtop} (b)). Note that the bottom row of vertices has all its edge orientations fixed by the ice rule. Upon dividing by the corresponding product of local even b-type weights, we may safely remove the $n$ vertices of the bottom line.
After dividing by the weights of the removed vertices and U-turns, we are left with the 6V model on a rectangular grid of the square lattice of size $(2n-1)\times n$, with the usual DWBC along the N,E,S boundaries, while arrows alternate in/out from bottom to top along the W boundary (as depicted in Fig.~\ref{fig:6vUtop} (c)). Note that the rows are now labelled $1,2,...,2n-1$ from bottom to top. For lack of a better name, we shall refer to this model as the 6V' model, and denote by $Z_n^{6V'}[u,v]$ the corresponding homogeneous partition function. Similarly to the 6V-DWBC case, this partition function enjoys a reflection symmetry property: \begin{equation}\label{6vpsym} Z_n^{6V'}[-u,-\pi-v]=Z_n^{6V'}[u,v] \end{equation} Indeed, like in the 6V case, applying a reflection w.r.t. a horizontal line to the rectangular domain interchanges vertices of types $a_o\leftrightarrow b_o$ and $a_e \leftrightarrow b_e$ while $c$-type vertices are unchanged. The same result is obtained in the original setting by applying the transformation $(u,v)\to (-u,-\pi-v)$, which leaves the domain \eqref{domain6} invariant, and \eqref{6vpsym} follows. We now examine a few ``combinatorial points'' in parameter space, where the partition function of the 6V' model has some known combinatorial interpretations. Similarly to the 6V-DWBC case, the enumeration of Vertically Symmetric ASMs (VSASMs) is realized \cite{kuperberg2002symmetry} at the ``VSASM point'' of the 6V' model, where all weights are equal to $1$, namely: \begin{equation}\label{vsasmpoint} \eta=\frac{\pi}{6},\qquad u=0,\qquad v=-\frac{\pi}{2},\qquad \rho_o=\rho_e=\frac{1}{\cos(\eta)} \end{equation} while the refined enumeration corresponds to \cite{kuperberg2002symmetry}: \begin{equation}\label{tauvsasmpoint} u=0,\qquad v=-\frac{\pi}{2},\qquad \rho_o=\rho_e=\frac{1}{\cos(\eta)}, \qquad \tau=4\sin^2(\eta) \end{equation} with even and odd weights $(a_i,b_i,c_i)=(1,1,\sqrt{\tau})$, $i=o,e$, and with the particular cases of $1,2,3$-enumeration of VSASMs corresponding respectively to $\eta=\frac{\pi}{6},\frac{\pi}{4},\frac{\pi}{3}$. The ``20V-DWBC3 point'' is another interesting combinatorial point, which corresponds to the identification of the number of 20V DWBC3 configurations on $\mathcal Q_n$ in terms of the 6V' model \cite{DF20V}, with the choice: \begin{equation}\label{20v3point} \eta=\frac{\pi}{8},\qquad u=\frac{\pi}{8},\qquad v=-\frac{\pi}{2},\qquad \rho_o=\rho_e=\sqrt{2} \end{equation} \begin{figure} \begin{center} \includegraphics[width=9cm]{oscu6vp.eps} \end{center} \caption{\small (a) A sample configuration of the 6V' model (for $n=5$) and (b) its reformulation in terms of osculating paths.} \label{fig:oscu6vp} \end{figure} Like in the 6V-DWBC case, the configurations of the model may be rephrased in terms of osculating paths. Using the same recipe, we see that configurations are in bijection with families of $n$ osculating paths, starting at the odd horizontal edges along the W boundary and ending at all vertical edges along the S boundary. For illustration, we have represented in Fig.~\ref{fig:oscu6vp} a sample 6V' configuration both in the arrow (a) and osculating path (b) formulations. The thermodynamic free energy of the U-turn 6V/6V' model was derived in Ref.~\cite{RIBKOR}, and arctic curves were derived in the VSASM case \cite{DFLAP} and in the particular free fermion case corresponding to $\eta=\frac{\pi}{4}$, $u=0$, $v=-\frac{\pi}{2}$ \cite{PRarctic}.
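The weight exchange underlying \eqref{6vpsym} is elementary to check symbolically; a minimal sympy sketch (with $\rho_o=\rho_e=1$, purely illustrative):
\begin{verbatim}
# Check that (u,v) -> (-u, -pi - v) swaps a- and b-type weights on both
# odd and even rows, while c-type weights are trivially unchanged.
import sympy as sp

u, v, eta = sp.symbols('u v eta', real=True)
s = sp.sin
a_o, b_o = s(u - v + eta), s(u - v - eta)     # odd-row weights
a_e, b_e = s(eta - u - v), s(-u - v - eta)    # even-row weights
star = {u: -u, v: -sp.pi - v}
for w, w_star in [(a_o, b_o), (b_o, a_o), (a_e, b_e), (b_e, a_e)]:
    assert sp.simplify(w.subs(star, simultaneous=True) - w_star) == 0
\end{verbatim}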
\subsubsection{20V model with DWBC3}\label{20vpresec} \begin{figure} \begin{center} \includegraphics[width=14cm]{twentyV.eps} \end{center} \caption{\small The twenty vertex environments obeying the ice rule on the triangular lattice (top two rows) together with their osculating Schr\"oder path reformulation (bottom two rows).} \label{fig:vingt} \end{figure} The 20V model is a two-dimensional ice-type model defined on the triangular lattice. As in the 6V case, edges are oriented in such a way that at each vertex there are exactly three incoming and three outgoing arrows. This gives rise to the ${6\choose 3}=20$ local vertex configurations depicted in Fig.~\ref{fig:vingt}. Recently this model was considered with special boundary conditions \cite{DFGUI} emulating DWBC on some particular domains. For simplicity, the triangular lattice is represented with vertices in ${\mathbb Z}^2$, where the edges of the square lattice are supplemented by the second diagonal of each square face. Edges are accordingly called horizontal, vertical and diagonal. In Ref.~\cite{DFGUI}, four types of boundary conditions (DWBC1,2,3,4) were considered on an $n\times n$ square grid in this representation, with remarkable combinatorial properties. The DWBC1,2 are closest to the 6V-DWBC, and correspond to arrows entering the domain on the W and E boundaries, and exiting on the N and S boundaries, with a particular choice of the NW and SE corner diagonal edges as belonging to the W and S boundaries respectively (DWBC1) or to the N and E ones (DWBC2). The DWBC3 is a more relaxed version of DWBC, where only the horizontal arrows point toward the domain on the W boundary, and only vertical arrows point outward on the S boundary, while all other arrows point outward on W and N, and inward on S and E. In \cite{DFGUI}, a family of pentagonal extensions of the grid was considered, and the corresponding 20V configurations were conjectured to correspond to the domino tilings of special domains, viewed as truncations of the Aztec Triangle. The conjecture was proved in \cite{DF20V} for the maximal extension, namely the 20V model with DWBC3 on the quadrangle $\mathcal Q_n$ of shape $n\times n\times (2n-1)\times n$ (see Fig.~\ref{fig:osc20v} (a) for an illustration), whose partition function was shown to be identical to that of the domino tilings of the Aztec triangle $\mathcal T_n$ (see Fig.~\ref{fig:dt0} (a)). In the present paper, we shall concentrate on this model. \begin{figure} \begin{center} \includegraphics[width=14cm]{osc20v.eps} \end{center} \caption{\small (a) A sample configuration of the 20V model with DWBC3 on the quadrangle $\mathcal Q_n$ (with $n=9$ here) and (b) its osculating Schr\"oder path reformulation.} \label{fig:osc20v} \end{figure} \begin{figure} \begin{center} \includegraphics[width=14cm]{generalweights20V.eps} \end{center} \caption{\small The seven classes of vertices of the 20V model (in the osculating Schr\"oder path formulation), and their corresponding weights $\omega_i$, $i=0,1,2,...,6$.} \label{fig:weight20v} \end{figure} Like in the 6V case, we may rephrase the arrow configurations of the 20V model in terms of osculating paths with horizontal, vertical and diagonal steps along the corresponding edges of the lattice (such paths are usually called Schr\"oder paths). This is done similarly, by picking a base orientation (right, down, and diagonal down and to the right) of all the edges of the lattice, and marking only those edges of a given configuration of the 20V model that agree with it.
The selected edges are assembled again into non-intersecting, but possibly kissing, paths travelling to the right and down. We have represented in Fig.~\ref{fig:vingt} (bottom two rows) the 20 local path configurations at a vertex corresponding to the 20 arrow configurations (top two rows). In the osculating Schr\"oder path formulation, the DWBC3 on $\mathcal Q_n$ gives rise to families of $n$ osculating paths starting at the $n$ odd horizontal edges along the W boundary, and ending at the $n$ vertical edges of the diagonal SW boundary, as displayed in Fig.~\ref{fig:osc20v} (b). As detailed in \cite{DFGUI,Kel}, the model receives integrable weights inherited from those of the 6V model upon resolving the triple intersections of spectral parameter lines at each vertex into three simple intersections corresponding to three 6V models on three distinct lattices. Integrability was used in \cite{DF20V} to transform the partition function of the 20V DWBC3 model into that of a 6V' model, for a particular normalization of the spectral parameters of the 20V model. With this normalization, the seven local vertex weights corresponding to the dictionary of Fig.~\ref{fig:weight20v} read respectively: \begin{eqnarray}\label{weights20V} \omega_0&=&\nu\, \sin(u-v+\eta)\, \sin(\eta-u-v)\,\sin(2u+2\eta)\nonumber \\ \omega_1&=&\nu\, \sin(u-v-\eta)\, \sin(-u-v-\eta)\,\sin(2u+2\eta)\nonumber \\ \omega_2&=&\nu\, \sin(u-v-\eta)\, \sin(2u+2\eta)\,\sin(2\eta)\nonumber \\ \omega_3&=&\nu\, \{ \sin^3(2\eta)+\sin(u-v+\eta)\, \sin(-u-v-\eta)\,\sin(2u) \}\nonumber \\ \omega_4&=&\nu\,\sin(2u+2\eta) \, \sin(\eta-u-v)\,\sin(2\eta) \nonumber \\ \omega_5&=&\nu\, \sin(u-v-\eta)\,\sin(\eta-u-v)\,\sin(2\eta) \nonumber \\ \omega_6&=&\nu\, \sin(u-v-\eta)\,\sin(\eta-u-v)\,\sin(2u) \end{eqnarray} where again the fixed overall factor $\nu>0$ emphasizes the projective nature of the weights. Note that each vertex is the intersection of three lines (horizontal, vertical, diagonal), each of which carries a spectral parameter ($\eta+u,\ v,\ -u$ respectively). The domain of parameters ensuring positivity of the weights is: \begin{equation}\label{domain20} 0<u<\frac{\pi}{2}-\eta \qquad \eta <u-v<\pi-\eta \qquad \eta<-u-v<\pi-\eta \qquad 0<\eta<\frac{\pi}{2} \end{equation} (Note the similarity with the domain \eqref{domain6} for the 6V' model, the only extra condition being that $u>0$.) Note the existence of a combinatorial point where the weights are uniform and all equal to $1$: \begin{equation}\label{combipoint20v}\eta=\frac{\pi}{8},\qquad u=\eta=\frac{\pi}{8},\qquad v=-4\eta=-\frac{\pi}{2}, \qquad \nu=\sqrt{2} , \end{equation} identical to the 20V-DWBC3 point \eqref{20v3point} of the 6V' model, where the partition functions of both models are related \cite{DF20V}.
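As a quick sanity check, the following sympy snippet (illustrative only) verifies that all seven weights \eqref{weights20V} indeed equal $1$ at the point \eqref{combipoint20v}:
\begin{verbatim}
# Check that the weights (weights20V) are all 1 at (combipoint20v).
import sympy as sp

eta = sp.pi / 8
u, v, nu = eta, -4 * eta, sp.sqrt(2)
s = sp.sin
omega = [nu * s(u-v+eta) * s(eta-u-v) * s(2*u+2*eta),
         nu * s(u-v-eta) * s(-u-v-eta) * s(2*u+2*eta),
         nu * s(u-v-eta) * s(2*u+2*eta) * s(2*eta),
         nu * (s(2*eta)**3 + s(u-v+eta) * s(-u-v-eta) * s(2*u)),
         nu * s(2*u+2*eta) * s(eta-u-v) * s(2*eta),
         nu * s(u-v-eta) * s(eta-u-v) * s(2*eta),
         nu * s(u-v-eta) * s(eta-u-v) * s(2*u)]
assert all(sp.simplify(w - 1) == 0 for w in omega)
\end{verbatim}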
\subsubsection{Domino Tilings of the Aztec Triangle}\label{dtrefsec} \begin{figure} \begin{center} \includegraphics[width=16cm]{DT0.eps} \end{center} \caption{\small (a) A sample domino tiling of the Aztec triangle $\mathcal T_n$ (here for $n=6$) and (b) its non-intersecting Schr\"oder path formulation. The dictionary for the path steps (horizontal, vertical, empty, diagonal) is indicated for each of the four (bicolored) domino configurations. } \label{fig:dt0} \end{figure} Our fourth class of objects is the set of tiling configurations by means of $2\times 1$ dominos of the ``Aztec Triangle'' of order $n$ \cite{DFGUI,DF20V}, denoted $\mathcal T_n$ and depicted in Fig.~\ref{fig:dt0} (a). The identity between the number of 20V-DWBC3 configurations on $\mathcal Q_n$ and the number of domino tilings of the Aztec triangle $\mathcal T_n$ was conjectured in \cite{DFGUI} and proved in \cite{DF20V}. In Sect.~\ref{DTsec} below, we will make use of this correspondence to determine the limit shape of typical domino tilings of $\mathcal T_n$ for large $n$. It proves useful to rephrase the domino tiling problem in terms of non-intersecting lattice paths, as in Fig.~\ref{fig:dt0} (b), where the indicated dictionary between bi-colored dominos and path steps has been used to bijectively re-express the tiling configuration as a family of non-intersecting lattice paths with fixed ends on the diagonal NW and S boundaries of the domain. As indicated, the paths may have horizontal, vertical and diagonal steps and are therefore non-intersecting Schr\"oder paths. \subsection{Tangent Method: combining one-point functions and paths}\label{sectan} This section details how the Tangent Method of \cite{COSPO} works and how we are going to apply it to the four models studied in this paper: the 6V-DWBC, 6V', 20V-DWBC3 and finally the Domino Tilings of the Aztec Triangle, all expressed in the (possibly osculating) path formulation. \subsubsection{The Tangent Method}\label{sectanbis} \begin{figure} \begin{center} \includegraphics[width=16cm]{Alltgt.eps} \end{center} \caption{\small Application of the Tangent Method to determine the NE branch of the arctic curve, illustrated for the four models studied in this paper: 6V-DWBC (top left), 6V' (bottom left), 20V-DWBC3 (top right) and Domino Tilings of the Aztec Triangle (bottom right). In all cases, the endpoint of the topmost path is displaced from its original position (white dot) to the right, at some distance $\ell$ (red dot). The partition function splits into the modified partition function of the model $Z_{n,k}$ with exit point at position $k$ (pink domain) and that, $Y_{k,\ell}$, of a single path from the exit point to the displaced endpoint with the same ambient weights (light blue domain). The Tangent Method uses the most likely position $k=k(\ell)$, namely the one giving the largest contribution to the total partition function. The relevant portion of the arctic curve is given by the envelope of the family of lines through $(\ell,0)$ and $(0,k(\ell))$ (the green and red points), in rescaled coordinates with the origin at the SE corner of the original domain. All dimensions are expressed in units of the underlying lattice grid.} \label{fig:alltgt} \end{figure} As explained in the Introduction, the Tangent Method consists in finding the most likely exit point from the original domain of the topmost path, given that its end has been displaced away from the domain. To determine this point, we consider the full partition function $\Sigma_{n,\ell}$ of the model, which is made of two pieces (corresponding respectively to the pink and light blue domains in Fig.~\ref{fig:alltgt}): \begin{itemize} \item{} The modified partition function $Z_{n,k}$ for the set of $n$ weighted paths in the original domain, with the (topmost) $n$-th path constrained to exit the domain along the E border at a fixed height $k$ (green dot in Fig.~\ref{fig:alltgt}), normalized into the ``refined one-point function'' $H_{n,k}=Z_{n,k}/Z_n$.
\item{} The partition function $Y_{k,\ell}$ of a single weighted path constrained to start at the previous exit point and end at a fixed endpoint (displaced at distance $\ell$ from its original position, second green dot in Fig.~\ref{fig:alltgt}). \end{itemize} The full partition function reads: $\Sigma_{n,\ell}= \sum_{k=1}^{\mu n} H_{n,k}\, Y_{k,\ell}$, where $\mu=1$ for the 6V-DWBC and the Domino Tiling models, and $\mu=2$ for the other models. Note that all weights (including those of the single path) are those of the underlying vertex model; in particular, the vertices not visited by the single path (in the light blue zones of Fig.~\ref{fig:alltgt}) receive the weight of the empty vertex configuration. Next we go to the large $n=N$ scaling limit and use large $N$ estimates of both partition functions to find the leading contribution to the sum in $\Sigma_{n,\ell}$. More precisely, setting $n=N$ and $\ell=\lambda N$, the saddle-point approximation to the sum yields a limiting solution in the form of some function $\kappa(\lambda)$, where $k(\ell)=\mu N \kappa(\lambda)$ is the most likely position of the exit point. The (rescaled) arctic curve is then obtained as the envelope of the family of lines through the most likely exit point and the fixed endpoint, both functions of the parameter $\lambda$. More precisely, we must estimate the large $N$ behavior of the total partition function $\Sigma_{n,\ell}$: $$\Sigma_{N,\lambda N} \simeq \mu N\int_{0}^{1} d\kappa\, H_{N,\mu \kappa N}\, Y_{\mu\kappa N,\lambda N}$$ where we have replaced the summation by an integral over the rescaled variable $\kappa=k/(\mu N)$. In Sections \ref{6vsec}, \ref{6vpsec}, \ref{20vsec} and \ref{DTsec} below, we work out the explicit asymptotics of both functions in the integrand, in the form $H_{N,\mu\kappa N}\simeq e^{NS_H(\kappa)}$ and $Y_{\mu\kappa N,\lambda N}\simeq e^{NS_Y(\kappa)}$. The leading contribution to the integral comes from the solution $\kappa=\kappa(\lambda)$ of the saddle-point equation $\partial_\kappa S_H+\partial_\kappa S_Y=0$. This gives the most likely exit point $\big(0,\mu\kappa(\lambda)\big)$ (in rescaled variables). The tangent line in rescaled variables is the line through $\big(0,\mu\kappa(\lambda)\big)$ and $(\lambda,0)$, with equation \begin{equation}\label{tgteq} y+A x-B=0, \qquad A=\frac{\mu\kappa(\lambda)}{\lambda}, \quad B=\mu \kappa(\lambda) \end{equation} As we shall see, the family of tangent lines is best described in terms of the parameter $\xi$ (the deviation from the uniform vertical spectral parameter in the last (E-most) column, $v_n=v+\xi$). In particular, the relationship between $\lambda$ and $\kappa(\lambda)$ takes the parametric form $(\kappa(\lambda),\lambda)=(\kappa[\xi],\lambda[\xi])$, for $\xi\in I$, where $I$ is an interval determined by the conditions that $\lambda[\xi]>0$ and $\kappa[\xi]\in [0,1]$. The envelope of the family of lines \begin{equation}\label{tgtfamily} F_\xi(x,y):=y+A[\xi]x-B[\xi]=0\end{equation} is determined as the solution of the linear system $F_\xi(x,y)=\partial_\xi F_\xi(x,y)=0$, and gives rise to the parametric equations for the arctic curve: \begin{equation} \label{acurve} x=X[\xi]:=\frac{B'[\xi]}{A'[\xi]},\qquad y=Y[\xi]:=B[\xi]-\frac{A[\xi]}{A'[\xi]}B'[\xi] , \qquad (\xi\in I) \end{equation}
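The envelope computation \eqref{acurve} is mechanical once $A[\xi]$ and $B[\xi]$ are known. As a toy illustration (with a made-up family, not that of our models), the following sympy sketch recovers the classical astroid as the envelope of the lines with intercepts $(\cos\xi,\sin\xi)$ on the axes:
\begin{verbatim}
# Toy illustration of (acurve): envelope of y + A(xi)*x - B(xi) = 0
# for the made-up choice A = tan(xi), B = sin(xi); the result is the
# astroid x = cos(xi)**3, y = sin(xi)**3.
import sympy as sp

xi = sp.symbols('xi')
A, B = sp.tan(xi), sp.sin(xi)
X = sp.simplify(sp.diff(B, xi) / sp.diff(A, xi))   # x = B'/A'
Y = sp.simplify(B - A * X)                         # y = B - A*B'/A'
print(X, Y)   # cos(xi)**3, sin(xi)**3
\end{verbatim}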
By the geometry of the problem, only a portion of the arctic curve can be obtained in this way: moving the displaced endpoint to the right along the line through the original endpoints covers a portion of the arctic curve between a point of tangency to the E vertical border of the original domain (when the endpoint tends to its original position) and a point of tangency to the horizontal N border of the domain (when the endpoint tends to infinity on the right along the line), or equivalently corresponding to the slope $A[\xi]\in [0,\infty)$. This condition was used to restrict the domain of the variable $\xi\in I$. This portion of the arctic curve is in the NE corner of the domain, and we shall refer to it as the NE branch of the arctic curve. The case of the Domino Tilings of the Aztec Triangle is simpler: as a (free fermion) dimer model, it is expected on general grounds \cite{KOS} to have an analytic arctic curve, equal to the analytic continuation of its NE branch. As we shall see, the cases of 6V-DWBC, 6V' and 20V-DWBC3 are more involved, and lead in general to non-analytic arctic curves. \subsubsection{Other branches}\label{obsec} To reach other portions of the arctic curve, we will have to resort to various tricks, all based on the same principle: we switch to a different interpretation of the configurations of the original model, expressing them in terms of different families of paths, to which the Tangent Method can be applied again. \noindent{\bf 6V-DWBC and 6V' cases.} \begin{figure} \begin{center} \includegraphics[width=14cm]{flip6vp.eps} \end{center} \caption{\small From the NE to the SE branch in the 6V' model: (a) configuration of the 6V' model (b) after application of the Vertical Flip (VF) (c) after application of the Reflection (R), leading to another 6V' configuration, with weights of $a_i$ and $b_i$ types interchanged for $i=e,o$.} \label{fig:flip6vp} \end{figure} In the case of the 6V-DWBC/6V' model arctic curves, we have access to the SE branch by reinterpreting the 6V configurations in terms of paths with the same starting points (every point/every other point along the W vertical border) but with endpoints along the N border (see Fig.~\ref{fig:flip6vp} (a-b) for an illustration in the 6V' case). This corresponds to redefining the base orientation of edges (and the direction of travel of the paths) to be to the right and up: we call this transformation on the paths the Vertical Flip ($\bf VF$). The osculation at each vertex must be {\it redefined} so that all paths now go horizontally right and vertically up. The SE branch of the arctic curve is obtained by applying the Tangent Method to this new family of paths. The easiest way to do so is to reflect the picture w.r.t. a horizontal line, so that the setting is that of the 6V' model again: we call this transformation the Reflection ($\bf R$), as shown in Fig.~\ref{fig:flip6vp} (c).
The net effect of the composition of $\bf VF$ followed by $\bf R$ on the model is simply to interchange the a- and b-type weights, a transformation also implemented by the involution $*$ acting on the spectral parameters as follows: \begin{eqnarray} &&u\to u^*=\pi-u \qquad\qquad\qquad ({\rm 6V-DWBC})\label{6vstar}\\ &&(u,v)\to (u^*=-u,v^*=-\pi-v)\qquad ({\rm 6V'})\label{6vpstar} \end{eqnarray} This gives rise\footnote{Here and in the following, the superscript $*$ indicates that the corresponding quantity is obtained by changing $u\to u^*$ (6V-DWBC) or $(u,v)\to (u^*,v^*)$ (6V').} to the new weights $a^*=b,b^*=a,c^*=c$ (6V-DWBC) or $a_i^*=b_i,b_i^*=a_i, c_i^*=c_i$, $i=o,e$ (6V'). The ``upside-down'' one-point function must also be reinterpreted as $H_{n,k}=H_{n,\mu n-k}^*$, where the latter is computed with the new transformed weights. Similarly, the path partition function $Y_{k,\ell}$ with starting point $(0,k)$ and endpoint $(\ell,0)$ is reinterpreted as the partition function $Y_{\mu n-k,\ell}^*$ with the new weights. More precisely, setting the origin of the rescaled domain at the SE corner of the domain, the vertices of the rescaled domain are: SE:(0,0), NE:(0,$\mu$), NW:(-1,$\mu$), SW:(-1,0). The large $n=N$ optimization problem, which previously led to the most likely exit point $(0,\mu\kappa(\lambda))$, now leads to the most likely exit point $(0,\mu(1-\kappa^*(\lambda)))$ and the associated family of rescaled lines. The SE branch of the arctic curve is obtained by reflecting back the envelope of this family, which effectively amounts to applying $*$ to the NE branch and reflecting it w.r.t. the line $y=\mu/2$, namely applying the transformation \begin{equation} \label{flip6v6vp} (x,y)\mapsto (x,\mu-y) \end{equation} with $\mu=1$ for the 6V-DWBC model, and $\mu=2$ for the 6V' model. \begin{figure} \begin{center} \includegraphics[width=16cm]{20Vshear.eps} \end{center} \caption{\small The shear trick: (a) configuration of the 20V-DWBC3 (b) after application of the Vertical Flip (VF) (c) after application of the Reflection (R) (d) after the Shear transformation (S). } \label{fig:shear} \end{figure} \noindent{\bf 20V-DWBC3 case.} In the case of the 20V-DWBC3 model illustrated in Fig.~\ref{fig:shear} (a), the same idea leading to the SE branch must be adapted (via the ``shear trick'' devised in Ref.~\cite{BDFG}). More precisely, as in the 6V-DWBC and 6V' cases, we first redefine the base edge orientation so that horizontal/diagonal edges point right/down, but vertical edges point up. Compared to the original base orientation, this simply interchanges vertical edges which are occupied by path steps with vertical edges which are empty and vice versa (like in the 6V, 6V' cases, we call this $\bf VF$=Vertical Flip, see Fig.~\ref{fig:shear} (b)). The osculation at each vertex must be redefined so that all paths now go horizontally right, diagonally down and vertically up: in particular, the paths now end on the $n$ vertical edges of the N boundary (see Fig.~\ref{fig:shear} (b)). To match this with a 20V configuration of $\mathcal Q_n$, we must reflect the configuration w.r.t. a horizontal line (we call this again $\bf R$=Reflection, see Fig.~\ref{fig:shear} (c)). However, the quadrangular domain $\mathcal Q_n$ is not invariant under horizontal reflection: to recover it, we apply a shear transformation as indicated in Fig.~\ref{fig:shear} (d) (we call this $\bf S$=Shear).
More precisely, setting the origin of the rescaled domain at the SE corner of the domain, the vertices of the rescaled quadrangle $\mathcal Q_N/N$ are: SE:(0,0), NE:(0,2), NW:(-1,2), SW:(-1,1). Applying successively (in rescaled variables $(x,y)$) the reflection $(x,y)\to (x,-y)$, shear $(x,y)\to (x,y-x)$, and finally translation by $(0,2)$ leaves the domain invariant, but effectively flips the orientation of the vertical edges in the 20V-DWBC3 configuration. Note that the final configuration after application of $\bf VF,\ R, \ S$ (Fig.~\ref{fig:shear} (d)) is slightly different from a 20V-DWBC3 configuration, as all the starting steps along the W boundary are diagonal, as opposed to horizontal. This clearly makes no difference in the case of uniform weights; however, for non-uniform weights it changes the weights along the W boundary. We argue nonetheless that this mild boundary effect does not affect the asymptotic behavior of bulk quantities, as all other weights are the same in both situations. In particular, we expect the arctic curve to be the same in the original 20V-DWBC3 model with horizontal starting steps along the W boundary, and in the modified one, where all starting steps are diagonal. Let us now examine the fate of the local vertex environments of Fig.~\ref{fig:weight20v} under the sequence of transformations $\bf VF,\ R, \ S$. It is easy to see that under this transformation the types of vertices are mapped as follows: $\omega_0\leftrightarrow \omega_1$, $\omega_2\leftrightarrow \omega_4$, while all other types are preserved. For illustration, the top left vertex of type $\omega_2$ of Fig.~\ref{fig:weight20v} is successively transformed into the top left vertex of type $\omega_4$ as follows: $$ \raisebox{-1.cm}{\hbox{\epsfxsize=14cm\epsfbox{vfrs.eps}}} $$ We finally note that the involution $*$ which maps the weights $(\omega_0,\omega_1,\omega_2,\omega_3,\omega_4,\omega_5,\omega_6)\mapsto (\omega_1,\omega_0,\omega_4,\omega_3,\omega_2,\omega_5,\omega_6)$ is simply given by \begin{equation}\label{20vstar}(u,v)\mapsto (u^*,v^*)=(u,-v-\pi)\qquad ({\rm 20V-DWBC3}) . \end{equation} As before, we have to reinterpret $H_{n,k}=H_{n,2n-k}^*$ and $Y_{k,\ell}=Y_{2n-k,\ell}^*$ leading to the most likely exit point $2N(1-\kappa^*(\lambda))$ in rescaled variables $\kappa=k/(2N)$ and $\lambda=\ell/N$. The SE branch is finally obtained by applying the reflection/shear/translation $(x,y)\mapsto (x,2-x-y)$ to the NE one after applying $*$, namely changing $v\to -v-\pi$. This gives access to the SE branch in all 6V-DWBC, 6V' and 20V-DWBC3 cases. Except in the free fermion cases, where arctic curves are expected to be analytic, we have no prediction for other portions of the arctic curve when they exist. \section{6V Model}\label{6vsec} \subsection{Partition function and one-point function} \subsubsection{Inhomogeneous partition function} A general result \cite{IKdet} provides a determinant formula for the partition function $Z_n^{6V}[{\mathbf u},{\mathbf v}]$ of the inhomogeneous 6V-DWBC model, with horizontal/vertical spectral parameters ${\mathbf u}=u_1,u_2,...,u_n$/${\mathbf v}=v_1,v_2,...,v_n$.
\begin{thm} Let $$m(u,v):=m(u-v):=\frac{1}{\sin(u-v+\eta)\sin(u-v-\eta)}$$ The full inhomogeneous 6V-DWBC partition function reads: \begin{equation}\label{sixinho} Z_{n}^{6V}[{\mathbf u},{\mathbf v}]=\rho^{n^2}\sin^n(2\eta)\det_{1\leq i,j \leq n}\big(m(u_i,v_j)\big) \times \frac{\prod_{i,j=1}^n \sin(u_i-v_j+\eta)\sin(u_i-v_j-\eta)}{\prod_{1\leq i<j \leq n}\sin(u_i-u_j)\, \sin(v_j-v_i)} \end{equation} \end{thm} \subsubsection{Homogeneous limit} The homogeneous limit $Z_n^{6V}[u-v]$ of the inhomogeneous partition function $Z_n^{6V}[{\mathbf u},{\mathbf v}]$ in which $u_i\to u$ and $v_i\to v$ for all $i$ involves the quantity \begin{equation}\label{deltahomo6v} \Delta_n[u-v]:=\lim_{u_1,u_2,...,u_n\to u\atop v_1,v_2,...,v_n\to v} \frac{\det_{1\leq i,j \leq n}\big(m(u_i,v_j)\big)}{\prod_{1\leq i<j \leq n}(u_i-u_j)(v_j-v_i)}=: \frac{1}{\prod_{i=1}^{n-1} (i!)^2} D_n[u-v] \end{equation} A Taylor expansion of rows and columns leads to the determinant $$D_n[u]=\det_{0\leq i,j\leq n-1}\left(\partial_u^{i+j}m(u)\right)$$ This determinant obeys a simple quadratic relation as a consequence of the Pl\"ucker/Desnanot-Jacobi relations (up to some permutation of rows and columns) relating a determinant to some of its minors of size 1 and 2 less, summarized in the following lemma. \begin{lemma}\label{desnajac} Given an $(n+1)\times (n+1)$ square matrix $M$, its determinant $|M|$ and minors $|M|_{a}^{b}$ (with row $a$ and column $b$ removed) and $|M|_{a_1,a_2}^{b_1,b_2}$ (with rows $a_1,a_2$ and columns $b_1,b_2$ removed) are related via: $$ |M| \times |M|_{n,n+1}^{n,n+1} =|M|_n^n \times |M|_{n+1}^{n+1}-|M|_{n+1}^n \times |M|_n^{n+1} $$ \end{lemma} Applying this to the matrix $M=\big(\partial_u^{i+j}m(u)\big)_{0\leq i,j\leq n}$, we easily get $$ D_{n+1}[u]\, D_{n-1}[u]= D_{n}[u] \, \partial_u^2 D_{n}[u]- (\partial_u D_{n}[u])^2 =D_n[u]^2 \partial_u^2 {\rm Log}(D_n[u]) $$ As a direct consequence, we have \begin{thm}\label{6vpartthm} The quantity $\Delta_n[u]$ obeys the following recursion relation for all $n\geq 1$: \begin{equation}\label{6vdelta} \frac{\Delta_{n+1}[u] \, \Delta_{n-1}[u] }{\Delta_n[u]^2} = \frac{1}{n^2} \partial_u^2 {\rm Log}(\Delta_n[u]) \end{equation} with the convention that $\Delta_0[u]=1$. \end{thm} Note that this relation determines $\Delta_n[u]$ recursively, from the initial data $\Delta_0[u]=1$ and $\Delta_1[u]=m(u)$. Finally the homogeneous partition function $Z_n^{6V}[u-v]$ is expressed as \begin{equation}\label{6Vpart} \frac{Z_n^{6V}[u-v]}{\sin^n(2\eta)}= \Delta_n[u-v] \, \left(\rho \sin(u-v+\eta)\sin(u-v-\eta)\right)^{n^2} \end{equation}
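As a sanity check, the recursion of Theorem \ref{6vpartthm} is easily verified symbolically for small $n$, directly from the determinant definition \eqref{deltahomo6v} (here at $v=0$). A minimal sympy sketch, where the numerical sample point is ours and chosen arbitrarily:
\begin{verbatim}
import sympy as sp

u, eta = sp.symbols('u eta')
m = 1 / (sp.sin(u + eta) * sp.sin(u - eta))   # m(u) at v = 0

def D(n):   # D_n[u] = det( d_u^{i+j} m(u) ), 0 <= i,j <= n-1
    if n == 0:
        return sp.Integer(1)
    return sp.Matrix(n, n, lambda i, j: sp.diff(m, u, i + j)).det()

def Delta(n):
    norm = sp.Integer(1)
    for i in range(1, n):
        norm *= sp.factorial(i)**2
    return D(n) / norm

pt = {u: 0.7, eta: 0.3}
for n in (1, 2):   # check the recursion (6vdelta) at the sample point
    lhs = Delta(n + 1) * Delta(n - 1) / Delta(n)**2
    rhs = sp.diff(sp.log(Delta(n)), u, 2) / n**2
    assert abs(complex((lhs - rhs).evalf(subs=pt))) < 1e-6
\end{verbatim}
We now turn to the semi-inhomogeneous limit underlying the one-point function.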
\subsubsection{One-point function} We now consider a slightly more general limit, in which we take $u_1,u_2,...,u_n\to u$ and $v_1,v_2,...,v_{n-1}\to v$ but the last vertical spectral parameter is kept arbitrary, say $v_n=w=v+\xi$. The corresponding partition function $Z_n^{6V}[u-v;\xi]$ is again obtained as a limit of the inhomogeneous formula \eqref{sixinho}. We have \begin{eqnarray} \frac{Z_n^{6V}[u-v;\xi]}{\sin^n(2\eta)}&=& \Delta_n[u-v;\xi] \ \left(\rho \sin(u-v+\eta)\sin(u-v-\eta)\right)^{n(n-1)} \nonumber \\ &&\qquad\qquad \times \,\left(\rho \sin(u-v-\xi+\eta)\sin(u-v-\xi-\eta)\right)^{n} \label{ref6vZ} \end{eqnarray} in terms of a function $$\Delta_n[u-v;\xi]:= \lim_{{u_1,u_2,...,u_n\to u\atop v_1,v_2,...,v_{n-1}\to v}\atop v_n\to v+\xi} \frac{\det_{1\leq i,j \leq n}\big(m(u_i-v_j)\big)}{\prod_{1\leq i<j \leq n}\sin(u_i-u_j)\sin(v_j-v_i)}=: \frac{(-1)^{n-1}\, (n-1)!}{\sin^{n-1}(\xi)\, \prod_{i=1}^{n-1} (i!)^2} D_n[u-v;\xi]$$ where \begin{equation}\label{6vDN} D_n[u;\xi]=\det\left( \{\partial_u^{i+j} m(u)\}_{0\leq i\leq n-1\atop 0\leq j\leq n-2} \Bigg\vert \{\partial_u^i m(u-\xi) \}_{0\leq i\leq n-1}\right) \end{equation} identical to $D_n[u]$ except for its last column. We define the ``one-point function'' as the ratio $$H_n^{6V}[u;\xi]:= \frac{Z_n^{6V}[u;\xi]}{Z_n^{6V}[u]}= \frac{\Delta_n[u;\xi]}{\Delta_n[u]} \, \left(\frac{\sin(u-\xi+\eta)\sin(u-\xi-\eta)}{\sin(u+\eta)\sin(u-\eta)}\right)^{n} $$ Applying Lemma \ref{desnajac} again, this time to the matrix $M$ in the expression \eqref{6vDN} for $D_{n+1}[u;\xi]$, we find that \begin{equation}\label{newdenjac6v} D_{n+1}[u;\xi]\, D_{n-1}[u] = D_n[u] \, \partial_u D_n[u;\xi]- D_n[u;\xi]\, \partial_u D_n[u] \end{equation} We also introduce the reduced one-point function \begin{equation}\label{red6v} H_n[u;\xi]:=(-1)^{n-1}\,(n-1)! \frac{D_n[u;\xi]}{D_n[u]}= \sin^{n-1}(\xi)\, \frac{\Delta_n[u;\xi]}{\Delta_n[u]} \end{equation} in terms of which \begin{equation} \label{oneptfromred6v} H_n^{6V}[u;\xi]=\sin(\xi) \,H_n[u;\xi]\, \left(\frac{\sin(u-\xi+\eta)\sin(u-\xi-\eta)}{\sin(u+\eta)\sin(u-\eta)\sin(\xi)}\right)^{n} \end{equation} The reduced one-point function is determined by the following relation, as a direct consequence of \eqref{newdenjac6v}. \begin{thm}\label{6v1ptthm} The reduced one-point function $H_n[u;\xi]$ of the 6V-DWBC model satisfies the following relation: \begin{equation}\label{6vonept} \frac{H_{n+1}[u;\xi]}{H_n[u;\xi]}\, \frac{\Delta_{n+1}[u]\,\Delta_{n-1}[u]}{\Delta_n[u]^2} +\frac{1}{n} \partial_u {\rm Log}(H_n[u;\xi])=0 \end{equation} \end{thm} \subsection{Large $n$ limit: free energy and one-point function asymptotics} In the following sections, we use the fact that the 6V weights depend on the quantity $u-v$ only. Without loss of generality we shall set $v=0$ from now on. \subsubsection{Free energy} In this section, we reproduce an argument of \cite{KORZIN,CP2009} leading to the large $n$ asymptotics of the partition function of the 6V-DWBC model. The free energy per site $f^{6V}[u]$ of the 6V-DWBC model is defined via the large $n=N$ limit $$ f^{6V}[u] =-\lim_{N\to \infty} \frac{1}{N^2} {\rm Log}(Z_N^{6V}[u]) $$ or equivalently as the leading asymptotics $Z_N^{6V}[u]\simeq_{N\to\infty} e^{-N^2f^{6V}[u]}$. Substituting this into \eqref{6Vpart}, we get: \begin{equation}\label{sixvf} f^{6V}[u] =f[u] -{\rm Log}\left(\rho \sin(u+\eta)\sin(u-\eta)\right) \end{equation} in terms of the limit $$f[u]:=-\lim_{N\to \infty} \frac{1}{N^2} {\rm Log}(\Delta_N[u]) $$ Finally, substituting the large $N$ asymptotics $\Delta_N[u]\simeq_{N\to\infty} e^{-N^2 f[u]}$ into \eqref{6vdelta} yields the following differential equation (the 1D Liouville equation) $$e^{-2 f[u]} +\partial_u^2 f[u]=0 $$ To fix the solution, let us derive a symmetry and a limit of $\Delta_N[u]$.
First note that the reflection symmetry $u\to \pi-u$ from \eqref{sym6v}, together with the relation \eqref{sixvf} imply that $$f[\pi-u]=f[u] .$$ Next let us consider the limit $u \to \eta$ of $\Delta_n[u]$. Setting $u=\eta+\epsilon$, we may perform the homogeneous limit \eqref{deltahomo6v} by setting $u_i=u+\epsilon x_i$, $v_i=\epsilon y_i$ (recall we have set $v=0$) and taking all $x_i,y_i\to 0$. We have for small $\epsilon$ $$ m(u_i-v_j)= \frac{1}{\epsilon \sin(2\eta)\, (1+x_i-y_j)} +O(1) $$ Using the Cauchy determinant formula, we find that $$\frac{\det_{1\leq i,j \leq n}(m(u_i-v_j))}{\prod_{1\leq i<j\leq n} (u_i-u_j)(v_j-v_i)}\simeq_{\epsilon\to 0} \frac{1}{\epsilon^{n^2}} \frac{\det\left( \frac{1}{1+x_i-y_j} \right) }{\prod_{i<j} (x_i-x_j)(y_j-y_i)} =\frac{1}{\epsilon^{n^2}}\prod_{i,j=1}^n \frac{1}{1+x_i-y_j}\to_{x_i\to 0\atop y_j\to 0} \frac{1}{\epsilon^{n^2}}$$ We deduce that $$\left\{\lim_{N\to \infty} -\frac{1}{N^2} {\rm Log}(\Delta_N[\eta+\epsilon])\right\}\Bigg\vert_{\epsilon\to 0} \simeq {\rm Log}(\epsilon) \ \Rightarrow \ f[\eta+\epsilon]\vert_{\epsilon\to 0}\simeq {\rm Log}(\epsilon)$$ Defining $W[u]:=e^{f[u]}$, we find that $W$ satisfies the following conditions: \begin{eqnarray*} W[u]\, \partial_u^2W[u]-(\partial_u W[u])^2&=&\left\vert \begin{matrix} W[u] & \partial_u W[u]\\ \partial_u W[u] & \partial_u^2 W[u] \end{matrix}\right\vert=-1\\ W[\pi-u]=W[u], \quad W[\eta]&=&0 . \end{eqnarray*} The constant Wronskian condition implies that $W[u]$ obeys a second order linear differential equation, with general solution of the form $W[u]=\frac{\sin({\alpha} u +\beta)}{{\alpha}}$. The parameter $\beta$ is fixed by the vanishing condition $W[\eta]=0$, and ${\alpha}$ by the symmetry condition $W[\pi-u]=W[u]$. We find the solution \begin{equation}\label{W6v}W[u]=\frac{\sin({\alpha}(u-\eta))}{{\alpha}} \end{equation} where ${\alpha}(\pi-u-\eta)=\pi-{\alpha}(u-\eta)$, and finally \begin{equation}\label{fsol}f[u]={\rm Log}\left(\frac{\sin({\alpha}(u-\eta))}{{\alpha}} \right), \qquad {\alpha}=\frac{\pi}{\pi-2\eta} \end{equation} Substituting this into \eqref{sixvf}, we finally get the thermodynamic free energy of the 6V-DWBC model in the Disordered regime: \begin{equation}\label{6vfreeenergy} f^{6V}[u]= -{\rm Log}\left(\frac{{\alpha} \rho \sin(u+\eta)\sin(u-\eta)}{\sin({\alpha}(u-\eta))} \right) \end{equation} with ${\alpha}$ as in \eqref{fsol}. \subsubsection{One-point function} We now present a simplified version of the argument given in \cite{CP2009} to derive the asymptotics of the one-point function $H_n[u;\xi]$. From eq.~\eqref{6vonept} we may infer the large $n=N$ leading behavior of $H_n[u;\xi]$ to be: \begin{equation}\label{asymtoH6v} H_N[u;\xi]\simeq_{N\to \infty} e^{-N \psi[u;\xi]} \end{equation} Substituting this into \eqref{6vonept} yields the differential equation: $$ e^{-\psi[u;\xi]-2 f[u]} -\partial_u \psi[u;\xi]=0 \ \Rightarrow \ \partial_u e^{\psi[u;\xi]} =e^{-2 f[u]} $$ Using the result \eqref{fsol} for $f[u]$, this is easily integrated into $$e^{\psi[u;\xi]}= c[\xi] -{\alpha}\cot({\alpha}(u-\eta)) $$ for some integration constant $c[\xi]$ independent of $u$, and ${\alpha}$ as in \eqref{fsol}.
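Both the solution \eqref{fsol} (including the sign of the Wronskian condition above) and the integration step just performed are easily verified symbolically; a minimal sympy sketch, where the sample point is ours:
\begin{verbatim}
import sympy as sp

u, eta = sp.symbols('u eta')
alpha = sp.pi / (sp.pi - 2 * eta)
W = sp.sin(alpha * (u - eta)) / alpha        # eq. (W6v): W = e^{f}
f = sp.log(W)
pt = {u: 0.8, eta: 0.3}

# 1D Liouville equation e^{-2f} + f'' = 0, Wronskian W W'' - (W')^2 = -1
assert abs(complex((sp.exp(-2*f) + sp.diff(f, u, 2)).evalf(subs=pt))) < 1e-10
assert abs(complex((W*sp.diff(W, u, 2) - sp.diff(W, u)**2
                    + 1).evalf(subs=pt))) < 1e-10

# integration step: d/du [ -alpha*cot(alpha*(u-eta)) ] = e^{-2f} = W^{-2}
upart = -alpha * sp.cos(alpha*(u - eta)) / sp.sin(alpha*(u - eta))
assert abs(complex((sp.diff(upart, u) - 1/W**2).evalf(subs=pt))) < 1e-10
\end{verbatim}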
To fix the integration constant, let us consider the limit when $u-\xi-\eta\to 0$, by setting $\xi=u-\eta+\epsilon$ for a small $\epsilon\to 0$. Noting that $$\left\{\partial_u^i m(u-\xi)\right\}\Big\vert_{\xi=u-\eta+\epsilon}=-\frac{i!}{\sin(2\eta)\, \epsilon^{i+1}}+O(\epsilon^{-i}) $$ we see that the determinant $D_N[u;\xi]$ \eqref{6vDN} is dominated by the term $i=N-1$ of the expansion along its last column, with cofactor $D_{N-1}[u]$, resulting in the leading behavior \begin{eqnarray*}\frac{D_N[u;\xi]}{D_N[u]} &\simeq& -(N-1)!\,\frac{D_{N-1}[u]}{\sin(2\eta) \, \epsilon^{N}\, D_N[u]}\simeq (N-1)!\,\frac{W^{2N}}{\epsilon^N}\\ &\Rightarrow& H_N[u;u-\eta+\epsilon]\simeq_{\epsilon\to 0} \left(\frac{W^2}{\epsilon\, \sin(u-\eta)} \right)^N \end{eqnarray*} where we have used the defining relation \eqref{red6v} for $H_N[u;\xi]$. Sending $\epsilon\to 0$, we conclude that $$\lim_{\xi \to u-\eta} e^{\psi[u;\xi]}=0$$ This immediately gives $c[\xi]={\alpha} \cot({\alpha}\,\xi)$, and finally \begin{equation}\label{psi6v} e^{\psi[u;\xi]}=\frac{{\alpha} \sin({\alpha}(u-\xi-\eta))}{\sin ({\alpha}\,\xi)\sin({\alpha}(u-\eta))} \end{equation} Collecting all the above results, we finally get the asymptotics of the 6V one-point function. \begin{thm}\label{oneptasy6v} The 6V-DWBC one-point function $H_n^{6V}[u;\xi]$ has the following large $n=N$ behavior: \begin{eqnarray*}H_N^{6V}[u;\xi]&\simeq& e^{-N \psi^{6V}[u;\xi]} \\ \psi^{6V}[u;\xi]&=& -{\rm Log}\left( \frac{\sin ({\alpha}\,\xi)\sin({\alpha}(u-\eta))\sin(u-\xi+\eta)\sin(u-\xi-\eta)}{{\alpha} \sin({\alpha}(u-\xi-\eta))\sin(\xi)\sin(u+\eta)\sin(u-\eta)} \right) \end{eqnarray*} with ${\alpha}$ as in \eqref{fsol}. \end{thm} \begin{proof} By the relation \eqref{oneptfromred6v}, we immediately get $$\psi^{6V}[u;\xi]=\psi[u;\xi]+{\rm Log}\left(\frac{\sin(\xi)\sin(u+\eta)\sin(u-\eta)}{\sin(u-\xi+\eta)\sin(u-\xi-\eta)} \right) $$ and the Theorem follows. \end{proof} \subsubsection{Refined partition functions and one-point functions} To apply the Tangent Method, we need the large $n,k$ asymptotics of the refined partition functions $Z_{n,k}^{6V}[u]$, $k=1,2,...,n$, defined as follows. Given a configuration of $n$ osculating paths contributing to $Z_{n}^{6V}[u]$ (with all horizontal spectral parameters equal to $u$ and all vertical ones to $0$), let us focus on the topmost path and record the first visit of this path to the east-most vertical line, say at the intersection with the $k$-th horizontal line from the bottom. Note that the path accesses the last vertical line via a horizontal step, and ends with $k$ vertical steps leading to the east-most endpoint. We define the refined partition functions $Z_{n,k}^{6V}[u]$ to be the sum of all contributions in which the topmost path has these $k+1$ last steps. The quantities $Z_{n,k}^{6V}[u]$ turn out to be generated by the semi-inhomogeneous partition function $Z_n^{6V}[u;\xi]$ \eqref{ref6vZ}, for which the last vertical spectral parameter is replaced by $\xi$.
Introducing relative weights $(\bar a,\bar b,\bar c)$ for the last column, as the following ratios of weights at $v=\xi$ by those at $v=0$: $${\bar a} =\frac{\sin(u-\xi+\eta)}{\sin(u+\eta)},\quad {\bar b}= \frac{\sin(u-\xi-\eta)}{\sin(u-\eta)},\quad {\bar c}=1\ , $$ we have the following decomposition: $$Z_n^{6V}[u;\xi]=\sum_{k=1}^n Z_{n,k}^{6V}[u]\, {\bar b}^{k-1} \,{\bar c}\, {\bar a}^{n-k}={\bar a}^{n-1} \sum_{k=1}^n \tau^{k-1} Z_{n,k}^{6V}[u]$$ in terms of a parameter \begin{equation}\label{tauparam} \tau:= \frac{\bar b}{\bar a} =\frac{\sin(u-\xi-\eta)\,\sin(u+\eta)}{\sin(u-\xi+\eta)\,\sin(u-\eta)}\end{equation} In applying the Tangent Method, we truncate the topmost path after its last horizontal step (see an illustration in the top left of Fig.~\ref{fig:alltgt} in the pink domain). The effect of removing the last $k$ vertical steps and replacing them by empty edges is an overall multiplication by a factor $(1/c)(a/b)^{k-1}$ (as we have cut\footnote{Here we choose not to attach any weight to the end vertex at height $k$, as it will be part of the partition function of the single path treated in the next section.} the turning c-type vertex and replaced the $k-1$ b-type vertices by a-type ones). This suggests defining {\it refined one-point functions} as the ratios $$ H_{n,k}[u]:=\frac{1}{c}\left(\frac{a}{b}\right)^{k-1}\frac{Z_{n,k}^{6V}[u]}{Z_{n}^{6V}[u]} $$ The above relation between refined partition functions turns into the following relation between the one-point function and the refined one-point functions: \begin{equation}\label{relaoneptonept} \frac{1}{c}\,\frac{H_n^{6V}[u;\xi]}{{\bar a}^{n-1}}=\sum_{k=1}^n t^{k-1} H_{n,k}[u] \end{equation} where we have used a new parameter \begin{equation}\label{tparam} t=\frac{b}{a} \tau=\frac{\sin(u-\xi-\eta)}{\sin(u-\xi+\eta)}=:t_{6V}[\xi] \ .\end{equation} Let us now consider the large $n=N$ scaling limit of $H_{n,k}[u]$ in which the ratio $\kappa=k/N$ is kept finite. Using the relation \eqref{relaoneptonept} and the asymptotics of the one-point function $H_N^{6V}[u;\xi]$ of Theorem \ref{oneptasy6v}, we have at leading order as $N\to \infty$: \begin{equation} H_{N,\kappa N}[u]\simeq \oint \frac{dt}{2i\pi t} e^{-N S_0(\kappa,t)}, \quad S_0(\kappa,t)= \kappa\, {\rm Log}(t) +\varphi^{6V}[u;\xi] \end{equation} where we have defined $$\varphi^{6V}[u;\xi] := \psi^{6V}[u;\xi]+{\rm Log}(\bar a) =-{\rm Log}\left( \frac{\sin ({\alpha}\,\xi)\sin({\alpha}(u-\eta))\sin(u-\xi-\eta)}{{\alpha} \sin(\xi)\sin(u-\eta)\sin({\alpha}(u-\xi-\eta))} \right) .$$ Note that we are using the variable $t$ as the integration variable (to extract the coefficient $H_{n,k}[u]$ of $t^{k-1}$), and that $\xi$ is an implicit function of $t$ upon inverting the equation $t_{6V}[\xi]=t$. The integral is dominated by the solution of the saddle-point equation $\partial_t S_0(\kappa,t)=0$ or equivalently $\partial_\xi S_0(\kappa,t_{6V}[\xi])=0$, resulting in \begin{eqnarray}\label{kappa6v} \kappa&=&\kappa_{6V}[\xi]:= -\frac{t_{6V}[\xi]}{\partial_\xi t_{6V}[\xi]} \partial_\xi \varphi^{6V}[u;\xi] \nonumber \\ &&\!\!\!\!\!\!\!\!\!\!\!\!= \left\{ \cot(u-\xi-\eta)+\cot(\xi)-{\alpha}\cot({\alpha} \xi)-{\alpha}\cot({\alpha}(u-\xi-\eta)) \right\}\frac{\sin(u-\xi+\eta)\sin(u-\xi-\eta)}{\sin(2\eta)} \nonumber \\ \end{eqnarray} with ${\alpha}$ as in \eqref{fsol}.
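The closed form \eqref{kappa6v} is readily double-checked by performing the $\xi$-derivatives symbolically; a minimal sympy sketch, where the sample point is ours and chosen arbitrarily away from singularities:
\begin{verbatim}
import sympy as sp

u, eta, xi = sp.symbols('u eta xi')
alpha = sp.pi / (sp.pi - 2 * eta)
t = sp.sin(u - xi - eta) / sp.sin(u - xi + eta)      # t_{6V}[xi], eq. (tparam)
phi = -sp.log(sp.sin(alpha*xi) * sp.sin(alpha*(u - eta)) * sp.sin(u - xi - eta)
              / (alpha * sp.sin(xi) * sp.sin(u - eta)
                 * sp.sin(alpha*(u - xi - eta))))
kappa_auto = -t / sp.diff(t, xi) * sp.diff(phi, xi)  # first line of (kappa6v)
kappa_closed = ((sp.cot(u - xi - eta) + sp.cot(xi) - alpha*sp.cot(alpha*xi)
                 - alpha*sp.cot(alpha*(u - xi - eta)))
                * sp.sin(u - xi + eta) * sp.sin(u - xi - eta) / sp.sin(2*eta))
pt = {u: 1.1, eta: 0.35, xi: -0.4}
assert abs(complex((kappa_auto - kappa_closed).evalf(subs=pt))) < 1e-10
\end{verbatim}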
\subsection{Paths} \subsubsection{Partition function} The second ingredient of the Tangent Method is the partition function $Y_{k,\ell}$ for a single weighted path in empty space with the same weights as the 6V osculating paths (see an example of such a path in the light blue domain of Fig.~\ref{fig:alltgt} top left). Note that the path starts where the topmost one in $H_{n,k}[u]$ stopped, namely with a preliminary horizontal step, and ends with a vertical step at position $\ell$ (measured from the original position in $Z_n$) on the S boundary. Note also that all empty vertices receive the weight $a$ of \eqref{6vweights}. We may therefore factor out an unimportant overall weight $a^{n\ell}$, and weight the path by the product of its relative vertex weights: $$a_0=1,\qquad b_0=\frac{b}{a}=\frac{\sin(u-\eta)}{\sin(u+\eta)},\qquad c_0=\frac{c}{a}=\frac{\sin(2\eta)}{\sin(u+\eta)} \ ,$$ for respectively a, b and c type vertices. Let us use a step-to-step transfer matrix formulation of the path, namely a matrix $T$ describing the transfer from one step to the next. Each step may be in either of two states: horizontal or vertical, and the matrix entry is the corresponding 6V weight at the vertex shared by the step and its successor, which we multiply by an extra weight $z$ or $w$ if the next step is horizontal or vertical, respectively. This gives the matrix $$T_{6V}=\begin{pmatrix} b_0\, z & c_0\, z\\ c_0\, w & b_0\, w \end{pmatrix}$$ allowing us to express the generating function $P(z,w):=\sum_{k,\ell\geq 0} Y_{k,\ell}\, z^\ell w^{k+1}$ as $$P(z,w)=\begin{pmatrix} 0 & 1\end{pmatrix} \cdot ({\mathbb I}- T_{6V})^{-1} \begin{pmatrix} 1\\ 0\end{pmatrix}= \frac{c_0 w}{1-b_0 z- b_0 w(1+\frac{c_0^2-b_0^2}{b_0} z )}$$ Introducing the new weights $$ \gamma_1:= b_0= \frac{\sin(u-\eta)}{\sin(u+\eta)}, \quad \gamma_2:=\frac{c_0^2-b_0^2}{b_0}=\frac{\sin(3\eta-u)}{\sin(u-\eta)} $$ we deduce that \begin{equation}\label{path6v} Y_{k,\ell} = c_0 \gamma_1^k \frac{(1+\gamma_2\, z)^k}{(1-\gamma_1\, z)^{k+1}} \Bigg\vert_{z^\ell}=c_0 \sum_{{P_1\geq 0\atop 0\leq P_2\leq k}\atop P_1+P_2= \ell} {P_1+k\choose k} {k\choose P_2} \, \gamma_1^{k+P_1}\, \gamma_2^{P_2} \end{equation} \subsubsection{Asymptotics} We now consider the scaling limit of large $n=N$ and $\kappa=k/N, \lambda=\ell/N$ fixed. Replacing the summation in \eqref{path6v} with an integral over $p_2=P_2/N$ and using the Stirling formula, we find the leading large $N$ behavior of $Y_{k,\ell}$: \begin{eqnarray*} Y_{\kappa N,\lambda N} &\simeq& \int_0^\kappa dp_2 e^{-N S_1(\kappa,p_2)}\\ S_1(\kappa,p_2)&=& p_2 {\rm Log}(\frac{p_2}{\gamma_2})+(\kappa-p_2){\rm Log}(\kappa-p_2)+(\lambda-p_2){\rm Log}(\lambda-p_2)\\ &&\qquad -(\kappa+\lambda-p_2){\rm Log}(\gamma_1(\kappa+\lambda-p_2)) \end{eqnarray*}
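As a cross-check of the explicit formula \eqref{path6v}, the binomial double sum may be compared with the series expansion of the transfer-matrix generating function $P(z,w)$; a minimal sympy sketch, where the rational sample weights are ours (the identity holds for generic weights):
\begin{verbatim}
import sympy as sp

z, w = sp.symbols('z w')
b0, c0 = sp.Rational(1, 3), sp.Rational(1, 2)      # sample weights (ours)
g1, g2 = b0, (c0**2 - b0**2) / b0                  # gamma_1, gamma_2
P = c0 * w / (1 - b0*z - b0*w*(1 + g2*z))          # (0 1).(I-T_6V)^{-1}.(1 0)^T

def Y_closed(k, l):   # binomial double sum of eq. (path6v)
    return c0 * sum(sp.binomial(P1 + k, k) * sp.binomial(k, l - P1)
                    * g1**(k + P1) * g2**(l - P1) for P1 in range(l + 1))

kmax, lmax = 3, 4
ser = sp.expand(sp.series(sp.series(P, w, 0, kmax + 2).removeO(),
                          z, 0, lmax + 1).removeO())
for k in range(kmax + 1):
    for l in range(lmax + 1):
        assert ser.coeff(w, k + 1).coeff(z, l) == Y_closed(k, l)
\end{verbatim}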
\subsection{Arctic curves} We now apply the Tangent Method. We must solve the saddle-point equations for the total action $S(\kappa,\xi,p_2)=S_0(\kappa,t[\xi])+S_1(\kappa,p_2)$, namely $\partial_\kappa S=0$ and $\partial_{p_2} S=0$, while the last equation $\partial_\xi S=0$ eventually allows us to solve for $\kappa=\kappa[\xi]$ by using the result \eqref{kappa6v}. We get: $$\frac{ t (\kappa-p_2)}{\gamma_1(\kappa+\lambda-p_2)} =1 ,\qquad \frac{\gamma_1}{\gamma_2} \frac{p_2(\kappa+\lambda-p_2)}{(\kappa-p_2)(\lambda-p_2)} =1 $$ with the following unique solution: $$ \frac{p_2}{\kappa}= \frac{\sin(u-3\eta)\sin(\xi)}{\sin(u-\xi-\eta)\sin(2\eta)} , \qquad \frac{\kappa}{\lambda}= \frac{\sin(u-\xi+\eta)\sin(u-\xi-\eta)}{\sin(\xi)\sin(\xi-2\eta)} $$ parameterized by $\xi$ via $t=t_{6V}[\xi]$ \eqref{tparam} and $\kappa=\kappa_{6V}[\xi]$ \eqref{kappa6v}. In particular this determines $\kappa$ as a function of $\lambda$ in the parametric form $(\kappa,\lambda)=(\kappa_{6V}[\xi],\lambda_{6V}[\xi])$, where $$\lambda_{6V}[\xi]:= \kappa_{6V}[\xi] \ \frac{\sin(\xi)\sin(\xi-2\eta)}{\sin(u-\xi+\eta)\sin(u-\xi-\eta)} $$ This allows us to identify the slope $A[\xi]=\frac{\kappa_{6V}[\xi]}{\lambda_{6V}[\xi]}$ and the intercept $B[\xi]=\kappa_{6V}[\xi]$ for the family of tangents: $F_\xi[x,y]= y+ A[\xi]\, x -B[\xi]=0$. The parameter $\xi$ is constrained by the condition that $A[\xi]>0$, which implies that $\xi\in [u+\eta-\pi,0]$. Using the expression for the envelope \eqref{acurve}, we arrive at the final result. \begin{thm}\label{6VNEthm} The NE portion of the arctic curve for the 6V-DWBC model in the Disordered regime is predicted by the Tangent Method to be given parametrically by: $$ x=X_{NE}^{6V}[\xi]:=\frac{B'[\xi]}{A'[\xi]},\qquad y=Y_{NE}^{6V}[\xi]:=B[\xi]-\frac{A[\xi]}{A'[\xi]}B'[\xi] ,\qquad (\xi\in [u+\eta-\pi,0])$$ where \begin{eqnarray*} A[\xi]&=&\frac{\sin(u-\xi+\eta)\sin(u-\xi-\eta)}{\sin(\xi)\sin(\xi-2\eta)}\\ B[\xi]&=&\left\{ \cot(u-\xi-\eta)+\cot(\xi)-{\alpha}\cot({\alpha} \xi)-{\alpha}\cot({\alpha}(u-\xi-\eta)) \right\}\\ &&\qquad \qquad \times \, \frac{\sin(u-\xi+\eta)\sin(u-\xi-\eta)}{\sin(2\eta)} \end{eqnarray*} with ${\alpha}$ as in \eqref{fsol}. \end{thm} As explained in Sect.~\ref{obsec}, we easily get the SE portion of the arctic curve by applying the transformation $*$: $u\mapsto u^*=\pi- u$, and the change of coordinates \eqref{flip6v6vp} for $\mu=1$. As a result we have the following. \begin{thm}\label{6VSEthm} The SE portion of the arctic curve for the 6V-DWBC model in the Disordered regime is predicted by the Tangent Method to be given parametrically by: $$ x=X_{SE}^{6V}[\xi]:=X_{NE}^{6V}[\xi]^*,\qquad y=Y_{SE}^{6V}[\xi]:=1-Y_{NE}^{6V}[\xi]^* ,\qquad (\xi\in [\eta-u,0])$$ with $X_{NE}^{6V},Y_{NE}^{6V}$ as in Theorem \ref{6VNEthm}. \end{thm} Finally, we note that the weights \eqref{6vweights} and the DWBC are invariant under central symmetry, which reverses all arrow orientations. As a consequence, the arctic curve of the 6V-DWBC model is centro-symmetric as well, and it can be easily completed by applying the central symmetry $(x,y)\mapsto (-1-x,1-y)$ to the NE and SE branches to respectively produce the SW and NW ones. \begin{remark}\label{rem6v} At the self-dual point $u=u^*=\frac{\pi}{2}$, the arctic curve is symmetric w.r.t. the horizontal line $y=1/2$, as well as the vertical line $x=-1/2$ by the central symmetry. The full curve is then obtained by successive reflections of the NE branch; as an example the limit shape of ASMs \cite{CP2010} is made of four reflected portions of an ellipse. This is no longer true if $u\neq \frac{\pi}{2}$. \end{remark} \section{6V' model}\label{6vpsec} \subsection{Partition function and one-point function} \subsubsection{Inhomogeneous partition function} The partition function of the inhomogeneous U-turn boundary 6V model was derived by Kuperberg and independently by Tsuchiya \cite{kuperberg2002symmetry,tsu}.
Let $$m_U(u,v):=\frac{1}{\sin(u-v+\eta)\sin(u-v-\eta)}-\frac{1}{\sin(u+v+\eta)\sin(u+v-\eta)}$$ Note that as opposed to the 6V case, this is no longer a function of $u-v$ only, but includes a reflected term which is a function of $u+v$. \begin{thm}\label{kupthm} The U-turn boundary 6V partition function reads: \begin{eqnarray}\label{Udelta} &&Z_{n}^{6V-U}[{\mathbf u},{\mathbf v};\theta]=(\rho_e\rho_o)^{n^2}\, \det_{1\leq i,j \leq n}\big(m_U(u_i,v_j)\big) \nonumber \\ &&\qquad \times{ \scriptstyle \frac{\left\{\prod_{i=1}^n \sin(\theta-v_i)\sin(2u_i+2\eta)\sin(2\eta) \right\}\left\{\prod_{i,j=1}^n \sin(u_i-v_j+\eta)\sin(u_i-v_j-\eta)\sin(u_i+v_j+\eta)\sin(u_i+v_j-\eta)\right\}}{\left\{\prod_{1\leq i<j \leq n}\sin(u_i-u_j)\, \sin(v_j-v_i)\right\}\left\{\prod_{1\leq i\leq j\leq n}\sin(u_i+u_j)\,\sin(v_i+v_j)\right\}}} \end{eqnarray} \end{thm} As mentioned above and illustrated in Fig.~\ref{fig:6vUtop}, the 6V' model corresponds to the choice of parameter $\theta=-u-\eta$, which ensures that $y_u(u)=0$ for all U-turns. The partition function corresponding to this choice, where we cut out the U-turns of Fig.~\ref{fig:6vUtop} (a) and remove their weights, as well as the weights of the trivially fixed b-type vertices of the bottom row in Fig.~\ref{fig:6vUtop} (b), reads: \begin{equation}\label{UUp} Z_n^{6V'}[u,{\mathbf v}]=\lim_{u_i\to u\atop \theta\to -u-\eta} \frac{(-1)^n\, Z_n^{6V-U}[{\mathbf u},{\mathbf v};\theta]}{\sin^n(2u+2\eta)\,\rho_e^n\, \prod_{i=1}^n \sin(-u-v_i-\eta) } \end{equation} where we have identified the limit of the U-turn weights to be $y_d(u_i)\to -\sin(2u+2\eta)$ and that of the $b$ weights of the bottom (even) row to be $b_e\to \rho_e\sin(-u-v_i-\eta)$. \begin{remark}\label{thetarem} Note that in \eqref{Udelta} the dependence on the parameter $\theta$ is only through the prefactor $\prod_i \sin(\theta-v_i)$. The ``worst case scenario'' is the homogeneous limit where all $v_i\to v$, and where this gives a factor $\sin^n(\theta-v)$. In any case, this does not affect the value of the thermodynamic free energy $f=\lim_{n\to\infty} -\frac{1}{n^2}{\rm Log}(Z_n^{6V'})$, which is independent of $\theta$. We may therefore safely fix the value of $\theta$ to suit our needs.
\end{remark} \subsubsection{Homogeneous limit} Like in the 6V case, the homogeneous limit where we take all $u_i\to u$ and all $v_i\to v$ involves the quantity: $$\Delta_n[u,v]:= \lim_{u_i\to u\atop v_i\to v} \frac{\det_{1\leq i,j \leq n}\left( m_U(u_i,v_j)\right)}{\prod_{1\leq i<j\leq n} (u_i-u_j)(v_j-v_i)} $$ Upon Taylor-expanding rows and columns, we may rewrite: $$\Delta_n[u,v]=(-1)^{n(n-1)/2}\, \det_{0\leq i,j\leq n-1}\left( \frac{\partial_u^i\partial_v^j m_U(u,v)}{i!j!} \right)=:\frac{1}{\prod_{i=0}^{n-1}(i!)^2} D_n[u,v]$$ where the determinant $D_n[u,v]$ reads \begin{equation}\label{dofuv} D_n[u,v]=\det_{0\leq i,j\leq n-1}\left((-1)^j\partial_u^i\partial_v^j m_U(u,v) \right)\end{equation} Using the relation \eqref{UUp} and the result of Theorem \ref{kupthm}, we obtain the homogeneous partition function of the 6V' model: \begin{equation}\label{sixvpf} \frac{Z_n^{6V'}[u,v]}{\rho_e^{n^2-n}\rho_o^{n^2}\sin^n(2\eta)}=\Delta_n[u,v]\, \frac{\big( \sin(u-v+\eta)\,\sin(u-v-\eta)\,\sin(u+v+\eta)\,\sin(u+v-\eta)\big)^{n^2}}{\big(\sin(2u)\,\sin(2v)\big)^{n(n+1)/2}} \end{equation} To determine $\Delta_n[u,v]$, one uses, as in the 6V case, the Pl\"ucker/Desnanot-Jacobi relation of Lemma \ref{desnajac}, applied to the $(n+1)\times (n+1)$ matrix $M$ in the definition of $D_{n+1}[u,v]$ \eqref{dofuv}: $$D_{n+1}[u,v]\,D_{n-1}[u,v]=\partial_u D_n[u,v]\, \partial_vD_n[u,v]-D_n[u,v]\, \partial_u\partial_v D_n[u,v]$$ which implies \begin{equation}\label{pluone} \frac{D_{n+1}[u,v]\,D_{n-1}[u,v]}{(D_n[u,v])^2}+ \partial_u\partial_v {\rm Log}\big(D_n[u,v]\big) =0 \end{equation} As a direct consequence, we have: \begin{thm}\label{6vpdeltathm} The quantity $\Delta_n[u,v]$ obeys the following recursion relation: \begin{equation}\label{deltarecur} \frac{\Delta_{n+1}[u,v]\,\Delta_{n-1}[u,v]}{\Delta_{n}[u,v]^2}+\frac{1}{n^2} \partial_u\partial_v {\rm Log}\big(\Delta_{n}[u,v]\big) =0 \end{equation} \end{thm} Note that the latter can be used to determine $\Delta_n[u,v]$ recursively, starting with $\Delta_0[u,v]=1$ and $\Delta_1[u,v]=m_U(u,v)$, as we now illustrate with a few simple examples. \begin{example}\label{classicex} Let us consider the ``classical limit'' $\eta\to 0$, where: $$ m_U(u,v)=\frac{1}{\sin^2(u-v)}-\frac{1}{\sin^2(u+v)} =\frac{\sin(2u)\sin(2v)}{\sin^2(u-v)\,\sin^2(u+v)}$$ We have: \begin{thm}\label{classithm} In the classical case $\eta=0$, we have for all $n\geq 1$: $$ \Delta_{n}[u,v]=n! \left(\frac{\sin(2u)\sin(2v)}{\sin^2(u-v)\,\sin^2(u+v)} \right)^{n(n+1)/2} $$ \end{thm} \begin{proof} The proof is by induction on $n$, using \eqref{deltarecur}, and follows from the relation $$\frac{\sin(2u)\sin(2v)}{\sin^2(u-v)\,\sin^2(u+v)}-\partial_u\partial_v {\rm Log}\left(\sin(u-v)\,\sin(u+v)\right) =0 \ .$$ \end{proof} \noindent Note that the corresponding 6V' partition function vanishes; however, we get a finite limit for the quantity $$\lim_{\eta\to 0} \frac{Z_n^{6V'}[u,v]}{\rho_e^{n^2-n}\rho_o^{n^2}\sin^n(2\eta)}=n!\left( \sin(u-v)\sin(u+v)\right)^{n(n-1)} $$ This result has a simple interpretation: sending $\eta\to 0$ gives vanishing weights to both $c_e$ and $c_o$ type vertices, while without the ability to turn right none of the osculating paths can satisfy the boundary conditions; each path must therefore perform at least one right turn. In this formulation, we must no longer see the paths as osculating, but rather as {\it crossing} at fully occupied $a$-type vertices.
The minimal case is when each path has exactly one turn (the vanishing factor $\sin^n(2\eta)$ being divided out before the $\eta\to 0$ limit is taken). For each $i=1,2,...,n$, the $i$-th path from the bottom starts with, say, $j=\sigma(i)$ horizontal steps, then turns right and ends with $2i-1$ vertical steps at the $j$-th endpoint. Clearly there are as many such configurations as permutations $\sigma$ of the $n$ path ends, which accounts for an overall factor of $n!$. Collecting all the Boltzmann weights gives the remaining factor. \end{example} \begin{example}\label{picex} We now consider the ``free fermion'' case $\eta=\frac{\pi}{4}$, where \begin{eqnarray*} m_U(u,v)&=&\frac{1}{\sin(u-v+\frac{\pi}{4})\sin(u-v-\frac{\pi}{4})} -\frac{1}{\sin(u+v+\frac{\pi}{4})\sin(u+v-\frac{\pi}{4})}\\ &=&\frac{4\sin(2u)\sin(2v)}{\cos(2(u-v))\cos(2(u+v))} \end{eqnarray*} \begin{thm}\label{picthm} In the free fermion case $\eta=\frac{\pi}{4}$, we have for all $n\geq 1$: $$ \Delta_{n}[u,v]=\frac{\left(4\sin(2u)\sin(2v)\right)^{n(n+1)/2}\left(4\,\cos(2u)\cos(2v)\right)^{n(n-1)/2}}{\left(\cos\big(2(u-v)\big)\,\cos\big(2(u+v)\big)\right)^{n^2}} $$ \end{thm} \begin{proof} The proof is by induction on $n$ using \eqref{deltarecur}, and follows from the relation $$ \frac{4\sin(4u)\sin(4v)}{\cos^2(2(u-v))\cos^2(2(u+v))}+\partial_u\partial_v{\rm Log}\left(\cos(2(u-v))\cos(2(u+v))\right)=0 $$ \end{proof} \noindent The corresponding 6V' partition function reads: \begin{equation}\label{freepicex} \frac{Z_n^{6V'}[u,v]}{\rho_e^{n^2-n}\rho_o^{n^2}}=\left(\cos(2u)\cos(2v)\right)^{n(n-1)/2} \end{equation} \end{example} \subsubsection{One-point function} As in the case of the 6V model, we consider the semi-homogeneous partition function $Z_n^{6V'}[u,v;\xi]$ with the same boundary conditions as $Z_n^{6V'}[u,v]$ but with a different vertical spectral parameter in the last column, set to $v_n=v+\xi$. It is again obtained as a limit of \eqref{UUp} and reads: \begin{eqnarray}\label{zsemi6vp} &&\!\!\!\!\!\!\!\!\!
\frac{Z_n^{6V'}[u,v;\xi]}{\rho_e^{n^2-n}\rho_o^{n^2}\sin^n(2\eta)}\!\!=\!\!\Delta_n[u,v;\xi] \frac{\big( \sin(u-v+\eta)\sin(u-v-\eta)\sin(u+v+\eta)\sin(u+v-\eta)\big)^{n(n-1)}}{\sin(2\xi+2v)\big(\sin(\xi+2v)\big)^{n-1}\big(\sin(2u)\big)^{\frac{n(n+1)}{2}} \big(\sin(2v)\big)^{\frac{n(n-1)}{2}}}\nonumber\\ &&\quad\qquad \qquad \times \, \left(\sin(u-v-\xi+\eta)\sin(u-v-\xi-\eta)\sin(u+v+\xi+\eta)\sin(u+v+\xi-\eta)\right)^{n}\nonumber\\ \end{eqnarray} in terms of the semi-homogeneous quantity $$\Delta_n[u,v;\xi]:=\lim_{u_1,...u_n\to u\atop v_1,...,v_{n-1}\to v, v_n\to v+\xi} \frac{\det_{1\leq i,j\leq n} \left( m_U(u_i,v_j)\right)}{\prod_{1\leq i<j\leq n}\sin(u_i-u_j)\,\sin(v_j-v_i)}$$ Repeating the Taylor expansion of rows and columns except the last one, we may rewrite: \begin{eqnarray}\Delta_n[u,v;\xi]&=& \frac{(-1)^{n(n-1)/2}}{\sin^{n-1}(\xi)}\, \det\left\{\left(\frac{\partial_u^i\partial_v^j m_U(u,v)}{i!j!} \right)_{i=0,...,n-1\atop j=0,...,n-2} \Big\vert\left(\frac{\partial_u^{i}m_U(u,v+\xi)}{i!}\right)_{i=0,...,n-1}\right\}\nonumber \\ &=:&\frac{(-1)^{n-1}\, (n-1)!}{\sin^{n-1}(\xi)\, \prod_{i=0}^{n-1}(i!)^2} D_n[u,v;\xi] \label{deltad6vp} \end{eqnarray} where the determinant $D_n[u,v;\xi]$ reads \begin{equation}\label{deteronept6vp} D_n[u,v;\xi]=\det\left\{\left\{(-1)^j\partial_u^i\partial_v^j m_U(u,v) \right\}_{i=0,...,n-1\atop j=0,...,n-2}\Big\vert \left\{\partial_u^{i}m_U(u,v+\xi)\right\}_{i=0,...,n-1}\right\} \end{equation} As before, we define the one-point function $H_n^{6V'}[u,v;\xi]$ as the ratio: \begin{eqnarray} &&H_n^{6V'}[u,v;\xi]:=\frac{Z_n^{6V'}[u,v;\xi]}{Z_n^{6V'}[u,v]}= \frac{\Delta_n[u,v;\xi]}{\Delta_n[u,v]}\, \frac{\sin(\xi+2v)}{\sin(2\xi+2v)}\,\left(\frac{\sin(2v)}{\sin(\xi+2v)}\right)^n \nonumber \\ &&\quad \qquad \times\left( \frac{\sin(u-v-\xi+\eta)\,\sin(u-v-\xi-\eta)\,\sin(u+v+\xi+\eta)\,\sin(u+v+\xi-\eta)}{\sin(u-v+\eta)\,\sin(u-v-\eta)\,\sin(u+v+\eta)\,\sin(u+v-\eta)}\right)^n \label{ratioref} \end{eqnarray} The Pl\"ucker/Desnanot-Jacobi relation of Lemma \ref{desnajac} applied to the $(n+1)\times (n+1)$ matrix $M$ in the definition of $D_{n+1}[u,v;\xi]$ \eqref{deteronept6vp} implies the following: $$D_{n+1}[u,v;\xi]\, D_{n-1}[u,v] = D_n[u,v;\xi]\, \partial_u D_n[u,v] -D_n[u,v]\, \partial_u D_n[u,v;\xi]$$ Introducing the reduced one-point function \begin{equation}\label{redone6vp} H_n[u,v;\xi]:= (-1)^{n-1} (n-1)! \frac{D_n[u,v;\xi]}{D_n[u,v]} =\sin^{n-1}(\xi) \frac{\Delta_n[u,v;\xi]}{\Delta_n[u,v]} , \end{equation} we may recast the above into the following.
\begin{thm}\label{6vponeptthm} The reduced one-point function of the 6V' model obeys the following relation: \begin{equation}\label{hone} \frac{H_{n+1}[u,v;\xi]}{H_n[u,v;\xi]} \frac{\Delta_{n-1}[u,v]\,\Delta_{n+1}[u,v]}{\Delta_n[u,v]^2}+\frac{1}{n} \partial_u {\rm Log}(H_n[u,v;\xi] ) =0 \end{equation} \end{thm} Together with \eqref{deltarecur}, this determines $H_n[u,v;\xi]$ recursively, using the initial data $H_1[u,v;\xi]=\frac{m_U(u,v+\xi)}{m_U(u,v)}$, and in turn the one-point function $H_n^{6V'}[u,v;\xi]$ via: \begin{eqnarray} &&H_n^{6V'}[u,v;\xi]= H_n[u,v;\xi]\,\frac{\sin(\xi)\sin(\xi+2v)}{\sin(2\xi+2v)}\,\left(\frac{\sin(2v)}{\sin(\xi)\sin(\xi+2v)}\right)^n \nonumber \\ &&\quad \qquad \times\left( \frac{\sin(u-v-\xi+\eta)\,\sin(u-v-\xi-\eta)\,\sin(u+v+\xi+\eta)\,\sin(u+v+\xi-\eta)}{\sin(u-v+\eta)\,\sin(u-v-\eta)\,\sin(u+v+\eta)\,\sin(u+v-\eta)}\right)^n \label{oneptfin6vp} \end{eqnarray} \subsection{Large $n$ limit: free energy and one-point function asymptotics} \subsubsection{Free energy} For large $n=N$, like in the 6V case, the relation \eqref{deltarecur} leads to the following leading behavior for the function $\Delta_n[u,v]$: \begin{equation}\label{asymptoD} \Delta_N[u,v]\simeq e^{-N^2 f[u,v]} \end{equation} for some function $f[u,v]$ to be determined (see \cite{RIBKOR} for a full derivation). \noindent{\bf Liouville equation and free energy.} For large $n=N$, substituting the behavior \eqref{asymptoD} into eq.~\eqref{deltarecur}, and expanding at leading order in $N^{-1}$, we get the following 2D Liouville partial differential equation for the function $f[u,v]$: \begin{equation}\label{liouville} \partial_u\partial_v f[u,v]-e^{-2 f[u,v] } =0 \end{equation} Introducing the function $W[u,v]:=e^{f[u,v]}$ this may be rewritten as: $$W\,\partial_u\partial_vW-\partial_uW\,\partial_vW =1$$ The general solution $W$ of this equation is known to be \cite{Liouville,Crowdy}: \begin{equation}\label{gensol} W[u,v]=\frac{g(u)-h(v)}{|g'(u)h'(v)|^{\frac{1}{2}}} \end{equation} for some arbitrary differentiable functions $g,h$. In \cite{RIBKOR}, the functions $g,h$ are fixed by use of symmetries and known limits of $W$, leading to the following. \begin{thm}[\cite{RIBKOR}]\label{freeconj} The leading asymptotics of the determinant $\Delta_n[u,v]$ is given by $W[u,v]=\lim_{N\to\infty} \Delta_N[u,v]^{-\frac{1}{N^2}}$ where: \begin{equation}\label{conjW} W[u,v]=\frac{\sin({\alpha}(u-v-\eta))\,\sin({\alpha}(-u-v-\eta))}{{\alpha}\, |\sin(2{\alpha} u)\, \sin(2{\alpha} (v+\eta))|^\frac{1}{2}} \end{equation} with \begin{equation}\label{vala}{\alpha}=\frac{\pi}{\pi-2\eta} \end{equation} \end{thm} Theorem \ref{freeconj} gives access to the full free energy $f^{6V'}$ of the 6V' model, as defined by the large $N$ asymptotics $Z_N^{6V'}[u,v]\simeq e^{-N^2\, f^{6V'}[u,v]}$, where as a consequence of \eqref{sixvpf}, we have: \begin{equation}\label{f6vprime} f^{6V'}[u,v]=f[u,v]+{\rm Log}\left( \frac{\sqrt{|\sin(2u)\,\sin(2v)|}}{\rho_e\rho_o\sin(u-v+\eta)\,\sin(u-v-\eta)\,\sin(u+v+\eta)\,\sin(u+v-\eta)} \right) \end{equation}
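As a check on Theorem \ref{freeconj}, one may verify symbolically that \eqref{conjW} indeed solves the Liouville equation in its Wronskian form $W\,\partial_u\partial_vW-\partial_uW\,\partial_vW=1$; a minimal sympy sketch, where the absolute values are dropped by choosing a sample point (ours) at which both sines are positive:
\begin{verbatim}
import sympy as sp

u, v, eta = sp.symbols('u v eta')
alpha = sp.pi / (sp.pi - 2 * eta)
# eq. (conjW), |...| dropped: valid where sin(2 a u), sin(2 a (v+eta)) > 0
W = (sp.sin(alpha*(u - v - eta)) * sp.sin(alpha*(-u - v - eta))
     / (alpha * sp.sqrt(sp.sin(2*alpha*u) * sp.sin(2*alpha*(v + eta)))))
residual = W * sp.diff(W, u, v) - sp.diff(W, u) * sp.diff(W, v) - 1
pt = {u: 0.5, v: -0.1, eta: 0.4}
assert abs(complex(residual.evalf(subs=pt))) < 1e-10
\end{verbatim}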
This leads immediately to the following. \begin{cor}[\cite{RIBKOR}] The free energy of the 6V' model in the Disordered regime reads: \begin{eqnarray}\label{fren6vp} f^{6V'}[u,v]&=&{\scriptstyle \frac{1}{2} }\,{\rm Log} \left\vert \frac{\sin(2u)\,\sin(2v)}{\sin(2{\alpha} u)\, \sin(2{\alpha} (v+\eta))}\right\vert \nonumber \\ &&\quad + {\rm Log}\left( \frac{\sin({\alpha}(u-v-\eta))\,\sin({\alpha}(-u-v-\eta))}{{\alpha}\, \rho_e\rho_o\sin(u-v+\eta)\,\sin(u-v-\eta)\,\sin(u+v+\eta)\,\sin(u+v-\eta)} \right) \end{eqnarray} \end{cor} We also have access to the free energy $f^{20V}$ of the 20V DWBC3 model defined in Sect.~\ref{20vpresec}, which will be studied in Section \ref{20vsec} below. The free energy is defined via $Z_N^{20V}[u,v]\simeq e^{-N^2\, f^{20V}[u,v]}$ for large $N$. As a consequence of \eqref{vingtvpf} which relates the partition functions of the 20V-DWBC3 and 6V' models (see also Ref.~\cite{DF20V}), we have the relation: \begin{equation}\label{f20v} f^{20V}[u,v]=f^{6V'}[u,v]+\frac{1}{2}{\rm Log} \left(\nu^3\, \sin^3(2u+2\eta)\,\sin(u-v-\eta)\,\sin(u+v-\eta)\right) \end{equation} Let us apply this to the uniform case \eqref{combipoint20v}, where the partition function $Z_n^{6V'}$ of the $6V'$ model on the $(2n-1)\times n$ grid is related to the number of configurations $Z_n^{20V}$ of the 20V model with DWBC3 on the quadrangle $\mathcal Q_n$ \cite{DF20V} (see Sect.~\ref{20vpresec}). Using Theorem \ref{freeconj} and the relations \eqref{fren6vp} and \eqref{f20v}, and approaching the desired value $v=-4\eta+\epsilon$, while $u=\eta$, we get for $\eta=\frac{\pi}{8}$, ${\alpha}=\frac{4}{3}$, $\nu=\sqrt{2}$: \begin{eqnarray*} e^{f^{20V}}&=&\lim_{\epsilon\to 0} \left\vert\frac{\sin(2\eta)\,\sin(-8\eta+2\epsilon)}{\sin(\frac{8}{3}\eta)\, \sin(-8\eta+\frac{8}{3}\epsilon)} \right\vert^{\frac{1}{2}} \frac{3\, \sin(\frac{16}{3}\eta)\sin(\frac{8}{3}\eta)}{4\, \nu^{3/2}\,\sin^2(4\eta)\,\sin(6\eta)\,\sin(2\eta)}=\frac{3^{9/4}}{2^{9/2}} \ . \end{eqnarray*} This is in agreement with the asymptotics of the exact conjectured formula of Ref.~\cite{DF20V} for the uniformly weighted partition function, namely: \begin{equation}\label{20vpf} Z_N^{20V}=2^{N(N-1)/2}\prod_{i=0}^{N-1}\frac{(4i+2)!}{(N+2i+1)!}\simeq \left(\frac{2^{9/2}}{3^{9/4}}\right)^{N^2}\ , \end{equation} easily derived by use of the Stirling formula.
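This asymptotic agreement is also immediate to reproduce numerically; a short Python sketch evaluating \eqref{20vpf} via the log-Gamma function (the cutoff values of $N$ are ours, and the approach to the limit is slow, with corrections of relative order ${\rm Log}(N)/N$):
\begin{verbatim}
import math

def logZ20V(N):   # logarithm of the product formula (20vpf)
    s = 0.5 * N * (N - 1) * math.log(2)
    for i in range(N):
        # (4i+2)! = Gamma(4i+3),  (N+2i+1)! = Gamma(N+2i+2)
        s += math.lgamma(4*i + 3) - math.lgamma(N + 2*i + 2)
    return s

target = 4.5 * math.log(2) - 2.25 * math.log(3)   # Log(2^{9/2}/3^{9/4})
errs = [abs(logZ20V(N) / N**2 - target) for N in (250, 500, 1000)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 0.05
\end{verbatim}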
\subsubsection{One-point function} We now derive the large $n=N$ asymptotics of the one-point function $H_n^{6V'}[u,v;\xi]$ \eqref{ratioref}. From Eq.~\eqref{oneptfin6vp}, the latter is simply expressed in terms of the reduced one-point function $H_n[u,v;\xi]$ \eqref{redone6vp}. Like in the 6V case, we first derive a differential equation governing the asymptotic behavior of $H_n[u,v;\xi]$, and compute a number of limits to fix integration constants. It turns out that Theorem \ref{freeconj} is sufficient to determine the asymptotics completely. By Theorem \ref{6vponeptthm}, $H_n[u,v;\xi]$ must satisfy \eqref{hone}, which implies the leading asymptotic behavior \begin{equation}\label{asymptoh} H_N[u,v;\xi]\simeq_{N\to\infty} e^{-N \psi[u,v;\xi]} \end{equation} for some function $\psi[u,v;\xi]$. As a simple confirmation, using the definition \eqref{redone6vp} and the fact that $\Delta_n[u,v;0]=\Delta_n[u,v]$, we find that $H_n[u,v;\xi]\simeq_{\xi\to 0} \xi^{n-1}$, resulting in: \begin{equation}\label{firsxtlimxi} \psi[u,v;\xi]\simeq_{\xi \to 0} -{\rm Log}(\xi)\end{equation} \noindent{\bf Differential equation.} Substituting the expressions \eqref{asymptoD} and \eqref{asymptoh} into eq.~\eqref{hone} for $n=N$, and expanding to leading order in $N^{-1}$, we get the following partial differential equation: \begin{equation} \partial_u\psi[u,v;\xi]-e^{-2f[u,v]-\psi[u,v;\xi]}=0 \label{psione} \end{equation} \noindent{\bf Limits.} In addition to the limit \eqref{firsxtlimxi} above, let us consider the limit $u-v-\xi-\eta\to 0$, by setting $\xi=u-v-\eta-\epsilon$ and sending $\epsilon\to 0$. The entries of the last column of the determinant $D_N[u,v;\xi]$ \eqref{deteronept6vp} read: $$ \partial_u^i m_U(u,u-\eta-\epsilon) = \frac{1}{\sin(2\eta)}\frac{(-1)^i i!}{\epsilon^{i+1} }+O(\epsilon^{-i})$$ The dominant term is in the last row and results in $$D_N[u,v;u-v-\eta-\epsilon] \simeq \frac{(-1)^{N-1} (N-1)!}{\sin(2\eta)\,\epsilon^{N} } D_{N-1}[u,v] $$ We deduce that \begin{eqnarray*} H_N[u,v;u-v-\eta-\epsilon]&=&(-1)^{N-1}\, (N-1)!\frac{D_N[u,v;u-v-\eta-\epsilon]}{D_N[u,v]}\\ &&\!\!\!\!\!\!\!\!\!\! \simeq_{\epsilon\to 0} \frac{ (N-1)!^2}{\sin(2\eta)\,\epsilon^{N} }\,\frac{D_{N-1}[u,v]}{D_{N}[u,v]} \simeq \frac{1}{\epsilon^{N} }\,\frac{\Delta_{N-1}[u,v]}{\Delta_{N}[u,v]}\simeq \frac{W[u,v]^{2N}}{\epsilon^{N} } \end{eqnarray*} where we have used the large $N$ asymptotics $\Delta_N[u,v]\simeq W[u,v]^{-N^2}$. Matching this with the asymptotics \eqref{asymptoh}, we conclude that \begin{equation}\label{seclimxipsi} \psi[u,v;u-v-\eta-\epsilon]_{\epsilon\to 0}\simeq {\rm Log}\left\vert\frac{\epsilon}{W[u,v]^2}\right\vert \end{equation} Repeating the analysis for $\xi=\eta-u-v+\epsilon$, we find analogously: \begin{equation}\label{thirdlimxipsi}\psi[u,v;\eta-u-v+\epsilon]_{\epsilon\to 0}\simeq {\rm Log}\left\vert\frac{\epsilon}{W[u,v]^2}\right\vert \end{equation} \noindent{\bf Solution.} Note that eq.~\eqref{psione} may be rewritten in the form $\partial_u(e^{\psi})=e^{-2f}=W^{-2}$. This can be integrated w.r.t. the variable $u$ as follows: \begin{equation}\label{integ} e^{\psi[u,v;\xi]}= c[v,\xi] -\frac{{\alpha} \sin(2{\alpha}(v+\eta))}{\sin({\alpha}(u-v-\eta))\,\sin({\alpha}(u+v+\eta))} \end{equation} for some integration constant $c[v,\xi]$ independent of $u$. We now use the limit \eqref{seclimxipsi} to express that, for $\xi\to u-v-\eta$, we have $e^{\psi[u,v;\xi]}\to 0$. This gives: $$ c[v,u-v-\eta] =\frac{{\alpha} \sin(2{\alpha} (v+\eta))}{\sin({\alpha}(u-v-\eta))\,\sin({\alpha}(u+v+\eta))}$$ which is valid for all $u,v$. In particular, setting $u=v+\eta+\xi$ yields the integration constant $$c[v,\xi] =\frac{{\alpha} \sin(2{\alpha}(v+\eta))}{\sin({\alpha} \xi)\,\sin({\alpha}(\xi+2v+2\eta))} $$ which we plug back into \eqref{integ} to finally get: \begin{equation}\label{solpsi} \psi[u,v;\xi]= {\rm Log}\left( \frac{{\alpha} \,\sin(2{\alpha} (v+\eta))\,\sin({\alpha}(u-v-\xi-\eta))\,\sin({\alpha}(u+v+\xi+\eta))}{\sin({\alpha}(u-v-\eta))\,\sin({\alpha}(u+v+\eta))\,\sin({\alpha}\xi)\,\sin({\alpha}(\xi+2v+2\eta))} \right) \end{equation} Using the relation \eqref{oneptfin6vp} this leads to the following result for the one-point function asymptotics.
\begin{thm}\label{asympto6vponept} The one-point function $H_n^{6V'}[u,v;\xi]$ has the following large $n=N$ behavior: \begin{eqnarray*}&&H_N^{6V'}[u,v;\xi]\simeq e^{-N \psi^{6V'}[u,v;\xi]} \\ &&\psi^{6V'}[u,v;\xi]= -{\rm Log}\left( \frac{\sin({\alpha}(u-v-\eta))\,\sin({\alpha}(u+v+\eta))\,\sin({\alpha}\xi)\,\sin({\alpha}(\xi+2v+2\eta))}{{\alpha} \,\sin(2{\alpha} (v+\eta))\,\sin({\alpha}(u-v-\xi-\eta))\,\sin({\alpha}(u+v+\xi+\eta))} \right) \\ &&\ - {\rm Log}\left(\frac{\sin(2v) \sin(u-v-\xi+\eta)\sin(u-v-\xi-\eta)\sin(u+v+\xi-\eta)\sin(u+v+\xi+\eta)}{\sin(\xi)\sin(\xi+2v)\sin(u-v+\eta)\sin(u-v-\eta)\sin(u+v-\eta)\sin(u+v+\eta)}\right) \end{eqnarray*} with ${\alpha}$ as in \eqref{fsol}. \end{thm} As a consistency check, we find that $\lim_{\xi\to 0} \psi^{6V'}[u,v;\xi]=0$, in agreement with the fact that $H_n^{6V'}[u,v;0]=1$ by definition. \begin{remark}\label{thetaonerem} In the case of the more general U-turn 6V model (with an arbitrary value of the parameter $\theta$), we already showed in Remark \ref{thetarem} that the thermodynamic free energy of the model is independent of $\theta$, therefore identical to that of the 6V' model. The same argument may be applied to the one-point function, whose leading asymptotics is independent of $\theta$ as well, and therefore the {\it same} for the U-turn 6V and 6V' models. \end{remark} \begin{remark} Independently of Theorem \ref{freeconj}, eq.~\eqref{psione} can be solved in terms of the generic function $g$ which determines the general solution \eqref{gensol} to the Liouville equation with the correct symmetries and limits, namely such that $h(v)=g(v+\eta)$, with the expression: $$ W[u,v]=\frac{g(u)-g(v+\eta)}{\sqrt{|g'(u)g'(v+\eta)|}} $$ Solving eq.~\eqref{psione} in the same manner as above, we obtain: $$\psi[u,v;\xi]={\rm Log}\left( \frac{ (g(u)-g(v+\xi+\eta))\, g'(v+\eta)}{(g(u)-g(v+\eta))(g(v+\xi+\eta)-g(v+\eta))}\right) $$ In particular, we recover the solution for the 6V-DWBC case by picking $g(u)=\tan({\alpha} u)$, which leads to \begin{eqnarray*}W[u,v]&=& \frac{\sin({\alpha}(u-v-\eta))}{{\alpha}}=W[u-v]\\ \psi[u,v;\xi]&=&{\rm Log}\left( \frac{{\alpha}\sin({\alpha} (u - v - \eta -\xi))}{\sin({\alpha}\xi)\,\sin({\alpha}(u-v-\eta))}\right)=\psi[u-v;\xi] \end{eqnarray*} in agreement with \eqref{W6v} and \eqref{psi6v}. \end{remark} \subsection{Paths} \subsubsection{Partition function} With the setting of Fig.~\ref{fig:alltgt} (bottom left, light blue domain), we wish to compute the partition function $Y_{k,\ell}$ of a single path of the 6V' model in the first quadrant ${\mathbb Z}_+^2$, with starting point $(0,k)$ and endpoint $(\ell,0)$. The weights of the path are those of the 6V' model, namely $(b_o,c_o)$ for a path (going straight, turning) at a vertex with second coordinate $y=2j$, $j=0,1,2,...$ and $(b_e,c_e)$ for a path (going straight, turning) at a vertex with second coordinate $y=2j+1$, $j=0,1,2,...$ However, the path crosses a domain of empty vertices, each receiving weights $a_e,a_o$ depending on the parity of their second coordinate $y$.
Factoring an overall weight $(a_o)^{n\ell}(a_e)^{(n-1)\ell}$ which does not affect our study, the weights of the path steps must be divided by $a_e,a_o$ and finally read: \begin{eqnarray}\label{moreweights} &&b_0=\frac{b_o}{a_o}=\frac{\sin(u-v-\eta)}{\sin(u-v+\eta)}, \qquad c_0=\frac{c_o}{a_o}= \frac{\sin(2\eta)}{\sin(u-v+\eta)},\nonumber \\ &&b_1=\frac{b_e}{a_e}=\frac{\sin(u+v+\eta)}{\sin(u+v-\eta)}, \qquad c_1=\frac{c_e}{a_e}=\frac{\sin(2\eta)}{\sin(\eta-u-v)} \end{eqnarray} for vertices with $y=2j$ and $y=2j+1$ respectively. Note that the path has a horizontal step just before entering the first quadrant, and has a final vertical step. The partition function $Y_{k,\ell}$ is computed by use of a transfer matrix technique. Each path is travelled from N,W to S,E, and the transfer matrix is a $4\times 4$ matrix $T_{6V'}$ whose entries correspond to the vertex weight for the transition from the entering step at each visited vertex to the outgoing step, with the four possible configurations $(-,o),(\vert, o),(-,e),(\vert,e)$ of horizontal/vertical step ending at an odd/even vertex. Moreover, we include an extra weight $z$ or $w$ per horizontal or vertical outgoing step, respectively. The matrix $T_{6V'}$ reads: $$T_{6V'}=\begin{pmatrix} b_0 z & c_0 z & 0 & 0\\ 0 & 0 & c_1 w & b_1 w\\ 0 & 0 & b_1 z & c_1 z \\ c_0 w &b_0 w & 0 & 0 \end{pmatrix}$$ We deduce the generating function for the $Y_{k,\ell}$: \begin{eqnarray} Y^{6V'}(z,w)&=&\sum_{k,\ell\geq 0} Y_{k,\ell}\, w^{k+1} z^{\ell}=(0,0,0,1) ({\mathbb I}-T_{6V'})^{-1} \begin{pmatrix}1\\0\\1\\0\end{pmatrix} \nonumber \\ &=&\frac{w(c_0(1-b_1 z)+ c_1 w(b_0+(c_0^2-b_0^2)z))}{(1-b_0 z)(1-b_1 z)-w^2(b_0+(c_0^2-b_0^2)z)(b_1+(c_1^2-b_1^2)z)}\nonumber \\ &=&c_0\sum_{j\geq 0} w^{2j+1} \frac{(b_0+(c_0^2-b_0^2)z)^j(b_1+(c_1^2-b_1^2)z)^j}{(1-b_0 z)^{j+1}(1-b_1 z)^j} \nonumber \\ &&\qquad \qquad + c_1\sum_{j\geq 0} w^{2j+2} \frac{(b_0+(c_0^2-b_0^2)z)^{j+1}(b_1+(c_1^2-b_1^2)z)^j}{(1-b_0 z)^{j+1}(1-b_1 z)^{j+1}}\nonumber \\ &=&\sum_{k\geq 0} w^{k+1} c_\epsilon \frac{ \left(\gamma_1(1+\gamma_3 z)\right)^{\frac{k+\epsilon}{2}} \left(\gamma_2(1+\gamma_4 z)\right)^{\frac{k-\epsilon}{2}}}{(1-\gamma_1 z)^{1+\frac{k-\epsilon}{2}} (1-\gamma_2 z)^{\frac{k+\epsilon}{2}}} \label{genpa6v} \end{eqnarray} where we have used the notation $\epsilon:=k$ mod 2 (with $\epsilon\in \{0,1\}$), and the following weights: \begin{eqnarray} &&\gamma_1=b_0=\frac{\sin(u-v-\eta)}{\sin(u-v+\eta)},\quad \gamma_2=b_1=\frac{\sin(u+v+\eta)}{\sin(u+v-\eta)}, \nonumber \\ &&\gamma_3=\frac{c_0^2-b_0^2}{b_0}=-\frac{\sin(u-v+3\eta)}{\sin(u-v-\eta)},\quad \gamma_4= \frac{c_1^2-b_1^2}{b_1}=-\frac{\sin(u+v+3\eta)}{\sin(u+v+\eta)} \label{weights6vpath} \end{eqnarray} To obtain \eqref{genpa6v}, we have used the fact that the first step of the path is horizontal with $y$ parity unspecified (and receives no weight $z$), and the last step is vertical, with $y=0$ (and receives the weight $w$).
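The resummation \eqref{genpa6v} can be cross-checked against the $4\times 4$ transfer-matrix expression by series expansion; a minimal sympy sketch, where the rational sample weights are ours (the identity holds for generic weights):
\begin{verbatim}
import sympy as sp

z, w = sp.symbols('z w')
b0, c0 = sp.Rational(1, 3), sp.Rational(1, 2)   # sample weights (ours)
b1, c1 = sp.Rational(2, 5), sp.Rational(3, 7)
T = sp.Matrix([[b0*z, c0*z, 0, 0],
               [0, 0, c1*w, b1*w],
               [0, 0, b1*z, c1*z],
               [c0*w, b0*w, 0, 0]])
Y = (sp.Matrix([[0, 0, 0, 1]]) * (sp.eye(4) - T).inv()
     * sp.Matrix([1, 0, 1, 0]))[0, 0]
g1, g2 = b0, b1
g3, g4 = (c0**2 - b0**2)/b0, (c1**2 - b1**2)/b1

def Y_k(k):   # coefficient of w^{k+1} in the resummed form (genpa6v)
    e = k % 2
    return ((c1 if e else c0) * (g1*(1 + g3*z))**((k + e)//2)
            * (g2*(1 + g4*z))**((k - e)//2)
            / ((1 - g1*z)**(1 + (k - e)//2) * (1 - g2*z)**((k + e)//2)))

ser = sp.expand(sp.series(Y, w, 0, 6).removeO())
for k in range(5):
    assert sp.cancel(ser.coeff(w, k + 1) - Y_k(k)) == 0
\end{verbatim}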
\subsubsection{Asymptotics}\label{secasym6v} We wish to take the large $n=N$ scaling limit with $\kappa=k/(2N)$ and $\lambda=\ell/N$ finite. Further expanding \eqref{genpa6v} in powers of $z$, we find: \begin{eqnarray} Y_{k,\ell}&=&\sum_{P_1,P_2,P_3,P_4\geq 0\atop P_1+P_2+P_3+P_4=\ell} {\frac{k-\epsilon}{2}+P_1\choose P_1} {\frac{k+\epsilon-2}{2}+P_2\choose P_2} {\frac{k+\epsilon}{2}\choose P_3} {\frac{k-\epsilon}{2}\choose P_4} \gamma_1^{P_1+\frac{k+\epsilon}{2}}\gamma_2^{P_2+\frac{k-\epsilon}{2}} \gamma_3^{P_3}\gamma_4^{P_4} \nonumber \\ Y_{2\kappa N,\lambda N}&\simeq& \int_0^1 dp_2 dp_3 dp_4 e^{-N S_1^{6V'}(\kappa, p_2,p_3,p_4)}\nonumber \\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!S_1^{6V'}(\kappa, p_2,p_3,p_4)=-(\kappa+\lambda-p_2-p_3-p_4)\,{\rm Log}(\kappa+\lambda-p_2-p_3-p_4)\nonumber\\ &&\!\!\!\!\!\!+(\lambda-p_2-p_3-p_4)\,{\rm Log}(\lambda-p_2-p_3-p_4)-(\kappa+p_2)\,{\rm Log}(\kappa+p_2)+p_2\,{\rm Log}(p_2)\nonumber\\ &&\!\!\!\!\!\!+p_3\,{\rm Log}(p_3)+(\kappa-p_3)\,{\rm Log}(\kappa-p_3)+p_4\,{\rm Log}(p_4)+(\kappa-p_4)\,{\rm Log}(\kappa-p_4)\nonumber\\ &&\!\!\!\!\!\!- (\kappa+\lambda-p_2-p_3-p_4)\,{\rm Log}(\gamma_1)-(\kappa+p_2)\,{\rm Log}(\gamma_2)-p_3\,{\rm Log}(\gamma_3)-p_4\,{\rm Log}(\gamma_4) \label{asymptopath6v} \end{eqnarray} Here we have eliminated $P_1$ and replaced the remaining summations over $P_i$ by integrations over $p_i=P_i/N$ in $[0,1]$. Note that this covers the case of vanishing weights $\gamma_i$ for $i=3$ or $4$ as well: if $\gamma_i=0$ we simply suppress $P_i$ from the above expression, which in turn corresponds to taking the $p_i\to 0$ limit at finite $\gamma_i$ in \eqref{asymptopath6v}. \subsection{Refined one-point functions and asymptotics} \subsubsection{Refined partition function} Let $Z_{n,k}^{6V'}[u,v]$ denote the {\it refined} partition function of the 6V' model on the rectangular grid of size $(2n-1)\times n$ with uniform weights (\ref{6voddweights}-\ref{6vevenweights}), in which the rightmost path is conditioned to first visit the rightmost vertical line at a point at position $k\in[1,2n-1]$ (counted from bottom to top), before going vertically down until its endpoint, as illustrated in Fig.~\ref{fig:alltgt} (bottom left, pink domain, with the $k$ final steps removed). This quantity is easily related to the semi-homogeneous partition function $Z_n^{6V'}[u,v;\xi]$ as follows. In the latter, only the weights of the last column (with spectral parameter $v_n=v+\xi$) are different, and depend on the parity of the vertex height. Let us denote by $({\bar a}_i,{\bar b}_i,{\bar c}_i)$, $i=o,e$ the relative 6V' weights (ratios of the values at $v+\xi$ by those at $v$): \begin{eqnarray*} && {\bar a}_o=\frac{\sin(u-v-\xi+\eta)}{\sin(u-v+\eta)},\quad {\bar b}_o=\frac{\sin(u-v-\xi-\eta)}{\sin(u-v-\eta)}, \quad {\bar c}_o=1\\ && {\bar a}_e=\frac{\sin(u+v+\xi-\eta)}{\sin(u+v-\eta)}, \quad {\bar b}_e=\frac{\sin(u+v+\xi+\eta)}{\sin(u+v+\eta)}, \quad {\bar c}_e=1 \end{eqnarray*} Contributions to $Z_{n,k}^{6V'}[u,v]$ have a last column with $k-1$ bottom vertices of type b (vertical step), the $k$-th vertex of type c (right turn), and the top $2n-1-k$ vertices of type a (empty).
Splitting contributions according to the parity of the position of the point of entry into the last column of the rightmost path, we arrive at: \begin{eqnarray*} Z_n^{6V'}[u,v;\xi] &=&\sum_{j=1}^n Z_{n,2j-1}^{6V'}[u,v]({\bar b}_o{\bar b}_e)^{j-1} {\bar c}_o ({\bar a}_e {\bar a}_o)^{n-j} +\sum_{j=1}^{n-1}Z_{n,2j}^{6V'}[u,v] ({\bar b}_o{\bar b}_e)^{j-1} {\bar b}_o {\bar c}_e {\bar a}_o ({\bar a}_e {\bar a}_o)^{n-j-1}\\ &=&({\bar a}_e {\bar a}_o)^{n-1}\sum_{j=1}^n \tau^{j-1} \{ Z_{n,2j-1}^{6V'}[u,v]+ Z_{n,2j}^{6V'}[u,v]\, \sigma \} \end{eqnarray*} where we have used the values ${\bar c}_o={\bar c}_e=1$ and the parameters \begin{eqnarray*} \tau&:=&\frac{{\bar b}_o{\bar b}_e}{{\bar a}_e {\bar a}_o} =\frac{\sin(u-v-\xi-\eta)\sin(u+v+\xi+\eta)\sin(u-v+\eta)\sin(u+v-\eta)}{\sin(u-v-\eta)\sin(u+v+\eta)\sin(u-v-\xi+\eta)\sin(u+v+\xi-\eta)}\\ \sigma&:=&\frac{{\bar b}_o }{{\bar a}_e}=\frac{\sin(u-v-\xi-\eta)\sin(u+v-\eta)}{\sin(u-v-\eta)\sin(u+v+\xi-\eta)} \end{eqnarray*} For use with the Tangent Method, we need to consider the refined one-point function $H_{n,k}[u,v]$ defined as the ratio of the partition function of the 6V' model in which the topmost path ends at position $k$ with a horizontal last step between the $(n-1)$-st vertical line and the rightmost one, to the usual 6V' partition function. Note that the numerator is slightly different from the refined partition function $Z_{n,k}^{6V'}[u,v]$ as the rightmost path does not continue with $k$ vertical steps after hitting the rightmost vertical line. Consequently, we must replace the $k-1$ corresponding b-type weights with a-type weights: $$H_{n,2j-1}[u,v]:=\left(\frac{a_o\,a_e}{b_o\, b_e}\right)^{j-1}\, \frac{Z_{n,2j-1}^{6V'}[u,v]}{Z_{n}^{6V'}[u,v]} ,\quad H_{n,2j}[u,v]:=\frac{a_o}{b_o} \,\left(\frac{a_o\,a_e}{b_o\, b_e}\right)^{j-1}\, \frac{Z_{n,2j}^{6V'}[u,v]}{Z_{n}^{6V'}[u,v]}$$ In terms of the one-point function $H_n^{6V'}[u,v;\xi]$ \eqref{ratioref}, the above identity reads: \begin{eqnarray*}H_n^{6V'}[u,v;\xi]=({\bar a}_e {\bar a}_o)^{n-1} \sum_{j=1}^n t^{j-1} \{ H_{n,2j-1}[u,v]+ H_{n,2j}[u,v]\, s \} \end{eqnarray*} where \begin{eqnarray*} t&=&\tau \,\frac{b_o\,b_e}{a_o\, a_e} =\frac{\sin(u-v-\xi-\eta)\sin(u+v+\xi+\eta)}{\sin(u-v-\xi+\eta)\sin(u+v+\xi-\eta)}\\ s&=&\sigma \, \frac{b_o}{a_o} = \frac{\sin(u-v-\xi-\eta)\sin(u+v-\eta)}{\sin(u-v+\eta)\sin(u+v+\xi-\eta)} \end{eqnarray*} \subsubsection{Asymptotics} We wish to estimate the leading behavior of the refined one-point function $H_{N,k}[u,v]$ for large $N$ and $\kappa=k/(2N)$ finite. To this end, we use the asymptotics of the function $H_N^{6V'}[u,v;\xi]$ (Theorem \ref{asympto6vponept}) to estimate for large $N$: \begin{eqnarray} &&\frac{H_N^{6V'}[u,v;\xi]}{({\bar a}_e {\bar a}_o)^{N-1}} \simeq e^{-N\varphi^{6V'}[u,v;\xi]} \nonumber \\ &&\varphi^{6V'}[u,v;\xi]=\psi^{6V'}[u,v;\xi]+{\rm Log}\left( \frac{\sin(u-v-\xi+\eta)\sin(u+v+\xi-\eta)}{\sin(u-v+\eta)\sin(u+v-\eta)}\right) \nonumber \\ &&\qquad\qquad\quad\ =-{\rm Log}\left(\frac{\sin(2v)\sin(u-v-\xi-\eta)\,\sin(u+v+\xi+\eta)}{\sin(2v+\xi)\sin(u-v-\eta)\,\sin(u+v+\eta)}\right)\nonumber \\ &&\qquad\qquad -{\rm Log}\left( \frac{\sin({\alpha}\xi)\sin({\alpha}(\xi+2v+2\eta))\sin({\alpha}(u-v-\eta))\sin({\alpha}(u+v+\eta))}{{\alpha} \,\sin(\xi)\sin(2{\alpha} (v+\eta))\sin({\alpha}(u-v-\xi-\eta))\sin({\alpha}(u+v+\xi+\eta))} \right) \label{phi6v} \end{eqnarray} This leads finally to the following result.
\begin{thm}\label{6vpasythm} The large $N$ asymptotics of the refined one-point function for the 6V' model are given by: \begin{eqnarray} H_{N,2\kappa N}[u,v]&\simeq& \oint \frac{dt}{2i\pi t} e^{-N S_0^{6V'}(\kappa,t) }\nonumber \\ S_0^{6V'}(\kappa,t)&=& \varphi^{6V'}[u,v,\xi]+\kappa \, {\rm Log}(t) \label{action6v} \end{eqnarray} with $\varphi^{6V'}[u,v,\xi]$ as in \eqref{phi6v}, and where $\xi$ can be thought of as an implicit function of the variable $t$, upon inversion of the relation \begin{equation}\label{txi6v} t=t_{6V'}[\xi]:=\frac{\sin(u-v-\xi-\eta)\sin(u+v+\xi+\eta)}{\sin(u-v-\xi+\eta)\sin(u+v+\xi-\eta)} \end{equation} \end{thm} The leading contribution to \eqref{action6v} is determined by the solution of the saddle point equation $\partial_t S_0^{6V'}(\kappa,t)=0$ or equivalently $\partial_\xi S_0^{6V'}(\kappa,t_{6V'}[\xi])=0$, leading to: \begin{equation}\label{sapo6v} \kappa=\kappa_{6V'}[\xi]:=-\frac{t_{6V'}[\xi]}{\partial_\xi t_{6V'}[\xi]} \partial_\xi \varphi^{6V'}[u,v;\xi] \end{equation} Explicitly, we have: \begin{eqnarray} \kappa_{6V'}[\xi] &=&\left\{\cot(u-v-\eta-\xi)+\cot(\xi)+\cot(\xi+2v)-\cot(u+v+\eta+\xi)\right.\nonumber \\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\left. -{\alpha}\big(\cot({\alpha}(u-v-\eta-\xi))+\cot({\alpha}\xi)+\cot({\alpha}(\xi+2v+2\eta))-\cot({\alpha}(u+v+\eta+\xi)) \big)\right\} \nonumber \\ &&\quad \times\ \frac{\sin(u + v -\eta+\xi) \sin(u + v +\eta +\xi)\sin(u - v -\eta-\xi) \sin(u - v +\eta -\xi)}{\sin(2\eta)\big(\cos(2\eta)-\cos(2u)\cos(2v+2\xi)\big)}\label{rsol} \end{eqnarray} with ${\alpha}=\frac{\pi}{\pi-2\eta}$ as usual. \subsection{Arctic curves} \subsubsection{NE branch} As explained above, the first application of the Tangent Method gives access to the portion of the arctic curve situated in the NE corner of the rectangular domain. \begin{thm}\label{6VpNEthm} The NE branch of the arctic curve for the 6V' model as predicted by the Tangent Method is given by the parametric equations $$ x=X_{NE}^{6V'}[\xi]= \frac{B'[\xi]}{A'[\xi]} \qquad y=Y_{NE}^{6V'}[\xi]=B[\xi]-\frac{A[\xi]}{A'[\xi]}B'[\xi]$$ with the parameter range: $$ \xi\in \big[\eta+|u|-v-\pi,0\big] $$ and where $$A[\xi]=2\,\frac{\sin(u-v-\eta-\xi)\sin(u-v+\eta-\xi)\sin(u+v-\eta+\xi)\sin(u+v+\eta+\xi)}{\sin(\xi-2\eta)\sin(\xi)\big(\cos(2\eta)- \cos(2u)\cos(2v+2\xi)\big)}$$ and $B[\xi]=2\kappa_{6V'}[\xi]$, with $\kappa_{6V'}[\xi]$ as in \eqref{rsol}. \end{thm} \begin{proof} We may now bring together the ingredients of the Tangent Method. We determine the family of tangents $F_\xi(x,y)=y+A[\xi]x-B[\xi]$ defined in Sect. \ref{sectan}. We already identified the intercept $B[\xi]=2\kappa_{6V'}[\xi]$ with $\kappa_{6V'}[\xi]$ given by \eqref{rsol}. To determine the slope $A[\xi]=2\kappa/\lambda$, we must find the leading contribution to the total partition function \begin{eqnarray*} &&\sum_{k=1}^{2n-1} H_{N,k}[u,v] \, Y_{k,\ell} \simeq \int_0^1 d\kappa H_{N,2\kappa N}[u,v]\, Y_{2\kappa N,\lambda N}\simeq \int_0^1 d\kappa\, dp_2\, dp_3\, dp_4\, e^{-N S^{6V'}(\kappa,p_2,p_3,p_4,t)} \\ &&S^{6V'}(\kappa,p_2,p_3,p_4,t):=S_0^{6V'}(\kappa,t)+S_1^{6V'}(\kappa,p_2,p_3,p_4) \end{eqnarray*} with $S_0^{6V'}(\kappa,t)$ as in \eqref{action6v} and $S_1^{6V'}(\kappa,p_2,p_3,p_4)$ as in \eqref{asymptopath6v}. As in the 6V case, the saddle-point equation $\partial_\xi S^{6V'}=0$ is solved by \eqref{rsol}, and amounts to parameterizing $\kappa=\kappa_{6V'}[\xi]$ in terms of the parameter $\xi$.
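Indeed, since $S_1^{6V'}$ does not depend on $\xi$, we have $\partial_\xi S^{6V'}=\partial_\xi \varphi^{6V'}[u,v;\xi]+\kappa\,\partial_\xi\, {\rm Log}\, t_{6V'}[\xi]$, and requiring this to vanish is precisely the condition \eqref{sapo6v}.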
The saddle-point equations $\partial_\kappa S^{6V'}=\partial_{p_2}S^{6V'}=\partial_{p_3}S^{6V'}=\partial_{p_4}S^{6V'}=0$ give rise to the system of algebraic equations: \begin{eqnarray*} \frac{t}{\gamma_1\gamma_2}&=& \frac{(p_2+\kappa)(\kappa+\lambda-p_2-p_3-p_4)}{(p_3-\kappa)(p_4-\kappa)}\\ \frac{\gamma_1}{\gamma_2}&=&\frac{(p_2+\kappa)(\lambda-p_2-p_3-p_4)}{p_2(\kappa+\lambda-p_2-p_3-p_4)}\\ \frac{\gamma_1}{\gamma_3}&=& \frac{(\kappa-p_3)(\lambda-p_2-p_3-p_4)}{p_3(\kappa+\lambda-p_2-p_3-p_4)}\\ \frac{\gamma_1}{\gamma_4}&=& \frac{(\kappa-p_4)(\lambda-p_2-p_3-p_4)}{p_4(\kappa+\lambda-p_2-p_3-p_4)}\\ \end{eqnarray*} Substituting the values of the weights $\gamma_i$ \eqref{weights6vpath} and $t=t_{6V'}[\xi]$ \eqref{txi6v}, we find the unique solution such that $\lambda,\kappa>0$: \begin{eqnarray} \frac{p_2}{\kappa}&=& -\frac{\sin(u+v+\eta)\sin(\xi)}{\sin(2\eta)\sin(u+v-\eta+\xi)} \qquad \frac{p_3}{\kappa}= \frac{\sin(u-v-3\eta)\sin(\xi)}{\sin(2\eta)\sin(u-v-\eta-\xi)} \nonumber \\ \frac{p_4}{\kappa}&=& \frac{\sin(u+v+3\eta)\sin(\xi)}{\sin(2\eta)\sin(u+v+\eta+\xi)}\nonumber \\ \frac{\kappa}{\lambda}&=& \frac{\sin(u-v-\eta-\xi)\sin(u-v+\eta-\xi)\sin(u+v-\eta+\xi)\sin(u+v+\eta+\xi)}{\sin(\xi-2\eta)\sin(\xi)\big(\cos(2\eta)- \cos(2u)\cos(2v+2\xi)\big)}\nonumber \\ &&\label{valal} \end{eqnarray} Using the parametrization $\kappa=\kappa_{6V'}[\xi]$, we may interpret the last equation as determining $\lambda$ as a function $\lambda_{6V'}[\xi]$ of the parameter $\xi$, where: \begin{equation}\label{lamofxi} \lambda_{6V'}[\xi]:=\kappa_{6V'}[\xi]\, \frac{\sin(\xi-2\eta)\sin(\xi)\big(\cos(2\eta)- \cos(2u)\cos(2v+2\xi)\big)}{\sin(u-v-\eta-\xi)\sin(u-v+\eta-\xi)\sin(u+v-\eta+\xi)\sin(u+v+\eta+\xi)} \end{equation} To summarize, we have found the most likely exit point $\kappa$ as an implicit function of the arbitrary parameter $\lambda$, via the parametric equations $(\kappa,\lambda)=(\kappa_{6V'}[\xi],\lambda_{6V'}[\xi])$, which results in the family of tangent lines with equations $F_\xi(x,y)=0$. The theorem follows from the expressions \eqref{acurve}, by identifying the slope $A[\xi]=2\kappa_{6V'}[\xi]/\lambda_{6V'}[\xi]$, while the range of the parameter $\xi$ corresponds to imposing $A[\xi] \in [0,\infty)$. \end{proof} \subsubsection{SE branch} As mentioned in Sect.~\ref{obsec}, a simple transformation of the model gives access to the portion of the arctic curve situated in the SE corner of the rectangular domain: we must change parameters $(u,v)\mapsto (u^*,v^*)=(-u,-v-\pi)$ and coordinates $(x,y)\mapsto (x,2-x-y)$. \begin{thm}\label{6VpSEthm} The SE branch of the arctic curve for the 6V' model is given by the parametric equations $$ x=X_{SE}^{6V'}[\xi]={X_{NE}^{6V'}}^*[\xi] \qquad y=Y_{SE}^{6V'}[\xi]=2-{Y_{NE}^{6V'}}^*[\xi]\qquad (\xi\in [\eta+|u|+v,0] )$$ with $X_{NE}^{6V'},Y_{NE}^{6V'}$ as in Theorem \ref{6VpNEthm}, and where the superscript $*$ stands for the transformation $(u,v)\mapsto (u^*,v^*)=(-u,-v-\pi)$, which we have also applied to the parameter range. \end{thm} \begin{remark}\label{symrem} In the case $v=-\frac{\pi}{2}=v^*$, we note that the equation of the tangent is invariant under $u\to -u=u^*$. We deduce that the arctic curve is symmetric w.r.t. the line $y=1$, and that the SE branch is simply the reflection of the NE branch: $X_{SE}=X_{NE}$, $Y_{SE}=2-Y_{NE}$. This is no longer true when $v\neq -\frac{\pi}{2}$. \end{remark} \subsection{Examples} In this section, we illustrate Theorems \ref{6VpNEthm} and \ref{6VpSEthm} with some concrete examples. 
\subsubsection{The ``6V" case $u=0$, $v=-\frac{\pi}{2}$} \begin{figure} \begin{center} \begin{minipage}{0.4\textwidth} \centering \includegraphics[width=4cm]{sixv.eps} \end{minipage}\hfill \begin{minipage}{0.6\textwidth} \centering \includegraphics[width=7.8cm]{6vvs6vp.eps} \end{minipage} \end{center} \caption{\small Arctic curves for the 6V' model with parameter $u=0$. Left: case $v=-\frac{\pi}{2}$, with $\eta$ ranging from $0^+$ (outermost curve) to $\frac{\pi}{2}^-$ (innermost curve): all curves are symmetric w.r.t. the line $y=1$, and coincide with those of the 6V-DWBC model (NE/SE portions). Right: Arctic curve of the 6V' model for $\eta=\frac{\pi}{3}$, $u=0$ and $v=-\frac{\pi}{2}-\frac{\pi}{12}$ (red curve) compared with the arctic curve of the 6V-DWBC model (scaled by a factor of 2) for the same value of $\eta$ and the value $u=\frac{\pi}{2}+\frac{\pi}{12}$ leading to the same Boltzmann weights (blue curve).} \label{fig:arctic1} \end{figure} The condition $u=0$ implies that all horizontal spectral parameters are equal, and that the Boltzmann weights (\ref{6voddweights}-\ref{6vevenweights}) lose their dependence on the parity of the row (upon taking $\rho_e=\rho_o=\rho$). In fact this gives a mapping to the weights \eqref{6vweights} of the ordinary 6V model via $(u_{6V'},v_{6V'})\mapsto (0,-u_{6V})$. We may wonder how the U-turn boundary condition has affected the thermodynamics of the 6V-DWBC model. In fact, extending the usual connection between ASM and VSASM, it is easy to identify the 6V' model at $u=0$ with a 6V-DWBC model on a grid of ``double" size $2n+1\times 2n+1$, and whose configurations are vertically symmetric, i.e. invariant under reflection w.r.t. a vertical line. As noted in Remark \ref{rem6v}, the parameter $u$ in the 6V-DWBC case may be interpreted as an anisotropy parameter. Indeed, the value $u=\frac{\pi}{2}$ corresponds for the 6V-DWBC model to identical weights $a=b$ which imply invariance of the partition function under reflection w.r.t. a horizontal line. However, when $u\neq \frac{\pi}{2}$, this is no longer true, as the weights $a\neq b$ are interchanged in the reflection. As a consequence, the tangency points of the arctic curve to the boundary of the domain move away from their symmetric positions. We expect therefore a connection between 6V-DWBC and 6V' models only at the isotropic point $u_{6V}=\frac{\pi}{2}$, corresponding to $(u_{6V'},v_{6V'})=(0,-\frac{\pi}{2})$. Note that this point corresponds to the $\tau$-enumeration of VSASM (for the 6V' side) and ASM (for the 6V side), with $\tau= 4\sin^2(\eta)$. \begin{thm}\label{tenum} For arbitrary $0<\eta<\frac{\pi}{2}$, the arctic curve for the 6V' model with $(u_{6V'},v_{6V'})=(0,-\frac{\pi}{2})$ as obtained via the Tangent Method assuming Conjecture \ref{freeconj} holds is identical to that of the 6V-DWBC model with $u_{6V}=\frac{\pi}{2}$ in the NE/SE sector, up to global rescaling. \end{thm} \begin{proof} As our choice of parameters is invariant under the symmetry $*$ for both the 6V' case $(u_{6V'},v_{6V'})^*=(u_{6V'},v_{6V'})=(0,-\frac{\pi}{2})$ and the 6V case $u_{6V}^*=u_{6V}=\frac{\pi}{2}$, we simply have to compare the envelope of the corresponding families of tangent lines leading to the NE branches, as given by Theorems \ref{6VNEthm} and \ref{6VpNEthm}. 
We have the two families (we add a superscript 6V,6V' to avoid ambiguities): $$ y+A^{6V}[\xi]x -B^{6V}[\xi]=0 \quad {\rm and} \quad y+A^{6V'}[\xi]x -B^{6V'}[\xi] =0 $$ We find: $$ \lim_{u\to 0,v\to-\frac{\pi}{2}} A^{6V'}[\xi]=\lim_{u\to \frac{\pi}{2}} A^{6V}[\xi],\qquad \lim_{u\to 0,v\to-\frac{\pi}{2}} B^{6V'}[\xi] =2 \lim_{u\to \frac{\pi}{2}} B^{6V}[\xi] $$ while the 6V and 6V' ranges of the parameter $\xi$ coincide, with $\xi \in [\eta-\frac{\pi}{2},0]$. We deduce that upon rescaling of $x$ and $y$ by a factor of $2$ the two families are identical, and conclude that $(X_{NE}^{6V'}[\xi],Y_{NE}^{6V'}[\xi])=2 (X_{NE}^{6V}[\xi],Y_{NE}^{6V}[\xi])$. The SE branch identification follows immediately from our remark on the symmetry $*$, leading to $(X_{SE}^{6V'}[\xi],Y_{SE}^{6V'}[\xi])=2 (X_{SE}^{6V}[\xi],Y_{SE}^{6V}[\xi])$ as well, and the Theorem follows. \end{proof} A particular case of Theorem \ref{tenum} corresponds to the uniform case where the 6V-DWBC model boils down to the enumeration of ASM, and the 6V' model to that of VSASM. The arctic curves for both these cases were derived in \cite{CP2009,CP2010} and \cite{DFLAP} respectively, and shown to coincide. For illustration, we have represented in Fig.~\ref{fig:arctic1} (left) the corresponding arctic curves for some values of $\eta$ ranging from $0^+$ to $\frac{\pi}{2}^-$: the curves are identical to the NE/SE portions of the arctic curve of the 6V-DWBC model, upon a rescaling by a global factor of $2$. For $\eta\to 0^+$, we find the following limiting arctic curve: $$(X_{NE},Y_{NE})\vert_{\eta\to 0}= \left( \frac{1}{\pi}(2\xi-\sin(2\xi)),1-\frac{1}{\pi}(2\xi+\sin(2\xi))\right)\quad (\xi\in [-\frac{\pi}{2},0]) $$ The limit $\eta\to \frac{\pi}{2}^-$ is singular, as the parameter ${\alpha}=\pi/(\pi-2\eta)$ diverges. However, one can take a double scaling limit $\eta=\frac{\pi}{2}-\epsilon$, $\xi=\epsilon \zeta$, and $\epsilon\to 0$, in which case the limiting curve reads: \begin{eqnarray*}X_{NE}\vert_{\epsilon\to 0}&=& \frac{(2+\zeta)^2\big(\cos(2\pi\zeta)-1+2\pi\zeta^2(\pi(1-\zeta^2)\cos(\pi\zeta)+2\zeta \sin(\pi\zeta))\big)}{4(1+\zeta+\zeta^2)\sin^2(\pi \zeta)}\\ Y_{NE}\vert_{\epsilon\to 0}&=& \frac{(1+\zeta)^2\big(3\sin^2(\pi\zeta)+\pi(1-\zeta)^2(\pi\zeta(2+\zeta)\cos(\pi\zeta)-2(1+\zeta)\sin(\pi\zeta))\big)}{2(1+\zeta+\zeta^2)\sin^2(\pi \zeta)} \end{eqnarray*} for $\zeta\in (-1,0]$. By contrast, in the anisotropic case where $u_{6V'}=0$ but $v_{6V'}\neq -\frac{\pi}{2}$ (and $u_{6V}=-v_{6V'}\neq \frac{\pi}{2}$), the arctic curves no longer coincide. For illustration, the predicted NE and SE portions of the arctic curve of the 6V' model at $\eta=\frac{\pi}{3}$, $u_{6V'}=0$, $v_{6V'}= -\frac{\pi}{2}-\frac{\pi}{12}$ are depicted in Fig.~\ref{fig:arctic1} (right), together with the arctic curve of the 6V-DWBC model with the same values of the weights (i.e. with the same $\eta$ and $u_{6V}=-v_{6V'}=\frac{\pi}{2}+\frac{\pi}{12}$): the resulting curves are very different. In particular, the 6V' curve is anchored at the endpoints $(-1,0)$ and $(-1,2)$ with horizontal tangents, whereas the 6V curve has horizontal tangents at different points $\big(2(1-\frac{2}{\sqrt{3}}),2\big)\simeq (-.309,2)$ and $\big(4(\frac{1}{\sqrt{3}}-1),0\big)\simeq (-1.69,0)$.
\subsubsection{The ``Free fermion" case $\eta=\frac{\pi}{4}$} \begin{figure} \begin{center} \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=5cm]{freevsym.eps} \end{minipage}\hfill \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=5cm]{anisofree.eps} \end{minipage} \end{center} \caption{\small Arctic curves for the free fermion case $\eta=\frac{\pi}{4}$ of the 6V' model. Left: symmetric case $v=-\frac{\pi}{2}$, with $u$ ranging from $0^+$ (outermost curve on the vertical $x=-1$) to $\frac{\pi}{4}^-$ (innermost curve on the vertical $x=-1$): all curves are symmetric w.r.t. the line $y=1$. Right: asymmetric case $v=-\frac{\pi}{2}-\frac{\pi}{12}$, with $u$ ranging from $0^+$ (outermost curve on the vertical $x=-1$) to $\frac{\pi}{6}^-$ (innermost curve on the vertical $x=-1$ degenerating to a segment).} \label{fig:arctic2} \end{figure} This case is nicer in the sense that arctic curves are expected to be analytic. In particular, we checked that the SE portion of the arctic curve is indeed the analytic continuation of the NE one. In Fig.~\ref{fig:arctic2} (left) we represent arctic curves for $\eta=\frac{\pi}{4}$ and the isotropic value $v=-\frac{\pi}{2}$ with $u$ ranging from $0^+$ to $\frac{\pi}{4}^-$. The $u=0$ arctic curve is given by $(x,y)=\big(\cos(2\xi)-1,\sin(2\xi)+1\big)$: it is the half-circle $(x+1)^2+(y-1)^2=1$ with $x\geq -1$, first obtained in \cite{PRarctic}. The case $u=\frac{\pi}{4}$ is singular. As before, we consider the double-scaling limit $u=\frac{\pi}{4}-\epsilon$ and $\xi=\epsilon \, \zeta$, leading to the limiting curve: $$(x,y)=\left(- \frac{\zeta^2}{1+\zeta^2},\frac{(1+\zeta)^2}{1+\zeta^2}\right) $$ equal to the ellipse $(2x+1)^2 +(y-1)^2 =1$ inscribed in the rectangle $[-1,0]\times [0,2]$. We see that the gap between the endpoints of the arctic curve on the vertical $x=-1$ ranges from $2$ (semi-circle case) to $0$ (ellipse case). This type of arctic curve was also encountered when considering lozenge tilings (an archetypical free fermion model) with free boundary conditions in \cite{DFR}. In Fig.~\ref{fig:arctic2} (right) we represent arctic curves for $\eta=\frac{\pi}{4}$ and a sample anisotropic value $v=-\frac{\pi}{2}-\frac{\pi}{12}$ with $u$ ranging from $0^+$ to $\frac{\pi}{6}^-$. The $u=0$ arctic curve is the quartic: $$(x,y)=\left(-\sin^2(\xi)\big( 1+\frac{\cos^2(\xi)}{\cos^2(\frac{\pi}{6}-\xi)}\big),\frac{1}{2}\,\frac{\cos^2(\frac{\pi}{6}+\xi)}{\cos^2(\frac{\pi}{6}-\xi)}\big(1+\sqrt{3}\sin(\frac{\pi}{3}+2\xi)\big)\right) $$ The limit $u\to \frac{\pi}{6}^-$ is singular, but the double scaling limit with $u=\frac{\pi}{6}-\epsilon$, $\xi= \zeta \sqrt{\epsilon}$, and $\epsilon\to 0$ leads to the segment $$ (x,y)=\left(-1+\frac{1}{1+\frac{4}{\sqrt{3}}\zeta^2},\frac{2}{1+\frac{4}{\sqrt{3}}\zeta^2}\right) \qquad (\zeta\in [0,\infty) )$$ that joins the point $(-1,0)$ to $(0,2)$.
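Indeed, eliminating $\zeta$ from the above parametrization yields the straight line $y=2(x+1)$, with the endpoints $(0,2)$ and $(-1,0)$ attained at $\zeta=0$ and $\zeta\to\infty$ respectively.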
\subsubsection{20V case} \begin{figure} \begin{center} \includegraphics[width=8cm]{20vfrom6vp.eps} \end{center} \caption{\small Arctic curve (NE and SE portions) of the ``20V point" of the 6V' model, with $\eta=u=\frac{\pi}{8}$, $v=-\frac{\pi}{2}$ (symmetric curve in red) together with the arctic curve of the associated 6V model, with $\eta=\frac{\pi}{8}$, $u=\frac{5\pi}{8}$, scaled by a factor of $2$.} \label{fig:arctic3} \end{figure} This case corresponds to $\eta=\frac{\pi}{8}$, $u=\eta=\frac{\pi}{8}$ and $v=-4\eta=-\frac{\pi}{2}$, by analogy with the 6V model with DWBC, whose partition function is identical to that of the uniformly weighted 20V model with DWBC1,2 studied in Refs.~\cite{DFGUI,BDFG}. The corresponding NE/SE portions of the arctic curve are depicted in Fig.~\ref{fig:arctic3}. \subsubsection{Generic case} \begin{figure} \begin{center} \includegraphics[width=5cm]{generic6vp.eps} \end{center} \caption{\small Arctic curve (NE and SE portions) of the 6V' model with $\eta=\frac{\pi}{3}$, $u=\frac{\pi}{12}$ and $v$ varying from $-\frac{\pi}{2}$ (leftmost curve on top) to $-\frac{\pi}{2}-\frac{\pi}{12}$ (rightmost curve on top).} \label{fig:arctic4} \end{figure} We present a ``generic case" in Fig.~\ref{fig:arctic4} with $\eta=\frac{\pi}{3}$, $u=\frac{\pi}{12}$ and $v$ varying from $-\frac{\pi}{2}$ to $-\frac{\pi}{2}-\frac{\pi}{12}$. As before, the case $v=-\frac{\pi}{2}-\frac{\pi}{12}$ is singular, but may be investigated via a double scaling limit, leading to the segment joining $(-1,0)$ to $(0,2)$. \section{20V model with DWBC3}\label{20vsec} \subsection{Partition function and one-point function} In Ref.~\cite{DF20V} the partition function of the 20V-DWBC3 model was related to that of the 6V' model, by use of the integrability of the weights \eqref{weights20V}. More precisely, let us denote by $Z_n^{20V}[u,{\mathbf v}]$ the semi-homogeneous partition function of the 20V-DWBC3 model, with all horizontal spectral parameters equal to $\eta+u$, all diagonal ones to $-u$ and arbitrary vertical spectral parameters ${\mathbf v}=v_1,v_2,...,v_n$, and by $Z_n^{6V'}[u,{\mathbf v}]$ the partition function of the 6V' model with horizontal spectral parameters all equal to $u$ and arbitrary vertical spectral parameters ${\mathbf v}$. We have: \begin{thm}\label{20v6vthm}{\cite{DF20V}} The following relation holds for all $n\geq 1$: \begin{equation}Z_n^{20V}[u,{\mathbf v}]={\alpha}^{n(3n-1)/2} Z_n^{6V'}[u,{\mathbf v}] \sin(2u+2\eta)^{n(3n-1)/2} \prod_{i=1}^n \sin(u-v_i-\eta)^{i-1} \sin(\eta-u-v_i)^{i} \end{equation} \end{thm} In the homogeneous case where $v_i=v$ for all $i$, this reduces to: \begin{equation}\label{vingtvpf}Z_n^{20V}[u,v]= {\alpha}^{n(3n-1)/2} Z_n^{6V'}[u,v] \sin(2u+2\eta)^{n(3n-1)/2} \sin(u-v-\eta)^{n(n-1)/2} \sin(\eta-u-v)^{n(n+1)/2} \end{equation} Next we define the one-point function $H_n^{20V}[u,v;\xi]$ as the ratio: \begin{eqnarray}\label{inho20v} H_n^{20V}[u,v;\xi]&:=&\frac{Z_n^{20V}[u,v;\xi]}{Z_n^{20V}[u,v]}\nonumber \\ &=& \left(\frac{\sin(u-v-\xi-\eta)}{\sin(u-v-\eta)}\right)^{n-1} \, \left(\frac{\sin(\eta-u-v-\xi)}{\sin(\eta-u-v)}\right)^n \, H_n^{6V'}[u,v;\xi] \end{eqnarray} where in $Z_n^{20V}[u,v;\xi]$ we have kept $v_1=v_2=\cdots=v_{n-1}=v$ but relaxed the last value $v_n=v+\xi$. As in the 6V and 6V' cases, this function will be a crucial ingredient of the Tangent Method.
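Note that the prefactors in \eqref{inho20v} follow directly from Theorem \ref{20v6vthm}: upon replacing $v_n\to v+\xi$, the powers of ${\alpha}$ and of $\sin(2u+2\eta)$, as well as the $i<n$ factors of the product, cancel out of the ratio, leaving the $i=n$ factors $\left(\sin(u-v-\xi-\eta)/\sin(u-v-\eta)\right)^{n-1}\left(\sin(\eta-u-v-\xi)/\sin(\eta-u-v)\right)^{n}$, while the ratio of the 6V' partition functions reproduces $H_n^{6V'}[u,v;\xi]$.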
\subsection{Refined one-point functions and asymptotics} \subsubsection{Refined partition function}\label{ref20vsec} \begin{figure} \begin{center} \includegraphics[width=7.5cm]{domain2.eps} \end{center} \caption{\small A sample contribution to the refined partition function $Z_{n,k}^{20V}[u,v]$. In this particular example, the contribution pertains to $Z_{n,k}^{20V \backslash}[u,v]$. The medallions detail the various weights involved in the last column.} \label{fig:ref20v} \end{figure} Let $Z_{n,k}^{20V}[u,v]$ denote the partition function of the 20V model on the quadrangle $\mathcal Q_n$ with uniform weights \eqref{weights20V}, in which the rightmost path is conditioned to first visit the rightmost column at position $k\in [1,2n-1]$ (see Fig.~\ref{fig:ref20v} for an illustration). We may split this partition function into $Z_{n,k}^{20V}[u,v]=Z_{n,k}^{20V-}[u,v]+Z_{n,k}^{20V \backslash}[u,v]$ according to whether the topmost path accesses the point $k$ via a horizontal $-$ or diagonal $\backslash$ step, before terminating with $k$ vertical steps until its endpoint. This quantity is easily related to the partially inhomogeneous partition function $Z_{n}^{20V}[u,v;\xi]$ \eqref{inho20v}. Recall that for the latter the weights are homogeneous with parameters $u,v$ except for the $n$-th column in which $v$ is replaced by $v+\xi$. Let ${\bar \omega_i}:=\omega_i[u,v+\xi]/\omega_i[u,v]$ be the {\it relative} Boltzmann weights for the last column, as compared to the homogeneous values. Specifically, using the weights: \begin{eqnarray*} &&{\bar \omega_0}=\frac{\sin(u-v-\xi+\eta)\, \sin(\eta-u-v-\xi)}{\sin(u-v+\eta)\, \sin(\eta-u-v)}, \quad {\bar \omega_2}=\frac{ \sin(u-v-\xi-\eta)}{ \sin(u-v-\eta)} \nonumber \\ &&{\bar \omega_1}= \frac{\sin(u-v-\xi-\eta)\, \sin(-u-v-\xi-\eta)}{\sin(u-v-\eta)\, \sin(-u-v-\eta)}, \quad {\bar \omega_4}=\frac{\sin(\eta-u-v-\xi)}{\sin(\eta-u-v)} \end{eqnarray*} we find the following relation, expressing the decomposition of the contributions to $Z_{n}^{20V}[u,v;\xi]$ according to the configurations of their topmost path (see Fig.~\ref{fig:ref20v} for an illustration): \begin{equation}\label{sumrule} \sum_{k=1}^{2n-1} \left({\bar\omega}_4 \,Z_{n,k}^{20V -}[u,v]+{\bar \omega}_2\,Z_{n,k}^{20V \backslash}[u,v]\right) {\bar \omega}_0^{2n-k-1} {\bar\omega}_1^{k-1} =Z_{n}^{20V}[u,v;\xi] \end{equation} Introducing the parameters $$\tau:=\frac{{\bar\omega}_1}{{\bar \omega}_0},\qquad \sigma:=\frac{{\bar \omega}_2}{{\bar\omega}_4}$$ this reads: $$Z_{n}^{20V}[u,v;\xi] ={\bar\omega}_4 \, {\bar \omega}_0^{2n-2} \sum_{k=1}^{2n-1} \tau^{k-1}\, (Z_{n,k}^{20V -}[u,v]+\sigma\, Z_{n,k}^{20V \backslash}[u,v])$$ \subsubsection{Refined one-point function} As in the 6V' case, the corresponding (normalized) refined one-point functions $H_{n,k}^{20V -}[u,v],H_{n,k}^{20V \backslash}[u,v]$ are ratios of slightly modified refined partition functions to the original homogeneous partition function $Z_n^{20V}[u,v]$. The corresponding configurations have a topmost path that stops at the point $k$ after a last step from the $(n-1)$-th vertical to the $n$-th one (see Fig.~\ref{fig:alltgt} top right, pink domain).
Compared to $Z_{n,k}^{20V -}[u,v],Z_{n,k}^{20V \backslash}[u,v]$, we must remove the last $k$ vertical steps of the topmost path, and thus replace the $k$ corresponding weights by $1$ (instead of $\omega_4,\omega_2$) for the turning vertex, and by $\omega_0$ (instead of $\omega_1$) for the $k-1$ vertices crossed by the path: \begin{equation}\label{1pt20v} H_{n,k}^{20V -}[u,v]=\frac{1}{\omega_4}\left(\frac{\omega_0}{\omega_1}\right)^{k-1} \frac{Z_{n,k}^{20V -}[u,v]}{Z_n^{20V}[u,v]},\qquad H_{n,k}^{20V \backslash}[u,v]=\frac{1}{\omega_2}\left(\frac{\omega_0}{\omega_1}\right)^{k-1} \frac{Z_{n,k}^{20V \backslash}[u,v]}{Z_n^{20V}[u,v]} \end{equation} We deduce the relation \begin{equation}\label{relaonept20v} H_n^{20V}[u,v;\xi]=\frac{Z_n^{20V}[u,v;\xi]}{Z_n^{20V}[u,v]}=\omega_4[u,v;\xi]\,{\bar \omega}_0^{2n-2} \sum_{k=1}^{2n-1} t^{k-1} (H_{n,k}^{20V -}[u,v]+s\, H_{n,k}^{20V \backslash}[u,v]) \end{equation} where we have used the parameters \begin{equation}\label{t20v} t= \tau \frac{\omega_1}{\omega_0}=\frac{\sin(u-v-\xi-\eta)\, \sin(-u-v-\xi-\eta)}{\sin(u-v-\xi+\eta)\, \sin(\eta-u-v-\xi)}=:t_{20V}[\xi],\quad s=\sigma \frac{\omega_2}{\omega_4}= \frac{ \sin(u-v-\xi-\eta)}{\sin(\eta-u-v-\xi)} \end{equation} We note that the function $t_{20V}[\xi]$ is identical to $t_{6V'}[\xi]$ of the 6V' model \eqref{txi6v}. \subsubsection{Relation to 6V' one-point function} Using eq.~\eqref{inho20v}, and noting moreover that ${\bar a}_o{\bar a}_e={\bar \omega}_0$, we may express: \begin{equation}\label{relutfin} \frac{H_n^{20V}[u,v;\xi]}{{\bar \omega}_0^{2n-1}}=\left(\frac{\sin(u-v-\xi-\eta)}{\sin(u-v-\eta)}\right)^{n-1} \, \left(\frac{\sin(u-v+\eta)}{\sin(u-v-\xi+\eta)}\right)^n\, \frac{H_n^{6V'}[u,v;\xi]}{({\bar a}_o{\bar a}_e)^{n-1}} \end{equation} \subsubsection{Asymptotics} We now turn to large $n=N$ asymptotics of the one-point functions \eqref{1pt20v} with the scaled exit point position $\kappa =k/(2N)$ kept finite. We first note that the relation \eqref{relutfin} yields the large $N$ asymptotics \begin{eqnarray}\frac{H_N^{20V}[u,v;\xi]}{{\bar \omega}_0^{2N-1}}&\simeq& e^{-N\,\varphi^{20V}[u,v,\xi]} \nonumber \\ \varphi^{20V}[u,v;\xi]&=&\varphi^{6V'}[u,v,\xi]-{\rm Log}\left(\frac{\sin(u-v-\xi-\eta)\sin(u-v+\eta)}{\sin(u-v-\xi+\eta)\sin(u-v-\eta)}\right) \nonumber \\ &&\!\!\!\!\!\!\!\!\!\! =-{\rm Log}\left(\frac{\sin(2v)\sin^2(u-v-\xi-\eta)\,\sin(u+v+\xi+\eta)\,\sin(u-v+\eta)}{\sin(2v+\xi)\sin^2(u-v-\eta)\,\sin(u+v+\eta)\,\sin(u-v-\xi+\eta)}\right)\nonumber \\ &&\!\!\!\!\!\!\!\!\!\! -{\rm Log}\left( \frac{\sin({\alpha}\xi)\sin({\alpha}(\xi+2v+2\eta))\sin({\alpha}(u-v-\eta))\sin({\alpha}(u+v+\eta))}{{\alpha} \,\sin(\xi)\sin(2{\alpha} (v+\eta))\sin({\alpha}(u-v-\xi-\eta))\sin({\alpha}(u+v+\xi+\eta))} \right)\label{asymptoh20v} \end{eqnarray} As the parameter $s$ is finite and independent of $k$, using the relation \eqref{relaonept20v}, the connection \eqref{relutfin} of $H_n^{20V}[u,v;\xi]$ to the 6V' one-point function, and finally the asymptotics \eqref{asymptoh20v}, we get identical leading behaviors for both one-point functions.
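Incidentally, the identity $t_{20V}[\xi]=t_{6V'}[\xi]$ noted above is immediate: writing $\sin(-u-v-\xi-\eta)=-\sin(u+v+\xi+\eta)$ and $\sin(\eta-u-v-\xi)=-\sin(u+v+\xi-\eta)$, the two sign flips compensate in the ratio \eqref{t20v}, which then coincides with \eqref{txi6v}.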
\begin{thm}\label{20vasyonethm} The large $n=N$ scaling limit of the refined one-point functions $H_{n,k}^{20V -}[u,v]$ and $H_{n,k}^{20V \backslash}[u,v]$ reads: \begin{eqnarray*} H_{N,2\kappa N}^{20V -}[u,v]&\simeq& H_{N,2\kappa N}^{20V \backslash}[u,v]\simeq \oint \frac{dt }{2i\pi t} e^{-N\, S_0^{20V}(\kappa,t)} \\ S_0^{20V}(\kappa,t)&:=& \varphi^{20V}[u,v;\xi]+2\kappa\,{\rm Log}(t) \end{eqnarray*} where $\varphi^{20V}[u,v;\xi]$ is as in \eqref{asymptoh20v}, and in which the variables $t$ and $\xi$ are related via $t=t_{20V}[\xi]$ \eqref{t20v}. \end{thm} As before, the integral is dominated at large $N$ by the solution of the saddle-point equation $\partial_t S_0^{20V}(\kappa,t)=0$, or equivalently, changing integration variables to $\xi$: $\partial_\xi S_0^{20V}(\kappa,t_{20V}[\xi])=0$. Using the identification $t_{20V}[\xi]=t_{6V'}[\xi]$, this is easily solved as \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\! \kappa= \kappa_{20V}[\xi]:=-\frac{1}{2}\,\frac{t_{6V'}[\xi]}{\partial_\xi t_{6V'}[\xi]}\, \partial_\xi \varphi^{20V}[u,v;\xi]\nonumber \\ &&\!\!\!\!\!\!\!\!\!\!\! =\frac{\kappa_{6V'}[\xi]}{2}+\frac{\cot(u-v-\xi+\eta)-\cot(u-v-\xi-\eta)}{2\,\sin(2\eta)}\,\frac{\sin^2(u-v-\xi+\eta)\sin^2(u-v-\xi-\eta)}{\cos(2u)\cos(2\xi+2v)-\cos(2\eta)}\nonumber \\ \label{kappa20v} \end{eqnarray} with $\kappa_{6V'}[\xi]$ as in (\ref{sapo6v}-\ref{rsol}). \subsection{Paths} $\ $ \subsubsection{Partition function} With the setting of Fig.~\ref{fig:alltgt} (top right, light blue domain), we wish to compute the partition function $Y_{k,\ell}(\beta_1,\beta_2)$ of a single (Schr\"oder) path of the 20V model in the first quadrant ${\mathbb Z}_+^2$, with starting point $(0,k)$ and endpoint $(\ell,0)$. We include a weight $\beta_1,\beta_2$ according to the configuration of the step taken before entering the path domain (last step in the pink domain, respectively horizontal or diagonal). The paths receive homogeneous 20V weights \eqref{weights20V}, with horizontal, vertical, diagonal uniform spectral parameters $u+\eta,\ v,\ -u$ respectively, while all vertices not visited by the path receive the weight $\omega_0$. As in the previous cases, we may factor out an unimportant overall factor $\omega_0^{k\ell}$ (where $k\ell$ is the area of the light blue rectangle $[0,\ell]\times [0,k]$ in Fig.~\ref{fig:alltgt} top right), and weight the vertices visited by the path by an extra factor $\frac{1}{\omega_0}$. The partition function $Y_{k,\ell}(\beta_1,\beta_2)$ is computed by use of a transfer matrix technique (see \cite{BDFG}, Appendix B, for details with slightly different definitions). Each path is travelled from N,W to S,E, and the transfer matrix is a $3\times 3$ matrix whose entries correspond to the vertex weight for the transition from the entering step at each visited vertex to the outgoing step. The three states are $(-,\backslash,\vert)$ for respectively a horizontal, diagonal, vertical step ending at the transition vertex. Moreover, we include an extra weight $z,zw,w$ per horizontal, diagonal, vertical outgoing step respectively. Note that the step prior to entering the quadrant (exit from the rectangular domain) may be either horizontal (with an extra weight $\beta_1$) or diagonal (with an extra weight $\beta_2$), while the last step is vertical.
The transfer matrix $T_{20V}$ reads: $$T_{20V}=\frac{1}{\omega_0} \begin{pmatrix} \omega_6 z & \omega_5 z & \omega_4 z\\ \omega_5 z w & \omega_3 z w & \omega_2 z w \\ \omega_4 w & \omega_2 w & \omega_1 w \end{pmatrix}$$ The generating function for the $Y_{k,\ell}$ reads $$\sum_{k,\ell\geq 0} Y_{k,\ell}(\beta_1,\beta_2) \,z^k w^{\ell+1} = (0,0,1) ({\mathbb I}- T_{20V})^{-1} \begin{pmatrix} \beta_1\\ \beta_2\\ 0\end{pmatrix} $$ This is a rational fraction with denominator $\det({\mathbb I}- T_{20V})=1-{\alpha}_1 w-{\alpha}_2 z -{\alpha}_3 z w-{\alpha}_4 zw^2-{\alpha}_5 z^2w-{\alpha}_6 z^2w^2$, where \begin{eqnarray} &&{\alpha}_1=\frac{\omega_1}{\omega_0},\quad {\alpha}_2=\frac{\omega_6}{\omega_0},\quad {\alpha}_3=\frac{\omega_0\omega_3+\omega_4^2-\omega_1\omega_6}{\omega_0^2}\nonumber\\ && \label{denom}\\ &&{\alpha}_4=\frac{\omega_2^2-\omega_1\omega_3}{\omega_0^2} ,\quad {\alpha}_5=\frac{\omega_5^2-\omega_6\omega_3}{\omega_0^2}, \quad {\alpha}_6=\frac{2\omega_2\omega_4\omega_5+\omega_1\omega_6\omega_3-\omega_3\omega_4^2-\omega_1\omega_5^2-\omega_6\omega_2^2}{\omega_0^3} \nonumber \end{eqnarray} \subsubsection{Asymptotics} We now consider the large $n=N,k,\ell$ limit, with $\kappa= k/(2N)$ and $\lambda=\ell/N$ fixed. Like in Sect. \ref{secasym6v} above, the asymptotics of $Y_{k,\ell}$ are determined by the denominator \eqref{denom}, and read (see also Ref. \cite{BDFG} appendix B for details): \begin{eqnarray} &&\qquad \qquad\quad \ Y_{2\kappa N,\lambda N}\simeq \int_0^1 dp_3 dp_4 dp_5 dp_6 e^{-NS_1^{20V}(\kappa,p_3,p_4,p_5,p_6)}\nonumber \\ &&S_1^{20V}(\kappa,p_3,p_4,p_5,p_6)=-(2\kappa+\lambda-p_3-2p_4-2p_5-3p_6){\rm Log}(2\kappa+\lambda-p_3-2p_4-2p_5-3p_6)\nonumber \\ &&\qquad\qquad\qquad\qquad +(2\kappa-p_3-2p_4-p_5-2p_6){\rm Log}\left(\frac{2\kappa-p_3-2p_4-p_5-2p_6}{{\alpha}_1}\right)\nonumber \\ &&\qquad\qquad\qquad\qquad+(\lambda-p_3-p_4-2p_5-2p_6){\rm Log}\left(\frac{\lambda-p_3-p_4-2p_5-2p_6}{{\alpha}_2}\right)\nonumber \\ &&\qquad\qquad\qquad\qquad+\sum_{i=3}^6 p_i{\rm Log}\left(\frac{p_i}{{\alpha}_i}\right) \label{asymptopath20v} \end{eqnarray} As before this also covers the case of vanishing weights ${\alpha}_i$ by taking the limit $p_i\to 0$ at finite ${\alpha}_i$ in the above. \subsection{Arctic curves} \begin{thm}\label{20VNEthm} The NE branch of the arctic curve for the 20V-DWBC3 model on the quadrangle $\mathcal Q_n$ is predicted by the Tangent Method to be: $$x=X_{NE}^{20V}[\xi]=\frac{B'[\xi]}{A'[\xi]}, \qquad y=Y_{NE}^{20V}[\xi]= B[\xi]-\frac{A[\xi]}{A'[\xi]}B'[\xi] $$ where $B[\xi]=2 \kappa_{20V}[\xi]$ with $\kappa_{20V}[\xi]$ as in \eqref{kappa20v}, and where $A[\xi]$ is given by \begin{eqnarray} A[\xi] &=& \frac{\cos(2\eta)-\cos(u+v+\eta)\cos(u+v-\eta+2\xi)}{\cos(2\eta)-\cos(2u)\cos(2v+2\xi)}\, \frac{\sin(u-v-\eta-\xi)\sin(u-v+\eta-\xi)}{\sin(\xi)\sin(\xi-2\eta)}\nonumber \\ \label{A20v} \end{eqnarray} and with the parameter range: $$ \xi\in \big[\eta+u-v-\pi,0\big] $$ \end{thm} \begin{proof} We may now bring together the ingredients of the Tangent Method. We determine the family of tangents $F_\xi(x,y)=y+A[\xi]x-B[\xi]$ defined in Sect. \ref{sectan}. We already identified the intercept $B[\xi]=2\kappa_{20V}[\xi]$ with $\kappa_{20V}[\xi]$ given by \eqref{kappa20v}. 
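Recall that these rescaled tangent lines pass through the most likely exit point $(0,2\kappa)$ and the path endpoint $(\lambda,0)$, so that the intercept is $B=2\kappa$ and the slope parameter is $A=2\kappa/\lambda$.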
To determine the slope $A[\xi]=2\kappa/\lambda$, we must find the leading contribution to the total partition function \begin{eqnarray*} &&\sum_{k=1}^{2n-1} H_{N,k}[u,v] \, Y_{k,\ell} \simeq \int_0^1 d\kappa H_{N,2\kappa N}[u,v]\, Y_{2\kappa N,\lambda N}\simeq \int_0^1 d\kappa\, dp_3\, dp_4\, dp_5\, dp_6\, e^{-N S^{20V}(\kappa,p_3,p_4,p_5,p_6,t)} \\ &&S^{20V}(\kappa,p_3,p_4,p_5,p_6,t):=S_0^{20V}(\kappa,t)+S_1^{20V}(\kappa,p_3,p_4,p_5,p_6) \end{eqnarray*} with $S_0^{20V}(\kappa,t)$ as in Theorem \ref{20vasyonethm} and $S_1^{20V}(\kappa,p_3,p_4,p_5,p_6)$ as in \eqref{asymptopath20v}. As in the 6V case, the saddle-point equation $\partial_\xi S^{20V}=0$ is solved by \eqref{kappa20v}, and amounts to parameterizing $\kappa=\kappa_{20V}[\xi]$ in terms of the parameter $\xi$. The saddle-point equations $\partial_\kappa S^{20V}=\partial_{p_3}S^{20V}=\partial_{p_4}S^{20V}=\partial_{p_5}S^{20V}=\partial_{p_6}S^{20V}=0$ give rise to the system of algebraic equations: \begin{eqnarray*} \frac{t}{{\alpha}_1}&=&\frac{2\kappa+\lambda-p_3-2p_4-2p_5-3p_6}{2\kappa-p_3-2p_4-p_5-2p_6} \\ \frac{{\alpha}_1{\alpha}_2\,p_3}{{\alpha}_3}&=&\frac{(2\kappa-p_3-2p_4-p_5-2p_6)(\lambda-p_3-p_4-2p_5-2p_6)}{2\kappa+\lambda-p_3-2p_4-2p_5-3p_6}\\ \frac{{\alpha}_1^2{\alpha}_2\,p_4}{{\alpha}_4}&=&\frac{(2\kappa-p_3-2p_4-p_5-2p_6)^2(\lambda-p_3-p_4-2p_5-2p_6)}{(2\kappa+\lambda-p_3-2p_4-2p_5-3p_6)^2}\\ \frac{{\alpha}_1{\alpha}_2^2\,p_5}{{\alpha}_5}&=&\frac{(2\kappa-p_3-2p_4-p_5-2p_6)(\lambda-p_3-p_4-2p_5-2p_6)^2}{(2\kappa+\lambda-p_3-2p_4-2p_5-3p_6)^2}\\ \frac{{\alpha}_1^2{\alpha}_2^2\,p_6}{{\alpha}_6}&=&\frac{(2\kappa-p_3-2p_4-p_5-2p_6)^2(\lambda-p_3-p_4-2p_5-2p_6)^2}{(2\kappa+\lambda-p_3-2p_4-2p_5-3p_6)^3} \end{eqnarray*} Substituting the values of $t=t_{20V}[\xi]$ \eqref{t20v} and of the weights ${\alpha}_i$ \eqref{denom} expressed using \eqref{weights20V}: \begin{eqnarray*} && {\alpha}_1=\frac{\sin(u-v-\eta)\sin(u+v+\eta)}{\sin(u-v+\eta)\sin(u+v-\eta)}, \quad {\alpha}_2=\frac{\sin(2u)\sin(u-v-\eta)}{\sin(2u+2\eta)\sin(u-v+\eta)}\\ && {\alpha}_3= \frac{2\sin(2\eta)\sin(2u)\big(\sin(u-v+{\scriptstyle \frac{\pi}{4}})\sin(u+v+{\scriptstyle \frac{\pi}{4}})-\sin^2(2\eta)\big)}{\sin(2u+2\eta)\sin(u-v+\eta)\sin(u+v-\eta)}\\ && {\alpha}_4=\frac{\sin(2u)\sin(u-v-\eta)\sin(u+v+3\eta)}{\sin(2u+2\eta)\sin(u-v+\eta)\sin(\eta-u-v)}\\ &&{\alpha}_5= \frac{\sin(2u-2\eta)\sin(u-v-\eta)\sin(u+v+\eta)}{\sin(2u+2\eta)\sin(u-v+\eta)\sin(\eta-u-v)}\\ &&{\alpha}_6=\frac{\sin(u-v-3\eta)\sin(u+v+3\eta)\sin(2u-2\eta)}{\sin(u-v+\eta)\sin(u+v-\eta)\sin(2u+2\eta)} \end{eqnarray*} we find the unique solution such that $\lambda,\kappa>0$: \begin{eqnarray} \frac{p_3}{\kappa}&=& \frac{2\sin(\xi-2\eta)\sin(\xi)\sin(u+v+\xi-\eta)\sin(u+v+\xi+\eta)}{\sin(2\eta)\sin(u-v-\eta)\big(\cos(2u)\cos(u+v+\eta)-\cos(2\eta)\cos(u-v-2\xi+\eta)\big)}\nonumber \\ &&\qquad \times\, \frac{\cos^2(2u)-\cos(4\eta)-\sin(2u)\sin(2v+2\eta)}{\cos(2\eta)-\cos(u+v+\eta)\cos(u+v+2\xi-\eta)}\nonumber \\ \frac{p_4}{\kappa}&=& \frac{\sin(2u)\sin(u+v+3\eta)\sin(u-v-\xi+\eta)}{\sin^2(2\eta)\sin(u-v-\xi-\eta)\big(\cos(2u)\cos(u+v+\eta)-\cos(2\eta)\cos(u-v-2\xi+\eta)\big)} \nonumber \\ &&\qquad \times \ \frac{\sin(\xi-2\eta)\sin(\xi)\sin^2(u+v+\xi-\eta)}{\cos(2\eta)-\cos(u+v+\eta)\cos(u+v+2\xi-\eta)} \nonumber \\ \frac{p_5}{\kappa}&=& \frac{2\sin(2u-2\eta)\sin(u+v+\eta)}{\sin^2(2\eta)\big(\cos(2u)\cos(u+v+\eta)-\cos(2\eta)\cos(u-v-2\xi+\eta)\big)}\nonumber \\ &&\qquad \times \ \frac{\sin^2(\xi)\sin^2(u+v+\xi+\eta)}{\cos(2\eta)-\cos(u+v+\eta)\cos(u+v+2\xi-\eta)} \nonumber \\ \frac{p_6}{\kappa}&=&
\frac{2\sin(2u-2\eta)\sin(u-v-3\eta)\sin(u+v+3\eta)\sin^2(\xi)}{\sin^2(2\eta)\sin(u-v-\eta)\big(\cos(2u)\cos(u+v+\eta)-\cos(2\eta)\cos(u-v-2\xi+\eta)\big)}\nonumber \\ &&\qquad \times \ \frac{\sin(u-v-\xi+\eta)\sin(u+v+\xi-\eta)\sin(u+v+\xi+\eta)}{\sin(u-v-\xi-\eta)\big(\cos(2\eta)-\cos(u+v+\eta)\cos(u+v+2\xi-\eta)\big)}\nonumber \\ \frac{\kappa}{\lambda}&=& \frac{\sin(u-v-\xi-\eta)\sin(u-v-\xi+\eta)\big(\cos(2\eta)-\cos(u+v+\eta)\cos(u+v+2\xi-\eta)\big)}{2\sin(\xi-2\eta)\sin(\xi)\big(\cos(2\eta)- \cos(2u)\cos(2v+2\xi)\big)}\nonumber \\ &&\label{valal20v} \end{eqnarray} Using the parametrization $\kappa=\kappa_{20V}[\xi]$, we may interpret the last equation as determining $\lambda$ as a function $\lambda_{20V}[\xi]$ of the parameter $\xi$, where: \begin{eqnarray} \lambda_{20V}[\xi]&:=&\kappa_{20V}[\xi]\, \frac{2 \sin(\xi)\sin(\xi-2\eta)}{\sin(u-v-\eta-\xi)\sin(u-v+\eta-\xi)} \nonumber \\ &&\qquad \times\ \frac{\cos(2\eta)-\cos(u+v+\eta)\cos(u+v-\eta+2\xi)}{\cos(2\eta)-\cos(2u)\cos(2v+2\xi)}\label{lam20vofxi} \end{eqnarray} To summarize, we have found the most likely exit point $\kappa$ as an implicit function of the arbitrary parameter $\lambda$, via the parametric equations $(\kappa,\lambda)=(\kappa_{20V}[\xi],\lambda_{20V}[\xi])$, which results in the family of tangent lines $F_\xi(x,y)=0$. The theorem follows from the expressions \eqref{acurve}, by identifying the slope $A[\xi]=2\kappa_{20V}[\xi]/\lambda_{20V}[\xi]$, while the range of the parameter $\xi$ corresponds to imposing $A[\xi] \in [0,\infty)$. \end{proof} As explained in Section \ref{obsec}, the SE branch of the arctic curve is easily obtained by applying the transformation $(u,v)\mapsto (u^*,v^*)=(u,-v-\pi)$ and the change of coordinates $(x,y)\mapsto (x,2-x-y)$. \begin{thm}\label{20VSEthm} The SE branch of the arctic curve for the 20V-DWBC3 model is given by the parametric equations $$ x=X_{SE}^{20V}[\xi]={X_{NE}^{20V}}[\xi]^* \qquad y=Y_{SE}^{20V}[\xi]=2-X_{NE}^{20V}[\xi]^*-{Y_{NE}^{20V}}[\xi]^*\qquad (\xi\in \big[\eta+u+v,0\big] )$$ with $X_{NE}^{20V},Y_{NE}^{20V}$ as in Theorem \ref{20VNEthm}, and where the superscript $*$ stands for the transformation $(u,v)\mapsto (u^*,v^*)=(u,-v-\pi)$, which we have also applied to the range of $\xi$. \end{thm} \subsection{Examples} $\ $ \begin{figure} \begin{center} \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=5cm]{20vflat.eps} \end{minipage}\hfill \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=5cm]{20vskew.eps} \end{minipage} \end{center} \caption{\small Left: Arctic curve of the 20V-DWBC3 in the cases $u=0$, $v=-\frac{\pi}{2}$ and $\eta$ varying from $0^+$ (outermost curve) to $\frac{\pi}{2}^-$ (innermost curve). Right: Arctic curve of the 20V-DWBC3 in the cases $u=0$, $\eta=\frac{\pi}{6}$ and $v$ varying between $-\frac{\pi}{2}-\frac{\pi}{6}$ (topmost NE curve) and $-\frac{\pi}{2}$ (bottommost).} \label{fig:arctic5} \end{figure} We now illustrate the results of Theorems \ref{20VNEthm} and \ref{20VSEthm} in a few examples. \subsubsection{\bf Case $u=0$.} In this case the arctic curve is entirely made of its NE and SE portions, as it touches the W boundary at points $(-1,1)$ and $(-1,2)$, both with a tangent of slope $-1/2$ (corresponding to $A=1/2$) for all values of $\eta,v$. We have represented in Fig.~\ref{fig:arctic5} (left) the arctic curves for the self-dual value $v=-\frac{\pi}{2}$ and for $\eta$ ranging from $0^+$ to $\frac{\pi}{2}^-$.
The arctic curve for $\eta=0$ reads: $$ (X_{NE}[\xi],Y_{NE}[\xi])=\left(\frac{2\xi-\sin(2\xi)}{\pi},1-2\frac{\xi}{\pi} \right) \qquad (\xi\in [-\frac{\pi}{2},0])$$ The limit $\eta\to\frac{\pi}{2}^-$ is singular; however, we find a finite result by setting $\eta=\frac{\pi}{2}-\epsilon$ and $\xi=\epsilon \zeta$, and then sending $\epsilon\to 0$, with the result: \begin{eqnarray*}X_{NE}&=& \frac{(2+\zeta)^2\big(\cos(2\pi\zeta)-1+2\pi\zeta^2(\pi(1-\zeta^2)\cos(\pi\zeta)+2\zeta \sin(\pi\zeta))\big)}{4(1+\zeta+\zeta^2)\sin^2(\pi \zeta)}\\ Y_{NE}&=&1+\frac{1}{\zeta}-\frac{\pi(1-\zeta^2)}{2\sin(\pi\zeta)}\\ &&+\frac{(2+\zeta)(2\zeta^2+2\zeta-1)\big(\cos(2\pi\zeta)-1+2\pi\zeta^2(\pi(1-\zeta^2)\cos(\pi\zeta)+2\zeta \sin(\pi\zeta))\big)}{8(1+\zeta+\zeta^2)\sin^2(\pi \zeta)} \end{eqnarray*} In all these cases, the SE branch is given by $(X_{SE},Y_{SE})=(X_{NE},2-X_{NE}-Y_{NE})$ as $v=v^*$. We also represent non-self-dual cases in Fig.~\ref{fig:arctic5} (right), for $u=0$, $\eta=\frac{\pi}{6}$ and $v$ varying between $-\frac{\pi}{2}-\frac{\pi}{6}$ and $-\frac{\pi}{2}$. We see that the tangency point on the vertical $x=0$ moves away from the self-dual point $(0,1)$, and that the curves are no longer nested as in the $v=-\frac{\pi}{2}$ case. \subsubsection{\bf Uniform case.} \begin{figure} \begin{center} \begin{minipage}{0.4\textwidth} \centering \includegraphics[width=4.cm]{arctic20v.eps} \end{minipage}\hfill \begin{minipage}{0.59\textwidth} \centering \includegraphics[width=6.cm]{clover20v.eps} \end{minipage} \end{center} \caption{\small Left: Arctic curve (NE=red and SE=blue portions) of the uniform 20V-DWBC3 model on its rescaled domain (black), corresponding to $\eta=\frac{\pi}{8}$, $u=\frac{\pi}{8}$ and $v=-\frac{\pi}{2}$. Right: Arctic curve of the uniform 20V-DWBC3 model (NE branch in thicker blue line, SE branch in dashed black), together with the analytic continuation of its NE portion (in red). The arrow indicates the shear transformation from the latter to the SE branch.} \label{fig:arctic5b} \end{figure} As is easily checked on the weights \eqref{weights20V}, the uniform case corresponds to $\eta=\frac{\pi}{8}=u$, $v=-\frac{\pi}{2}$ and $\nu=\sqrt{2}$ \eqref{combipoint20v}. The NE and SE portions of the arctic curve predicted by Theorems \ref{20VNEthm} and \ref{20VSEthm} have a vertical tangent at $(0,1)$, a horizontal tangent at $\big( \frac{2}{3}(\sqrt{3}-3),2\big)\simeq (-.845,2)$ and a diagonal tangent of slope $-1$ at $\big( \frac{2}{3}(\sqrt{3}-3),\frac{2}{3}(3-\sqrt{3})\big)\simeq (-.845,.845)$. We have represented in Fig.~\ref{fig:arctic5b} (left) the NE and SE portions of the arctic curve together with the rescaled quadrangular domain $\lim_{n\to \infty} \mathcal Q_n/n$. As pointed out before, the Tangent Method does not allow one to predict the NW and SW portions of the arctic curve. It is interesting, however, to notice that the NE portion of the curve is algebraic. With a suitable shift of the origin to the point $(-2,1)$, namely substituting $(x,y)\to (x-2,y+1)$, we obtain the following algebraic equation: \begin{equation}\label{algebraic} 3^6 (x^2+y^2-{\scriptstyle \frac{2}{3}})^5 -5^3\, 3^3 (x^2+y^2-{\scriptstyle \frac{2}{3}})^3-2\, 3^2\,5^4 (x^2+y^2-{\scriptstyle \frac{2}{3}})^2-2^2\, 5^5 (x^2+y^2-4 x^2 y^2)=0 \end{equation} We have represented this algebraic curve in Fig.~\ref{fig:arctic5b} (right) together with the NE portion of the uniform 20V-DWBC3 curve, and the scaled quadrangular domain (in black).
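Note also that, in the shifted coordinates, the polynomial in \eqref{algebraic} involves $x,y$ only through the combinations $x^2+y^2$ and $x^2y^2$: the curve is therefore invariant under the dihedral symmetry group of the square (the reflections $x\to -x$, $y\to -y$ and the exchange $x\leftrightarrow y$), a feature that will be apparent in the clover shape of Fig.~\ref{fig:cloverdt} below.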
We see that the SE portion of the arctic curve (dashed black curve) is obtained as the shear of the analytic continuation of the NE portion (red curve). \subsubsection{\bf Free fermion case} \begin{figure} \begin{center} \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=5cm]{free20vone.eps} \end{minipage}\hfill \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=5cm]{free20vtwo.eps} \end{minipage} \end{center} \caption{\small Left: Arctic curve of the free fermion 20V-DWBC3 model for $\eta=\frac{\pi}{4}$, $u=\frac{\pi}{16}$ and $v$ varying from $-\frac{\pi}{2}$ (topmost) to $-\frac{\pi}{2}+\frac{3\pi}{16}$ (bottommost). Right: same, but with $v$ varying from $-\frac{\pi}{2}-\frac{3\pi}{16}$ (topmost) to $-\frac{\pi}{2}$ (bottommost).} \label{fig:arctic6} \end{figure} In view of the connection to the 6V' model (with the same values of $\eta,u,v$) it is clear that $\eta=\frac{\pi}{4}$ plays the role of the free fermion point. In particular, we expect the arctic curve to be analytic. As a highly non-trivial check, we have verified that at $\eta=\frac{\pi}{4}$ and for all allowed values of $u,v$ the SE branch is the analytic continuation of the NE branch. As in the 6V' case, we also get access to the NW and SW branches via analytic continuation. We have represented in Fig.~\ref{fig:arctic6} a sequence of cases with $\eta=\frac{\pi}{4}$, $u=\frac{\pi}{16}$ and $v$ varying (1) from $-\frac{\pi}{2}$ to $-\frac{\pi}{2}+\frac{3\pi}{16}$ (left) and (2) from $-\frac{\pi}{2}-\frac{3\pi}{16}$ to $-\frac{\pi}{2}$ (right). We see that in case (1) the curve is anchored at the point $(-1,2)$ while the other end along the vertical $x=-1$ varies along the W boundary. The reverse phenomenon is observed in case (2), where the curve is anchored at the point $(-1,1)$ and its other end varies along the W boundary. \subsubsection{\bf Generic case} \begin{figure} \begin{center} \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=5cm]{20vgenericone.eps} \end{minipage}\hfill \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=5cm]{20vgenerictwo.eps} \end{minipage} \end{center} \caption{\small Left: Arctic curve of the 20V-DWBC3 model for $\eta=\frac{\pi}{8}$, $v=-\frac{\pi}{2}-\frac{\pi}{32}$ and $u$ varying from $0$ (innermost) to $\frac{3\pi}{16}$. Right: same, but with $u$ varying from $\frac{3\pi}{16}$ (bottommost) to $\frac{11\pi}{32}$ (segment).} \label{fig:arctic7} \end{figure} We finally present in Fig.~\ref{fig:arctic7} a ``generic" case with no special symmetry: $\eta=\frac{\pi}{8}$, and $v=-\frac{\pi}{2}-\frac{\pi}{32}\neq v^*=-\frac{\pi}{2}+\frac{\pi}{32}$, for $u$ varying from $0$ to $\frac{11\pi}{32}=\pi+v-\eta$. The last value is singular, and must be approached as $u=\frac{11\pi}{32}-\epsilon$, $\xi=\epsilon^{1/2}\,\zeta$ with $\epsilon\to 0$. The result is a line segment joining the points $(-1,1)$ and $(0,2)$. \section{Aztec triangle domino tilings}\label{DTsec} \subsection{Partition function and one-point functions} In Ref.~\cite{DF20V}, a correspondence was established between the 20V-DWBC3 model on $\mathcal Q_n$ and the Domino Tiling problem of the Aztec triangle $\mathcal T_n$.
First, it was shown that the models share the same uniformly weighted partition function (total number of configurations): $$ Z_n^{20V}=Z_n^{DT} $$ Next, this correspondence was refined by considering the (uniformly weighted) 20V-DWBC3 refined partition functions $Z_{n,k}^{20V}$, $k=1,2,...,2n-1$, equal to the refined partition function of Sect.~\ref{ref20vsec} with the parameters \eqref{combipoint20v}. Their counterparts are the refined partition functions of the Domino tiling problem $Z_{n,k}^{DT}$ defined in a similar manner, using the non-intersecting Schr\"oder path formulation of Sect.~\ref{dtrefsec}, as the number of configurations in which the topmost path is conditioned to first enter the last vertical at position $k=0,1,...,n-1$, before ending with $k$ vertical steps (see the pink domain in Fig.~\ref{fig:alltgt} (bottom right) for an illustration). In Ref.~\cite{DF20V}, it was shown that \begin{equation}\label{usefuldt} Z_{n,k}^{DT}= Z_{n,n+k+1}^{20V}+Z_{n,n+k}^{20V} \end{equation} This implies the following relation between the corresponding refined one-point functions $H_{n,k}^{20V}=Z_{n,k}^{20V}/Z_n^{20V}$ and $H_{n,k}^{DT}=Z_{n,k}^{DT}/Z_n^{DT}$: \begin{equation}\label{relaonept20vdt}H_{n,k}^{DT}= H_{n,n+k+1}^{20V}+H_{n,n+k}^{20V} \end{equation} \subsection{Arctic curves} \subsubsection{Asymptotics of the one-point function} As usual, we explore the asymptotics of the refined one-point function $H_{n,k}^{DT}$ for the Domino Tiling model in the scaling limit of large $n=N$ and $\kappa=k/N$ finite. The relation \eqref{relaonept20vdt} immediately yields: \begin{thm}\label{DTasyonethm} The large $n=N$ asymptotics of the refined one-point function $H_{n,k}^{DT}$ for the Domino Tiling model reads: \begin{equation}H_{N,\kappa N}^{DT}\simeq 2 \, H_{N,2 \kappa' N}^{20V}, \qquad \kappa'=\frac{1+\kappa}{2} \end{equation} and \begin{eqnarray*}H_{N,\kappa N}^{DT}&\simeq& \oint \frac{dt}{2i\pi t} e^{-NS_0^{DT}(\kappa,t)} \\ S_0^{DT}(\kappa,t)&=& S_0^{20V}({\scriptstyle \frac{1+\kappa}{2}},t)=\varphi^{20V}[u,v;\xi] +(1+\kappa)\, {\rm Log}(t) \end{eqnarray*} where the variables $\xi$ and $t$ are dependent through the relation $t=t_{20V}[\xi]$ \eqref{t20v}. \end{thm} Similarly to the 6V' and 20V cases, the saddle-point equation in the variable $\xi$ reads $\partial_\xi S_0^{DT}(\kappa,t_{20V}[\xi])=0$, with the solution: \begin{equation}\label{kappadt} \kappa=\kappa_{DT}[\xi]:= 2 \kappa_{20V}[\xi]-1 \qquad (\xi\in [-\frac{\pi}{4},0])\end{equation} with $\kappa_{20V}[\xi]$ as in \eqref{kappa20v}, and where the range of $\xi$ ensures that $\kappa_{20V}\in [\frac{1}{2},1]$ hence $\kappa_{DT}\in [0,1]$. \subsubsection{Asymptotics of the path partition function} By definition, and comparing Fig.~\ref{fig:alltgt} top right and bottom right (light blue domains), we have in the uniform case: $Y_{k,\ell}^{DT}= Y_{k,\ell}^{20V}$. We deduce the asymptotics $$Y_{\kappa N ,\lambda N}^{DT} \simeq \int_0^1 dp_3 e^{-NS_1^{DT}(\kappa,p_3)}, \qquad S_1^{DT}(\kappa,p_3)=S_1^{20V}(\kappa,p_3) $$ with $S_1^{20V}$ the uniform weight version of \eqref{asymptopath20v}: $$ S_1^{20V}(\kappa,p_3)=-(\kappa+\lambda-p_3){\rm Log}(\kappa+\lambda-p_3)+(\kappa-p_3){\rm Log}(\kappa-p_3)+(\lambda-p_3){\rm Log}(\lambda-p_3)+p_3{\rm Log}(p_3)$$ \subsubsection{Arctic curves via the Tangent Method} Strictly speaking, the Tangent Method only predicts the NE portion of the arctic curve.
However, the domino tiling problem is of the ``free fermion" class, as it involves only non-intersecting lattice paths (or alternatively the dual is just a dimer model, for which the general results of \cite{KOS} apply). As such, it has an analytic arctic curve, hence we may safely use the analytic continuation of the NE portion predicted by the Tangent Method. \begin{figure} \begin{center} \begin{minipage}{0.66\textwidth} \centering \includegraphics[width=7cm]{arcticdt.eps} \end{minipage} \begin{minipage}{0.33\textwidth} \includegraphics[width=4cm]{dt20v.eps} \end{minipage} \end{center} \caption{\small Left: Arctic curve of the uniformly weighted Domino Tiling problem of the Aztec triangle, tangent to the NW, N and E boundaries. Right: comparison with the arctic curve of the 20V-DWBC3 model: the blue portion is the common NE branch of the two curves, represented with their respective rescaled domains.} \label{fig:arcticdt} \end{figure} \begin{thm}\label{DTthm} The arctic curve for the uniform Domino Tilings of the Aztec triangle, as predicted via the Tangent Method, reads $$ x=X^{DT}[\xi]= \frac{B'[\xi]}{A'[\xi]} \qquad y=Y^{DT}[\xi]=B[\xi]-\frac{A[\xi]}{A'[\xi]}B'[\xi]\qquad (\xi\in [-\frac{3\pi}{8},0])$$ where $$B[\xi]:=\kappa_{DT}[\xi]\qquad {\rm and} \qquad A[\xi]:= -\cot(2 \xi) $$ with $\kappa_{DT}[\xi]$ as in \eqref{kappadt}. \end{thm} \begin{proof} The rescaled tangent lines are now through the points $(0,\kappa)$ and $(\lambda, 0)$, governed by the equation $y+A x-B=0$ with $A=\kappa/\lambda$, $B=\kappa$. We have already determined the most likely exit point $\kappa=\kappa_{DT}[\xi]$ \eqref{kappadt}, leading to $B[\xi]=\kappa_{DT}[\xi]$. To determine $A[\xi]$ we solve the saddle-point equations $\partial_{\kappa} S^{DT}(\kappa,t,p_3)=\partial_{p_3} S^{DT}(\kappa,t,p_3)=0$, in terms of the total action $S^{DT}(\kappa,t,p_3):=S_0^{DT}(\kappa,t)+S_1^{DT}(\kappa,p_3)$. These read $$ t=\frac{\kappa+\lambda-p_3}{\kappa-p_3},\qquad \frac{p_3(\kappa+\lambda-p_3)}{(\kappa-p_3)(\lambda-p_3)}=1 $$ and are easily solved as $$ \frac{p_3}{\kappa_{DT}[\xi]}=\frac{t[\xi]-1}{2t[\xi]}=\frac{\sin(\xi)}{\sqrt{2}\, \sin(\xi-\frac{\pi}{4})},\qquad \frac{\kappa_{DT}[\xi]}{\lambda}=\frac{2t[\xi]}{t[\xi]^2-1}=-\cot(2 \xi)=A[\xi]$$ The range of the parameter $\xi$ for the NE portion of the arctic curve is $\xi\in [-\frac{\pi}{4},0]$, ensuring that $\kappa_{DT}[\xi]\in [0,1]$; however, as noted above, we may extend the range to cover the entire domain, which corresponds to $\kappa_{DT}[\xi]\in [0,2]$, namely $\xi\in [-\frac{3\pi}{8},0]$, and the Theorem follows. \end{proof} We illustrate the result of Theorem \ref{DTthm} in Fig.~\ref{fig:arcticdt} (left). Note that the curve has a vertical tangent at the origin, and a horizontal tangent at the point $\big(\frac{2}{3}(\sqrt{3}-3),1\big)$, while it ends tangentially on the diagonal NW boundary at the point $\big(2\frac{\sqrt{2}}{3}-2,2\frac{\sqrt{2}}{3}\big)$. We note that the curve of Theorem \ref{DTthm} is a portion of an algebraic curve. In fact, changing the origin to $(-2,0)$ by applying the substitution $(x,y)\to (x-2,y)$, we find that this curve is given by the {\it same} equation \eqref{algebraic} as in the uniform 20V-DWBC3 case. This is illustrated in Fig.~\ref{fig:arcticdt} (right) where we have represented both rescaled domains, and their common NE branch of the arctic curve (in blue).
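As a simple check of the above, note that at the uniform point $\eta=u=\frac{\pi}{8}$, $v=-\frac{\pi}{2}$ the relation \eqref{t20v} reduces to $t[\xi]=\sin(\frac{\pi}{4}-\xi)/\sin(\frac{\pi}{4}+\xi)$; using $\sin^2(\frac{\pi}{4}-\xi)-\sin^2(\frac{\pi}{4}+\xi)=-\sin(2\xi)$ and $2\sin(\frac{\pi}{4}-\xi)\sin(\frac{\pi}{4}+\xi)=\cos(2\xi)$, one recovers $\frac{2t[\xi]}{t[\xi]^2-1}=-\cot(2\xi)$, in agreement with the expression of $A[\xi]$ in the proof above.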
\section{Conclusion}\label{seconc} In this paper, we have presented the Tangent Method derivation of arctic curves for the disordered phase of the 6V-DWBC, 6V', 20V-DWBC3 models, as well as for the Domino Tilings of the Aztec Triangle. The main ingredient used is the large size asymptotics of refined one-point functions, which we derived from the form of the thermodynamic free energy of the 6V' model in the disordered phase (Conjecture \ref{freeconj}), all other relevant asymptotics following from there. Our method, however, only predicts the NE and SE branches of the relevant arctic curves. It would be desirable to find the remaining NW and SW branches of the arctic curves when applicable. \begin{figure} \begin{center} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=5.5cm]{asmdpp.eps} \end{minipage} \begin{minipage}{0.5\textwidth} \includegraphics[width=5cm]{tsscpp.eps} \end{minipage} \end{center} \caption{\small Left: the arctic curve of the uniform 6V-DWBC/ASM model (red and dashed black curves inside the square) and the analytic continuation of the NE branch (blue ellipse inscribed in a hexagon); the arrow indicates the shear mapping the latter to the SE branch (dashed black curve). Right: the arctic curve for TSSCPP (blue curve inside the pink triangular domain), and that for the lozenge tiling of the regular hexagon obtained by multiple reflections (red circle).} \label{fig:asmgen} \end{figure} The results for the 6V' and 20V-DWBC3 models of the present paper complement earlier results on the 6V-DWBC \cite{COSPO,DFLAP} and 20V-DWBC1,2 models \cite{BDFG}, which display non-analytic arctic curves as well. The key to the non-analyticity can be traced back to the symmetries of the systems, allowing us to determine their SE branch in terms of the NE branch of another system obtained by applying an involution $*$ to its weights together with a geometric transformation of the plane involving a reflection and possibly a shear. Note that our results for the 6V' model also apply to the more general case of the U-turn boundary 6V model, which is expected to share the {\it same} arctic curves. Finally, let us compare the situation of the 20V-DWBC3 model to that of ASMs, with the known enumeration formula: $$ ASM_n= \prod_{j=0}^{n-1} \frac{(3j+1)!}{(n+j)!} \ ,$$ a formula strikingly reminiscent of \eqref{20vpf}. The analogy goes further: we have found that the NE/SE portion of the arctic curve for large uniform 20V-DWBC3 configurations is piecewise algebraic, the SE portion being equal to a shear transformation of the analytic continuation of the NE portion (see Fig.~\ref{fig:arctic5b} right). The same holds for ASMs, whose NE/SE portion of the arctic curve is piecewise elliptic, the SE portion being obtained by a shear transformation of the ellipse containing the NE portion (see Fig.~\ref{fig:asmgen} left). The algebraic curve \eqref{algebraic} clearly plays a role similar to this ellipse. Recall also that ASMs of size $n$ are equal in number to TSSCPPs \cite{tsscpp}, which can be viewed as rhombus tilings of a regular hexagon with edges of length $2n$ that satisfy all the symmetries of the hexagon. The triangular fundamental domain under these symmetries occupies $\frac{1}{12}$-th of the hexagon, the latter being recovered by successive reflections (see Fig.~\ref{fig:asmgen} right for an illustration). As such, the arctic curve for TSSCPP was argued in \cite{DFR} to be identical to that of the full hexagon without any symmetry constraint, i.e. the inscribed circle in the uniform case.
There is a clear analogy between TSSCPP and the Domino Tilings of the Aztec triangle, in which the algebraic curve \eqref{algebraic} plays the role of this circle. \begin{figure} \begin{center} \includegraphics[width=7cm]{cloverdt2.eps} \end{center} \caption{\small The expected arctic curve of the uniformly weighted Domino Tiling problem of Ciucu's cruciform region (in red) is obtained as the multiple reflection of the arctic curve of the Aztec triangle (in blue). The resulting clover-shaped curve is the analytic continuation to the whole plane of the blue portion.} \label{fig:cloverdt} \end{figure} Recently, Ciucu \cite{CIUconj} noticed a relation between the number of domino tilings of the Aztec triangle $\mathcal T_n$ and that of a cruciform domain $C^{n-1,n,n,n-2}_{2n-1,2n-1}$, obtained by ``symmetrization", namely a succession of ``reflections" of the original Aztec triangle. We believe that the curve \eqref{algebraic}, which is the analytic continuation of the arctic curve for the triangle, is in fact the complete arctic curve for the rescaled large $n$ cruciform domain. As visual evidence, we have displayed both curves in Fig.~\ref{fig:cloverdt}, together with the original asymptotic Aztec triangle (shaded in pink) and its 7 reflected copies. Fig.~\ref{fig:cloverdt} suggests that, similarly to the TSSCPP case, the Aztec triangle could be the fundamental domain for symmetric tilings of a cross-shaped domain, probably similar to that considered by Ciucu. \bibliographystyle{amsalpha}
\section{Introduction} \label{sec1} The long gluino lifetime is a trademark of Split Supersymmetry\ \cite{savas,split,noi4}. The experimental discovery of a slowly--decaying gluino~\cite{gluino} would not only be a strong indication for {Split Supersymmetry}, but it would also allow for a measurement of the effective supersymmetry--breaking scale $\widetilde{m}$, which cannot be directly extracted from particle dynamics at the LHC. Moreover, the gluino lifetime is a crucial parameter in determining the cosmological constraints on the theory \cite{savas,arvanietal}. Therefore, for both experimental and theoretical reasons, it is very important to have a precise prediction of the gluino lifetime and branching ratios. As far as the gluino decay processes in the MSSM are concerned, tree--level results for the decays into chargino or neutralino and two quarks and one--loop results for the radiative decay into neutralino and gluon can be found in the literature \cite{decays}. In {Split Supersymmetry}, however, the quantum corrections to the gluino decay processes can be very significant, because they are enhanced by the potentially large logarithm of the ratio between the gluino mass $m_{\tilde{g}}$ and the scale $\widetilde{m}$ at which the interactions responsible for gluino decay are mediated. A fixed--order calculation of these processes in {Split Supersymmetry} would miss terms that are enhanced by higher powers of the large logarithm. In order to get a reliable prediction for the gluino decay width, the large logarithmic corrections have to be resummed by means of standard renormalization group techniques. Recently, a calculation of the gluino decay widths in Split Supersymmetry\ was presented in ref.~\cite{jim}, working at tree level for 3--body decays and in a (non--resummed) one--loop approximation for 2--body decays. In this paper we will present a calculation of the gluino decay processes that includes all leading corrections in $\alpha_s$ and $\alpha_t$, the strong and top--Yukawa coupling constants. As we will show, the inclusion and resummation of leading--order corrections give sizeable modifications of the gluino branching ratios, even for moderate values of $\widetilde{m}$. The structure of the paper is as follows: in sect.~\ref{sec2} we list the operators in the effective Lagrangian of {Split Supersymmetry} that are responsible for the decays of the gluino, and the high--energy boundary conditions on the corresponding Wilson coefficients; in sect.~\ref{sec3} we determine the renormalization group evolution of the Wilson coefficients, and we express the operators in the low--energy effective Lagrangian in terms of mass eigenstates; in sect.~\ref{sec4} we discuss our numerical results for the branching ratios and total width of the gluino decays in {Split Supersymmetry}; in sect.~\ref{sec5} we consider the possibility of gluino decays into gravitinos; in sect.~\ref{concl} we present our conclusions. Finally, in the appendix we provide the analytical formulae for the gluino decay widths. \section{The Effective Lagrangian} \label{sec2} Below the squark and slepton mass scale $\widetilde{m}$, the effective Lagrangian of Split Supersymmetry\ describes the dynamics of Standard Model (SM) particles together with higgsinos and gauginos. At the level of renormalizable interactions, there is a conserved $G$--parity (under which only the gluino is odd) preventing gluino decay.
However, integrating the squarks out of the underlying supersymmetric theory induces non--renormalizable interactions that violate the $G$--parity. Restricting our analysis up to dimension--6 operators, the $G$--odd effective Lagrangian at the matching scale $\widetilde{m}$ is given by \begin{equation} {\cal L} = \frac{1}{\wt{m}^2}\,\sum_{i=1}^7 C^{\,\wt{\scriptscriptstyle B}}_i\,Q^{\,\wt{\scriptscriptstyle B}}_i \;+\; \frac{1}{\wt{m}^2}\,\sum_{i=1}^2 C^{\,\wt{\scriptscriptstyle W}}_i\,Q^{\,\wt{\scriptscriptstyle W}}_i \;+\; \frac{1}{\wt{m}^2}\,\left(\sum_{i=1}^5 C^{\,\wt{\scriptscriptstyle H}}_i\,Q^{\,\wt{\scriptscriptstyle H}}_i + {\rm h.c.}\right) . \end{equation} We are working in the basis of interaction eigenstates for gauginos and higgsinos, neglecting the effect of electroweak symmetry breaking, since $\widetilde{m} \gg M_Z$. The $G$--odd operators involving the $B$--ino ($\widetilde{B}$) are \begin{eqnarray} Q^{\,\wt{\scriptscriptstyle B}}_1 & = & \overline{\widetilde{B}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\;\sum_{k=1}^2 \; \overline{q}_{\sss L}^{\,(k)} \,\gamma_{\mu} \, T^a \,q_{\sss L}^{\,(k)} \label{qb1}\\ Q^{\,\wt{\scriptscriptstyle B}}_2 & = & \overline{\widetilde{B}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\; \sum_{k=1}^2\;\overline{u}_{\sss R}^{\,(k)} \,\gamma_{\mu} \, T^a \, u_{\sss R}^{\,(k)} \\ Q^{\,\wt{\scriptscriptstyle B}}_3 & = & \overline{\widetilde{B}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\; \sum_{k=1}^2\;\overline{d}_{\sss R}^{\,(k)} \,\gamma_{\mu} \, T^a\, d_{\sss R}^{\,(k)} \\ Q^{\,\wt{\scriptscriptstyle B}}_4 & = & \overline{\widetilde{B}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\; \overline{q}_{\sss L}^{\,(3)} \,\gamma_{\mu} \, T^a \,q_{\sss L}^{\,(3)}\\ Q^{\,\wt{\scriptscriptstyle B}}_5 & = & \overline{\widetilde{B}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\; \overline{t}_{\sss R} \,\gamma_{\mu} \, T^a \, t_{\sss R} \\ Q^{\,\wt{\scriptscriptstyle B}}_6 & = & \overline{\widetilde{B}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\; \overline{b}_{\sss R} \,\gamma_{\mu} \ T^a\, b_{\sss R} \\ Q^{\,\wt{\scriptscriptstyle B}}_7 & = & \overline{\widetilde{B}} \,\sigma^{\mu\nu} \, \gamma_5 \, {\tilde g}^a\;G^a_{\mu\nu} ,\label{qb7} \end{eqnarray} where $k$ is a generation index, $T^a$ are the SU(3) generators and $G^a_{\mu\nu}$ is the gluon field strength. Assuming that the squark mass matrices are flavour--diagonal, the Wilson coefficients of the operators $Q^{\,\wt{\scriptscriptstyle B}}_i$ at the matching scale $\widetilde{m}$ are \begin{eqnarray} \label{matchB1} C^{\,\wt{\scriptscriptstyle B}}_1(\widetilde{m} )=C^{\,\wt{\scriptscriptstyle B}}_4(\widetilde{m} ) = - \frac{g_s\,g^{\prime}}{6}\, r_{\tilde{q}_{\sss L}} \,, && C^{\,\wt{\scriptscriptstyle B}}_2(\widetilde{m} )=C^{\,\wt{\scriptscriptstyle B}}_5(\widetilde{m} ) = \frac{2\,g_s\,g^{\prime}}{3}\, r_{\tilde{u}_{\sss R}} \,, \\ \label{matchB2} C^{\,\wt{\scriptscriptstyle B}}_3(\widetilde{m} )=C^{\,\wt{\scriptscriptstyle B}}_6(\widetilde{m} ) = - \frac{g_s\,g^{\prime}}{3}\,r_{\tilde{d}_{\sss R}} \,, && C^{\,\wt{\scriptscriptstyle B}}_7 (\widetilde{m} )= \frac{g_s^2\,g^{\prime}}{128\,\pi^2}\,(m_{\tilde{g}}-m_{\scriptscriptstyle\widetilde{B}})\, \sum_{q} \,(r_{\tilde{q}_{\sss L}}-r_{\tilde{q}_{\sss R}})\,Q_q \,, \label{cb7} \end{eqnarray} where $r_{\tilde{q}} = \wt{m}^2/m_{\tilde{q}}^2$. Note that $C^{\,\wt{\scriptscriptstyle B}}_7$ vanishes for mass--degenerate squarks. 
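These matching conditions are elementary to evaluate. As an aside, the following minimal Python sketch (names and numerical inputs are ours; we assume flavour--universal $r_{\tilde{q}_{\sss L}}$, $r_{\tilde{u}_{\sss R}}$ and $r_{\tilde{d}_{\sss R}}$, so that the flavour sum in $C^{\,\wt{\scriptscriptstyle B}}_7$ reduces to three up--type and three down--type terms) makes the degenerate--squark cancellation explicit:
\begin{verbatim}
import math

def bino_matching(gs, gp, r_qL, r_uR, r_dR, m_gluino, m_bino):
    # Tree-level matching of eqs. (matchB1)-(matchB2) at the scale
    # mtilde; r_i = mtilde^2/m_squark^2, gp = g' (hypercharge coupling)
    C1 = C4 = -gs * gp / 6.0 * r_qL
    C2 = C5 = 2.0 * gs * gp / 3.0 * r_uR
    C3 = C6 = -gs * gp / 3.0 * r_dR
    # flavour sum in C7: three up-type quarks with Q = +2/3 and
    # three down-type quarks with Q = -1/3 (universal r assumed)
    qsum = 3 * (2.0 / 3.0) * (r_qL - r_uR) - 3 * (1.0 / 3.0) * (r_qL - r_dR)
    C7 = gs**2 * gp / (128.0 * math.pi**2) * (m_gluino - m_bino) * qsum
    return C1, C2, C3, C4, C5, C6, C7

# degenerate squarks (all r equal to 1): the magnetic coefficient vanishes
print(bino_matching(1.0, 0.36, 1.0, 1.0, 1.0, 1000.0, 100.0)[-1])  # 0.0
\end{verbatim}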
The $G$--odd operators involving the $W$--ino ($\widetilde{W}$) are \begin{eqnarray} Q^{\,\wt{\scriptscriptstyle W}}_1 & = & \overline{\widetilde{W}^{\scriptscriptstyle A}} \,\gamma^{\mu}\,\gamma_5 \, {\tilde g}^a\; \otimes\;\sum_{k=1}^2 \; \overline{q}_{\sss L}^{\,(k)} \,\gamma^{\mu} \,\tau^{\scriptscriptstyle A}\,T^a\, q_{\sss L}^{\,(k)} \label{qw1}\\ Q^{\,\wt{\scriptscriptstyle W}}_2 & = & \overline{\widetilde{W}^{\scriptscriptstyle A}} \,\gamma^{\mu}\,\gamma_5 \, {\tilde g}^a\; \otimes\; \overline{q}_{\sss L}^{\,(3)} \,\gamma^{\mu} \,\tau^{\scriptscriptstyle A}\,T^a\, q_{\sss L}^{\,(3)} , \label{qw2} \end{eqnarray} where $\tau^{\scriptscriptstyle A}$ are the Pauli matrices. The matching conditions for the Wilson coefficients are \begin{equation} \label{matchW} C^{\,\wt{\scriptscriptstyle W}}_1(\widetilde{m} )=C^{\,\wt{\scriptscriptstyle W}}_2(\widetilde{m} ) = - \frac{g_s\,g}{2}\, r_{\tilde{q}_{\sss L}} . \end{equation} For the higgsinos, we use a compact notation in which the two Weyl states $\widetilde{H}_u$ and $\widetilde{H}_d$ are combined in a single Dirac fermion $\widetilde{H}\equiv \widetilde{H}_u + \varepsilon \,\widetilde{H}_d^c$, where $\varepsilon$ is the antisymmetric matrix (with $\varepsilon_{12}=1$) acting on the SU(2) indices. The states $\widetilde{H}_u$ and $\widetilde{H}_d$ can be recovered by chiral decomposition, $\widetilde{H}_u =\widetilde{H}_{\sss L}$ and $\widetilde{H}_d =-\varepsilon \,(\widetilde{H}^c)_{\sss L}$. Keeping only the third--generation Yukawa couplings, the $G$--odd operators involving higgsinos are \begin{eqnarray} Q^{\,\wt{\scriptscriptstyle H}}_1 & = & \overline{\widetilde{H}}_{\sss L} \, {\tilde g}^a_{\sss R} \; \otimes\; \varepsilon \, \overline{q}_{\sss L}^{\,(3)} \, T^a\, t_{\sss R} \label{qh1}\\ Q^{\,\wt{\scriptscriptstyle H}}_2 & = & \overline{\widetilde{H}}_{\sss L} \,\sigma^{\mu\nu} \, {\tilde g}^a_{\sss R}\; \otimes\; \varepsilon \,\overline{q}_{\sss L}^{\,(3)} \,\sigma_{\mu\nu} \, T^a\,t_{\sss R} \\ Q^{\,\wt{\scriptscriptstyle H}}_3 & = & \overline{\widetilde{H}}_{\sss R} \, {\tilde g}^a_{\sss L}\; \otimes\; \overline{b}_{\sss R} \, T^a\,q_{\sss L}^{\,(3)} \\ Q^{\,\wt{\scriptscriptstyle H}}_4 & = & \overline{\widetilde{H}}_{\sss R} \,\sigma^{\mu\nu} \, {\tilde g}^a_{\sss L} \; \otimes\; \overline{b}_{\sss R} \,\sigma_{\mu\nu} \,T^a\, q_{\sss L}^{\,(3)}\\ Q^{\,\wt{\scriptscriptstyle H}}_5 & = & \overline{\widetilde{H}}_{\sss L} \,\sigma^{\mu\nu} \, {\tilde g}^a_{\sss R}\; h\;G^a_{\mu\nu}, \label{qh5} \end{eqnarray} where $h$ is the Higgs doublet. The Wilson coefficients at the matching scale $\widetilde{m}$ are \begin{equation} \label{matchH1} C^{\,\wt{\scriptscriptstyle H}}_1(\widetilde{m} ) = \frac{g_s\,h_t}{\sqrt{2}\sin\beta}\, (r_{\tilde{q}_{\sss L}}-r_{\tilde{u}_{\sss R}})\,,\;\;\;\;\;\; C^{\,\wt{\scriptscriptstyle H}}_2(\widetilde{m} ) = \frac{g_s\,h_t}{4\,\sqrt{2}\sin\beta}\, (r_{\tilde{q}_{\sss L}}+r_{\tilde{u}_{\sss R}})\,, \end{equation} \begin{equation} \label{matchH2} C^{\,\wt{\scriptscriptstyle H}}_3(\widetilde{m} ) = \frac{g_s\,h_b}{\sqrt{2}\cos\beta}\, (r_{\tilde{q}_{\sss L}}-r_{\tilde{d}_{\sss R}})\,,\;\;\;\;\;\; C^{\,\wt{\scriptscriptstyle H}}_4(\widetilde{m} ) = -\frac{g_s\,h_b}{4\,\sqrt{2}\cos\beta}\, (r_{\tilde{q}_{\sss L}}+r_{\tilde{d}_{\sss R}})\,, \end{equation} \begin{equation} C^{\,\wt{\scriptscriptstyle H}}_5(\widetilde{m} ) = \frac{g_s^2\,h_t^2}{32\,\sqrt{2}\,\pi^2\,\sin\beta} (r_{\tilde{q}_{\sss L}}+r_{\tilde{u}_{\sss R}}) . 
\label{ch5} \end{equation} Here $h_t$ and $h_b$ are the top and bottom Yukawa couplings, and $\tan\beta$ is a free parameter of {Split Supersymmetry}. Before proceeding to the operator renormalization, we want to make some remarks. {\it (i)} We recall that all coupling constants appearing in the expressions of the Wilson coefficients given above have to be computed at the scale $\widetilde{m}$. {\it (ii)} Note that we have given the Wilson coefficients of the 4--fermion operators at the leading perturbative order, while the coefficients of the operators $Q^{\,\wt{\scriptscriptstyle B}}_7$ and $Q^{\,\wt{\scriptscriptstyle H}}_5$ are given at the next order (one--loop approximation). The operator anomalous dimensions will be computed in sect.~\ref{sec3} at the leading order in the strong and top--Yukawa couplings $\alpha_s=g_s^2/(4\pi)$ and $\alpha_t =h_t^2/(4\pi)$. Therefore, the gluino 3--body decays, mediated only by 4--fermion operators, will be computed by resumming all $\alpha_{s,t}\ln (\widetilde{m} / m_{\tilde{g}} )$ corrections, but neglecting terms ${\cal O} [\alpha^{n+1}_{s,t}\ln^n (\widetilde{m} / m_{\tilde{g}} )]$ with $n \ge 0$. For the radiative 2--body gluino decay into a gluon and a neutralino, greater accuracy is appropriate. The expressions of $C^{\,\wt{\scriptscriptstyle B}}_7$ and $C^{\,\wt{\scriptscriptstyle H}}_5$ given in eqs.~(\ref{cb7}) and (\ref{ch5}), together with leading--order anomalous dimensions and one--loop matrix elements [see eq.~(\ref{coeffg}) below], allow us to determine the 2--body decay amplitude neglecting terms ${\cal O} [\alpha^{n+1}_{s,t}\ln^n (\widetilde{m} / m_{\tilde{g}} )]$ with $n \ge 1$. This means that we have resummed all large logarithms at the leading order in all cases, but our formulae for 2--body gluino decays also contain the complete ${\cal O} (\alpha_{s,t})$ terms, relevant when the logarithm is not large. {\it (iii)} If $\widetilde{m}$ is close to the GUT scale, in the presence of gauge--coupling unification there is no solid justification for the approximation of computing $\alpha_s$ contributions to the anomalous dimensions while neglecting electroweak corrections. However, because of the large SU(3) coefficients, we consider our approximation to be fairly adequate, even for $\widetilde{m}$ as large as $10^{13}$~GeV, which is the maximum value of $\widetilde{m}$ consistent with the negative searches for anomalous heavy isotopes. {\it (iv)} In \eq{matchH2} we have included the contribution from the bottom Yukawa coupling $h_b$, since these coefficients are enhanced when $\tan\beta$ is large. There are no $\tan\beta$ enhancements in the evolution below $\widetilde{m}$, and therefore our results are reliable for any value of $\tan\beta$. {\it (v)} Split Supersymmetry\ is free from flavour problems; therefore, our assumption of flavour--diagonal squark mass matrices is not strictly necessary. On the other hand, a certain degree of mass degeneracy among squarks is required by gauge-coupling unification. In the results presented in sect.~\ref{sec4} we take for simplicity all squark masses to be equal.
\section{Operator Renormalization} \label{sec3} The renormalization--group flow for the Wilson coefficients is determined by the equations \begin{eqnarray} \mu\frac{d\vec C}{d \mu}&=& {\hat \gamma}^T(\alpha_s,\alpha_t) \,\vec C \label{runc}\\ \mu\frac{d\alpha_s}{d \mu}&=& -\beta_s \,\frac{\alpha_s^2}{2\pi}\\ \mu\frac{d\alpha_t}{d \mu}&=& -\beta_t \,\frac{\alpha_t^2}{2\pi}-\beta_{st} \frac{\alpha_s \alpha_t}{2\pi}, \end{eqnarray} where $\mu$ is the renormalization scale and, in Split Supersymmetry, we have $\beta_s=5$, $\beta_t=-9/2$ and $\beta_{st}=8$. The anomalous--dimension matrix $\hat \gamma$ can be expressed as \begin{equation} {\hat \gamma}_{ij}=-2\,b_{ij}-\delta_{ij}\sum_f a_f , \label{gaman} \end{equation} where $b_{ij}$ are extracted from the poles of the one--loop renormalization of the operators $Q_i$ ($Q_i \rightarrow b_{ij}\, Q_j/\epsilon + \cdots$). In \eq{gaman} the sum is over all fields entering the operator $Q_i$, and the field anomalous dimensions $a_f$ are given by \begin{equation} \label{wfrs} a_{q^k_{\sss L}} = -\frac{1}{4\,\pi} \left( \alpha_s \,C_F + \frac{\alpha_t}{2}\,\delta_{k3}\right) \,,\;\;\;\;\; a_{u^k_{\sss R}}= -\frac{1}{4\,\pi} \left( \alpha_s \,C_F + \alpha_t\,\delta_{k3}\right) \,,\;\;\;\;\; a_{d_{\sss R}} = -\frac{\alpha_s \,C_F}{4\,\pi}\,, \end{equation} \begin{equation} a_{\tilde{g}} = -\frac{\alpha_s \,N_c}{4\,\pi} \,,\;\;\;\;\; a_{h} = -\frac{\alpha_t \,N_c}{4\,\pi} \,,\;\;\;\;\; a_g = \frac{\alpha_s}{4\,\pi} \; \left( N_c - \frac 23 \,N_f \right) . \end{equation} Here $k$ is a generation index, $C_F=(N_c^2-1)/(2N_c)$, $N_c=3$, $N_f=6$. Note that the gluon anomalous dimension $a_g$ (given here in the Feynman gauge) is different from the SM value because it includes the gluino contribution. We find that the anomalous--dimension matrices of the $B$--ino operators in eqs.~(\ref{qb1})--(\ref{qb7}), of the $W$--ino operators in eqs.~(\ref{qw1})--(\ref{qw2}), and of the higgsino operators in eqs.~(\ref{qh1})--(\ref{qh5}) are respectively \begin{equation} {\hat \gamma}^{(a)}=\frac{\alpha_s}{4\pi} \gamma_s^a+ \frac{\alpha_t}{4\pi} \gamma_t^a+\frac{\sqrt{\alpha_s \alpha_t}}{4\pi} \gamma_{st}^a ,~~~~a=\widetilde{\scriptscriptstyle B},\widetilde{\scriptscriptstyle W},\widetilde{\scriptscriptstyle H} \label{gamgam} \end{equation} \begin{equation} \gamma_s^{\widetilde{\scriptscriptstyle B}}=\frac 13 \pmatrix{ 8-9N_c & 8 & 8 & 8 & 8 & 8 & 0 \cr 4 & 4-9N_c & 4 & 4 & 4 & 4 & 0 \cr 4 & 4 & 4-9N_c & 4 & 4 & 4 & 0 \cr 4 & 4 & 4 & 4-9N_c & 4 & 4 & 0 \cr 2 & 2 & 2 & 2 & 2-9N_c & 2 & 0 \cr 2 & 2 & 2 & 2 & 2 & 2-9N_c & 0 \cr 0 & 0 & 0 & 0 & 0 & 0 & 2N_f-18 N_c }, \label{gammabino} \end{equation} \begin{equation} \gamma_t^{\widetilde{\scriptscriptstyle B}}= \pmatrix{ 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 1 & \!\!\!\!-2 & 0 & 0 \cr 0 & 0 & 0 & \!\!\!\!-1 & 2 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & 0 & 0 },~~~~\gamma_{st}^{\widetilde{\scriptscriptstyle B}}=0, \end{equation} \begin{equation} \gamma_s^{\widetilde{\scriptscriptstyle W}}=\pmatrix{ -3N_c & 0 \cr 0 & -3N_c } ,~~~~ \gamma_t^{\widetilde{\scriptscriptstyle W}}=\pmatrix{ 0 & 0 \cr 0 & 1 },~~~~\gamma_{st}^{\widetilde{\scriptscriptstyle W}}=0, \end{equation} \begin{equation} \gamma_s^{\widetilde{\scriptscriptstyle H}}=\pmatrix{ \frac{3}{N_c} & 0 & 0 & 0 & 0 \cr 0 & -4N_c-\frac{1}{N_c} & 0 & 0 & 0 \cr 0 & 0 & \frac{3}{N_c} & 0 & 0 \cr 0 & 0 & 0 & -4N_c-\frac{1}{N_c} & 0 \cr 0 & 0 & 0 & 0 & \frac 23 N_f -6N_c }, \end{equation} 
\begin{equation} \gamma_t^{\widetilde{\scriptscriptstyle H}}=\frac 12 \pmatrix{ 3 & 0 & 0 & 0 & 0 \cr 0 & 3 & 0 & 0 & 0 \cr 0 & 0 & 1 & 0 & 0 \cr 0 & 0 & 0 & 1 & 0 \cr 0 & 0 & 0 & 0 & 2N_c },~~~~ \gamma_{st}^{\widetilde{\scriptscriptstyle H}}=\pmatrix{ 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 4 \cr 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 \cr 0 & 2 & 0 & 0 & 0 } . \end{equation} For coefficients with only multiplicative renormalization (which is the case for $C^{\,\wt{\scriptscriptstyle B}}_7$, $C^{\,\wt{\scriptscriptstyle W}}_{1,2}$, $C^{\,\wt{\scriptscriptstyle H}}_{1,3,4}$), \eq{runc} can be easily integrated, with the result \begin{equation} C_i(\mu)= C_i(\widetilde{m} ) \,\eta_s^{\left (\frac{\gamma_s}{2\beta_s}-\frac{\beta_{st} \gamma_t}{2 \beta_s\beta_t}\right)} \ \eta_t ^{\frac{\gamma_t}{2\beta_t}}~~~~~{\rm for~~} C_i=C^{\,\wt{\scriptscriptstyle B}}_7,~C^{\,\wt{\scriptscriptstyle W}}_{1,2},~C^{\,\wt{\scriptscriptstyle H}}_{1,3,4} . \end{equation} We have defined \begin{eqnarray} \label{etas} \eta_s \equiv \frac{\alpha_s(\widetilde{m} )}{\alpha_s(\mu)}&=&1+\frac{\alpha_s(\widetilde{m} )}{2\pi} \beta_s \ln \frac{\mu}{\widetilde{m}} \,, \\ \label{etat} \eta_t \equiv \frac{\alpha_t(\widetilde{m} )}{\alpha_t(\mu )}&=& \eta_s^\frac{\beta_{st}}{\beta_s} + \frac{\alpha_t(\widetilde{m} ) \beta_t}{\alpha_s(\widetilde{m} ) \left( \beta_{st} -\beta_s \right)} \left( \eta_s^\frac{\beta_{st}}{\beta_s} -\eta_s \right) . \end{eqnarray} The evolution of the Wilson coefficients for the other $B$--ino operators involves operator mixing and the solution of \eq{runc} is given by \begin{eqnarray} C^{\,\wt{\scriptscriptstyle B}}_i (\mu) &=& \eta_s^{-\frac {9}{10}} \left[ C^{\,\wt{\scriptscriptstyle B}}_i (\widetilde{m}) +y \,{\overline C}(\widetilde{m} )\right] ~~~~i=1,2,3,6 \label{pinc}\\ C^{\,\wt{\scriptscriptstyle B}}_4 (\mu) &=& \eta_s^{-\frac {9}{10}} \left[ (1+z)\,C^{\,\wt{\scriptscriptstyle B}}_4 (\widetilde{m}) -z \,C^{\,\wt{\scriptscriptstyle B}}_5 (\widetilde{m} ) +y \,{\overline C}(\widetilde{m} )\right] ,\\ C^{\,\wt{\scriptscriptstyle B}}_5 (\mu ) &=& \eta_s^{-\frac {9}{10}} \left[ (1+2z)\,C^{\,\wt{\scriptscriptstyle B}}_5 (\widetilde{m}) -2z \,C^{\,\wt{\scriptscriptstyle B}}_4 (\widetilde{m} ) +y \,{\overline C}(\widetilde{m} )\right] ,\label{pall} \end{eqnarray} where ${\overline C}=C^{\,\wt{\scriptscriptstyle B}}_1 /3 +(C^{\,\wt{\scriptscriptstyle B}}_2+C^{\,\wt{\scriptscriptstyle B}}_3+C^{\,\wt{\scriptscriptstyle B}}_4)/6+(C^{\,\wt{\scriptscriptstyle B}}_5+C^{\,\wt{\scriptscriptstyle B}}_6)/12$, $y=\eta_s^{\,4/5}-1$, and $z=(\eta_s^{\,8/15}\,\eta_t^{-1/3}-1)/3$. Because of the non--vanishing contribution from $\gamma^{\widetilde{\scriptscriptstyle H}}_{st}$, the equations for $C^{\,\wt{\scriptscriptstyle H}}_2$ and $C^{\,\wt{\scriptscriptstyle H}}_5$ cannot be solved analytically. The numerical results for the renormalization coefficients $\Delta_{ij}$, defined by \begin{equation} \label{Deltas} \pmatrix{C^{\,\wt{\scriptscriptstyle H}}_2 (\mu)\cr C^{\,\wt{\scriptscriptstyle H}}_5 (\mu)} =\pmatrix{ \Delta_{22} &\Delta_{25} \cr \Delta_{52} &\Delta_{55}} \pmatrix{C^{\,\wt{\scriptscriptstyle H}}_2 (\widetilde{m} )\cr C^{\,\wt{\scriptscriptstyle H}}_5 (\widetilde{m} )}\,, \end{equation} are shown in \fig{fig:coeffs} for a representative choice of $\alpha_s(\widetilde{m} )$ and $\alpha_t(\widetilde{m} )$. 
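Since the $(C^{\,\wt{\scriptscriptstyle H}}_2,C^{\,\wt{\scriptscriptstyle H}}_5)$ system must be evolved numerically, it may be useful to illustrate how the $\Delta_{ij}$ of \eq{Deltas} can be obtained in practice. The following Python sketch (illustrative only; function names, step number and boundary values are ours) integrates the one--loop equations with a fourth--order Runge--Kutta method, using the $\beta$--function coefficients and the $\gamma$--matrix entries quoted above:
\begin{verbatim}
import math

Nc, Nf = 3, 6
bs, bt, bst = 5.0, -4.5, 8.0        # beta_s, beta_t, beta_st

def derivs(y):
    # y = (alpha_s, alpha_t, C2, C5); derivatives w.r.t. t = ln(mu)
    a_s, a_t, C2, C5 = y
    das = -bs * a_s**2 / (2 * math.pi)
    dat = -bt * a_t**2 / (2 * math.pi) - bst * a_s * a_t / (2 * math.pi)
    g22 = (a_s * (-4 * Nc - 1.0 / Nc) + a_t * 1.5) / (4 * math.pi)
    g55 = (a_s * (2.0 * Nf / 3 - 6 * Nc) + a_t * Nc) / (4 * math.pi)
    gst = math.sqrt(a_s * a_t) / (4 * math.pi)
    # dC/dln(mu) = gamma^T C, with gamma_st entries (2,5)=4, (5,2)=2
    dC2 = g22 * C2 + 2 * gst * C5
    dC5 = 4 * gst * C2 + g55 * C5
    return (das, dat, dC2, dC5)

def evolve(c2, c5, a_s, a_t, mtilde, mu, n=2000):
    h = math.log(mu / mtilde) / n    # h < 0: running downwards
    y = (a_s, a_t, c2, c5)
    for _ in range(n):               # standard RK4 step
        k1 = derivs(y)
        k2 = derivs(tuple(u + 0.5 * h * k for u, k in zip(y, k1)))
        k3 = derivs(tuple(u + 0.5 * h * k for u, k in zip(y, k2)))
        k4 = derivs(tuple(u + h * k for u, k in zip(y, k3)))
        y = tuple(u + h / 6.0 * (p + 2 * q + 2 * r + s)
                  for u, p, q, r, s in zip(y, k1, k2, k3, k4))
    return y[2], y[3]

# columns of the Delta matrix: unit boundary conditions at mtilde
d22, d52 = evolve(1.0, 0.0, 0.05, 0.03, 1e9, 1e3)
d25, d55 = evolve(0.0, 1.0, 0.05, 0.03, 1e9, 1e3)
\end{verbatim}
For $\alpha_s(\widetilde{m} )=0.05$ and $\alpha_t(\widetilde{m} )=0.03$, the values obtained in this way can be compared directly with the curves of \fig{fig:coeffs}.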
Despite the fact that the high--energy boundary condition on $C^{\,\wt{\scriptscriptstyle H}}_5$, eq.~(\ref{ch5}), is suppressed by a loop factor, a sizeable value of $C^{\,\wt{\scriptscriptstyle H}}_5$ can be generated through the mixing with $C^{\,\wt{\scriptscriptstyle H}}_2$. \begin{figure}[t] \begin{center} \mbox{\epsfig{file=RGE.eps,width=11cm}} \end{center} \caption{\sf Renormalization group flow of $C^{\,\wt{\scriptscriptstyle H}}_2$ and $C^{\,\wt{\scriptscriptstyle H}}_5$, expressed in terms of the coefficients $\Delta_{ij}$ of \eq{Deltas}, for $\alpha_s(\widetilde{m} )=0.05$ and $\alpha_t(\widetilde{m} )=0.03$. The solid, dashed, dotted, and dot--dashed lines correspond to $\Delta_{22}$, $\Delta_{25}$, $\Delta_{52}$ and $\Delta_{55}$, respectively. } \label{fig:coeffs} \end{figure} A computation of the ${\cal O}(\alpha_s)$ part of the anomalous dimensions, restricted to the four--fermion operators, has been given in the appendix of ref.~\cite{arvanietal}. From the comparison with eq.~(\ref{gammabino}) it appears that the authors of ref.~\cite{arvanietal} have omitted the mixing among the $B$--ino operators induced by the penguin diagrams. Also, we disagree with ref.~\cite{arvanietal} on the anomalous dimensions of the higgsino operators. Once we have evolved the Wilson coefficients down to the renormalization scale at which we compute the gluino decay width, it is convenient to express the operators in terms of chargino and neutralino mass eigenstates. With the usual definitions for the chargino and neutralino mixing matrices $U$, $V$ and $N$, which we assume to be real, the $B$--ino, $W$--ino and higgsino spinors can be expressed as \begin{equation} \overline{\widetilde{W}^+} = \overline{\chi^+_i}\, \left(U_{i1}\,P_{L}+V_{i1}\,P_{R}\right),\;\;\;\;\;\; \overline{\widetilde{H}^+} = \overline{\chi^+_i}\, \left(U_{i2}\,P_{L}+V_{i2}\,P_{R}\right), \end{equation} \begin{equation} \overline{\widetilde{B}} = \overline{\chi^0_i}\,N_{i1}\,,\;\;\; \overline{\widetilde{W}^3} = \overline{\chi^0_i}\,N_{i2} \,,\;\;\; \overline{\widetilde{H}^0} = \overline{\chi^0_i}\, \left(N_{i4}\, P_{R} - N_{i3}\,P_{L}\right) , \end{equation} where $P_{L}$ and $P_{R}$ are the chiral projectors. In the basis of mass eigenstates, the effective Lagrangian becomes \begin{equation} \label{lagr2} {\cal L} = \frac{1}{\wt{m}^2}\,\sum_{j} C^{\,{\chi}^0_i}_j\,Q^{\,{\chi}^0_i}_j \;+\; \frac{1}{\wt{m}^2}\,\left(\sum_j C^{\,{\chi}^+_i}_j\,Q^{\,{\chi}^+_i}_j + {\rm h.c.}\right) . 
\end{equation} The operators involving neutralinos and quarks and their corresponding Wilson coefficients are \begin{eqnarray} Q^{\,{\chi}^0_i}_{1\,q_{\sss L},q_{\sss R}} &=& \overline{\chi^0_i}\,\gamma^{\mu}\,\gamma_5\,\,{\tilde g}^a\;\otimes\; \sum_{k=1}^2\,\overline{q}^{\,(k)}_{\sss L,R}\,\gamma_{\mu}\,T^a\,q^{(k)}_{\sss L,R} \hspace{2cm}(q=u,d)\,, \\ Q^{\,{\chi}^0_i}_{2\,q_{\sss L},q_{\sss R}} &=& \overline{\chi^0_i}\,\gamma^{\mu}\,\gamma_5\,{\tilde g}^a\;\otimes\; \overline{q}_{\sss L,R}\,\gamma_{\mu}\,T^a\,q_{\sss L,R} \hspace{2.75cm}(q=t,b)\,, \\ Q^{\,{\chi}^0_i}_{3\,q_{\sss L},q_{\sss R}} &=& \overline{\chi^0_i}_{\sss R,L}\,{\tilde g}^a_{\sss L,R}\;\otimes\; \overline{q}_{\sss R,L}\,T^a\,q_{\sss L,R} \hspace{3.45cm}(q=t,b)\,, \\ Q^{\,{\chi}^0_i}_{4\,q_{\sss L},q_{\sss R}} &=& \overline{\chi^0_i}_{\sss R,L}\,\sigma^{\mu\nu}\,\gamma_5\, {\tilde g}^a_{\sss L,R}\;\otimes\; \overline{q}_{\sss R,L}\,\sigma_{\mu\nu}\,T^a\,q_{\sss L,R} \hspace{1.65cm}(q=t,b)\,, \end{eqnarray} \begin{eqnarray} \label{wils1} && C^{\,{\chi}^0_i}_{1\,u_{\sss L}} = C^{\,\wt{\scriptscriptstyle B}}_1\,N_{i1} + C^{\,\wt{\scriptscriptstyle W}}_1\,N_{i2}\,,\;\;\;\;\; C^{\,{\chi}^0_i}_{1\,u_{\sss R}} = C^{\,\wt{\scriptscriptstyle B}}_2\,N_{i1}\,,\\ && C^{\,{\chi}^0_i}_{1\,d_{\sss L}} = C^{\,\wt{\scriptscriptstyle B}}_1\,N_{i1} - C^{\,\wt{\scriptscriptstyle W}}_1\,N_{i2}\,,\;\;\;\;\; C^{\,{\chi}^0_i}_{1\,d_{\sss R}} = C^{\,\wt{\scriptscriptstyle B}}_3\,N_{i1}\,,\\ && C^{\,{\chi}^0_i}_{2\,t_{\sss L}} = C^{\,\wt{\scriptscriptstyle B}}_4\,N_{i1} + C^{\,\wt{\scriptscriptstyle W}}_2\,N_{i2}\,,\;\;\;\; C^{\,{\chi}^0_i}_{3\,t_{\sss L}} = - C^{\,\wt{\scriptscriptstyle H}}_1\,N_{i4}\,,\;\;\;\;\; C^{\,{\chi}^0_i}_{4\,t_{\sss L}} = C^{\,\wt{\scriptscriptstyle H}}_2\,N_{i4}\,,\\ && C^{\,{\chi}^0_i}_{2\,t_{\sss R}} = C^{\,\wt{\scriptscriptstyle B}}_5\,N_{i1}\,,\;\;\;\;\; C^{\,{\chi}^0_i}_{3\,t_{\sss R}} = - C^{\,\wt{\scriptscriptstyle H}}_1\,N_{i4}\,,\;\;\;\;\; C^{\,{\chi}^0_i}_{4\,t_{\sss R}} = -C^{\,\wt{\scriptscriptstyle H}}_2\,N_{i4}\,,\\ && C^{\,{\chi}^0_i}_{2\,b_{\sss L}} = C^{\,\wt{\scriptscriptstyle B}}_4\,N_{i1} - C^{\,\wt{\scriptscriptstyle W}}_2\,N_{i2}\;\;\;\; C^{\,{\chi}^0_i}_{3\,b_{\sss L}} = - C^{\,\wt{\scriptscriptstyle H}}_3\,N_{i3}\,,\;\;\;\;\; C^{\,{\chi}^0_i}_{4\,b_{\sss L}} = -C^{\,\wt{\scriptscriptstyle H}}_4\,N_{i3}\,,\\ && \label{wils2} C^{\,{\chi}^0_i}_{2\,b_{\sss R}} = C^{\,\wt{\scriptscriptstyle B}}_6\,N_{i1}\,,\;\;\;\;\; C^{\,{\chi}^0_i}_{3\,b_{\sss R}} = - C^{\,\wt{\scriptscriptstyle H}}_3\,N_{i3}\,,\;\;\;\;\; C^{\,{\chi}^0_i}_{4\,b_{\sss R}} = C^{\,\wt{\scriptscriptstyle H}}_4\,N_{i3}. 
\end{eqnarray} The operators involving charginos and quarks and their Wilson coefficients are \begin{eqnarray} Q^{\,{\chi}^+_i}_{1\,{\scriptscriptstyle L,R}} &=& \overline{\chi^+_i}_{\sss L,R}\,\gamma^{\mu}\,{\tilde g}^a_{\sss L,R}\;\otimes\; \sum_{k=1}^2 \,\overline{d}_{\sss L}^{\,(k)}\,\gamma_{\mu}\,T^a\, u_{\sss L}^{(k)} \\ Q^{\,{\chi}^+_i}_{2\,{\scriptscriptstyle L,R}} &=& \overline{\chi^+_i}_{\sss L,R}\,\gamma^{\mu}\,{\tilde g}^a_{\sss L,R}\;\otimes\; \overline{b}_{\sss L}\,\gamma_{\mu}\,T^a\,t_{\sss L} \\ Q^{\,{\chi}^+_i}_{3\,{\scriptscriptstyle L,R}} &=& \overline{\chi^+_i}_{\sss R,L}\,{\tilde g}^a_{\sss L,R}\;\otimes\; \overline{b}_{\sss R,L}\,T^a\,t_{\sss L,R} \\ Q^{\,{\chi}^+_i}_{4\,{\scriptscriptstyle L,R}} &=& \overline{\chi^+_i}_{\sss R,L}\,\sigma^{\mu\nu}\,{\tilde g}^a_{\sss L,R}\;\otimes\; \overline{b}_{\sss R,L}\,\sigma_{\mu\nu}\,T^a\,t_{\sss L,R} \end{eqnarray} \begin{eqnarray} && \label{wils3} \CCL{1} \;=\; -\sqrt{2}\,C^{\,\wt{\scriptscriptstyle W}}_1\,V_{i1}\,,\;\;\;\;\; \CCR{1} \;=\; \sqrt{2}\,C^{\,\wt{\scriptscriptstyle W}}_1\,U_{i1}\,, \\ && \CCL{2} = -\sqrt{2}\,C^{\,\wt{\scriptscriptstyle W}}_2\,V_{i1}\,,\;\;\;\;\; \CCL{3} = C^{\,\wt{\scriptscriptstyle H}}_3\,U_{i2}\,,\;\;\;\;\; \CCL{4} = C^{\,\wt{\scriptscriptstyle H}}_4\,U_{i2}\,,\\ && \CCR{2} = \sqrt{2}\,C^{\,\wt{\scriptscriptstyle W}}_2\,U_{i1}\,,\;\;\;\;\; \CCR{3} = C^{\,\wt{\scriptscriptstyle H}}_1\,V_{i2}\,,\;\;\;\;\; \CCR{4} = C^{\,\wt{\scriptscriptstyle H}}_2\,V_{i2}\,. \label{wils4} \end{eqnarray} All Wilson coefficients in eqs.~(\ref{wils1})--(\ref{wils2}) and (\ref{wils3})--(\ref{wils4}) are evaluated at the scale $\mu$ at which the gluino decay width is computed (we take $\mu=m_{\tilde{g}}$ in our numerical analysis). The magnetic operator involving a neutralino and a gluon is \begin{equation} Q^{\,{\chi}^0_i}_g = \overline{\chi^0_i}\,\sigma^{\mu\nu}\,\gamma_5\,{\tilde g}^a\,G_{\mu\nu}^a\,. \end{equation} In order to reach the desired accuracy in the ${\tilde g}\to g {\tilde \chi}^0$ process, we need to include the matrix element contribution coming from the diagram in which the two top quarks in the operator $Q^{\,\wt{\scriptscriptstyle H}}_2$ close in a loop emitting a gluon. This results in an ``effective'' Wilson coefficient \begin{equation} \label{coeffg} \left.C^{\,{\chi}^0_i}_g\right._{\rm \!\!\!eff} (\mu) =C^{\,\wt{\scriptscriptstyle B}}_7(\mu)\,N_{i1}+C^{\,\wt{\scriptscriptstyle H}}_5(\mu)\,N_{i4}\,v + \frac{g_s\,h_t}{8\pi^2}\,C^{\,\wt{\scriptscriptstyle H}}_2(\mu)\,\,N_{i4}\,v\,\ln\frac{m_t^2}{\mu^2}\,, \end{equation} where $v$ is the Higgs vacuum expectation value and we take $\mu =m_{\tilde{g}}$. From the effective Lagrangian of \eq{lagr2} we can compute the gluino decay widths; the complete expressions can be found in the appendix. The same effective Lagrangian also correctly describes the interactions that lead to the decays ${\tilde g}\to g g{\tilde \chi}^0$ and ${\tilde g}\to g h^0 {\tilde \chi}^0$. However, since these processes are subleading, we will not explicitly calculate their decay widths. \section{Results} \label{sec4} We are now ready to discuss the results of our computation of the decay width and branching ratios of the gluino in {Split Supersymmetry}. The input parameters relevant to our analysis are the sfermion mass scale $\widetilde{m}$, the physical gluino mass $m_{\tilde{g}}$ and $\tan \beta$, which in {Split Supersymmetry} is interpreted as the tangent of the angle that rotates the finely tuned Higgs doublets.
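Before presenting numerical results, it may help to summarize how these ingredients combine in the radiative channel. The following sketch (natural units, variable names ours) evaluates eq.~(\ref{coeffg}) together with the two--body width formula quoted in the appendix; it is meant for orientation only:
\begin{verbatim}
import math

def c_g_eff(C_B7, C_H5, C_H2, N_i1, N_i4, gs, ht, v, mt, mu):
    # effective magnetic coefficient of eq. (coeffg), mu = m_gluino
    top_loop = gs * ht / (8 * math.pi**2) * math.log(mt**2 / mu**2)
    return C_B7 * N_i1 + C_H5 * N_i4 * v + top_loop * C_H2 * N_i4 * v

def width_radiative(m_gluino, m_chi, mtilde, Cg):
    # two-body width gluino -> gluon + neutralino_i (appendix formula)
    return ((m_gluino**2 - m_chi**2)**3 * Cg**2
            / (2 * math.pi * m_gluino**3 * mtilde**4))
\end{verbatim}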
To simplify the analysis we assume that the squark masses are degenerate, i.e.~we set $r_{\tilde{q}_{\sss L}} = r_{\tilde{u}_{\sss R}} = r_{\tilde{d}_{\sss R}} = 1$ in the matching conditions of the Wilson coefficients. The gluino mass parameter in the Lagrangian, $M_3$, is extracted from $m_{\tilde{g}}$ including radiative corrections, and the other gaugino masses $M_1$ and $M_2$ are computed from $M_3$ assuming unification at the GUT scale. The higgsino mass parameter $\mu$ is determined as a function of $M_2$ by requiring that the relic abundance of neutralinos is equal to the dark--matter density preferred by WMAP data \cite{wmap} (see fig.~11 of ref.~\cite{split}\,). The sign of $\mu$ remains a free parameter, but since it does not affect our results for the gluino decays in a significant way we will assume $\mu>0$ throughout our analysis. The effective couplings of gauginos and higgsinos at the weak scale, needed to compute the chargino and neutralino mass matrices, are determined from their high--energy (supersymmetric) boundary values by means of the renormalization--group equations of {Split Supersymmetry}, given in ref.~\cite{split}. Finally, the SM input parameters relevant to our analysis are: the physical masses for the top quark and gauge bosons, $m_t=178$ GeV, $M_Z = 91.187$ GeV and $M_W=80.41$ GeV; the running bottom mass computed at the scale of the top mass, $m_b(m_t) = 2.75$ GeV; the Fermi constant, $G_F = 1.166 \times 10^{-5}$ GeV$^{-2}$; the running strong coupling computed at the scale of the top mass, $\alpha_s(m_t) = 0.106$. \begin{figure}[t] \begin{center} \mbox{\epsfig{file=lifetime_tb2_plus.eps,width=11cm}} \end{center} \vspace{-2mm} \caption{\sf Gluino lifetime $\tau_{\tilde{g}}$ as a function of the sfermion mass scale $\wt{m}$, for different values of the physical gluino mass $m_{\tilde{g}}$. The other free parameters are chosen as $\tan\beta = 2$ and $\mu>0$. The dashed horizontal line corresponds to the age of the universe, $\tau_{\scriptscriptstyle U}=14$ Gyr.} \label{fig:life} \end{figure} To start our discussion, we show in \fig{fig:life} the gluino lifetime $\tau_{\tilde{g}}$ (in seconds) as a function of the sfermion mass scale $\wt{m}$, for $\tan\beta = 2$ and four different values of the physical gluino mass ($m_{\tilde{g}}$ = 0.5, 1, 2 and 5 TeV, respectively). It can be seen that $\tau_{\tilde{g}}$ is about 4 seconds for $m_{\tilde{g}}=1$ TeV and $\wt{m} = 10^9$ GeV. A value of $\tau_{\tilde{g}}$ equal to the age of the universe (14~Gyr) corresponds to $\widetilde{m} = (1.1, \, 2.1,\, 4.5,\, 13) \times 10^{13}$ GeV for $m_{\tilde{g}}$ = 0.5, 1, 2 and 5 TeV, respectively. \begin{figure}[p] \begin{center} \mbox{ \epsfig{figure=BRvsMS_tb20_mg500_plus.eps,width=8.15cm} \epsfig{figure=BRvsMS0_tb20_mg500_plus.eps,width=7.85cm}} \vspace*{6mm} \mbox{ \epsfig{figure=BRvsMS_tb20_mg1000_plus.eps,width=8.15cm} \epsfig{figure=BRvsMS0_tb20_mg1000_plus.eps,width=7.85cm}} \vspace*{6mm} \mbox{ \epsfig{figure=BRvsMS_tb20_mg2000_plus.eps,width=8.15cm} \epsfig{figure=BRvsMS0_tb20_mg2000_plus.eps,width=7.85cm}} \end{center} \caption{\sf Branching ratios for the gluino decay channels $\chi^0 g$ (dashed lines), $\chi^0 q \bar{q}$ (dotted) and $\chi^{\pm} q \bar{q}^{\,\prime}$ (dot--dashed), summed over all possible neutralino or chargino states, as a function of $\wt{m}$, for three values of $m_{\tilde{g}}$: 500 GeV (upper plots), 1 TeV (middle) and 2 TeV (lower). 
The curves in the left (right) plots do (do not) include the resummation of the leading logarithmic corrections. Other parameters are $\tan\beta=20$ and $\mu>0$.} \label{fig:bratios} \end{figure} In \fig{fig:bratios} we show the branching ratios for the three decay processes $\tilde{g} \rightarrow \chi^0 g$, $\tilde{g} \rightarrow \chi^0 q \bar{q}$ and $\tilde{g} \rightarrow \chi^{\pm} q \bar{q}^{\,\prime}$ (summed over all neutralino or chargino states) as a function of $\wt{m}$, for $\tan\beta = 20$ and three different values of $m_{\tilde{g}}$\,: 500 GeV (upper plots), 1 TeV (middle plots) and 2 TeV (lower plots). The value of $\tan\beta$ has little impact on these results. The plots on the left of \fig{fig:bratios} represent the results of our full calculation, including the resummation of the leading logarithmic corrections controlled by $\alpha_s$ and $\alpha_t$. The plots on the right represent instead the lowest--order results that do not include the resummation. We obtain the latter results by replacing the Wilson coefficients of the four--fermion operators in the low--energy effective Lagrangian with their tree--level expressions in terms of gauge and Yukawa couplings [eqs.~(\ref{matchB1})--(\ref{matchB2}), (\ref{matchW}) and (\ref{matchH1})--(\ref{matchH2})], and the Wilson coefficient of the magnetic operator with its one--loop expression. The plots in \fig{fig:bratios} show that the branching ratio of the radiative decay $\tilde{g} \rightarrow \chi^0 g$ decreases for increasing $m_{\tilde{g}}$ and increases for increasing $\wt{m}$. In fact, as stressed in ref.~\cite{jim}, the ratio between the two--body and three--body decay rates computed at lowest order scales like $m_t^2/m_{\tilde{g}}^2\;[1-\ln(\wt{m}^2/m_t^2)]^2$, where the logarithmic term comes from the top--stop loop that generates the magnetic gluino--gluon--higgsino interaction. For large values of $\wt{m}$, the resummation of the logarithms becomes necessary. Comparing the plots on the left and right sides of \fig{fig:bratios}, we see that the resummation of the leading logarithmic corrections tends to enhance the three--body decays and suppress the radiative decay. The effect of the corrections on the branching ratios is particularly visible when, like in the middle and lower plots, neither the two--body nor the three--body channels are obviously dominant in the range $10^8$ GeV $< \wt{m} < 10^{13}$ GeV, relevant to Split Supersymmetry. \begin{figure}[t] \begin{center} \mbox{\hspace{8mm} \epsfig{file=corrvsMS_tb20_mg1000_plus.eps,width=11cm}} \end{center} \caption{\sf Effect of the radiative corrections on the partial widths for the decays $\tilde{g}\rightarrow\chi^0 g$ (dashed lines), $\tilde{g}\rightarrow\chi^0 q \bar{q}$ (dotted) and $\tilde{g}\rightarrow\chi^{\pm} q \bar{q}^{\,\prime}$ (dot--dashed) as a function of $\wt{m}$. The parameters are chosen as $m_{\tilde{g}} = 1$ TeV, $\tan\beta = 20$ and $\mu>0$.} \label{fig:corrs} \end{figure} \begin{figure}[t] \begin{center} \mbox{\epsfig{file=gamma_tb20_plus_2000.eps,width=11cm}} \end{center} \vspace{-2mm} \caption{\sf The normalization $N$ of eq.~(\ref{scaling}) as a function of the sfermion mass scale $\wt{m}$, with (solid lines) and without (dashed lines) resummation of the leading logarithmic corrections. The upper, middle and lower sets of curves correspond to $m_{\tilde{g}} =$ 0.5, 1 and 2 TeV, respectively. 
The other free parameters are chosen as $\tan\beta = 20$ and $\mu>0$.} \label{fig:gamma} \end{figure} To further illustrate the effect of the resummation of the leading logarithmic corrections, we plot in fig.~\ref{fig:corrs} the ratio $\Gamma/\Gamma_0$ of the partial decay widths with and without resummation, for the processes $\tilde{g} \rightarrow \chi^0 g$, $\tilde{g} \rightarrow \chi^0 q \bar{q}$ and $\tilde{g} \rightarrow \chi^{\pm} q \bar{q}^{\,\prime}$. We fix $m_{\tilde{g}} = 1$ TeV, $\tan\beta = 20$ and $\mu>0$, but we have checked that the qualitative behaviour of the corrections is independent of the precise choice of the parameters. It can be seen from fig.~\ref{fig:corrs} that for large enough values of $\wt{m}$ the radiative corrections can be of the order of 50--100\%, and that they enhance the widths for the three--body decays and suppress the width for the radiative decay. To conclude this section, we discuss the scaling behaviour of the gluino lifetime and total decay width. The lifetime $\tau_{\tilde{g}} = \hbar/\Gamma_{\rm tot}$ can be written as \begin{equation} \label{scaling} \tau_{\tilde{g}} = \frac{4 \,{\rm sec}}{N}\, \times \,\left(\frac{\wt{m}}{10^9\,{\rm GeV}}\right)^4 \times \left(\frac{1 \,{\rm TeV}}{m_{\tilde{g}}}\right)^5 , \end{equation} where the normalization $N$ is of order unity and depends on $\wt{m}$ and $m_{\tilde{g}}$ (and only very mildly on $\tan\beta$). In \fig{fig:gamma} we show $N$ as a function of $\wt{m}$ for $\tan\beta = 20$ and three different values of the physical gluino mass ($m_{\tilde{g}} = 0.5,\, 1$ and 2 TeV, respectively). The non--vanishing slope of $N$ represents the deviation of the total gluino decay width from the naive scaling behaviour $\Gamma_{\rm tot} \propto m_{\tilde{g}}^5\,/\,\wt{m}^4$. The solid lines in the plot represent the results of our full calculation, whereas the dashed lines represent the lowest--order results that do not include the resummation. For low values of $m_{\tilde{g}}$ the contribution of the radiative decay dominates (see \fig{fig:bratios}), thus the total decay width departs visibly from the naive scaling and is significantly suppressed by the resummation of the radiative corrections. On the other hand, for large values of $m_{\tilde{g}}$ the three--body decays dominate, and the effect of the resummation is to enhance the total decay width. Finally, for the intermediate value $m_{\tilde{g}}=1$ TeV there is a compensation between the corrections to the radiative decay width and those to the three--body decay widths, and the net effect on the total decay width of the resummation of the leading logarithmic corrections is rather small. \vspace{-1mm} \section{Gluino Decay into Gravitinos} \label{sec5} \vspace{-1mm} Split Supersymmetry\ opens up the possibility of direct tree-level mediation of the original supersymmetry breaking to the SM superfields, without the need of a hidden sector~\cite{noi4}. In usual low-energy supersymmetry, this possibility is impracticable: for $F$--term breaking some scalars remain lighter than the SM matter fermions, and for $D$--term breaking gaugino masses cannot be generated at the same order of scalar masses. In Split Supersymmetry\ a large hierarchy between scalar and gaugino masses is acceptable, and indeed models have been proposed~\cite{noi4,babu} with direct mediation of $D$--term supersymmetry breaking. 
Therefore, in Split Supersymmetry\ the original scale of supersymmetry breaking $\sqrt{F}$, which is related to the gravitino mass by \begin{equation} m_{3/2}=\sqrt{\frac{8\pi}{3}} \frac{F}{M_{\rm Pl}}, \end{equation} could be as low as the squark mass scale $\widetilde{m}$. This means that the interactions between the gluino and (the spin--1/2 component of) the gravitino, which are suppressed by $1/F$, could be as strong as those considered in the previous sections, which are suppressed by $1/\widetilde{m}^2$. For $m_{3/2} \ll m_{\tilde{g}}$, the gravitino interactions can be obtained, through the supersymmetric analogue of the equivalence theorem~\cite{equi}, from the goldstino derivative coupling to the supercurrent. This approximation is valid as long as $\sqrt{F} \ll 6\times (m_{\tilde{g}} /1\,{\rm TeV})^{1/2} \times 10^{10}$~GeV. Using the equations of motion, we can write the effective goldstino ($\widetilde G$) interactions for on--shell particles as \begin{equation} {\cal L}=\frac 1F \left( -m_{{\tilde q}_{\sss L}}^2 {\tilde q}_{\sss L} {\bar q}_{\sss L} -m_{{\tilde q}_{\sss R}}^2 {\tilde q}_{\sss R} {\bar q}_{\sss R} +\frac{m_{\tilde{g}}}{4\sqrt{2}}\,{\overline{{\tilde g}^a}}\, \sigma^{\mu\nu} \gamma_5 \,G^a_{\mu \nu} \right) \tilde G +{\rm h.c.} \label{goldlag} \end{equation} Below $\widetilde{m}$, the effective Lagrangian describing the interactions between the gluino and the goldstino becomes \begin{equation} {\cal L} = \frac{1}{F}\,\sum_{i=1}^5 C^{\,\wt{\scriptscriptstyle G}}_i\,Q^{\,\wt{\scriptscriptstyle G}}_i\,, \label{eglag} \end{equation} \begin{eqnarray} Q^{\,\wt{\scriptscriptstyle G}}_1 & = & \overline{\widetilde{G}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\;\sum_{k=1,2 \atop q=u,d}\; \overline{q}^{\,(k)} \,\gamma_{\mu} \, T^a \,q^{\,(k)} \label{opg1}\\ Q^{\,\wt{\scriptscriptstyle G}}_2 & = & \overline{\widetilde{G}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\; \overline{q}_{\sss L}^{\,(3)} \,\gamma_{\mu} \, T^a \,q_{\sss L}^{\,(3)}\\ Q^{\,\wt{\scriptscriptstyle G}}_3 & = & \overline{\widetilde{G}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\; \overline{t}_{\sss R} \,\gamma_{\mu} \, T^a \, t_{\sss R} \\ Q^{\,\wt{\scriptscriptstyle G}}_4 & = & \overline{\widetilde{G}} \,\gamma^{\mu}\, \gamma_{5}\, {\tilde g}^a\; \otimes\; \overline{b}_{\sss R} \,\gamma_{\mu}\, T^a\, b_{\sss R} \\ Q^{\,\wt{\scriptscriptstyle G}}_5 & = & \overline{\widetilde{G}} \,\sigma^{\mu\nu} \,\gamma_5\, {\tilde g}^a\;G^a_{\mu\nu}. \label{opg5} \end{eqnarray} The Wilson coefficients at the matching scale $\widetilde{m}$ are \begin{equation} \label{boundgoldstino} C^{\,\wt{\scriptscriptstyle G}}_1=C^{\,\wt{\scriptscriptstyle G}}_2=C^{\,\wt{\scriptscriptstyle G}}_3=C^{\,\wt{\scriptscriptstyle G}}_4 = -\frac{g_s}{\sqrt{2}}\,,\;\;\;\;\;\;\; C^{\,\wt{\scriptscriptstyle G}}_5 = -\frac{m_{\tilde{g}}}{2\,\sqrt{2}}. \end{equation} Note that the coefficients of the interactions in \eq{eglag} have no dependence on $\widetilde{m}$, because the squark mass square in the propagators of the particles we integrate out is exactly cancelled by the squark mass square in the goldstino coupling in \eq{goldlag}. The operator renormalization proceeds analogously to the discussion in sect.~\ref{sec3}. 
The anomalous dimension matrix of the operators in eqs.~(\ref{opg1})--(\ref{opg5}) is given by \eq{gamgam} with \begin{equation} \gamma_s^{\widetilde{\scriptscriptstyle G}}=\frac 13 \pmatrix{ 16-9N_c & 16 & 16 & 16 & 0 \cr 4 & 4-9N_c & 4 & 4 & 0 \cr 2 & 2 & 2-9N_c & 2 & 0 \cr 2 & 2 & 2 & 2-9N_c & 0 \cr 0 & 0 & 0 & 0 & 2N_f-18 N_c }, \end{equation} \begin{equation} \gamma_t^{\widetilde{\scriptscriptstyle G}}= \pmatrix{ 0 & 0 & 0 & 0 & 0 \cr 0 & 1 & \!\!\!\!-2 & 0 & 0 \cr 0 & \!\!\!\!-1 & 2 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 },~~~~\gamma_{st}^{\widetilde{\scriptscriptstyle G}}=0\,. \end{equation} The evolution of the Wilson coefficients for the goldstino operators has the simple analytic form \begin{eqnarray} C^{\,\wt{\scriptscriptstyle G}}_i (\mu) &=& \eta_s^{-\frac {9}{10}} \left[ C^{\,\wt{\scriptscriptstyle G}}_i (\widetilde{m} ) +y \,{\overline C^{\,\wt{\scriptscriptstyle G}}}(\widetilde{m} )\right] ~~~i=1,4 \label{pincg}\\ C^{\,\wt{\scriptscriptstyle G}}_2 (\mu) &=& \eta_s^{-\frac {9}{10}} \left[ (1+z)\,C^{\,\wt{\scriptscriptstyle G}}_2 (\widetilde{m} ) -z \,C^{\,\wt{\scriptscriptstyle G}}_3 (\widetilde{m} ) +y \,{\overline C^{\,\wt{\scriptscriptstyle G}}}(\widetilde{m} )\right] ,\\ C^{\,\wt{\scriptscriptstyle G}}_3 (\mu ) &=& \eta_s^{-\frac {9}{10}} \left[ (1+2\,z)\,C^{\,\wt{\scriptscriptstyle G}}_3 (\widetilde{m} ) -2\,z \,C^{\,\wt{\scriptscriptstyle G}}_2 (\widetilde{m} ) +y \,{\overline C^{\,\wt{\scriptscriptstyle G}}}(\widetilde{m} )\right] \label{pallg},\\ C^{\,\wt{\scriptscriptstyle G}}_5(\mu) &= &\eta_s^{-\frac {7}{5}}\,C^{\,\wt{\scriptscriptstyle G}}_5 (\widetilde{m} )\,, \end{eqnarray} where ${\overline C^{\,\wt{\scriptscriptstyle G}}}=2\,C^{\,\wt{\scriptscriptstyle G}}_1 /3 +C^{\,\wt{\scriptscriptstyle G}}_2/6+(C^{\,\wt{\scriptscriptstyle G}}_3+C^{\,\wt{\scriptscriptstyle G}}_4)/12$, $y=\eta_s^{\,4/5}-1$, and $z=(\eta_s^{\,8/15}\,\eta_t^{-1/3}-1)/3$. The quantities $\eta_s$ and $\eta_t$ have been defined in eqs.~(\ref{etas}) and (\ref{etat}), respectively. \begin{figure}[t] \begin{center} \vspace*{-4mm} \mbox{\epsfig{file=goldstino_tb2_10to9.eps,width=11cm}} \end{center} \vspace{-4mm} \caption{\sf Branching ratio for the decay $\tilde g \rightarrow \widetilde G \, g$ as a function of $\sqrt{F}/\wt{m}$, for different values of the physical gluino mass $m_{\tilde{g}}$. The other free parameters are chosen as $\wt{m} = 10^{9}$ GeV, $\tan\beta = 2$ and $\mu>0$.} \label{fig:goldstino} \end{figure} The formulae for the gluino decay widths into goldstino and quarks and into goldstino and gluon can be found in the appendix. In \fig{fig:goldstino} we show the branching ratio for the process $\tilde g \rightarrow \widetilde G \, g$ as a function of the ratio $\sqrt{F}/\wt{m}$, for $\wt{m} = 10^9$ GeV, $\tan\beta=2\,,\;\mu>0$ and different values of the gluino mass. The branching ratio for the decay into goldstino and quarks, suppressed by phase space, is always at or below the 1\% level. It can be seen from \fig{fig:goldstino} that the gluino decay into goldstino and gluon is largely dominant when $\sqrt{F}$ is as low as $\wt{m}$. In fact, the decays into charginos or neutralinos and quarks (relevant for large values of $m_{\tilde{g}}$) are suppressed by phase space, while the radiative decay into gluon and neutralinos (relevant for smaller values of $m_{\tilde{g}}$) is suppressed by $m_t^2/m_{\tilde{g}}^2$ and a loop factor.
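To make the running of the magnetic goldstino coefficient concrete, a minimal sketch (natural units $\hbar=c=1$; the boundary value is that of eq.~(\ref{boundgoldstino}), the width formula is the one given in the appendix, and the function name is ours) reads:
\begin{verbatim}
import math

def width_goldstino_gluon(m_gluino, sqrtF, alpha_s_mt, mtilde, beta_s=5.0):
    # one-loop eta_s of eq. (etas), evaluated at mu = m_gluino
    eta_s = (1.0 + alpha_s_mt / (2 * math.pi) * beta_s
             * math.log(m_gluino / mtilde))
    # multiplicative running C5(mu) = eta_s^(-7/5) C5(mtilde),
    # with C5(mtilde) = -m_gluino / (2 sqrt(2))
    C5 = eta_s**(-1.4) * (-m_gluino / (2.0 * math.sqrt(2.0)))
    # appendix formula: Gamma = m^3 C5^2 / (2 pi F^2), F = sqrtF^2
    return m_gluino**3 * C5**2 / (2 * math.pi * sqrtF**4)
\end{verbatim}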
With respect to the scaling behaviour outlined in eq.~(\ref{scaling}), the additional contribution to the total gluino decay width coming from the decay into goldstino and gluon can significantly suppress the gluino lifetime. In fact, for $\sqrt{F}= \wt{m}$ the normalization $N$ in eq.~(\ref{scaling}) takes on values of order 40--50 for $\wt{m} > 10^8$ GeV. On the other hand, the widths for the gluino decays into goldstino are suppressed by a factor $\wt{m}^4/F^{2}$ with respect to those for decays into charginos or neutralinos. Fig.~\ref{fig:goldstino} shows that as soon as we depart from the condition $\sqrt{F}= \wt{m}$ the branching ratio for $\tilde g \rightarrow \widetilde G \, g$ falls off very quickly, and already for $\sqrt{F}/\wt{m}$ as large as 10 the gluino decays into goldstino are below the 1\% level. \vspace{-4mm} \section{Conclusions} \vspace{-4mm} \label{concl} If Split Supersymmetry\ is the correct theory to describe physics beyond the Standard Model, one of its most spectacular manifestations might be the discovery of a very long--lived gluino at the LHC. In this paper we provided a precise determination of the gluino lifetime and branching ratios. Applying effective Lagrangian and renormalization group techniques to {Split Supersymmetry}, we discussed the proper treatment of the radiative corrections that are enhanced by the large logarithm of the ratio between the sfermion mass scale and the gluino mass. We computed the anomalous dimensions of the operators relevant to gluino decay, which allow us to resum to all orders in the perturbative expansion the leading logarithmic corrections controlled by $\alpha_s$ and $\alpha_t$. We also provided explicit analytical formulae for the gluino decay widths in terms of the Wilson coefficients of the effective Lagrangian of Split Supersymmetry. For representative values of the input parameters, we discussed the numerical impact of the radiative corrections and found that they can substantially modify the gluino decay width and branching ratios. Finally, we considered models with direct mediation of supersymmetry breaking, and we found that the gluino decays into gravitinos might dominate over the other decay modes. \newpage \section*{Appendix} We present in this appendix the explicit formulae for the leading three--body and two--body gluino decay widths. All the results are expressed in terms of the Wilson coefficients of the effective Lagrangian of {Split Supersymmetry}, discussed in sects.~\ref{sec2}, \ref{sec3} and \ref{sec5}. \paragraph{Three--body decays into quarks and chargino or neutralino:} denoting the momenta of the decay products as $(p_1,p_2,p_3) \equiv (p_{q_I},p_{\overline{q}_J},p_{\chi})$, and $s_{ij} = (p_i+p_j)^2$, the three--body decay width is given by \begin{equation} \label{decay} \Gamma_{\chi\, q_I\overline{q}_J} = \frac{1}{256\,\pi^3\,m_{\tilde{g}}^3\;\widetilde{m}^4} \;\int \; \overline{\left|{\cal M}\right|^2} \; ds_{13}\,ds_{23}\,. \end{equation} The bar over $\left|{\cal M}\right|^2$ denotes the average over colour and spin of the gluino and the sum over colour and spin of the final state particles (the dependence on $\widetilde{m}$ has been factored out). The limits of the integration in the ($s_{13},s_{23}$) plane are \begin{eqnarray} s_{13}^{\rm max} &= &m_{q_I}^2 + m_{\chi}^2 + \frac{1}{2\,s_{23}}\, \left[(m_{\tilde{g}}^2-m_{q_I}^2-s_{23})\,(s_{23}-m_{\overline{q}_J}^2+m_{\chi}^2)\right.
\label{s13max} \nonumber\\ &&\hspace{4cm}+\left.\lambda^{1/2}(s_{23},m_{\tilde{g}}^2,m_{q_I}^2) \,\lambda^{1/2}(s_{23},m_{\overline{q}_J}^2,m_{\chi}^2)\right]\,,\\ s_{13}^{\rm min} &= &m_{q_I}^2 + m_{\chi}^2 + \frac{1}{2\,s_{23}}\, \left[(m_{\tilde{g}}^2-m_{q_I}^2-s_{23})\,(s_{23}-m_{\overline{q}_J}^2+m_{\chi}^2)\right. \nonumber\\ &&\hspace{4cm}-\left.\lambda^{1/2}(s_{23},m_{\tilde{g}}^2,m_{q_I}^2) \,\lambda^{1/2}(s_{23},m_{\overline{q}_J}^2,m_{\chi}^2)\right]\,,\\ s_{23}^{\rm max} &= & (|m_{\tilde{g}}|-m_{q_I})^2\,,\\ s_{23}^{\rm min} &= & (|m_{\chi}|+m_{\overline{q}_J})^2\,, \label{s23min} \end{eqnarray} where $\lambda(x,y,z) = x^2+y^2+z^2 -2\,(xy+xz+yz)$. In the computation of the decays involving quarks of the first and second generation we can neglect the quark masses and we find \begin{eqnarray} \label{gammacharg} \Gamma_{\chi_i^+d\overline{u}}=\Gamma_{\chi_i^-u\overline{d}}&=& \frac{m_{\tilde{g}}^5}{1536\,\pi^3\,\wt{m}^4}\left[ \left(\left.\CCL{1}\right.^2+\left.\CCR{1}\right.^2\right)\,g(x_i) - 2\,\CCL{1}\,\CCR{1}\, f(x_i)\right]\,,\\ &&\nonumber\\ \label{gammaneut} \Gamma_{\chi_i^0q\overline{q}}&=& \frac{m_{\tilde{g}}^5}{768\,\pi^3\,\wt{m}^4}\; \left(\left.\CNL{1}\right.^2+\left.\CNR{1}\right.^2\,\right)\, \biggr[g(x_i) + f(x_i)\biggr]\;\;\;\; (q=u,d)\,, \end{eqnarray} where $x_i = m_{\chi_i}/m_{\tilde{g}}$, and we have included an overall factor 2 to take into account the sum over the two generations of light quarks. The functions $f$ and $g$ are defined as: \begin{eqnarray} g(x) & = & 1-8\,x^2+8\,x^6-x^8-12\,x^4\,\ln x^2 \,,\\ f(x) & = & 2\,x+18\,x^3-18\,x^5-2\,x^7+12\,x^3(1+x^2)\,\ln x^2\,. \end{eqnarray} For generic quark masses the integration of the squared amplitude $\overline{\left|{\cal M}\right|^2}$ on the $(s_{13},s_{23})$ plane cannot be performed analytically, and in order to compute the total decay width we must resort to a numerical integration. 
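In practice this integration is straightforward; the following midpoint-rule sketch (illustrative only; the squared amplitude, e.g. eq.~(\ref{Mtb}) or (\ref{Mqq}) below, is passed in as a function of $(s_{13},s_{23})$, and all names are ours) implements eq.~(\ref{decay}) with the limits of eqs.~(\ref{s13max})--(\ref{s23min}):
\begin{verbatim}
import math

def lam(x, y, z):
    # Kallen function lambda(x, y, z)
    return x*x + y*y + z*z - 2.0 * (x*y + x*z + y*z)

def width_3body(M2, mg, m1, m2, m3, mtilde, n=400):
    # (m1, m2, m3) = (m_qI, m_qbarJ, m_chi); midpoint rule on an
    # n x n grid over the Dalitz region
    s23_min = (abs(m3) + m2)**2
    s23_max = (abs(mg) - m1)**2
    h23 = (s23_max - s23_min) / n
    total = 0.0
    for i in range(n):
        s23 = s23_min + (i + 0.5) * h23
        a = (mg**2 - m1**2 - s23) * (s23 - m2**2 + m3**2)
        b = math.sqrt(max(lam(s23, mg**2, m1**2), 0.0)
                      * max(lam(s23, m2**2, m3**2), 0.0))
        s13_lo = m1**2 + m3**2 + (a - b) / (2.0 * s23)
        s13_hi = m1**2 + m3**2 + (a + b) / (2.0 * s23)
        h13 = (s13_hi - s13_lo) / n
        for j in range(n):
            s13 = s13_lo + (j + 0.5) * h13
            total += M2(s13, s23) * h13 * h23
    return total / (256.0 * math.pi**3 * mg**3 * mtilde**4)
\end{verbatim}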
The squared amplitude for the processes $\tilde{g}\rightarrow \chi_i^+\,b\,\overline{t}$ and $\tilde{g}\rightarrow \chi_i^-\,t\,\overline{b}$ is given by \begin{eqnarray} \label{Mtb} \overline{\left|{\cal M}\right|^2} &=& \left.\CCL{2}\right.^2\,(m_{\tilde{g}}^2+m_t^2-s_{13})\,(s_{13}-m_{\chi^+_i}^2-m_b^2)\nonumber\\ && \nonumber\\ &+&\left.\CCR{2}\right.^2 \,(m_{\tilde{g}}^2+m_b^2-s_{23})\,(s_{23}-m_{\chi^+_i}^2-m_t^2) \nonumber\\ && \nonumber\\ &+& \frac{1}{4}\, \left(\left.\CCL{3}\right.^2+\left.\CCR{3}\right.^2\right)\; (m_{\chi^+_i}^2+m_{\tilde{g}}^2-s_{13}-s_{23})\,(s_{13}+s_{23}-m_t^2-m_b^2)\nonumber\\ && \nonumber\\ &+& 4\, \left(\left.\CCL{4}\right.^2+\left.\CCR{4}\right.^2\right)\; \biggr[(m_{\chi^+_i}^2+m_{\tilde{g}}^2-s_{13}-s_{23})\, (s_{13}+s_{23}-m_t^2-m_b^2-4\,m_{\chi^+_i}^2)\nonumber\\ &&\hspace{4cm}+4\,(s_{13}-m_{\chi^+_i}^2)\, (s_{23}-m_{\chi^+_i}^2)-4\,m_t^2\,m_b^2\biggr]\nonumber\\ && \nonumber\\ &+&2\,\CCL{2}\,\CCR{2}\,m_{\tilde{g}}\,m_{\chi^+_i}\, (s_{13}+s_{23}-m_{\chi^+_i}^2-m_{\tilde{g}}^2)\nonumber\\ && \nonumber\\ &+&\left(\CCR{2}\,\CCR{3} + 12\,\CCR{2}\,\CCR{4} \right) \,m_{\chi^+_i}\,m_t\,(s_{23}-m_b^2-m_{\tilde{g}}^2)\nonumber\\ && \nonumber\\ &+&\left(\CCR{2}\,\CCL{3} + 12\,\CCR{2}\,\CCL{4} \right) \,m_{\tilde{g}}\,m_b\,(s_{23}-m_t^2-m_{\chi^+_i}^2)\nonumber\\ && \nonumber\\ &-&\left(\CCL{2}\,\CCR{3} - 12\,\CCL{2}\,\CCR{4} \right) \,m_{\tilde{g}}\,m_t\,(s_{13}-m_b^2-m_{\chi^+_i}^2)\nonumber\\ && \nonumber\\ &-&\left(\CCL{2}\,\CCL{3} - 12\,\CCL{2}\,\CCL{4} \right) \,m_{\chi^+_i}\,m_b\,(s_{13}-m_t^2-m_{\tilde{g}}^2)\nonumber\\ && \nonumber\\ &+&2\,\left(\CCL{3}\,\CCL{4} + \CCR{3}\,\CCR{4} \right) \,\left[(m_{\tilde{g}}^2+m_{\chi^+_i}^2-s_{13}-s_{23})\,(s_{23}-s_{13}+m_b^2-m_t^2)\right.\nonumber\\ &&\hspace{4cm} \left.+ 2\,m_b^2\,(s_{23}-m_t^2-m_{\chi^+_i}^2)-2\,m_t^2\,(s_{13}-m_b^2-m_{\chi^+_i}^2)\right]\nonumber\\ && \nonumber\\ &-&2\,\left(\CCL{3}\,\CCR{3} + 48\,\CCL{4}\,\CCR{4} \right) \,m_{\tilde{g}}\,m_{\chi^+_i}\,m_t\,m_b\;. 
\end{eqnarray} \newpage The squared amplitude for the processes $\tilde{g}\rightarrow \chi_i^0\,t\,\overline{t}$ and $\tilde{g}\rightarrow \chi_i^0\,b\,\overline{b}$ is given by \begin{eqnarray} \label{Mqq} \overline{\left|{\cal M}\right|^2} &=& \left(\left.\CNL{2}\right.^2+\left.\CNR{2}\right.^2\,\right)\; \left[(m_{\tilde{g}}^2+m_q^2-s_{13})\,(s_{13}-m_q^2-m_{\chi^0_i}^2)\right.\nonumber\\ &&\hspace{1cm}\left.+\,(m_{\tilde{g}}^2+m_q^2-s_{23})\,(s_{23}-m_q^2-m_{\chi^0_i}^2) + 2\,m_{\tilde{g}}\,m_{\chi^0_i}\,(m_{\tilde{g}}^2+m_{\chi^0_i}^2-s_{13}-s_{23})\right]\nonumber\\ && \nonumber\\ &+& \frac{1}{4}\,\left(\left.\CNL{3}\right.^2+\left.\CNR{3}\right.^2\,\right) \,(m_{\chi^0_i}^2+m_{\tilde{g}}^2-s_{13}-s_{23})(s_{13}+s_{23}-2\,m_q^2)\nonumber\\ && \nonumber\\ &+&4\,\left(\left.\CNL{4}\right.^2+\left.\CNR{4}\right.^2\,\right) \,\left[(m_{\chi^0_i}^2+m_{\tilde{g}}^2+4\,m_q^2-s_{13}-s_{23}) (s_{13}+s_{23}-2\,m_q^2-4\,m_{\chi^0_i}^2)\right.\nonumber\\ && \hspace{4cm}\left.+4\,(s_{13}-m_q^2-m_{\chi^0_i}^2)\, (s_{23}-m_q^2-m_{\chi^0_i}^2)+8\,m_q^2\,m_{\chi^0_i}^2\right]\nonumber\\ && \nonumber\\ &+&4\,\CNL{2}\,\CNR{2}\,m_q^2\, (s_{13}+s_{23}+4\,m_{\tilde{g}}\,m_{\chi^0_i}-2\,m_q^2)\nonumber\\ && \nonumber\\ &+&\left(\CNR{2}\,\CNR{3}-\CNL{2}\,\CNL{3}\right) \,m_q\,\left[m_{\chi^0_i}\,(m_{\tilde{g}}^2+m_q^2-s_{13})+m_{\tilde{g}}\,(m_{\chi^0_i}^2+m_q^2-s_{23})\right]\nonumber\\ && \nonumber\\ &+&\left(\CNR{2}\,\CNL{3}-\CNL{2}\,\CNR{3}\right) \,m_q\,\left[m_{\chi^0_i}\,(m_{\tilde{g}}^2+m_q^2-s_{23})+m_{\tilde{g}}\,(m_{\chi^0_i}^2+m_q^2-s_{13})\right]\nonumber\\ && \nonumber\\ &+& 12\,\left(\CNR{2}\,\CNR{4}-\CNL{2}\,\CNL{4}\right) \,m_q\,\left[m_{\tilde{g}}\,(m_{\chi^0_i}^2+m_q^2-s_{23})-m_{\chi^0_i}\,(m_{\tilde{g}}^2+m_q^2-s_{13})\right]\nonumber\\ && \nonumber\\ &+& 12\,\left(\CNR{2}\,\CNL{4}-\CNL{2}\,\CNR{4}\right) \,m_q\,\left[m_{\chi^0_i}\,(m_{\tilde{g}}^2+m_q^2-s_{13})-m_{\tilde{g}}\,(m_{\chi^0_i}^2+m_q^2-s_{23})\right]\nonumber\\ && \nonumber\\ &+&2\,\left(\CNL{3}\,\CNL{4}+\CNR{3}\,\CNR{4}\right) \,(m_{\tilde{g}}^2+m_{\chi^0_i}^2+2\,m_q^2-s_{13}-s_{23})\,(s_{23}-s_{13})\nonumber\\ && \nonumber\\ &-&2\,\left(\CNL{3}\,\CNR{3}+48\,\CNL{4}\,\CNR{4}\right) \,m_{\tilde{g}}\,m_{\chi^0_i}\,m_q^2\;\;\;\;\;\;\;\;\;\;\;\; (q=t,b). \end{eqnarray} We have checked that, inserting in eqs.~(\ref{Mtb})--(\ref{Mqq}) the high--energy (i.e.~non--resummed) expressions for the Wilson coefficients given in sects.~\ref{sec2} and \ref{sec3}, we reproduce the tree--level results of ref.~\cite{jim}. \newpage \paragraph{Two--body decays into neutralino and gluon:} the width for the radiative decay of the gluino, $\tilde{g}\rightarrow g\,\chi_i^0$, is \begin{equation} \Gamma_{\chi^0_i g} = \frac{(m_{\tilde{g}}^2-m_{\chi^0_i}^2)^3}{2\,\pi\,m_{\tilde{g}}^3\,\wt{m}^4}\, \left(\left.C^{\,{\chi}^0_i}_g\right._{\rm \!\!\!eff} \right)^{2}\,. \end{equation} The use of the effective coefficient $\left.C^{\,{\chi}^0_i}_g\right._{\rm \!\!\!eff}$ defined in eq.~(\ref{coeffg}) allows us to reproduce the complete one--loop result when the resummation is switched off. \paragraph{Decays into goldstino:} the decay width into goldstino and quarks of the first and second generation is: \begin{equation} \Gamma_{\widetilde{G}\overline{q}q} = \frac{m_{\tilde{g}}^5}{192\,\pi^3\,F^2}\;\left.C^{\,\wt{\scriptscriptstyle G}}_1\right.^{\,2}\,, \end{equation} where we have summed over all four light quark flavours. The gluino decay width into goldstino and third--generation quarks is as in eq.~(\ref{decay}), with $\widetilde{m}^4$ replaced by $F^2$.
The squared decay amplitude, which has to be integrated numerically on the $(s_{13},s_{23})$ plane, is given by \begin{eqnarray} \overline{\left|{\cal M}\right|^2} &=& \left(\left.C^{\,\wt{\scriptscriptstyle G}}_{q_{\sss L}}\right.^2+\left.C^{\,\wt{\scriptscriptstyle G}}_{q_{\sss R}}\right.^2\right)\; \left[(m_{\tilde{g}}^2+m_q^2-s_{13})\,(s_{13}-m_q^2) +\,(m_{\tilde{g}}^2+m_q^2-s_{23})\,(s_{23}-m_q^2)\right]\nonumber\\ && \nonumber\\ &+&4\,C^{\,\wt{\scriptscriptstyle G}}_{q_{\sss L}}\,C^{\,\wt{\scriptscriptstyle G}}_{q_{\sss R}}\,m_q^2\, (s_{13}+s_{23}-2\,m_q^2)\;\;\;\;\;\;\;\;\;\;\;\;\; (q=t,b)\,, \end{eqnarray} where \begin{equation} C^{\,\wt{\scriptscriptstyle G}}_{t_{\sss L}} = C^{\,\wt{\scriptscriptstyle G}}_{b_{\sss L}} = C^{\,\wt{\scriptscriptstyle G}}_2\,,\;\;\;\;\;\;\; C^{\,\wt{\scriptscriptstyle G}}_{t_{\sss R}} = C^{\,\wt{\scriptscriptstyle G}}_3\,,\;\;\;\;\;\;\; C^{\,\wt{\scriptscriptstyle G}}_{b_{\sss R}} = C^{\,\wt{\scriptscriptstyle G}}_4\,. \end{equation} Finally, the gluino decay width into gluon and goldstino is: \begin{equation} \Gamma_{\widetilde{G} g} = \frac{m_{\tilde{g}}^3}{2\,\pi\,F^2}\,\left.C^{\,\wt{\scriptscriptstyle G}}_5\right.^{\,2}\,\;. \end{equation} \section*{Acknowledgements} We thank M.~Toharia and J.~Wells for precious help in the comparison with the results of ref.~\cite{jim}. We also thank M.~Gorbahn, U.~Haisch and P.~Richardson for useful discussions. P.~S.~thanks the CERN Theory Division and INFN, Sezione di Torino for hospitality during the completion of this work. The work of P.~G.\ is supported in part by the EU grant MERG-CT-2004-511156 and by MIUR under contract 2004021808-009. \newpage
\section{Introduction} Recent discoveries of numerous exoplanets have been revealing the diversity of planetary systems \cite[e.g.][]{Marois2008}. However, the origin of such variety is still uncertain. Planets must have formed in protoplanetary disks, and it is essential to understand disk evolution in order to resolve why such differences exist. In studies of lower-mass young stars such as T Tauri stars, transitional disks have received attention from the viewpoint of planet formation. Transitional or pre-transitional disks are protoplanetary disks with an inner hole and/or gaps indicated by the weak near-infrared (NIR)/mid-infrared (MIR) excess in their spectral energy distribution (SED) \citep{Strom1989,Espaillat2007}. Since a primordial protoplanetary disk must have continuous distributions of dust/gas without gaps and the disk structure will be affected by planet formation, those disks with an inner hole and/or gaps must be in a transitional phase from a primordial to an evolved planetary-system stage. Disks around nearby Herbig Ae/Be stars have also been studied extensively in the context of disk evolution and planet formation, but in a different classification approach. Based on an analysis of SEDs, \cite{Meeus2001} classified Herbig Ae/Be stars into two groups: group I sources, which show both power-law and blackbody components up to far-infrared (FIR) wavelengths in their SEDs, and group II sources, whose SEDs can be well modeled with only a single power-law from MIR to FIR wavelengths. They suggested that group I has a flaring disk, while the disk around group II is geometrically flat. There are several scenarios proposed for an evolutionary link between group I and II sources. \cite{Dullemond2004b} showed that SEDs of group I sources can be interpreted as hydrostatic disks with flaring geometry, while group II sources are an evolved version of group I sources, which have undergone grain growth and grain settling to the mid-plane of the disk. Such a settled disk would become a self-shadowed disk by a puffed-up inner rim that accounts for the weak FIR emission \citep{Dullemond2004a}. \cite{Marinas2011} performed a MIR imaging survey of Herbig Ae/Be disks at 12 and 18\,$\mu$m. They found that group I disks show more extended emission than those of group II and suggested that the trend can be naturally understood in terms of the difference in the disk geometry. Recent high-spatial resolution observations at various wavelengths have revealed more complex structures such as a hole or gaps in disks. In particular, there is growing evidence for the presence of a hole and/or gaps toward group I sources, such as AB Aur \citep{Lin2006,Honda2010}, HD 142527 \citep{Fukagawa2006,Fujiwara2006,Verhoeff2011}, HD 135344B \citep{Brown2009,Grady2009}, HD 36112 \citep{Isella2010}, HD 169142 \citep{Grady2007,Honda2012}, Oph IRS 48 \citep{Geers2007}, HD 100546 \citep{Bouwman2003,Benisty2010}, HD 139614 \citep{Matter2014}, and HD 97048 \citep{Maaskant2013}. Recently, \cite{Honda2012} and \cite{Maaskant2013} proposed that group I sources possess a disk with a strongly depleted inner region (i.e. a transitional disk). Such a discontinuous structure is different from the one originally proposed for group I disks. Since little or no evidence for a hole and/or gaps has been reported toward group II disks, which seem to have a radially continuous structure, the previous interpretation of an evolutionary path from group I to group II needs to be reconsidered.
In this paper, we present results of an imaging survey of nearby (roughly within 200 pc) Herbig Ae/Be stars at 24.5\,$\mu$m using the 8.2-m Subaru Telescope and the 8.1-m Gemini Telescope. At 24.5\,$\mu$m, the point spread function (PSF) is relatively stable compared to those at shorter wavelengths because of a larger Fried length, which enables us to discuss small extended structures with high reliability. In addition, it allows us to investigate the cooler outer part of the disk at a dust temperature $\sim$100 K. Early examples of our imaging survey have been published \citep{Fujiwara2006,Honda2010,Honda2012,Maaskant2013}. This paper gives a summary of the survey. \section{Observations and Data Reduction} \subsection{Subaru/COMICS data} We made imaging observations of Herbig Ae/Be stars using COMICS \citep[Cooled Mid-Infrared Camera and Spectrometer;][]{Kataza2000,Okamoto2003,Sako2003} on the 8.2-m Subaru Telescope with the Q24.5 filter ($\lambda_c$=24.5\,$\mu$m, $\Delta\lambda$=0.8\,$\mu$m). We also observed part of the targets with the Q18.8 filter ($\lambda_c$=18.8\,$\mu$m, $\Delta\lambda$=0.8\,$\mu$m). The chopping throw was 10$''$ and the chopping frequency was 0.45 Hz. The pixel scale is 0.130$''$/pix. Immediately before and/or after the observations of the target, we made observations of PSF reference stars. A summary of the observations is given in Table \ref{obssummary}. For the data reduction, we employed a shift-and-add method to rectify the blurring caused by tracking and/or registration errors. The imaging data consist of 0.983-sec on-source integration frames of co-added exposures at each beam position. First, the fluctuations of the thermal background and the dark current signals were removed by differencing the chopped pair frames. The object is bright enough to be recognized even in 0.983-sec integration chop-subtracted frames. We estimated the peak position of the source by Gaussian fitting without difficulty. Then we shifted the frames so as to align the peak positions and summed the frames. We excluded the frames whose Gaussian full-widths at half-maximum (FWHMs) deviated by more than 1 $\sigma$ from the mean value (a schematic implementation of these steps is sketched at the end of this section). \subsection{Gemini/T-ReCS data} Observations were performed using T-ReCS \citep{Telesco1998} on the 8.1-m Gemini South telescope. T-ReCS uses a Raytheon 320 $\times$ 240 pixel Si:As IBC array, with a pixel scale of 0.08633$\pm$0.00013$''$ pixel$^{-1}$, providing a field of view (FOV) of 27.6$''$ $\times$ 20.7$''$. The Q$_{\mbox{b}}$ ($\lambda_{c}$ = 24.56 $\mu$m, $\Delta\lambda$ = 0.94 $\mu$m, 50\% cut-on/off) filter was used in the present observations. A summary of the observations is shown also in Table \ref{obssummary}. Observations were made using a standard chop-nod technique to remove the time-variable sky background and telescope thermal emission, and to reduce the effect of 1/{\it f} noise from the array electronics. In all observations the chop-throw was 15$''$, the chop-angle 45$^{\circ}$~E of N, and the telescope was nodded approximately every 40 s. Standard stars were observed immediately before and/or after each object observation using the same instrumental configuration. The data were reduced using the Gemini IRAF package. The difference for each chopped pair was calculated, and each pair of nod sets was then differenced and combined to create a single image. All the nodding data were examined for a high background level due either to the presence of terrestrial clouds or to temporarily high precipitable water vapor.
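For concreteness, the frame-level steps of the COMICS reduction described above (Gaussian centroiding, $1\,\sigma$ FWHM clipping, and stacking) can be summarized in the short sketch below (Python with NumPy/SciPy). It is schematic only: integer-pixel shifts stand in for the actual sub-pixel registration, and all function and variable names are illustrative, not those of the actual pipeline. \begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma, offset):
    # Circular 2-D Gaussian used to locate the source and measure its FWHM
    x, y = xy
    return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))
            + offset).ravel()

def fit_frame(frame):
    # Fit one chop-subtracted frame; return (x0, y0, FWHM)
    ny, nx = frame.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    guess = [frame.max(), nx / 2, ny / 2, 3.0, np.median(frame)]
    popt, _ = curve_fit(gauss2d, (x, y), frame.ravel(), p0=guess)
    return popt[1], popt[2], 2.3548 * popt[3]   # FWHM = 2 sqrt(2 ln 2) sigma

def shift_and_add(frames):
    # Align frames on the fitted peak, reject FWHM outliers, and stack
    fits = [fit_frame(f) for f in frames]
    fwhm = np.array([f[2] for f in fits])
    keep = np.abs(fwhm - fwhm.mean()) <= fwhm.std()   # 1 sigma clipping
    stack = np.zeros_like(frames[0], dtype=float)
    for f, (x0, y0, _), ok in zip(frames, fits, keep):
        if ok:
            dy = int(round(f.shape[0] / 2 - y0))
            dx = int(round(f.shape[1] / 2 - x0))
            stack += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    return stack / keep.sum()
\end{verbatim}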
No data were found to be severely affected by these background problems. Although there is a slight difference in the characteristics of the filters used by COMICS and T-ReCS, we will refer to Q24.5 and Q$_b$ as 25\,$\mu$m throughout this paper. \section{Results} Since the observed images show a circularly symmetric shape and the azimuthal variation is not significant, we focus on the radial profile of the targets and do not discuss azimuthal structures in this study. We first made azimuthally averaged radial profiles of the targets and relevant PSF stars at 25\,$\mu$m as shown in Figure \ref{radiprofigure}. We then measured their FWHMs from the profiles directly; we call these `direct FWHMs' ($\Phi_{d,target}$ and $\Phi_{d,PSF}$ for the targets and PSFs, respectively). These FWHMs are the real extension of the sources convolved with the instrumental FWHM. These measurements are summarized in Table \ref{Table2} together with those of the corresponding PSF stars. The radial brightness profiles of most targets are comparable to, or slightly wider than, those of the PSF stars. As a quantitative measure of the intrinsic size of the MIR emission from the disk, we employ a quadrature subtraction of the FWHM of the PSF star from that of the target following \cite{Marinas2011}. We call this `intrinsic FWHM' ($\Phi_{i}$), which is derived from $$\Phi_{i} = \sqrt{\Phi_{d,target}^2-\Phi_{d,PSF}^2}.$$ Although this method provides a correct size only when the intrinsic radial profiles of both the target and the PSF star are Gaussian, we adopt this method to semi-quantitatively discuss the extension of the sources with the same measure for the sake of simplicity. Eight sources were observed by both COMICS and T-ReCS, and the results are consistent with each other within the measurement errors. To be conservative, we adopt the smaller and more stringent value of the intrinsic FWHM in these cases. The derived values are summarized in Table \ref{Table3}. \section{Discussion} \subsection{Trends on the extended emission} To investigate possible trends of the 25\,$\mu$m extension of the Herbig Ae/Be stars with other parameters, we collected the distance, stellar luminosity, classification of the group proposed by \cite{Meeus2001}, and the MIR spectral index given by the flux density ratio at 13.5 and 30\,$\mu$m \citep{Acke2004,Acke2006,Acke2010,Meeus2012}. The flux densities at 13.5 and 30\,$\mu$m reflect the underlying continuum shape and are chosen to avoid MIR dust features such as silicates and polycyclic aromatic hydrocarbons (PAHs). For the objects whose spectral index is not available, we calculated it ourselves using the {\it ISO} or {\it Spitzer} archive spectra. We also converted the diameter in arcseconds to AU using the distance to the objects given in Table \ref{Table3}. We added AB Aur, which was part of our survey but whose results were published earlier \citep{Honda2010}, to Table \ref{Table3}. We notice that group I sources tend to show more extended MIR emission than group II sources. Nine out of 11 group I sources (82\%) are resolved with a signal-to-noise ratio (S/N) larger than three, whereas only 4 out of 11 group II sources (36\%) are. This trend is similar to the results of the study by \cite{Marinas2011} at 12 and 18\,$\mu$m. The present results confirm the trend also at 25\,$\mu$m. In Figure \ref{L-FWHMplot}, we plot the intrinsic FWHM ($\Phi_i$) against the stellar luminosity ($L_*$).
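For reference, the intrinsic-size estimate above and a propagated uncertainty can be written compactly; in the minimal sketch below (Python), the measured values are placeholders, and the error formula is our own first-order quadrature propagation rather than a prescription of \cite{Marinas2011}. \begin{verbatim}
import numpy as np

def intrinsic_fwhm(phi_t, dphi_t, phi_p, dphi_p):
    # Phi_i = sqrt(Phi_t^2 - Phi_p^2), with first-order error propagation
    phi_i = np.sqrt(phi_t**2 - phi_p**2)
    dphi_i = np.sqrt((phi_t * dphi_t)**2 + (phi_p * dphi_p)**2) / phi_i
    return phi_i, dphi_i

phi_i, dphi_i = intrinsic_fwhm(0.80, 0.02, 0.70, 0.02)  # [arcsec], placeholders
print(phi_i, dphi_i)   # ~0.39 +/- 0.05 arcsec, i.e. ~58 AU at 150 pc
\end{verbatim}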
One may expect that luminous sources show more extended emission; however, we could not find a clear trend in the plot. Some sources in our sample are not resolved even though they are luminous ($L_* > 40 L_\odot$). On the other hand, when we plot the intrinsic FWHM against the MIR spectral index (Figure \ref{MIRcolor-FWHMplot}), we find that significantly extended (FWHM $>$ 40 AU) sources all belong to `red' group I. Such objects exhibit MIR spectral indices [30/13.5] larger than 4.2, while moderately extended or unresolved sources all show indices below that value, even amongst group I. It is also interesting to note that the MIR spectral indices of well-resolved MIR disk sources such as HD142527 \citep{Fujiwara2006}, Oph IRS48 \citep{Geers2007}, and HD141569 \citep{Fisher2000,Marsh2002} are 5, 10.4, and 6.8, respectively, in accordance with the present finding. We therefore suggest that the redder Herbig Ae/Be stars with a MIR spectral index larger than $\sim$4.2 exhibit more extended MIR emission. In general, group I sources tend to show MIR continuum emission redder than those in group II. Thus the present finding is consistent with the trend that group I sources are likely to exhibit more extended emission than group II sources. \subsection{Origin of extended emission of MIR red source} The origin of the {\it Q-}band (16--25\,$\mu$m) extended emission of the group I Herbig Ae/Be stars or red MIR sources has so far been discussed by several groups. \cite{Honda2010,Honda2012} and \cite{Maaskant2013} demonstrate the difficulty in explaining the extended {\it Q-}band emission of group I sources with a continuous disk. The {\it Q-}band emission from a continuous disk mostly originates from dust grains located in the inner $\leq$ 10 AU, which corresponds to $\sim$0.07$''$, if located at a typical distance of $\sim$150 pc to our targets. Considering the PSF size ($\sim$0.7$''$) at 25\,$\mu$m, this is too small to be resolved with 8 meter class telescopes. This situation may apply to most unresolved targets in our sample. In contrast, we have definitely resolved many group I sources, indicating that the continuous disk interpretation is not valid for these objects. The shape of SEDs for group I sources can be interpreted as having a MIR dip because of the rising FIR emission. The dip indicates that hot/warm dust grains responsible for the MIR radiation are depleted in the inner region of the protoplanetary disk. An inner hole and/or gaps in the disk can naturally explain both the MIR dip in the SED and the extended emission in the {\it Q-}band. The presence of an inner hole, for example, causes the inner edge of the disk to be directly illuminated by the central star. This edge, being relatively further away because of the inner hole, produces the red MIR index (i.e, a large [30/13.5] ratio) and the strong FIR radiation as reflected in the SED, as well as the extended {\it Q-}band emission. In fact, significantly extended sources (intrinsic FWHM $>$ 40 AU) in our sample exhibit MIR spectral indices [30/13.5] larger than 4.2, indicating the dust temperature of the inner edge of the outer disk to be $\sim$155 K, assuming a blackbody. If the luminosity of the central star is 30 $L_\odot$ (a typical value for well-extended group I sources), the distance to the 155-K blackbody is about 18 AU, indicating an inner hole diameter of 36 AU, which corresponds to approximately 0.24$''$ at a typical distance of 150 pc.
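These estimates are easily verified numerically; in the sketch below (Python), a pure blackbody and the equilibrium scaling $T\simeq 278\,{\rm K}\,(L/L_\odot)^{1/4}(r/{\rm AU})^{-1/2}$ are the simplifying assumptions, and the script reproduces the [30/13.5] indices and the hole size quoted above. \begin{verbatim}
import numpy as np

H_OVER_K, C = 4.799e-11, 2.998e8     # h/k_B [s K], speed of light [m/s]

def mir_index(T):
    # Blackbody flux-density ratio F_nu(30 um) / F_nu(13.5 um)
    bnu = lambda lam: (C / lam)**3 / np.expm1(H_OVER_K * (C / lam) / T)
    return bnu(30e-6) / bnu(13.5e-6)

print(mir_index(195.0), mir_index(155.0))   # ~2.0 and ~4.2

L_star, T_edge = 30.0, 155.0
r_edge = (278.0 / T_edge)**2 * np.sqrt(L_star)   # ~18 AU
print(r_edge, 2.0 * r_edge / 150.0)              # hole diameter ~0.24" at 150 pc
\end{verbatim}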
An emission region of this size, when convolved with the PSF of the telescope, can be (marginally) resolved with 8-meter class telescopes in the {\it Q-}band. We suggest that this applies to our resolved sample. This interpretation is supported by an increasing number of detections of inner holes and gaps in group I protoplanetary disks by high-spatial resolution observations in the NIR and at radio wavelengths as described in section 1. On the other hand, little evidence has been reported for inner holes and/or gaps towards group II sources, which is also consistent with the present finding of unresolved or only marginally extended emission. Continuous disks seem to be rather difficult to resolve in the {\it Q-}band with 8-m class telescopes. In general, group I sources tend to show redder MIR continuum emission than group II sources. In our sample, most group I sources show a MIR spectral index higher than 2, while that of most group II objects is below 2. This is consistent with a recent classification criterion put forward by \cite{Khalefinejad2014} that the MIR index [30/13.5] of group I sources is greater than 2.1. A blackbody at T $\sim$ 195 K would yield a MIR index of 2. Thus the dust temperature of the inner edge of the outer disk around group I objects must be below 195 K, which puts the inner edge at some distance from the central star, producing extended Q-band emission. Both the high MIR index and the extended {\it Q-}band emission can naturally be accounted for by the presence of the inner edge of the outer disk located at some distance. As mentioned earlier, there is now growing evidence that an inner hole and/or gaps exist in the protoplanetary disks of group I sources. Our finding above (the general trend between the extended Q-band emission and the MIR color for group I objects) also appears to reconfirm this view. \subsection{Comparison with models} To show the effect of a gap in the disk on the Q-band image size, we constructed disk models and derived the FWHM of the disk image at 25\,$\mu$m for comparison with the observations. We follow the model used in \cite{Maaskant2013}, who employed the radiative transfer tool MCMax \citep{Min2009}. Parameters used in this model are summarized in Table \ref{Table4}. We focus on the model with these typical Herbig Ae parameters as an example only, and do not attempt to construct models that best reproduce individual imaging results. First, we constructed a continuous disk model (no gap) whose SED has a rising FIR flux density similar to the group I objects. The model image at 25\,$\mu$m is displayed in the top panel of Fig.\ref{ModelImage}, and the SED in Fig.\ref{SED-Model}. Then we introduced a radial gap in this model. The gap inner radius is fixed at 1 AU, and the gap outer radius was increased from 10 to 50 AU in 10-AU steps. Images and SEDs for these models are shown in Fig.\ref{ModelImage} and Fig.\ref{SED-Model}, respectively. The morphology of the gapped disk images is dominated by a ring-like emission arising from the inner edge of the outer disk (see top panels of Fig.\ref{ModelImage}). As the size of the gap is increased, more thermal radiation from the increasingly larger gapped area is removed from the SED, resulting in a weakened MIR emission and an enhanced FIR flux (see Fig.\ref{SED-Model}). This in turn is reflected in an increasingly red [30/13.5] colour as the gap widens.
The disk images were subsequently convolved with an Airy pattern with a FWHM = 0.65$''$ (assumed to represent the telescope beam; see bottom panels of Fig.\ref{ModelImage}). We measured $\Phi_d$ and derived $\Phi_i$ from these final images in the same manner as described in Section 3 (see Fig.\ref{Gap-FWHMplot-Model}). As can be seen, the models with outer gap radii smaller than 20 AU are comparable with the `nogap' model (equivalent within uncertainties), implying that they would either be unresolved or only marginally resolved at best. On the other hand, when the outer gap radius is $\ga 30$~AU, they would easily be resolved at 25\,$\mu$m. Thus our 25\,$\mu$m imaging survey is sensitive to the presence of a large ($\ga 30$~AU) gap. Fig.\ref{MIRcolor-FWHMplot-Model.eps} is the same as Fig.\ref{MIRcolor-FWHMplot}, except that we added the model points. In our models, [30/13.5] $\ga$ 4.2 is achieved when the gap outer radius becomes $\ga 25$~AU, and such disks can be well resolved in our observations. This trend is almost consistent with our observational finding that the well-extended Herbig Ae/Be sources show a MIR index larger than 4.2. Again, it demonstrates that our 25\,$\mu$m imaging survey is sensitive to disks with large gaps. \subsection{Group I sources as (pre-)transitional disks} Since the presence of an inner hole and/or gaps has been shown to be a common characteristic for group I Herbig Ae/Be stars \citep[e.g.][]{Lin2006,Fukagawa2006,Fujiwara2006,Verhoeff2011,Brown2009,Grady2007,Grady2009,Isella2010,Honda2010,Honda2012,Geers2007,Bouwman2003,Benisty2010,Matter2014,Maaskant2013}, \cite{Honda2012} and \cite{Maaskant2013} suggest that most group I sources can be classified as (pre-)transitional disks. Transitional or pre-transitional disks, which were originally proposed for low-mass young stars such as T-Tauri stars, are protoplanetary disks with an inner hole and/or gaps indicated by the weak NIR excess in the SED \citep{Strom1989,Espaillat2007}. Because the primordial disk is thought to have a continuous distribution of dust without gaps and because planet formation could produce a hole and/or gaps in the disk, those disks with a (large) inner hole and/or gaps must be in a transitional phase from a primordial to an evolved stage. On the other hand, an evolutionary scenario for Herbig Ae/Be stars is still a matter of debate. \cite{Meeus2001} proposed that group I disks are flared while those of group II are flat, based on the analysis of the SED. A possible evolutionary scenario was suggested in which a group I flaring disk evolves into a group II flat disk by grain growth and sedimentation/settling of grains to the disk mid-plane. However, the present study indicates that the group I disk is a (pre-)transitional disk with an inner cleared region and/or gaps, while the group II disk is a continuous disk. These observational pieces of information imply that evolution from group I into group II is unlikely. As \cite{Meeus2001} pointed out, there is no significant difference in age between groups I and II; it is thus more likely that both groups evolved differently from a primordial continuous flaring disk, a common ancestor, as discussed in \cite{Maaskant2013}. This scenario is quite similar to the T Tauri disk evolutionary scenario proposed by \cite{Currie2010}. He presented two main pathways for the evolution of T Tauri disks: those that form an inner hole/gap and others that deplete more homologously.
The present study suggests a similarity between the evolutionary scenarios of T Tauri and Herbig Ae/Be disks. \acknowledgments We are grateful to all of the staff members of the Subaru Telescope. We also thank Dr. Hitomi Kobayashi and Dr. Yuji Ikeda at Kyoto-Nijikoubou Co., Ltd. This research was partially supported by KAKENHI (Grant-in-Aid for Young Scientists B: 21740141) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.
\section{Introduction} Superstring theory is a promising candidate for the unified theory. Lacking a dynamical principle that determines the true string vacuum, many efforts have been devoted to constructing semi-realistic string models directly in four dimensions. The physical content turns out to be rich enough that many semi-realistic models have been found. It is important to extract model-independent properties characteristic of string models that are not shared by the usual field-theoretic approach to unified theories. The original surprise in superstring theory was the anomaly cancellation found in ten dimensions by Green and Schwarz\cite{GS}. As a result of the global consistency condition, modular invariance, of the world-sheet theory, we have many examples of four-dimensional string models which show a miraculous pattern of gauge anomaly cancellation. In particular, we often have string models possessing so-called ``anomalous'' $U(1)$ gauge symmetry. This $U(1)$ gauge symmetry has non-vanishing contributions to anomalies from chiral fermions. In fact, these anomalies are canceled via the four-dimensional counterpart of the Green-Schwarz mechanism, {\it i}.{\it e}., by assigning a nonlinear transformation to the axion-like field. Since such an anomaly cancellation mechanism of the anomalous $U(1)$\ is intimately related to the consistency of string theory, we can expect characteristic and interesting consequences from this mechanism. [A discussion related to this point can be found in Ref.~\cite{moduli}, where an anomalous $U(1)$\ is used to derive a constraint on non-perturbative effects in string theory.] Many string models are known which possess the anomalous $U(1)$\ gauge symmetry. These, however, have been analyzed in a model-dependent way, and no criterion for the appearance of the anomalous $U(1)$\ gauge symmetry has been completely clarified yet. Which class of string models gives rise to the anomalous $U(1)$? At present, we need a detailed analysis of the massless spectrum in each model and have to struggle with $U(1)$ charges to see whether a string model contains an anomalous $U(1)$. This situation is quite unsatisfactory not only for theoretical purposes but also for the phenomenological applications that we mention shortly. Therefore a systematic analysis is desirable. To carry it out is the main motivation of the present work. The anomalous $U(1)$\ gauge symmetry with the Green-Schwarz anomaly cancellation mechanism is very interesting by itself and has actually received renewed interest. The presence of the mixed $U(1)$-gravitational anomaly in particular implies that its $U(1)$ generator $Q$ is not traceless, ${\rm Tr}\, Q\neq 0$, leading to the generation of a Fayet-Iliopoulos term\cite{FI}. This in turn breaks\footnote{ Otherwise, the supersymmetry itself breaks down. This possibility is potentially interesting if we consider the ``dual'' theory in which the supersymmetry breaking would show up in a non-perturbative manner. } the anomalous $U(1)$\ automatically\cite{DSW}. The breaking scale is calculated\cite{DSWii} to be just below the string scale: \begin{equation} M_{{\rm A}}^2 = \frac{{\rm Tr}\, Q}{192\pi^2}M_{{\rm st}}^2 \ . \end{equation} Unlike the conventional scheme of grand unifications, we need not deal with a complicated Higgs structure. Many interesting applications of the anomalous $U(1)$\ come from this automatic breaking. There are several contexts in which applications of the anomalous $U(1)$\ are addressed.
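To get a feeling for the numbers, the formula above gives $M_{{\rm A}}/M_{{\rm st}}=\sqrt{{\rm Tr}\,Q/192\pi^2}$; a one-line evaluation (Python; the value of ${\rm Tr}\,Q$ below is an arbitrary illustrative choice, not a prediction of any particular model) confirms that the breaking scale sits only moderately below the string scale. \begin{verbatim}
import math
tr_q = 72.0                                    # illustrative value only
print(math.sqrt(tr_q / (192.0 * math.pi**2)))  # M_A / M_st ~ 0.19
\end{verbatim}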
The anomalous $U(1)$\ has most widely been discussed in constructing a string model with a realistic gauge group and matter content; the anomalous $U(1)$\ breaking sometimes triggers further breaking of gauge symmetries and therefore provides a way of reducing the rank of the gauge group\cite{degenerate,reduce,U1 charge}. It was also pointed out\cite{IB} that we have an interesting way of calculating the weak mixing angle without any grand unification symmetries, since the anomaly cancellation condition relates the normalization of gauge couplings to anomaly coefficients. Also, as several authors observed, the breaking scale $M_{{\rm A}}$ is quite suggestive as the origin of various hierarchical structures in particle physics. Utilizing the ratio $M_{{\rm A}}/M_{{\rm st}}$, higher-dimensional couplings could explain hierarchical structures in the fermion masses and mixing angles. Actually there are several proposals\cite{texture:non-anomalous,texture:IR,texture:others} for realistic fermion mass matrices based on a family $U(1)$ symmetry, which is often anomalous, while stringy selection rules were used in Ref.~\cite{texture2}. {}Furthermore, generalizing $U(1)$ $D$-term contributions in supergravity models\cite{Di,KMY}, it was argued\cite{anoma,KK,mass:others} that the anomalous $U(1)$\ gives a new source of non-universality of soft-breaking scalar masses and of the cosmological constant through the Fayet-Iliopoulos $D$-term, both of which should be taken into account if we are to have universal scalar masses and a vanishing cosmological constant. A remarkable possibility has recently been pointed out on the basis of these developments: An anomalous $U(1)$\ can be used to construct a model with supersymmetry breaking\cite{SUSY-breaking:mech,SUSY-breaking:model}. Also in a cosmological context, some authors have argued\cite{cosmology,inflation} that it can play a role in constructing a model of inflation. A possible solution to the doublet-triplet splitting problem has also been suggested\cite{splitting}. Now, it is desirable to take a systematic approach to string models with an anomalous $U(1)$\ in order to further explore these possibilities. The first issue to be discussed is to clarify the condition under which we have an anomalous $U(1)$\ gauge symmetry in string models. Then the second issue is the pattern of the anomalous $U(1)$\ breaking: to characterize the flat directions along which the anomalous $U(1)$\ breaking occurs and to see what kind of consequences we have after such breaking. In this paper, we shall mainly discuss the first issue by examining orbifold string models. We find that the appearance of an anomalous $U(1)$\ is constrained for several reasons. We show in particular that an orbifold model possesses an anomalous $U(1)$\ only if it contains massless twisted matter which leads to the mixing between the visible and hidden sectors. We also give several examples of discrete symmetries that forbid the appearance of an anomalous $U(1)$. Moreover, we argue that the analysis of orbifold models in a Wilson line background can essentially be reduced to the analysis in the absence of a Wilson line. We then give a general procedure for classifying orbifold string models which possess an anomalous $U(1)$. Concerning the second issue, we give a brief discussion of discrete symmetries left unbroken after the anomalous $U(1)$\ breaking. These discrete symmetries would be relevant to phenomenological problems such as the suppression of dangerous couplings.
We also suggest a possible relation of the anomalous $U(1)$\ to discrete $R$ symmetries. Our interest in the discrete $R$ symmetry originates not only from phenomenological but also from string-theoretical aspects. It was argued\cite{DS,naturalness} that a certain discrete $R$ symmetry can protect (0,2) string vacua against the instability\cite{DSWW} due to world-sheet instanton effects. In particular, Dine and Seiberg constructed an example of such an $R$ symmetry in a (2,2) string vacuum (the standard embedding of the $Z_3$ orbifold) which guarantees the existence of the flat directions along which the (2,2) vacuum can be deformed into (0,2) ones. Other aspects concerning the stability of (0,2) vacua have recently been discussed in Refs.~\cite{Distler-Greene,Distler-Kachru,Silverstein-Witten}. This issue is beyond the scope of this paper. This paper is organized as follows. In the next section, we give a review of the orbifold construction. In particular we recall the expressions for the mass formula and the generalized GSO projection operator which determine the massless spectrum. We also recall the formulas for the $U(1)$ level and charges. In section three, after recapitulating the universal nature of $U(1)$ anomalies in string theories, we discuss the conditions for a $U(1)$ to be anomalous. Our procedure for classifying and identifying the anomalous $U(1)$\ is presented in section four. We illustrate it by working out $Z_3$ orbifold models. [A classification of $Z_4$ orbifold models is given in Appendix~B.] In section five, we briefly discuss the issues of the flat directions and the anomalous $U(1)$\ breaking. The final section is devoted to concluding remarks. Some concrete examples of the orbifold models with the anomalous $U(1)$\ are given in the appendices. \section{Orbifold Construction} In this section, we give a review of the orbifold construction of four-dimensional string models\cite{orbifold:original}. {}For details, see Refs.~\cite{Orbi1,Orbi2,Orbi3}. The Hilbert space of heterotic string theory is a tensor product of the right-moving sector, which is responsible for space-time supersymmetry, and the left-moving sector, which gives rise to gauge symmetries. The right-moving sector is a superconformal field theory of the $(4+6)$-dimensional string coordinates $X^{\mu=0,1,2,3}$ and $X^{i=1,2,3}$ (and their complex conjugates $X^{\swb{i}}$) as well as their world-sheet superpartners, NSR fermions $(\psi^i,\psi^{\swb{i}};\psi^{\mu})$. The latter are conveniently bosonized, $\psi^t=-ie^{iH^t}$, into $H^t$ ($t=1,2,3$ and $t=4,5$ correspond to six- and four-dimensional parts, respectively) which are related to the Cartan part of the $SO(9,1)$ current algebra. The momenta $p^t$ of $H^t$ span an $SO(9,1)$ weight lattice $\Gamma_{SO(9,1)}$. The Neveu-Schwarz sector (space-time boson) corresponds to vectorial weights $p^t_v$ with integer entries and the Ramond sector (space-time fermion) to spinorial weights $p^t_s$ with half-integer entries. The conformal field theory of the left-moving sector includes four-dimensional string coordinates $\wb{X}^{\mu}$, and six-dimensional parts $\wb{X}^i$ and $\wb{X}^{\swb{i}}$. Adopting the bosonic formulation for the gauge sector, we also have anti-chiral coordinates parametrizing the maximal torus of $E_8\times E_8'$, $\wb{X}^{\hat{I}}=(\wb{X}^I,\,\wb{X}^{I'})$, whose momenta $P^{\hat{I}}=(P^I,\,P^{I'})$ span an $E_8\times E_8'$ root lattice $\Gamma_{E_8\times E_8'}$ ($I,I'=1,\cdots,8$).
The root vectors of $E_8$ satisfy $(P^I)^2=2$ and are represented as \begin{eqnarray} P^I&=&\left(\underline{\pm 1, \pm 1, 0,0,0,0,0,0}\right) \equiv\left(\underline{\pm 1, \pm 1, 0^6}\right)\ , \label{vector root} \\ P^I&=&\left(\pm\1{2}, \pm\1{2}, \pm\1{2}, \pm\1{2}, \pm\1{2}, \pm\1{2}, \pm\1{2}, \pm\1{2}\right) \label{spinor root} \end{eqnarray} with an even number of minus signs in the latter case. Here and hereafter, the underline denotes permutations, and repeated entries are indicated by superscripts. In a toroidal compactification to four dimensions, the resultant space-time supersymmetry charges form a four-dimensional representation of the $SU(4)\simeq SO(6)$ subgroup of $SO(9,1)$. In the orbifold construction, we divide the six-dimensional torus $T^6$ by its discrete isometry, called the point group $P$, in order to have precisely $N=1$ supersymmetry. We choose the $SO(9,1)$ weight $p_Q^t$ of the surviving supercharge to be $p_Q^t=\pm (1,1,1\,;\,\pm 1,\pm 1)/2$ (with even $-$'s). We concentrate on symmetric $Z_N$ orbifolds. Then the six-dimensional part of the string coordinates combines into $X^i(z,\wb{z})=X^i(z)+\wb{X}^i(\wb{z})$, which become the coordinates of a torus $T^6={\bf R}^6/\Lambda$ through dividing by a root lattice $\Lambda$ of a semi-simple Lie algebra. The orbifold modding is realized through a further division of this torus by the point group $P=Z_N$, which is generated by a twist $\theta$, $\theta^N=1$. These operations can equivalently be expressed through division by a space group $S=(\Lambda,P)$; $T^6/P\simeq {\bf R}^6/S$. In order to achieve modular invariance, we must include twisted sectors. In $Z_N$ orbifold models, we have the $k$-th twisted sector $T_k$ ($k=1,2,\cdots, N-1$) as well as the untwisted sector $U$. In the $T_k$-sector, the six-dimensional coordinates satisfy \begin{equation} X^i(e^{2\pi i}z,e^{-2\pi i}\wb{z})=e^{2\pi ikv_i}X^i(z,\wb{z})+e^i \ , \label{boundary condition:X} \end{equation} where $e^{2\pi iv_i}$ is the eigenvalue of the twist $\theta$ on $X^i$ and the vector $e^i$ belongs to the defining lattice $\Lambda$ of the torus $T^6$. The right-moving world-sheet supersymmetry then requires \begin{equation} H^i(e^{2\pi i}z)=H^i(z)+2\pi k v^i \ . \label{boundary condition:H} \end{equation} The shift vector $v^t=(v^i;\,0,0)$ in $\Gamma_{SO(9,1)}$ should be orthogonal to the weight $p^t_Q$ of the surviving supercharge. {}For the Abelian embedding of the twist into the gauge group, we associate a shift $V^{\hat{I}}$ to the rotation $\theta$ as well as a Wilson line $a^{\hat{I}}$ to the translation $e^i$ of the space group. Then the boundary condition of the sixteen-dimensional gauge coordinates is \begin{equation} \wb{X}^{\hat{I}}(e^{-2\pi i}\wb{z})= \wb{X}^{\hat{I}}(\wb{z})+2\pi kV^{\hat{I}} +2\pi m^ia_i^{\hat{I}} \ , \label{boundary condition:gauge} \end{equation} where the integer $m^i$ labels fixed points. Modular invariance restricts the possible choices of shift vectors in $\Gamma_{E_8\times E_8'}$ and Wilson lines according to \begin{equation} N\left[ \sum_{i=1}^3\left(v^i\right)^2- \sum_{{\hat I}=1}^{16}\left(V^{\hat I}+m^ia_i^{\hat I}\right)^2\right] =~\hbox{ even integer } \ . \label{modular: shift} \end{equation} All the possible shifts $v^t=(v^i\,;\,0,0)$ and $V^{\hat{I}}=(V^I\,;\,V^{I'})$ are known for each $Z_N$ orbifold construction\cite{shift}. The simplest choice is the standard embedding; $V^{\hat{I}}=(v^i,0^5\,;\,0^8)$, $a^{\hat{I}}=0$.
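Both the $E_8$ root system and the condition (\ref{modular: shift}) lend themselves to a direct numerical check. The minimal sketch below (Python; the standard $Z_3$ twist $v=(1,1,-2)/3$ is used as the example, with Wilson lines switched off) enumerates the 240 roots and verifies modular invariance of the standard embedding. \begin{verbatim}
import itertools
import numpy as np

def e8_roots():
    # 112 roots (+-1, +-1, 0^6) in all positions, plus 128 spinorial roots
    # (+-1/2)^8 with an even number of minus signs: 240 in total
    roots = []
    for i, j in itertools.combinations(range(8), 2):
        for si, sj in itertools.product((1.0, -1.0), repeat=2):
            r = np.zeros(8)
            r[i], r[j] = si, sj
            roots.append(r)
    for signs in itertools.product((0.5, -0.5), repeat=8):
        if signs.count(-0.5) % 2 == 0:
            roots.append(np.array(signs))
    return np.array(roots)

R = e8_roots()
assert len(R) == 240 and np.allclose((R**2).sum(axis=1), 2.0)

v = np.array([1.0, 1.0, -2.0]) / 3.0         # standard Z3 twist
V = np.concatenate([v, np.zeros(13)])        # standard embedding (v^i, 0^5; 0^8)
check = 3.0 * ((v**2).sum() - (V**2).sum())  # eq. (modular: shift) with a = 0
print(round(check), "-> even, hence modular invariant")
\end{verbatim}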
We also refer to the following type of shift as {\it a quasi-standard embedding} \begin{equation} V^{\hat{I}}=(v^i,0^5\,;\,V^{I'}) \ . \label{quasi} \end{equation} On-shell string states are created by vertex operators acting on the vacua of (super)conformal field theories on the string world sheet, $V_{{\rm R}}(z)\ket{0}_{{\rm R}}\otimes V_{{\rm L}}(\wb{z})\ket{0}_{{\rm L}}$. The internal part of the vertex operators takes the form \begin{equation} V_{{\rm R}}\sim e^{ip_{{\rm R}}^iX^i}e^{ip^tH^t} \ , \qquad V_{{\rm L}}\sim e^{ip_{{\rm L}}^i\swb{X}^i}e^{iP^{\hat I}\swb{X}^{\hat I}} \ . \end{equation} In twisted sectors, we should drop the momenta $p_{{\rm R},{\rm L}}^i$ and replace the $SO(9,1)$ and $E_8\times E_8'$ momenta with shifted ones defined by \begin{equation} \wt{p}^t\equiv p^t+kv^t \ , \qquad \wt{P}^{\hat{I}}\equiv P^{\hat{I}}+kV^{\hat{I}}+m^ia_i^{\hat{I}} \ . \label{shifted mom} \end{equation} Each twisted sector $T_k$ has several subsectors corresponding to fixed points labeled by the twist $\theta^k$ and $e^i$ (by the conjugacy class of the space group, to be precise). The vertex operator for such a twisted state includes the twist field \cite{orbifold-CFT}, $\sigma(k,e)$, which creates the twisted vacuum $\ket{k,e}=\sigma(k,e)\ket{0}$ and expresses the twisted boundary condition (\ref{boundary condition:X}) of the internal string coordinates $X^{i=1,2,3}$. These twist fields contribute to the conformal dimension of the ground state in the $k$-th twisted sector by an amount \begin{equation} c^{(k)}\equiv\1{2}\sum_{i=1}^3 \eta_{(k)}^i\left(1-\eta_{(k)}^i\right) \ , \qquad \eta^i_{(k)}\equiv \abs{kv^i}-{\rm Int}\,\abs{kv^i} \ . \label{casimir} \end{equation} We also recall that there arise some complications concerning the twisted vacua for higher twisted sectors ($k=2,\cdots,N-2$) in non-prime order orbifolds ($N=4,6,8,12$), which are relevant for the generalized GSO projection. In this case, the fixed points of the higher twist $\theta^k$ are not necessarily fixed by the single twist $\theta$ but transform into each other. Therefore we have to take their linear combinations to form eigenstates under the single twist\cite{KO:linear comb}. In Ref.~\cite{Orbi3}, such eigenstates were explicitly constructed with their eigenvalues $e^{i\gamma}$ under the single twist. Mass formulas and physical states are most easily described in light-cone gauge\rlap.\footnote{ In our convention, we simply remove the last component of the $SO(9,1)$ momentum to get the transverse $SO(8)$ momentum. } Mass formulas are obtained by counting the conformal dimensions of vertex operators and are given, for the right- and left-moving $T_k$-sectors (untwisted sector $U$ for $k=0$), respectively, by \begin{eqnarray} \1{8}m^2_{{\rm R}} &=&\1{2}\sum_{i=1}^3\left(p^i_{{\rm R}}\right)^2 + \1{2}\sum_{t=1}^4\left(\wt{p}^t\right)^2 + N_{{\rm R}}^{(k)}-\1{2}+c^{(k)} \ , \label{mass:right} \\ \1{8}m^2_{{\rm L}} &=&\1{2}\sum_{i=1}^3\left(p^i_{{\rm L}}\right)^2 + \1{2}\sum_{\hat{I}=1}^{16}\left(\wt{P}^{\hat{I}}\right)^2 + N_{{\rm L}}^{(k)}-1+c^{(k)} \ , \label{mass:left} \end{eqnarray} where the oscillator numbers $N_{{\rm R},{\rm L}}^{(k)}$ take fractional values which are multiples of $1/N$. Alternatively, we can express the contribution from the sixteen-dimensional gauge part in terms of the Kac-Moody algebra in the following way. Let $\prod_aG_a$ be the gauge group.
If the state with the (shifted) momentum $\wt{P}^{\hat{I}}$ transforms in the representation $\otimes_aR_a$, then we have a formula \begin{equation} \1{2}\sum_{\hat{I}=1}^{16}\left(\wt{P}^{\hat{I}}\right)^2 = \sum_ah_a\left(R_a\right) \equiv \sum_a{C_2(R_a) \over k_a + C_2(G_a)} \ , \label{h} \end{equation} where $k_a$ is a Kac-Moody level and $C_2(R_a)$ ($C_2(G_a)$) is a Casimir for the $R_a$ (adjoint) representation of the group $G_a$. If the group $G_a$ is Abelian, the conformal dimension of the state carrying its $U(1)$ charge $Q_a$ is given by $h_a(Q_a)=Q^2_a/k_a$. The physical states of orbifold models should be invariant under the full action of the orbifold twist. {}For the untwisted sector, this leads to modulo integer conditions \begin{equation} P^{\hat{I}}V^{\hat{I}}-p^tv^t=P^{\hat{I}}a^{\hat{I}}= 0 \ , \label{utmass} \end{equation} from which we can identify gauge groups and massless untwisted matter contents. Massless gauge bosons correspond to the $SO(8)$ weights $p_v^t=(0^3\,;\,\pm 1)$ and so satisfy $P^{\hat{I}}V^{\hat{I}}=P^{\hat{I}}a^{\hat{I}}=0$ while massless untwisted matter fields to $p_v^t=(\ul{1,0^2}\,;\,0)$. {}For instance, models with the quasi-standard embedding (\ref{quasi}) always contain an $E_6$ gauge group. Generically, a $U(1)$ factor gauge group appears corresponding to the non-vanishing element of the shift or Wilson lines although some combination of non-vanishing elements may correspond to the Cartan part of a non-Abelian group. When a $U(1)$ corresponds to a basis vector $V_Q^{\hat I}$, the level $k_Q$ and the charge $Q$ of the state with a momentum $\wt{P}^{\hat I}$ are given, respectively, by\cite{realistic,U1 charge} \begin{eqnarray} k_Q&=&2\sum_{\hat I=1}^{16}\left(V_Q^{\hat I}\right)^2 \ , \label{level:def} \\ Q&=&\sum_{{\hat I}=1}^{16}V_Q^{\hat I}\wt{P}^{\hat I} \ . \label{charge:def} \end{eqnarray} The physical states in twisted sectors are singled out by the generalized GSO projection operator\cite{Orbi1,GSO,Orbi3} \begin{equation} G_k\equiv\1{N}\sum_{h=0}^{N-1}\left(\Delta_k\right)^h \nonumber \end{equation} so that only the states with $\Delta_k=1$ survive. Here we recall from Ref.~\cite{Orbi3} the expression for the operator $\Delta_k$ in the absence of a Wilson line:\footnote{ The expression in the presence of Wilson lines can be found in Refs.~\cite{KO:WL,Orbi3}. } \begin{eqnarray} \Delta_k&\equiv& e^{i\gamma}e^{2\pi i\left(N_{{\rm R}}+N_{{\rm L}}\right)}e^{2\pi i\Theta_k} \ , \label{GSO:delta} \\ \Theta_k&\equiv& \2{k}\left[\sum_i\left(v^i\right)^2 -\sum_{\hat{I}}\left(V^{\hat{I}}\right)^2\right] +\left[\sum_{\hat{I}}\wt{P}^{\hat{I}}V^{\hat{I}} -\sum_i\wt{p}^iv^i\right] \ , \label{GSO:theta} \end{eqnarray} where the first term in $\Theta_k$, which is important for non-standard embedding cases, expresses $Z_N$-transformation property of twisted ground states while the second expresses that of vertex operators. The phase $e^{i\gamma}$ described above should be kept in the case of the higher twisted sectors of non-prime order orbifolds. {}For the case of $Z_{N=3,7}$ orbifold models, the GSO projection is trivial and the level matching condition $m_{{\rm R}}^2=m_{{\rm L}}^2$ is known to be enough to guarantee the modular invariance. \section{Constraints on ``Anomalous'' $U(1)$} String models without left-moving world-sheet supersymmetry, {\it i}.{\it e}., (0,2) models, often lead to anomalous $U(1)$\ symmetry. [Illustrative examples are given in appendix~A.] We wish to have a criterion for the appearance of anomalous $U(1)$. 
In this section, we examine the massless conditions described in the previous section and find that the appearance of an anomalous $U(1)$\ is constrained for several reasons. \subsection{Visible-Hidden Sector Mixing in Twisted Sector} The first category of such constraints comes from the universal nature of the Green-Schwarz mechanism. Suppose that we have a gauge group $U(1)_A\times\prod_aG_a$ and the $U(1)_A$ is anomalous. In the presence of an anomalous $U(1)$, the K\"ahler potential of the dilaton-axion chiral multiplet $S$ is given at one-string loop\footnote{ This K\"ahler potential includes the term which becomes the Fayet-Iliopoulos term after the dilaton develops the VEV. Notice that the nonlinear transformation (\ref{dilaton:trf}) explains why we can have the Fayet-Iliopoulos term in the supergravity Lagrangian {\it without} a $U(1)$ $R$ symmetry, unlike the conventional wisdom in supergravity\cite{FI:R}. We also note that no other one-string-loop effects are considered here, including the possible kinetic mixing between several $U(1)$'s\cite{U1 mixing}. } by\cite{DSW} \begin{equation} K=-\ln\left(S+S^{\dagger}-\delta_{\rm GS}\,V_A\right) \ , \label{dilaton kahler} \end{equation} where $V_A$ is the vector multiplet of $U(1)_A$ and $\delta_{\rm GS}$ is a constant related to the mixed gravitational anomaly; it is ${\rm Tr}\, Q_A=96\pi^2\sqrt{k_A}\delta_{\rm GS}$ where $Q_A$ and $k_A$ are the charge and level of $U(1)_A$, respectively. On the other hand, the gauge kinetic function $f_a$ of a factor group $G_a$ is given at string tree level by $f_a=k_aS$, \begin{equation} {\cal L}_{{\rm gauge}}= \1{4}\sum_ak_a\int d^2\theta\, S W^{\alpha (a)}W_\alpha^{(a)} + \hbox{H.c.}\ , \label{gauge kinetic} \end{equation} where the summation is taken over all gauge groups including the anomalous $U(1)_A$. The existence of the axion-like coupling of ${\rm Im}\,S$ in eq.~(\ref{gauge kinetic}) enables us to cancel the pure $U(1)_A^3$ and mixed $U(1)_A$ - $G_a^2$ anomalies as well as the mixed gravitational one by combining the nonlinear transformation of the dilaton-axion field with the $U(1)_A$ gauge transformation \begin{eqnarray} V_A&\longrightarrow&V_A+\2{i}\left(\Lambda-\Lambda^\dagger\right) \ , \label{vector:trf} \\ S&\longrightarrow&S+\2{i}\delta_{\rm GS}\,\Lambda \ , \label{dilaton:trf} \end{eqnarray} where $\Lambda$ is a parameter chiral superfield. In any modular invariant string theory in four dimensions, all the $U(1)$ anomalies should be canceled in this manner\cite{SW:anomaly}. Hence their anomaly coefficients should satisfy the following universality relation: \begin{equation} \1{k_a}\mathop{{\rm Tr}\,}_{G_a}T(R)Q_A =\1{3}{\rm Tr}\, Q_A^3 =\1{24}{\rm Tr}\, Q_A\,\left(\,\equiv 8\pi^2\delta_{\rm GS}\,\right) \ , \label{universal:without U1} \end{equation} where $2\,T(R)$ is the index of the representation $R$, and we have rescaled $Q_A$ so that $k_A=1$. We refer to this relation as the universal GS relation. It is important to realize that when the gauge group contains several $U(1)$'s, each $U(1)$ should satisfy the universal GS relation. This property, which can explicitly be confirmed by the examples given in the Appendices, follows from the uniqueness of the anomalous $U(1)$; we can always find a unique $U(1)_A$ which may be anomalous so that other $U(1)$'s are anomaly free\cite{SW:anomaly,U1 charge}.
Then the universal GS relation for this $U(1)_A$ reads \begin{equation} \1{k_a}\mathop{{\rm Tr}\,}_{~G_a}T(R)Q_A ={\rm Tr}\, Q_B^2Q_A =\1{3}{\rm Tr}\, Q_A^3 =\1{24}{\rm Tr}\, Q_A=8\pi^2\delta_{\rm GS} \ , \label{universal} \end{equation} where $Q_B$ is the charge of any non-anomalous $U(1)$ and has been rescaled so that $U(1)_B$ has level one. By using this equation, we can show that any linear combinations, $U(1)_{\alpha}=\alpha U(1)_A+\beta U(1)_B$ and $U(1)_{\beta}=\beta U(1)_A-\alpha U(1)_B$ with $\alpha^2+\beta^2=1$, satisfy the same relations as eq.~(\ref{universal}) with rescaled GS constants, $\alpha\delta_{\rm GS}$ and $\beta\delta_{\rm GS}$, respectively. We thus see that any $U(1)$ satisfies the universal GS relation even if it does not coincide with the true anomalous $U(1)$. This fact is useful in the following discussion. We can derive several constraints on the anomalous $U(1)$\ from the universal nature of the anomaly in string theory. The basic observation is that if a $U(1)$ symmetry has no mixed $U(1)$ - $G_a^2$ anomaly for a certain group $G_a$, then all the $U(1)$ - $G_b^2$ anomalies should vanish for any $G_b$. Therefore we can judge whether a $U(1)$ is anomalous or not by examining just a single type of anomaly. Practically it is easiest to examine the mixed $U(1)$ anomaly with the largest gauge group $G_{\ell}$ since the massless conditions tightly restrict the appearance of nontrivial representations of $G_{\ell}$. {}For instance, massless matter fields in nontrivial representations of $E_8$ are forbidden and anomalies involving $E_8$ always vanish. This explains why all (2,2) models which lead to an unbroken $E_8$ cannot have an anomalous $U(1)$\ symmetry. To derive further constraints, we write a gauge group in the form \begin{equation} G=G_{{\rm vis}}\times G_{{\rm hid}} = \left[\prod_a G_a \times U(1)^m \right] \times \left[\prod_b G'_b\times U(1)'^n\right] \ , \label{vis-hid} \end{equation} where the visible- and hidden-sector groups $G_{{\rm vis}}$ and $G_{{\rm hid}}$ originate from $E_8$ and $E_8'$, respectively. Generally it is possible that some massless states transform nontrivially under both the visible and hidden groups. If the model contains no such state, we call it a model with complete separation of the visible and hidden sectors. Then the universal nature (\ref{universal}) clearly tells us that models with complete separation have no anomalous $U(1)$\ symmetry. Actually we have even stronger constraints: A $U(1)_a$ gauge group in the visible sector is anomaly free if there is a hidden group $G_b'$ such that any massless $G_b'$-charged field has vanishing $U(1)_a$ charge. Notice, furthermore, that there is no mixing between visible and hidden sectors in the untwisted sector: all massless untwisted fields in the visible sector have vanishing $E_8'$-momentum $P^{I'}=0$ and are neutral under $G_{{\rm hid}}$, and vice versa. The mixing can arise only through the shift (\ref{shifted mom}) of $E_8\times E_8'$ momenta. Hence we can restrict ourselves to twisted sectors and conclude that {\it if the visible-hidden sector mixing is absent for massless twisted matter fields, the model has no anomalous $U(1)$\ symmetry}. We now apply the above constraint to show that many models whose gauge group contains an $E_7$ or $E_6$ do not have an anomalous $U(1)$. {}First consider models with an $E_7$. Here we restrict ourselves to the models with $k_a=1$.
The expression~(\ref{h}) tells us that the massless condition (\ref{mass:left}) forbids representations with conformal dimension larger than $1$. Then, other than the singlet, massless matter fields can only belong to the representation $\ul{56}$, which has the conformal dimension $h(\ul{56})=3/4$, as is seen from $C_2(E_7)=18$ and $C_2(\ul{56})=57/4$. Actually there are many models in which massless $\ul{56}$'s are forbidden in twisted sectors or have vanishing $U(1)$ charges. An example is the $T_1$-sector of $Z_3$ orbifold models, in which $c^{(1)}=1/3$ and thus the field in $\ul{56}$ cannot satisfy the massless condition (\ref{mass:left}). Another example is the $T_2$-sector of $Z_4$ orbifold models, in which $c^{(2)}=1/4$. Even if a massless $\ul{56}$ appears in this sector, it cannot have a non-vanishing charge for any $U(1)$ group since the massless condition is already saturated, $h(\ul{56})+c^{(2)}-1=0$. The same is true in any twisted sector of $Z_3$, $Z_4$, $Z_6$-I, $Z_7$ and $Z_8$-I orbifold models in the classification of Refs.~\cite{Orbi2,Orbi3}. Therefore, if the hidden gauge group contains an $E_7$, all $U(1)$ symmetries in the visible sector are anomaly free for the $Z_N$ orbifold models other than $Z_6$-II, $Z_8$-II, $Z_{12}$-I and -II. Note, however, that since our constraint here utilizes the absence of the visible-hidden sector mixing, it does not exclude the anomalous $U(1)$\ in the same sector as an $E_7$. The only possible hidden gauge group which includes an $E_7$, apart from the anomaly-free $E'_7\times SU(2)'$, is $E'_7\times U(1)'$. The $U(1)'$ accompanying the $E'_7$ is anomalous if the untwisted sector\footnote{ In the untwisted sector with $P^{\hat I}V^{\hat I}\equiv 1/2$, which is an $N=2$ subsector, the degeneracy factor is important. } contains a massless $\ul{56}$. Actually an explicit analysis shows that it {\it is} always anomalous before Wilson lines are turned on. A similar analysis can be extended to the models with an $E_6$ gauge group. The $E_6$ group with $k=1$ allows only the $\ul{27}$ and its conjugate representations since we have $C_2(E_6)=12$, $C_2(\ul{27})=26/3$ and $h(\ul{27})=2/3$. Therefore the same result as in the models with an $E_7$ can be derived for the $Z_3$ orbifold models containing an $E_6$ gauge group. On the other hand, other $Z_N$ orbifold models are less constrained even if they contain an $E_6$ gauge group (although massless $\ul{27}$'s do not appear so often since their conformal dimension is large). Here we only make some comments on the models with the quasi-standard embeddings (\ref{quasi}). Even if we restrict ourselves to this class of models, there are some examples in which an anomalous $U(1)$\ appears even in the opposite sector to the $E_6$. If we restrict further to the cases without a Wilson line, however, an explicit analysis shows that the $E_6$ group derived from the quasi-standard embeddings does not have any nontrivial twisted matter fields and all $U(1)$'s in the opposite sector to the $E_6$ are anomaly free. Especially in the case of $Z_7$ orbifold models, none of the quasi-standard embeddings leads to an anomalous $U(1)$\ in either the visible or the hidden sector.
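As a quick check, the conformal-dimension arithmetic used in this subsection can be reproduced with exact rational numbers; the minimal sketch below (Python; the Casimirs are the values quoted above and $c^{(k)}$ follows eq.~(\ref{casimir})) verifies $h(\ul{56})=3/4$, $h(\ul{27})=2/3$ and the two saturation statements. \begin{verbatim}
from fractions import Fraction as F

def h(C2_R, C2_G, k=1):
    # Conformal dimension h = C2(R) / (k + C2(G)) at Kac-Moody level k
    return C2_R / (k + C2_G)

def c_twist(kv):
    # c^(k) = (1/2) sum_i eta_i (1 - eta_i), eta_i = |k v^i| - Int|k v^i|
    eta = [abs(x) - int(abs(x)) for x in kv]
    return sum(e * (1 - e) for e in eta) / 2

h56, h27 = h(F(57, 4), 18), h(F(26, 3), 12)
c1_z3 = c_twist([F(1, 3), F(1, 3), F(-2, 3)])   # Z3, T1 sector
c2_z4 = c_twist([F(1, 2), F(1, 2), F(-1, 1)])   # Z4, T2 sector: k v, v=(1,1,-2)/4
print(h56, h27, c1_z3, c2_z4)   # 3/4 2/3 1/3 1/4
print(h56 + c1_z3 > 1)          # True: no massless 56 in the Z3 T1 sector
print(h56 + c2_z4 == 1)         # True: the Z4 T2 massless condition saturates
\end{verbatim}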
One may wonder whether the above constraints are less powerful, given that realistic model building involves Wilson lines, which do not allow such large gauge groups as an $E_7$ or $E_6$. As we shall see in the next section, however, the origin of the anomalous $U(1)$\ can be traced back to the case without Wilson lines, and so the constraints given here turn out to be already powerful. \subsection{Discrete Symmetries} The second category of constraints comes from discrete symmetries of the spectrum. Here, we examine the mass formulas and describe several examples of such symmetries that forbid the appearance of an anomalous $U(1)$. As is clear from the discussion on the $U(1)$ basis given around eq.~(\ref{universal}), we can examine the visible and hidden sectors separately, and so we concentrate on the visible-sector gauge group coming from the first $E_8$. As noted above, a $U(1)$ gauge symmetry generically corresponds to a non-vanishing element of the shift $V^I$ or Wilson lines $a^I$; such a non-vanishing element breaks the $E_8$ and generically corresponds to an unbroken $U(1)$ basis. On the other hand, a vanishing element of the shift and Wilson line generically corresponds to the Cartan basis of a non-Abelian gauge group. The only exception\cite{reduce,U1 charge} is the case with the total shift of the form $kV^I+m^ia_i^I=(*^7,0)$, where $*$ indicates non-zero entries. In any case, if the $J$-th and $K$-th components of the shift and Wilson lines vanish simultaneously, they correspond to the Cartan parts of a non-Abelian group. In this subsection, we are mainly interested in the $U(1)$'s that correspond to non-vanishing elements of a Wilson line. It is instructive, however, to reconsider why the anomaly cancels for the $U(1)$ which corresponds to a vanishing element of the shift and Wilson line. Suppose that the $J$-th components of the shift and Wilson line are zero: $V^J=a^J=0$. Then we observe that $\wt{P}^J=P^J$ and that {\it the mass formulas (\ref{mass:left}), physical state conditions (\ref{utmass}) and generalized GSO projection operator (\ref{GSO:delta}), (\ref{GSO:theta}) are all invariant under the transformation which reverses the sign of the $J$-th component of $E_8$-momenta}: \begin{equation} P^I=\left(\cdots, P^J, \cdots\,\right) \ \longrightarrow\ \wb{P}^I\equiv\left(\cdots, -P^J, \cdots\,\right) \ . \label{Z2 symmetry} \end{equation} This implies that if the state with $P^I$ is massless and physical, there exists a massless physical state with $\wb{P}^I$, and thus the $U(1)$ charges corresponding to the $J$-th Cartan generator sum up to vanish \begin{equation} \sum P^J=0 \ , \end{equation} where the summation is taken over all massless states. In this way, the absence of the anomaly for the $U(1)$'s corresponding to vanishing elements of the shift and Wilson line can be understood on the basis of a discrete symmetry of the spectrum. In fact, there is a subtlety in the above discussion. If the $E_8$-momentum $P^I$ corresponds to a spinorial root (\ref{spinor root}), the operation (\ref{Z2 symmetry}) flips the chirality. Then we need another component $V^K=a^K=0$ so that the simultaneous change $P^J\rightarrow -P^J$ and $P^K\rightarrow -P^K$ preserves the chirality. Otherwise, we have to examine whether or not the states with a spinorial root exist and contribute to the anomaly. Now let us extend the above argument to show that there is no anomaly in the $U(1)$ corresponding to a non-vanishing element of Wilson lines if the shift and Wilson lines are orthogonal to each other. [This includes the case with a vanishing shift.] We mainly consider the $Z_3$ orbifold models with a single independent Wilson line $a^I$ for simplicity.
Because of the orthogonality, we can use an $E_8$ transformation so that $V^J=0$ for $a^J\neq 0$. We examine the untwisted and twisted sectors separately. {}For the untwisted sector, precisely the same argument as above applies. If an $E_8$-momentum $P^J$ satisfies the massless conditions (\ref{utmass}) for untwisted fields, the momentum $\wb{P}^I$ whose $J$-th component is replaced with $-P^J$ also satisfies them. This is the symmetry from which we conclude that the charges of the $U(1)$ corresponding to the non-vanishing element $a^J\neq 0$ of the Wilson line sum up to vanish in the untwisted sector. {}For the twisted sector, the momenta of states are shifted as in eq.~(\ref{shifted mom}). On each two-dimensional $Z_3$ orbifold, there are three subsectors corresponding to three fixed points with $e^i=m\,e^i_{SU(3)}$ ($m=0,\,\pm 1$) in eq.~(\ref{boundary condition:X}), where $e^i_{SU(3)}$ is the simple root of the $SU(3)$ lattice. In the subsector corresponding to $m=0$, the massless condition (\ref{mass:left}) is symmetric under $P^{\hat J} \rightarrow -P^{\hat J}$ and the situation is similar to the untwisted sector. Interestingly, if $P^{\hat J}$ satisfies the massless condition (\ref{mass:left}) for $m=1$, $-P^{\hat J}$ satisfies it for $m=-1$, and therefore the contributions to the anomaly cancel between the $m=1$ and $m=-1$ subsectors. Note that if there exists a massless state with a spinorial root, we must take care of the chirality, as noted before. We need another component $V^K=0$ so that the simultaneous change $P^J\rightarrow -P^J$ and $P^K\rightarrow -P^K$ preserves the chirality. This is always possible in $Z_3$ orbifold models, and therefore we have $\sum\wt{P}^J=\sum\wt{P}^K=0$. We thus see that if the Wilson line is orthogonal to the shift, there exists a discrete symmetry which guarantees the cancellation of the anomaly for the $U(1)$ basis that corresponds to non-vanishing elements of the Wilson line. In this case, the anomalous $U(1)$\ basis is related only to non-vanishing elements of the shift. {\it If the $E_8$ breaking by the shift does not produce an anomalous $U(1)$, there is no anomaly even if several $U(1)$'s appear by switching on the Wilson lines orthogonal to the shift}. {}For other orbifold models, we can find similar symmetries. {}For instance, the two-dimensional $Z_4$ orbifold has two subsectors corresponding to $m=0,1$. The corresponding Wilson line $a^I$ satisfies $2a^I=0$ up to $\Gamma_{E_8}$, as was shown in Refs.~\cite{KO:WL,Orbi3}. This implies that the subsector with $m=1$ is equivalent to the one with $m=-1$ as far as the massless condition is concerned. If there is no massless state with a spinorial $\wt{P}^I$, this equivalence will lead to a symmetry of the massless spectrum. If a Wilson line is not orthogonal to the shift, the symmetry described above is broken. Even in such a case, however, a similar symmetry can be found if the Wilson line and shift satisfy a certain relation. Let us take the $Z_3$ orbifold model with $V^J=2/3$ and $a^J=2/3$ as an example. The twisted states in the subsector with $m=-1$ have $\wt{P}^J=P^J$ while the states in the subsectors with $m=0$ and $m=1$ have the shifted momenta, $\wt{P}^J=P^J+2/3$ and $\wt{P}^J=P^J+4/3=(P^J+2)-2/3$, respectively. Therefore the sum of the quantum number $\wt{P}^{\hat J}$ vanishes for the twisted states as in the above discussion (although such a cancellation does not always work in the untwisted sector).
A similar cancellation between twisted states can be found in several combinations of the shift and Wilson line. Next we study another symmetry in the massless spectrum. Consider the $Z_7$ orbifold model with the standard embedding $V^{\hat{I}}=\left(1,2,-3,0^5;0^8\right)/7$. This model has the gauge group $G=E_6\times U(1)_1\times U(1)_2\times E_8'$ whose $U(1)$ bases can be taken as \begin{equation} V_1^I=\frac{\sqrt{3}}{2}\left(1,-1,0,0^5\right) \ , \qquad V_2^I=\frac{1}{2}\left(1,1,-2,0^5\right) \ , \label{Z7 basis:U} \end{equation} or equivalently, \begin{equation} \wt{V}_1^I=\frac{1}{2\sqrt{7}}\left(5,-4,-1,0^5\right) \ , \qquad \wt{V}_2^I=\frac{\sqrt{3}}{2\sqrt{7}}\left(1,2,-3,0^5\right) \ . \label{Z7 basis:T} \end{equation} Although the absence of the $U(1)$ anomaly in this model can be proved for other reasons, such as the existence of the $E_8'$ or the (2,2) superconformal symmetry, this model has remarkable symmetries that guarantee the anomaly cancellation. \begin{table}[t] \begin{center} \begin{tabular}{@{\vrule width 1pt \ }r|cc|cc@{\ \vrule width 1pt}} \noalign{\hrule height 1pt} &\multicolumn{2}{c|}{$\ul{27}$} &\multicolumn{2}{c@{\ \vrule width 1pt}}{$\ul{1}$}\\ \cline{2-5} & $Q_1 $ & $Q_2$ & $Q_1/\sqrt{3}$ & $Q_2/\sqrt{3}$ \\ \noalign{\hrule height 1pt} \hline\hline \noalign{\hrule height 1pt} $U_1$ & $\2{\sqrt{3}}$ & $\2{1}$ & $-1$ & $0$\\ \hline $U_2$ & $-\2{\sqrt{3}}$ & $\2{1}$ & $\2{1}$ & $-\2{\sqrt{3}}$\\ \hline $U_4$ & $0$ & $-1$ & $\2{1}$ & $\2{\sqrt{3}}$\\ \noalign{\hrule height 1pt} \end{tabular} \end{center} \caption[$U(1)$ charges of untwisted matter fields]% {$U(1)$ charges of untwisted matter fields in the $Z_7$ standard embedding} \label{table:U1 charge:untwisted} \end{table} \begin{table}[hbt] \begin{center} \begin{tabular}{@{\vrule width 1pt \ }l|cc|cc|cc|cc@{\ \vrule width 1pt}} \noalign{\hrule height 1pt} &\multicolumn{2}{c|}{$\ul{27}$~~($7\times 1$)} &\multicolumn{2}{c|}{$\ul{1}$ ~~($7\times 1$)} &\multicolumn{2}{c|}{$\ul{1}$ ~~($7\times 2$)} &\multicolumn{2}{c@{\ \vrule width 1pt}}{$\ul{1}$~~($7\times 4$)}\\ \cline{2-9} & $\sqrt{7}\wt{Q}_1$ & $\sqrt{7}\wt{Q}_2$ & $\frac{\sqrt{7}}{2\sqrt{3}}\wt{Q}_1$ & $\frac{\sqrt{7}}{2\sqrt{3}}\wt{Q}_2$ & $\frac{\sqrt{7}}{3}\wt{Q}_1$ & $\frac{\sqrt{7}}{3}\wt{Q}_2$ & $\frac{\sqrt{7}}{\sqrt{3}}\wt{Q}_1$ & $\frac{\sqrt{7}}{\sqrt{3}}\wt{Q}_2$ \\ \noalign{\hrule height 1pt} \hline\hline \noalign{\hrule height 1pt} $T_1$ & $-\2{1}$ & $-\2{\sqrt{3}}$ & $-\2{\sqrt{3}}$ & $-\2{1}$ & $\2{1}$ & $-\2{\sqrt{3}}$ & $0$ & $1$ \\ \hline $T_2$ & $-\2{1}$ & $+\2{\sqrt{3}}$ & $0$ & $1$ & $-1$ & $0$ & $+\2{\sqrt{3}}$ & $-\2{1}$ \\ \hline $T_4$ & $1$ & $0$ & $+\2{\sqrt{3}}$ & $-\2{1}$ & $\2{1}$ & $+\2{\sqrt{3}}$ & $-\2{\sqrt{3}}$ & $-\2{1}$ \\ \noalign{\hrule height 1pt} \end{tabular} \end{center} \caption[$U(1)$ charges of twisted matter fields]% {$U(1)$ charges of twisted matter fields in the $Z_7$ standard embedding} \label{table:U1 charge:twisted} \end{table} The untwisted sector has three subsectors $U_1$, $U_2$ and $U_4$ which are classified by the value $\sum_ip^iv^i=1/7,\,2/7$ and $4/7$. Each untwisted subsector has a $\ul{27}+\ul{1}$ as the massless matter. In each twisted sector $T_{1,2,4}$, we have \begin{equation} 7\times \left[\ul{27}+\ul{1}+2\times\ul{1}+4\times\ul{1}\right] \ , \end{equation} where the first $\ul{27}$'s come from non-oscillated states, and the remaining singlets come from oscillated states with $N_{{\rm L}}=1/7,\,2/7,\,4/7$, respectively. 
The $U(1)$ charges of these fields are shown in Table~\ref{table:U1 charge:untwisted} for the untwisted states in the basis (\ref{Z7 basis:U}) and in Table~\ref{table:U1 charge:twisted} for the twisted states in the basis (\ref{Z7 basis:T}). The states in each column of both tables form a triplet and thus the anomaly cancels between them. [For instance, the $Q_1$ charges of the untwisted $\ul{27}$'s sum to $\2{\sqrt{3}}-\2{\sqrt{3}}+0=0$, and the $Q_2$ charges to $\2{1}+\2{1}-1=0$.] Actually the massless spectrum of this model is symmetric under \begin{equation} X^1\ \longrightarrow\ X^2\ \longrightarrow\ X^3\ \longrightarrow\ X^1 \ , \label{Z3} \end{equation} which rotates the untwisted subsectors $U_{1,2,4}$, as well as the twisted sectors $T_{1,2,4}$ with the same oscillator number, into each other. We can find a similar symmetry in some other orbifold models (even with non-standard embeddings). A partial list includes the untwisted states for $Z_8$-I and $Z_{12}$-I orbifold models with quasi-standard embeddings, whose defining shifts are $v^i=1/8(1,2,-3)$ and $1/12(1,4,-5)$, respectively. We also note that the above $Z_7$ orbifold model has another type of symmetry. We see from Table~\ref{table:U1 charge:twisted} that the $U(1)$ charges of the singlets with different oscillator numbers sum to zero within a single twisted sector. [The numbers in parentheses in Table~\ref{table:U1 charge:twisted} show the multiplicity of the states.] This is a symmetry within each subsector corresponding to a fixed point and might be interesting when we include a Wilson line, whose role is to resolve the degeneracy between the fixed points. \section{Classification of ``Anomalous'' $U(1)$ in Orbifold Models} In this section we examine $Z_3$ orbifold models in detail and give a procedure for classifying the models with an anomalous $U(1)$. We work out the models in the absence of a Wilson line and then extend the analysis to the case with a Wilson line. Although we deal only with $Z_3$ orbifold models, our procedure is general and can be applied to other models. Let us first recapitulate the classification of $Z_3$ orbifold models in the absence of a Wilson line, with attention to the visible-hidden sector mixing. Modular invariant pairs of shifts $(V^I\,;\,V^{I'})$ are classified into five types, including a trivial one, as \begin{eqnarray} & {\rm No.~0}: & (3V^I;\,3V^{I'})= (0^8;0^8) \ , \nonumber \\ & {\rm No.~1}: & (3V^I;\,3V^{I'})= (2,1,1,0^5;0^8)\ , \nonumber \\ & {\rm No.~2}: & (3V^I;\,3V^{I'})= (2,1,1,0^5;2,1,1,0^5) \ , \label{shift5} \\ & {\rm No.~3}: & (3V^I;\,3V^{I'})= (1,1,0^6;2,0^7)\ , \nonumber \\ & {\rm No.~4}: & (3V^I;\,3V^{I'})= (2,1^4,0^3;2,0^7)\ , \nonumber \end{eqnarray} up to $E_8 \times E_8'$ automorphisms\rlap.\footnote{ One can subtract $E_8\times E_8'$ roots so that the total shift has length less than one. See also Ref.~\cite{reduction}. } The first model is a trivial one with unbroken $E_8 \times E_8'$ gauge group and no massless matter fields. The second one (model No.~1) corresponds to the standard embedding with $E_6\times SU(3)\times E_8'$ gauge group. The massless twisted matter fields are \begin{equation} (27,1;\,1') \quad {\rm for}\quad N_{{\rm L}}=0 \ , \qquad ( 1,3;\,1') \quad {\rm for}\quad N_{{\rm L}}=\3{1} \ . \label{matter:No.1} \end{equation} This model has no visible-hidden sector mixing. The third one (model No.~2) corresponds to a quasi-standard embedding and leads to the gauge group $E_6\times SU(3)\times E_6'\times SU(3)'$. 
This model has the visible-hidden sector mixing due to the massless twisted matter \begin{equation} (1,3;\,1',3') \quad {\rm for}\quad N_{{\rm L}}=0 \ , \label{matter:No.2} \end{equation} but this mixing does not contribute to the anomaly. Thus the models No.~0, 1 and 2 contain no $U(1)$ gauge group and are anomaly free. On the other hand, an anomalous $U(1)$\ does arise in the models No.~3 and 4 as we describe in detail in Appendix~A. We now proceed to models in the presence of a Wilson line. Each subsector corresponding to the fixed point labeled by the integers $m^i$ has the total shift of the form \begin{equation} \left(\,V^I+m^ia^I_i\,;\,V^{I'}+m^ia^{I'}_i\,\right) \ . \label{total shift} \end{equation} It is remarkable that {\it any modular invariant pair of the total shift is equivalent to one of five shifts} (\ref{shift5}) {\it up to $E_8 \times E_8'$ automorphisms}. This property enables us to have a simple classification of the anomalous $U(1)$. We call the shift $V^{\hat I}=(V^I\,;\,V^{I'})$ which is equivalent to the total shift (\ref{total shift}) of the subsector under consideration {\it an equivalent shift}. Similarly, we call the model without a Wilson line that has the equivalent shift of the subsector under consideration {\it an equivalent model}. All the complication comes from the fact that such equivalent shifts of subsectors can be different from each other: {}For example, it is possible that the subsector labeled by $m^i=(0,0,0)$ has an equivalent shift of No.~0 type while the equivalent shift for $m^i=(1,0,0)$ is of No.~1 type. {}For illustration, let us first discuss the model with the shift No.~1 and a single Wilson line: \begin{eqnarray} 3V^{\hat I}\! &=&(2,1,1,0,0,0,0^2)(0,0,0,0^5)' \ , \nonumber \\ 3\,a_1^{\hat I} &=&(0,0,0,2,1,1,0^2)(2,1,1,0^5)' \ . \label{example1} \end{eqnarray} The gauge group $E_6\times SU(3)_1\times E_8'$ of the model No.~1 is broken by this Wilson line to \begin{equation} SU(3)_1\times SU(3)_2\times SU(3)_3\times SU(2)\times U(1) \times E_6' \times SU(3)' \ . \label{gauge:example1} \end{equation} The twisted sector of this model has three subsectors corresponding to $m^1=0,~\pm 1$. The subsector with $m^1=0$ has the same structure as the twisted sector of the model No.~1; the massless matter fields appear in the representation of $E_6\times SU(3)_1\times E_8'$ as in eq.~(\ref{matter:No.1}). Actually the $E_6$ is broken to $SU(3)_2 \times SU(3)_3 \times SU(2) \times U(1)$ under which the $\ul{27}$ decomposes into $(3,3,2)_1+(3,3,1)_{-2}$. Nevertheless, it is important to realize that the matter fields appear as if they formed an $E_6$ multiplet. It is then clear that this subsector does not contribute to the anomaly since the $U(1)$ in eq.~(\ref{gauge:example1}) is a part of the $E_6$ in this subsector. Similarly, the subsector with $m^1=1$ has the equivalent shift of No.~2 type and the massless matter fields appear in the representation of $E_6\times SU(3)_1\times E_6'\times SU(3)'$. In this case, all the fields are singlets under the broken $E_6$ and so have vanishing $U(1)$ charges. Thus this subsector does not contribute to the anomaly in spite of the visible-hidden sector mixing due to the field (\ref{matter:No.2}). [Even if the $SU(3)$ is broken to $SU(2)\times U(1)$ or $U(1)^2$ by another Wilson line, the field $(1,3\,;\,1',3')$ does not produce any anomalies.] The subsector with $m^1=-1$ is also of No.~2 type and has no contribution to the anomaly. 
Thus this model has only No.~1 and 2 types of twisted massless matter fields and does not contain an anomalous $U(1)$. An important lesson from the above example is that {\it the massless matter content in any subsector is precisely the same as in the corresponding twisted sector of the equivalent model}. This can directly be checked by looking at the massless condition. [Degeneracy factors also coincide with each other when counted in the corresponding subsector in the equivalent model.] As far as each subsector is concerned, the effect of Wilson lines is just to decompose the representation of matter fields according to the unbroken gauge group, and the situation is quite similar to the conventional Higgs mechanism of grand unified theories. This is not true for a whole model, of course, since each subsector can correspond to a different type of model. Note also that some of the untwisted matter fields are projected out by including a Wilson line. Now it is clear what criteria we have for the appearance of an anomalous $U(1)$: If the total shift (\ref{total shift}) is equivalent to one of the shifts No.~0, 1 and 2 listed in eq.~(\ref{shift5}) for all the subsectors $m^i=0,~\pm 1$, then the model contains no anomalous $U(1)$. On the other hand, a model contains an anomalous $U(1)$\ if there is a subsector whose total shift is equivalent to the shift No.~3 or 4. If there are several such subsectors, the anomalous $U(1)$\ is their linear combination (as long as a cancellation between them does not occur). Let us illustrate how our procedure for finding an anomalous $U(1)$\ works in concrete examples. First consider the model with \begin{eqnarray} 3V^{\hat I}\!&=&(2,1,1,0^5)(2,1,1,0^5)' \ , \nonumber \\ 3\,a_1^{\hat I}&=&(-2,0,0,0^5)(0,-1,-1,0^5)' \ . \label{example2} \end{eqnarray} This model has the gauge group $$ SO(10)\times SU(2) \times U(1)_1 \times U(1)_2 \times SO(10)'\times SU(2)' \times U(1)'_1 \times U(1)'_2 \ , $$ whose $U(1)$ bases we take as \begin{eqnarray} \hskip -1cm &&U(1)_1\,:~~Q_1 =(0,1,1,0^5)(0^8)' \ , \qquad U(1)_1'\,:~~Q_1'=(0^8)(0,1,1,0^5)' \ , \nonumber \\ \hskip -1cm &&U(1)_2\,:~~Q_2 =(1,0,0,0^5)(0^8)' \ , \qquad U(1)_2'\,:~~Q_2'=(0^8)(1,0,0,0^5)' \ . \label{U1basis} \end{eqnarray} The subsectors with $m^1=0$ and $m^1=-1$ have total shifts equivalent to the No.~2 type shift and do not contribute to the anomaly. The equivalent shift in the subsector with $m^1=1$ is of No.~3 type and thus an anomalous $U(1)$\ arises from this subsector. The massless matter fields appear in the representation of the broken $E_7\times U(1)_1\times SO(14)'\times U(1)_2'$, and as is shown in Appendix~A, the $U(1)_1$ is anomalous while the $U(1)_2'$ is anomaly free. The remaining $U(1)_2$ and $U(1)_1'$ are anomaly free since they are contained in the $E_7$ or $SO(14)'$. In this way, one can conclude that this model contains an anomalous $U(1)$, whose basis is given by $(0,1,1,0^5)(0^8)'$. Next we discuss a model in which two subsectors contribute to the anomaly. An example of such a model is given by \begin{eqnarray} 3V^{\hat I}\!&=&(0,1,1,0^5)(2,0,0,0^5)' \ , \nonumber \\ 3\,a_1^{\hat I}&=&(-2,-1,-1,0^5)(-2,-1,-1,0^5)' \ . \label{example3} \end{eqnarray} This model has the same gauge group as the previous one and we take the same $U(1)$ bases (\ref{U1basis}). The subsector with $m^1=0$ has the equivalent shift of No.~3 type and the massless matter fields form multiplets under the broken $E_7\times U(1)_1\times SO(14)'\times U(1)'_2$. 
In particular, there is the visible-hidden mixing due to $(1;\,14')_{(2,-1)/3}$, which contributes to the anomaly so that ${\rm Tr}\,_{SO(10)'}Q_1\neq 0$. In addition, the subsector with $m^1=1$ also has a total shift equivalent to the No.~3 type shift. The massless matter fields appear in representations under the broken $SO(14)\times U(1)_2\times E_7'\times U(1)_1'$ in this subsector. In particular, the existence of $(14;\,1')_{(1,-2)/3}$ causes ${\rm Tr}\,_{SO(10)}Q'_1\neq 0$. The subsector with $m^1=-1$ is of No.~2 type and does not contribute to the anomaly. As a result, the true anomalous $U(1)$\ is a linear combination of $U(1)_1$ and $U(1)_1'$, whose coefficients are determined by calculating ${\rm Tr}\,_{SO(10)}Q'_1$ and ${\rm Tr}\,_{SO(10)'}Q_1$ as $Q_1-Q_1'$. The third example is a model in which the anomaly cancels between two subsectors: \begin{equation} 3V^{\hat I}=3\,a_1^{\hat I} =(2,1,1,1,1,0^3)(2,0^7)' \ . \label{example4} \end{equation} The gauge group is $SU(9)\times SO(14)'\times U(1)'$ as in the model No.~4, but the inclusion of the Wilson line removes all the massless untwisted matter. The subsector with $m^1=-1$ has the model No.~0 as an equivalent model and contains no massless matter fields. The matter content of the $m^1=0$ subsector is the same as in the twisted sector of the model No.~4 and is given by nine copies of $(9,1')_{2/3}$. In the $m^1=+1$ subsector, the total shift is $V^{\hat I}+a_1^{\hat I}=2V^{\hat I}$, which is just the shift of the $T_2$-sector in the model No.~4. Therefore this subsector contains nine copies of $(9^*,1')_{-2/3}$ as is seen from the general fact\cite{CP} that the matter content in the $T_2$-sector is the $CPT$ conjugate of that in the $T_1$-sector. These fields cancel the anomaly arising from the $m^1=0$ subsector. We can extend this analysis to more general cases as well as to other orbifold models. Such an analysis proceeds as follows: \begin{enumerate} \renewcommand{\labelenumi}{\labelenumii} \item[(i)] Classify all the models before a Wilson line is included: In each model, work out its twisted matter content and identify the basis of an anomalous $U(1)$\ and its charges. \item[(ii)] Turn on Wilson lines. {}For each subsector of a twisted sector $T_k$, \begin{enumerate} \item identify the equivalent model\rlap.\footnote{ A careful analysis shows that the GSO projection in the presence of a Wilson line results in the same degeneracy factor as in the corresponding twisted subsector of the equivalent model. } This subsector has no contribution to the anomaly if the $T_k$-sector of the equivalent model does not contain massless matter which contributes to the visible-hidden sector mixing. \item ``pull back'' the anomalous $U(1)$\ basis $V_{Q^{(0)}}$ in the equivalent model to get the basis $V_Q$ of the anomalous $U(1)$\ in this subsector. Schematically, \def-\!\!\!-\!\!\!-\!\!\!-\!\!\!\longrightarrow{-\!\!\!-\!\!\!-\!\!\!-\!\!\!\longrightarrow} \def\longleftarrow\!\!\!-\!\!\!-\!\!\!-\!\!\!-{\longleftarrow\!\!\!-\!\!\!-\!\!\!-\!\!\!-} \begin{eqnarray} \hskip -2cm \hbox{shift in $E_8\times E'_8$ lattice} :\ \ \! kV^{\hat I}+m^ia_i^{\hat I}\ \mathop{{-\!\!\!-\!\!\!-\!\!\!-\!\!\!\longrightarrow}}^{R_{E_8\times E_8'}}&&\!\!\!\!\!\! kV^{\hat I} \nonumber\\ &&\!\!\!\Downarrow\ \hbox{step (i)} \label{scheme} \\ \hskip -2cm \hbox{anomalous $U(1)$\ basis}~ :\ \ \qquad V_Q^{\hat I}\qquad \mathop{\longleftarrow\!\!\!-\!\!\!-\!\!\!-\!\!\!-}_{R_{E_8\times E_8'}^{-1}}&&\!\!\!\!\! 
V_{Q^{(0)}}^{\hat I} \nonumber \end{eqnarray} \item decompose the matter content of the $T_k$-sector of the equivalent model according to the unbroken gauge group in the presence of Wilson lines. In general, the ``pull back'' operation as in (\ref{scheme}) is necessary also in this step. \end{enumerate} \item[(iii)] Put these subsectors as well as the untwisted sector together to get the whole model\rlap.\footnote{ It might be interesting to observe that the construction of an orbifold model in a Wilson line background resembles the construction of a fiber bundle. } The true basis of the anomalous $U(1)$\ is given by a linear combination of the $U(1)$ bases obtained in step (b). The massless matter content and $U(1)$ charges are obtained from the results of step (c). \end{enumerate} A classification of modular invariant pairs of the shifts $(V^I\,;\,V^{I'})$ is already available\cite{shift,Orbi2} and it is straightforward to identify the basis of the $U(1)$ which has the visible-hidden sector mixing contributing to the anomaly. As an example, a classification of the visible-hidden sector mixing in $Z_4$ orbifold models is given in Appendix~B, where several new features are observed. We finish this section with two remarks. {}Firstly, the absence of an anomalous $U(1)$\ can be established by examining only twisted sectors in spite of the fact that the untwisted sector may contribute to the anomaly. This is a special case of a more general phenomenon: As in the example (\ref{example3}), the true anomalous $U(1)$\ is a linear combination of several $U(1)$'s when several subsectors contribute to the anomaly. In such a case, the true basis of the anomalous $U(1)$\ can be determined by calculating the mixed anomaly between the visible and hidden sectors and therefore by examining only twisted sectors. These are consequences of the universal nature of the anomaly in string theory as described in section~3.1. Secondly, a cancellation of the anomaly may occur between several subsectors which contribute to the anomaly. We should remark that such a cancellation can be understood, in some cases, by the discrete symmetries described in section~3.2. An example is provided by the model (\ref{example4}). We note also that the absence of the anomaly in the model (\ref{example1}) can be understood by the discrete symmetry, {\it i}.{\it e}., by the orthogonality of the Wilson line to the shift. In this way, the analysis based on the discrete symmetries plays a role complementary to the analysis in this section. \section{``Anomalous'' $U(1)$ Breaking and Discrete Symmetries} The anomalous $U(1)$\ gauge symmetry breaks automatically once the dilaton VEV is fixed by a yet unknown mechanism. There exists a flat direction along which some scalar fields develop VEV's to cancel the Fayet-Iliopoulos term. Then the next issue to be discussed is along which flat direction the anomalous $U(1)$\ breaking occurs, and what kinds of consequences we have after such anomalous $U(1)$\ breaking. We defer the discussion of the former issue to a future publication and comment briefly on the latter here. In general, discrete symmetries survive the breaking of a continuous symmetry. Suppose that the anomalous $U(1)$\ symmetry is broken by a VEV of a scalar field with the $U(1)$ charge $Nq$. If other fields which remain massless after this breaking have charges quantized in units of $q$, a discrete $Z_N$ subgroup of the original anomalous $U(1)$\ is left unbroken. This $Z_N$ symmetry has a $Z_N$ anomaly which comes from two sources. 
As was discussed in Ref.~\cite{discrete:IR} for the breaking of an anomaly-free $U(1)$, a discrete gauge anomaly arises from integrating out the matter fields which gain mass terms through the symmetry breaking. A new contribution arises from the original $U(1)$ anomaly. With ${\rm Tr}\,'$ denoting a summation over massless fields after the anomalous $U(1)$\ breaking, the $Z_N^3$ anomaly can be written as \begin{equation} \1{3}{\rm Tr}\,'Q_A^3=8\pi^2 \delta_{\rm GS} +\1{3}\left(mN+\eta n{N^3\over 8}\right) \ , \label{discrete anomaly:cubic} \end{equation} where $m$ and $n$ are some integers and $\eta =1,\,0$ for $N=\,$even, odd, respectively. The first term is the contribution from the $U(1)$ anomaly and is proportional to the Green-Schwarz coefficient $\delta_{\rm GS}$. The second term is the contribution from massive fields, for which we have used the formula given in Ref.~\cite{discrete:IR}. Similarly the discrete version of the mixed gravitational anomaly is calculated to be \begin{equation} \1{24}{\rm Tr}\,'Q_A=8\pi^2 \delta_{\rm GS} +\1{24}\left(pN+\eta q\2{N}\right) \ , \label{discrete anomaly:grav} \end{equation} where $p$ and $q$ are integers. The integers $m$, $n$, $p$ and $q$ depend on the flat direction along which the anomalous $U(1)$\ breaks. If these $Z_N$ anomalies are to be cancelled by the Green-Schwarz mechanism\cite{coping,discrete:Ib}, we should have the relation \begin{equation} \1{3}{\rm Tr}\,'Q_A^3 = \1{24}{\rm Tr}\,' Q_A \qquad {\rm mod} \ N \ , \end{equation} which leads to the following constraint \begin{equation} {8m-p \over 24}+\eta {2N^2-q \over 48}=\,{\rm integer} \ . \end{equation} We can regard this as a constraint on possible flat directions. Note that $\delta_{\rm GS}$ cancels out and so this constraint is independent of the original anomaly. We can also calculate mixed $Z_N$ anomalies for gauge symmetries, and have further constraints if these anomaly coefficients for the $Z_N$ symmetry are to be universal. Note that the anomalous $U(1)$\ breaking sometimes triggers further gauge symmetry breaking if the scalar fields which develop VEV's are charged under other gauge groups $G$. This is the mechanism which is widely used to reduce the rank of the gauge group in semi-realistic string models\cite{degenerate,reduce,U1 charge}. In such a case, some linear combination of $U(1)$ and $G$ survives and the discrete symmetry in the above discussion may be taken to be orthogonal to it. In addition to gauge symmetries, there is another type of symmetry which is broken in the course of the anomalous $U(1)$\ breaking. We are interested in discrete $R$ symmetries; let us consider them in $Z_3$ orbifold models as an example. The $Z_3$ orbifold is symmetric under the independent $Z_3$ rotation of each complex plane as \begin{eqnarray} X^i&\longrightarrow&e^{2\pi i \wt{v}^i}X^i \ , \label{R:X} \end{eqnarray} where $\wt{v}^i=(n^1,n^2,n^3)/3$ with arbitrary integers $0\le n^i<3$. The right-moving world-sheet supersymmetry requires that the fermionic string coordinates $\psi^i\sim e^{iH^i}$ should be rotated simultaneously, and these rotations are realized by the independent shift of each $H^{i=1,2,3}$ as \begin{eqnarray} H^i&\longrightarrow&H^i+2\pi\wt{v}^i \ , \label{R:H} \end{eqnarray} which rotates the space-time supercharge $Q\sim\prod_ie^{i H^i/2}$ as \begin{equation} Q\ \longrightarrow\ e^{\pi i\sum_i \wt{v}^i}Q \,\left(\,=e^{\3{\pi i}\sum_i n^i}Q\,\right) \ . 
\label{R:supercharge} \end{equation} Unless $\sum_i \wt{v}^i$ is an even integer, these discrete symmetries (\ref{R:X}) and (\ref{R:H}) do not commute with the space-time supersymmetry and are $R$ symmetries\cite{DS,naturalness}. These $R$ symmetries are generated by the rotation by $e^{2\pi i/3}$ of the $i$-th complex plane. We call such a generating element $R_i$. The $R_i$ charge of a state can be read off as follows. Massless scalar fields in the untwisted sector, generally denoted by $U_{1,2,3}$, have $SO(8)$ momenta $p^t=(p^i\,;\,0)$ with $p^i=(1,0,0)$, $(0,1,0)$ and $(0,0,1)$, respectively, and the massless twisted field $T$ has $\wt{p}^t=(1,1,1\,;\,0)/3$. These fields transform under the $R_i$ as \begin{equation} R_i\,:\ U_j\ \longrightarrow\ e^{\3{2\pi i}\delta_{ij}}U_j \ , \qquad T\ \longrightarrow\ e^{{2\pi i \over 9}}T \ , \end{equation} from which we see that each $R_i$ generates a $Z_{18}$ symmetry. The $R$ symmetries (\ref{R:X}) and (\ref{R:H}) can be accompanied by discrete rotations of the left-moving gauge coordinates. In the fermionic formulation, they are $Z_3$ rotations of $\lambda^{\hat I}\sim e^{i\swb{X}^{\hat I}}$, which are equivalently realized by discrete shifts of the bosonic coordinates as \begin{eqnarray} \wb{X}^{\hat I}&\longrightarrow& \wb{X}^{\hat I}+2\pi\wt{V}^{\hat I} \ . \label{R:G} \end{eqnarray} Here $\wt{V}^{\hat I}=n^{\hat I}/3$ with integers $0\le n^{\hat I}<3$ if the ${\hat I}$-th component of the shift $V^{\hat I}$ is non-vanishing. [For a vanishing component $V^{\hat I}=0$, we should take $\wt{V}^{\hat I}=0$ since eq.~(\ref{R:G}) no longer gives a symmetry transformation.] In the (2,2) vacuum, in order to preserve the left-moving world-sheet supersymmetry, we should associate the $E_8\times E'_8$ rotation realized by $\wt{V}^{\hat I}=(\wt{v}^i,0^5\,;\,0^8)$ with the discrete rotations of $X^i$ and $\psi^i$. In (0,2) vacua, however, the discrete rotations (\ref{R:G}) are independent and we are free to associate them with eqs.~(\ref{R:X}) and (\ref{R:H}). Note that the discrete symmetries (\ref{R:G}) themselves are nothing but discrete parts of gauge symmetries. These $R$-symmetries (\ref{R:H}) and discrete parts (\ref{R:G}) of gauge symmetries are generally broken when the anomalous $U(1)$\ breaks along a flat direction. Suppose that the matter field which develops the VEV to cancel the Fayet-Iliopoulos term has $SO(8)$ and $E_8\times E_8'$ momenta $\wt{p}^i$ and $\wt{P}^{\hat I}$, respectively. The corresponding vertex operator transforms under eqs.~(\ref{R:X}), (\ref{R:H}) and (\ref{R:G}) as\footnote{ {}For an oscillated state, the phase $\exp(2\pi iN_{{\rm L}})$ should be included. Note also that if we are to transform the twist field at the same time, we have the corresponding phase from the twisted ground state. In such a case, the $R$ charge of the state can be read off by a formula similar to the GSO phase (\ref{GSO:theta}). } \begin{equation} V_{{\rm R}}V_{{\rm L}} \ \longrightarrow\ e^{2\pi i\left(\wt{P}^{\hat I}\wt{V}^{\hat I} -\wt{p}^{i}\wt{v}^{i}\right)} V_{{\rm R}}V_{{\rm L}} \ . \label{Rcharge} \end{equation} Thus the original $R$ symmetries generated by $R_i$ are generally broken, and the surviving $R$ symmetries are such combinations of $R_i$ and discrete symmetries that satisfy \begin{equation} \wt{P}^{\hat I}\wt{V}^{\hat I}-\wt{p}^{i}\wt{v}^{i} =\,{\rm integer} \ . 
\label{survive} \end{equation} If several matter fields develop their VEV's, the shifts $\wt{v}^i$ and $\wt{V}^{\hat I}$ of the surviving $R$ symmetry should satisfy this relation for each pair of $\wt{p}^i$ and $\wt{P}^{\hat I}$. Let us illustrate the situation by taking the $Z_3$ orbifold model with the No.~4 shift, whose flat directions were analyzed in Ref.~\cite{degenerate}. This model has the gauge group $SU(9)\times SO(14)' \times U(1)'$ and the massless matter content is shown in Appendix A. {}First consider the simplest flat direction where the negatively charged field $(1,14')_{-1}$ in the untwisted sector develops a VEV which cancels the positive Fayet-Iliopoulos term and breaks $SO(14)'$ simultaneously. In this case, the discrete gauge symmetry which survives the breaking is a $Z_6$ under which \begin{equation} \phi_{64'}\ \longrightarrow\ e^{k\pi i}\phi_{64'} \ , \qquad \phi_{9}\ \longrightarrow\ e^{\3{4\pi i}k}\phi_{9} \ , \end{equation} where $k=0,\,\cdots,\,5$. On the other hand, if the field $(1,14')$ which develops the VEV has $p^i=(1,0,0)$, the $R$ symmetries generated by $R_1$ are broken while the $R_{2,3}$ are left unbroken. Although the $R_1$ is broken, we can find a new unbroken $R$-symmetry by combining the discrete part of gauge symmetries so as to satisfy eq.~(\ref{survive}), {\it i}.{\it e}., by combining the symmetry (\ref{R:G}) generated by $\wt{V}^{\hat I}=(0^8\,;\,2/3,0^7)$. Notice that this is exactly the basis for the anomalous $U(1)$, and thus the surviving $R$ symmetry is a combination of the broken $R_1$ and the broken part of the anomalous $U(1)$\ gauge symmetry. This $Z_3$ orbifold model has another flat direction, where $(9,1')_{2/3}$ as well as $(1,14')_{-1}$ develop their VEV's. These $(9,1')_{2/3}$ fields come from the twisted sector and have $\wt{p}^i=(1,1,1)/3$. Switching on their VEV's breaks the above new $R$-symmetry associated with the anomalous $U(1)$. Even in this case, however, the unbroken $R$-symmetry can be found by further combining the center of the $SU(9)$ generated by $\wt{V}^{\hat I}=(1,1,1,1,2,0,0,0\,;\,0^8)/3$. The other $Z_3$ orbifold models also have unbroken $R$-symmetries after symmetry breaking along flat directions. This analysis can be extended to $Z_3$ orbifold models with Wilson lines and to other orbifold models. Unbroken $R$-symmetries, in general, contain discrete subgroups of broken gauge symmetries including the anomalous $U(1)$. \section{Conclusion and discussion} We have studied the origin of anomalous $U(1)$\ gauge symmetry in the orbifold construction of four-dimensional string models. By utilizing the universal nature of the anomaly in string theory, we have derived several conditions for the absence of a $U(1)$ anomaly. We have also found several discrete symmetries which guarantee the cancellation of the anomaly. We have then presented a procedure for classifying the orbifold string models which possess an anomalous $U(1)$\ and for identifying the true basis of the anomalous $U(1)$. We have found several constraints on the anomalous $U(1)$, which may be regarded as a first step toward complete criteria for the appearance of an anomalous $U(1)$. These constraints are rather independent of the detailed structure of models and therefore enable us to conclude the absence of an anomalous $U(1)$\ before going into the analysis of massless spectra. 
According to them, one can conclude the absence of an anomalous $U(1)$\ for the following reasons: (i) the absence of the visible-hidden sector mixing in the twisted sectors (as in the models with a large gauge group), (ii) the existence of discrete symmetries, which can be found by examining the relation between the shift and Wilson lines (such as the orthogonality). The former is powerful enough to allow us to classify the models in the absence of a Wilson line. Such a classification is the basis for more general and detailed analysis in the presence of Wilson lines. On the other hand, the latter plays an important role complementary to such a detailed analysis. One of the main results of the present paper is to give a general procedure for classifying and identifying the anomalous $U(1)$\ in orbifold string models. According to our procedure, the problem is reduced to the classification of models in the absence of a Wilson line. Once we work out this restricted class of models and identify what types of shifts and twisted sectors lead to an anomalous $U(1)$, we can easily extend the analysis to the case with Wilson lines. This greatly simplifies the actual analysis of an anomalous $U(1)$\ since we can avoid the tedious calculation of $U(1)$ charges. Our procedure is based on the fact that an orbifold model is constructed by modular invariant combinations of the twisted subsectors corresponding to fixed points and that any such subsector in the presence of Wilson lines is equivalent by $E_8\times E_8'$ automorphisms to some twisted sector in the absence of a Wilson line. The actual analysis is quite easy when such an equivalence is realized by a trivial $E_8\times E_8'$ automorphism, as we demonstrated in concrete examples. When nontrivial $E_8\times E_8'$ automorphisms are needed, the analysis will be somewhat involved. An investigation of this point will be given elsewhere. The fact that an orbifold model in a Wilson line background is constructed by assigning various types of shifts to twisted subsectors and by combining such subsectors in a modular invariant way might be interesting by itself. We have observed the similarity to the construction of a fiber bundle. More importantly, this fact makes it clear that the origin of an anomalous $U(1)$\ can be traced back to the orbifold twist itself, which is realized by the shift $V^{\hat I}$ in the Abelian embedding adopted here. This may not be surprising since it is this twist operation that gives rise to the chiral structure of the models. [In this respect, it may be interesting to recall that $E_8\times E_8$ heterotic string theory itself may be regarded as an orbifold in the M-theoretical picture\cite{horava-witten}. It might also be interesting to speculate that any chiral structure could be traced to an orbifold in some sense.] Several problems are left untouched in the present paper. One of the most important problems is to give a general characterization of the flat directions along which the anomalous $U(1)$\ breaks. Such an analysis of the flat directions is indispensable for many applications of the anomalous $U(1)$\ to realistic model building, such as the construction of string models with realistic gauge groups and matter contents, fermion mass matrices\cite{texture:IR,texture:others,texture2}, supersymmetry breaking\cite{SUSY-breaking:mech,SUSY-breaking:model}, and the calculation of soft breaking terms\cite{anoma,KK,mass:others}. 
Our results, the classification of an anomalous $U(1)$\ in particular, will be useful for such an analysis of the flat directions. Also helpful will be the results of Ref.~\cite{flat}, where generic flat directions of $Z_{2n}$ orbifold models were worked out (although the breaking of the anomalous $U(1)$\ was not taken into account). In attempts to construct a realistic string model, discrete symmetries including $R$ symmetries are important, for instance, to constrain phenomenologically dangerous couplings. We have discussed discrete symmetries that survive the anomalous $U(1)$\ breaking and observed a possible relation to the broken anomalous $U(1)$. A further study of such remnants of the anomalous $U(1)$\ is desirable. It is a remarkable possibility that the anomalous $U(1)$\ gauge symmetry plays an important role in constructing a promising model of supersymmetry breaking\cite{SUSY-breaking:mech,SUSY-breaking:model}. If this is the case, it is very important to search for a simple and concrete example of a string model in which the supersymmetry breaking mechanism of ref.~\cite{SUSY-breaking:mech} is realized. {}For this purpose, we note that there are many orbifold models without a Wilson line that possess an anomalous $U(1)$\ and that these models deserve further study even if they are not necessarily realistic by themselves. Recently much work has been devoted to understanding nonperturbative aspects of supersymmetric gauge theories and string theories. In particular, it was pointed out in Ref.~\cite{moduli} that the nonperturbative effects of the form $e^{-aS}$ are constrained by the anomalous $U(1)$\ as well as by discrete symmetries. [This may be understood by observing that if we use the field $e^{-S}$, the anomalous $U(1)$\ is linearly realized and the Green-Schwarz coefficient $\delta_{\rm GS}$ is just its anomalous $U(1)$\ charge.] In this sense, the anomalous $U(1)$\ gauge symmetry might be related to some nonperturbative and universal aspects of the theory. It is therefore important to study the anomalous $U(1)$\ from the viewpoint of string duality and M-theory. Our approach here to the anomalous $U(1)$\ can be extended to other constructions of four-dimensional string models such as Calabi-Yau compactifications\cite{CY} and fermionic constructions\cite{fermi}. In particular, the absence of the visible-hidden sector mixing will provide one of the criteria for anomaly free models in any construction. On the other hand, a new clue might be obtained by studying orbifold models with non-Abelian embeddings. The fact that the rank of gauge groups can be lowered in such a construction\cite{nonabelian} might be related to the gauge symmetry breaking triggered by the anomalous $U(1)$\ breaking. \section*{Acknowledgements} The authors are grateful to Y.~Kawamura and J.~Louis for useful discussions. They would like to express their sincere thanks to M.~Bando and I.A.~Sanda, the organizers of Ontake Summer Institute where a part of the work was done. H.~N. is grateful to the colleagues at Niigata University for encouragement and to INS for hospitality. H.~N. is supported in part by the Grant-in-Aid for Scientific Research (\#~08740198) from the Ministry of Education, Science and Culture of Japan. \newpage
\section{Introduction} Large language models (LLMs), such as the state-of-the-art GPT-3 model \citep{https://doi.org/10.48550/arxiv.2005.14165} with up to 175 billion parameters, have achieved remarkable performance in natural language processing (NLP) tasks. However, training and deploying such massive models also poses significant challenges in terms of computational cost, energy consumption, and environmental impact. Therefore, it is crucial to develop effective methods to reduce the size of LLMs without compromising their quality. Neural network pruning is a long-standing model compression method \citep{PhysRevA.39.6600, NIPS1988_07e1cd7d, https://doi.org/10.48550/arxiv.1803.03635, 80236, https://doi.org/10.48550/arxiv.2003.03033}. It can be broadly classified into two types: \emph{unstructured} and \emph{structured}. Unstructured pruning removes individual weights from the network based on some criteria, resulting in sparse weight matrices that can be stored and processed more efficiently. Structured pruning, on the other hand, eliminates whole components, such as neurons, channels, or blocks, leading to smaller architectures that reduce end-to-end inference latency. While unstructured pruning has been extensively studied and applied to LLMs \citep{Wang_2020, https://doi.org/10.48550/arxiv.2104.08682, https://doi.org/10.48550/arxiv.2111.05754, https://doi.org/10.48550/arxiv.2205.11005}, structured pruning is more challenging and less explored. However, structured pruning is also more desirable in many practical scenarios, such as deploying these models on resource-constrained devices or providing fast services based on LLMs. Existing work on structured pruning for LMs focuses on BERT-like networks \citep{https://doi.org/10.48550/arxiv.1810.04805} that consist of an encoder-decoder or an encoder-only architecture \citep{https://doi.org/10.48550/arxiv.2009.08065, https://doi.org/10.48550/arxiv.2204.00408, https://doi.org/10.48550/arxiv.2206.12562, https://doi.org/10.48550/arxiv.2105.14636}. These models are mainly used for natural language understanding (NLU) tasks, such as question answering, sentiment analysis, or natural language inference. Among the various methods, Block Movement Pruning \citep{https://doi.org/10.48550/arxiv.2109.04838} is a recent and popular technique that removes weight blocks based on movement. However, there is a lack of systematic research on structured pruning for decoder-only architectures such as GPT-2 \cite{radford2019language}, GPT-3 \cite{https://doi.org/10.48550/arxiv.2005.14165}, or GPT-Neo \cite{gpt-neo}, which are mainly used for natural language generation (NLG) tasks, such as text summarization, machine translation, or text completion. 
While there are some works that apply unstructured pruning \citep{https://doi.org/10.48550/arxiv.2205.11005} or various orthogonal compression techniques to decoder-only LLMs \citep{https://doi.org/10.48550/arxiv.2012.09852, https://doi.org/10.48550/arxiv.2110.08460, edalati-etal-2022-kronecker, tao2022compression, https://doi.org/10.48550/arxiv.2201.08542, https://doi.org/10.48550/arxiv.2111.00160}, there is no comprehensive evaluation of traditional structured pruning for these models on NLG tasks. In this work, we compress decoder-only auto-regressive language models. Due to the lack of prior literature with the same goal, we evaluate the performance of several general-domain pruning methods on NLG tasks, including magnitude and movement pruning. However, we find that these methods can struggle or under-perform compared to na\"ive baselines, leading to the following question: \begin{center} \textit{What determines the performance of structured pruning on generative language models?} \end{center} We aim to fill this gap by conducting a systematic study of structured \emph{fine-pruning} (pruning while finetuning) methods for decoder-only LLMs on NLG tasks\footnote{All code publicly available at \url{https://github.com/santacml/nn_pruning_uniqueness}}, and further proposing a novel method that combines the strengths of different existing methods. Our main contributions are:\vspace{-3pt} \begin{itemize} \item To our knowledge, we perform the \textbf{first systematic evaluation} of several structured pruning methods on decoder-only LLMs on NLG tasks. However, we find that these established methods only achieve marginal improvements over a simple baseline in which we \textbf{randomly prune} neurons during the finetuning process, which is vastly different from the case of pruning BERT-like models. \item Along with our evaluation of the original form of these pruning methods, we also systematically evaluate the impact of knowledge distillation on their performance. We find that \textbf{distillation largely closes the gaps between different methods}, further narrowing the advantage of the best methods over random pruning. \item We propose an empirical analysis framework for structured pruning that relies on \textbf{two fundamental measures of redundancy: sensitivity and uniqueness}. Sensitivity reflects how much the removal of a network component affects the output of the model, while uniqueness reflects how much the information provided by a network component differs from that of others. The predictions given by our framework match well with empirical observations and help reveal the latent properties that determine the performance of different pruning methods. \item To show the impact made possible by our analysis, we propose a proof-of-concept method, \textbf{Globally Unique Movement} (GUM), that aims to maximize both sensitivity and uniqueness by pruning network components based on their global movement and local uniqueness scores. GUM outperforms the existing methods on several NLG tasks and achieves competitive compression rates, showing that future methods should preserve both sensitivity and uniqueness. \end{itemize} \section{Background \& Methods} There are many general-domain pruning methods. We focus on \textit{fine-pruning}, a relevant technique for language models which performs automated gradual pruning \citep{https://doi.org/10.48550/arxiv.1710.01878} while fine-tuning. We target the MLPs of generative models. 
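For concreteness, the following is a minimal PyTorch sketch of the object being pruned: a GPT-style MLP block with a binary per-neuron mask on the intermediate dimension. This is our illustration, not the exact implementation; the notation is formalized in the Notation paragraph below.
\begin{verbatim}
import torch
import torch.nn as nn

class MaskedMLP(nn.Module):
    """GPT-style MLP block with a per-neuron mask (illustrative)."""
    def __init__(self, d: int, m: int):
        super().__init__()
        self.ln = nn.LayerNorm(d)
        self.w1 = nn.Linear(d, m)              # W_1 (rows) and bias b
        self.w2 = nn.Linear(m, d, bias=False)  # W_2 (columns)
        self.act = nn.GELU()
        # mask in {0,1}^m: zeroing entry i removes neuron i, i.e. row i
        # of W_1, bias entry b_i, and column i of W_2 all at once
        self.register_buffer("mask", torch.ones(m))

    def forward(self, x):
        h = self.act(self.w1(self.ln(x))) * self.mask  # masked h(x)
        return x + self.w2(h)                          # residual output
\end{verbatim}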
At inference time, generative models can cache attention states. Therefore, especially in large models, MLPs account for more of the generation time than attention for new tokens. MLPs also seem to store factual knowledge \citep{https://doi.org/10.48550/arxiv.1909.01066, https://doi.org/10.48550/arxiv.2202.05262}, making their reduction possibly challenging.\vspace{-3pt} \paragraph{Notation and Background} \label{sec:notation} We first define some notation for the MLP layers. Let $\sigma(\cdot): \mathbb{R}^{m}\rightarrow \mathbb{R}^m$ be an element-wise activation function (e.g. GELU), and let $W_1 \in \mathbb{R}^{m\times d}, W_2 \in \mathbb{R}^{d\times m}$ be two weight matrices and $b \in \mathbb{R}^m$ be the bias vector. For an input token $x\in\mathbb{R}^d$, the MLP layer output of $x$ is expressed as $\mathrm{MLP}(x) = x + W_2h(x)$ with intermediate output $h(x) = \sigma(W_1\mathsf{LN}(x))$, where $\mathsf{LN}$ represents layer normalization. We use $\odot$ to denote element-wise multiplication. Lastly, we use $\mathcal{L}$ to denote the loss of the task. We study methods of reducing $m$, the intermediate dimension, which is usually set to $m=4d$. \paragraph{Movement Pruning} \label{apx:mvmt} Movement Pruning \citep{https://doi.org/10.48550/arxiv.2005.07683} is a popular fine-pruning method. In this paper, we focus on the \textit{block} version of movement pruning \citep{https://doi.org/10.48550/arxiv.2109.04838}, and we first introduce the original unstructured method. Let $\mathcal{L}(W)$ be the task loss with weight parameters $W$. For each weight parameter $W_{i,j}$, we compute an accumulated score $S_{i,j}^{(T)}$ at iteration $T$ by the following expression\footnote{Gradients are calculated straight-through to the mask scores, otherwise they are undefined \citep{https://doi.org/10.48550/arxiv.1308.3432}.}: \begin{equation} \label{eq:sensitivity} S_{i,j}^{(T)} = - \eta_S\sum_{t\leq T}W_{i,j}^{(t)}\cdot\frac{\partial \mathcal{L}(W^{(t)})}{\partial W_{i,j}} \end{equation} Afterwards, the scores are used to compute a mask $M$ with entries $M_{i,j} \in \{0, 1\}$, which is applied as $W' = M \odot W$ (and likewise $b' = M \odot b$ for the bias and analogously for the output weights) to remove the masked weights. There are two ways to compute the mask $M$: $M = \text{Top}_v(S)$ for \textbf{hard movement} and $M = 1_{\mathrm{sigmoid}(S) > \tau}$ for \textbf{soft movement}. $\tau$ and $v$ are both hyperparameters, and $\text{Top}_v(S)$ is defined as: \begin{equation} \text{Top}_v(S)_i = \begin{cases} 1, & \text{ $S_i$ in top $v\%$, } \\ 0, & \text{otherwise.} \end{cases} \end{equation} Additionally, mask scores are regularized via a regularization term with multiplier $\lambda_{\mathrm{mvp}}$ of the form $ R(S) = \lambda_{\mathrm{mvp}}\sum_{i,j}\mathrm{sigmoid}(S_{i,j})$. Hard movement prunes all layers by the same amount. Soft movement, however, allows for adaptive sparsity across layers, which is known to be crucial for high-sparsity regimes~\citep{https://doi.org/10.48550/arxiv.1802.03494, https://doi.org/10.48550/arxiv.1801.06519}, and is seen to be the superior method for NLU tasks in \cite{https://doi.org/10.48550/arxiv.2005.07683}. Block pruning expands on this method to operate on groups of weights by combining mask scores per block, allowing for structured pruning \citep{https://doi.org/10.48550/arxiv.2109.04838}. \vspace{-3pt} \paragraph{Magnitude Pruning} \label{background:magnitude} We use a mask version of block magnitude pruning (a block extension of group-lasso, like \cite{shen2021prune}) as the baseline. 
For each set $G$ of parameters, we assign $S_G = (\sum_{(i,j)\in G}|W_{i,j}|^2)^{1/2}$ as the mask score and gradually prune groups with the smallest scores. \vspace{-3pt} \paragraph{Gradual Random Pruning} Random pruning approaches have been explored previously \citep{https://doi.org/10.48550/arxiv.1711.05908, https://doi.org/10.48550/arxiv.2003.03033}, and in particular gradual, random pruning \citep{https://doi.org/10.48550/arxiv.1902.09574} has been found to perform relatively well. We further explore random pruning in conjunction with distillation. Our gradual random method freezes $S$ at random initialization for the duration of finetuning and prunes using $\text{Top}_v(S)$. \vspace{-3pt} \paragraph{Knowledge Distillation} In practice, pruning is often paired with knowledge distillation \citep{https://doi.org/10.48550/arxiv.1503.02531} to boost performance, such as in \cite{https://doi.org/10.48550/arxiv.2005.07683}. The distillation loss adds a KL divergence between the outputs of a teacher model and the smaller student model. When used, we distill from a finetuned, full-parameter version of the model being pruned. \begin{table}[t!]\label{tbl:methods} \small \begin{tabular}{c|ccccc} \specialrule{.7pt}{1pt}{1pt} & Magnitude & Gradual Random & Hard Mvmt. & Soft Mvmt. & GUM \\ \specialrule{.7pt}{1pt}{1pt} Score $S$ & $L_2$-norms & Random (frozen) & Eq. \ref{eq:sensitivity} & Eq. \ref{eq:sensitivity} & Eq. \ref{eq:sensitivity} \\ Selection & $\text{Top}_v(S)$ & $\text{Top}_v(S)$ & $\text{Top}_v(S)$ & Threshold & $\text{Top}_v(S)$ \\ Structure & Local & Local & Local & Global & Global \\ Regularization & None & None & $R(S)$ & $R(S)$ & $ R(S)+R_{\mathrm{sim}}(S)$ (Eq.~\ref{eq:unique-regularization})\\ Criteria & Magnitude & Random & Sensitivity & Sensitivity & $\begin{matrix} \textrm{Sensitivity}\\ \textrm{\& Uniqueness} \end{matrix}$\\ \specialrule{.7pt}{0pt}{0pt} \end{tabular} \caption{\small Comparison of pruning methods used. $R(S)$ is defined in Section~\ref{sec:notation} and $R_{\mathrm{sim}}(S)$ is described in Eq.~\ref{eq:unique-regularization}.} \end{table} \section{Evaluating Fine-Pruning for Generative Language Models} We present our framework for understanding the redundancy of pruning methods. In this work, we focus on improving the seminal work of movement pruning proposed in \cite{https://doi.org/10.48550/arxiv.2005.07683}. However, na\"ively applying movement often results in marginal or worse performance compared to random pruning. We dissect our results using a systematic framework and analyze their behaviors and properties. \subsection{Observations of Previous Pruning Methods} \paragraph{Soft Movement (BERT's Best Method) Struggles for GPT-like Models} It is shown in \cite{https://doi.org/10.48550/arxiv.2109.04838} that soft movement enjoys better performance than hard movement when block-pruning encoder-decoder models. However, we find the method severely struggles when using the original implementation\footnote{Soft movement is the best-performing method to our knowledge at the time of writing. Code is available at \url{ https://github.com/huggingface/nn_pruning}. In this code, mask scores are added directly to the optimizer and are affected by the optimizer algorithm and other hyperparameters. We use this code for a fair comparison between architectures, but manually updating the mask according to the definition might help \citep{https://doi.org/10.48550/arxiv.2206.12562}. } on GPT-like models due to highly sensitive hyperparameters. 
For instance, the mask regularization parameter $\lambda_{\mathrm{mvp}}$ can either be too large and prune too aggressively, or too small, resulting in under-pruning, as shown below. Even after grid-searching $\lambda_{\mathrm{mvp}}$, we still find subpar performance; given the extremely high runtimes for this method as listed in Appendix \ref{apx:runtime}, we find this method impractical to use. \paragraph{Random Pruning Works Surprisingly Well} One might expect movement pruning to easily beat random pruning; however, we find that their performances differ only slightly or sometimes match, especially under distillation. Other works have noted random pruning's effectiveness \citep{https://doi.org/10.48550/arxiv.1902.09574}, but we find the difference in generative tasks to be particularly slim. As shown in Tables~\ref{tbl:wikisql-neo} and~\ref{tbl:wikitext-neo}, random pruning performs very closely to both hard and soft movement pruning on the WikiSQL and Wikitext datasets. Moreover, when combined with distillation, the gaps between random pruning and the other methods are largely closed, which is another intriguing observation in itself, as we discuss below. \paragraph{Distillation Closes the Gaps Between Different Methods} As shown in Table~\ref{tbl:wikisql-neo} and Table~\ref{tbl:wikisql-gpt2}, methods with very different performances perform rather similarly when distilled from a non-pruned, finetuned model. Indeed, both the WikiSQL and Wikitext experiments in Table~\ref{tbl:wikitext-neo} and Table~\ref{tbl:wikitext-gpt2} show that when the network has fewer leftover neurons (e.g., 10\% or 25\%), the differences in accuracy or perplexity often fall below half of the differences without distillation. This observation remains consistent across models of different sizes, architectures, and tasks. Results for GPT-Neo with $1.3$ billion parameters in Table~\ref{tbl:wikisql-1.3b} show that pruning a larger model can still benefit from distillation. Knowledge distillation often boosts the performance of weaker methods even more, which might suggest that the differences between methods are largely due to an inability to learn a more diverse feature set during fine-pruning, as suggested by the theoretical results on distillation in \cite{allen2020towards}. 
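To make the mechanics compared above concrete, the following is a minimal PyTorch sketch of the three ingredients: movement-score accumulation (Eq.~(\ref{eq:sensitivity})), $\text{Top}_v$ mask selection, and the distillation loss. It is illustrative only; the names and hyperparameters are ours and not those of the nn\_pruning codebase.
\begin{verbatim}
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, temp=2.0):
    # KL divergence between a finetuned teacher and the pruned
    # student, added to the task loss during fine-pruning
    return F.kl_div(F.log_softmax(student_logits / temp, dim=-1),
                    F.softmax(teacher_logits / temp, dim=-1),
                    reduction="batchmean") * temp ** 2

def accumulate_scores(scores, named_params, eta_s=1e-2):
    # movement update S -= eta_S * W * dL/dW, run after backward()
    with torch.no_grad():
        for name, w in named_params:
            if name in scores and w.grad is not None:
                scores[name] -= eta_s * w * w.grad

def topv_mask(score, v):
    # keep the top v% of scores (hard movement); gradual random
    # pruning applies the same selection to scores frozen at
    # random initialization
    k = max(1, int(score.numel() * v / 100.0))
    threshold = score.flatten().topk(k).values.min()
    return (score >= threshold).to(score.dtype)
\end{verbatim}
\begin{table}[t!] 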
\begin{center} \begin{tabular}{ c|cccc} \specialrule{1.5pt}{1pt}{1pt} Model & Method & 50\% / + Distil & 25\% / + Distil & 10\% / + Distil \\ \specialrule{.5pt}{1pt}{1pt} \multirow{5}{*}{GPT-Neo-125m } & Soft & 63.81 / 65.25 & 63.23 / 65.04 & 62.996$^*$ / 64.95$^*$ \\ & $L_2$-Magnitude & 62.71 / 65.32 & 61.35 / 64.58 & 61.10 / 63.90 \\ & Gradual Random & 64.29 / 65.74 & 63.06 / 65.10 & 62.27 / 64.63 \\ Finetuned: \textbf{65.92} & Hard/$\text{Top}_v$ & 65.33 / 65.94 & 64.88 / 65.79 & 64.23 / 65.17 \\ & GUM & 63.88 / \textbf{66.23} & 64.36 / \textbf{66.18} & 63.81 / \textbf{65.65} \\ \specialrule{1.5pt}{1pt}{1pt} \end{tabular} \end{center} \caption{\textbf{GPT-Neo-125m:} Performance in $\text{Acc}_{lf}$ on the validation set for decreasing amounts of leftover neurons on WikiSQL. GUM outperforms the other methods, soft movement struggles to match the other methods, and gradual random performs nearly as well as $\text{Top}_v$. * indicates having 1-3\% excess leftover neurons unpruned.}\label{tbl:wikisql-neo} \end{table} \begin{table}[t!] \begin{center} \begin{tabular}{ c|cccc} \specialrule{1.5pt}{1pt}{1pt} Model & Method & 50\% / + Distil & 25\% / + Distil & 10\% / + Distil \\ \specialrule{.5pt}{1pt}{2pt} \multirow{5}{*}{GPT-2-sm }& Soft & 68.27 / 69.32 & 67.74 / 69.314 & 67.25$^*$ / 69.11$^*$ \\ & $L_2$-Magnitude & 68.33 / 69.62 & 66.92 / 68.96 & 66.02 / 68.43 \\ & Gradual Random & 69.07 / 69.61 & 67.78 / 69.35 & 66.77 / 69.00 \\ Finetuned: \textbf{70.32} & Hard/$\text{Top}_v$ & 69.62 / 69.93 & 69.10 / 69.33 & 68.30 / 69.26 \\ & GUM & 68.62 / \textbf{70.38} & 68.82 / \textbf{69.63} & 68.07 / \textbf{69.46} \\ \specialrule{1.5pt}{1pt}{1pt} \end{tabular} \end{center} \caption{\textbf{GPT-2-sm:} Performance in $\text{Acc}_{lf}$ on the validation set for decreasing amounts of leftover neurons on WikiSQL. * indicates having 1-3\% excess leftover neurons unpruned.}\label{tbl:wikisql-gpt2} \end{table} \begin{figure}[b!] \centering \includegraphics[width=\textwidth]{neo.pdf}\vspace{-10pt} \caption{Sensitivity and Uniqueness measured on the training set for GPT-Neo-125m. The vertical axis is defined as the ratio of the corresponding metric between the pruned model and a baseline model (which is non-pruned and fully fine-tuned), capped at a maximum of 1x. These graphs allow us to analyze and compare the performance of different pruning methods. Details of the measurements are given in Appendix~\ref{apx:graphs}. } \label{fig:neo_uniq_sens} \end{figure} \subsection{Two Types of Redundancy Measures: Sensitivity and Uniqueness} \label{sec:saliency_uniqueness} In order to understand why these pruning methods display such behaviors, we devise a framework to characterize the leftover neurons of the pruned network based on two criteria: sensitivity and uniqueness\footnote{Both are known concepts in the literature, but they have not previously been combined into a single pruning method.}. 
We formalize these notions of redundancy as follows: \begin{definition}[Redundancy Criteria]\label{def:redundancy} Given a set of neurons $\{h_i(\cdot)\}_{i \in [m]}$ and input $X$, we call a neuron $h_i$ \textbf{redundant} if it meets at least one of the following two conditions: \vspace{-3pt} \begin{enumerate} \item \textbf{Sensitivity/Saliency}: the neuron is \textbf{not salient} if its outputs are negligible or have small gradients when optimizing for the downstream task, mathematically described as \begin{equation*} \mathbb{E}\Big[\Big|h_i(X)\cdot\frac{\partial \mathcal{L}}{ \partial h_i(X)}\Big|\Big] \approx 0 \end{equation*} \vspace{-10pt} \item \textbf{Uniqueness}: the neuron is \textbf{not unique} if its outputs can be reconstructed entirely as a linear combination of the outputs of the other neurons, mathematically described as \begin{equation*} h_i(X) \in \mathrm{span}(\{h_j(X)\}_{j\neq i}),\quad \text{over all inputs } X \end{equation*} \end{enumerate} \end{definition} Intuitively, a sensitive neuron has outputs that contribute greatly to the final output, while a unique neuron has outputs that differ from those of the others. These metrics are independent of one another, so a neuron could be highly salient but replaceable by other neurons, or it could be highly unique but ultimately contribute little to the network. Consider a toy example where two neurons $h_i$ and $h_j$ have the same non-zero weights and large gradients. Neuron $h_i$ could easily be removed by doubling the outputs of $h_j$, so neither is unique, but both are highly salient.
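For concreteness, both criteria of Definition~\ref{def:redundancy} can be estimated from a batch of recorded activations and their gradients. The following is a minimal PyTorch-style sketch under our notation; the function names and the least-squares proxy for the span condition are our own illustrative choices.
\begin{verbatim}
import torch

def sensitivity(h, grad):
    # Monte-Carlo estimate of E[|h_i(X) * dL/dh_i(X)|].
    # h, grad: (num_samples, m) activations and their gradients.
    return (h * grad).abs().mean(dim=0)            # shape (m,)

def uniqueness_residual(h, i):
    # Relative error of reconstructing neuron i from the others;
    # a value near 0 means h_i(X) ~ span({h_j(X)}_{j != i}),
    # i.e. the neuron is not unique.
    target = h[:, i]
    others = torch.cat([h[:, :i], h[:, i + 1:]], dim=1)
    coef = torch.linalg.lstsq(others, target.unsqueeze(1)).solution
    resid = target - (others @ coef).squeeze(1)
    return (resid.norm() / target.norm()).item()
\end{verbatim}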
\subsection{The General Trade-off Between Sensitivity and Uniqueness} Equipped with Definition~\ref{def:redundancy}, we find the following important trends. Figure~\ref{fig:neo_uniq_sens} and Figure~\ref{fig:gpt2_uniq_sens} show that, without distillation, different methods often preferentially focus on one of the two redundancy measures. We now comment on trends across all experimental result tables (Tables~\ref{tbl:wikisql-neo}, \ref{tbl:wikisql-gpt2}, \ref{tbl:wikitext-neo}, \ref{tbl:wikitext-gpt2}, \ref{tbl:wikisql-1.3b}, \ref{tbl:samsum-neo}, and \ref{tbl:samsum-gpt2}). Regardless of method, as more pruning occurs, sensitivity generally decreases and uniqueness generally increases. The best performing methods strike a balance between both measures, establishing a strong correlative link between these metrics and final pruning performance. Under distillation, however, sensitivity seemingly concentrates across methods, which likely corresponds to the closing of the performance gaps between methods discussed above. For individual methods, we can further characterize their redundancy properties as follows; these characterizations align well with the observed performances: \begin{itemize} \item \textit{Magnitude pruning} universally scores worst on both metrics, explaining its poorer performance in all experiments. However, with distillation, the sensitivity gaps between methods noticeably decrease, which partially explains why distillation improves it significantly. \item \textit{Random pruning} obtains sensitivity and uniqueness under distillation similar to, though slightly lower than, hard movement, lending credence to its overall high performance. However, its sensitivity is markedly lower without distillation, as reflected in all figures. \emph{This shows that hard movement does not target uniqueness}: random pruning clearly does not target uniqueness, yet it matches hard movement's uniqueness.
\item \textit{Soft movement pruning} also usually scores poorly on both metrics, and sometimes abysmally, as with its sensitivity in Figure~\ref{fig:neo_uniq_sens}, helping to explain its overall poor performance. \item \textit{Hard movement pruning} consistently obtains the highest sensitivity with not-far-behind uniqueness across different datasets and architectures. This correlates with its high performance when distillation is not used. However, when combined with distillation, the sensitivity gaps between methods converge, and the advantage of hard movement fades. \item \textit{GUM}, our proof-of-concept method, nearly always obtains the best uniqueness while maintaining decent sensitivity, further improved using distillation, explaining its superiority across various tasks. However, GUM shows a larger performance increase for GPT-Neo-125m than for GPT-2-sm on WikiSQL; this is explained in Figure \ref{fig:gpt2_uniq_sens}: pruned GPT-2 already has high baseline uniqueness for WikiSQL ($\sim$2x), so further increases incur diminishing returns. \end{itemize} Given the training/validation split and general noise in the datasets, there are some outlier points, for instance, GUM's surprisingly poor distilled uniqueness for Wikitext in Figure~\ref{fig:neo_uniq_sens}. We observe higher absolute uniqueness on Wikitext in general (around 95\% of neurons are unique per cosine similarity), meaning pruned uniqueness can vary over datasets; a sketch of how such a fraction can be measured is given below.
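The sketch below counts neurons whose maximum pairwise cosine similarity stays under a cutoff, which is one way a "fraction of unique neurons" like the 95\% figure above can be computed. The 0.9 threshold is an illustrative choice, not necessarily the exact value used to produce the figures.
\begin{verbatim}
import torch

def unique_fraction(h, threshold=0.9):
    # h: (num_samples, m) matrix of neuron activations.
    hn = h / h.norm(dim=0, keepdim=True).clamp_min(1e-8)
    sim = (hn.T @ hn).abs()            # (m, m) pairwise |cosine|
    sim.fill_diagonal_(0.0)            # ignore self-similarity
    # A neuron counts as "unique" if no other neuron is too similar.
    return (sim.max(dim=1).values < threshold).float().mean().item()
\end{verbatim}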
\section{Globally Unique Movement}\label{sec:GUM} After observing the lack of uniqueness amongst leftover neurons, we set out to improve the performance of hard movement. We introduce two techniques which together comprise Globally Unique Movement (GUM). In essence, we encourage a score-weighted uniqueness term by multiplying the score regularizer and the cosine similarity together, obtaining a balance of uniqueness and sensitivity. \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{gpt2.pdf}\vspace{-10pt} \caption{Sensitivity and uniqueness measured on the training set for GPT-2-sm. The vertical axis is the ratio of the corresponding metric between the pruned model and a baseline model (non-pruned and fully fine-tuned), capped at 1$\times$. Details of the measurements are given in Appendix~\ref{apx:graphs}.} \label{fig:gpt2_uniq_sens} \end{figure} \paragraph{Tackling Non-Unique Redundancy} \label{sec:uniqreg} Regularizing or pruning via similarity is a well-explored topic \citep{8603826, ijcai2018-453, https://doi.org/10.48550/arxiv.1507.06149, 9145735}. However, our approach integrates more cleanly with movement pruning, insulating weights from regularization, with only a small increase in training time as listed in Appendix \ref{apx:runtime}. Our approach regularizes mask scores based on cosine similarity\footnote{Solving for linear combinations of neurons during training is prohibitively expensive, so we consider cosine similarity as a ``first-order'' proxy.}.
Cosine similarity between the outputs of any two neurons given input $X$ (for example, a batch of tokens) is defined simply as \begin{equation} \label{eq:cosine_sim} \mathrm{sim}(h_i(X),h_j(X)) = \frac{h_i(X)^\top h_j(X)}{\|h_i(X)\|_2\,\|h_j(X)\|_2}.\end{equation} However, calculating similarity with only intra-batch estimates is noisy and unreliable, so we introduce a running version of these estimates in Algorithm \ref{alg: cosine} to obtain the cosine similarity between neurons $h_i(\cdot)$ and $h_j(\cdot)$. Now, we build on the regularization term $R(S)$ of movement pruning to define a new regularizer $R_{\mathrm{sim}}(S)$: let $N_{\mathrm{left}}$ be the number of leftover neurons; for each group $j \in [m]$ and its corresponding score $S_j$, we define a term $U_j = \frac{1}{N_{\mathrm{left}}}\sum_{i\in [m],i\neq j}\mathbf{sim}(h_j,h_i)$, and then multiply $U_j$ with the original terms in $R(S)$ to obtain \begin{equation}\label{eq:unique-regularization} R_{\mathrm{sim}}(S) = \lambda_{\mathrm{mvp}}\sum\nolimits_{j} U_j\cdot\mathrm{sigmoid}(S_j). \end{equation} \paragraph{Global $\text{Top}_v$ for Soft-Like Movement} Hard movement removes the same fraction of weights in each layer independently. Global $\text{Top}_v$ instead applies the $\text{Top}_v$ function to the set of all mask scores in the network jointly. Global $\text{Top}_v$ was originally explored for movement pruning \citep{https://doi.org/10.48550/arxiv.2005.07683} and was found to perform similarly to the local variant. We find that, when used in conjunction with uniqueness regularization, global outperforms local. Global $\text{Top}_v$ intuitively allows for more flexibility when pruning. When pruning locally, it is necessary to choose the least common pruning percentage: if one layer requires 50\% of its neurons to avoid severe degradation, all layers must keep 50\%. Global comparison removes this restriction, in a similar manner to soft movement\footnote{Appendix \ref{apx:global} shows an example pruning distribution for one pruned network.}. A combined sketch of the running similarity update, the regularizer, and the global selection is given below.
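The following PyTorch-style sketch maintains the running similarity estimates of Algorithm~\ref{alg: cosine}, evaluates the regularizer of Eq.~(\ref{eq:unique-regularization}), and performs a global $\text{Top}_v$ selection over all mask scores. It is a simplified illustration under our notation, not a drop-in reproduction of the training loop.
\begin{verbatim}
import torch

class RunningUniqueness:
    # EMA estimates of <h_i, h_j> and ||h_i||^2 (Algorithm 1).
    def __init__(self, m, lam_sim=0.01):
        self.lam = lam_sim
        self.C = torch.zeros(m, m)   # running inner products
        self.Q = torch.zeros(m)      # running squared norms

    @torch.no_grad()
    def update(self, H):             # H: (num_tokens, m) outputs
        self.C = (1 - self.lam) * self.C + self.lam * (H.T @ H)
        self.Q = (1 - self.lam) * self.Q + self.lam * (H * H).sum(0)

    def penalty(self, scores, n_left, lam_mvp):
        # U_j = (1/N_left) * sum_{i != j} sim(h_j, h_i); then R_sim(S).
        denom = torch.sqrt(self.Q[None, :] * self.Q[:, None])
        sim = self.C / denom.clamp_min(1e-8)
        sim = sim - torch.diag(torch.diag(sim))   # drop i = j terms
        U = sim.sum(dim=1) / n_left
        return lam_mvp * (U * torch.sigmoid(scores)).sum()

def global_topv_masks(score_list, keep_fraction):
    # Keep the top scores across *all* layers jointly (global Top_v).
    flat = torch.cat([s.flatten() for s in score_list])
    k = max(1, int(keep_fraction * flat.numel()))
    cutoff = torch.topk(flat, k).values.min()
    return [(s >= cutoff).float() for s in score_list]
\end{verbatim}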
\section{Results}\label{sec:results} In this section, we present results on three different kinds of generative language modeling tasks: language modeling with Wikitext-103 \cite{merity2016pointer}, text-to-text generation and natural language understanding with SAMsum \cite{Gliwa_2019}, and exact-match-measurable text-to-code generation with WikiSQL \cite{zhongSeq2SQL2017}. Details and hyperparameters are listed in Appendix~\ref{apx:hparams}. When distilling, the teacher model used is the finetuned version of the original-size model. To ensure trends hold when scaling up, we present one experiment with GPT-Neo-1.3b in Section \ref{sec:wikisql}. For all pruning amounts, we report the final percentage of neurons \textit{leftover}, i.e., at 75\% leftover, 75\% of neurons remain after pruning. For soft movement, the achieved pruning percentage is shown in parentheses when it differs substantially from the target. In general, GUM outperforms $\text{Top}_v$ by a margin similar to the difference between $\text{Top}_v$ and gradual random pruning, with some exceptions. While small, we argue this gap shows the effectiveness of preserving neuron uniqueness alongside saliency. \begin{algorithm}[!t] \caption{Running Cosine Similarity Update\label{alg: cosine}} \begin{algorithmic}[1] \Require a set of neurons $h_{i}(\cdot)$ for $i \in [m]$, inputs from a set $\mathcal{Z}$ (usually intermediate outputs of attention layers), an update multiplier $\lambda_{\mathrm{sim}}$; \State Initialize $\mathbf{sim}^{(0)}(h_i,h_j)= 0$ for $i,j \in [m]$, $C_{i,j}^{(0)} = 0$, and $Q_i^{(0)} = 0$ for $i \in [m]$; \While{training} \State Sample an input $X \in \mathcal{Z}$ and compute the output vector $h_j(X)$ of each neuron $j \in [m]$\footnote{If a batch of inputs is fed, we concatenate the outputs into one output vector.}. \State Update $C_{i,j}^{(t+1)} \gets (1 - \lambda_{\mathrm{sim}})C_{i,j}^{(t)} + \lambda_{\mathrm{sim}}h_i(X)^{\top} h_j(X),\quad \forall i,j \in [m]$ \State Update $Q_{i}^{(t+1)} \gets (1 - \lambda_{\mathrm{sim}}) Q_{i}^{(t)} + \lambda_{\mathrm{sim}} \|h_i(X)\|_2^2,\quad \forall i \in [m]$ \State Update similarity by: \[\textstyle\mathbf{sim}^{(t+1)}(h_i,h_j) \gets C_{i,j}^{(t+1)}/\sqrt{Q_i^{(t+1)}Q_j^{(t+1)}},\quad \forall i,j \in [m]\] \EndWhile \end{algorithmic} \end{algorithm} \begin{table}[h] \begin{tabular}{ c|cccc} \specialrule{1.5pt}{1pt}{2pt} Model & Method & 75\% / + Distil & 50\% / + Distil & 25\% / + Distil \\ \specialrule{.5pt}{1pt}{1pt} \multirow{5}{*}{GPT-Neo-125m } & Soft & 17.814 / 17.651 & 19.470 / 19.053 & 21.169 / 20.469$^*$\\ & $L_2$-Magnitude & 18.524 / 18.048 & 20.834 / 20.041 & 23.692 / 22.604 \\ & Gradual Random & 17.307 / 17.144 & 18.900 / 18.410 & 21.458 / 20.546 \\ Finetuned: \textbf{16.138} & Hard/$\text{Top}_v$ & 16.974 / 17.142 & 18.253 / 18.369 & 20.495 / 20.194 \\ & GUM & \textbf{16.822} / 17.158 & \textbf{17.881} / 18.314 & 20.059 / \textbf{19.833} \\ \specialrule{1.5pt}{1pt}{1pt} \end{tabular} \caption{\textbf{GPT-Neo-125m:} Performance in perplexity (PPL) on the validation set for decreasing leftover percentages on Wikitext-103. * indicates that 1--3\% excess neurons remain unpruned.}\label{tbl:wikitext-neo} \end{table} \paragraph{Wikitext-103}\label{sec:wikitext} Results on the Wikitext-103 dataset \cite{merity2016pointer}, one of the most popular datasets for causal language modeling, are shown in Tables \ref{tbl:wikitext-neo} and \ref{tbl:wikitext-gpt2}. Because performance on Wikitext-103 is measured in perplexity (PPL), it is a highly consistent and discriminative dataset to prune on. We are never able to fully recover the original model performance after pruning, suggesting that any compression increases uncertainty. Distillation generally hurts performance across all methods. \begin{table}[H] \begin{center} \begin{tabular}{ c|cccc} \specialrule{1.5pt}{1pt}{1pt} Model & Method & 75\% / + Distil & 50\% / + Distil & 25\% / + Distil \\ \specialrule{.5pt}{1pt}{1pt} \multirow{5}{*}{GPT-2-sm} & Soft & 16.754 / 16.950 & 18.261 / 18.051 & 19.948 / \textbf{19.473$^*$} \\ & $L_2$-Magnitude & 17.399 / 17.414 & 19.595 / 19.178 & 22.593 / 21.667 \\ & Gradual Random & 16.574 / 16.823 & 17.974 / 17.862 & 20.444 / 19.798 \\ Finetuned: \textbf{15.571} & Hard/$\text{Top}_v$ & 16.363 / 16.730 & 17.611 / 17.742 & 20.016 / \textbf{19.663} \\ & GUM & \textbf{16.242} / 16.680 & \textbf{17.444} / 17.692 & 19.877 / 19.681 \\ \specialrule{1.5pt}{1pt}{1pt} \end{tabular} \end{center} \caption{\textbf{GPT-2-sm:} Performance in perplexity (PPL) on the validation set for decreasing leftover percentages on Wikitext-103. * indicates that 1--3\% excess neurons remain unpruned.
\label{tbl:wikitext-gpt2}} \end{table} \paragraph{WikiSQL} \label{sec:wikisql} As opposed to the other datasets, WikiSQL \cite{zhongSeq2SQL2017} contains hard ground-truth labels that allow comparison via Exact Match (EM). Accordingly, our best performance is achieved on WikiSQL, where GUM is able to remove up to 75\% of neurons while maintaining performance on GPT-Neo. Results are shown in Tables \ref{tbl:wikisql-neo} and \ref{tbl:wikisql-gpt2}. We also present results for GPT-Neo-1.3b, on WikiSQL only, in Table \ref{tbl:wikisql-1.3b}; the results of this experiment follow a trend similar to that of the smaller models. \begin{table}[H] \begin{center} \begin{tabular}{ c|cccc} \specialrule{1.5pt}{1pt}{1pt} Model & Method & 50\% / + Distil & 25\% / + Distil & 10\% / + Distil \\ \specialrule{.5pt}{1pt}{2pt} \multirow{1}{*}{GPT-Neo-1.3B } & Gradual Random & 72.18 / \textbf{74.76} & 70.38 / 73.83 & 68.56 / 72.77 \\ Finetuned: \textbf{74.88} & Hard/$\text{Top}_v$ & 73.33 / 74.75 & 72.18 / 74.54 & 71.14 / 73.77 \\ & GUM & 72.88 / 74.70 & 71.80 / \textbf{74.62} & 71.35 / \textbf{74.157} \\ \specialrule{1.5pt}{1pt}{1pt} \end{tabular} \end{center} \caption{\textbf{GPT-Neo-1.3B:} Performance in $\text{Acc}_{lf}$ for decreasing leftover percentages on WikiSQL. \label{tbl:wikisql-1.3b}} \end{table} \paragraph{SAMsum}\label{sec:samsum} Results on SAMsum \cite{Gliwa_2019} are presented in Tables \ref{tbl:samsum-neo} and \ref{tbl:samsum-gpt2}. Popular for encoder-decoder models, this dataset entails summarizing short instant-message conversations. Larger generative models have also been explored for this task \citep{https://doi.org/10.48550/arxiv.2105.12544, https://doi.org/10.48550/arxiv.1911.00536}, achieving competitive results. Here we observe that GUM generally outperforms $\text{Top}_v$, following the trends of the other datasets. \begin{table}[h] \small \begin{center} \begin{tabular}{ cc|cccc} \specialrule{1.5pt}{1pt}{1pt} Method & Leftover \% & Rouge\_1 & Rouge\_2 & Rouge\_L & Rouge\_LSUM \\ \specialrule{.5pt}{1pt}{1pt} No Prune & & 38.68 & 14.74 & 31.73 & 31.76 \\ \specialrule{.5pt}{1pt}{1pt} \multirow{3}{*}{Gradual Random} & 50\% / + Distil & 35.54 / 36.82 & 12.71 / 13.40 & 29.11 / 30.28 & 29.04 / 30.27 \\ & 25\% / + Distil & 33.11 / 35.71 & 11.01 / 13.13 & 27.39 / 29.50 & 27.37 / 29.48 \\ & 10\% / + Distil & 31.83 / 34.60 & 10.02 / 11.72 & 26.49 / 28.40 & 26.46 / 28.34 \\ \specialrule{.5pt}{1pt}{1pt} \multirow{3}{*}{Hard/$\text{Top}_v$} & 50\% / + Distil & 37.68 / 36.94 & 14.17 / 13.72 & 31.12 / 30.64 & 31.09 / 30.62 \\ & 25\% / + Distil & 36.38 / 37.34 & 13.00 / \textbf{14.24} & 29.96 / \textbf{31.17} & 29.95 / \textbf{31.15} \\ & 10\% / + Distil & 33.07 / 36.12 & 10.95 / 12.62 & 27.70 / 29.70 & 27.68 / 29.67 \\ \specialrule{.5pt}{1pt}{1pt} \multirow{3}{*}{GUM} & 50\% / + Distil & 37.22 / \textbf{38.45} & 13.79 / \textbf{14.27} & 30.72 / \textbf{31.35} & 30.72 / \textbf{31.36} \\ & 25\% / + Distil & 36.18 / \textbf{37.57} & 13.18 / 13.71 & 29.99 / 30.91 & 30.00 / 30.93 \\ & 10\% / + Distil & 34.72 / \textbf{36.52} & 11.88 / \textbf{13.40} & 28.82 / \textbf{29.97} & 28.79 / \textbf{29.96} \\ \specialrule{1.5pt}{1pt}{1pt} \end{tabular} \end{center} \caption{\textbf{GPT-Neo-125m:} Validation results on SAMsum. Higher is better for all metrics. In general, more pruning hurts performance, and GUM outperforms $\text{Top}_v$.
\label{tbl:samsum-neo}} \end{table} \begin{table}[h] \small \begin{center} \begin{tabular}{ cc|cccc} \specialrule{1.5pt}{1pt}{1pt} Method & Leftover \% & Rouge\_1 & Rouge\_2 & Rouge\_L & Rouge\_LSUM \\ \specialrule{.5pt}{1pt}{1pt} No Prune & & 40.83 & 16.85 & 33.72 & 33.70 \\ \specialrule{.5pt}{1pt}{1pt} \multirow{3}{*}{Gradual Random} & 50\% / + Distil & 39.29 / 39.69 & 15.25 / 15.74 & 32.16 / 32.72 & 32.15 / 32.72 \\ & 25\% / + Distil & 38.09 / 39.73 & 14.43 / 15.35 & 31.16 / 32.72 & 31.15 / 32.66 \\ & 10\% / + Distil & 36.91 / 38.88 & 13.02 / 14.71 & 30.13 / 31.85 & 30.13 / 31.83 \\ \specialrule{.5pt}{1pt}{1pt} \multirow{3}{*}{Hard/$\text{Top}_v$} & 50\% / + Distil & 39.93 / 40.23 & 16.43 / 16.07 & 33.40 / 33.39 & 33.38 / 33.39 \\ & 25\% / + Distil & 38.84 / \textbf{40.49} & 14.88 / \textbf{16.12} & 31.83 / \textbf{33.46} & 31.82 / \textbf{33.45} \\ & 10\% / + Distil & 38.28 / 39.49 & 14.58 / 15.32 & 31.36 / 32.57 & 31.37 / 32.57 \\ \specialrule{.5pt}{1pt}{1pt} \multirow{3}{*}{GUM} & 50\% / + Distil & 38.80 / \textbf{40.74} & 15.52 / \textbf{16.22} & 32.27 / \textbf{33.99} & 32.23 / \textbf{33.95} \\ & 25\% / + Distil & 37.56 / 39.74 & 14.24 / 15.72 & 30.94 / 32.93 & 30.91 / 32.93 \\ & 10\% / + Distil & 39.12 / \textbf{39.80} & 15.01 / \textbf{15.79} & 32.20 / \textbf{32.96} & 32.21 / \textbf{32.95} \\ \specialrule{1.5pt}{1pt}{1pt} \end{tabular} \end{center} \caption{\textbf{GPT-2-sm:} Validation results on SAMsum. Higher is better for all metrics. In general, more pruning hurts performance, and GUM outperforms $\text{Top}_v$. \label{tbl:samsum-gpt2}} \end{table} \vspace{-5pt} \section{Additional Related Works} \vspace{-5pt} \paragraph{General Domain Pruning} Neural network pruning was proposed years before the explosion of deep learning research \citep{PhysRevA.39.6600, NIPS1988_07e1cd7d, 80236} and is summarized in an early survey \cite{reed1993pruning}. Subsequent works have explored many approaches to pruning neural nets \citep{https://doi.org/10.48550/arxiv.1608.03665, han2015learning, han2015deep, li2016pruning}. Recently, the \textit{lottery ticket hypothesis} \cite{https://doi.org/10.48550/arxiv.1803.03635} proposed a new direction: pruning at initialization instead.
However, there is considerable divergence among such methods, as well as claims that pruning at initialization is not worthwhile \cite{https://doi.org/10.48550/arxiv.1810.05270,https://doi.org/10.48550/arxiv.2003.03033}. Regardless, many strong techniques exist in modern incarnations across all kinds of architectures \citep{https://doi.org/10.48550/arxiv.1611.05128, Luo_2017_ICCV, He_2018_ECCV}. \vspace{-5pt} \paragraph{Compressing Language Models} Compressing LLMs in particular has spawned its own kind of pruning. Given that LLMs first undergo pre-training on massive amounts of data, works such as \cite{https://doi.org/10.48550/arxiv.2104.08682, https://doi.org/10.48550/arxiv.2111.05754, https://doi.org/10.48550/arxiv.2205.11005} find ways to prune and finetune these models on downstream data. Building on automated gradual pruning \citep{https://doi.org/10.48550/arxiv.1710.01878} and learned threshold pruning \citep{https://doi.org/10.48550/arxiv.2003.00075}, movement pruning \citep{https://doi.org/10.48550/arxiv.2005.07683} and subsequently block movement pruning \citep{https://doi.org/10.48550/arxiv.2109.04838} have become highly popular methods for \textit{fine-pruning}, combining finetuning with pruning for overall best performance. Since then, many works have attempted to improve on movement pruning \citep{https://doi.org/10.48550/arxiv.2105.14636, https://doi.org/10.48550/arxiv.2206.12562, https://doi.org/10.48550/arxiv.2204.00408, https://doi.org/10.48550/arxiv.2204.09656}. \vspace{-5pt} \section{Conclusion} \vspace{-5pt} In this paper, we have performed an evaluation of structured pruning on generative language models, finding that existing methods improve over random pruning less than expected. In addition, we have proposed a framework for analyzing pruning methods in terms of uniqueness and saliency, two important criteria for preserving model quality and diversity. Based on these metrics, we have presented GUM, a novel method for structured pruning of generative models that combines uniqueness regularization with global $\text{Top}_v$ pruning. We also acknowledge the limitations of our method, which can reduce saliency and performance, and suggest possible directions for improving uniqueness pruning. We believe our work can serve as an initial step towards understanding and improving structured pruning of generative models.
\section{Introduction} Understanding the origin of life on Earth is one of the most challenging problems of astrophysics in the framework of astrobiology. To shed light on this complex topic, a comprehensive study of the chemical complexity of the interstellar medium (ISM), which feeds the formation of stars and planets, is absolutely needed. In this context, the chemical family of nitriles can give important clues. Nitriles, organic compounds with a $-$C$\equiv$N functional group, play a crucial role in prebiotic chemistry since they are key intermediates in the formation of amino acids, peptides, nucleic acids and nucleobases (e.g. \citealt{balucani2009}). Adenine (H$_5$C$_5$N$_5$), one of the nucleobases of DNA and RNA, is a basic ingredient in the RNA-world scenario for the origin of life on Earth (e.g. \citealt{kaiser&balucani2001}, \citealt{ehrenfreund2001}, \citealt{bernstein2004}). \citet{oro1961} reported the synthesis of adenine from a solution of HCN and NH$_3$ under conditions similar to those thought to have existed on the primitive Earth. \citet{chakrabarti2000} proposed that adenine might be formed during the chemical evolution of a star-forming molecular cloud through the oligomerization of HCN in the gas phase in four steps: \begin{equation*} \rm HCN \rightarrow H_2C_2N_2 \rightarrow NH_2CH(CN)_2 \rightarrow \end{equation*} \begin{equation*} \rm \rightarrow NH_2(CN)C = C(CN)NH_2 \rightarrow H_5C_5N_5 \end{equation*} However, by performing theoretical calculations, \citet{smith2001} and \citet{yim2012} showed that the first step, i.e. the formation of an HCN dimer from two HCN molecules, does not occur efficiently under ISM conditions. Therefore, the question of how HCN dimers can be formed remains open. Since they are precursors of adenine, understanding their formation mechanisms is of crucial importance from an astrobiological point of view. The most stable dimer of HCN is C-cyanomethanimine (HNCHCN hereafter), which has two different isomers: the Z-isomer and the E-isomer (\citealt{clemmons1983}). These species are stereoisomers about the N$=$C double bond (see e.g. Fig. 1 of \citealt{takano1990}), and the conversion from the Z- to the E-isomer requires an energy of 15.95 kK. The laboratory experiments by \citet{takano1990} and the chemical calculations by \citet{zaleski2013} indicate that the Z-isomer is more stable and lower in energy than the E-isomer, with an estimated energy difference in the range 237$-$382 K. Nevertheless, only the higher-energy E-isomer has been detected in the ISM so far. Several lines were reported in absorption against the bright continuum of the cluster of hot cores located in the SgrB2N complex (\citealt{zaleski2013}), while the Z-conformer remained elusive. Recent searches for the Z-conformer in a sample of low-mass star-forming regions have also been unsuccessful (\citealt{melosso2018}). In this work, we report the results of an interstellar search for the Z-conformer of HNCHCN (Z-HNCHCN hereafter). Using an IRAM 30m spectral survey, we searched for this species in the G+0.693-0.027 giant molecular cloud (G+0.693 hereafter) in the Galactic Center.
This region exhibits high gas kinetic temperatures ranging from $\sim$50$\,$K to $\sim$150$\,$K (\citealt{Guesten1985,huettemeister_kinetic_1993,Rodriguez-Fernandez2001,ginsburg_dense_2016,Krieger2017,zeng2018}), low dust temperatures of $\leq$30$\,$K (\citealt{Rodriguez-Fernandez2004}), and relatively low H$_2$ gas densities ($\sim$10$^4$ cm$^{-3}$; \citealt{rodriguez-fernandez2000}). Due to the low H$_2$ densities, the molecules are sub-thermally excited and hence the excitation temperatures are low ($\sim$5$-$20 K; \citealt{requena-torres_organic_2006,martin_tracing_2008,rivilla2018,zeng2018}). G+0.693 is one of the most chemically rich reservoirs in our Galaxy. Many molecular species have been identified in this cloud, including some of prebiotic relevance such as complex organic molecules (COMs; \citealt{requena-torres_organic_2006,requena-torres_largest_2008,zeng2018}) and phosphorus-bearing species (\citealt{rivilla2018}). In particular, numerous nitriles have already been detected in the source, such as C$_3$N, HC$_3$N, HC$_5$N, HC$_7$N, CH$_2$CN, CH$_3$CN, CH$_3$C$_3$N, NH$_2$CN and HOCN (\citealt{zeng2018}). This rich nitrile chemistry makes this source an excellent target to search for more complex nitriles, and in particular Z-HNCHCN. \begin{table} \centering \tabcolsep 2.0pt \caption{Transitions of HNCHCN detected towards G+0.693.} \begin{tabular}{c c c c c c} \hline Frequency & Transition & $\log A_{\rm ul}$ & E$_{\rm up}$ & $\int T_{\rm A}^{*}\,{\rm d}v$ & Detection \\ (GHz) & & (s$^{-1}$) & (K) & (K km s$^{-1}$) & level ($\sigma$) \\ \hline \multicolumn{6}{c}{Z-isomer} \\ \hline 85.283 & 9(1,9)$-$8(1,8) & -5.2107 & 23 & 0.68$\pm$0.25 & 13.8 \\ 86.996 & 9(0,9)$-$8(0,8) & -5.1795 & 21 & 0.91$\pm$0.30 & 15.9 \\ 89.247 & 9(1,8)$-$8(1,7) & -5.1515 & 24 & 0.63$\pm$0.24 & 7.7 \\ 94.735 & 10(1,10)$-$9(1,9) & -5.0705 & 27 & 0.47$\pm$0.21 & 13.7 \\ 96.569 & 10(0,10)$-$9(0,9) & -5.0413 & 26 & 0.62$\pm$0.25 & 7.1 \\ 109.017 & 11(1,10)$-$10(1,9) & -4.8849 & 34 & 0.27$\pm$0.16 & 4.8 \\ \hline \multicolumn{6}{c}{E-isomer} \\ \hline 84.425 & 9(1,9)$-$8(1,8) & -4.4608 & 23 & 0.71$\pm$0.05 & 13.9 \\ 93.791 & 10(1,10)$-$9(1,9) & -4.3204 & 28 & 0.51$\pm$0.04 & 11.3 \\ 95.422 & 10(0,10)$-$9(0,9) & -4.2937 & 25 & 0.71$\pm$0.05 & 17.9 \\ 97.501 & 10(1,9)$-$9(1,8) & -4.2698 & 29 & 0.47$\pm$0.04 & 8.4 \\ 104.895 & 11(0,11)$-$10(0,10) & -4.1684 & 30 & 0.46$\pm$0.03 & 6.9 \\ \hline \end{tabular} \label{tab:transitions} \end{table} \section{Observations} We used in this work a spectral line survey towards the quiescent molecular cloud G+0.693 carried out with the IRAM 30m telescope at Pico Veleta\footnote{This paper makes use of observations obtained with the IRAM-30m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain).} (Spain) and the NRAO\footnote{The National Radio Astronomy Observatories is a facility of the National Science Foundation, operated under a cooperative agreement by Associated Universities, Inc.} 100$\,$m Robert C. Byrd Green Bank telescope (GBT) in West Virginia (USA), covering frequencies from 12 to 272 GHz. The observations were centered at the coordinates $\alpha$(J2000.0)= 17$^h$ 47$^m$ 22$^s$ and $\delta$(J2000.0)= $-$28$^{\circ}$ 21$^{\prime}$ 27$^{\prime\prime}$. We refer to \citet{zeng2018} for more detailed information on the observations. \begin{figure*} \includegraphics[width=15.5cm]{fig-Z-HNCHCN.eps} \caption{IRAM 30m spectra of Z-HNCHCN towards the Galactic Center quiescent giant molecular cloud G+0.693.
The red curves correspond to the LTE best fit obtained with MADCUBA$-$AUTOFIT. The quantum numbers of each transition are shown in blue in each panel (see also Table \ref{tab:transitions}). Other molecular species identified in the spectra are indicated with magenta labels.} \label{fig-Z} \end{figure*} \section{Analysis and Results} \label{results} The identification of the lines was performed using the SLIM (Spectral Line Identification and Modeling) tool of the MADCUBA package\footnote{Madrid Data Cube Analysis on ImageJ is a software developed at the Center of Astrobiology (CAB) in Madrid; http://cab.inta-csic.es/madcuba/Portada.html; see Mart\'in et al., in prep; \citet{rivilla_first_2016,rivilla_formation_2017}.}, which contains the spectroscopic information from the Cologne Database for Molecular Spectroscopy (CDMS\footnote{http://www.astro.uni-koeln.de/cdms}) \citep{muller_cologne_2001,muller_cologne_2005,endres_cologne_2016} and the Jet Propulsion Laboratory (JPL) catalogue\footnote{http://spec.jpl.nasa.gov/} \citep{pickett_submillimeter_1998}. We have used the CDMS entries for the two isomers of HNCHCN, which contain the spectroscopy from \citet{takano1990}, \citet{zaleski2013} and \citet{melosso2018}. Since no collisional coefficients are available for these species, we performed the analysis assuming local thermodynamic equilibrium (LTE). We generated with SLIM-MADCUBA a synthetic spectrum to compare with the observations. We confirmed the presence of ten transitions of Z-HNCHCN in the spectra towards G+0.693 with a significant detection level ($>$4$\sigma$; see below). Six of these transitions are unblended, i.e., they are not contaminated by emission from other molecular species, while the other four are blended with other species. We checked that the unblended transitions are not associated with any of the $>$90 species we have identified toward this source (\citealt{requena-torres_largest_2008,rivilla2018,zeng2018}; see Figure \ref{fig-Z}). We note that not a single transition of Z-HNCHCN predicted by the LTE spectrum is missing in the data. The spectroscopic information of the six unblended transitions is summarized in Table \ref{tab:transitions}, and the spectra are shown in Figure \ref{fig-Z}. This is the first detection of this species in the ISM. We then used the MADCUBA-AUTOFIT tool, which compares the observed spectra with the LTE synthetic spectra, taking into account all the transitions considered, and provides the best non-linear least-squares fit using the Levenberg-Marquardt algorithm. The free parameters of the fit are: the column density ($N$) of the molecule, the excitation temperature ($T_{\rm ex}$), the velocity ($\varv$), and the full width at half maximum ($FWHM$). We did not apply a beam dilution factor since it is well known that the molecular emission towards this source is extended over the beam (e.g. \citealt{requena-torres_organic_2006,martin_tracing_2008,rivilla2018}). For Z-HNCHCN, since the algorithm did not converge leaving all four parameters free, we fixed the linewidth to 20 km s$^{-1}$, which reproduces well the observed spectra and is consistent with the values inferred for other $-$CN species (\citealt{zeng2018}), and reran AUTOFIT. The results of the fit are summarized in Table \ref{tab:parameters}. We derived an excitation temperature of 8$\pm$2 K, very similar to those determined for other complex species in this region (\citealt{requena-torres_largest_2008,zeng2018}), and a column density of (2.0$\pm$0.6)$\times$10$^{14}$ cm$^{-2}$. We present in Table \ref{tab:transitions} the velocity-integrated intensities ($\int T_{\rm A}^{*}\,{\rm d}v$) of the identified transitions resulting from the fit. We calculated the detection level of each transition by comparing the velocity-integrated intensity with $rms \times \sqrt{\delta {\rm v}/FWHM}\times FWHM$, where {\it rms} is the noise measured in line-free spectral ranges close to each transition, and $\delta {\rm v}$ is the spectral resolution. Three of the identified transitions of Z-HNCHCN are detected above 13$\sigma$, two above 7$\sigma$ and one above 4$\sigma$ (Table \ref{tab:transitions}).
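For illustration, the detection levels follow from this expression as in the short sketch below; the rms value used here is an arbitrary placeholder, not the measured noise of our spectra.
\begin{verbatim}
import numpy as np

def detection_level(integ, rms, dv, fwhm):
    # Noise of an integral over N = FWHM/dv channels:
    # sigma = rms * sqrt(dv/FWHM) * FWHM = rms * sqrt(dv * FWHM).
    # integ and rms in K (T_A* scale); dv and fwhm in km/s.
    sigma_int = rms * np.sqrt(dv / fwhm) * fwhm
    return integ / sigma_int

# Placeholder rms (not our measured value); FWHM fixed to 20 km/s:
print(detection_level(integ=0.91, rms=0.008, dv=1.0, fwhm=20.0))  # ~25
\end{verbatim}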
We repeated the analysis for the E-isomer. We identified eight transitions above 4$\sigma$, of which five are unblended (Table \ref{tab:transitions} and Figure \ref{fig-E}). Since the AUTOFIT algorithm did not converge leaving $T_{\rm ex}$ as a free parameter, we fixed it to the value found for the Z-isomer, 8 K. We obtained a column density of (0.33$\pm$0.03)$\times$10$^{14}$ cm$^{-2}$. Three transitions are detected above 11$\sigma$, and two above 6$\sigma$ (Table \ref{tab:transitions}). Both conformers have velocities of $\sim$68 km s$^{-1}$, consistent with many other molecules observed towards this region (see e.g. \citealt{zeng2018}). The molecular ratio between the two conformers is [Z/E]=6.1$\pm$2.4. The total molecular abundance of C-cyanomethanimine, considering both isomers, is 1.74$\times$10$^{-9}$. We derived the fractional molecular abundances by dividing the column densities by the H$_2$ column density (N$_{\rm H_2}$) measured in G+0.693. We adopted N$_{\rm H_2}$= 1.35$\times$10$^{23}$ cm$^{-2}$, as inferred by \citet{martin_tracing_2008} from C$^{18}$O observations. In our calculations, we assumed that all molecules show a spatial distribution similar to that of C$^{18}$O, i.e. that all molecules arise from the same volume. The derived abundances are presented in Table \ref{tab:parameters}. The Z-isomer has a relatively high abundance of 1.5$\times$10$^{-9}$, which is comparable to those of other nitrogen-bearing species in this source, such as CH$_3$CN or HC$_5$N, and higher than those of e.g. CH$_3$NCO and C$_2$H$_5$CN (\citealt{zeng2018}).
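The ratios and abundances quoted above follow directly from the fitted column densities; as a check, a minimal script reproducing the numbers is:
\begin{verbatim}
N_H2 = 1.35e23    # cm^-2, from C18O (Martin et al. 2008)
N_Z  = 2.0e14     # cm^-2, Z-HNCHCN (this work)
N_E  = 0.33e14    # cm^-2, E-HNCHCN (this work)

print(N_Z / N_E)             # [Z/E] ratio    -> ~6.1
print(N_Z / N_H2)            # X(Z-HNCHCN)    -> ~1.5e-9
print((N_Z + N_E) / N_H2)    # X(Z+E), total  -> ~1.7e-9
\end{verbatim}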
Our analysis yielded a fractional molecular abundance of 1.4$\times$10$^{-12}$ for this species (Table \ref{tab:parameters}). If we assume a [NCCNH$^+$]/[NCCN] ratio of $\sim$10$^{-4}$ as inferred from the chemical modelling of \citet{agundez2015}, the abundance of NCCN would be 1.4$\times$10$^{-8}$. \begin{figure*} \includegraphics[width=15.5cm]{fig-E-HNCHCN.eps} \caption{IRAM 30m spectra of E-HNCHCN towards the Galactic Center quiescent giant molecular cloud G+0.693. The red curves correspond to the LTE best fit obtained with MADCUBA$-$AUTOFIT. The quantum numbers of each transition are shown in blue in each panel (see also Table \ref{tab:transitions}). Other molecular species identified in the spectra are indicated with magenta labels.} \label{fig-E} \end{figure*}
\section{Discussion} \label{discussion}
Due to the lack of detections, very little is known about the formation of HNCHCN. There is no chemical formation route of this species included in the astrochemical databases KIDA\footnote{Kinetic Database for Astrochemistry: http://kida.obs.u-bordeaux1.fr} (\citealt{wakelam2012}) and UMIST\footnote{http://udfa.ajmarkwick.net/index.php} (\citealt{mcelroy2013}). \citet{zaleski2013} suggested that radical chemistry on the surface of dust grains might form HNCHCN. More recently, two possible formation routes have been proposed. \citet{vazart2015} studied the neutral-neutral gas-phase reaction between the cyanogen radical and methanimine: \begin{equation} \rm CN + CH_2NH \rightarrow HNCHCN + H \end{equation} A different chemical pathway has been proposed by \citet{shivani2017} both on the surface of icy dust grains and in the gas phase: \begin{equation} \rm NCCN + H + H \rightarrow HNCHCN \end{equation} All proposed formation paths seem to be barrierless, which suggests that both gas-phase and grain surface reactions are able to form HNCHCN efficiently provided that the precursors are sufficiently abundant. Our data indicate that the reactants of the proposed reactions (CN, CH$_2$NH and NCCN) are relatively abundant in G+0.693, with abundances ranging from 4$\times$10$^{-9}$ to 1.5 $\times$10$^{-8}$ (Table \ref{tab:parameters}), which are higher than the derived abundance of HNCHCN by factors of around 9, 2 and 8, respectively. This suggests that these mechanisms might be able to explain the high abundance of HNCHCN (1.74$\times$10$^{-9}$) inferred in this cloud. Since we have detected for the first time both isomers, we can use the [Z/E] ratio to constrain the proposed formation scenarios. \citet{vazart2015} showed that the gas-phase formation route from CN and CH$_2$NH produces a ratio [Z/E]$\sim$1.5, regardless of the temperature. The calculations by \citet{shivani2017} predict a [Z/E] ratio of 0.9 in gas-phase and of 1 on the surface of dust grains. Therefore, both pathways fail to explain the observed ratio of $\sim$6, which might indicate that we are missing key formation routes and/or destruction reactions. A complete study including all the formation and destruction channels of the involved species is needed before drawing firm conclusions. Interestingly, the [Z/E] ratio found in G+0.693 seems to indicate that the two isomers are close to thermodynamic equilibrium at the kinetic temperature $T_{\rm k}$ of the cloud.
If this is the case, the abundances of the isomers are related through the expression: \begin{equation} \label{eq-isomers} [Z/E]=\frac{N(Z)}{N(E)} = \frac{1}{g}\,\exp\left(\frac{\Delta E}{T_{\rm k}}\right), \end{equation} where $\Delta E$ is the energy difference between the isomers (expressed in K), and $g$ accounts for the statistical weights, which in this case is 1. \citet{takano1990} derived experimentally an energy difference of 237$-$382 K, which is in good agreement with the value of 370 K inferred with the quantum chemical calculations by \citet{zaleski2013}, and with the value of 307 K more recently estimated by \citet{puzzarini2015}. Then, the observed [Z/E] ratio of 6.1 implies a $T_{\rm k}$ in the range 130$-$210 K (a short numerical check is given at the end of this section), which is in good agreement with the kinetic temperature found by \citet{zeng2018} in G+0.693. This suggests that the two isomers are in thermodynamic equilibrium at the T$_{\rm k}$ of the gas. We note that the populations of other isomers in the ISM also seem to be in thermodynamic equilibrium at T$_{\rm k}$, e.g. the conformers of ethyl formate (C$_2$H$_5$OCHO) in the hot molecular cores located in the W51 and Orion KL regions (\citealt{rivilla_chemical_2017,tercero_discovery_2013}). Since the isomerization barrier between the E$-$ and Z$-$isomers of HNCHCN is very high (15\,950 K; \citealt{zaleski2013}) this process cannot occur in the ISM. This means that the $T_{\rm k}$ derived from eq. \ref{eq-isomers} reflects the temperature at which the molecules were formed. Since the dust in G+0.693 is cold ($\leq$30 K; \citealt{Rodriguez-Fernandez2004}), and the gas temperatures are high ($\sim$50$\,$K to $\sim$150$\,$K; e.g. \citealt{Guesten1985,huettemeister_kinetic_1993,Rodriguez-Fernandez2001,ginsburg_dense_2016,Krieger2017,zeng2018}), this opens two possible chemical pathways: i) gas-phase reactions occurring at the high kinetic temperatures of the cloud; and ii) formation on dust triggered by non-thermal energetic events like cosmic-ray impacts, followed by release by grain sputtering in moderate-velocity shock waves. The latter scenario is plausible in the case of G+0.693 since large-scale low-velocity shocks are widespread in the region due to the encounter of two streams of molecular gas (\citealt{hasegawa1994,henshaw2016}). However, the current observations do not allow us to discriminate between these two possible chemical routes. Whatever the formation mechanism, our analysis of the first detection in the ISM of the Z-isomer of HNCHCN reveals that its abundance is higher than that of the E-conformer by a factor of 6. Given the proposed role of HNCHCN as precursor of adenine (\citealt{ESCHENMOSER2007,chakrabarti2000,balucani2012,jung2013}), the relatively high abundance of this species, 1.5$\times$10$^{-9}$, argues in favor of an efficient synthesis of key precursors of adenine in space. This is a crucial step to understand how the basic ingredients of life could have been assembled in the ISM before their incorporation into the primitive Earth. The role of HNCHCN in the formation of more complex nitrile dimers, and in particular adenine, should be addressed in detail with new detections of HNCHCN in more interstellar sources and with chemical modelling.
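Returning to equation \ref{eq-isomers}, the kinetic temperature range quoted above is easily checked: inverting the expression with $g=1$ gives $T_{\rm k}=\Delta E/\ln([Z/E])$. A minimal script (our own back-of-the-envelope check, with no propagation of the uncertainty on [Z/E]):
\begin{verbatim}
import numpy as np

ZE = 6.1                                  # observed [Z/E] ratio
for dE in (237.0, 307.0, 370.0, 382.0):   # energy differences (K)
    # equation above with g = 1:  Tk = dE / ln([Z/E])
    print(dE, round(dE / np.log(ZE)))     # -> 131, 170, 205, 211 K
\end{verbatim}
The experimental and theoretical values of $\Delta E$ thus bracket the quoted range of 130$-$210 K.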
\begin{table} \centering \tabcolsep 2.5pt \caption{Derived parameters of the HNCHCN isomers and related species detected towards G+0.693} \begin{tabular}{l c c c c c} \hline Species & N & T$_{\rm ex}$ & v$_{\rm LSR}$ & FWHM & Abundance \\ & ($\times$10$^{14}$ cm$^{-2}$) & (K) & (km s$^{-1}$) & (km s$^{-1}$) & ($\times$10$^{-10}$)\\ \hline Z$-$HNCHCN & 2.0$\pm$0.6 & 8$\pm$2 & 68.3$\pm$0.8 & 20$^{(a)}$ & 15 \\ E$-$HNCHCN & 0.33$\pm$0.03 & 8$^{(a)}$ & 68.0$\pm$0.8 & 21$\pm$2 & 2.4\\ \hline $^{13}$CN & 0.94$\pm$0.03 & 10$^{(a)}$ & 71.6$\pm$0.4 & 18.8$\pm$0.9 & 7.0 \\ CN & & & & & 150$^{(c)}$ \\ CH$_2$NH$^{(b)}$ & 5.4$\pm$0.3 & 9.7$\pm$0.4 & 69$\pm$1 & 25$\pm$1 & 40 \\ NCCNH$^{+}$ & 0.0019$\pm$0.004 & 10$^{(a)}$ & 69$\pm$2 & 22$\pm$5 & 0.014\\ NCCN & & & & & 140$^{(d)}$ \\ \hline \end{tabular} {(a) Parameter fixed in the MADCUBA$-$AUTOFIT analysis. (b) From \citet{zeng2018}. (c) Assuming the isotopic ratio of $^{12}$C/$^{13}$C$\sim$21 derived in G+0.693 by \citet{armijos-abendano_3_2014}. (d) Assuming a [NCCNH$^+$]/[NCCN] ratio of $\sim$10$^{-4}$, as inferred from chemical modelling by \citet{agundez2015}. } \label{tab:parameters} \end{table}
\section*{Acknowledgements}
We thank the anonymous referee for his/her instructive comments and suggestions. V.M.R. has received funding from the European Union's H2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 664931. J.M.-P. acknowledges partial support by the MINECO and FEDER funding under grants ESP2015-65597-C4-1 and ESP2017-86582-C4-1-R. \bibliographystyle{mnras}
\section{Introduction} The superconductivity of the quaternary transition metal oxyphosphides LaFePO ($T_c$~=~3.2~K) and LaNiPO ($T_c$ = 3.0 -- 4.3 K) was discovered recently\cite{Kamihara06,Watanabe07,Tegel08} although these compounds have been under study for more than ten years.~\cite{Zimmer95} In spite of the relatively low superconducting transition temperatures $T_c$, these materials are important because they triggered the extensive search for superconductivity in oxyarsenides $LnMe$AsO (where $Ln$ = La, Ce, Pr, Nd, Sm, Gd; $Me$ = 3$d$ metals). This search has proven to be remarkably successful, reaching $T_c$s of up to 55 K\cite{Kamihara08,Wen08,Chen08,Ren08a,Ren08b,Liu08} and strong upper critical fields H$_{c2}$ of up to 100~T.~\cite{Senatore08} Despite the focus on oxyarsenides, Ni and Fe oxyphosphides still attract attention\cite{Lebeque07,Zhang08,Liang07,Che08,Si08} because while they share the same bi-layer structure as $LnMe$AsO materials, the mechanism for superconductivity appears to be different. In particular, the value of the total electron-phonon coupling constant $\lambda$ in LaFeAsO is much lower than in conventional electron-phonon coupling superconductors, for example, compare the $\lambda$ = 0.21 of LaFeAsO\cite{Boeri08} with the $\lambda$ = 0.44 of Al (where $T_c$ = 1.3 K for Al), and even the inclusion of multiband effects fails to explain the observed $T_c$ of 26 K.~\cite{Boeri09} For LaNiPO the coupling constant is more than two times higher ($\lambda$ = 0.58) and the superconducting properties can be described within the Migdal-Eliashberg theory.~\cite{Boeri09} Herein we interpret soft X-ray emission and absorption spectra of LaNiPO and LaFeAsO\cite{Kurmaev08b} and compare the measurements with our local density approximation with Dynamical Mean Field Theory (LDA+DMFT) electronic structure calculations to investigate the similarities and differences between these two types of superconductors. To assist in investigating these materials, electronic structure calculations of LaFePO\cite{Skornyakov10} ($T_c$ = 3.2 K\cite{Kamihara06}) and NiO\cite{Kunes07} were used. LaFeAsO, unlike LaNiPO and LaFePO, is not superconducting unless doping\cite{Li08,Dong08} or high pressure\cite{Yang09,Chu09} is applied to suppress the magnetic transition temperature.~\cite{Chu09} Since these materials share the same basic ambient crystal structure and atomic constituents, yet exhibit different low-temperature properties, a basic study of the ambient electronic structure of these three materials is of interest, especially since the bulk electronic structure of these materials is insensitive to temperature or magnetic phase changes.~\cite{Yildirim09}
\section{Experimental and Calculation Details}
Single crystals of LaNiPO were synthesized by heating a mixture of 375.0 mg La (99.9\%, Smart Elements), 201.7 mg NiO (99.99\%, Sigma-Aldrich) and 83.6 mg P (red, 99\%, Sigma-Aldrich) with 2000 mg Sn (99.99\%, Alfa Aesar) in an alumina crucible, which was sealed in a silica tube under an atmosphere of purified argon. The sample was heated to 1173 K at a rate of 40 K/h, kept at this temperature for 10 days and slowly cooled down to room temperature at a rate of 3 K/h. The crucible was smashed and the tin bar dissolved in 6 M HCl at room temperature. The remaining sample consisted of single crystals of LaNiPO along with small amounts of LaNi$_2$P$_2$ (7\%), Ni$_2$SnP (4\%) and Ni$_3$Sn$_4$ ($<$ 1\%).
Further attempts to optimize the synthesis conditions with regard to reaction temperature or duration were unsuccessful. Samples prepared directly from the starting material without tin flux yielded only small amounts of LaNiPO with LaNi$_2$P$_2$ as the main product. For details of preparation see~Ref.~\onlinecite{Tegel08}. The soft X-ray absorption and emission measurements of the metal $L_{2,3}$ edges were performed at the soft X-ray fluorescence endstation of Beamline 8.0.1 at the Advanced Light Source in the Lawrence Berkeley National Laboratory.~\cite{Jia95} The endstation uses a Rowland circle geometry X-ray spectrometer with spherical gratings and an area-sensitive multichannel detector. We measured the resonant and non-resonant Ni $L_{2,3}$ (3$d$,4$s$ $\to$ 2$p$ transition) X-ray emission spectra (XES) for LaNiPO. Additional non-resonant XES measurements of the Ni $L_{2,3}$ edges of Ni metal foil and NiO were obtained as reference standards. The instrumental resolving power (E/$\Delta$E) for emission measurements was about 10$^3$. The X-ray absorption spectra (XAS) were measured in total electron yield (TEY) mode for the Ni $L_{2,3}$ edges. The instrumental resolving power (E/$\Delta$E) for absorption measurements was about 5 $\times$ 10$^3$. All absorption spectra were normalized to the incident photon current using a highly transparent gold mesh in front of the sample to correct for intensity fluctuations in the incoming photon beam. The excitation energies for the Ni $L_{2,3}$ resonant X-ray emission spectra were determined from the XAS spectra and the energies were selected at the $L_3$ and $L_2$ thresholds. Electronic structure calculations were performed within the pseudopotential plane-wave method PWSCF, as implemented in the Quantum ESPRESSO package.~\cite{PW} We used the generalized gradient approximation in the Perdew-Burke-Ernzerhof version\cite{Perdew96} for the exchange-correlation potential, and pseudopotentials in the Rappe-Rabe-Kaxiras-Joannopoulos form.~\cite{Rappe90} The Brillouin zone integration was performed with a 15 $\times$ 15 $\times$ 15 {\bf k}-point grid. A kinetic-energy cutoff of 45 Ry was employed for the plane-wave expansion of the electronic states. The experimentally determined lattice parameters and internal atom positions of LaNiPO (a = 4.0461 $\AA$, c = 8.100 $\AA$)\cite{Watanabe07} were used. To include dynamical correlation effects in the 3$d$ shell of Ni, we performed the LDA+DMFT\cite{Held06} calculations for LaNiPO. Following the Wannier function projection procedure of Ref.~\onlinecite{Korotin08}, we constructed an effective $H_{LDA}$ Hamiltonian and then used it to solve the Dynamical Mean-Field Theory (DMFT)\cite{Georges96} self-consistency equations. The $H_{LDA}$ Hamiltonian contained 22 bands due to five Ni 3$d$, three O 2$p$, and three P 3$p$ orbitals per formula unit, projected in a single energy window that explicitly takes into account the hybridization between $p$ and $d$ electrons.~\cite{Korotin08} The DMFT auxiliary impurity problem was solved by the hybridization-function-expansion continuous-time quantum Monte Carlo method.~\cite{CT} The elements of the Coulomb interaction matrix were parameterized by the $U$ and $J$ parameters.~\cite{LAZ95} We used interaction parameters $U=8$ eV and $J=1$ eV for LaNiPO, similar to the values obtained in~Ref.~\onlinecite{Kunes07}. Calculations were performed in the paramagnetic state at the inverse temperature $\beta=1/T =$ 20~eV$^{-1}$.
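For orientation, a \textsc{pw.x} input consistent with the settings quoted above might look as follows. This is our own sketch: the pseudopotential file names, the smearing settings, and the internal $z$ coordinates are placeholders (the calculation in the text used the experimentally determined internal positions of Ref.~\onlinecite{Watanabe07}).
\begin{verbatim}
&CONTROL
  calculation = 'scf'
  prefix      = 'LaNiPO'
  pseudo_dir  = './pseudo'     ! placeholder path
/
&SYSTEM
  ibrav   = 6                  ! simple tetragonal
  a       = 4.0461             ! experimental lattice parameters (Angstrom)
  c       = 8.100
  nat     = 8                  ! two formula units per cell
  ntyp    = 4
  ecutwfc = 45.0               ! kinetic-energy cutoff (Ry)
  occupations = 'smearing'     ! assumed; not specified in the text
  degauss     = 0.01
/
&ELECTRONS
  conv_thr = 1.0d-8
/
ATOMIC_SPECIES
La 138.905 La.UPF
Ni  58.693 Ni.UPF
P   30.974 P.UPF
O   15.999 O.UPF
ATOMIC_POSITIONS crystal
La 0.25 0.25 0.15
La 0.75 0.75 0.85
Ni 0.75 0.25 0.50
Ni 0.25 0.75 0.50
P  0.25 0.25 0.62
P  0.75 0.75 0.38
O  0.75 0.25 0.00
O  0.25 0.75 0.00
K_POINTS automatic
15 15 15 0 0 0
\end{verbatim}
The $z$ parameters of the 2$c$ sites above are schematic values for the ZrCuSiAs-type arrangement, not refined data.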
The real-axis self-energy needed to calculate spectral functions was obtained by the Pad\'e approximant.~\cite{pade}
\section{Results and Discussion}
\begin {figure}[!t] \includegraphics[width=0.42\textwidth]{figure1.eps} \caption {(Color online) Total and partial densities of states for LaNiPO (in comparison with LaFeAsO from~Ref.~\onlinecite{Anisimov09}) obtained within the density functional theory (DFT) calculations. The dashed lines in the Ni, Fe 4$s$, 3$d$ DOS refer to the metal 4$s$~states magnified by a factor of 10. La does not have any significant contribution to the valence band, and is not shown here.} \label{fig1} \end{figure} The calculated noncorrelated electronic structure of LaNiPO is shown in Fig.~\ref{fig1} in comparison with the structure of LaFeAsO.~\cite{Anisimov09} These calculations are in agreement with other DOS calculations available to date.~\cite{Lebeque07,Zhang08} In all cases the far bottom of the valence band (--11 eV) consists of P 3$s$ or As 4$s$ states. The top of the valence band (--2~eV to 0~eV) consists almost solely of metal 3$d$ states in both cases. Between --2 and --4 eV there is strong hybridization between the O 2$p$ and Ni 3$d$ states in LaNiPO; in LaFeAsO there are far fewer Fe 3$d$ states in this region indicating much weaker hybridization. The situation is the same with LaFePO.~\cite{Skornyakov10} LaNiPO also has a reduction in metal 3$d$ states and total states at the Fermi level compared to LaFeAsO and LaFePO. This may explain why Ni-based superconductors have lower $T_c$ values than FeAs-based superconductors. For all compounds the P~3$s$,~3$p$ and As~4$s$,~4$p$ states, respectively, occupy the same basic region in the valence band and do not contribute significantly to the Fermi level. The La 5$p$ states are identical for both compounds; they have atomic-like character and do not contribute to the valence band; they are not shown here. \begin {figure}[!t] \includegraphics[width=0.42\textwidth]{figure2.eps} \caption {(Color online) Densities of states for Ni 3$d$ and Fe 3$d$ orbitals obtained within DFT (filled areas) and the LDA+DMFT total 3$d$ spectral functions (solid lines). Data for LaFeAsO\cite{Anisimov09} and NiO\cite{Kunes07} are given for comparison.} \label{fig2} \end{figure} The spectral function from the LDA+DMFT calculation for the Ni 3$d$ states of LaNiPO is shown in Fig.~\ref{fig2} (middle panel). In the energy interval from --1 to 1~eV near the Fermi energy the 3$d$ spectral function is close to the noncorrelated LDA density of states. However, below these energies the spectral function is substantially renormalized with the formation of a strong peak at --1.5~eV, and the appearance of a lower Hubbard band: the broad peak centered at --9.1~eV. Thus this picture resembles the LDA+DMFT results for LaFeAsO\cite{Anisimov09} (upper panel) only for the energies above --6 eV, since in LaFeAsO the lower Hubbard band was not found. In NiO, however, a similar broad peak centered at $\sim$--10 eV is obtained, as shown in the lower panel of Fig.~\ref{fig2}. This lower Hubbard band is evidence for strong correlations,~\cite{Kunes07} and this is a clear indication of strong correlations in LaNiPO, similar to NiO.~\cite{Kurmaev08a} The comparison of the LDA+DMFT calculation for Ni 3$d$ states of LaNiPO and NiO and Fe 3$d$ states of LaFeAsO with the resonantly excited Ni $L_3$ and Fe $L_3$ X-ray emission spectra which probe occupied Me 3$d$ states is presented in Fig.~\ref{fig3}.
The occurrence of the lower Hubbard band in LaNiPO and NiO and its absence in LaFeAsO is confirmed by experimental XES spectra. \begin {figure}[!t] \includegraphics[width=0.42\textwidth]{figure3.eps} \caption {(Color online) Comparison of the LDA+DMFT total 3$d$ spectral functions (LaFeAsO from Ref.~\onlinecite{Anisimov09} and NiO from Ref.~\onlinecite{Kunes07}) with the resonantly excited Ni $L_3$ and Fe $L_3$ X-ray emission spectra.} \label{fig3} \end{figure} The soft X-ray metal (Ni, Fe) $L_{2,3}$ spectra are shown in Fig.~\ref{fig4}. The metal $L_{2,3}$ XES indicate two main bands separated by the spin-orbit splitting of the metal 2$p$ states. The lower intensity high energy band corresponds to the $L_2$ emission line (3$d$,4$s$ $\to$ 2$p$$_{1/2}$ transitions), and the higher intensity low energy band corresponds to the $L_3$ emission line (3$d$,4$s$ $\to$ 2$p$$_{3/2}$ transitions). The resonant $L_2$ and $L_3$ XES (curves b and c in the bottom panels, respectively) have the same basic shape. The lack of resonant features indicates that the spectra primarily measure the partial occupied DOS rather than multiplet or inelastic scattering effects. Note that the La $M_{4,5}$ XES appears below the Ni $L_3$ emission line in the resonant Ni $L_3$ spectrum. The metal $L_{2,3}$ XAS are presented in the top panels of Fig.~\ref{fig4}. According to dipole selection rules ($\Delta l$ = $\pm$ 1) they correspond to the excitation of metal 2$p$-core level electrons into unoccupied 3$d$ states. Unfortunately these spectra cannot probe the unoccupied 3$d$ DOS directly because the core hole causes an increased effective nuclear charge, distorting the local DOS levels. Further, simulating $L_{2,3}$ XAS requires considering multiplet splitting, hybridization, and crystal field effects. One such simulation was recently conducted for LaFeAsO in~Ref.~\onlinecite{Kroll08}; to our knowledge no similar simulation of LaNiPO exists. Therefore we include the metal $L_{2,3}$ XAS only for completeness. Resonantly excited Ni $L_3$ XES of LaNiPO (curve c) shows the presence of La $M_{4,5}$ XES because the excitation energy in this case is very close to that for resonant excitation of the La $M$-emission spectra. \begin {figure}[!t] \includegraphics[width=0.42\textwidth]{figure4.eps} \caption {Summary of spectra for LaNiPO (left panel) and LaFeAsO (right panel). The upper panels show the metal $L_{2,3}$ XAS (in TEY mode), the lower panels the resonant and non-resonant metal $L_{2,3}$ XES. The excitation energies are indicated by arrows in the XAS plots.} \label{fig4} \end{figure} \begin {figure}[!t] \includegraphics[width=0.42\textwidth]{figure5.eps} \caption {Comparison of non-resonant metal $L_{2,3}$ XES of NiO, LaNiPO, and Ni (left side panels) and FeO, LaFeAsO, and Fe (right side panels) from Ref.~\onlinecite{Kurmaev08b}. The I($L_2$)/I($L_3$) ratios for each system are shown in the top panels, and the full width at half maximum (FWHM) of the $L_3$ bands are shown in the middle panels. The metal $L_{2,3}$ XES are shown in the bottom panels, for easy reference. The I($L_2$)/I($L_3$) ratios were calculated by taking the quotient of the integrals of the $L_2$ and $L_3$ bands.} \label{fig5} \end{figure} The ratio of the integral-intensity of the metal $L_2$ and $L_3$ peaks (the I($L_2$)/I($L_3$) ratio) for LaFeAsO is roughly the same as that of metallic Fe, and quite different from that of strongly correlated FeO (see Fig.~\ref{fig5}, right side, bottom panel).
In a free atom, the I($L_2$)/I($L_3$) ratio should be equal to 1/2 as the ratio is based solely on the statistical population of the 2$p$$_{1/2}$ and 2$p$$_{3/2}$ levels. In metals the radiationless $L_2$$L_3$$M_{4,5}$ Coster-Kronig (C-K) transitions greatly reduce the I($L_2$)/I($L_3$) ratio,~\cite{Raghu08} and the I($L_2$)/I($L_3$) ratio can be used as a measure for the electron correlation strength of a transition metal compound\cite{Kurmaev05} (see Fig.~\ref{fig5}, right side, top panel). The full width at half maximum (FWHM) of the $L_3$ band in LaFeAsO is again closer to that of metallic Fe than FeO (see Fig.~\ref{fig5}, right side, middle panel). While this does not directly prove anything, it suggests that the Fe 3$d$ electronic structure of LaFeAsO may be similar to that of metallic Fe. The shape and statistics of the Fe $L_{2,3}$ XES indicate that the Fe 3$d$ states in LaFeAsO are not strongly correlated. In contrast to LaFeAsO, the I($L_2$)/I($L_3$) ratio for LaNiPO is much greater than that of Ni metal, as is the FWHM of the LaNiPO $L_3$ band (see Fig.~\ref{fig5}, left side, bottom panel). Indeed, the I($L_2$)/I($L_3$) ratio and $L_3$ FWHM for LaNiPO (see Fig.~\ref{fig5}, left side, top and middle panels) are rather close to those of correlated NiO. Since the transition metal I($L_2$)/I($L_3$) ratio is over 50\% greater in LaNiPO than in LaFeAsO, and since NiO is comparable to FeO in terms of ``correlation strength'',~\cite{Anisimov91} the Ni 3$d$ states of LaNiPO are more correlated than the Fe 3$d$ states of LaFeAsO.
\section{Conclusions}
We have studied the electronic structure of LaNiPO by synchrotron-excited soft X-ray emission and absorption spectroscopy and obtained the theoretical spectral functions within the combination of local density approximation with Dynamical Mean-Field Theory (LDA+DMFT). We conclude that the Ni 3$d$ states of LaNiPO reside deeper in the valence band than the Fe 3$d$ states of LaFeAsO. The greater occupation in the metal 3$d$ bands in LaNiPO reduces the density of the states at the Fermi level and increases the hybridization with O 2$p$ states compared to those in LaFeAsO. Accounting for dynamical correlation in the Ni 3$d$ states of LaNiPO results in the renormalization of the states below the Fermi energy and the formation of the lower Hubbard band centered at --9 eV, similar to NiO, but in contrast to LaFeAsO. The I($L_2$)/I($L_3$) ratio is much higher in LaNiPO than in LaFeAsO, indicating the Ni 3$d$ states of LaNiPO have stronger electron correlations than the Fe 3$d$ states of LaFeAsO.
\section{Acknowledgments}
The authors thank J. Kune\v{s} for providing the DMFT code and P. Werner for the CT-QMC impurity solver used in our calculations. This work was supported by the Research Council of the President of the Russian Federation (Grant NSH-4711.2010.2), the Russian Science Foundation for Basic Research (Projects 08-02-00148, 10-02-00046, and 10-02-00546), the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada Research Chair program, Russian Federal Agency for Science and Innovations (Program ``Scientific and Scientific-Pedagogical Training of the Innovating Russia'' for 2009-2010 years), grant No. 02.740.11.0217, the Dynasty Foundation.
\section{Introduction} In an algebra $\Afr$, the {\em commutator} of $B,C\in\Afr$ is $[B,C]=BC-CB$, and we denote by $\Comm(\Afr)\subseteq\Afr$ the set of all commutators. A {\em trace} on $\Afr$ is by definition a linear functional that vanishes on $\Comm(\Afr)$. The algebra $M_n(k)$ of $n\times n$ matrices over a field $k$ has a unique trace, up to scalar multiplication (we denote the trace sending the identity element to $1$ by $\tr_n$). It is known that every element of $M_n(k)$ that has null trace is necessarily a commutator (see~\cite{S36} for the case of characteristic zero and \cite{AM57} for the case of an arbitrary characteristic). For the complex field, $k=\Cpx$, a natural generalization of the algebra $M_n(\Cpx)$ is the algebra $B(\HEu)$ of all bounded operators on a separable, possibly infinite dimensional Hilbert space $\HEu$. Thanks to the groundbreaking paper~\cite{BP65} of Brown and Pearcy, $\Comm(B(\HEu))$ is known: the commutators in $B(\HEu)$ are precisely the operators that are not of the form $\lambda I+K$ for $\lambda$ a nonzero complex number, $I$ the identity operator and $K$ a compact operator (and an analogous result holds when $\HEu$ is nonseparable). Characterizations of $\Comm(B(X))$ for some Banach spaces $X$ are found in~\cite{A72}, \cite{A73}, \cite{D09} and~\cite{DJ10}. The von Neumann algebra factors form a natural family of algebras including the matrix algebras $M_n(\Cpx)$ and $B(\HEu)$ for infinite dimensional Hilbert spaces $\HEu$; (these together are the type~I factors). The set $\Comm(\Mcal)$ was determined by Brown and Pearcy~\cite{BP66} for $\Mcal$ a factor of type III and by Halpern~\cite{H69} for $\Mcal$ a factor of type II$_\infty$. The case of type II$_1$ factors remains open. A type II$_1$ factor is a von Neumann algebra $\Mcal$ whose center is trivial and that has a trace $\tau:\Mcal\to\Cpx$, which is then unique up to scalar multiplication; by convention, we always take $\tau(1)=1$. The following question seems natural, in light of what is known for matrices: \begin{ques}\label{qn:comm} Do we have \[ \Comm(\Mcal)=\ker\tau \] for any one particular II$_1$--factor $\Mcal$, or even for all II$_1$--factors? \end{ques} Some partial results are known. Fack and de la Harpe~\cite{FH80} showed that every element of $\ker\tau$ is a sum of ten commutators, (and with control of the norms of the elements). The number ten was improved to two by Marcoux~\cite{M06}. Pearcy and Topping, in~\cite{PT69}, showed that in the type II$_1$ factors of Wright (which do not have separable predual), every self--adjoint element of trace zero is a commutator. In section~\ref{sec:normal}, we employ the construction of Pearcy and Topping for the Wright factors and a result of Hadwin~\cite{Had98} to show firstly that all normal elements of trace zero in the Wright factors are commutators. We then use this same construction to derive that in any II$_1$--factor, every normal element with trace zero and purely atomic distribution is a single commutator. In section~\ref{sec:nilpotent}, we show that all nilpotent operators in II$_1$--factors are commutators. Finally, in section~\ref{sec:ques}, we provide classes of examples of elements of II$_1$--factors that are not normal and not nilpotent but are single commutators, and we ask some specific questions suggested by our examples and results. \smallskip \noindent{\bf Acknowledgement.} The authors thank Heydar Radjavi for stimulating discussions about commutators, and Gabriel Tucci for help with his operators.
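Before turning to the details, we note that the matrix statement above is easy to test numerically, using the shift/partial-sum construction that reappears in Lemma~\ref{lem:Anormal} below; the following sketch (with an arbitrary random seed and matrix size of our choosing) verifies that a traceless diagonal matrix is a single commutator.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 6
lam = rng.normal(size=n) + 1j * rng.normal(size=n)
lam -= lam.mean()                    # enforce trace zero
A = np.diag(lam)

# B: ones on the superdiagonal; D: diagonal of partial sums, last entry 0
B = np.diag(np.ones(n - 1), k=1)
D = np.diag(np.concatenate([np.cumsum(lam)[:-1], [0.0]]))
C = B.conj().T @ D                   # C = B* D

print(np.allclose(A, B @ C - C @ B))   # True
\end{verbatim}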
\section{Some normal operators} \label{sec:normal} The following lemma (but with a constant of $2$) was described in Concluding Remark (1) of~\cite{PT69}, attributed to unpublished work of John Dyer. That the desired ordering of eigenvalues can be made with bounding constant~$4$ follows from work of Steinitz~\cite{St13}, the value $2$ follows from~\cite{GS80} and the better constant in the version below (which is not actually needed in our application of it) is due to work of Banaszczyk~\cite{Ba87}, \cite{Ba90}. \begin{lemma}\label{lem:Anormal} Let $A\in M_n(\Cpx)$ be a normal element with $\tr_n(A)=0$. Then there are $B,C\in M_n(\Cpx)$ with $A=[B,C]$ and $\|B\|\,\|C\|\le\frac{\sqrt5}2\|A\|$. \end{lemma} \begin{proof} After conjugating with a unitary, we may without loss of generality assume $A=\diag(\lambda_1,\ldots,\lambda_n)$ and we may choose the diagonal elements to appear in any prescribed order. We have $A=[B,C]$ where \begin{equation}\label{eq:B} B=\left( \begin{matrix} 0&1&0&\cdots&0 \\ 0&0&1 \\ \vdots & &\ddots&\ddots \\ 0& & \cdots & 0 &1 \\ 0& 0 & \cdots & &0 \end{matrix}\right) \end{equation} and $C=B^*D$, where \begin{equation}\label{eq:D} D=\diag(\lambda_1,\,\lambda_1+\lambda_2,\,\ldots,\lambda_1+\cdots+\lambda_{n-1},0). \end{equation} By work of Banaszczyk~\cite{Ba87}, \cite{Ba90}, any list $\lambda_1,\ldots,\lambda_n$ of complex numbers whose sum is zero can be reordered so that for all $k\in\{1,\ldots,n-1\}$ we have \begin{equation}\label{eq:lambdasum} \left|\sum_{j=1}^k\lambda_j\right|\le\frac{\sqrt5}2\max_{1\le j\le n}|\lambda_j|. \end{equation} This ensures $\|B\|\le1$ and $\|C\|\le\frac{\sqrt5}2\|A\|$. \end{proof} The II$_1$--factors of Wright~\cite{W54} are the quotients of the von Neumann algebra of all bounded sequences in $\prod_{n=1}^\infty M_n(\Cpx)$ by the ideal $I_\omega$, consisting of all sequences $(a_n)_{n=1}^\infty\in\prod_{n=1}^\infty M_n(\Cpx)$ such that $\lim_{n\to\omega}\tr_n(a_n^*a_n)=0$, where $\omega$ is a nontrivial ultrafilter on the natural numbers. The trace of the element of $\Mcal$ associated to a bounded sequence $(b_n)_{n=1}^\infty\in\prod_{n=1}^\infty M_n(\Cpx)$ is $\lim_{n\to\omega}\tr_n(b_n)$. (See~\cite{McD70} or~\cite{J72} for ultrapowers of finite von Neumann algebras.) The following result in the case of self--adjoint operators is due to Pearcy and Topping~\cite{PT69}. \begin{thm}\label{thm:PT} If $\Mcal$ is a Wright factor and if $T\in\Mcal$ is normal with $\tau(T)=0$, then $T\in\Comm(\Mcal)$. \end{thm} \begin{proof} Let $T\in\Mcal$ be normal and let $X$ and $Y$ be the real and imaginary parts of $T$, respectively. Let $(S_n)_{n=1}^\infty\in\prod_{n=1}^\infty M_n(\Cpx)$ be a representative of $T$, with $\|S_n\|\le\|T\|$ for all $n$. Let $X_n$ and $Y_n$ be the real and imaginary parts of $S_n$. Then the mixed $*$--moments of the pair $(X_n,Y_n)$ converge as $n\to\omega$ to the mixed $*$--moments of $(X,Y)$. By standard methods, we can construct some commuting, self--adjoint, traceless $n\times n$ matrices $H_n$ and $K_n$ such that $H_n$ converges in moments to $X$ and $K_n$ converges in moments to $Y$, as $n\to\infty$. Now using a result of Hadwin (Theorem 2.1 of~\cite{Had98}), we find $n\times n$ unitaries $U_n$ such that \[ \lim_{n\to\omega}\|U_nX_nU_n^*-H_n\|_2=0 \qquad \lim_{n\to\omega}\|U_nY_nU_n^*-K_n\|_2=0, \] where $\|Z\|_2=\tr_n(Z^*Z)^{1/2}$ is the Euclidean norm resulting from the normalized trace on $M_n(\Cpx)$.
This shows that $T$ has representative $(T_n)_{n=1}^\infty$, where $T_n=U_n^*(H_n+iK_n)U_n$ is normal and, of course, traceless. By Lemma~\ref{lem:Anormal}, for each $n$ there are $B_n,C_n\in M_n(\Cpx)$ with $\|B_n\|=1$ and $\|C_n\|\le\frac{\sqrt5}2\|T\|$ such that $T_n=[B_n,C_n]$. Let $B,C\in\Mcal$ be the images (in the quotient $\prod_{n=1}^\infty M_n(\Cpx)/I_\omega$) of $(B_n)_{n=1}^\infty$ and $(C_n)_{n=1}^\infty$, respectively. Then $T=[B,C]$. \end{proof} The {\em distribution} of a normal element $T$ in a II$_1$--factor is the compactly supported Borel probability measure on the complex plane obtained by composing the trace with the projection--valued spectral measure of $T$. \begin{thm}\label{thm:normalhyp} If $R$ is the hyperfinite II$_1$--factor and if $\mu$ is a compactly supported Borel probability measure on the complex plane such that $\int z\,\mu(dz)=0$, then there is a normal element $T\in\Comm(R)$ whose distribution is $\mu$. \end{thm} \begin{proof} We will consider a particular instance of the construction from the proof of Theorem~\ref{thm:PT}. Let $\Mcal$ be a factor of Wright, with tracial state $\tau$. Let $L$ be the maximum modulus of elements of the support of $\mu$. We may choose complex numbers $(\lambda^{(n)}_j)_{j=1}^n$ for $n\ge1$ such that the measures $\frac1n\sum_{j=1}^n\delta_{\lambda_j^{(n)}}$ converge in the weak$^*$ topology to $\mu$ and all have support contained inside the disk of radius $L$ centered at the origin and such that $\sum_{j=1}^n\lambda^{(n)}_j=0$ for each $n$. Let $T_n=\diag(\lambda^{(n)}_1,\ldots,\lambda^{(n)}_n)\in M_n(\Cpx)$ and let $T\in\Mcal$ be the element associated to the sequence $(T_n)_{n=1}^\infty$. Then the distribution of $T$ is $\mu$. By~\cite{Ba87}, \cite{Ba90}, we can order these $\lambda^{(n)}_1,\ldots,\lambda^{(n)}_n$ so that $\big|\sum_{j=1}^k\lambda^{(n)}_j\big|\le\frac{\sqrt5}2\|T\|$ for all $1\le k\le n$. Then, as in the proof of Lemma~\ref{lem:Anormal}, we have $T_n=[B_n,B_n^*D_n]$ where $B_n$ and $D_n$ are the $n\times n$ matrices $B$ and $D$ of~\eqref{eq:B} and~\eqref{eq:D}, respectively. If $B,D\in\Mcal$ are the images in the quotient of the sequences $(B_n)_{n=1}^\infty$ and $(D_n)_{n=1}^\infty$, respectively, then $T=[B,B^*D]$. However, note that $B\in\Mcal$ is a unitary element such that $\tau(B^k)=0$ for all $k>0$. Moreover, the set $\{B^kDB^{-k}\mid k\in\Ints\}$ generates a commutative von Neumann subalgebra $\Ac$ of $\Mcal$ and every element of $\Ac$ is the image (under the quotient mapping) of a sequence $(A_n)_{n=1}^\infty$ where each $A_n\in M_n(\Cpx)$ is a diagonal matrix. Thus, the unitary $B$ acts by conjugation on $\Ac$, and, moreover, we have $\tau(AB^k)=0$ for all $A\in\Ac$ and all $k>0$. Therefore the von Neumann subalgebra generated by $\Ac\cup\{B\}$ is a case of the group--measure-space construction, $\Ac\rtimes\Ints$, and is a hyperfinite von Neumann algebra by \cite{Co76}, and can thus be embedded into the hyperfinite II$_1$--factor $R$. \end{proof} The above proof actually shows the following. \begin{cor} Given any compactly supported Borel probability measure $\mu$ on the complex plane with $\int z\,\mu(dz)=0$, there is $f\in L^\infty([0,1])$ and a probability-measure-preserving transformation $\alpha$ of $[0,1]$ such that the distribution of $f-\alpha(f)$ equals $\mu$ and the supremum norm of $f$ is no more than $\frac{\sqrt5}2$ times the maximum modulus of the support of $\mu$.
\end{cor} \begin{thm}\label{thm:atomic} If $\Mcal$ is any II$_1$--factor and $T\in\Mcal$ is a normal element whose distribution is purely atomic and with trace $\tau(T)=0$, then $T\in\Comm(\Mcal)$. \end{thm} \begin{proof} $\Mcal$ contains a (unital) subfactor $R$ isomorphic to the hyperfinite II$_1$--factor. By Theorem~\ref{thm:normalhyp}, there is an element $\Tt\in\Comm(R)$ whose distribution equals the distribution of $T$. Since this distribution is purely atomic, there is a unitary $U\in\Mcal$ such that $U\Tt U^*=T$. Thus, $T\in\Comm(\Mcal)$. \end{proof} \section{Nilpotent operators} \label{sec:nilpotent} The von Neumann algebra $\Mcal$ is embedded in $B(\HEu)$ as a strong--operator--topology closed, self--adjoint subalgebra. If $T\in\Mcal$, we denote the self--adjoint projection onto $\ker(T)$ by $\kerproj(T)$ and the self--adjoint projection onto the closure of the range of $T$ by $\ranproj(T)$. Both of these belong to $\Mcal$, and we have \[\tau(\kerproj(T))+\tau(\ranproj(T))=1\] The following decomposition follows from the usual sort of analysis of subspaces that one does also in the finite dimensional setting. \begin{lemma}\label{lem:UT} Let $\Mcal$ be a II$_1$--factor and let $T\in\Mcal$ be nilpotent, $T^n=0$. Then there are integers $n\ge k_1>k_2>\ldots>k_m\ge1$ and for each $j\in\{1,\ldots,m\}$ there are equivalent projections $f^{(j)}_1,\ldots,f^{(j)}_{k_j}$ in $\Mcal$ such that \begin{enumerate}[(i)] \item $f^{(j)}:=f^{(j)}_1+\cdots+f^{(j)}_{k_j}$ commutes with $T$, \item $f^{(1)}+\cdots+f^{(m)}=1$, \item the $k_j\times k_j$ matrix of $f^{(j)}T$ with respect to these projections $f^{(j)}_1,\ldots,f^{(j)}_{k_j}$ is strictly upper triangular. \end{enumerate} \end{lemma} In other words, the lemma says that $T$ lies in a unital $*$--subalgebra of $\Mcal$ that is isomorphic to $M_{k_1}(\Afr_1)\oplus\cdots\oplus M_{k_m}(\Afr_m)$ for certain compressions $\Afr_j$ of $\Mcal$ by projections, and the direct summand component of $T$ in each $M_{k_j}(\Afr_j)$ is a strictly upper triangular matrix. \begin{proof} The proof is by induction on $n$. The case $n=1$ is clear, because then $T=0$. Assume $n\ge2$. We consider the usual system $p_1,p_2,\ldots,p_n$ of pairwise orthogonal projections with respect to which $T$ is upper triangular: \begin{align*} p_1&=\kerproj(T), \\ p_j&=\kerproj(T^j)-\kerproj(T^{j-1}),\quad(2\le j\le n). \end{align*} Then we have \begin{gather} \tau(\ranproj(Tp_j))=\tau(p_j),\qquad(2\le j\le n), \label{eq:Tpj} \\ \ranproj(Tp_j)\le\kerproj(T^{j-1})=p_1+p_2+\cdots+p_{j-1},\qquad(2\le j\le n), \label{eq:Tpjle} \\ \ranproj(Tp_j)\wedge(p_1+ p_2+\cdots+p_{j-2})=0,\qquad(3\le j\le n). \label{eq:rpw} \end{gather} Indeed, for~\eqref{eq:Tpj}, it will suffice to show $\kerproj(Tp_j)=1-p_j$. For this, note that if $p_j\xi=\xi$ and $T\xi=0$, then $\xi\in\ker T\subseteq\ker T^{j-1}$. Since $p_j\perp\kerproj(T^{j-1})$, this gives $\xi=0$. The relation~\eqref{eq:Tpjle} is clear. For~\eqref{eq:rpw}, if $q:=\ranproj(Tp_j)\wedge\kerproj(T^{j-2})\ne0$, then by standard techniques (see, e.g., Lemma~2.2.1 of~\cite{CD09}), we would have a nonzero projection $r\le p_j$ such that $q=\ranproj(Tr)\le\kerproj(T^{j-2})$. However, this would imply $r\le\kerproj(T^{j-1})$, which contradicts $p_j\perp\kerproj(T^{j-1})$. Let \begin{align*} q_n&=p_n\,, \\ q_{n-j}&=\ranproj(T^jq_n),\qquad(1\le j\le n-1). \end{align*} Then we have \begin{gather} q_k=\ranproj(Tq_{k+1})\le p_1+\cdots+p_k,\qquad(1\le k\le n-1), \label{eq:Tqk} \\ q_k\wedge(p_1+\cdots+p_{k-1})=0,\qquad(2\le k\le n). 
\label{eq:qk} \end{gather} Now~\eqref{eq:Tpj} and~\eqref{eq:Tqk} together imply $\tau(q_k)=\tau(q_{k+1})$, and from~\eqref{eq:qk} we have $\tau(q_1\vee\cdots\vee q_k)=k\tau(q_1)$. Thus, we have pairwise equivalent and orthogonal projections $f_1,\ldots,f_n$ defined by \begin{align*} f_n&=q_n\,, \\ f_k&=(q_k\vee\cdots\vee q_n)-(q_{k+1}\vee\cdots\vee q_n),\qquad(1\le k\le n-1), \end{align*} $T$ commutes with $f:=f_1+\cdots+f_n$ and $Tf$ is strictly upper triangular when written as an $n\times n$ matrix with respect to $f_1,\ldots,f_n$. Moreover, we have $(T(1-f))^{n-1}=T^{n-1}(1-f)=0$ and the induction hypothesis applies to $T(1-f)$. \end{proof} \begin{prop} Let $\Mcal$ be a II$_1$--factor. Then $\Comm(\Mcal)$ contains all nilpotent elements of $\Mcal$. \end{prop} \begin{proof} By Lemma~\ref{lem:UT}, we only need to observe that a strictly upper triangular matrix in $M_n(\Afr)$ is a single commutator, for any algebra $\Afr$. But this is easy: if \[ A=\left( \begin{matrix} 0&a_{1,2}&a_{1,3}&\cdots&a_{1,n} \\ 0&0 &a_{2,3}&\cdots&a_{2,n} \\ \vdots & &\ddots&\ddots&\vdots \\ & & & &a_{n-1,n} \\ 0 & & \cdots & &0 \end{matrix}\right), \] then $A=BC-CB$, where $B$ is the matrix in~\eqref{eq:B}, \begin{equation} C=\left( \begin{matrix} 0&0&\cdots&0 \\ 0&c_{2,2}&\cdots&c_{2,n} \\ \vdots& &\ddots&\vdots \\ 0&\cdots&0 &c_{n,n} \end{matrix}\right), \end{equation} and where the $c_{i,j}$ are chosen so that \begin{align*} a_{1,j}&=c_{2,j}\,,\qquad (2\le j\le n), \\ a_{p,j}&=c_{p+1,j}-c_{p,j-1}\,,\qquad(2\le p<j\le n). \end{align*} \end{proof} \section{Examples and questions} \label{sec:ques} \begin{example} A particular case of Theorem~\ref{thm:atomic} is that if $p$ is a projection (with irrational trace) in any II$_1$--factor $\Mcal$, then $p-\tau(p)1\in\Comm(\Mcal)$. We note that a projection with rational trace is contained in some unital matrix subalgebra $M_n(\Cpx)\subseteq\Mcal$; therefore, the case of a projection with rational trace is an immediate application of Shoda's result. \end{example} \begin{ques} In light of Theorem~\ref{thm:atomic}, it is natural to ask: does $\Comm(\Mcal)$ contain all normal elements of $\Mcal$ whose trace is zero? (Note that each such element is the limit in norm of a sequence of elements of the sort considered in Theorem~\ref{thm:atomic}.) It is of particular interest to focus on normal elements that generate maximal self--adjoint abelian subalgebras (masas) in $\Mcal$. Does it make a difference whether the masa is singular or semi-regular? (See~\cite{SS08}.) \end{ques} A particular case: \begin{ques} If $a$ and $b$ freely generate the group $\Fb_2$, let $\lambda_a$ and $\lambda_b$ be the corresponding unitaries generating the group von Neumann algebra $L(\Fb_2)$. Do we have $\lambda_a\in\Comm(L(\Fb_2))$? \end{ques} Our next examples come from ergodic theory. \begin{example}\label{ex:ergodic} Let $\alpha$ be an ergodic, probability measure preserving transformation of a standard Borel probability space $X$, that is not weakly mixing. Consider the hyperfinite II$_1$--factor $R$ realized as the crossed product $R=L^\infty(X)\rtimes_\alphat\Ints$ where $\alphat$ is the automorphism of $L^\infty(X)$ arising from $\alpha$ by $\alphat(f)=f\circ\alpha$. For $f\in L^\infty([0,1])$, we let $\pi(f)$ denote the corresponding element of $R$, and we write $U\in R$ for the implementing unitary, so that $U\pi(f)U^*=\pi(\alphat(f))$.
By a standard result in ergodic theory (see, for example, Theorem 2.6.1 of~\cite{P83}), there is an eigenfunction, i.e., $h\in L^\infty(X)\backslash\{0\}$ so that $\alphat(h)=\zeta h$ for some $\zeta\ne1$; moreover, all eigenfunctions $h$ of an ergodic transformation must have $|h|$ constant. If $g\in L^\infty(X)$, then \begin{align*} [U\pi(g),\pi(h)]=U\pi\big(g\big(h-\alphat^{-1}(h)\big)\big). \end{align*} Since $h-\alphat^{-1}(h)$ is invertible, by making appropriate choices of $g$ we get $U\pi(f)=[U\pi(g),\pi(h)]\in\Comm(R)$ for all $f\in L^\infty(X)$. \end{example} \begin{ques} If $\alpha$ is a weakly mixing transformation of $X$ (for example, a Bernoulli shift), then, with the notation of Example~\ref{ex:ergodic}, do we have $U\pi(f)\in\Comm(R)$ for all $f\in L^\infty(X)$? \end{ques} \begin{example} Assume that $\alphat$ from Example \ref{ex:ergodic} has infinitely many distinct eigenvalues. This is the case for every compact ergodic action $\alpha$ (for example, an irrational rotation of the circle or the odometer action), but can also hold for a non-compact action (for example, a skew rotation of the torus). For every finite set $F\subset\Ints\setminus\{0\}$, there is an eigenvalue $\zeta$ such that $\zeta^k\neq 1$, for any $k\in F$. Let $h$ be an eigenfunction of $\alphat$ corresponding to this eigenvalue $\zeta$; clearly, $|h|$ is a constant. Then, for $g_k\in L^\infty(X)$, \[ \left[\sum_{k \in F}U^k\pi(g_k),\pi(h)\right]=\sum_{k\in F} \left[U^k\pi(g_k),\pi(h)\right] =\sum_{k\in F}U^k\pi\big(g_k\big(h-\alphat^{-k}(h)\big)\big). \] Thus, for any $f_k\in L^\infty(X)$, by choosing $g_k=f_k \big(h-\alphat^{-k}(h)\big)^{-1}$, we obtain \[ \sum_{k\in F} U^k\pi(f_k)\in\Comm(R). \] \end{example} \begin{ques} It is natural to ask Question~\ref{qn:comm} in the particular case of quasinilpotent elements $T$ of $\Mcal$: must they lie in $\Comm(\Mcal)$? From Proposition~4 of~\cite{MW79}, it follows that every quasinilpotent operator $T$ in a II$_1$--factor has trace zero. (Alternatively, use L.\ Brown's analogue~\cite{B86} of Lidskii's theorem in II$_1$--factors and the fact that the Brown measure of $T$ must be concentrated at $0$). \end{ques} \begin{ques} Consider the quasinilpotent DT--operator $T$ (see~\cite{DH04}), which is a generator of the free group factor $L(\Fb_2)$. Do we have $T\in\Comm(L(\Fb_2))$? \end{ques} \begin{example}\label{ex:Tucci} Consider G.\ Tucci's quasinilpotent operator \[ A=\sum_{n=1}^\infty a_n V_n\in R, \] from~\cite{T08}, where $a=(a_n)_{n=1}^\infty\in\ell^1_+$, the set of summable sequences of nonnegative numbers. Here $R=\overline{\bigotimes_1^\infty M_2(\Cpx)}$ is the hyperfinite II$_1$--factor and \begin{equation}\label{def:Vn} V_n=I^{\otimes n-1}\otimes\left(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right)\otimes I\otimes I\otimes\cdots. \end{equation} Tucci showed in Remark~3.7 (p.\ 2978) of~\cite{T08} that $A$ is a single commutator whenever $a=(b_nc_n)_{n=1}^\infty$ for some $b=(b_n)_{n=1}^\infty\in\ell^1$ and $c=(c_n)_{n=1}^\infty\in\ell^1$, by writing $A=[B,C]$, where \begin{align} B&=\sum_{n=1}^\infty b_nV_nV_n^*, \label{eq:Bop} \\ C&=\sum_{n=1}^\infty c_nV_n\,. \label{eq:C} \end{align} Note that, for $a\in\ell^1_+$, there exist $b$ and $c$ in $\ell^1$ such that $a=(b_nc_n)_{n=1}^\infty$ if and only if $\sum_{n=1}^\infty a_n^{1/2}<\infty$, i.e., if and only if $a\in\ell^{1/2}_+$. \end{example} The rest of the paper is concerned with some further results and remarks about Tucci's operators. 
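Before doing so, we remark that the identity $A=[B,C]$ in Example~\ref{ex:Tucci} can be checked at finite truncation: $V_mV_m^*$ and $V_n$ act on different tensor factors for $m\ne n$, while $[V_nV_n^*,V_n]=V_n$. The following sketch (truncating to three tensor factors, with illustrative sequences of our choosing) verifies this numerically.
\begin{verbatim}
import numpy as np
from functools import reduce

E12 = np.array([[0.0, 1.0], [0.0, 0.0]])
I2 = np.eye(2)

def V(n, N=3):
    # V_n of Example ex:Tucci, truncated to N tensor factors
    return reduce(np.kron,
                  [E12 if k == n else I2 for k in range(1, N + 1)])

b = [1.0, 0.5, 0.25]        # illustrative summable sequences
c = [1.0, 0.5, 0.25]

B = sum(b[n-1] * V(n) @ V(n).T for n in range(1, 4))
C = sum(c[n-1] * V(n) for n in range(1, 4))
A = sum(b[n-1] * c[n-1] * V(n) for n in range(1, 4))

print(np.allclose(A, B @ C - C @ B))   # True
\end{verbatim}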
We might try to extend the formula $A=[B,C]$ for $B$ and $C$ as in~\eqref{eq:Bop} and~\eqref{eq:C}, respectively, to other sequences $a\in\ell^1_+$, i.e.\ for $b$ and $c$ not necessarily in $\ell^1$, and where the convergence in~\eqref{eq:Bop} and~\eqref{eq:C} might be in some weaker topology. We first turn our attention to~\eqref{eq:C}. Denoting the usual embedding $R\hookrightarrow L^2(R,\tau)$ by $X\mapsto\Xh$, from \eqref{def:Vn} we see that the vectors $\Vh_n$ are orthogonal and all have $L^2(R,\tau)$-norm equal to $1/\sqrt2$; therefore, the series~\eqref{eq:C} converges in $L^2(R,\tau)$ as soon as $c\in\ell^2$, and we have \begin{equation}\label{eq:Ch} \Ch=\sum_{n=1}^\infty c_n\Vh_n. \end{equation} We easily see (below) that a bounded operator $C\in R$ such that $\Ch$ is given by~\eqref{eq:Ch} exists only for $c\in\ell^1$. \begin{prop}\label{prop:cl1} Let $c\in\ell^2$. Suppose there is a bounded operator $C\in R$ such that $\Ch$ is given by~\eqref{eq:Ch}. Then $c\in\ell^1$. \end{prop} \begin{proof} For any sequence $(\zeta_n)_{n=1}^\infty$ of complex numbers of modulus $1$, there is an automorphism of $R$ sending $V_n$ to $\zeta_nV_n$ for all $n$. Thus, without loss of generality we may assume $c_n\ge0$ for all $n$. Letting $E_n:R\to M_2(\Cpx)^{\otimes n}\otimes I\otimes I\otimes\cdots\cong M_{2^n}(\Cpx)$ be the conditional expectation onto the tensor product of the first $n$ copies of the $2\times 2$ matrices (see Example~\ref{ex:Tucci}), we must have $C_n:=E_n(C)=\sum_{k=1}^n c_kV_k\in M_{2^n}(\Cpx)$. Let $x=2^{-n/2}(1,1,\ldots,1)^t$ be the normalization of the column vector of length $2^n$ with all entries equal to $1$. Taking the usual inner product in $\Cpx^{2^n}$, we see $\langle V_kx,x\rangle=1/2$ for all $k\in\{1,\ldots,n\}$. Thus, \[ \frac12\sum_{k=1}^nc_k=\big|\,\langle C_n x,x\rangle\,\big|\le\|C_n\|\le\|C\|. \] This shows $c\in\ell^1$. \end{proof} Let us now investigate the series~\eqref{eq:Bop} for some sequence $b=(b_n)_{n=1}^\infty$ of complex numbers. We claim that this series gives rise (in a weak sense explained below) to a bounded operator if and only if $b\in\ell^1$. Indeed, for $K$ a finite subset of $\Nats$, we have \[ \left\|\sum_{n\in K}b_nV_nV_n^*\right\|_{L^2(R,\tau)}^2 =\;\frac14\sum_{n\in K}|b_n|^2+\frac14\left|\sum_{n\in K}b_n\right|^2. \] Now suppose $K_1\subseteq K_2\subseteq\cdots$ are finite sets whose union is all of $\Nats$. Then $\sum_{n\in K_p}b_nV_nV_n^*$ converges in $L^2(R,\tau)$ as $p\to\infty$ if and only if $b\in\ell^2$ and $y:=\lim_{p\to\infty}\sum_{n\in K_p}b_n$ exists. Then the limit in $L^2(R,\tau)$ is \begin{equation}\label{eq:Bh} \Bh=\sum_{n=1}^\infty b_n\left(V_nV_n^*-\frac12\right)^{\widehat{\;}}+\frac y2 \oneh. \end{equation} If there is a bounded operator $B$ such that $\Bh$ is given by~\eqref{eq:Bh}, then for every finite $F\subseteq\Nats$, the conditional expectation $E_F(B)$ of $B$ onto the (finite dimensional) subalgebra of $R$ generated by $\{V_nV_n^*\mid n\in F\}$ will be $\sum_{n\in F}b_n(V_nV_n^*-\frac12)+\frac y2$. Taking the projection $P=\prod_{n\in F}V_nV_n^*$, we have $E_F(B)P=\frac12(y+\sum_{n\in F}b_n)P$, so \[ \left|\frac12\left(y+\sum_{n\in F}b_n\right)\right|\le\|E_F(B)\|\le\|B\|. \] As $F$ was arbitrary, this implies $b\in\ell^1$. Suppose $b_nc_n=\frac1{n^r}$ and $b=(b_n)_1^\infty\in\ell^1$. Letting $(b^*_n)_1^\infty$ denote the nonincreasing rearrangement of $(|b_n|)_1^\infty$, we have $b^*_n=o(\frac1n)$ and standard arguments show $c^*_n\ge\frac{K}{n^{r-1}}$ for some constant $K$.
Thus, by Proposition~\ref{prop:cl1}, Tucci's formula for writing $A=[B,C]$ does not work if $a_n=\frac1{n^r}$ for $1<r\le2$, while of course for $r>2$ it works just fine. \begin{ques}\label{qn:Tuccir} Fix $1<r\leq 2$, and let \[ A=\sum_{n=1}^\infty \frac1{n^r}V_n\in R \] be Tucci's quasinilpotent operator in the hyperfinite II$_1$--factor. Do we have $A\in\Comm(R)$? \end{ques} \begin{bibdiv} \begin{biblist} \bib{AM57}{article}{ author={Albert, A. A.}, author={Muckenhoupt, B.}, title={On matrices of trace zero}, journal={Michigan Math. J.}, volume={3}, year={1957}, pages={1--3} } \bib{A72}{article}{ author={Apostol, Constantin}, title={Commutators on $\ell^p$ spaces}, journal={Rev. Roumaine Math. Pures Appl.}, volume={17}, year={1972}, pages={1513--1534} } \bib{A73}{article}{ author={Apostol, Constantin}, title={Commutators on $c_0$ and $\ell^\infty$ spaces}, journal={Rev. Roumaine Math. Pures Appl.}, volume={18}, year={1973}, pages={1025--1032} } \bib{Ba87}{article}{ author={Banaszczyk, Wojciech}, title={The Steinitz constant of the plane}, journal={J. reine angew. Math.}, volume={373}, year={1987}, pages={218--220} } \bib{Ba90}{article}{ author={Banaszczyk, Wojciech}, title={A note on the Steinitz constant of the Euclidean plane}, journal={C. R. Math. Rep. Acad. Sci. Canada}, volume={12}, year={1990}, pages={97--102} } \bib{BP65}{article}{ author={Brown, Arlen}, author={Pearcy, Carl}, title={Structure of commutators of operators}, journal={Ann. of Math. (2)}, volume={82}, year={1965}, pages={112--127} } \bib{BP66}{article}{ author={Brown, Arlen}, author={Pearcy, Carl}, title={Commutators in factors of type III}, journal={Canad. J. Math.}, volume={18}, year={1966}, pages={1152--1160} } \bib{B86}{article}{ author={Brown, Lawrence G.}, title={Lidskii's theorem in the type II case}, conference={ title={Geometric methods in operator algebras}, address={Kyoto}, date={1983} }, book={ series={Pitman Res. Notes Math. Ser.}, volume={123}, publisher={Longman Sci. Tech.}, address={Harlow}, date={1986} }, pages={1--35} } \bib{CD09}{article}{ author={Collins, Beno\^it}, author={Dykema, Ken}, title={On a reduction procedure for Horn inequalities in finite von Neumann algebras}, journal={Oper. Matrices}, volume={3}, year={2009}, pages={1-40} } \bib{Co76}{article}{ author={Connes, Alain}, title={Classification of injective factors}, journal={Ann. Math.}, volume={104}, pages={73--115}, year={1976} } \bib{D09}{article}{ author={Dosev, Detelin}, title={Commutators on $\ell_1$}, journal={J. Funct. Anal.}, volume={256}, year={2009}, pages={3490--3509} } \bib{DJ10}{article}{ author={Dosev, Detelin}, author={Johnson, William B.}, title={Commutators on $\ell_\infty$}, journal={Bull. London Math. Soc.}, volume={42}, year={2010}, pages={155-169} } \bib{DH04}{article}{ author={Dykema, Ken}, author={Haagerup, Uffe}, title={Invariant subspaces of the quasinilpotent DT--operator}, journal={J. Funct. Anal.}, volume={209}, year={2004}, pages={332--366} } \bib{FH80}{article}{ author={Fack, Thierry}, author={de la Harpe, Pierre}, title={Sommes de commutateurs dans les alg\`ebres de von Neumann finies continues}, journal={Ann. Inst. Fourier (Grenoble)}, volume={30}, year={1980}, pages={49--73} } \bib{GS80}{article}{ author={Grinberg, V.S.}, author={Sewast'janow, S.V.}, title={Regarding the value of Steinitz's constant}, journal={Funktsional. Anal. i Prilozhen}, volume={14}, year={1980}, pages={56--57}, translation={ journal={Functional Anal.
and Appl.}, volume={14}, year={1980}, pages={125--126} } } \bib{Had98}{article}{ author={Hadwin, Don}, title={Free entropy and approximate equivalence in von Neumann algebras}, conference={ title={Operator algebras and operator theory}, address={Shanghai}, date={1997} }, book={ series={Contemp. Math.}, volume={228}, publisher={Amer. Math. Soc.}, address={Providence, RI}, year={1998} }, pages={111--131} } \bib{H69}{article}{ author={Halpern, Herbert}, title={Commutators in properly infinite von Neumann algebras}, journal={Trans. Amer. Math. Soc.}, volume={139}, year={1969}, pages={55-73} } \bib{J72}{article}{ author={Janssen, Gerhard}, title={Restricted ultraproducts of finite von Neumann algebras}, conference={ title={Contributions to non-standard analysis}, address={Oberwolfach}, date={1970}, }, book={ series={Studies in Logic and Found. Math.}, volume={69}, publisher={North--Holland}, address={Amsterdam}, date={1972} }, pages={101--114} } \bib{M06}{article}{ author={Marcoux, Laurent}, title={Sums of small numbers of commutators}, journal={J. Operator Theory}, volume={56}, year={2006}, pages={111--142} } \bib{McD70}{article}{ author={McDuff, Dusa}, title={Central sequences and the hyperfinite factor}, journal={Proc. London Math. Soc. (3)}, volume={21}, year={1970}, pages={443--461} } \bib{MW79}{article}{ author={Murphy, Gerard J.}, author={West, T. T.}, title={Spectral radius formulae}, journal={Proc. Edinburgh Math. Soc.}, volume={22}, year={1979}, pages={271--275} } \bib{PT69}{article}{ author={Pearcy, Carl}, author={Topping, David}, journal={J. Funct. Anal.}, title={Commutators and certain II$_1$--factors}, volume={3}, year={1969}, pages={69--78} } \bib{P83}{book}{ author={Petersen, Karl}, title={Ergodic theory}, publisher={Cambridge Univ. Press.}, series={Cambridge studies in advanced mathematics}, volume={2}, year={1983} } \bib{S36}{article}{ author={Shoda, Kenjiro}, title={Einige S\"atze \"uber Matrizen}, journal={Japanese J. Math.}, volume={13}, year={1936}, pages={361--365} } \bib{SS08}{book}{ author={Sinclair, Allan M.}, author={Smith, Roger R.}, title={Finite von Neumann algebras and masas}, series={London Mathematical Society Lecture Note Series}, volume={351}, publisher={Cambridge University Press}, address={Cambridge}, year={2008} } \bib{St13}{article}{ author={Steinitz, Ernst}, title={Bedingt konvergente Reihen und konvexe Systeme}, journal={J. reine angew. Math.}, volume={143}, year={1913}, pages={128--175} } \bib{T08}{article}{ author={Tucci, Gabriel}, title={Some quasinilpotent generators of the hyperfinite II$_1$ factor}, journal={J. Funct. Anal.}, volume={254}, year={2008}, pages={2969--2994} } \bib{W54}{article}{ author={Wright, Fred}, title={A reduction for algebras of finite type}, journal={Ann. of Math. (2)}, volume={60}, year={1954}, pages={560--570} } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Whether or not strong dynamics ultimately explains the hierarchy problem, theories that exhibit confinement are undoubtedly the most elegant way to generate large hierarchies of scales \cite{Gross:1973id,Politzer:1973fx}. In the context of the standard model, there is the unfortunate reality that the electroweak scale ($246 ~\mathrm{GeV}$), the precision electroweak scale ($\sim 10 ~\mathrm{TeV}$), and the flavor scale ($\sim 1000 ~\mathrm{TeV}$) are well separated, and no known strong dynamics can elegantly explain \emph{three} hierarchies. In this light, little Higgs theories \cite{Arkani-Hamed:2001nc,Arkani-Hamed:2002pa,Arkani-Hamed:2002qy,Arkani-Hamed:2002qx,Gregoire:2002ra,Low:2002ws,Kaplan:2003uc,Skiba:2003yf} are a reasonable compromise to the hierarchy problem: strong dynamics separate the precision electroweak scale from the Planck scale, collective breaking explains the ``little hierarchy'' \cite{Cheng:2003ju,Cheng:2004yc} between the confinement scale and the electroweak scale, and the flavor problem is presumably addressed by some sort of GIM mechanism \cite{Glashow:1970gm}. For all the phenomenological successes of little Higgs theories, their structure can seem a bit artificial from the high energy perspective. While the $SU(5)/SO(5)$ littlest Higgs \cite{Arkani-Hamed:2002qy} could arise from strong $SO(N)$ dynamics \cite{Katz:2003sn}, the $\left(SU(3)/SU(2)\right)^2$ simple group little Higgs \cite{Kaplan:2003uc} has no obvious QCD-like UV completion, because it is hard (though not impossible) to imagine that an ordinary confining theory would break an $SU(N)$ flavor symmetry to $SU(N-k)$. In this light, technicolor theories \cite{Weinberg:1975gm,Susskind:1978ms} seem a lot more realistic (if not phenomenologically viable) in that they are simply scaled-up versions of ordinary $SU(N_f)_L \times SU(N_f)_R \rightarrow SU(N_f)_D$ chiral symmetry breaking in QCD. But before we dismiss the little Higgs theories as interesting but artificial constructions, we will actually show that \emph{any} $(SU(N)/H)^n$ little Higgs theory can arise from $n$ copies of QCD with $N$ flavors! In other words, the little Higgs could literally be a pion of QCD. We will show that an $SU(N)/H$ little Higgs theory where an $F$ subgroup of $SU(N)$ is gauged can be described by the moose diagram: \beq{intromoose} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(N)_L && SU(N)_R \\ & *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\psi$}}} & *=<12pt>[o][F=]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\psi^c$}}} & *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &F &SU(N_c)&H} \end{tabular} \end{equation} This is just QCD with $N$ flavors where some of the flavor symmetries have been gauged. As we will see, this ``little technicolor'' moose emerges quite naturally from deconstructing the AdS dual of some quasi-CFT. However, even without insight from AdS/CFT, we could study \eq{intromoose} in its own right as a novel UV completion of little Higgs theories. The main results of this paper are contained in section \ref{sec:intro}: we present the AdS/CFT inspiration for little technicolor and show how it connects to the known formalisms of CCWZ \cite{Coleman:1969sm,Callan:1969sn} and Hidden Local Symmetry (HLS) \cite{Bando:1987br}; we then reinterpret the HLS construction in a novel way to arrive at the little technicolor moose. 
In section \ref{sec:four}, we apply the little technicolor construction to the $\left(SU(3)/SU(2)\right)^2$ simple group little Higgs to show how the same low energy degrees of freedom can arise from four very different theories: a straightforward application of AdS/CFT, two copies of QCD, one copy of QCD in the vector limit \cite{Georgi:1989xy}, and a known AdS$_5$ construction \cite{Contino:2003ve} that superficially does not look like a little Higgs theory (but really is). We return briefly to AdS space in section \ref{sec:holo} to show how ``integrating in'' the IR brane can turn holographic composite Higgs models into brane-localized little Higgs theories. We comment on vacuum alignment issues in section \ref{sec:fvsh}, and we conclude with some outstanding questions about more general little Higgs theories and speculations on the Wess-Zumino-Witten term \cite{Wess:1971yu,Witten:1983tw}. \section{From AdS/CFT to QCD via CCWZ and HLS} \label{sec:intro} The starting point for our analysis is the AdS/CFT correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} and its phenomenological interpretation \cite{Arkani-Hamed:2000ds,Rattazzi:2000hs}. There is a straightforward way to construct the AdS dual of a CFT that yields a $G/H$ nonlinear sigma model at low energies and where a subgroup $F \subset G$ is gauged: simply consider a slice of AdS$_5$ \cite{Randall:1999ee} with bulk $G$ gauge bosons where the gauge symmetry is reduced to $F$ on the UV brane and $H$ on the IR brane \cite{Contino:2003ve}: \medskip \beq{duality} \begin{tabular}{c} \includegraphics[scale=0.55]{bulksym} \end{tabular} \end{equation} This construction was studied in the context of the littlest Higgs in \cite{Thaler:2005en}. In this paper, we take the obvious next step and deconstruct the warped dimension \cite{Arkani-Hamed:2001ca}. The link fields in the moose are the Wilson lines constructed out of $A_5$, and the warp factor is reflected in the different decay constants on the links \cite{Randall:2002qr}: \beq{manysitemoose} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & G && G & & G && G \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}} && *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} & *=<20pt>[o]{\cdots} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}} && *=<20pt>[o][F]{} \\ \mathrm{Gauged:} & F && G & & G & & H} \end{tabular} \end{equation} Going to the extreme where we only introduce sites corresponding to the UV and IR branes, we arrive at a moose diagram which at low energies is supposed to describe a $G/H$ nonlinear sigma model with $F \subset G$ gauged: \beq{uvirmoose} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & G && G \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\xi$}}} && *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &F &&H&} \end{tabular} \end{equation} In this AdS/CFT-inspired $G/H$ moose, the subgroup $H$ (which was a global symmetry from the low energy perspective) has become a gauge symmetry. Now, this construction of a $G/H$ nonlinear sigma model is quite well-known and is referred to in the literature as Hidden Local Symmetry (HLS) (see \cite{Bando:1987br} for a review). The link field $\xi$ is identified with the Goldstone matrix in the CCWZ prescription \cite{Coleman:1969sm,Callan:1969sn}, and the $H$ gauge bosons are interpreted as auxiliary fields.
In appendix \ref{sec:warp}, we show how to generate the $G/H$ nonlinear sigma model by explicitly integrating out the bulk of AdS$_5$, but starting from the $G/H$ moose in \eq{uvirmoose}, it is trivial to reproduce the CCWZ results in the spirit of HLS. The effective lagrangian for $\xi = e^{i \Pi/ f}$ is \beq{mooseeft} \mathcal{L} = -\frac{1}{2g_F^2}\tr F_{\mu\nu}^2 - \frac{1}{2g_H^2}\tr H_{\mu\nu}^2 + f^2 \tr |D_\mu \xi|^2, \qquad D_\mu \xi = \partial_\mu \xi +i F_\mu \xi -i \xi H_\mu, \end{equation} where $\Pi \in G$ is the Goldstone matrix, $f$ is the Goldstone decay constant, and $F_\mu$ and $H_\mu$ are the gauge fields associated with the $F$ and $H$ gauge groups. We can go to a unitary gauge where $\Pi \in G/(F \cup H)$. At tree level, the spectrum consists of the massless Goldstones from $\xi$, the massless gauge bosons in $F \cap H$, and the massive gauge bosons in $F \cup H$. In the limit $g_H \rightarrow \infty$, the massive gauge bosons in $H$ decouple, and we can simply integrate out $H_\mu$. For convenience, we decompose the object $\xi^\dagger (\partial_\mu + i F_\mu) \xi$ into elements of $H$ and $G/H$: \begin{equation} \label{vpdecomp} \xi^\dagger (\partial_\mu + i F_\mu) \xi \equiv v_\mu + p_\mu, \qquad v_\mu \in H, \qquad p_\mu \in G/H. \end{equation} To leading order, the $H_\mu$ equation of motion sets \begin{equation} i H_\mu \equiv v_\mu. \end{equation} Plugging this value of $H_\mu$ into \eq{mooseeft} in the $g_H \rightarrow \infty$ limit: \beq{ccwzlagrange} \mathcal{L}= -\frac{1}{2g_F^2}\tr F_{\mu\nu}^2 + f^2 \tr p^\mu p^\dagger_\mu, \end{equation} which is precisely the CCWZ phenomenological lagrangian. Similarly, if a fermion $\psi$ is charged under $H$, $\psi \rightarrow h \psi$, its kinetic term before and after integrating out $H_\mu$ is \beq{fermionvmucoupling} \mathcal{L}_\psi = \bar{\psi} i \bar{\sigma}^\mu (\partial_\mu + i H_\mu) \psi, \qquad \mathcal{L}_\psi = \bar{\psi} i \bar{\sigma}^\mu (\partial_\mu + v_\mu) \psi + \frac{1}{f^2} \left(\mbox{four-fermion operators}\right), \end{equation} in agreement with CCWZ and na\"{\i}ve dimensional analysis (NDA) \cite{Manohar:1983md,Georgi:1986kr} for the size of four-fermion operators. More generally, for finite $g_H$, it is easy to show that the size of higher order interactions from integrating out $H_\mu$ is consistent with NDA with the choice $\Lambda = g_H f$. Whereas the original CCWZ prescription for arbitrary $G/H$ required some insight to understand the importance of $v_\mu$ and $p_\mu$, the lagrangian in \eq{mooseeft} is trivial, and the CCWZ prescription falls out immediately from integrating out $H_\mu$. For finite $g_H$, this HLS construction can be used as a model for $\rho$ mesons in confining theories \cite{Bando:1984ej} (more precisely, $\rho$ mesons in the ``vector limit'' \cite{Georgi:1989xy}; see also \cite{Harada:2003jx}). Indeed, in the AdS picture, the $H$ gauge symmetry on the IR brane is the ``gauge symmetry'' associated with light spin-1 KK modes which are holographically identified as CFT resonances. It has been observed that repeated applications of HLS as a model of higher $\rho$-like resonances can reconstruct an extra dimension \cite{Bando:1985rf, Bando:1987ym,Son:2003et,Piai:2004yb}, and in this light, the CCWZ prescription (as interpreted through the $G/H$ moose) could be seen as the first hint that confining theories holographically generate an extra dimension.
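For completeness, let us record the one-line algebra connecting \eq{mooseeft} to \eq{ccwzlagrange} (a sketch, assuming as usual that the generators of $H$ and of $G/H$ are orthogonal under the trace). Since $\xi$ is unitary, \begin{equation} f^2 \tr |D_\mu \xi|^2 = f^2 \tr \left| \xi^\dagger D_\mu \xi \right|^2 = f^2 \tr \left| v_\mu - i H_\mu \right|^2 + f^2 \tr \left| p_\mu \right|^2, \end{equation} where the cross terms vanish by trace-orthogonality. As $g_H \rightarrow \infty$ the $H$ kinetic term drops out, $H_\mu$ becomes auxiliary, and minimizing the first square sets $i H_\mu = v_\mu$, leaving precisely $f^2 \tr |p_\mu|^2$. For finite $g_H$, expanding around this solution shows that the $H$ gauge bosons acquire masses of order $g_H f$, consistent with the NDA choice $\Lambda = g_H f$ quoted above.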
In the context of little Higgs theories, the $G/H$ moose was implicitly studied in \cite{Piai:2004yb} to try to understand the UV sensitive parameters in the $SU(5)/SO(5)$ littlest Higgs, and the authors found that by including an $SO(5)$'s worth of $\rho$ mesons, all the UV sensitive parameters in the original littlest Higgs became finite to one-loop order. (See \cite{Harada:2003xa} in the context of ordinary QCD.) This is easy to understand from the point of view of the $G/H$ moose. When $g_H = 0$, all of the Goldstones in $\xi$ are exact, so the gauge coupling $g_H$ acts like a spurion for the soft breaking of $SU(5)$. In the gauge sector of the littlest Higgs, the Higgs potential is now controlled by three spurions instead of just two, so by collective breaking there can be no one-loop quadratically divergent contributions to the Higgs potential. Similarly, by extending the moose as in \eq{manysitemoose} to approximate a tower of spin-1 resonances, the Higgs potential can be made arbitrarily well behaved, because the gauge coupling on each site now acts like a spurion for $SU(5)$ breaking \cite{Arkani-Hamed:2001nc}. This feature has been used to calculate the Higgs potential in AdS$_5$ models where the Higgs is a holographic pseudo-Goldstone boson \cite{Contino:2003ve,Agashe:2004rs,Thaler:2005en}, and as we will discuss in section \ref{sec:four}, there is always a notion of collective breaking in any holographic Higgs model because both $g_F$ and $g_H$ must be turned on for the Goldstones in $\xi$ to acquire a radiative potential. So to date, HLS has been a way to discuss the spin-1 phenomenology of confining theories. In this paper, we observe that the $H$ gauge symmetry can also survive deep into the ultraviolet! In the case that $G = SU(N)$, we can write down a simple UV completion for the moose diagram in \eq{uvirmoose}. (The links now correspond to fermions.) \begin{equation} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(N)_L && SU(N)_R \\ & *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\psi$}}} & *=<12pt>[o][F=]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\psi^c$}}} & *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &F &SU(N_c)&H} \end{tabular} \end{equation} We recognize this as the moose diagram for QCD with $N$ flavors, where some of the flavor symmetries have been gauged. When $SU(N_c)$ confines, the fermion condensate $\langle \psi \psi^c \rangle$ will become the link field $\xi$. Specializing to the littlest Higgs (with only one hypercharge generator): \beq{littlehiggsmoose} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(5)_L && SU(5)_R \\ & *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\psi$}}} & *=<12pt>[o][F=]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\psi^c$}}} & *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &SU(2)^2 \times U(1)_Y &SU(N_c)&SO(5)} \end{tabular} \end{equation} Apparently, the $SU(5)/SO(5)$ littlest Higgs is contained in ordinary QCD with five flavors with the appropriate gauging of the flavor symmetries. In other words, this theory \emph{is} technicolor, albeit with different gauge groups. Below the confinement scale, the $SO(5)$ gauge bosons will become heavy via chiral symmetry breaking, and assuming the $SO(5)$ gauge coupling is large enough, we can integrate them out to generate an $SU(5)/SO(5)$ nonlinear sigma model.
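As a quick check on the degrees of freedom in \eq{littlehiggsmoose} (simple counting, no new input): chiral symmetry breaking produces $\dim[SU(5)_L \times SU(5)_R / SU(5)_V] = 24$ pions, and the massive $SO(5)$ gauge bosons eat $\dim[SO(5)] = 10$ of them, so \begin{equation} 24 - 10 = 14 = \dim\left[SU(5)/SO(5)\right] \end{equation} pseudo-Goldstones survive, exactly the littlest Higgs coset, which contains the electroweak doublet.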
In other words, this ``littlest technicolor'' model has exactly the same low energy phenomenology as the original littlest Higgs, despite the fact that we have introduced a new gauge symmetry. Of course, now that we understand that little Higgs theories could come from technicolor-like theories, we inevitably encounter some of the same tensions from technicolor, such as how to generate fermion Yukawa couplings without introducing new sources of flavor violation \cite{Eichten:1979ah,Dimopoulos:1979es}. (We discuss some aspects of fermions in appendix \ref{sec:fermions}.) Also, as presented, the moose in \eq{littlehiggsmoose} has anomalies associated with the $F$ gauge group (but no anomalies that involve $H$ because $\psi^c$ transforms as a real representation of $SO(5)$), so one would have to hope that some of the standard model fermions act as spectators to cancel these anomalies. On the other hand, it is remarkable that such a simple UV completion exists. Before going on to discuss little technicolor in the context of the simple group little Higgs, we remark that our attitude to spin-1 fields is similar in spirit to the Abbott-Farhi model of electroweak gauge bosons \cite{Abbott:1981re,Abbott:1981yg}. Ignoring hypercharge, the moose diagram for a massive $W$ boson with a custodial $SU(2)$ symmetry is: \beq{abbotfarhistart} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(2) && SU(2) \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}} && *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &SU(2) && &} \end{tabular} \end{equation} If the only thing we knew about the $W$ boson was that it was a massive spin-1 field, then we could imagine many different UV completions for \eq{abbotfarhistart} that would unitarize $W$ longitudinal mode scattering. In the standard model, the link field is assumed to be UV completed into a linear sigma field. In technicolor, the link field comes from a fermion condensate. We could also imagine this moose diagram as a picture of a slice of AdS$_5$. If we let the gauged $SU(2)$ site be the UV brane, \begin{equation} \begin{tabular}{c|c} & Gauge Symmetry\\ \hline UV Brane & $SU(2)$ \\ Bulk & $SU(2)$ \\ IR Brane & $\emptyset$ \end{tabular} \qquad \mathop{\Longleftrightarrow}^{\raisebox{.4ex}{Dual}} \qquad \begin{tabular}{c} CFT with Gauged $SU(2)$ \\ $+$\\ $SU(2)/ \emptyset$ Symmetry Breaking \end{tabular} \end{equation} then the $W$ gauge boson gets a mass via spontaneous symmetry breaking, \`{a} la technicolor. Alternatively, we could reverse the roles of the UV and IR branes, \begin{equation} \begin{tabular}{c|c} & Gauge Symmetry\\ \hline UV Brane & $\emptyset$ \\ Bulk & $SU(2)$ \\ IR Brane & $SU(2)$ \end{tabular} \qquad \mathop{\Longleftrightarrow}^{\raisebox{.4ex}{Dual}} \qquad \begin{tabular}{c} CFT with Global $SU(2)$ \\ $+$\\ No Symmetry Breaking \end{tabular} \end{equation} in which case the $W$ boson is a $\rho$-like meson of a confining theory, which is the Abbott-Farhi model. Of course, the detailed properties of an ultraviolet gauge boson and a composite spin-1 meson are very different, and we can distinguish between Abbott-Farhi, technicolor, and the standard model through precision electroweak tests. But given a generic massive spin-1 field, we have no \emph{a priori} reason to label it a $W$-like vs. a $\rho$-like state. As we will see for the simple group little Higgs, by allowing ourselves maximal flexibility in relabeling the spin-1 spectrum, we can come up with many UV complete models with the same low energy physics.
\section{Four Views of the Simple Group Little Higgs} \label{sec:four} We have seen that the $H_\mu$ field can be an auxiliary field for deriving CCWZ or a model of the $\rho$-like mesons in HLS. The key point of this paper is that it is entirely consistent for the $H_\mu$ field to be a real gauge field that survives deep into the ultraviolet. The simple group little Higgs model \cite{Kaplan:2003uc} is an ideal laboratory to study this possibility, because the symmetry structure of the theory makes it possible to imagine many different UV completions. In essence, we will show that the distinction between $W'$-like gauge bosons and $\rho$-like mesons can be blurred in these models. Moreover, we will understand that holographic composite Higgs models are actually little Higgs theories in the sense that there is always a meaning to collective breaking. The simple group little Higgs is based on an $(SU(3)/SU(2))^2$ symmetry breaking pattern where the diagonal $SU(3)$ symmetry is gauged. (We are ignoring hypercharge for simplicity.) Using the AdS/CFT correspondence as in \eq{duality}, we can easily construct an AdS$_5$ model to recover the low energy physics of the simple group theory. The brane and bulk gauge symmetries are: \beq{adssimplegroup} \begin{tabular}{c|c} & Gauge Symmetry\\ \hline UV Brane & $SU(3)_V$ \\ Bulk & $SU(3)_1 \times SU(3)_2$ \\ IR Brane & $SU(2)_1 \times SU(2)_2$ \end{tabular} \end{equation} At low energies, there are the massless $SU(2)_{EW}$ gauge bosons and an $SU(3)/SU(2)$'s worth of Goldstones, which contain an electroweak doublet (the Higgs) and an electroweak singlet. Like all little Higgs theories, the simple group theory exhibits collective breaking in that no single interaction breaks the global symmetries that protect the Higgs potential, so the Higgs mass is not quadratically sensitive to the cutoff (\emph{i.e.}\ the confinement scale). It is easy to see how collective breaking works in AdS space in terms of boundary conditions on the UV and IR branes \cite{Thaler:2005en}. When the gauge boson boundary conditions are Neumann on both branes, then we can go to $A_5 = 0$ gauge, and all the Goldstones are exact (\emph{i.e.}\ they are all eaten). Similarly, when the gauge boson boundary conditions are Dirichlet on both branes, then there are no massless gauge fields for the Goldstones to mix with, so all the Goldstones are exact. Only when different components of the bulk gauge field have different boundary conditions on each brane is it possible to have pseudo-Goldstone bosons. In the simple group theory, there are three interactions that have to be turned on for the symmetries that protect the Higgs to be broken: $SU(3)_V$ has to be gauged on the UV brane, $SU(3)_1$ has to be reduced to $SU(2)_1$ on the IR brane, and similarly for $SU(3)_2$. The lightest KK modes with Neumann boundary conditions on the IR brane are holographically identified with the $\rho$-like mesons of the confining CFT. We will now perform the two-site deconstruction of the AdS$_5$ model in \eq{adssimplegroup} to show that these $\rho$ mesons can be interpreted as $W'$ gauge bosons in a theory of two copies of QCD with $N_f = 3$.
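The same counting as before applies here (again a check using only group theory): the $(SU(3)/SU(2))^2$ breaking pattern yields $2 \times (8 - 3) = 10$ Goldstones, of which $8 - 3 = 5$ are eaten when the gauged diagonal $SU(3)$ is broken to $SU(2)_{EW}$, leaving \begin{equation} 10 - 5 = 5 = \underbrace{4}_{\text{complex doublet}} + \underbrace{1}_{\text{real singlet}} \end{equation} real pseudo-Goldstone fields: the Higgs doublet and the singlet mentioned above.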
Following the example of \eq{uvirmoose}, the relevant moose diagram is: \begin{equation} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(3)^2 && SU(3)^2& \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}} && *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &SU(3)_V &&SU(2)^2&} \end{tabular} \end{equation} Because $SU(3)^2$ is a direct product space, we can decompose this moose into two pieces: \beq{vectorlimitmoose} \begin{tabular}{c} \xy \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(3) && SU(3) & SU(3) && SU(3)& \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}} && *=<20pt>[o][F]{} & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}} && *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &SU(2) &&\ar @{} [r] |{\mbox{\raisebox{0.0ex}{$SU(3)_V$}}} &&& SU(2)& \save "2,4"-(5,5);"2,5"+(5,5) **\frm{--} \restore} \endxy \end{tabular} \end{equation} The $SU(3)_V$ gauge symmetry breaks the approximate $SU(3)$ global symmetries on the boxed sites, so we might as well combine those two sites into a single site. We can now UV complete the link fields into fermion condensates from two $SU(N_c)$ confining theories (arrows now correspond to fermions): \begin{equation} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(3) && SU(3) && SU(3) & \\ & *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} & *=<12pt>[o][F=]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} & *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}& *=<12pt>[o][F=]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}& *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &SU(2) &SU(N_c)&SU(3) & SU(N_c) & SU(2) &} \end{tabular} \end{equation} We see that the low energy degrees of freedom in the simple group theory can arise from two copies of QCD with $N_f = 3$, where some of the flavor symmetries have been gauged. Indeed, the $SU(2)^2$ gauge symmetries which had been associated with the holographic $\rho$-like mesons in the AdS$_5$ model are now ultraviolet gauge symmetries associated with $W$ and $W'$ gauge fields. If we had deconstructed the original AdS$_5$ model with three sites instead of two, we could do the exact same construction to see that the simple group theory could be embedded into four copies of QCD, each with $N_f = 3$ and the appropriate flavor symmetries gauged. It is obvious from this construction that any $(SU(N)/H)^n$ little Higgs theory where some subgroup of $SU(N)^n$ is gauged can arise from $n$ copies of QCD with $N$ flavors with the appropriate gauging of the flavor symmetries. There is an even more interesting UV completion of the moose in \eq{vectorlimitmoose} inspired by the HLS model of the $\rho$ meson in ordinary QCD \cite{Bando:1984ej}. As postulated by \cite{Georgi:1989xy}, there may be a ``vector limit'' of QCD (possibly the large $N_c$ limit) where the $\rho$ meson is anomalously light compared to other QCD resonances.
Above the confinement scale, the QCD moose with $N_f$ fermion flavors is: \begin{equation} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(N_f)_L && SU(N_f)_R & \\ & *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} & *=<12pt>[o][F=]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} & *=<20pt>[o][F]{} \\ \mathrm{Gauged:} & &SU(N_c)&} \end{tabular} \end{equation} When we describe only the pions of QCD, the low energy description is in terms of the nonlinear sigma model moose: \begin{equation} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(N_f)_L && SU(N_f)_R & \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}} & & *=<20pt>[o][F]{}} \end{tabular} \vspace{.15in} \end{equation} The hypothesis of \cite{Georgi:1989xy} is that the effective description of the pions and $\rho$ mesons in the vector limit of QCD should be: \beq{vectorlimittwo} \begin{tabular}{c} \xy \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(N_f)_L && SU(N_f)_L' & SU(N_f)_R' && SU(N_f)_R \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}} && *=<20pt>[o][F]{} & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}} && *=<20pt>[o][F]{} \\ \mathrm{Gauged:} & &&\ar @{} [r] |{\mbox{\raisebox{0.0ex}{$SU(N_f)_V$}}} && \save "2,4"-(5,5);"2,5"+(5,5) **\frm{--} \restore} \endxy \end{tabular} \end{equation} where $SU(N_f)_V$ is the gauge symmetry associated with the $\rho$ mesons. The point of the vector limit is that as the $SU(N_f)_V$ gauge coupling is taken to zero, there could be an enhanced $SU(N_f)^4$ ``flavor'' symmetry in QCD which would allow the longitudinal components of $\rho$ to be the exact chiral partners of the pions. In fact, the AdS$_5$ metaphor for large $N_c$ QCD with $N_f$ flavors (and no flavor symmetries gauged) realizes this scenario explicitly (see \emph{e.g.}~\cite{Erlich:2005qh} when $N_c = 3$). The gauge symmetries in the AdS$_5$ metaphor are: \begin{equation} \begin{tabular}{c|c} & Gauge Symmetry\\ \hline UV Brane & $\emptyset$ \\ Bulk & $SU(N_f)_L \times SU(N_f)_R$ \\ IR Brane & $SU(N_f)_V$ \end{tabular} \end{equation} and when we deconstruct the AdS space, we indeed generate the moose in \eq{vectorlimittwo}, and the higher AdS KK states are well separated from the $\rho$ meson. What is amusing is that in ordinary QCD, $SU(N_f)_V$ is the gauge symmetry associated with the $\rho$ meson, whereas in the original inspiration for the simple group little Higgs, $SU(3)_V$ is a gauge symmetry associated with an ultraviolet gauge boson. From the point of view of low energy degrees of freedom, however, the mooses in \eqs{vectorlimitmoose}{vectorlimittwo} are identical if we set $N_f = 3$ and gauge an $SU(2)^2$ subgroup of $SU(3)_L \times SU(3)_R$. So we see that the simple group theory can arise from one copy of QCD with 3 flavors in the vector limit! What was an ultraviolet $W$ gauge boson in the original theory is a $\rho$ meson in this QCD model, but by construction, both theories have the same low energy physics. From ordinary QCD where $N_f = N_c = 3$, we know that $g_\rho = m_\rho / f_\pi \sim 8$, which is too large to be the perturbative $SU(3)_V$ gauge coupling in the simple group little Higgs. The question is whether we can decrease $g_\rho$ by going to a large $N_c$ theory. Conventionally, $m_\rho$ is thought to be fixed as $N_c$ increases, so by using the $\sqrt{N_c}$ scaling of $f_\pi$ in large $N$ theories \cite{'tHooft:1973jz}, we would predict that $g_\rho$ scales as $1/\sqrt{N_c}$.
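To attach numbers to this estimate (using the standard measured values $m_\rho \approx 775~\mathrm{MeV}$ and $f_\pi \approx 93~\mathrm{MeV}$, together with the large $N$ scalings just quoted): \begin{equation} g_\rho = \frac{m_\rho}{f_\pi} \approx \frac{775~\mathrm{MeV}}{93~\mathrm{MeV}} \approx 8.3, \qquad g_\rho \propto \frac{1}{\sqrt{N_c}} \quad \text{if } m_\rho \text{ is fixed and } f_\pi \propto \sqrt{N_c}. \end{equation}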
However, if $m_\rho$ is fixed as $N_c$ increases, then there is no reason to expect $m_\rho$ to be well separated from ``QCD string'' states unless we are in a conformal window as in AdS/CFT or if the value of $N_f$ puts us close to the chiral symmetry breaking phase transition \cite{Harada:2000kb}. In other words, while $g_\rho$ would be small at large $N_c$, the effective theory would not be well described by the moose in \eq{vectorlimittwo} because of pollution from higher QCD resonances. A hypothesis presented in \cite{Georgi:1989xy} is that the vector limit could arise if $g_\rho$ decreases \emph{faster} than $1/\sqrt{N_c}$, such that $m_\rho$ goes to zero in the large $N_c$ limit. This would guarantee a parametric separation between the $\rho$ mesons and higher QCD resonances, justifying the moose description in \eq{vectorlimittwo}. Finally, we can interpret \eq{vectorlimitmoose} as a picture of an AdS$_5$ space where the left-most site is associated with the UV brane and the right-most site is associated with the IR brane. \begin{equation} \begin{tabular}{c|c} & Gauge Symmetry\\ \hline UV Brane & $SU(2)$ \\ Bulk & $SU(3)$ \\ IR Brane & $SU(2)$ \end{tabular} \end{equation} Modulo hypercharge, this symmetry pattern is exactly the symmetry pattern of the original realization of the Higgs as a holographic pseudo-Goldstone boson \cite{Contino:2003ve}. Like the $(SU(3)/SU(2))^2$ simple group, the minimal model in \cite{Contino:2003ve} does not have a natural mechanism for generating a large Higgs quartic coupling, and this is to be expected because both theories have the same low energy degrees of freedom. So without additional dynamics, neither model can successfully trigger electroweak symmetry breaking. The $(SU(4)/SU(3))^4$ simple group \cite{Kaplan:2003uc} does have a way to generate a large Higgs quartic coupling, so an AdS$_5$ model with bulk $SU(4)$ or $SU(4)^2$ gauge bosons might be a good starting point to construct a viable model of a composite Higgs boson, assuming there is an AdS$_5$ analog of the interactions that generate the low energy quartic coupling. (See, however, section \ref{sec:fvsh}.) But whereas we call the simple group theory a ``little Higgs'' theory, generic implementations of the Higgs as a holographic pseudo-Goldstone boson are not referred to as little Higgs theories in the literature. As we mentioned at the beginning of this section, though, \emph{any} AdS$_5$ model with an electroweak doublet Goldstone is a little Higgs theory because there is always a notion of collective breaking, namely collectively choosing boundary conditions on the UV and IR branes. In terms of the two-site moose from \eq{uvirmoose}, collective breaking means that both the $g_F$ and $g_H$ gauge couplings must be turned on in order for the uneaten Goldstones to acquire a radiative potential, and when we interpret the $H_\mu$ gauge fields as $\rho$-like mesons from a confining theory, there is a useful notion of collective breaking to the extent that the $\rho$s are light (\emph{i.e.}\ the $H_\mu$ gauge coupling is small, or equivalently, we are in the vector limit of the confining theory). In other words, what makes an interesting theory of a composite Higgs boson is not collective breaking \emph{per se}, but whether the Higgs potential can successfully trigger electroweak symmetry breaking while maintaining a large enough hierarchy between the confinement scale and the electroweak scale to avoid precision electroweak constraints. 
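To make the notion of collective breaking quantitative (a schematic, order-of-magnitude sketch that suppresses group theory and $\mathcal{O}(1)$ factors), the statement that both $g_F$ and $g_H$ act as spurions means the leading radiative potential for the uneaten Goldstones takes the form \begin{equation} V(\Pi) \sim \frac{g_F^2\, g_H^2}{16\pi^2}\, f^4 \log\!\left(\frac{\Lambda^2}{m_\rho^2}\right) \hat{V}\!\left(\Pi/f\right) + \cdots, \end{equation} which vanishes as either $g_F \rightarrow 0$ or $g_H \rightarrow 0$. In the HLS interpretation, a light $\rho$ (small $g_H$) therefore suppresses the Higgs potential in just the same way as a weakly gauged $F$ does.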
Any old symmetry breaking pattern that yields an electroweak doublet Goldstone will be a little Higgs theory if there are light $\rho$ mesons in the spectrum, but only very special theories can generate a light Higgs with a large quartic coupling, the necessary ingredients for a successful composite Higgs model. In the case that the approximate global symmetries are $SU(N)$s, we can use the technique described in this section to UV complete $(SU(N)/H)^n$ little Higgs models into technicolor. The littlest Higgs satisfies both these criteria, and in appendix \ref{sec:fermions} we sketch how to implement the top sector for the moose in \eq{littlehiggsmoose}. \section{Holographic Pseudo-Goldstones as Little Higgses} \label{sec:holo} We have argued that holographic composite Higgs models are little Higgs theories to the extent that we can define a notion of collective breaking in AdS space, and this fact is obvious in light of the AdS/CFT correspondence. If the gauge symmetry $F$ on the UV brane is equal to $G$, then all of the flavor symmetries in the dual CFT are gauged so there are no uneaten Goldstone modes. If the gauge symmetry $H$ on the IR brane is equal to $G$, then there is no spontaneous symmetry breaking in the dual theory and therefore no Goldstones. (Analogous arguments hold when $F$ or $H$ are empty.) In other words, collective breaking in AdS space reduces to the condition for the existence of pseudo-Goldstone bosons in the dual CFT \cite{Georgi:1975tz}. Inspired by the $G/H$ moose, there is another way that we can see collective breaking in models with holographic pseudo-Goldstone bosons. By ``integrating in'' the IR brane, we will see that these models have identical low energy physics to brane-localized little technicolor theories, and the radiative Higgs potential can be made arbitrarily finite without ever having to discuss the details of the warped dimension. In effect, the finiteness of the Higgs potential has to do only with a small slice of AdS space, and if we are interested only in studying a calculable composite Higgs model, then we need not invoke the full machinery of AdS/CFT. We will be able to turn holographic $\rho$ mesons into brane-localized $W'$ gauge bosons, and the low energy physics will be shielded from bulk dynamics. While the original inspiration for the $G/H$ moose came from AdS/CFT, we show in appendix \ref{sec:warp} that a $G/H$ nonlinear sigma model can arise from any extra dimension with boundaries, including flat space. One advantage of AdS space over flat space is that integrating out heavy modes is almost exactly the same as integrating out sites of the deconstructed moose. If the lattice spacing is much larger than the AdS length, then from \eq{unitarygaugeAdSaction}, we see that integrating out the UV brane site is almost exactly the same as integrating out the heaviest mode in the spectrum. In continuum language, we can move the UV brane while keeping the low energy physics fixed by allowing couplings on the UV brane to run, and this running is holographically dual to renormalization group running in the CFT. In flat space, integrating out heavy modes involves all of the sites, and therefore introduces interactions between the remaining sites that are non-local in theory space. In AdS space, the induced non-local interactions are exponentially suppressed and can be ignored to an excellent approximation. 
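As a concrete entry in this dictionary (the standard tree-level matching for a bulk gauge field in a slice of AdS$_5$, quoted here for illustration), the effective 4D coupling of a gauge boson zero mode is \begin{equation} \frac{1}{g_4^2} = \frac{R}{g_5^2}\, \ln\!\left(\frac{z_{\rm IR}}{z_{\rm UV}}\right) + \left(\text{brane-localized contributions}\right), \end{equation} where $R$ is the AdS radius and $z_{\rm UV,IR}$ are the brane positions in conformal coordinates. Sliding the UV brane inward while shifting the brane-localized coupling leaves $g_4$ fixed, which is the holographic image of logarithmic renormalization group running in the CFT.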
For the same reason, if we want to move the IR brane while keeping the low energy physics fixed, we have to introduce new degrees of freedom on the IR brane. This takes into account the fact that by moving the IR brane, we are integrating out a light degree of freedom which has $\mathcal{O}(1)$ overlap with the IR brane site, so we have to integrate this degree of freedom back into the spectrum. By deconstructing AdS as in appendix \ref{sec:warp}, it is easy to see that the following theories have identical low energy physics: \medskip \beq{littlesimilar} \begin{tabular}{c} \includegraphics[scale=0.50]{littlesym} \end{tabular} \end{equation} where we have placed the $N$ site moose \beq{nsitemoose} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{G_{\rm global} & G_{\rm gauged} && G_{\rm gauged} & H_{\rm gauged} \\ *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} & *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} &\cdots \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} & *=<20pt>[o][F]{}\ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}} & *=<20pt>[o][F]{} } \end{tabular} \vspace{.15in} \end{equation} on the IR brane and identified $G_{\rm global}$ with the bulk gauge symmetry. In order to reproduce the desired warp factor as in \eq{deconAdSaction}, we have to give an exponential profile to the decay constants on the $N$ site moose. If the number of sites is small, then we may not even need to UV complete the link fields on the brane because unitarity would not be saturated until well beyond the local quantum gravity scale. Alternatively, the link fields could be embedded into linear sigma models without any considerable fine-tuning, or we could UV complete \eq{nsitemoose} into brane-localized little technicolor. What is interesting about integrating in the IR brane is that it makes it clear that only a small slice of AdS space is necessary to get a finite radiative potential for the holographic pseudo-Goldstones. In the case of the $SU(5)/SO(5)$ littlest Higgs, the one-loop radiative Higgs potential for an AdS space with \begin{equation} F = SU(2)^2 \times U(1), \qquad G = SU(5), \qquad H = SO(5), \end{equation} was calculated in \cite{Thaler:2005en}. However, by \eq{littlesimilar}, we could just as well transition to the analysis of \cite{Piai:2004yb} for the moose in \eq{nsitemoose} with an $F$ subgroup of the $G_{\rm global}$ site gauged. In other words, for purposes of understanding the radiative potential for a composite Higgs, a holographic composite model and an $N$ site moose localized on the IR brane will give nearly identical results, and the results can be made arbitrarily similar by adjusting the decay constants and gauge couplings on the $N$ site moose to match the holographic resonances of the initial CFT. For concreteness, in the original littlest Higgs \cite{Arkani-Hamed:2002qy}, the radiative Higgs quartic coupling from the gauge sector was quadratically sensitive to the cutoff: \begin{equation} \lambda \sim \frac{\Lambda^2}{(4\pi f)^2} g_{EW}^2, \end{equation} where $g_{EW}$ is the low energy $SU(2)_{EW}$ gauge coupling. In the holographic model of \cite{Thaler:2005en}, the Higgs potential was finite and parametrically scaled as \begin{equation} \lambda \sim \frac{1}{N_c}g_{EW}^2. 
\end{equation} In the littlest Higgs with a composite $\rho$ meson modeled by HLS (\emph{i.e.}\ \eq{nsitemoose} with two sites) \cite{Piai:2004yb}, the Higgs potential was logarithmically sensitive to the scale of the second set of $\rho$-like resonances, but setting these logarithms to one, the Higgs quartic scaled as: \begin{equation} \lambda \sim \frac{m_\rho^2}{(4\pi f)^2} g_{EW}^2. \end{equation} (We could remove all logarithmic sensitivity by transitioning to a three-site moose.) If we make the identification $\Lambda \sim m_\rho \sim 4 \pi f / \sqrt{N_c}$ \cite{'tHooft:1973jz}, then these results scale in the same way as expected. (Indeed, with this identification, both $\Lambda^2/(4\pi f)^2$ and $m_\rho^2/(4\pi f)^2$ reduce to $1/N_c$.) More importantly, once we believe that \eq{littlesimilar} accurately describes the same low energy physics, the analysis of \cite{Thaler:2005en} and \cite{Piai:2004yb} for the one-loop radiative Higgs potential will be nearly identical once we match the spectra of the two theories. Because we have moved the IR brane in, pollution from bulk gauge boson KK modes as well as 5D quantum gravity effects occur at a much higher scale, so corrections to the radiative potential from bulk dynamics or from the details of how the brane-localized moose is UV completed are very suppressed. In other words, the statement that holographic pseudo-Goldstones are little Higgses is another way of saying that locality in AdS space can be mimicked by locality in theory space. Indeed, by integrating in the IR brane, locality in AdS space \emph{becomes} locality in theory space. In little technicolor models, collective breaking is manifest because the gauge coupling on each site has to be turned on for the Higgs to pick up a radiative potential. The analysis of \cite{Piai:2004yb} further guarantees us that at the one-loop level, only a few sites are necessary on the IR brane to get finite and calculable radiative corrections without ever needing to understand bulk dynamics. In appendix \ref{sec:bc} we consider the reverse situation, and show how all information about how $G$ is reduced to $H$ on the IR brane can be absorbed into small perturbations of the warp factor near the IR brane. \section{Vacuum Alignment} \label{sec:fvsh} One challenge to constructing viable little Higgs theories is vacuum alignment. In the $SU(6)/Sp(6)$ little Higgs \cite{Low:2002ws}, there is a gauged $SU(2)^2$ subgroup of $SU(6)$, and it is essential that the breaking of $SU(6) \rightarrow Sp(6)$ also breaks $SU(2)^2 \rightarrow SU(2)$. Similarly, in the $(SU(4)/SU(3))^4$ theory \cite{Kaplan:2003uc}, pairs of the $SU(3)$ subgroups must be misaligned when the diagonal $SU(4)$ gauge symmetry is turned on in order to have a light Higgs doublet with a large quartic coupling. Therefore, an ultraviolet completion of a little Higgs theory must not only break $G \rightarrow H$, but when an $F$ subgroup of $G$ is gauged, the relative orientation of $F$ and $H$ (and the various subgroups of $H$) must be radiatively stable. In little technicolor, we are specifying the UV completion of \eq{intromoose} where QCD dynamics guarantees that $SU(N)_L \times SU(N)_R$ will break to the diagonal $SU(N)$ as long as we are on the correct side of the chiral symmetry breaking phase transition. Once we choose a generator basis for $SU(N)_L \times SU(N)_R$, we can of course gauge any subgroup $F \times H$ we wish. However, there is no guarantee that in this basis the fermion condensate $\langle \psi \psi^c \rangle$ will point in the direction of the identity.
If we assume that the $SU(N)_L \times SU(N)_R$ flavor symmetry is exact above $\Lambda_{QCD}$, then the orientation of $\langle \psi \psi^c \rangle$ (and hence the relative orientation of $F$ and $H$) is determined solely by radiative corrections. Vacuum alignment for the little technicolor version of the $SU(6)/Sp(6)$ theory was implicitly studied in \cite{Piai:2004yb} where it was found that gauge boson loops do not favor $SU(2)^2 \rightarrow SU(2)$ breaking. The little technicolor version of the $(SU(4)/SU(3))^4$ simple group theory has the following low energy moose: \beq{fourtothreemoose} \begin{tabular}{l | l l} & Global & Gauged \\ \hline $A$ &$SU(4)$& $SU(4)$\\ $B$ & $SU(4)$& $SU(3)$\\ $C$ & $SU(4)$& $SU(3)'$ \end{tabular} \qquad \qquad \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{*=<20pt>[o][F]{B} \ar@{-}[rdr] |-{\SelectTips{eu}{}\object@{>}} && && *=<20pt>[o][F]{C}\\ && *=<20pt>[o][F]{A} \ar@{-}[rur] |-{\SelectTips{eu}{}\object@{>}} \ar@{-}[rdr] |-{\SelectTips{eu}{}\object@{>}} \\ *=<20pt>[o][F]{B} \ar@{-}[rur] |-{\SelectTips{eu}{}\object@{>}} &&&&*=<20pt>[o][F]{C}} \end{tabular} \end{equation} where $SU(3)$ and $SU(3)'$ are misaligned in some suitable basis. However, it can be shown that the gauge loop Coleman-Weinberg potential \cite{Coleman:1973jx} for the link vevs wants to align $SU(3)$ and $SU(3)'$, agreeing with the general lore that the vacuum points in the direction that yields the largest number of light gauge bosons. Of course, it is possible to force a particular vacuum alignment by including operators in the ultraviolet theory that explicitly break the global symmetries. For example, one could include four-fermion operators in the ultraviolet completion of \eq{fourtothreemoose} to force the misalignment of $SU(3)$ and $SU(3)'$ at tree level. Also, radiative contributions from the fermion sector may affect the choice of vacuum. The point to keep in mind when constructing ultraviolet theories based on little technicolor is that while there is complete freedom (up to anomaly cancellation) in the choice of $F$ and $H$, one must check that the stable vacuum gives the desired spectrum of light gauge bosons. In the case of the littlest Higgs in \eq{littlehiggsmoose}, the vacuum indeed aligns in the correct direction \cite{Piai:2004yb}. \section{Future Directions} \label{sec:future} In this paper, we have explored a beautiful connection between an array of ideas that were already tantalizingly related, but which become even more intertwined in the context of AdS/CFT. HLS is a model for the $\rho$ mesons in QCD which is known to generate $G/H$ flavor symmetry breaking that is phenomenologically described by CCWZ, which is the key to constructing little Higgs theories. AdS/CFT allows us to explicitly realize the vector limit (\emph{i.e.}\ large $N_c$ or light $\rho$ meson limit) of a confining theory, and once we deconstruct the corresponding AdS$_5$ model, we are free to interpret the link fields either as Wilson lines constructed out of $A_5$, or as fermion condensates from a completely different QCD theory. In other words, we can interpret the light spin-1 degrees of freedom either as holographic $\rho$ mesons or as ultraviolet $W'$ gauge fields that acquire a mass via spontaneous symmetry breaking. Similarly, by integrating in the IR brane in AdS$_5$ models, we saw that holographic $\rho$ mesons can be reinterpreted as brane-localized $W'$ gauge bosons. Note that there is no deep duality between the holographic CFT and the new QCD theory.
Just like Abbott-Farhi, all we have shown is that two different theories can have the same low energy degrees of freedom, and this is particularly interesting in the context of the little Higgs, where the relevant phenomenology depends almost entirely on the low energy spectrum. What is a bit surprising (but obvious in retrospect) is that one possible UV completion of an $(SU(N)/H)^n$ little Higgs is $n$ copies of technicolor! We have known since \cite{Weinberg:1975gm,Susskind:1978ms} that ordinary QCD could give masses to $W$ and $Z$ bosons through spontaneous symmetry breaking; now we know that ordinary QCD can give rise to an electroweak doublet pion which can subsequently break electroweak symmetry. This allows us (or dooms us, depending on your perspective) to use the full machinery of extended technicolor \cite{Eichten:1979ah,Dimopoulos:1979es} and walking technicolor \cite{Holdom:1981rm} to address phenomenological questions in little Higgs theories (see appendix \ref{sec:fermions}). What is not obvious is whether general $(G/H)^n$ symmetry breaking patterns can emerge out of moose-like confining theories. A two-site moose with global $SU(N)$ sites looks like the pion nonlinear sigma model of QCD, but in order to construct moose diagrams for $SO(N)/H$ theories (such as the $SO(9)/(SO(5)\times SO(4))$ model of \cite{Chang:2003zn}), we would have to find a UV completion for the moose link field: \begin{equation} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SO(N) && SO(N) \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\xi$}}} && *=<20pt>[o][F]{} \\ \mathrm{Gauged:} & &&H} \end{tabular} \quad \mathop{\Longrightarrow}^{\mbox{?}} \quad \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{SO(N)_1 && SO(N)_2 \\ *=<20pt>[o][F]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\psi_1$}}} & *=<12pt>[o][F=]{} \ar@{-}[r] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\psi_2$}}} & *=<20pt>[o][F]{} \\ &??? &H} \end{tabular} \end{equation} In order to have $SO(N)$ flavor symmetries, the confining group must have real representations. To keep the $SO(N)^2$ flavor symmetry from enlarging to an $SO(2N)$ flavor symmetry, the confining group must have two different real representations $\mathbf{R_1}$ and $\mathbf{R_2}$. Furthermore, for $SO(N)^2$ to spontaneously break to $SO(N)_V$ via confinement, the most attractive condensate must be $\langle \mathbf{R_1} \mathbf{R_2} \rangle$. Whether these situations can in fact be realized is an open question, though it seems highly unlikely if $G$ is one of the exceptional groups. Alternatively, if $G$ is some subgroup of $SU(N)$, we could start with $SU(N)^2/SU(N)$ chiral symmetry breaking and introduce large $SU(N)$-violating (but $G$-preserving) spurions on each of the sites to lift the masses of the unwanted Goldstones. Interestingly, the only constraint on the subgroup $H$ is anomaly cancellation, because we can weakly gauge any global symmetry we wish. We have seen that the $G/H$ moose in some sense trivializes the CCWZ prescription, and it is interesting to wonder whether the same construction might trivialize another consequence of spontaneous symmetry breaking: the WZW term \cite{Wess:1971yu,Witten:1983tw}. The WZW term accounts for the fact that anomalies above the confinement scale must match anomalies in the low energy effective theory \cite{'tHooft:1980xb}.
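For orientation (we quote the textbook normalization, up to convention-dependent signs), the WZW term for the chiral field $\Sigma$ is itself five-dimensional in character: \begin{equation} \Gamma_{\rm WZW} = \frac{-i N_c}{240\pi^2} \int_{M_5} \tr\left(\Sigma^{-1}\, \mathrm{d}\Sigma\right)^5, \end{equation} where $M_5$ is a five-manifold whose boundary is spacetime and the coefficient $N_c$ is quantized by anomaly matching. This five-dimensional structure is what makes the Chern-Simons language of the discussion that follows natural.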
It is well-known that a Chern-Simons (CS) term in the bulk of AdS$_5$ can transmit anomalies from the UV brane to the IR brane \cite{Callan:1984sa}, giving a nice 5D representation of anomaly matching in the holographic CFT. It is also well-known that the WZW term can be derived by considering a CS term in the interior of a 5-ball, with ordinary 4D space-time on the boundary of the ball \cite{Witten:1983tw,Zumino:1983rz}, though to concisely express the WZW term only in terms of phenomenological variables requires the full machinery of differential geometry \cite{Manes:1984gk}. It is even known how to derive the WZW term in the language of HLS \cite{Wu:1984pv,Fujiwara:1984mp}, so it should be possible to understand the WZW term in terms of the $G/H$ moose. The difficulty is how to deconstruct the CS term in AdS$_5$, because by design, gauge transformations of the CS term introduce new terms on the UV and IR branes. Deconstructing a dimension usually involves going to $A_5= 0$ gauge and then reintroducing Wilson line link fields, and in principle, one should track changes in the CS term through this whole procedure. In the context of QCD, the deconstructed CS term has already been studied based on the following moose \cite{Hill:2004uc}: \begin{equation} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(N_f)_L && SU(N_f)_R \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\Sigma$}}} && *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &SU(N_f)_L && SU(N_f)_R&} \end{tabular} \end{equation} where the gauged $SU(N_f)_i$ correspond to arbitrary left- and right-handed currents. Now with inspiration from AdS/CFT, we want to understand the CS term in the $G/H$ moose, where the natural variable is $\xi$, the ``square root'' of $\Sigma$: \begin{equation} \begin{tabular}{c} \xymatrix@R=.4pc@C=1.4pc{\mathrm{Global:} & SU(N_f)_L \times SU(N_f)_R && SU(N_f)_L \times SU(N_f)_R \\ & *=<20pt>[o][F]{} \ar@{-}[rr] |-{\SelectTips{eu}{}\object@{>}}^{\mbox{\raisebox{1.5ex}{$\xi$}}} && *=<20pt>[o][F]{} \\ \mathrm{Gauged:} &SU(N_f)_L \times SU(N_f)_R &&SU(N_f)_V&} \end{tabular} \end{equation} In deriving CCWZ from the $G/H$ moose, we saw that an essential step in constructing the Goldstone kinetic terms was integrating out the $H_\mu$ field (here, the $SU(N_f)_V$ gauge bosons), and we expect that this will also be an essential ingredient in constructing the moose-ified WZW term. \acknowledgments{We would like to thank Itay Yavin and Nima Arkani-Hamed for numerous discussions and many important insights. We gratefully acknowledge conversations with Spencer Chang, Howard Georgi, and Martin Schmaltz. This work was supported by an NSF graduate research fellowship.}
\section{Introduction} \label{sec:intro} In his lucid 2015 book {\it Questions About Elastic Waves} \cite{Engl15}, Engelbrecht asks ``What is a wave?'' and answers ``As surprising as it may sound, there is no simple answer to this question.'' Indeed, the definition of `wave' depends on the physical context at hand \cite{Christov2014}. Although most wave phenomena in classical continuum mechanics are described by hyperbolic (wave) equations, one of the surprises of 20\textsuperscript{th} century research into nonlinear partial differential equations (PDEs) is that \emph{certain} parabolic (diffusion) equations also yield structures with finite speed of propagation. Two examples are (i) a linear diffusion equation with a nonlinear reaction term \cite{KPP37,Fisher37}, and (ii) a diffusion equation that is nonlinear due to a concentration-dependent diffusivity \cite{Barenblatt1952}.\footnote{More specifically, \citet{Barenblatt1952} (see also \cite[p.~13]{ZeldCollectedWorks}) credits the observation of finite speed of propagation in a nonlinear diffusion equation to a difficult-to-find 1950 paper by Zeldovich and Kompaneets.} Indeed, it is known that certain aspects of wave phenomena can be reduced to a problem of solving a parabolic PDE, as gracefully illustrated by Engelbrecht \cite[Ch.~6]{Engl97} through a series of selected case studies; further examples include, but are not limited to: electromagnetic waves propagating along the earth's surface \cite{Vlasov1995}, seismic waves \cite{Jerzak2002}, underwater acoustics \cite{2011Jensen}, and the classical theory of nerve pulses \cite[\S6.4.2]{Engl97}, which nowadays has been updated by \citet{Engl18a} to a nonlinear hyperbolic (wave) model in the spirit of the Boussinesq paradigm \cite{Engl18b,CMP07}. Of special interest to the present discussion are physical problems that are modeled by nonlinear parabolic PDEs. These nonlinear problems lack general, all-encompassing solution methodologies. Instead, finding a solution often involves methods that are specific to the nature of the governing equation or the physical problem that it describes \cite[Ch.~4]{Evans2010} (see also the discussion in \cite{Christov2014} in the context of heat conduction). The classical examples of nonlinear parabolic PDEs admitting traveling wave solutions come from heat conduction \cite[Ch.~X]{ZR02} (see also \cite{Straughan2011}) and thermoelasticity \cite{Straughan2011,BerezVan2017}. The sense in which these nonlinear parabolic PDEs admit traveling-wave and `wavefront' solutions now rests upon solid mathematical foundations \cite{GildKers2004,Vazquez2007}, including the case of gradient-dependent nonlinearity \cite{Tedeev2015} (e.g., the last case in table~\ref{tb:eq_models} to be discussed below). A classical example of a nonlinear parabolic PDE governing the finite-speed wave-like motion of a substance arises in the study of an ideal gas spreading in a uniform porous medium \cite{Barenblatt1952}. A similar nonlinear parabolic equation was derived for the interface between a viscous fluid spreading horizontally underneath another fluid of lower density ($\Delta \rho > 0$ between the fluids) \cite{Huppert1982a}. The motion of the denser fluid is dictated by a balance of buoyancy and viscous forces at a low Reynolds number (viscous forces dominate inertial forces).
Such viscous gravity current flows are characterized by `slender' fluid profiles, i.e., they have small aspect ratios ($h/L \ll 1$, where $h$ and $L$ are typical vertical and horizontal length scales, respectively). Therefore, these flows can be modeled by lubrication theory \cite[Ch.~6]{L07}. Generically, one obtains a nonlinear parabolic equation for the gravity current's shape $h$ as a function of the flow-wise coordinate $x$ and time $t$. The case of the spreading of a fixed mass of Newtonian fluid was originally explored contemporaneously by \citet{Didden1982} and \citet{Huppert1982a}. Being governed by a parabolic (irreversible) equation, these currents `forget' their initial conditions after some time has elapsed; this is Barenblatt's concept of \emph{intermediate asymptotics} \cite{Barenblatt1972,Barenblatt1979}. Moreover, the PDE~\eqref{eq:nl_diff} can be reduced to an ordinary differential equation (ODE) through a self-similarity transformation. If the similarity variable can be obtained by a scaling (dimensional) analysis, then the solution is termed a self-similar solution of the \emph{first} kind \cite[Ch.~3]{Barenblatt1979}. Specifically, the transformation is $h(x,t) = \mathfrak{C}\, t^\beta f(\zeta)$ ($h$ in units\footnote{Throughout the chapter, we use SI units for all dimensional quantities.} of meters), where $\zeta = x/(\eta_N t^\delta)$ is the similarity variable (dimensionless), $f(\zeta)$ is the self-similar profile to be determined by solving an ODE, and $\mathfrak{C}$ and $\eta_N$ are dimensional consistency constants. The exponents $\beta$ and $\delta$ are obtained through scaling (dimensional analysis) of the governing PDE. As a representative example, consider the one-dimensional (1D) spreading of a fixed mass of fluid having an arbitrary `wavy' initial shape, as shown in figure~\ref{fig:introSS}. Suppose the fluid's shape $h(x,t)$ is governed by the linear diffusion equation $\partial h/\partial t = A \partial^2 h/\partial x^2$ (taking $A=1$ m\textsuperscript{2}/s in this example without loss of generality) subject to $(\partial h/\partial x)|_{x=\pm L}=0$. The initial condition (IC) is quickly `forgotten,' and the ultimate asymptotic state (here, flat) is achieved after passing through an intermediate asymptotic regime. It is straightforward to determine the self-similarity transformation: $\beta=-\delta=-1/2$ and $f(\zeta)=\mathrm{e}^{-\zeta^2}$ (see, e.g., \cite{Barenblatt1979} and the Appendix). Here, $\mathfrak{C}$ depends on the initial condition and $\eta_N=2$. The convergence of the rescaled $h(x,t)$ profiles towards $f(\zeta)$ can be clearly observed in figure~\ref{fig:introSS}(b). The IC is forgotten, and the profile converges onto the Gaussian intermediate asymptotic shape \cite{Barenblatt1979}. The profile $f(\zeta)$ is termed `universal' because it is independent of $h(x,0)$. \begin{figure} \subfloat[][]{\includegraphics[width=0.5\textwidth]{introHvsX}\label{fig:introH}} \hfill \subfloat[][]{\includegraphics[width=0.5\textwidth]{SelfSimf_zeta}\label{fig:introF}} \caption{Spreading via 1D linear diffusion and the approach to the universal intermediate self-similar asymptotics. (a) An arbitrary wavy IC $h(x,0)$ spreads and levels until reaching a flat steady state $h_\infty$.
(b) A first-kind self-similar transformation (obtained from dimensional analysis) yields a \emph{universal} profile $f(\zeta)$ (highlighted in gold) towards which the solution $h(x,t)$ evolves in the intermediate period after the IC is forgotten (but prior to leveling).} \label{fig:introSS} \end{figure} Having illustrated the notion of first-kind self-similarity as intermediate asymptotics, let us summarize its use in studying the gravitational spreading of Newtonian viscous fluids in a variety of physical scenarios. For example, gravity currents arise in geophysical applications associated with flows through porous rocks \cite{Woods2015} such as in ground water extraction \cite{Bear1972}, during oil recovery \cite{Huppert2000,FELISA2018}, and during CO\textsubscript{2} sequestration \cite{Huppert2014}. In these examples, $h(x,t)$ represents an interface between two immiscible fluids in the limit of large Bond number (gravity dominates surface tension). There is now an extensive literature featuring a wealth of exact and approximate analytical self-similar solutions for gravity currents in porous media, e.g., \cite{Barenblatt1952,Huppert1995,Anderson2003,Lyle2005,Vella2006,Hesse2007,Anderson2010,DeLoubens2011,Ciriello2013,Huppert2013,zcs14} amongst many others. In this chapter, we focus on the propagation of non-Newtonian gravity currents, specifically ones for which the denser fluid obeys a \emph{power-law} rheology. This tractable model of non-Newtonian rheological response is also known as the Ostwald--de Waele fluid \cite{BAH87}. In unidirectional flow, the power-law model simply dictates that the fluid's viscosity depends upon a power of the velocity gradient. \citet{DiFed2006} generalized Huppert's problem \cite{Huppert1982a} to power-law fluids, although Gratton et al.~\cite{Gratton1999,Perazzo2003} had also considered some related problems. Even earlier, \citet{Kondic1996,Kondic1998} derived the governing equations for power-law fluids under confinement (i.e., in Hele-Shaw cells) using the lubrication approximation. These works have contributed to the use of a modified Darcy law to model the flow of non-Newtonian fluids in porous media using the analogy to flow in Hele-Shaw cells. \citet{aj_nonNewt1992} were perhaps the first to combine a Darcy law for a power-law fluid with the continuity equation to obtain a single PDE, of the kind studied herein, governing the gravity current's shape. Recently, \citet{Lauriola2018} highlighted the versatility of this approach by reviewing the existing literature and extending it to two-dimensional axisymmetric spreading in media with uniform porosity but variable permeability. All these flows are of interest because exact analytical self-similar solutions in closed form have been derived previously \cite{Gratton1999,Perazzo2005,DiFed2012,DiFed2017,Ciriello2016}. Specifically, the solution of \citet{Ciriello2016} will be used in \S\ref{sec:conv} below to verify the truncation error of the proposed numerical method. For a self-similar solution to exist, both the governing PDE and its boundary conditions (BCs) must properly transform into an ODE in $\zeta$ with suitable BCs. A number of studies have specifically shown that the volume of fluid within the domain can be transient, varying as a power law in time, $\mathcal{V}(t) \propto t^\alpha$ ($\alpha \ge 0$), and a self-similar solution still exists (see, e.g., \cite{Barenblatt1952,Lyle2005,Hesse2007,DiFed2012,zcs14,DiFed2017} and the references therein).
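To make the preceding statement concrete (a sketch for the representative case of a Newtonian current in a homogeneous porous medium, whose shape is governed, up to a dimensional prefactor, by an equation of the type in \cite{Barenblatt1952,Huppert1995}), substitute $h(x,t) = \mathfrak{C}\, t^\beta f(\zeta)$ with $\zeta = x/(\eta_N t^\delta)$ into \begin{equation} \frac{\partial h}{\partial t} = \frac{\partial}{\partial x}\left( h \frac{\partial h}{\partial x} \right). \end{equation} Balancing powers of $t$ in the PDE requires $\beta - 1 = 2\beta - 2\delta$, while the volume constraint $\int_0^{x_f(t)} h \, \mathrm{d}x \propto t^\alpha$ requires $\beta + \delta = \alpha$. Solving this pair of relations gives \begin{equation} \delta = \frac{1+\alpha}{3}, \qquad \beta = \frac{2\alpha - 1}{3}, \end{equation} which recovers the classical fixed-mass ($\alpha = 0$) exponents $\delta = 1/3$ and $\beta = -1/3$.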
However, the nonlinear ODE in $\zeta$ often cannot be integrated exactly in terms of known functions, except for $\alpha=0$. In \S\ref{sec:eq} below, we discuss how a constraint of the form $\mathcal{V}(t) \propto t^\alpha$ can be implemented numerically through flux BCs at the computational domain's ends. With increasing complexity of the flow physics incorporated in the model, finding a self-similarity transformation may no longer be possible simply by scaling (dimensional) arguments. \citet{Gratton1990} classified a number of such situations, including the so-called `focusing' flows involving fluid axisymmetrically flowing towards the origin on a flat planar surface. Further examples involving confined currents in channels with variable width, and/or in porous media whose permeability and porosity are functions of $x$, were proposed by \citet{zcs14}, as illustrated in figure~\ref{fig:domains}. These gravity currents do enter a self-similar regime, even though a self-similar transformation cannot be obtained by scaling arguments alone. The exponents $\beta$ and $\delta$ in the transformation are unknown \textit{a priori}, hence this situation represents a self-similarity of the \emph{second} kind \cite[Ch.~4]{Barenblatt1979}. The governing equation can be transformed to an ODE, following which a nonlinear eigenvalue problem must be solved for $\beta$ and $\delta$ through a phase-plane analysis \cite{Gratton1990,Gratton1991}. Alternatively, experiments or numerical simulations are necessary to determine $\beta$ and $\delta$. For example, early numerical simulations were performed to this end by \citet{Diez1992}. However, a `pre-wetting film' ahead of the current's sharp wavefront ($x=x_f(t)$ where $h\big(x_f(t),t\big) = 0$) was required to avoid numerical instabilities. The scheme therein was also only first-order accurate in time. In this chapter, we propose a modern, high-order-accurate implicit numerical method for use in such problems. \begin{figure}[t] \centering \includegraphics[width=\textwidth,scale=0.95]{Summary_8_update_3} \caption{A summary of the gravity current flows and domains considered in this work. (a) Flow away from the origin in a completely porous ($\phi = 1$) HS cell of variable width given by $b(x) = b_1 x^n$ ($b_1 = const.$, $0\leq n<1$). (b) Flow in a uniformly porous ($\phi = \phi_1 = const. \neq 1$) passage of variable width given by the same $b(x)$ as in (a). (c) Flow in a uniform-width slab (i.e., $b(x) = b_1 = const.$) with horizontally heterogeneous porosity and permeability given by $\phi (x) = \phi_1 x^m$ and $k(x) = k_1 x^n$, respectively. The effective permeability of the medium in (a) and (b) is set by the Hele-Shaw analogy via the width: $k(x) = [b(x)]^2/(12\mu)$. Figure reproduced and adapted with permission from [Zheng et al, Influence of heterogeneity on second-kind self-similar solutions for viscous gravity currents, J.\ Fluid Mech., vol.\ 747, p.\ 221] \textcopyright~Cambridge University Press 2014.} \label{fig:domains} \end{figure} Specifically, we develop and benchmark a strongly implicit and conservative numerical scheme for 1D nonlinear parabolic PDEs arising in the study of gravity currents. We show how the proposed scheme can be used to simulate (with high accuracy and at low computational expense) the spreading of 1D \emph{non-Newtonian} viscous gravity currents in variable geometries (specifically, Hele-Shaw cells with widths varying as a power law in $x$).
To this end, we build upon the work of \citet{zcs14}, which introduced this type of finite-difference scheme for simulating the spreading of a finite mass of Newtonian fluid in a variable-width Hele-Shaw cell. Owing to its accuracy and stability, this finite-difference scheme has been recently applied by \citet{koch_2018} to study hydraulic fracturing of low-permeability rock. This chapter is organized as follows. In \S\ref{sec:prelim}, we briefly summarize existing models describing certain flows of viscous gravity currents. Then, we introduce a convenient general notation for such nonlinear parabolic PDEs. In \S\ref{sec:grid}, we introduce the 1D equispaced but staggered grid upon which the proposed finite-difference scheme is to be implemented. The derivation of the BCs for the PDE, from the mass conservation constraint, is discussed in \S\ref{sec:eq}. Then, we construct the nonlinear Crank--Nicolson scheme in \S\ref{sec:scheme_main} and discuss the discretized form of the nonlinear flux BCs in \S\ref{sec:BC}. Continuing, in \S\ref{sec:conv}, the scheme's accuracy is justified by comparing the numerical solution provided by the finite-difference scheme (up to a specified physical time) against an analytical solution obtained through a self-similar transformation of the PDE. Specifically, this approach involves three validation cases: (i) a symmetric (about $x=0$) lump of fixed fluid mass spreading in two directions (convergence is independent of BCs), (ii) a fixed fluid mass spreading away from the origin ($x=0$) (requires only no-flux BCs), and (iii) a variable fluid mass injected at the origin spreading away from it (requires careful implementation of the nonlinear BCs). In all three cases, the scheme is shown to be capable of accurately computing the evolution of the gravity current's shape. In \S\ref{sec:consv}, we analyze the scheme's conservation properties by verifying numerically that it respects the mass constraint $\mathcal{V}(t) \propto t^\alpha$. We consider two validation cases: (i) release of a fixed fluid mass ($\alpha = 0$), and (ii) fluid mass injection into the domain ($\alpha > 0$). In both cases, we specifically focus on the challenging case of a non-Newtonian (power-law) displacing fluid in a variable-width channel. As a benchmark, we use previously derived first-kind self-similar solutions from the literature, which are discussed in the Appendix. \section{Preliminaries} \label{sec:prelim} In this section, we summarize the mathematical model for viscous gravity currents in a selected set of applications involving Newtonian and non-Newtonian fluids. We study their spreading in a fixed- or variable-width channel geometry (also known as a ``Hele-Shaw cell''), as well as flows in heterogeneous porous media with independently variable permeability and porosity. Our goal is to highlight the fact that all these models can be concisely summarized by a single nonlinear parabolic PDE supplemented with a set of nonlinear Neumann (flux) BCs. \subsection{Fluid Domain and Flow Characteristics} The flow domain is assumed to be long and thin. For example, it can be a channel existing in the gap between two impermeable plates, i.e., a Hele-Shaw (HS) cell, which may or may not have variable transverse (to the flow) width as shown in figure~\ref{fig:domains}(a); or, it can be a slab of uniform-thickness, heterogeneously porous material, as shown in figure~\ref{fig:domains}(c). The viscous gravity current consists of one fluid displacing another immiscible fluid.
Therefore, a sharp interface $y=h(x,t)$ separates the two fluids at all times. The present study considers the limit of negligible surface (interfacial) tension (compared to gravitational forces). The density difference $\Delta \rho$ between the two fluids is large compared to the density of the lighter fluid, and the denser fluid flows along the bottom of the cell, which is a horizontal impermeable surface. In doing so, the denser fluid displaces the lighter fluid out of its way. Here, the geometry is considered to be vertically unconfined so that the details of the flow of the upper (lighter) fluid can be neglected. We are interested in the evolution of the interface $h(x,t)$ between the two fluids. Owing to the vertically unconfined, long and thin geometry of the flow passage, the denser fluid has a slender profile (small aspect ratio), and the fluid flow can be described by \emph{lubrication theory}. The lubrication approximation also requires that viscous forces dominate inertial forces; this is the limit of small Reynolds number. In this regime of small Reynolds number but large Bond number, the flow is governed by a balance of viscous forces and gravity. Furthermore, the lubrication approximation allows (at the leading order in the aspect ratio) the variation of quantities across the transverse direction, as well as the vertical velocities of the fluids, to be neglected. As shown in figure~\ref{fig:domains}(a), for the flow in a HS cell, we allow the cell's width to vary as a power-law of the streamwise coordinate $x$, i.e., $b(x) = b_1 x^n$, where $n \geq 0$ is a dimensionless exponent, and $b_1>0$ is a dimensional consistency constant having units m$^{1-n}$. Since the cell has a variable width, it originates from a cell `origin,' which is always taken to be $x=0$ such that $b(0)=0$. As discussed in \cite{zcs14}, in such a flow geometry, the lubrication approximation may fail when $b(x)$ is an increasing function of $x$, i.e., when $\mathrm{d} b/\mathrm{d} x = n b_1 x^{n-1} > 1$. In such quickly widening cells, the transverse variations of properties become significant. We ensure the validity of the lubrication approximation, and models derived on the basis of it, by only considering $n<1$ such that $\mathrm{d} b/\mathrm{d}x$ remains a decreasing function of $x$. The porosity can also be varied by filling the HS cell with beads of fixed diameter, as illustrated in figure~\ref{fig:domains}(b). We also consider a gravity current spreading horizontally in a porous slab of constant transverse width ($b(x) = b_1 = const.$) with heterogeneous porosity $\phi(x) = \phi_1 x^m$ and permeability $k(x) = k_1 x^n$, as shown in figure~\ref{fig:domains}(c). Here, $m,n\geq0$ are dimensionless exponents and $\phi_1, k_1>0$ are dimensional constants needed for consistency with the definitions of porosity and permeability, respectively; specifically, $\phi_1$ has units of m$^{-m}$, and $k_1$ has units of m$^{2-n}$. These variations are illustrated by the streamwise changes of bead radii in figure~\ref{fig:domains}(c). Now, the point at which the porosity and permeability vanish is the origin of the cell. Another interesting case, that of a medium with vertically heterogeneous porosity, has been explored by \citet{Ciriello2016}. In this chapter, we limit our discussion to flow in a completely porous (i.e., unobstructed, $\phi=1$) HS cell of variable width as in figure~\ref{fig:domains}(a).
However, the numerical scheme developed herein can readily treat any of these cases, taking the appropriate parameter definitions from table~\ref{tb:eq_models} in \S\ref{sec:eq}. We allow the denser fluid to be non-Newtonian. Specifically, it obeys a power-law rheology. In unidirectional flow, the single non-trivial shear stress component is given by $\tau = \mu (\dot \gamma) \dot \gamma$, where the dynamic viscosity $\mu$ depends on the shear rate $\dot \gamma$ as $\mu(\dot \gamma) = \mu_0 \dot \gamma^{r-1}$. Here, $\mu_0$ is the flow consistency index (units of Pa$\cdot$s$^{r}$), and $r$ ($>0$) is the fluid's rheological index. Fluids having $r<1$ are termed shear-thinning (e.g., blood), and fluids with $r>1$ are termed shear-thickening (e.g., dense particulate suspensions). In the special case $r=1$, the power-law model reduces to a Newtonian fluid. As stated above, the flow of the displaced fluid is immaterial to the dynamics of the gravity current, as long as the viscosity and density contrasts are large. This condition is satisfied, e.g., by assuming (for the purposes of this chapter) that the displaced fluid is air. Finally, the volume of the fluid in the cell itself may be either fixed (constant mass) or vary with time (injection). Consistent with the literature, we consider the instantaneous volume of fluid in the cell to increase as a power law in $t$: $\mathcal{V}(t) = \mathcal{V}_0 + \mathcal{V}_\mathrm{in} t^\alpha$, where $\mathcal{V}_0$ is the initial volume of fluid in the HS cell (measured in m\textsuperscript{3}), $\alpha \geq 0$ is a dimensionless exponent, and $\mathcal{V}_\mathrm{in}$ is an injection pseudo-rate (in units m$^{3}$s$^{-\alpha}$), becoming precisely the injection rate for $\alpha=1$. Next, we discuss how this assumption leads to BCs for the physical problem and for the numerical scheme. \subsection{Governing Equation, Initial and Boundary Conditions} \label{sec:eq} The propagation of a viscous gravity current is described by a diffusion equation for the interface $h(x,t)$, which is the profile shape of the denser fluid. The models are derived either from porous medium flow under Darcy's law and the Dupuit approximation \cite[Ch.~8]{Bear1972} or using lubrication theory with no-slip along the bottom of the cell and zero shear stress at the fluid--fluid interface \cite[Ch.~6-C]{L07}. The resulting velocity field is combined with a depth-averaged continuity equation to derive the nonlinear parabolic PDE for $h(x,t)$. We propose to summarize all gravity current propagation along horizontal surfaces through a single `thin-film' \cite{ODB97} equation: \begin{equation} \frac{\partial h}{\partial t} = \frac{A}{x^p} \frac{\partial}{\partial x}\left(x^q \psi \frac{\partial h}{\partial x}\right). \label{eq:nl_diff} \end{equation} According to \citet[Ch.~5]{Engl15}, eq.~\eqref{eq:nl_diff} can be classified as an `evolution equation.' The term in the parentheses on the right-hand side of eq.~\eqref{eq:nl_diff}, roughly, represents a fluid flux balanced by the change in height on the left-hand side. The multiplicative factor $A/x^p$ arises due to (i) geometric variations of the flow passage in the flow-wise direction, (ii) porosity variations in the flow-wise direction, or (iii) from the choice of coordinate system in the absence of (i) or (ii). Here, $A$ is a dimensional constant depending on the flow geometry, the domain, and the fluid properties. Additionally, $p$ and $q$ are dimensionless exponents that depend on the flow geometry and fluid rheology.
The quantity denoted by $\psi$ represents specifically the \emph{nonlinearity} in these PDEs. Thus, it is necessarily a function of $h$, and possibly $\partial h/\partial x$ for a non-Newtonian fluid (as in the third and fifth rows of table~\ref{tb:eq_models}).\footnote{Interestingly, an `$r$-Laplacian' PDE, similar to eq.~\eqref{eq:nl_diff} for a power-law fluid in a HS cell (third row of table~\ref{tb:eq_models}), arises during fluid--structure interaction between a power-law fluid and an enclosing slender elastic tube \cite{Boyko2017}. This PDE can also be tackled by the proposed finite-difference scheme.} As stated in \S\ref{sec:intro}, several versions of eq.~\eqref{eq:nl_diff} will be explored herein, incorporating geometric variations, porosity variations, and non-Newtonian behavior. The pertinent physical scenarios that will be tackled herein (using the proposed numerical scheme) are presented in table~\ref{tb:eq_models}, which lists expressions for $A$, $p$, $q$ and $\psi$. From a dimensional analysis of the PDE~\eqref{eq:nl_diff}, it follows that the constant $A$ must have units of m$^{1+p-q}\cdot$s$^{-1}$, as long as the nonlinearity $\psi$ has units of length (as is the case for all the models summarized in table~\ref{tb:eq_models}). It is worth noting that in the case of 1D linear diffusion ($p=q=0$ and $\psi = 1$), $A$ becomes the `diffusivity' in units of m$^2$/s. The PDE~\eqref{eq:nl_diff} is solved on the finite space-time interval $(x,t) \in (\ell,L)\times(t_0,t_f]$. Here, $t_0$ and $t_f$ represent the initial and final times of the numerical simulation's run, respectively. An initial condition (IC) $h_0(x)$ is specified at $t=t_0$, so that $h(x,t_0)=h_0(x)$ is known. Meanwhile, $\ell$ is a small positive value (close to 0). Boundary conditions (BCs) are specified at $x=\ell$ and $x=L$. These involve some combination of $h$ and $\partial h/\partial x$. The reason for taking $x=\ell \ne 0$ becomes clear below. Thus, let us now discuss a suitable set of BCs. The BCs are based on the imposed mass conservation/growth constraint. Consider the case of a viscous gravity current in a porous slab with variable porosity $\phi(x) = \phi_1 x^m$ and transverse width $b_1=const.$ Then, the conservation of mass constraint (see \cite{zcs14}) takes the form \begin{equation} \mathcal{V}(t) \equiv \int_\ell^L h(x,t) b_1 \phi(x) \, \mathrm{d}x = \mathcal{V}_0 + \mathcal{V}_\mathrm{in}t^\alpha, \label{eq:mass_conservn_porous} \end{equation} where $\alpha \geq 0$. In the parallel case of a HS cell with variable width $b(x) = b_1 x^n$ and porosity $\phi_1=const.$, which can either be set to unity or absorbed into $b(x)$ via $b_1$, the mass constraint becomes \begin{equation} \mathcal{V}(t) \equiv \int_\ell^L h(x,t) b(x) \, \mathrm{d}x = \mathcal{V}_0 + \mathcal{V}_\mathrm{in}t^\alpha. \label{eq:mass_conservn} \end{equation} Taking a time derivative of eq.~\eqref{eq:mass_conservn} and employing eq.~\eqref{eq:nl_diff}, we obtain \begin{multline} \frac{\partial}{\partial t} \int_\ell^L h(x,t) b_1 x^n \, \mathrm{d}x = \int_\ell^L \frac{\partial h}{\partial t} b_1 x^n \, \mathrm{d}x = \int_\ell^L b_1x^n \frac{A}{x^n}\frac{\partial}{\partial x}\left(x^q \psi \frac{\partial h}{\partial x}\right) \,\mathrm{d}x\\ = Ab_1 \left.\left( x^q \psi \frac{\partial h}{\partial x} \right)\right|_{x=\ell}^{x=L} \;\stackrel{\text{by }\eqref{eq:mass_conservn}}{=}\; \frac{ \mathrm{d} (\mathcal{V}_\mathrm{in}t^\alpha)}{\mathrm{d} t} = \alpha \mathcal{V}_\mathrm{in}t^{\alpha-1}.
\label{eq:BC_leibnitz} \end{multline} Here, $p=n$ in this case of interest, as described in table \ref{tb:eq_models}, and $Ab_1=const.$ Thus, we have obtained conditions relating $x^q\psi\partial h/\partial x$ at $x=\ell$ and $x=L$ to $\alpha\mathcal{V}_\mathrm{in}t^{\alpha-1}$. These conditions, if satisfied, automatically take into account the imposed volume constraint from eq.~\eqref{eq:mass_conservn}. The calculation starting with eq.~\eqref{eq:mass_conservn_porous} is omitted as it is identical, subject to proper choice of $p$. For the case of propagation away from the cell's origin (i.e., any injection of mass must occur near $x=0$, specifically at $x=\ell$), to satisfy eq.~\eqref{eq:BC_leibnitz}, we can require that \begin{subequations}\begin{align} \left.\left(x^q \psi \frac{\partial h}{\partial x}\right)\right|_{x = \ell} &= \begin{cases} - \frac{\alpha B}{A} t^{\alpha - 1}, &\quad \alpha > 0, \\[3pt] 0, &\quad \alpha=0, \end{cases}\label{eq:L_bc}\displaybreak[3]\\[5pt] \left.\left(\psi \frac{\partial h}{\partial x}\right)\right|_{x=L} &= 0 \quad\Leftarrow\quad \left.\frac{\partial h}{\partial x}\right|_{x=L} = 0 , \end{align}\label{eq:bcs}\end{subequations} where $B = \mathcal{V}_\mathrm{in}/b_1$. Recall, the case of $\alpha>0$ represents mass injection. Although eq.~\eqref{eq:mass_conservn} and eqs.~\eqref{eq:bcs} are equivalent, the imposition of the nonlinear BC in eq.~\eqref{eq:L_bc} must be approached with care. It should be clear that to impose a flux near the origin (at $x=0$), we need $\big(x^q \psi \partial h/\partial x\big)\big|_{x \to 0}$ to be finite. Then, $\psi \partial h/\partial x = \mathcal{O}(1/x^q)$ as $x\to0$. On the spatial domain $x \in (0,L)$, such an asymptotic behavior is possible for $p=q=0$. However, in a variable-width cell ($p,q \ne 0$), the local profile and slope as $x \to 0$ blow up if they are to satisfy $\psi \partial h/\partial x = \mathcal{O}(1/x^q)$ as $x\to0$. To avoid this uncomputable singularity issue, we defined the computational domain to be $x\in(\ell,L)$, where $\ell$ is `small' but $>0$. The BC from eq.~\eqref{eq:L_bc} at $x=\ell$ can then be re-written as \begin{equation} \left.\left( \psi \frac{\partial h}{\partial x} \right)\right|_{x = \ell} = -\frac{\alpha B}{A\ell^{q}}t^{\alpha-1},\qquad \alpha > 0. \end{equation} It may also be of interest to consider the case of a gravity current released a finite distance away from the origin and then spreading towards $x=0$. In this case, an additional length scale arises in the problem: the initial distance of the current's edge from the origin, say $x_f(0)$. The existence of this extra length scale complicates the self-similarity analysis, leading to solutions of the second-kind \cite[Ch.~4]{Barenblatt1979}, as discussed in \S\ref{sec:intro}. However, the numerical scheme can handle this case just as well; in fact, it requires no special consideration, unlike spreading away from the origin. 
Now, we may simply take $\ell=0$ and consider spreading on the domain $(0,L)$ subject to the following BCs: \begin{subequations} \begin{align} \left.\left(x^q \psi \frac{\partial h}{\partial x}\right)\right|_{x\to 0} &= 0 \quad\Leftarrow\quad \left.\frac{\partial h}{\partial x}\right|_{x= 0} = 0, \label{eq:L_bc2}\\[5pt] \left.\left(\psi \frac{\partial h}{\partial x}\right)\right|_{x = L} &= \begin{cases} \frac{\alpha B}{A L^q} t^{\alpha - 1}, &\quad \alpha > 0, \\[3pt] 0, &\quad \alpha=0, \end{cases} \end{align}\label{eq:bcs2}\end{subequations} which together allow us to satisfy eq.~\eqref{eq:BC_leibnitz} and, thus, eq.~\eqref{eq:mass_conservn} for all $t\in(t_0,t_f]$. The most significant advantage of defining nonlinear flux BCs, such as those in eqs.~\eqref{eq:bcs} or \eqref{eq:bcs2}, is that a nonlinear nonlocal (integral) constraint, such as that in eq.~\eqref{eq:mass_conservn_porous} or \eqref{eq:mass_conservn}, no longer has to be applied onto the solution $h(x,t)$. Furthermore, if we start with compact initial conditions, i.e., there exists a nose location $x=x_f(t_0)$ such that $h\big(x_f(t_0),t_0\big)=0$, then the finite-speed-of-propagation property of the nonlinear PDE~\eqref{eq:nl_diff} \cite{GildKers2004,Vazquez2007} ensures that this nose $x_f(t)$ exists for all $t>t_0$ and $h\big(x_f(t),t\big)=0$ as well. The proposed fully-implicit scheme inherits this property of the PDE. Therefore, we can solve the PDE on the \emph{fixed} domain $x\in(\ell,L)$, without any difficulty, instead of attempting to rescale to a moving domain on which $x_f(t)$ is one of the endpoints with $h=0$ as the BC applied there. The latter approach, proposed by \citet{Bonnecaze1992} (and used in more recent works \cite{Acton2001} as well), leads to a number of additional variable-coefficient terms arising in the PDE~\eqref{eq:nl_diff}, due to the non-Galilean transformation onto a shrinking/expanding domain. From a numerical methods point of view, having to discretize these additional terms is not generally desirable. Having defined a suitable set of BCs, the last remaining piece of information required to close the statement of the mathematical problem at hand is the selection of pertinent initial conditions (ICs). For the case of the release of a finite fluid mass ($\alpha = 0$), an arbitrary polynomial IC may be selected, as long as it has zero slope at the origin ($x=0$), leading to satisfaction of the no-flux boundary condition~\eqref{eq:L_bc}. To this end, let the IC be given by \begin{equation} h_0(x) = \begin{cases} a \left(\mathfrak{X}_0^c - x^c\right), &\quad x \leq \mathfrak{X}_0, \\[3pt] 0, &\quad x>\mathfrak{X}_0, \end{cases} \label{eq:poly_IC} \end{equation} where $\mathfrak{X}_0$ is a `release-gate' location defining the initial position of the current's nose, i.e., $\mathfrak{X}_0=x_f(t_0)$ and $h\big(x_f(t_0),t_0\big)\equiv h_0(\mathfrak{X}_0)=0$. The constant $c > 1$ is an arbitrary dimensionless exponent. Finally, $a$ (units of m$^{1-c}$) is set by normalizing $h_0(x)$ such that the initial volume of fluid corresponds to the selected initial fluid volume, $\mathcal{V}_0$, via eq.~\eqref{eq:mass_conservn}. The case of the release of a finite mass of fluid is particularly forgiving in how we set the IC and its slope at $x=0$. In fact, we could even take $c=1$ in eq.~\eqref{eq:poly_IC} and the scheme will provide an initial flux of fluid at $t=t_0^+$, with $(\partial h/\partial x)_{x=0}=0$ thereafter.
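The normalization of the IC's amplitude $a$ is conveniently done numerically, with the volume integral evaluated by the trapezoidal rule. A minimal {\sc Matlab} sketch (with illustrative parameter values) is:
\begin{verbatim}
% Normalize the polynomial IC of eq. (poly_IC) so that the initial
% volume equals V0, per eq. (mass_conservn); values are illustrative.
x  = linspace(0, 0.75, 201)';          % any convenient abscissae [m]
c  = 3;  X0 = 0.25;                    % exponent and release gate [m]
b1 = 0.01739;  n = 0;  V0 = 2.4902e-5; % cell width b(x) = b1*x^n [SI]
h0 = max(X0^c - x.^c, 0);              % unnormalized shape (a = 1)
a  = V0/trapz(x, h0.*b1.*x.^n);        % enforce int h0*b(x) dx = V0
h0 = a*h0;                             % normalized IC, h0(X0) = 0
\end{verbatim}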
On the other hand, the case of mass injection ($\alpha > 0$) governed by the nonlinear BCs is not as forgiving. By virtue of the `point-source' mass injection at $x = \ell$, the slope at the origin rises sharply from the moment of mass injection. This very sharp rise has a tendency to introduce unphysical oscillations in the current profile when starting from the IC in eq.~\eqref{eq:poly_IC}. To avoid this, we must select a `better' IC, which has a shape more similar to the actual solution's singularity near $x=0$. Having tested a few different options, we found that an exponential function works well: \begin{equation} h_0(x) = \begin{cases} a\left(-1 + b\mathrm{e}^{-cx}\right), &\quad x \leq \mathfrak{X}_0, \\[3pt] 0, &\quad x>\mathfrak{X}_0. \end{cases} \label{eq:expontial_IC} \end{equation} Here, $b$ ($>1$, dimensionless) and $c$ (units of m$^{-1}$) are positive constants, $\mathfrak{X}_0 = \frac{1}{c}\ln{b}$ ensures that the IC has no negative values and a sharp wavefront, and $a$ (units of m) is set by normalizing $h_0(x)$ to the selected initial volume $\mathcal{V}_0$ via eq.~\eqref{eq:mass_conservn}, as above. Finally, it should be noted that the IC from eq.~\eqref{eq:poly_IC} is not used in the convergence studies for finite initial mass (\S\ref{sec:full_conv} and \S\ref{sec:haf_conv}). Rather, the IC is taken to be the exact self-similar solution of \citet{Ciriello2016} for a power-law fluid in a uniform-width ($n=0$) HS cell (see also the Appendix). The reasoning behind this particular choice is further expounded upon in \S\ref{sec:conv}. \begin{landscape} \begin{center} \begin{table} \caption{Selected models of the propagation of viscous gravity currents herein simulated by a finite-difference scheme.} \label{tb:eq_models} \begin{tabular}{p{5.2cm}p{5.2cm}p{1cm}p{2cm}p{2cm}} \hline\noalign{\smallskip} Case/Variable & $A$ [m$^{1+p-q}\cdot$s$^{-1}$] & $p$ [--] & $q$ [--] & $\psi$ [m] \\ \noalign{\smallskip}\svhline\noalign{\smallskip} \begin{tabular}[c]{@{}l@{}}Newtonian fluid,\\fixed-width HS cell: $b(x) = b_1$. \\ {\scriptsize (see \citet{Huppert1995})} \end{tabular} & $\displaystyle\frac{\Delta\rho g b_1^2}{12\mu}$ & 0 & 0 & $h$\\ \noalign{\medskip} \begin{tabular}[c]{@{}l@{}}Newtonian fluid,\\variable-width HS cell: $b(x) = b_1 x^n$. \\ {\scriptsize (see \citet{zcs14})} \end{tabular} & $\displaystyle\frac{\Delta\rho g b_1^2}{12\mu}$ & $n$ & $3n$ & $h$\\ \noalign{\medskip} \begin{tabular}[c]{@{}l@{}}Power-law fluid: $\mu = \mu_0 \dot \gamma^{r-1}$,\\variable-width HS cell: $b(x) = b_1x^n$. \\ {\scriptsize (see \citet{DiFed2017,Longo2017})} \end{tabular} &$\displaystyle\left(\frac{r}{2r+1}\right)\left(\frac{\Delta\rho g}{\mu_0}\right)^{1/r}\left(\frac{b_1}{2}\right)^{(r+1)/r}$ & $n$ & $\displaystyle n\left(\frac{2r+1}{r}\right)$ & $\displaystyle h\left|\frac{\partial h}{\partial x}\right|^{(1-r)/{r}}$\\ \noalign{\medskip} \begin{tabular}[c]{@{}l@{}}Newtonian fluid,\\ 2D porous medium, \\ variable porosity: $\phi(x) = \phi_1 x^m$, \\ variable permeability: $k(x) = k_1 x^n$.\\ {\scriptsize (see \citet{zcs14})} \end{tabular} & $\displaystyle\frac{\Delta \rho g k_1}{\mu \phi_1}$ & $m$ & $n$ & $h$\\ \noalign{\medskip} \begin{tabular}[c]{@{}l@{}}Power-law fluid: $\mu = \mu_0 \dot \gamma^{r-1}$,\\ 2D porous medium, \\ variable porosity: $\phi(x) = \phi_1 x^m$, \\ variable permeability: $k(x) = k_1 x^n$.
\\ {\scriptsize (see \citet{Ciriello2016})} \end{tabular} & { $\displaystyle\begin{array}{lcl}2^{(3r+1)/2}\left(\frac{r}{3r+1}\right)^{1/r}\vphantom{\left(\frac{\Delta g}{\mu_0}\right)^{1/2}} \\ \qquad \times \left(\frac{k_1}{\phi_1}\right)^{(r+1)/2r}\left(\frac{\Delta \rho g}{\mu_0}\right)^{1/r}\end{array}$} & $m$ & $\frac{m(r-1) + n(r+1)}{2r}$ & $\displaystyle h\left|\frac{\partial h}{\partial x}\right|^{(1-r)/{r}}$\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \newline \end{table} \end{center} \end{landscape} \section{The Numerical Scheme} \label{sec:scheme} The proposed numerical method is a finite-difference scheme using the Crank--Nicolson approach toward implicit time-stepping. Our presentation follows recent literature, specifically the construction in \cite[Appendix B]{zcs14}. The proposed scheme's truncation error is formally of second order in both space and time, and we expect the scheme to be unconditionally stable. Furthermore, the scheme is conservative in the sense that it maintains the imposed time-dependency of the fluid volume with high accuracy via a specific set of nonlinear BCs. This section is devoted to discussing all these topics one by one. \subsection{Notation: Grids, Time Steps, and Grid Functions} \label{sec:grid} The PDE \eqref{eq:nl_diff} is solved on an equispaced 1D grid of $N+1$ nodes with grid spacing $\Delta x = (L-\ell)/(N-1)$. The solution values are kept on a staggered grid of cell-centers, which are offset by $\Delta x/2$ with respect to the equispaced grid points. As a result, there is a node lying a half-grid-spacing beyond each domain boundary. It follows that the location of the $i$th grid point on the staggered grid is $x_i = \ell + (i-1/2) \Delta x$, where $i = 0,1,2, \hdots, N$. A representative grid with 12 nodes is shown in figure \ref{fig:grid}. The use of a staggered grid affords additional stability to the scheme and allows us to evaluate derivatives with second-order accuracy via central differences, by default, using only two cell-centered values. As stated in \S\ref{sec:eq}, the PDE \eqref{eq:nl_diff} is solved over a time period $t \in (t_0, t_f]$, such that $t_f > t_0 \geq 0$, where both the initial time $t_0$ and the final time $t_f$ of the simulation are user defined. The scheme thus performs $M$ discrete time steps, each of size $\Delta t = (t_f - t_0)/M$. The $n$th time step advances the solution to $t = t^n \equiv t_0 + n \Delta t$, where $n = 1, \hdots, M$, so that $t^M = t_f$. Finally, we define the discrete analog (`grid function') to the continuous gravity current shape, which we actually solve for, as $h_i^n \approx h(x_i,t^n)$.
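For concreteness, this bookkeeping can be set up in {\sc Matlab} as follows (a sketch; the twelve-node values mirror figure~\ref{fig:grid}, and all variable names are illustrative):
\begin{verbatim}
% Staggered grid of N+1 cell-centered nodes around (ell,L), plus the
% uniform time grid; cf. the twelve-node example of the figure.
ell = 0;  L = 1;  N = 11;            % => 12 nodes, as in the figure
dx  = (L - ell)/(N - 1);             % grid spacing (= 0.1 here)
x   = ell + ((0:N)' - 1/2)*dx;       % nodes: x(1) = ell - dx/2, ...
xf  = ell + (0:N-1)'*dx;             % cell faces: xf(1) = ell, xf(N) = L
t0  = 0;  tf = 2.5;  M = 2500;       % M time steps of size dt
dt  = (tf - t0)/M;                   % t^n = t0 + n*dt, so t^M = tf
\end{verbatim}
Note that, in {\sc Matlab}'s one-based indexing, the node $x_i$ of the text is \texttt{x(i+1)}.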
\begin{figure} \centering \includegraphics[width=\textwidth,scale=0.75]{grid} \caption{A sample twelve-node equispaced but staggered 1D grid. The grid nodes are staggered by half a grid step $\Delta x/2$ from the cell faces. The boundary conditions are implemented at the `real' domain boundaries (here marked by \textsf{x}). The two grid points \emph{outside} the physical domain (i.e., $i = 0,11$ or $x_0=-0.05$ and $x_{11}=1.05$ in this example) are used to implement the Neumann BCs, which require computing a derivative at the `real' domain boundaries (i.e., $i=1/2,21/2$ or $x_{1/2} \equiv \ell=0$ and $x_{21/2} \equiv L=1$ in this example).} \label{fig:grid} \end{figure} \subsection{The Nonlinear Crank--Nicolson Scheme} \label{sec:scheme_main} Let us denote by $\mathcal{L}$ the continuous spatial operator acting on $h$ on the right-hand side of eq.~\eqref{eq:nl_diff}, i.e., \begin{equation} \mathcal{L}[h] \equiv \frac{A}{x^p} \frac{\partial}{\partial x}\left(x^q \psi \frac{\partial h}{\partial x}\right). \label{eq:Lcont} \end{equation} Since $\mathcal{L}$ is a second-order spatial operator and, thus, eq.~\eqref{eq:nl_diff} is a diffusion equation, we are inclined to implement a second-order-accurate time-stepping by the Crank--Nicolson scheme \cite{Crank1947}. The Crank--Nicolson scheme is fully implicit, which avoids the stringent restriction ($\Delta t \lesssim (\Delta x)^2$) suffered by explicit time discretizations of diffusion equations \cite[Ch.~6]{Strikwerda}. Then, the time-discrete version of eq.~\eqref{eq:nl_diff} is \begin{equation} \frac{h^{n+1}_i - h^n_{i}}{\Delta t} = \frac{1}{2}\big( \mathcal{L}_d\left[h_i^{n+1}\right] + \mathcal{L}_d\left[h_i^n\right] \big), \label{eq:nl_diff_CNL} \end{equation} where $\mathcal{L}_d$ is the discrete analog to the continuous spatial operator $\mathcal{L}$ defined in eq.~\eqref{eq:Lcont}. Based on the approach of \citet{Christov2009}, the discrete spatial operator is constructed via flux-conservative central differencing using two cell-face values, while staggering the nonlinear terms: \begin{subequations} \begin{align} \mathcal{L}_d\left[h_i^{n}\right] &= \frac{A}{x_i^p} \left[ \frac{\left(x_{i+1/2}^q\psi^{n+1/2}_{i+1/2}\right) S_{i+1/2}^n - \left(x_{i-1/2}^q \psi^{n+1/2}_{i-1/2}\right) S_{i-1/2}^n}{\Delta x} \right],\\ \mathcal{L}_d\left[h_i^{n+1}\right] &= \frac{A}{x_i^p} \left[ \frac{\left(x_{i+1/2}^q\psi^{n+1/2}_{i+1/2}\right) S_{i+1/2}^{n+1} - \left(x_{i-1/2}^q \psi^{n+1/2}_{i-1/2}\right) S_{i-1/2}^{n+1}}{\Delta x} \right], \end{align}\label{eq:Ldiscrete}% \end{subequations} where $S \equiv \partial h/\partial x$ is the slope of the gravity current's shape. Note that the nonlinear terms, denoted by $\psi$, have been evaluated the same way, i.e., at the mid-time-step $n+1/2$, for both $\mathcal{L}_d\left[h_i^{n}\right]$ and $\mathcal{L}_d\left[h_i^{n+1}\right]$. Substituting eqs.~\eqref{eq:Ldiscrete} into eq.~\eqref{eq:nl_diff_CNL} results in a system of \emph{nonlinear} algebraic equations because $\psi$ is evaluated at mid-time-step $n+1/2$ and, thus, depends on both $h^{n}_i$ (known) and $h^{n+1}_i$ (unknown). This system must be solved for the vector $h^{n+1}_i$ ($i=0,\hdots,N$), i.e., the approximation to the gravity current's shape at the next time step. Solving a large set of nonlinear algebraic equations can be tedious and computationally expensive. A simple and robust approach to obtaining a solution of the nonlinear algebraic system is through fixed-point iterations, or `the method of internal iterations' \citep{Yanenko1971}. Specifically, we can iteratively compute approximations to $h_i^{n+1}$, the grid function at the new time step, by replacing it in eq.~\eqref{eq:nl_diff_CNL} with $h_i^{n,k+1}$, where $h_i^{n,0}\equiv h_i^n$.
Then, the proposed numerical scheme takes the form: \begin{equation} \begin{aligned} \frac{h^{n,k+1}_i - h^n_{i}}{\Delta t} = &\frac{A}{2\Delta x} \left[ {\frac{x_{i+1/2}^q}{x_i^p}}\psi^{n+1/2,k}_{i+1/2} S^{n,k+1}_{i+1/2} - {\frac{x_{i-1/2}^q}{x_i^p}}\psi^{n+1/2,k}_{i-1/2} S^{n,k+1}_{i-1/2} \right]\\ + &\frac{A}{2\Delta x} \left[ {\frac{x_{i+1/2}^q}{x_i^p}}\psi^{n+1/2,k}_{i+1/2} S^{n}_{i+1/2} - {\frac{x_{i-1/2}^q}{x_i^p}}\psi^{n+1/2,k}_{i-1/2} S^{n}_{i-1/2} \right]. \end{aligned} \label{eq:nontriag_scheme} \end{equation} The key idea in the method of internal iterations is to evaluate the nonlinear $\psi$ terms from information known at iteration $k$ and the previous time step $n$, while keeping the linear slopes $S$ from the next time step $n+1$ at iteration $k+1$. This manipulation linearizes the algebraic system, at the cost of requiring iteration over $k$. Upon convergence of the internal iterations, $h_i^{n+1}$ is simply the last iterate $h_i^{n,k+1}$. Before we can further discuss the iterations themselves or their convergence, we must define our discrete approximations for $\psi$ and $S$. The operator $\mathcal{L}_d$ is essentially a second derivative, so we take inspiration from the standard way of constructing the three-point central finite-difference formula for the second derivative \cite{Strikwerda}. Therefore, $S_{i \pm 1/2}$ can be discretized using a two-point central-difference approximation on the staggered grid. For example, at any time step: \begin{equation} S_{i+1/2} \equiv \left(\frac{\partial{h}}{\partial{x}}\right)_{x=x_{i+1/2}} \approx \frac{h_{i+1} - h_{i}}{\Delta x}. \label{eq:dhdx_discr} \end{equation} Next, following \cite{zcs14,Christov2002}, we evaluate $\psi$ at $x_{i\pm1/2}$ by averaging the known values at $x_i$ and $x_{i+1}$ or $x_i$ and $x_{i-1}$, respectively. Likewise, to approximate $\psi^{n+1/2}$, we average the known values: $\psi^n$ at $t^n$ and $\psi^{n,k}$ at the previous internal iteration. In other words, our approximation of the nonlinear terms is \begin{subequations}\begin{align} {\psi}^{n+1/2,k}_{i+1/2} &= \frac{1}{2}\Bigg[\underbrace{\frac{1}{2}\left({\psi}^{n,k}_{i+1} + {\psi}^{n,k}_{i}\right)}_{={\psi}^{n,k}_{i+1/2}} + \underbrace{\frac{1}{2}\left( {\psi}^{n}_{i+1} + {\psi}^{n}_{i}\right)}_{={\psi}^{n}_{i+1/2}}\Bigg], \label{eq:psi_outa}\\ {\psi}^{n+1/2,k}_{i-1/2} &= \frac{1}{2}\Bigg[\underbrace{\frac{1}{2}\left({\psi}^{n,k}_{i} + {\psi}^{n,k}_{i-1}\right)}_{={\psi}^{n,k}_{i-1/2}} + \underbrace{\frac{1}{2}\left( {\psi}^{n}_{i} + {\psi}^{n}_{i-1}\right)}_{={\psi}^{n}_{i-1/2}}\Bigg]. \label{eq:psi_outb} \end{align} \label{eq:psi}% \end{subequations} Equations~\eqref{eq:psi} afford improved stability for nonlinear PDEs, while preserving the conservative nature of the scheme (as will be shown in \S\ref{sec:consv}), as discussed by \citet{VonR1975} who credits the idea of averaging nonlinear terms across time stages and staggered grid points to the seminal work of \citet{Douglas1959,Douglas1962}. The scheme thus described is depicted by the stencil diagram in figure~\ref{fig:stencil}. \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{stencil} \caption{Representative stencil of the proposed scheme. After performing $k$ internal iterations, the nonlinear terms $\psi_{i \pm 1/2}$ are computed at the intermediate stage `$n+1/2,k$' (highlighted in blue) from the known quantities $h_i^n$ and $h_i^{n,k}$.
The unknown quantity $h_i^{n,k+1}$ at the next internal iteration, stage `$n,k+1$' (highlighted in red), is found by solving the linear system in eq.~\eqref{eq:fin_diff}. The process continues until the convergence criterion in eq.~\eqref{eq:internal_conv} is met, yielding the (initially unknown) solution at $t=t^{n+1}$.} \label{fig:stencil} \end{figure} Here, it is worth noting that the classical Crank--Nicolson scheme \cite{Crank1947} is only \emph{provably} unconditionally stable \cite{Strikwerda} when applied to a \emph{linear} diffusion equation. It was suggested by \citet{Christov2009} that the current approach provides additional stability to this \emph{nonlinear} scheme for large time steps. But, since our problem is nonlinear, some care should be taken in evaluating how large a time step can be taken. Nevertheless, it is still expected that the largest stable $\Delta t$ will be independent of $\Delta x$. A complication arising in the present context is that we focus on the case of a power-law non-Newtonian viscous gravity current spreading in a variable-width cell. As a result, recalling table \ref{tb:eq_models}, this model features $\partial h/\partial x$ in $\psi$, \emph{unlike} the Newtonian case. While the temporal accuracy of the scheme is ensured through the robust implementation of the nonlinear Crank--Nicolson time-stepping, the spatial accuracy is contingent upon the discretization of $\partial h/\partial x$ in $\psi$. A further consequence is that, once we discretize $\partial h/ \partial x$, the discretization of $\psi$ becomes \emph{nonlocal} (i.e., it requires information beyond the $i$th grid point). Nevertheless, the overall scheme still only requires a three-point stencil for $\mathcal{L}_d$. In particular, for interior grid points, we use a central-difference formula, giving rise to the expression (at any time step): \begin{equation} {\psi}_{i} \equiv \left[ h \left|\frac{\partial h}{\partial x}\right|^{(1-r)/{r}} \right]_{x=x_i} \approx h_{i}\left|\frac{h_{i+1} - h_{i-1}}{2\Delta x}\right|^{(1-r)/{r}}. \label{eq:psi_disc} \end{equation} This choice of approximation ensures second-order accuracy at all interior grid nodes. However, at the second ($i=1$) and the penultimate ($i=N-1$) nodes, the second-order accurate approximation to $\partial h/\partial x$ in $\psi_{i\pm1/2}$ as defined in eqs.~\eqref{eq:psi} requires the unknown values $h_{-1}$ and $h_{N+1}$, respectively. To resolve this difficulty, we use `biased' (backward or forward) three-point difference approximations: \begin{subequations}\begin{align} {\psi}_{0} &\approx h_{0}\left|\frac{-3h_{0} +4h_{1} - h_{2}}{2\Delta x}\right|^{(1-r)/{r}},\\ {\psi}_{N} &\approx h_{N}\left|\frac{3h_{N} -4h_{N-1} + h_{N-2}}{2\Delta x}\right|^{(1-r)/{r}}. \end{align}\label{eq:psi_boundary}\end{subequations}
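In code, the nodal evaluation of $\psi$ for the power-law models, i.e., eqs.~\eqref{eq:psi_disc} and \eqref{eq:psi_boundary}, may be packaged as a short {\sc Matlab} routine (a sketch; the function name is an illustrative choice):
\begin{verbatim}
% psi = h*|dh/dx|^((1-r)/r) at the N+1 staggered nodes (as a column
% vector): central differences in the interior, biased three-point
% formulas at the end nodes. For r = 1, psi reduces to h.
function psi = psi_nodes(h, dx, r)
    e = (1 - r)/r;                       % rheological exponent
    n1 = numel(h);  psi = zeros(n1,1);   % n1 = N + 1 nodes
    psi(2:n1-1) = h(2:n1-1).*abs((h(3:n1) - h(1:n1-2))/(2*dx)).^e;
    psi(1)  = h(1) *abs((-3*h(1) + 4*h(2) - h(3))/(2*dx))^e;
    psi(n1) = h(n1)*abs((3*h(n1) - 4*h(n1-1) + h(n1-2))/(2*dx))^e;
end
\end{verbatim}
For $r<1$, the exponent $(1-r)/r$ is positive, so flat regions ($\partial h/\partial x \to 0$) pose no difficulty; for $r>1$, the exponent is negative, and a small regularization of the slope may be warranted in practice.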
Finally, substituting the discretization for $S$ from eq.~\eqref{eq:dhdx_discr} into eq.~\eqref{eq:nontriag_scheme}, it is possible to re-arrange the scheme into a tridiagonal matrix equation: \begin{multline} \underbrace{\left[-\frac{A\Delta t}{2(\Delta x)^2} {\frac{x_{i-1/2}^q}{x_i^p}}\psi_{i-1/2}^{n+1/2,k}\right]}_{\text{matrix subdiagonal coefficient}} h^{n,k+1}_{i-1}\\ + \underbrace{\left[1 + \frac{A\Delta t}{2(\Delta x)^2}\left({\frac{x_{i+1/2}^q}{x_i^p}}\psi^{n+1/2,k}_{i+1/2} + {\frac{x_{i-1/2}^q}{x_i^p}}\psi^{n+1/2,k}_{i-1/2}\right)\right]}_{\text{matrix diagonal coefficient}} h^{n,k+1}_i\\ + \underbrace{\left[ - \frac{A\Delta t}{2(\Delta x)^2} {\frac{x_{i+1/2}^q}{x_i^p}}\psi_{i+1/2}^{n+1/2,k} \right]}_{\text{matrix superdiagonal coefficient}} h^{n,k+1}_{i+1}\displaybreak[3]\\ = h^n_{i} + \frac{A\Delta t}{2(\Delta x)^2}\left[ {\frac{x_{i+1/2}^q}{x_i^p}}\psi^{n+1/2,k}_{i+1/2} (h^{n}_{i+1} - h^{n}_i) - {\frac{x_{i-1/2}^q}{x_i^p}}\psi^{n+1/2,k}_{i-1/2}(h^{n}_{i} - h^{n}_{i-1}) \right] \label{eq:fin_diff} \end{multline} for the interior grid points $i=1,\hdots,N-1$. In eq.~\eqref{eq:fin_diff}, the right-hand side and the variable coefficients in brackets on the left-hand side are both known, based on $h_i^{n,k}$, at any given internal iteration $k$. Then, each internal iteration involves the inversion of a tridiagonal matrix to solve for the grid function $h_i^{n,k+1}$. The inversion of this tridiagonal matrix can be performed efficiently with, e.g., `backslash' in {\sc Matlab}. Subsequently, the coefficient matrix must be recalculated for each internal iteration because of the dependency of $\psi_{i\pm1/2}^{n+1/2,k}$ on $h_i^{n,k}$ arising from eqs.~\eqref{eq:psi}, \eqref{eq:psi_disc} and \eqref{eq:psi_boundary}. The iterations in eq.~\eqref{eq:fin_diff} are initialized with $h^{n,0}_i = h^n_i$ ($i=0,\hdots,N$) and continue until an iteration $k+1=K$ is reached at which a $10^{-8}$ relative error tolerance is met. Specifically, \begin{equation} \max_{0\le i\le N}\left|h^{n,K}_i-h^{n,K-1}_i\right| < 10^{-8} \max_{0\le i\le N} \left|h^{n,K-1}_i\right|. \label{eq:internal_conv} \end{equation} Only a small number (typically fewer than a dozen) of internal iterations is required at each time step, making the scheme quite efficient overall. A detail remains, however. The algebraic system defined in eq.~\eqref{eq:fin_diff} applies to all \emph{interior} nodes, i.e., $i = 1,\hdots,N-1$. To complete the system, we must define rows $i=0$ and $i=N$, which arise from the discretization of the nonlinear BCs, discussed in \S\ref{sec:BC}. Upon completing the latter task, $h^{n,K}_i$ becomes the grid function at the next time step, $h^{n+1}_i$, and the time stepping proceeds.
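The complete time step can thus be sketched in {\sc Matlab} as follows. Here, \texttt{psi\_nodes} is the routine sketched above, the column vectors \texttt{x} and \texttt{xf} are the staggered nodes and cell faces of \S\ref{sec:grid}, and, for simplicity, both boundary rows enforce the homogeneous (no-flux) conditions appropriate to $\alpha=0$; all names, the dense matrix assembly (in practice, sparse storage via \texttt{spdiags} is preferable), and the iteration cap are illustrative choices, not the definitive implementation:
\begin{verbatim}
% One step t^n -> t^{n+1} of eq. (fin_diff) via internal iterations.
% h is the (N+1)-vector of nodal values h^n; A, p, q, r, dt, dx set.
hn = h;  psin = psi_nodes(hn, dx, r);    % quantities at time level n
hk = hn;                                 % internal iterate h^{n,0}
for k = 1:20                             % typically < 12 iterations
    psik = psi_nodes(hk, dx, r);
    pf = (psin(1:N) + psin(2:N+1) ...    % psi^{n+1/2,k} at the faces,
        + psik(1:N) + psik(2:N+1))/4;    % per eqs. (psi)
    g  = A*dt/(2*dx^2)*xf.^q.*pf;        % scaled face 'conductivities'
    cW = g(1:N-1)./x(2:N).^p;            % sub-/super-diagonal entries
    cE = g(2:N  )./x(2:N).^p;            % for interior rows i = 1..N-1
    T  = diag([-1; 1+cW+cE; 1]) ...      % rows 1, N+1: no-flux BC rows
       + diag([1; -cE], 1) + diag([-cW; -1], -1);
    rhs = [0; hn(2:N) + cE.*(hn(3:N+1) - hn(2:N)) ...
              - cW.*(hn(2:N) - hn(1:N-1)); 0];
    hnew = T\rhs;                        % tridiagonal solve (backslash)
    if max(abs(hnew - hk)) < 1e-8*max(abs(hk))  % eq. (internal_conv)
        hk = hnew;  break
    end
    hk = hnew;
end
h = hk;                                  % converged: h^{n+1}
\end{verbatim}
One such linear solve per internal iteration, with the tridiagonal matrix rebuilt each time, is all the scheme requires.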
\subsection{The Special Case of Linear Diffusion} A noteworthy special case of the proposed finite-difference scheme arises from setting the dimensionless exponents $p=q=0$ (i.e., no spatial variation of the diffusivity) and $\psi = 1$ (linear diffusion). Then, eq.~\eqref{eq:fin_diff} can be simplified and rearranged in the form ($i=1,\hdots,N-1$): \begin{equation} \left[1 + \frac{A \Delta t}{(\Delta x)^2}\right]h_{i}^{n+1} = \frac{A \Delta t}{2(\Delta x)^2}\left(h_{i-1}^{n+1} + h_{i+1}^{n+1} + h_{i-1}^{n} + h_{i+1}^{n}\right) + \left[ 1 - \frac{A \Delta t}{(\Delta x)^2}\right]h_i^n. \label{eq:CN_reduction} \end{equation} If the grid function $h_i^n \approx h(x_i,t^n)$ represents the temperature field along a 1D rigid conductor situated on $x\in[\ell,L]$, eq.~\eqref{eq:CN_reduction} is then the original second-order (in space and time) numerical scheme proposed by \citet{Crank1947} to solve a linear (thermal) diffusion equation \cite[\S6.3]{Strikwerda}. As such, this simplification helps illustrate the mathematical roots of the current scheme, and how we have generalized the classical work. \subsection{Implementation of the Nonlinear Boundary Conditions} \label{sec:BC} As discussed in \S\ref{sec:eq}, the boundary conditions are a manifestation of the global mass conservation constraint, eq.~\eqref{eq:mass_conservn_porous} or \eqref{eq:mass_conservn}, imposed on eq.~\eqref{eq:nl_diff}. The BCs described in eqs.~\eqref{eq:bcs} and \eqref{eq:bcs2} are defined at the `real' boundaries of the domain, i.e., at $x=\ell$ and $x=L$. The numerical scheme is implemented over a staggered grid. This allows for derivatives at $x=\ell$ and $x=L$ to be conveniently approximated using central difference formulas using two nearby staggered grid points. In this manner, the BC discretization maintains the scheme's second-order accuracy in space and time. Accordingly, for the case of a current spreading away from the cell's origin, eqs.~\eqref{eq:bcs} are discretized in a `fully-implicit' sense (to further endow the scheme with numerical stability and accuracy \cite{Christov2002}) as follows: \begin{align} \psi_{1/2}^{n+1/2,k} \frac{1}{\Delta x}\left(h^{n,k+1}_1 - h^{n,k+1}_0\right) &= \left\{ \begin{array}{ll} - \frac{\alpha B}{A \ell^q} t^{\alpha - 1}, &\quad \alpha > 0, \\[5pt] 0, &\quad \alpha=0, \end{array} \right. \\[3pt] \frac{1}{\Delta x}\left(h^{n,k+1}_{N} - h^{n,k+1}_{N-1}\right) &= 0. \end{align} Within the internal iterations, however, $\psi_{1/2}^{n+1/2,k}$ is known independently of $h^{n,k+1}_1$ and $h^{n,k+1}_0$. Hence, we can express the first ($i=0$) and last ($i=N$) equations, which define the respective rows in the tridiagonal matrix stemming from eq.~\eqref{eq:fin_diff}, as \begin{subequations}\begin{align} h^{n,k+1}_1 - h^{n,k+1}_0 &= \left\{ \begin{array}{ll} \displaystyle-\frac{4 \alpha B t^{\alpha-1}\Delta x}{A \ell^q (\psi_0^n + \psi_1^n + \psi_0^{n,k} + \psi_1^{n,k})}, &\quad\alpha > 0 , \\[5pt] 0, &\quad\alpha=0, \end{array} \right. \\[3pt] h^{n,k+1}_{N} - h^{n,k+1}_{N-1} &= 0. \label{eq:num_bc_RHS}% \end{align}\label{eq:num_bc}\end{subequations}% Similarly, we can derive the discretized BCs for spreading towards the origin, upon its release a finite distance away from the origin, from eqs.~\eqref{eq:bcs2}. Then, the first ($i=0$) and last ($i = N$) equations, which define the respective rows in the tridiagonal matrix, are \begin{subequations}\begin{align} h^{n,k+1}_{1}-h^{n,k+1}_{0} &= 0, \label{eq:num_bc2_LHS}\\ h^{n,k+1}_{N} - h^{n,k+1}_{N-1} &= \left\{ \begin{array}{ll} \displaystyle\frac{4 \alpha B t^{\alpha-1}\Delta x}{A L^q (\psi_{N}^n + \psi_{N-1}^n + \psi_{N}^{n,k} + \psi_{N-1}^{n,k})}, &\quad\alpha > 0 , \\[5pt] 0, &\quad\alpha=0. \end{array} \right. \end{align}\label{eq:num_bc2}\end{subequations}%
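Continuing the sketch of \S\ref{sec:scheme_main}, the injection BC of eqs.~\eqref{eq:num_bc} amounts to overwriting the first boundary row's right-hand side inside the internal-iteration loop (the matrix row $[-1,\,1,\,0,\dots]$ is unchanged). Below, \texttt{tn1} denotes the time at which the flux is enforced (we take $t^{n+1}$ for definiteness), and \texttt{B} and \texttt{alpha} are as defined in \S\ref{sec:eq}; this is an illustrative sketch, not the definitive implementation:
\begin{verbatim}
% Injection at x = ell (alpha > 0), eq. (num_bc): replace rhs(1) = 0 by
if alpha > 0
    S4 = psin(1) + psin(2) + psik(1) + psik(2);   % = 4*psi_{1/2}
    rhs(1) = -4*alpha*B*tn1^(alpha-1)*dx/(A*ell^q*S4);
end
% For injection at x = L instead (eqs. (num_bc2)), the analogous change
% is made to rhs(N+1), with a + sign and L^q in place of ell^q.
\end{verbatim}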
\section{Convergence and Conservation Properties of the Scheme} \label{sec:prop} At this point, the numerical scheme and boundary conditions defined in eqs.~\eqref{eq:fin_diff} and \eqref{eq:num_bc} or \eqref{eq:num_bc2} form a complete description of the numerical solution to the parabolic PDE from eq.~\eqref{eq:nl_diff}, for a gravity current propagating away from (or towards) the origin. We have claimed that the finite-difference scheme has second-order convergence and is conservative (i.e., it accurately maintains the imposed time-dependency of the fluid volume set by eq.~\eqref{eq:mass_conservn}). These aspects of the scheme will be substantiated in \S\ref{sec:conv} and \S\ref{sec:consv}, respectively. The computational domain's dimensions, which are set by $L$ and $b_1$, and the properties of the fluid being simulated are summarized in table~\ref{tb:sim_para}. For definiteness, in this chapter we select the fluid properties to be those of a 95\% glycerol-water mixture in air at 20\textdegree C (see \cite{Cheng_vis_2008,VolkAndreas2018}). \subsection{Estimated Order of Convergence} \label{sec:conv} First, we seek to justify the formal accuracy (order of convergence) of the proposed scheme through carefully chosen numerical examples. To do so, we pursue a series of benchmarks that are successively `more complicated' (from a numerical perspective). First, we simulate the case of a centrally released fixed mass of fluid propagating in two directions (\S\ref{sec:full_conv}). Second, we simulate the unidirectional spreading of a fixed mass of fluid (\S\ref{sec:haf_conv}). Last, we simulate the unidirectional spreading of a variable fluid mass (\S\ref{sec:haf_conv_inject}) by taking into account injection of fluid at the boundary. \begin{table}[t] \caption{Summary of the simulation parameters used in convergence and conservation studies. The fluid was assumed to be a 95\% glycerol-water mixture at 20\textdegree C. The width exponent $n$ and fluid's rheological index $r$ were varied on a case-by-case basis to simulate different physical scenarios.} \centering \begin{tabular}{p{5cm}p{2cm}p{2cm}} \noalign{\smallskip}\hline \noalign{\smallskip} Parameter & Value & Units\\ \noalign{\smallskip}\svhline\noalign{\smallskip} Channel length, $L$ & 0.75 & m\\ \noalign{\smallskip} Width coefficient, $b_1$ & 0.017390 & m$^{1-n}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} Total released mass, $w$ & 0.31550 & kg\\ \noalign{\smallskip} Density difference, $\Delta\rho$ & 1250.8 & kg/m\textsuperscript{3}\\ \noalign{\smallskip} Consistency index ($r\ne1$)\\ or dynamic viscosity ($r=1$), $\mu_0$ & 0.62119 & Pa$\cdot$s$^r$ \\ \noalign{\smallskip} \hline \end{tabular} \label{tb:sim_para} \end{table} In each of these three cases, there is a need for a reliable benchmark solution against which the numerical solutions on successively refined spatial grids can be compared. For the case of the release of a fixed mass of fluid, an exact self-similar solution is provided by \citet{Ciriello2016}. Specifically, the solution is for a power-law fluid in a uniform-width HS cell ($n=0$). The derivation of the self-similar solution is briefly discussed in the Appendix. We use this solution as the benchmark. As mentioned in \S\ref{sec:intro}, parabolic equations `forget' their IC and the solution becomes self-similar after some time. However, for a general PDE, it is difficult (if not impossible) to estimate how long this process takes.
Therefore, to ensure a proper benchmark against the exact self-similar solution, we start the simulation with the exact self-similar solution evaluated at some non-zero initial time ($t_0>0$). Then, we let the current propagate up to a final time $t_f$, with the expectation that the current will remain in the self-similar regime for all $t\in[t_0,t_f]$. Comparing the final numerical profile with the exact self-similar solution at $t=t_f$ then allows for a proper benchmark. To quantify the error between a numerical solution $h_\text{num}$ and a benchmark $h_\text{exact}$ solution at $t=t_f$, we use three standard function-space norms \cite{Evans2010}: \begin{subequations}\begin{align} \|h_\text{num}(x, t_f) - h_\text{exact}(x, t_f)\|_{L^\infty} &= \max_{x \in [\ell,L]} \left|h_\text{num}(x, t_f) - h_\text{exact}(x, t_f)\right|,\label{eq:Linf_cont}\\ \|h_\text{num}(x, t_f) - h_\text{exact}(x, t_f)\|_{L^1} &= \int_{\ell}^{L} \left|h_\text{num}(x, t_f) - h_\text{exact}(x, t_f)\right|\,\mathrm{d}x,\label{eq:L1_cont}\\ \|h_\text{num}(x, t_f) - h_\text{exact}(x, t_f)\|_{L^2} &= \sqrt{\int_{\ell}^{L} \left|h_\text{num}(x, t_f) - h_\text{exact}(x, t_f)\right|^2\,\mathrm{d}x}.\label{eq:L2_cont} \end{align}\label{eq:Norms_cont}\end{subequations} Using a second-order trapezoidal rule for the integrals, the definitions in eqs.~\eqref{eq:Norms_cont} can be expressed in terms of the grid functions to define the `errors': \begin{subequations}\begin{align} L^\infty_{\mathrm{error}} &\equiv \max_{0\le i\le N} \left|h_i^M - h_{\mathrm{exact}}(x_i,t_f)\right|,\label{eq:Linf}\\ \begin{split} L^1_{\mathrm{error}} &\equiv \Delta x \left\{ \frac{1}{2}\Big[\left|h_0^M - h_{\mathrm{exact}}(x_0,t_f)\right| + \left|h_N^M - h_{\mathrm{exact}}(x_N,t_f)\right|\Big] \right.\\ &\hspace{3cm} + \sum_{i=1}^{N-1}\left. \left|h_i^M - h_{\mathrm{exact}}(x_i,t_f)\right| \vphantom{\frac{1}{2}}\right\}, \end{split}\label{eq:L1}\\ \begin{split}L^2_{\mathrm{error}} &\equiv \left[ \Delta x \left\{ \frac{1}{2}\left[\left|h_0^M - h_{\mathrm{exact}}(x_0,t_f)\right|^2 + \left|h_N^M - h_{\mathrm{exact}}(x_N,t_f)\right|^2\right] \right. \right.\\ & \hspace{3cm} + \sum_{i=1}^{N-1}\left.\left. \left|h_i^M - h_{\mathrm{exact}}(x_i,t_f)\right|^2 \vphantom{\frac{1}{2}}\right\}\right]^{1/2}, \end{split}\label{L2} \end{align}\end{subequations} where $M$ is the time step at which $t^M = t_f$. Since the solution actually has a corner (derivative discontinuity) at the nose (wavefront) $x_f(t)$ such that $h\big(x_f(t),t\big)=0$, the propagating gravity current is in fact only a \emph{weak} solution to the PDE \cite{Evans2010}. Therefore, the $L^\infty$ norm is not a good one to measure the error, as we do not expect the solution to `live' in this function space. Nevertheless, our numerical results show convergence in the $L^\infty$ norm. The natural functional space for solutions of eq.~\eqref{eq:nl_diff} is the space of integrable functions, i.e., $L^1$. Indeed, we observe excellent second-order convergence in this norm. For completeness, the $L^2$ norm (commonly the function-space setting for parabolic equations \cite[Ch.~7]{Evans2010}) is considered as well. While we observe convergence close to second order in this norm as well, it is clearly not the `natural' one for these problems either. 
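In our post-processing, these discrete norms are evaluated directly from the grid functions; a minimal {\sc Matlab} sketch (with illustrative variable names) is:
\begin{verbatim}
% Discrete error norms, eqs. (Linf)-(L2), via the trapezoidal rule on
% the staggered mesh; hM and hex are the numerical and exact profiles
% at t = t_f, as (N+1)-vectors.
e      = abs(hM - hex);
errInf = max(e);
errL1  = dx*((e(1) + e(end))/2 + sum(e(2:end-1)));
errL2  = sqrt(dx*((e(1)^2 + e(end)^2)/2 + sum(e(2:end-1).^2)));
% With errors on two grids, dx and dx/2, the estimated order of
% convergence follows as p_est = log2(err_coarse/err_fine).
\end{verbatim}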
For our estimated-order-of-convergence study, $\Delta x$ is successively halved on a domain of fixed length, such that on the $c$th iteration of the refinement, the grid spacing is $\Delta x_c = \Delta x_0/2^{c-1}$, where $\Delta x_0$ is the initial grid spacing. Doing so ensures a set of common grid points (corresponding to the same physical locations) between successively refined grids. In all studies in this section, we begin with a grid with $N = 101$ nodes, and it becomes the coarsest grid for the refinement study. Given the (formally) unconditionally stable nature of the scheme, we take $\Delta t_c = 2\Delta x_c$ for the refinement studies without loss of generality. From a computational standpoint, it is desirable that the time step and grid spacing are of the same order of magnitude in the estimated-order-of-convergence study. \subsubsection{Central Release of a Fixed Fluid Mass (No Boundary Effects)} \label{sec:full_conv} Consider a symmetric domain $x\in[-L,+L]$. Then suppose that a fixed mass of fluid (i.e., $\alpha = 0$ in the volume constraint in eq.~\eqref{eq:mass_conservn}) is released with an initial shape that is symmetric about $x=0$. The final simulation time $t_f$ is such that the gravity current does not reach $x=\pm L$ for $t \le t_f$. Since the fluid mass is constant and the BCs are imposed at $x=\pm L$ (where $h=0$ initially and remains so for all $t\le t_f$, by construction), their discretization simply reduces to the trivial cases, i.e., eqs.~\eqref{eq:num_bc2_LHS} and \eqref{eq:num_bc_RHS}. Thus, the BCs for this study are simply linear Neumann (i.e., no flux or homogeneous) BCs, and they do not influence the order of convergence of the overall scheme. Therefore, this study allows us to verify that our approach to the treatment of the nonlinearity $\psi$, and its weighted averages appearing in the spatially discretized operator $\mathcal{L}_d$ in eq.~\eqref{eq:Ldiscrete}, delivers the desired second order of accuracy in space. Coupled with the Crank--Nicolson time-stepping's second-order accuracy in time, we thus expect second order of convergence in this refinement study. As stated above, we take the exact self-similar solution to eq.~\eqref{eq:nl_diff} provided by \citet{Ciriello2016} (and discussed in the Appendix) evaluated at $t=t_0$ and mirrored about $x=0$ as the IC. Upon evolving this IC numerically up to $t=t_f$, we compare the numerical profile to the same exact solution now evaluated at $t=t_f$. Hence, in accordance with the assumptions required to obtain this exact solution in \cite{Ciriello2016}, we limit this first convergence study to a uniform-width HS cell, i.e., $n=0$. \begin{figure}[t] \centering \subfloat[][Newtonian fluid ($r=1$).]{\includegraphics[width=0.5\textwidth]{convF_g0n0r1}\label{fig:centnewt}}\\ \subfloat[][Shear-thinning fluid ($r=0.7$).]{\includegraphics[width=0.5\textwidth]{convF_g0n0r07}\label{fig:centsthin}}\hfill \subfloat[][Shear-thickening fluid ($r=1.6$).]{\includegraphics[width=0.5\textwidth]{convF_g0n0r16}\label{fig:centsthick}} \caption{Estimated order-of-convergence of a `centrally released' fixed fluid mass propagating in both directions in a uniform-width HS cell ($n=0$). The currents' shapes are plotted from `early' times (purple/dark) to `late' times (green/light). In all cases, the volume of fluid is $\mathcal{V}_0 = 2.4902 \times 10^{-5}$ m\textsuperscript{3} and $b_1 = 0.01739$ m.
The currents are released at $t_0 = 1$ s and spread until $t_f = 3.5$ s.} \label{fig:conv1} \end{figure} Figure \ref{fig:conv1} shows the propagation of constant-mass viscous gravity currents of three different fluids: (a) Newtonian, (b) shear-thinning, and (c) shear-thickening power-law. The currents propagate symmetrically about the center of the domain ($x=0$). The sharp moving front $x_f(t)$ is accurately captured in these simulations on fairly modest (i.e., coarse) grids, without \emph{any} signs of numerical instability or need for special treatment of the derivative discontinuity. Computing the error as a function of $\Delta x$ during the grid refinement shows second-order convergence. This numerical example thus indicates that the proposed approach to treating the implicit nonlinear $\psi$ terms, specifically their evaluation at $n+1/2$, is consistent with the desired second-order accuracy. It should be noted that the restriction on $t_f$, imposed so that the current does not reach the domain boundaries, is critical since the chosen benchmark exact solution only describes the `spreading' behavior of the current and not its `levelling' (once it reaches the no-flux boundaries at $x=\pm L$). Indeed, the levelling regime possesses its own distinct self-similar behavior (see, e.g., \cite{Diez1992,Zheng2018}), which is beyond the scope of the present work. \subsubsection{Propagation of a Fixed Mass of Fluid in a Single Direction} \label{sec:haf_conv} To ascertain the accuracy of our discretization of the nonlinear BCs, we now return to a one-sided domain $x\in[\ell,L]$ with $\ell=0$. For the case of a current spreading away from the origin, the BC at the `left' end of the domain (from which the fluid is released) is non-trivial, and its proper discretization is key to the overall order of the convergence of the scheme. Conveniently, for a fixed mass ($\alpha=0$), the BCs still reduce to homogeneous Neumann conditions (recall \S\ref{sec:BC}); however, $h$ is no longer zero at the boundary (as was the case in \S\ref{sec:full_conv}). Thus, this benchmark is the next in our series of successively `more complicated' cases. \begin{figure}[t] \centering \subfloat[][Newtonian fluid ($r=1$).]{\includegraphics[width=0.5\textwidth]{convH_g0n0r1}\label{fig:anaic_newt}}\\ \subfloat[][Shear-thinning fluid ($r=0.5$).]{\includegraphics[width=0.5\textwidth]{convH_g0n0r05}\label{fig:anaic_sthin}}\hfill \subfloat[][Shear-thickening fluid ($r=1.5$).]{\includegraphics[width=0.5\textwidth]{convH_g0n0r15}\label{fig:anaic_sthick}} \caption{Estimated order-of-convergence study for the release of a fixed fluid mass propagating in a single direction (away from cell's origin) in a uniform-width HS cell ($n=0$). Once again, the fluid is released at $t_0 = 1$ s and spreads until $t_f = 3.5$ s. The currents' shapes are plotted from early times (purple/dark) through late times (green/light). The remaining model parameters for these simulations are the same as in figure~\ref{fig:conv1}.} \label{fig:conv2} \end{figure} Once again, we ensure that $t_f$ is such that the fluid does not reach the downstream ($x=L$) domain end. Then, as in \S\ref{sec:full_conv}, we can once again use the exact solution of \citet{Ciriello2016} as the benchmark exact solution; again, this requires restricting to uniform-width HS cells (i.e., $n=0$). Figure \ref{fig:conv2} shows clear second-order estimated order-of-convergence in the $L^1$ norm.
The observed second-order convergence indicates that the decision to implement the scheme on a staggered grid, on which the Neumann BCs (for $\alpha = 0$) are conveniently discretized using two-point central differences at the boundary, was indeed correct. \subsubsection{Propagation in a Single Direction with Mass Injection} \label{sec:haf_conv_inject} Finally, we subject the numerical scheme to its most stringent test yet. That is, we compute the estimated order of convergence under mass injection conditions ($\alpha > 0$). The injection occurs near the cell's origin, and the current propagates away from this location. Since $\alpha > 0$, the fully nonlinear forms of the BCs as given in eqs.~\eqref{eq:bcs} and \eqref{eq:bcs2} now come into play. Unlike the previously discussed cases of the release of a fixed fluid mass, a straightforward exact solution to the nonlinear ODE emerging from the self-similar analysis is not possible. For variable mass, obtaining a benchmark solution is significantly more challenging, given that the nonlinear ODE must be solved \emph{numerically} (see the Appendix). Despite the availability of accurate stiff ODE solvers, such as {\tt ode15s} in {\sc Matlab}, it is quite difficult to map the numerical solution of the self-similar ODE onto the selected computational grid and to maintain the desired order of accuracy throughout this procedure. Therefore, for this benchmark, we instead elect to use a `fine-grid' numerical solution as the benchmark solution. This fine-grid solution is then compared against the solutions on successively coarser grids to establish the estimated order of convergence of the numerical scheme. For this study, the simulation domain is $x \in [\ell,L]$ with $\ell = \Delta x_0$, so that $x_{i=0} = \ell - \Delta x_0/2$ and $x_{i=N} = L + \Delta x_0/2$; the boundary points are at the same cell faces on all grids during the refinement. The IC at $t_0 = 0$ s is from eq.~\eqref{eq:expontial_IC} with $b = 3.5 \times 10^2$ and $c = 25$ m\textsuperscript{$-1$}. The numerical solution is advanced up to $t_f = 1.5$ s. \begin{figure}[t] \subfloat[][$\alpha=1$]{\includegraphics[width=0.5\textwidth]{convH_g1n0r1}\label{fig:anaic_NBCnewt}} \hfill \subfloat[][$\alpha=1.5$]{\includegraphics[width=0.5\textwidth]{convH_g15n06r06}\label{fig:anaic_NBCnon_newt}} \caption{Estimated order-of-convergence study for a variable-mass gravity current with injection at $x=\ell$ on the truncated domain $x\in[\ell,L]$ with $\ell=\Delta x_0$. Simulations are shown for the case of (a) a Newtonian fluid in a uniform-width HS cell ($r = 1$, $n = 0$), and (b) a shear-thinning fluid in a variable-width cell ($r=0.6$, $n = 0.6$). The remaining model parameters are as in figure~\ref{fig:conv1}.} \label{fig:conv3} \end{figure} Figure \ref{fig:conv3} shows that the order of convergence of the numerical method is second order in space and time in the $L^1$ norm, as expected. In this figure, to show some variety, we present two distinct but arbitrarily selected cases: (a) a Newtonian fluid in a uniform-width HS cell ($n=0$) with volume growth exponent $\alpha = 1$, and (b) a non-Newtonian (shear-thinning, $r=0.6$) fluid in a variable-width HS cell (width exponent $n=0.6$) with a volume growth exponent $\alpha = 1.5$. With this final numerical test, we have justified the formal truncation error of the proposed finite-difference scheme.
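To make the staggered-grid placement and the two-point Neumann discretization mentioned above concrete, consider the following minimal Python sketch (an illustration under our reading of the grid layout, not the chapter's production code), which verifies that the boundary face sits midway between the two outermost unknowns:
\begin{verbatim}
import numpy as np

# Staggered grid on [l, L]: the unknowns h_i live at cell centres
# x_i = l + (i - 1/2)*dx, i = 0, ..., N, so that x_0 = l - dx/2 and
# x_N = L + dx/2 lie half a cell outside the physical boundaries.
l, L, N = 0.1, 1.0, 10
dx = (L - l) / (N - 1)
x = l + (np.arange(N + 1) - 0.5) * dx

# The boundary face x = l sits midway between x_0 and x_1, so a
# two-point difference there is a *central*, second-order one:
h = np.cos(x)                          # any smooth test profile
print((h[1] - h[0]) / dx, -np.sin(l))  # agree to O(dx^2)

# A homogeneous Neumann (no-flux) BC is then imposed by the single
# linear relation h_0 = h_1, i.e., one boundary row of the system.
\end{verbatim}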
This second-order convergence is nontrivial to achieve because the PDE and the scheme are both \emph{nonlinear}, requiring subtle approximations on a staggered grid, across half-time steps, and linearization of an algebraic system via internal iterations. \subsection{Satisfaction of the Mass Constraint at the Discrete Level} \label{sec:consv} Since the BCs derived in \S\ref{sec:eq} (and discretized in \S\ref{sec:BC}) stem from the mass conservation constraint (i.e., eq.~\eqref{eq:mass_conservn_porous} or eq.~\eqref{eq:mass_conservn}), it is expected that the proposed finite-difference scheme should produce a solution $h(x,t)$ that satisfies eq.~\eqref{eq:mass_conservn_porous} or eq.~\eqref{eq:mass_conservn} to within $\mathcal{O}\left[(\Delta x)^2+(\Delta t)^2\right]$ or better, if this constraint is checked independently after computing the numerical solution. To verify this capability of the scheme, in this subsection, we consider two cases: (i) a fixed fluid mass released near the origin ($\alpha = 0$), and (ii) spreading subject to mass injection ($\alpha > 0$) near the origin. Both cases are studied on the domain $x\in[\ell,L]$ with $\ell = \Delta x$. The solution is evolved on the time interval $t \in (t_0,t_f]$, and the volume error for each $t$ is computed as \begin{equation} \left|\int_\ell^L h(x,t) b_1 x^n \, \mathrm{d}x - (\mathcal{V}_0 +\mathcal{V}_\mathrm{in}t^\alpha)\right|, \label{mass_consv} \end{equation} where the $x$-integration is performed by the trapezoidal rule to $\mathcal{O}\left[(\Delta x)^2\right]$ on the staggered mesh, as before. We expect that the volume error, as defined in eq.~\eqref{mass_consv}, is $\mathcal{O}\left[(\Delta x)^2 + (\Delta t)^2\right]$, the same as the overall scheme. In this numerical study, the selection of the IC is no longer critical, as we do not compare against an exact self-similar solution. Accordingly, we select generic ICs from eqs.~\eqref{eq:poly_IC} and \eqref{eq:expontial_IC}. \subsubsection{Fixed Mass Release ($\alpha=0$)} In this case, the IC is a cubic polynomial determined from eq.~\eqref{eq:poly_IC} with $c=3$ and $\mathfrak{X}_0 = 0.25$ m. The fluid volume at each time $t$ is compared with the initial volume. Figure \ref{fig:cons_fixed_mass} shows that, while numerical error does build up in the total volume, the initial volume remains conserved to within (or better than) $(\Delta t)^2=10^{-6}$. \begin{figure}[hb] \centering \subfloat[][Newtonian fluid ($r=1$).]{\includegraphics[width=0.5\textwidth]{conservation_g0n0r1}\label{fig:consvnewt}}\\ \subfloat[][Shear-thinning fluid ($r=0.7$).]{\includegraphics[width=0.5\textwidth]{conservation_g0n07r07}\label{fig:consvsthin}}\hfill \subfloat[][Shear-thickening fluid ($r=1.5$).]{\includegraphics[width=0.5\textwidth]{conservation_g0n05r15}\label{fig:consvsthick}} \caption{Results of the conservation study for the release of a fixed fluid mass. To highlight the scheme's capabilities, each case features a different HS cell: (a) $n=0$, (b) $n=0.7$, and (c) $n=0.5$. The currents are allowed to propagate from $t_0 = 0$ s up to $t_f = 2.5$ s, through 2500 time steps ($\Rightarrow \Delta t = 10^{-3}$ s). In all cases, $\alpha=0$ and $\mathcal{V}(t) = \mathcal{V}_0 = 2.4902 \times 10^{-5}$ m\textsuperscript{3}.
The remaining model parameters for these simulations are the same as in figure~\ref{fig:conv1}.} \label{fig:cons_fixed_mass} \end{figure} \subsubsection{Mass Injection ($\alpha>0$)} A more stringent test of the conservation properties of the proposed scheme is conducted by applying the nonlinear BC associated with imposed mass injection at one end. For this case, the IC is taken to be the function in eq.~\eqref{eq:expontial_IC} with $b = 3.5\times10^{2}$, $c = 25$ m$^{-1}$, and $\mathfrak{X}_0 = \frac{1}{c}\ln\frac{1}{b}$. Combinations of $n$, $r$, and $\alpha$ values have been considered to highlight the conservation properties across different physical regimes. Figure \ref{fig:conservationMEG} shows that, in all cases, the volume constraint is properly respected; while the volume error builds up, it remains small (within or better than $(\Delta t)^2=10^{-6}$). \begin{figure} \centering \subfloat[][$\alpha = 1$.]{\includegraphics[width=0.5\textwidth]{conservation_g1n0r1}\label{fig:consvGnewt}}\\ \subfloat[][$\alpha = 1.5$.]{\includegraphics[width=0.5\textwidth]{conservation_g15n07r07}\label{fig:consvGsthin}}\hfill \subfloat[][$\alpha=2$.]{\includegraphics[width=0.5\textwidth]{conservation_g2n05r15}\label{fig:consvGsthick}} \caption{Results of the conservation study for mass injection. Three choices of the volume exponent $\alpha$ are considered. We simulate (a) a Newtonian fluid in a uniform HS cell ($r=1$, $n=0$), (b) a shear-thinning fluid in a variable-width HS cell ($r=0.7$, $n=0.7$), and (c) a shear-thickening fluid in a variable-width HS cell ($r = 1.5$, $n=0.5$). In all cases, $t_0$, $t_f$, and $\Delta t$ are as in figure~\ref{fig:cons_fixed_mass}. Additionally, we set $\mathcal{V}_0 = \mathcal{V}_\mathrm{in} = 2.4902 \times 10^{-5}$ m\textsuperscript{3} (see eq.~\eqref{eq:mass_conservn}). The remaining model parameters for these simulations are the same as in figure~\ref{fig:conv1}.} \label{fig:conservationMEG} \end{figure} \section{Conclusions and Outlook} In this chapter, we developed and benchmarked a finite-difference numerical scheme for solving a family of nonlinear parabolic PDEs with variable coefficients given by eq.~\eqref{eq:nl_diff}. A special feature of these nonlinear PDEs is that they possess solutions that can propagate in a wave-like manner with a finite speed of propagation. Our study featured examples from this family of PDEs for modeling the 1D spreading (propagation) of a power-law (Ostwald--de Waele) fluid in a horizontal, narrow fracture with variable width. We placed an emphasis on designing a series of numerical tests that show conclusively that the proposed scheme is second-order accurate in space and time. Analytical self-similar solutions for special cases of the nonlinear parabolic PDE were used to benchmark the numerical method. Furthermore, we verified that a global mass conservation/injection constraint can be successfully reformulated into a set of nonlinear boundary conditions, which were successfully discretized with second-order accuracy as well. The main advantage of the proposed finite-difference scheme is that it is strongly implicit, generalizing the time-stepping suggested by \citet{Crank1947}. Therefore, the proposed scheme does not formally require a time-step restriction for stability. By using a staggered grid, along the lines of \citet{Christov2009}, nonlinear terms were handled within the same three-point stencil as the classical Crank--Nicolson scheme.
This choice of grid is particularly convenient for the discretization of the nonlinear boundary conditions, allowing second-order accuracy to be achieved with just a two-point stencil near the domain boundaries. Using fractional steps in time (`internal iterations'), we reformulated the nonlinear algebraic problem at each time step as a fixed-point iteration. In future work, an interesting extension to our proposed numerical scheme could be the inclusion of a generic source term of the form $\mathcal{S}(x,t,h)$, added to the right-hand side of eq.~\eqref{eq:nl_diff}. Such a term can capture the effects of, e.g., a leaky (porous) substrate over which a gravity current propagates, in which case $\mathcal{S}(x,t,h) = -\varkappa h(x,t)$ for some drainage constant $\varkappa$ \cite{Pritchard2001} (see also \cite[\S9.2]{Woods2015}). Then, the Crank--Nicolson discretization in eq.~\eqref{eq:nl_diff_CNL} could be modified by adding \begin{equation} \frac{1}{2}\left[\mathcal{S}(x_i,t^{n+1},h_i^{n+1}) + \mathcal{S}(x_i,t^{n},h_i^{n}) \right] \label{eq:source_term} \end{equation} to the right-hand side. Here, it is assumed that $\partial h/\partial x$ does not appear in $\mathcal{S}$ but only in $\mathcal{L}$. Therefore, the discretization in eq.~\eqref{eq:source_term} (even if nonlinear) will, at most, introduce a term in the matrix diagonal coefficient and a term on the right-hand side of eq.~\eqref{eq:fin_diff}. Another variation on this theme involves the spreading of an unconfined viscous fluid above a deep porous medium into which it penetrates in a time-dependent manner over a depth of $l(x,t)$ \cite{Acton2001}. Then, $\mathcal{S}(x,t,h) = -\kappa [ 1 + h(x,t)/l(x,t) ]$ and an additional ODE for $l(x,t)$ is coupled to eq.~\eqref{eq:nl_diff}. This problem is an interesting avenue for future extension of our proposed scheme, as we would have to discretize the ODE for $l(x,t)$ in the same Crank--Nicolson sense as eq.~\eqref{eq:nl_diff} and add an extra equation (row) to the discrete problem in eq.~\eqref{eq:fin_diff}. On the other hand, an inclination angle (recall that all geometries in figure~\ref{fig:domains} are lying flat, so gravity is directed in the $-y$-direction) results in a term proportional to $\partial h/\partial x$ being added to eq.~\eqref{eq:nl_diff} (for the case of a Newtonian fluid, see, e.g., \cite{Huppert1995,Vella2006}). This additional term changes the nonlinear diffusion equation~\eqref{eq:nl_diff} into a nonlinear \emph{advection--diffusion} equation. Care must be taken in discretizing this new advective term. A similar PDE arises in the segregation of bidisperse granular mixtures \cite{Dolgunin1995,Gray2015}. As discussed by \citet{ChristovMPM2018}, a strongly implicit Crank--Nicolson scheme can be successfully used for these problems. The scheme in \cite{ChristovMPM2018} is so robust that it performs well even in the singular vanishing-diffusivity limit of the advection--diffusion equation. A generic advection term $\partial \Psi(h)/\partial x$ can be handled analogously to the nonlinearity in the diffusion term. Specifically, we approximate \begin{equation} \left(\frac{\partial \Psi}{\partial x}\right)_{x=x_i} \approx \frac{1}{2}\left[ \left(\frac{\Psi_{i+1}^{n+1} - \Psi_{i-1}^{n+1}}{2 \Delta x}\right) + \left(\frac{\Psi_{i+1}^{n} - \Psi_{i-1}^{n}}{2 \Delta x}\right)\right].
\label{eq:adv_psi} \end{equation} Here, the advective term is discretized through a central difference formula involving a local three-point stencil on all interior nodes ($i=1$ to $N-1$). At the boundary nodes ($i=0$ and $i=N$), one can use a three-point biased (forward or backward) difference formula, as in eqs.~\eqref{eq:psi_boundary}. The now well-established idea of staggering the nonlinear term across fractional time steps is carried forward (recall eqs.~\eqref{eq:psi}). However, to properly linearize the advective term within the internal iterations, we must be able to write $\Psi(h) = \Upsilon(h)h$ (the most obvious way being to let $\Upsilon(h) \equiv \Psi(h)/h$) so that \begin{equation} \Psi_{i\pm1}^{n+1} \approx \Upsilon^{n+1/2,k}_{i\pm1} h^{n+1,k}_{i\pm 1}. \label{eq:adv_psi_internal} \end{equation} Then, inserting eq.~\eqref{eq:adv_psi_internal} into eq.~\eqref{eq:adv_psi} and adding the result to the left-hand side of eq.~\eqref{eq:fin_diff} modifies the tridiagonal system by adding $\pm\Upsilon^{n+1/2,k}_{i\pm1}/(4\Delta x)$ to the superdiagonal ($i+1$) and the subdiagonal ($i-1$), respectively. The remaining terms from eq.~\eqref{eq:adv_psi} are added to the right-hand side of the system. Any of these potential extensions would have to be benchmarked against the available first-kind self-similar solutions in \cite{Huppert1995, Acton2001, Vella2006, Woods2015}; however, no particular difficulties are expected to arise. Another avenue of future work is as follows. Nowadays, high-order (i.e., greater than second-order) nonlinear parabolic PDEs are found to describe a wealth of low Reynolds number fluid phenomena: from the spreading and healing \cite{Zheng2018,ZhengHealing} to the rupture dynamics \cite{Garg2017} of thin liquid films dominated by capillary forces (see also \cite[Ch.~6-C]{L07}). Typically, the spatial operator is of fourth order due to the inclusion of surface tension effects (which depend upon the curvature of $h$), making the PDE more challenging to solve numerically. (Note that this is distinct from the inclusion of capillary effects in the context of gravity currents propagating in porous media, see \cite{Golding2011}.) Even higher (sixth) order thin film equations arise in the dynamics of lubricated thin elastic membranes \cite{HM04,Flitton2004,Hewitt2015} dominated by elastic forces. To interrogate these complex interfacial phenomena, there is a current need for a robust and accurate numerical scheme to simulate these flows with low computational overhead (e.g., without the prohibitive time step stability restrictions of explicit schemes). In future work, it would be of interest to generalize the scheme from this chapter to such problems. Additionally, non-uniform (or adaptive) grids, which could be implemented along the lines of \cite{Christov2002}, can be used to capture singularity formation during thin film rupture. \begin{acknowledgement} We dedicate this work to the 80\textsuperscript{th} anniversary of Prof.\ J\"{u}ri Engelbrecht and his contributions to the fields of complexity science, nonlinear wave phenomena and applied mathematical modeling. I.C.C.\ acknowledges a very productive trip to Tallinn in September of 2014 (on invitation of Prof.~Andrus Salupere) and fondly recalls meeting and interacting with Prof.~Engelbrecht there, at the IUTAM Symposium on Complexity of Nonlinear Waves. We also thank Profs.~Tarmo Soomere and Arkadi Berezovski for their efforts in editing this volume and their kind invitation to contribute a chapter.
Finally, I.C.C.\ acknowledges many helpful conversations on gravity currents with Prof.~H.A.\ Stone, Dr.~Z.\ Zheng and Prof.~S.\ Longo. Specifically, we thank S.\ Longo for suggesting the form of the governing equation for a power-law fluid in a variable-width HS cell (third row in table~\ref{tb:eq_models}). \end{acknowledgement} \section*{Appendix} For many natural phenomena, a relatively simple procedure involving a scaling (dimensional analysis) of the governing equations can be used to yield the similarity variable and the appropriate rescaling necessary to obtain a first-kind self-similar solution \cite{Barenblatt1979}. Viscous gravity currents exhibit such self-similar propagation, meaning that the solution (at sufficiently `long' times \cite{Barenblatt1979}) depends solely upon a combined variable of $x$ and $t$, rather than on each independently. Self-similarity allows for the derivation of exact analytical solutions to the governing equation~\eqref{eq:nl_diff} against which numerical solutions can be benchmarked. Specifically, for the case of the release of a fixed mass of fluid ($\alpha = 0$ so that $\mathcal{V}(t) = \mathcal{V}_\mathrm{in}$ $\forall t\in[t_0,t_f]$), a closed-form analytical self-similar solution was used in \S\ref{sec:conv} to test the order of convergence of the numerical scheme. In this Appendix, following \citet{DiFed2017}, we summarize the derivation of said self-similar solution for a power-law non-Newtonian fluid spreading \textit{away} from the origin ($x=0$) of a HS cell of uniform width $b_1$ ($n=0$).\footnote{While self-similar solutions, of course, exist for $n>0$, they cannot be found in closed form as analytical solutions.} First, we introduce the following dimensionless variables (with $^*$ superscripts) from \cite{DiFed2017}: \begin{multline} x^* = \left(\frac{B}{A^\alpha}\right)^{1/(\alpha-2)} x,\qquad t^* = \left(\frac{B}{A^2}\right)^{1/(\alpha-2)} t,\\ h^*(x^*,t^*) = \left(\frac{B}{A^\alpha}\right)^{1/(\alpha-2)} h(x,t), \label{eq:scale} \end{multline} where $A$ is the constant from eq.~\eqref{eq:nl_diff} (defined in table \ref{tb:eq_models}) and $B = \mathcal{V}_\mathrm{in}/b_1$. Hereafter, we drop the $^*$ superscripts. Next, we must select a suitable similarity variable $\eta$. As discussed in \S\ref{sec:intro}, a scaling analysis of the dimensionless version of eq.~\eqref{eq:nl_diff} suggests that the self-similar solution of the first kind has the form \begin{equation} h(x,t) = \eta_N^{r+1}{t}^{F_2}f(\zeta), \qquad \zeta = \frac{\eta}{\eta_N}, \qquad \eta = \frac{x}{t^{F_1}}. \label{eq:simsol} \end{equation} It can be shown that the constant $\eta_N$ specifically corresponds to the value of $\eta$ at the nose of the current, i.e., $\eta_N = x_f(t)/{t}^{F_1}$, where $x=x_f(t)$ is such that $h\big(x_f(t),t\big) = 0$. Here, $\zeta$ is a convenient rescaled similarity variable, and the exponents $F_{1,2}$ are% \begin{subequations}\begin{align} F_1 &= \frac{\alpha + r}{r+2}, \\ F_2 &= \alpha - F_1. \end{align}\label{eq:F1F2}\end{subequations} The shape function $f(\zeta)$ represents the (universal) self-similar profile of the gravity current. We must now determine this function by substituting eqs.~\eqref{eq:simsol} into the dimensionless version of eq.~\eqref{eq:nl_diff} to reduce the latter to a nonlinear ODE: \begin{equation} \frac{\mathrm{d}}{\mathrm{d} \zeta}\left(f\left|\frac{\mathrm{d} f}{\mathrm{d} \zeta}\right|^{\frac{1}{r}}\right) + F_2 f - F_1 \zeta \frac{\mathrm{d} f}{\mathrm{d} \zeta} = 0,\qquad \zeta\in[0,1].
\label{eq:nonlinODE} \end{equation} The second-order ODE in eq.~\eqref{eq:nonlinODE} can be rewritten as a first-order system: \begin{equation} \frac{\mathrm{d}}{\mathrm{d}\zeta}\left\lbrace \begin{matrix} f_1 \\ f_2 \end{matrix} \right\rbrace = \left\lbrace \begin{matrix} f_2 \\ \displaystyle\frac{-r}{f_1 f_2 |f_2|^{(1-2r)/r}} \left[f_2|f_2|^{1/r} + F_2 f_1 - F_1 \zeta f_2\right] \end{matrix} \right\rbrace, \label{eq:nonlinODE2} \end{equation} where, for convenience, we have set $f_1 = f$ (and, hence, $f_2 = \mathrm{d}f/\mathrm{d}\zeta$). The system in eq.~\eqref{eq:nonlinODE2} is `stiff,' and we must use an appropriate ODE solver, such as {\tt ode15s} in {\sc Matlab}, subject to appropriate initial and/or boundary conditions at $\zeta=0,1$. A peculiarity of this self-similar analysis is that we have only a single BC for the ODE~\eqref{eq:nonlinODE}, namely $f(1)=0$, i.e., this is the location of the gravity current's nose $x=x_f(t)$ at which $\zeta=\eta/\eta_N=1$ and $h\big(x_f(t),t\big) = 0$. Since the ODE in eq.~\eqref{eq:nonlinODE2} requires a second initial or boundary condition, we use the `backwards-shooting' idea of \citet{Huppert1982a} to provide a second condition near $\zeta = 1$. Then, the ODE in eq.~\eqref{eq:nonlinODE2} can be integrated `backwards' from $\zeta = 1$ to $\zeta = 0$ subject to two `initial' conditions at $\zeta=1$. To this end, consider the asymptotic behavior of the current near the nose. By assuming that $f \sim \mathfrak{c}_1(1 - \zeta)^{\mathfrak{c}_2}$ as $\zeta\to1^-$ and substituting this expression into eq.~\eqref{eq:nonlinODE}, we obtain $\mathfrak{c}_1 = F_1^r$ and $\mathfrak{c}_2 = 1$ by balancing the lowest-order terms (consistently with the near-nose behavior of the exact solution~\eqref{eq:unisol} below). Now, we have two BCs (see also \cite{DiFed2017}):% \begin{subequations}\begin{align} f_1(1-\epsilon) &= F_1^r \epsilon,\\ f_2(1-\epsilon) &= -F_1^r, \end{align}\label{eq:f1f2_bc}\end{subequations} for a sufficiently small $\epsilon \ll 1$. We can now solve the system~\eqref{eq:nonlinODE2} subject to the `final' conditions~\eqref{eq:f1f2_bc} on the interval $\zeta \in [0,1-\epsilon]$. By convention, an ODE is solved with initial, not final, conditions. Therefore, we perform the transformation $\zeta \mapsto 1-\hat{\zeta}$, which leads to the right-hand side of eq.~\eqref{eq:nonlinODE2} being multiplied by $-1$. Then, the final conditions in eqs.~\eqref{eq:f1f2_bc} become initial conditions at $\hat{\zeta} = \epsilon$, and the first-order system of ODEs is solved on the interval $\hat{\zeta} \in [\epsilon,1]$. For certain special cases, a closed-form analytical solution to eq.~\eqref{eq:nonlinODE} can be obtained. For the case of the release of a fixed mass of fluid ($\alpha=0$), \citet{Ciriello2016} derived such an exact solution (as can be verified by substitution): \begin{equation} f(\zeta) = \frac{r^r}{(r+2)^r (r+1)} \left( 1 - \zeta ^{r+1} \right), \label{eq:unisol} \end{equation} which we used to benchmark our finite-difference scheme in \S\ref{sec:conv}. Finally, to obtain the viscous gravity current profile given in eq.~\eqref{eq:simsol}, we must compute $\eta_N$. This value follows from imposing the mass conservation constraint in dimensionless form: \begin{equation} \eta_N = \left[\int_0^1 f(\zeta) \,\mathrm{d} \zeta \right]^{-1/(r+2)} \approx \left[\int_\epsilon^1 f(\hat{\zeta}) \,\mathrm{d} \hat{\zeta} \right]^{-1/(r+2)}, \label{eq:etaN} \end{equation} where the second (approximate) equality is needed for the case in which eq.~\eqref{eq:nonlinODE} has to be integrated numerically (no exact solution); $\epsilon \ll 1$ is chosen sufficiently small, as above.
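The backwards-shooting procedure just described is compact enough to sketch in full. In the following Python snippet (our illustration; {\tt solve\_ivp} with the stiff {\tt BDF} method plays the role of {\tt ode15s}, and the parameter values are arbitrary), eq.~\eqref{eq:nonlinODE2} is integrated in the reversed variable $\hat{\zeta}$ from the nose conditions~\eqref{eq:f1f2_bc}, and, for $\alpha=0$, the result is checked against the closed-form profile of eq.~\eqref{eq:unisol}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

r, alpha = 1.0, 0.0                 # illustrative parameter values
F1 = (alpha + r) / (r + 2.0)
F2 = alpha - F1
eps = 1e-6

def rhs(zh, f):
    # Right-hand side of eq. (nonlinODE2) in zh = 1 - zeta; the
    # change of variable multiplies both components by -1.
    f1, f2 = f
    zeta = 1.0 - zh
    df2 = (-r / (f1 * f2 * np.abs(f2)**((1.0 - 2.0 * r) / r))
           * (f2 * np.abs(f2)**(1.0 / r) + F2 * f1 - F1 * zeta * f2))
    return [-f2, -df2]

# 'Initial' conditions near the nose, eqs. (f1f2_bc); we stop just
# short of zeta = 0 to avoid the removable singularity at f2 = 0.
sol = solve_ivp(rhs, [eps, 1.0 - eps], [F1**r * eps, -F1**r],
                method='BDF', dense_output=True,
                rtol=1e-10, atol=1e-12)

zh = np.linspace(eps, 1.0 - eps, 2001)
f_num = sol.sol(zh)[0]
eta_N = np.trapz(f_num, zh)**(-1.0 / (r + 2.0))   # eq. (etaN)

# For alpha = 0, compare with the exact profile, eq. (unisol):
zeta = 1.0 - zh
f_exact = r**r / ((r + 2.0)**r * (r + 1.0)) * (1.0 - zeta**(r + 1.0))
print(np.max(np.abs(f_num - f_exact)), eta_N)
\end{verbatim}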
Finally, we can substitute eqs.~\eqref{eq:unisol} and \eqref{eq:etaN} into eq.~\eqref{eq:simsol} to obtain the analytical solution for the profile of the gravity current as a function of $x$ at some time $t$. It should be noted, however, that for this solution to apply, the current must have achieved its self-similar asymptotics, having forgotten the initial condition from which it evolved. \footnotesize \bibliographystyle{spbasic}
\section{Introduction} \par Communication privacy is important. In IP networks, based on the source and destination IP addresses, an adversary can track interactions and interaction patterns, revealing personal data about the users. Therefore, practical mechanisms have been developed to enhance user privacy via unlinkability and unobservability in the so-called anonymity networks, or mix nets. One of the most popular anonymity networks today is The Onion Routing (Tor), built as an overlay network among volunteer systems on the Internet. Tor provides anonymous communication between source and destination as well as data integrity. Onion routing is a low-latency application of mix nets, where each message is encrypted to each proxy using public key cryptography, resulting in layered encryption. Each relay has a public and a private key. The public keys are known by all users and are used to establish the communication path. Anonymous communication is possible through traffic tunneling over a chain of randomly selected Tor relays. After the tunnel between a pair of Tor routers is set up, symmetric key cryptography is used to transfer the data. These encryption layers ensure sender unlinkability, whereby the eavesdropper is unable to guess the complete path from the observed links \cite{Erdin:2015,Nepal:2015}. \par As more and more data is transmitted in the configurable photonic layer, whereby all optical switches and routers forward packets without electronic layers involved, we envision privacy as an intrinsic property of optical networks as well. Just like optical and quantum cryptography have advanced the field of traditional cryptography \cite{Mowla:2016,ChenZeng:2015}, optical network systems designed with secrecy and anonymity features should also be able to provide essential building blocks for privacy in future networks, built to serve free societies. However, in contrast to Tor networks, where privacy and anonymity directly depend on the number and dependability of volunteer systems, the privacy features in optical networks need to be approached differently: for instance, it is a telecom operator that should be able to offer a private optical communication service as a value-added feature. As an example, an optical network can grant some client networks anonymous access to third-party servers in the cloud, whereby the traffic contents and the origin of requests can remain secret from both the attacker and the cloud provider. In designing an anonymous optical network akin to Tor, however, several obstacles need to be overcome, since the main features, such as encryption, traffic routing, and session key distribution, need to be implemented primarily in photonics, i.e., without the intervention of electronics. Also, just as Tor requires compute-intensive processing of encryption layers in forwarding routers, high-speed processing of optical data, or else large optical buffers, would be required, which is a challenge and calls for practical foundations for privacy-enhancing optical network technologies. \begin{figure*} [t] \centerline{\includegraphics[width=0.75\textwidth]{forwarding}} \vspace{-0.2cm} \caption{\small Anonymous forwarding through data transformation.} \label{transform}\vspace{-0.6cm} \end{figure*} \par In this paper, we propose to treat the well-known privacy constructs of Tor in the optical layer, which we refer to as Optical Onion Routing (OOR).
To this end, we address two practical issues: the design of all-optical anonymization nodes, and the degree of secrecy and privacy achieved. To design an optical anonymization node, we propose to generate the session key with a Linear Feedback Shift Register (LFSR), a component more commonly used for random number generation, able to utilize different primitive irreducible polynomials of random degree. In addition, we propose to use an optical XOR operation as encryption, an important all-optical technology coming of age. These two components make it possible to encrypt data in the optical layer at line speed, thus eliminating the need for large buffers in the node. To enable optical routing and forwarding, we integrate the anonymization functions into the traditional optical cross-connect architecture, with the goal of processing optical data all-optically as much as possible with current technologies. Finally, we prove formally that, for the proposed encryption technique and distribution of secret information, the system can be perfectly private and secrecy-preserving, whereby the entropy of the secret data is equal to or less than the equivocation observed by the fiber eavesdropper. \par The rest of the paper is organized as follows. Section II provides the principles of the proposed optical anonymity routing. Section III presents the analysis. Section IV shows analytical and simulation results. Section V concludes the paper. \section{System Model} \subsection{Anonymous forwarding} \par Onion routing in the Internet is based on a connection-oriented communication channel, \emph{a circuit}. This is where we start drawing the analogy. We envision an optical WDM network as the underlying infrastructure to set up that circuit, and assume a network of optical nodes and fiber links, where switching, routing, and forwarding are all done in the photonic domain. Just like in Tor, the nodes can act either as regular optical nodes, with all-optical switching and forwarding functions, or as anonymization nodes. Anonymization nodes are optical nodes with enhanced functions responsible for processing and forwarding optical signals, such that no correlation can be established between the source and destination by tapping into any link along the way. As the optical network architecture usually encompasses both a data plane and a control plane separated from it, we assume that the control plane is able to provide information about the network topology and the available network resources, and is able to direct optical data and control the related processing, such as encryption. Similar to Tor, only a subset of anonymization nodes in the network is enough to assure secrecy and anonymity. The control plane randomly selects anonymization nodes in the network and available wavelengths and then sends a control message to establish an optical circuit between source $s$ and destination $d$ on the selected wavelengths. Here, the control plane does not distribute the actual session keys, but only the routing information for the optical circuit setup as well as randomly selected parameters for session key generation. To keep this sensitive control information private and confidential, the control plane encrypts it in layers by applying public key cryptography, just like in Tor. Fig.~\ref{transform} illustrates the idea of the OOR network architecture.
\par The source $s$ is the initiator of private communication, whereby, based on control plane information, anonymization nodes and the corresponding available wavelengths are randomly selected and made known to the source. After that, the control plane sends, on a select wavelength or a separate control channel, the control messages to establish the optical circuit between source $s$ and destination $d$, as well as to distribute the session key generation policies to all nodes in that circuit. This is similar to the Tor network, where the tunnel is established over randomly selected IP routers. In our example, the path between source and destination consists of two concatenated circuits, one between nodes $s$ and $a$ and the other one between nodes $a$ and $d$, whereby each circuit contains one forwarding node; a forwarding node provides traditional optical switching and forwarding functions, without anonymization. The circuit, i.e., the end-to-end wavelength path, is established over arbitrary available links and forwarding nodes on the available wavelengths. Thus, the path between $s$ and $d$ is randomly selected for setup, so that neither the destination $d$ nor the anonymization nodes know the complete path selected (which is the essence of Tor). The control message is encrypted with the public keys of nodes $a$ and $d$, as in Tor. In contrast to Tor, where data from the exit node, i.e., the last anonymization node, to the destination is sent without encryption, all nodes involved in anonymous communication in OOR, i.e., $s$, $a$ and $d$, perform anonymization of optical data via encryption. \par The idea behind onion routing, and its Tor implementation, is to hide the communicating nodes from an eavesdropper on the individual links, as well as the identity of the source from the destination. This is how we envision doing it in the optical layer. After the optical path (tunnel) is established (via two circuits), the secret data $m$ is ready for transmission towards the anonymization node $a$. The secret data $m$ is encrypted at the source with the session keys $c_a$ and $c_d$ of $a$ and $d$, respectively. These session keys are generated with the previously mentioned Linear Feedback Shift Register (LFSR), a new application for a component more commonly used for random number generation. Here, the LFSR generates the key based on randomly selected generator polynomials and seeds configured by the control plane. Thus, the source sends an optical stream $[m+c_d+c_{a}]$ to node $a$. The \emph{Lambda Reader} in node $a$ detects the input port and the allocated wavelength, and based on that allocated wavelength the \emph{Key Generation Unit} provides the suitable session key $c_a$, which is then sent to the optical Decryption Unit. Finally, the payload is decrypted with the key $c_a$ as $m+c_{d}+c_{a}+c_{a}=m+c_{d}$. Next, the optical data stream $[ m+c_d]$ sent by $a$ over the sub-tunnel reaches node $d$. After detection of the input signal on the corresponding wavelength at $d$, the session key $c_d$ is applied to the optical payload as $m+c_d+c_d=m$. Due to the data encryption in the anonymization nodes, each outgoing optical stream differs from the incoming optical stream. When an attacker has access to the links of a certain switch, it must deanonymize all outgoing data to identify a certain optical stream of interest and to guess its next hop.
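The layered XOR encryption and hop-by-hop decryption just described can be summarized in a few lines of (electronic-domain) Python; the payload and keys below are hypothetical stand-ins for the optical streams and the LFSR outputs:
\begin{verbatim}
import os

def xor(a: bytes, b: bytes) -> bytes:
    # Bitwise XOR of equal-length streams (models the all-optical
    # XOR gate applied to payload and session key).
    return bytes(x ^ y for x, y in zip(a, b))

m   = b"secret optical payload"   # secret data m'
c_a = os.urandom(len(m))          # session key of node a
c_d = os.urandom(len(m))          # session key of destination d

stream = xor(xor(m, c_d), c_a)    # source emits [m + c_d + c_a]
stream = xor(stream, c_a)         # node a strips its layer: [m + c_d]
stream = xor(stream, c_d)         # destination d recovers m
assert stream == m
\end{verbatim}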
\subsection{Discussion on implementation} \subsubsection{Public Key Cryptography} To distribute confidential control plane information during the optical circuit (tunnel) setup process, public key cryptography can be applied, similar to Tor. The public key cryptosystems require two keys, i.e., a public $K^+$ and a private $K^-$. The public keys of all nodes in the network are known. Due to the fact that the key information must be stored, the public key cryptosystem is implemented in the electronic control plane layer, similar to what is proposed in \cite{Guneysu:2014}. In our architecture, we do not define a specific public key cryptosystem and generally allow all existing public key designs, which can be based on the discrete logarithm problem, such as Diffie--Hellman, on the factorization problem, such as Rivest--Shamir--Adleman (RSA), or on the square root problem, such as Rabin systems. \begin{figure} [t!] \hspace{1mm} \centerline{\includegraphics[width=0.38\textwidth]{node2}} \vspace{-0.4cm} \caption{\small Architecture of an optical node in the OOR network.} \label{node}\vspace{-0.6cm} \end{figure} \subsubsection{Exclusive or (XOR) operation} The XOR operation utilized in cryptographic systems is usually implemented in software. We propose to implement encryption and decryption with session keys in the optical layer, i.e., data anonymization, with an all-optical XOR gate component \cite{Dimitriadou:2013,Yang:2010}. The XOR operation transforms the incoming data into new outgoing data and, thus, unlinks the communication between source and destination, whereby each incoming message is mixed, i.e., XOR concatenated, with a session key. Here, ultrafast nonlinear interferometers based on semiconductor optical amplifiers (SOAs) can be used to combine two optical streams, whereby the transverse electric (TE) and transverse magnetic (TM) components of a probe pulse can be split and recombined by setting the relative optical delays between them. When the phases experienced by the TE and TM components in the SOA are the same, the resulting signal is '1', or '0' otherwise. \subsubsection{Linear Feedback Shift Register} In each anonymization node, optical data is anonymized by encryption before it is forwarded to the output port. We propose to generate the session key for anonymization by an LFSR, a component commonly used as a random number generator. The session key generation with an LFSR is discussed in \cite{67207, Eljadi:2014}. Since an LFSR of length $n$ bits can generally be easily deduced by observing $2n$ consecutive generated bits, we propose to utilize different generator polynomials of different degrees and randomly selected seeds. This can help us to increase the amount and randomness of possible session keys, with the goal of providing a one-time pad, which is random, at least as long as the plaintext, never reused, and completely secret \cite{Shannon:1949}. Generally, an LFSR can be implemented in hardware or software. In our system, this function is implemented electronically, which necessitates an electrical-to-optical conversion of the key before the encryption (XOR). The session key can be pre-calculated during the circuit setup process, or generated at line rate. The first variant is suitable for LFSR implementations as we propose, based on \cite{David:2012}, whereby it must be assumed that an additional electronic buffer is required to store the pre-calculated keys.
In contrast, the second solution must operate at line speed, which is a challenge for current optical systems due to the high data rates, though it would eliminate the need for a buffer. \subsection{OOR node architecture} \par A possible node architecture is illustrated in Fig.~\ref{node}. As can be seen, the typical WDM node architecture is enhanced to provide the anonymization functions. In that sense, the node can act as a simple forwarding node (all-optical switching), as an anonymization node, or as an OOR node for sending (source) or termination (destination). We next describe each of these functionalities and concepts in more detail. \subsubsection{Source} The basic function of the source node is to modulate the electronic signals onto optical carriers, along with the flow encryption, the wavelength assignment and/or any other flow adaptation for further transmission over an OTN/WDM network. Here, the optical data generated ($m'$) is encrypted with the dedicated keys from the LFSR by applying the optical XOR. The source collects all anonymization keys of the anonymization nodes that are to be used on the assigned wavelength path, e.g., $\vv c=\{c_1, c_2,...,c_d \}$; these keys are utilized by each anonymization node traversed, e.g., $c_1$ by node $a_{1}$ and $c_d$ by destination $d$, as a way to anonymize the incoming optical data for the next hop or to decrypt it. The generation of the key $c_i$ is important: it is created by the LFSR with polynomial $c_i^*$ from vector $\vv c^*$ and seed $Sh_i$ from $\vv {Sh}$, both randomly selected by the control plane and distributed to each node $a_i$ during the circuit setup. The incoming optical signal $m\rq{}$ is then encrypted with the optical XOR by all elements from $\vv c$ as $[m'+c_{1...d}]$, where $c_{1...d}=c_1+c_2+...+c_d$. The encrypted optical data $[m'+c_{1...d}]$ is finally sent over the predefined optical circuit (on Fiber 4). \subsubsection{Anonymization} \par Each anonymization node performs data anonymization/decryption before forwarding. Here, the incoming optical flow $m_i=[m'+c_{i...d}]$ from the optical circuit on wavelength $\lambda_1$ (Fiber 1) is detected, and the information about this wavelength is sent to the Lambda Reader and the Key Generation Unit for matching. As a result, the corresponding anonymization key $c_i$ is forwarded to the optical XOR gate. Here, too, the session key is generated by the LFSR utilizing the generator polynomial $c_i^*$ and the corresponding seed $Sh_i$, just like in the source node. The session key is simply converted to an optical signal before it is XOR-concatenated with the data as follows: $[m'+c_{i...d}+c_i]=[m'+c_{i+1...d}]$. For simplicity, if the data is to be further sent towards the next hop, we assume that the same wavelength is utilized, respecting the wavelength continuity constraint. Otherwise, the signal can also be retransmitted (converted) to another wavelength, which would make the node more complex. \subsubsection{Destination} When the data reaches its destination $d$, it is processed just as if the destination were an anonymization node. The received optical payload $[m'+c_d]$ is decrypted with key $c_d$ into $[m']$ and converted into an electronic signal at the destination.
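As a software illustration of the Key Generation Unit discussed above (a sketch only; the polynomial and seed below are examples, whereas in OOR they would be configured by the control plane), a Fibonacci LFSR over GF(2) can be realized as follows:
\begin{verbatim}
def lfsr_keystream(degree, taps, seed, nbits):
    # Fibonacci LFSR with characteristic polynomial
    # x^degree + sum over t in taps of x^t (exponent 0 is the
    # constant term); seed is the nonzero initial register state.
    state = [(seed >> i) & 1 for i in range(degree)]
    out = []
    for _ in range(nbits):
        out.append(state[0])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]
    return out

# Primitive polynomial x^5 + x^2 + 1 and a nonzero seed give a
# keystream of maximal period 2^5 - 1 = 31:
ks = lfsr_keystream(degree=5, taps=(0, 2), seed=0b10011, nbits=62)
assert ks[:31] == ks[31:]
\end{verbatim}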
\section{Modeling and analysis} \subsection{Routing and Threat Model in OOR} \subsubsection{Routing model} \par We assume that optical circuits in the form of wavelength-continuous optical paths are set up in a random fashion over a randomly selected wavelength, whereby the network provides at most $\mathcal N$ optical paths between source $s$ and destination $d$ over all wavelengths and fibers, which for the sake of modeling we collect in set $\Psi$. Generally, only $N$ out of $\mathcal N$, $\mathcal N\geq N$, paths are available, while at least one wavelength path among them is randomly selected for transmission. All $\mathcal N$ existing optical paths are arranged in the sorted vector $\vv{P}=\begin{pmatrix} P_0 \ P_1 \ ... \ P_{\mathcal N-2} \ P_{\mathcal N-1} \ \end{pmatrix}$ of the related probabilities that an individual path $\mathcal P_l$ is available. We denote a fiber link connecting two nodes $v'$ and $v''$ as $e_{v'v''}$, and a wavelength link on $\lambda_x$ within it as $e_{v'v''}(\lambda_x)\in e_{v'v''}$. The capacity of a fiber link is measured in the number of wavelengths. Thus, each edge $e_{v'v''}$ provides $c_{e_{v'v''}}$ parallel wavelength links between nodes $v'$ and $v''$. Each path $\mathcal P_l $ between $s$ and $d$ consists of $\theta_{l}+1$ links $e_{lk}\in \mathcal P_l$, $1\leq k\leq \theta_{l}+1$, and of $\theta_{l}$ intermediate nodes $v_{lq}\in\mathcal P_l$, $1\leq q\leq \theta_{l}$, while $\eta$ out of $ \theta_{l}$ nodes are randomly selected as anonymization nodes $a_{i}$, $1\leq i\leq\eta$. \par Let us now assume that there is a collection $\mathcal A$, which contains $\mathfrak a= C(|\Phi|,\gamma):=\binom{|\Phi|}{\gamma}$ path sets $\mathcal A_{\alpha}$, $1\leq\alpha\leq\mathfrak a$, where $\Phi$ can be the collection of all $\mathcal N$ existing paths, i.e., $\Psi$, or of the $N$ available paths, i.e., $|\Phi|=\mathcal N$ or $|\Phi|=N$, and $\gamma$ can be the number of available paths $N$ or the number of paths required for transmission, i.e., $\gamma:=N$ or $\gamma:=1$. In contrast, the set $B_{\alpha}=\Phi\backslash A_{\alpha}=\{\mathcal P_{l} |\mathcal P_{l} \notin A_{\alpha} \}$ from collection $\mathcal B$ is the $\alpha^{th}$ set of the remaining $|\Phi|-\gamma$ elements, which are not in the $\alpha^{th}$ combination $A_{\alpha}$. Thus, the probability $P''(\alpha,\gamma, \Phi)$ that $\gamma$ paths are in set $A_{\alpha}$ and not in set $B_{\alpha}$ is defined as \begin{equation}\label{PrPathComb} P''(\alpha, \gamma, \Phi)=\prod_{i=1,\atop \mathcal P_{l_i}\in A_{\alpha}}^{\gamma}P_{l_i}(\alpha)\prod_{t=1,\atop \mathcal P_{l_t}\in B_{\alpha}}^{|\Phi|-\gamma}(1-P_{l_t}(\alpha)), \end{equation} where $P_{l_i}(\alpha)$ and $P_{l_t}(\alpha)$ are the availability probabilities of the paths $\mathcal P_{l}$, $l=1,2,...,\mathcal N$, collected in $A_{\alpha}$ and $B_{\alpha}$, and the indexes $i$ and $t$ are the sequence numbers of the paths in $A_{\alpha}$ and $B_{\alpha}$, respectively. \par As a result, the network provides $\Omega=j$ wavelength paths out of $\mathcal N$ paths with probability $\hat P(\Omega=j,\Psi)$ defined as \begin{equation}\label{PrNPath} \hat P(\Omega=j,\Psi)=\sum_{\alpha=1}^{\binom{\mathcal N}{j}}P''(\alpha, j, \Psi), \end{equation} where the $\alpha^{th}$ set from the collection $\mathcal A$ contains one path combination out of the $\binom{\mathcal N}{j}$ combinations of $\Omega=j$, $0\leq j\leq\mathcal N$, available paths with the related probabilities from vector $\vv P$.
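Equations \eqref{PrPathComb} and \eqref{PrNPath} amount to an enumeration of path subsets. As a sketch, a direct (brute-force) Python evaluation, here using the path availability vector from our evaluation in Section IV, reads:
\begin{verbatim}
from itertools import combinations
from math import prod

# Availability probabilities of the N = 12 candidate paths (the
# vector P used in the evaluation section):
P = [0.9, 0.85, 0.8, 0.75, 0.75, 0.7,
     0.65, 0.6, 0.55, 0.55, 0.5, 0.5]

def p_hat(j, P):
    # Eq. (PrNPath): probability that exactly j paths are available,
    # summing the products of eq. (PrPathComb) over all j-subsets.
    total = 0.0
    for A in combinations(range(len(P)), j):
        inA = set(A)
        total += prod(P[l] if l in inA else 1.0 - P[l]
                      for l in range(len(P)))
    return total

P_B = p_hat(0, P)        # blocking probability, eq. (RequestBlocking)
assert abs(P_B - prod(1.0 - p for p in P)) < 1e-12
print(P_B, sum(p_hat(j, P) for j in range(len(P) + 1)))  # sums to 1
\end{verbatim}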
\par In the case of $N=0$ (no path is available), the transmission request will be blocked with probability $P_B$, i.e., \begin{equation}\label{RequestBlocking} P_B=\hat P(\Omega=0, \Psi). \end{equation} \par Since we assume that all $N$ paths have the same probability $1/N$ of being selected for transmission, the probability that any path $\mathcal P_l$ collected in $\mathcal A_{\alpha}$ is available and utilized is \begin{equation}\label{PrComb} P(\alpha)=\tfrac{P''(\alpha, 1, \Psi)(1-P_B)}{\hat P(\Omega=1, \Psi)}. \end{equation} \subsubsection{Threat model} The threat model assumes that an attacker can eavesdrop on select links in the network and guess the source and destination nodes, as well as the data transmitted. To model this, let us define a set $\mathfrak W$, containing all possible wiretap edges, while at most $\mathbf w = | \mathfrak W| $ edges can be attacked simultaneously. Since optical receivers are broadband, we assume that an attacker is always able to access all $c_{e_{v'v''}}$ wavelengths on a fiber link $e_{v'v''}$. \par Let us assume the worst type of attack in the network, where any link in the network can be eavesdropped with a probability $\phi$. In other words, the set of fiber links attacked, $\mathfrak W$, and its size $\mathbf w$ are variable, while each link can belong to set $\mathfrak W$ with probability $\phi$. Here, each wavelength path can be wiretapped with probability $P^w(\mathcal P_l)$, defined by Eq.~\eqref{PrWPath} as the probability that at least one wiretapped link is utilized by path $\mathcal P_l$: \begin{equation}\label{PrWPath} P^w(\mathcal P_l)=\phi\sum_{i=0}^{\theta_l}(1-\phi)^i=1-(1-\phi)^{\theta_l+1}, \end{equation} where $\phi$ has the same value for all links in the network. As a result, the probability that a wiretapped path is utilized for transmission is defined by using Eqs.~\eqref{PrComb} and \eqref{PrWPath} as \begin{equation}\label{NumWPathsA} P^{\phi}_w=\sum_{\alpha=1}^{\mathcal N}P^w(\mathcal P_l)P(\alpha), \forall \mathcal P_{l}\in\mathcal A_{\alpha} \end{equation} \subsection{Analysis of data anonymization} The secret data $m'$ of length $L_{m'}$ bits is sent over the OOR network passing through $\eta$, $0\leq\eta\leq\eta_{max}$, anonymization nodes, whereby $\eta_{max}$ is the maximal number of anonymization nodes that can be utilized along an optical tunnel. When an attacker gains access to the encrypted optical data $m$ with probability $P^{\phi}_w$, as discussed previously, it has to strip all anonymization keys from $m$ to reveal the secret data $m\rq{}$. \begin{lem}\label{secrM} The OOR system is perfectly secure, whereby an attacker is not able to recover the secret data $m\rq{}$ sent over a randomly selected wavelength path. \end{lem} \begin{proof} The secret data $m'$ of length $L_{m'}$ bits is generally an arbitrary bit sequence out of all $2^{L_{m'}}$ possible sequences, while the entropy of the plain text is $H(m')=L_{m'}$. $m'$ is encrypted by all $\eta+1$ secret keys of all anonymization nodes and of the destination. Thus, there are $(\eta+1)! \cdot \binom{2^{L_{m'}}-2} {\eta+1}\cdot 2^{L_{m'}}$ possible combinations of $m'$ and the $\eta+1$ secret keys, while each combination always contains $\eta+2$ different elements out of $2^{L_{m'}}$, whereby only $m\rq{}$ can be the zero element. Thus, the entropy of the encrypted data $m'$ is defined as follows: \begin{equation}\label{Hm} H_e(m')=\log_2\left((\eta+1)!\,
\binom{2^{L_{m'}}-2} {\eta+1}\cdot 2^{L_{m'}}\right)\overset{!}{\geq}L_{m'}=:H(m'). \end{equation} \par An attacker does not have any knowledge about the number of selected anonymization nodes $\eta$ or the number of anonymization nodes already passed on the wavelength path and, thus, has to check all $\eta_{max}+1$ possible variants of the same, where $m'$ can be encrypted by one to $\eta_{max}+1$ secret keys. Thus, the equivocation $H(m'|m)$ observed by an attacker is \begin{equation}\label{HmA} H(m'|m)=\sum\limits^{\eta_{max}}_{i=0}\log_2\left((\eta_{max}+1-i)! \cdot \binom{2^{L_{m'}}-2} {\eta_{max}+1-i}\cdot 2^{L_{m'}}\right). \end{equation} However, $H(m'|m)\geq H(m')$; thus, any $m'$ can be transmitted in a perfectly secret manner. \end{proof} \par To provide data privacy and anonymity, the proposed OOR utilizes different functional components, such as public key cryptography, XOR encoding, and key generation with LFSRs, on the control and data planes. Next, we analyze information-theoretically the resulting privacy and anonymity degree as a function of the components utilized. \subsubsection{Public Key Cryptography} The public key cryptosystems require two keys, i.e., a public $K^+$ and a private $K^-$. The message $m$ sent to node $v_j$ is encrypted by the public key $K^+_j$ of $v_j$ as $m_e=K^+_j(m)$. The destination $v_j$ can decrypt the received message $m_e$ by applying the private key $K^-_j$ as $K^-_j(m_e)=K^-_j(K^+_j(m))=m$. For a high level of data secrecy, we restrict the policies for selecting the key and plain text sizes as $H(m)\leq H(K^+)$, where $H(m)=L_m$ and $H(K^+)=L_K$ are the entropies of a secret message of length $L_m$ bits and of a public key of length $L_K$ bits, respectively, i.e., $L_m\leq L_K$. This ensures that an eavesdropper is not able to break the utilized cryptosystem by obtaining the encrypted data $m_e$. \subsubsection{XOR operation} We assume that the incoming data $m_{v_i}$ of length $L_{m_{v_i}}$ in node $v_i$ is mixed, i.e., XOR concatenated, with a secret key $c_i$ of length $L_{c_i}$ so that an attacker cannot recognize $m_{v_i}$ and its next hop node $v_j$. The outgoing data $m_{v_j}$ is defined as $m_{v_j}=m_{v_i}+c_i=\{\forall m_{{v_i}_p}\in m_{v_i} \land \forall c_{{i}_p}\in c_i| (\neg {m_{{v_i}_p}}\land c_{{i}_p})\lor( m_{{v_i}_p}\land \neg c_{{i}_p})\}$, where $m_{{v_i}_p}$ and $c_{{i}_p}$ are the $p^{th}$ bits, $1\leq p\leq L_{m_{v_i}}$ and $1\leq p\leq L_{c_{i}}$, within $m_{v_i}$ and $c_i$, respectively. Without loss of generality, any secret data $m'$ is XOR encrypted into data $m$ of the same length $L_{m}=L_{m'}$. \subsubsection{LFSR} Generally, keys generated with an LFSR do not provide strong cryptographic security, whereby an attacker is able to recover the generator polynomial of degree $g$ if it obtains at least $2g$ consecutive key bits generated by the LFSR. To this end, we propose to generate the session key $c_i$ for data anonymization directly in each anonymization node $a_i$, whereby a primitive irreducible polynomial $c^*_i$ of degree $g$ and a seed $Sh_i$ as a start point are randomly selected by the source for each utilized anonymization node $a_i$ and secretly distributed with public key cryptography. The source randomly selects one out of $C_g=\varphi(2^g-1)/g$ primitive polynomials of arbitrary degree $g$, $g_{min}\leq g\leq g_{max}$, where $\varphi(x)=x(1-1/p_1)(1-1/p_2)...(1-1/p_k)$ is the Euler totient function and $p_1,...,p_k$ are the distinct prime factors of $x$.
The minimal degree $g_{min}$ is defined so that the maximal key length generated by the LFSR is larger than the data $m\rq{}$ encrypted by this key, i.e., $L_{m\rq{}}<2^{g_{min}}-1$. \par Due to the public key cryptography used to distribute the control messages during the setup process of the optical circuit, all this data is assumed to be perfectly secret. In other words, each control message $m_c$ of length $L_c$ encrypted with a public key $K^+_i$ of node $v_i$ cannot be recovered by an attacker, unless the control message $m_c$ out of all $2^{L_c}$ possible messages is guessed, i.e., $H(m_c)=L_c=H(m_c|K^+_i(m_c))\leq H(K^-_i|K^+_i(m_c))=H(K^-_i)$. As a result, the routing information for circuit provisioning, the bit sequences $\vv c^*$, i.e., the randomly selected primitive polynomials for session key generation, and the randomly selected seeds $\vv Sh$ are perfectly secret and cannot be recovered by an external attacker. \par Since the data from source to destination is anonymized in each anonymization node along the optical tunnel, an attacker can only discover $s$ and $d$ by accessing all $\eta+1$ wavelength segments between the anonymization nodes, as well as all incoming links of $s$ and outgoing links of $d$, to ensure that they are not the forwarding nodes of the attacked optical data. \begin{lem}\label{unlink} The proposed OOR ensures privacy and secrecy between any $s-d$ pair against an arbitrary external attacker, if \begin{equation}\label{hlen} \sum_{g=g_{min}}^{g_{max}}\log_2\left( \tfrac{\varphi(2^{g}-1)(2^{g}-1)}{g}\right)\geq L_{m}\leq 2^{g_{min}}-1. \end{equation} \end{lem} \begin{proof} Let us assume that the attacker has access to all links along the path. In this case, the attacker needs to deanonymize the optical data sent by each node $a_{i}$ to the next anonymization node $a_{i+1}$. Due to the fact that the polynomial $c^*_i$ of length $g+1$ bits, $g_{min}\leq g\leq g_{max}$, and the seed $Sh_i$ are chosen randomly and transmitted perfectly securely, the entropy of the anonymization key can be defined as $H_1(c_i)=\sum_{g=g_{min}}^{g_{max}}\log_2(C_g(2^{g}-1))$, while the source randomly selects one out of the $C_g$ existing primitive irreducible polynomials of degree $g$ and a seed out of the $2^g-1$ possible (nonzero) seeds for each anonymization node. On the other hand, the secret key $c_i$ can be an arbitrary bit sequence out of $2^{L_{m}}$ possible, i.e., $H_2(c_i)=L_{m}$ bits. An attacker can follow the algorithm for the generation of $c_i$ and thus guess $c^*_i$ and $Sh_i$, or directly guess $c_i$ of length $L_{m}$. In the first case, the equivocation is defined as $H(c_i|K^+_i(m_c))=H(c_i|m)=\sum_{g=g_{min}}^{g_{max}}\log_2(C_g(2^{g}-1))=H_1(c_i)$, while, in the second case, $H(c_i|K^+_i(m_c))=H(c_i|m)=L_{m}=H_2(c_i)$. For an attacker, it is simpler to guess the polynomial and seed if $\sum_{g=g_{min}}^{g_{max}} C_g(2^{g}-1)<2^ {L_{m}}$. Thus, $H_1(c_i)$ must be equal to or larger than $H_2(c_i)$ for perfect secrecy. Since there are $C_g=\varphi(2^g-1)/g$ primitive polynomials of degree $g$, the condition for perfect secrecy provided by the anonymization key can be defined by Eq.~\eqref{hlen}, i.e., an attacker will not be able to deanonymize the data and to link (trace back) the nodes $s$ and $d$. \end{proof} \section{Performance evaluation} \par We now show theoretical results for the proposed private and anonymous OOR network and validate them by simulations. The analytical results were calculated with Eq.~\eqref{NumWPathsA} as well as with Eqs.~\eqref{Hm} and \eqref{HmA}.
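For reference, eqs.~\eqref{Hm} and \eqref{HmA} can be evaluated exactly with integer arithmetic; the following Python sketch does so for an illustrative payload length (not one of the values used in the figures):
\begin{verbatim}
from math import comb, factorial, log2

def H_e(L, eta):
    # Entropy of the encrypted data, eq. (Hm).
    return log2(factorial(eta + 1) * comb(2**L - 2, eta + 1) * 2**L)

def H_cond(L, eta_max):
    # Equivocation H(m'|m), eq. (HmA): the attacker must try every
    # possible number of remaining encryption layers.
    return sum(H_e(L, eta_max - i) for i in range(eta_max + 1))

L = 32                           # illustrative payload length in bits
for eta_max in (2, 9):
    print(eta_max, H_cond(L, eta_max) / L)  # normalized by H(m') = L
\end{verbatim}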
Since our model directly depends on the steady-state wavelength path availability and random path selection, we validate the analysis by using dynamic Monte Carlo simulations with $95\%$ confidence. \par We analyze a modified optical network topology with 24 nodes and 35 fiber links, each fiber link carrying $10$ wavelengths $\lambda$; each wavelength has a capacity of 10\,Gb/s. The link directions and the available number of wavelengths on each fiber link are defined as $\{1-2,1-3,2-6,2-3,3-4,3-7,4-5,4-10,10-11,11-5,6-16, 6-7,16-17,7-17,7-8,17-18,18-8,18-22,8-9,9-4,9-12,9-19,22-19,22-23,19-20,23-24, 23-20,20-12,20-21,24-21,12-10,12-13,21-15,15-13,13-11\}$ and $\{6,6,3,3,4,5,4,3,4,8,2,1,2,3,3,5,2,3,5,3,1,1,1,2,2,1,1,2,1,1,1,2,2,2,4\}$, respectively. Let us consider source node 1 and destination node 5. Here, there are in total $\mathcal N = 12$ different possible wavelength paths over all available wavelength links. All paths are sorted in ascending order of length in number of hops, and collected in $\vv P$. The path availability for each $\lambda$ decreases with increasing path length, i.e., $\vv P = \{0.9,0.85,0.8,0.75,0.75,0.7,0.65,0.6,0.55,0.55,0.5,0.5\}$. Before transmission, random wavelength paths between anonymization nodes are established by utilizing the available wavelengths. Every node in the network can be used as an anonymization node, and the number of anonymization nodes per path is determined randomly. \par Fig.~\ref{AnonNodes} shows the normalized equivocation $H(m'|m)$ as a function of the number of anonymization nodes $\eta$ used on a path and of the maximal number of anonymization nodes $\eta_{max}$. An increase in $\eta_{max}$ increases the system robustness against wiretapping (dashed line), while an attacker has to recover more redundant information when $\eta<\eta_{max}$ anonymization nodes are utilized. For instance, an attacker must recover $35 H_e(m')$ bits to guess the secret data $m'$ in the case of $\eta=0$ and $\eta_{max}=9$, while an increase in $\eta$ increases the entropy $H_e(m')$ as per Eq.~\eqref{Hm} and decreases the redundant information in the encrypted data $m$ down to $6 H_e(m')$ for $\eta=\eta_{max}=9$. \begin{figure} [t] \centerline{\includegraphics[width=0.9\columnwidth]{NumFNodesVsEntropy}}\vspace{-0.4cm} \caption{\small Normalized equivocation vs. number of anonymization nodes.}\vspace{-0.4cm} \label{AnonNodes} \end{figure} \begin{figure} [t] \centerline{\includegraphics[width=0.9\columnwidth]{HmYDynamic}}\vspace{-0.2cm} \caption{\small $H(m\rq{}|m)$ and $P^{\phi}_w$ vs. probability of an attacked link, $\phi$.} \label{PLinksWData}\vspace{-0.5cm} \end{figure} \begin{figure} [t] \centerline{\includegraphics[width=0.9\columnwidth]{HmYStatic}}\vspace{-0.2cm} \caption{\small $H(m\rq{}|m)$ and $P_w$ vs. number of wiretapped fiber links, i.e., $\mathbf w$.} \label{WLinksWData}\vspace{-0.6cm} \end{figure} \par Next, we assume $\eta_{max}=2$ and evaluate the equivocation $H(m'|m)$ and the probabilities $P_w$ and $P^{\phi}_w$ of successful eavesdropping and correctly recovered data $m\rq{}$. Fig.~\ref{PLinksWData} shows the normalized mean equivocation $H(m'|m)$ and the probability of a wiretapped transmission path $P^{\phi}_w$, when any link in the network can be eavesdropped with probability $\phi$. The equivocation redundancy and the probability of an eavesdropped transmission path $P^{\phi}_w$ increase with $\phi$.
As a result, an attacker can wiretap almost all paths when the probability of a wiretapped link, $\phi$, is $50\%$, while the equivocation redundancy amounts to ${\sim}9H(m')$ when an attacker tries to decrypt. Next, we consider a special case whereby a maximum of $4$ fiber links in the network can be wiretapped either simultaneously or individually, $e_i\in\{ 3-7, 8-9, 17-18, 13-11\}$. Fig.~\ref{WLinksWData} shows the normalized mean equivocation $H(m'|m)$ and the probability of a wiretapped transmission path $P_w$ as a function of the number of fiber links wiretapped at the same time, i.e., $\mathbf w$. An increase in $\mathbf w$ increases the probability $P_w$ and, thus, the amount of redundant information required to be recovered by an attacker who follows the algorithm to guess $m'$ from the eavesdropped optical data $m$. Here, the equivocation increases from around $2.7 H(m')$ to $4 H(m')$ bits with an increasing number of wiretapped links, i.e., for $\mathbf w=1$ and $\mathbf w=4$, respectively, while the mean amount of wiretapped data, i.e., $P_w$, also increases. \section{Conclusion} We proposed an Optical Onion Routing (OOR) architecture, an optical mirror of Tor. We designed the network and a new optical anonymization node architecture, including the optical components (XOR) and their electronic counterparts (LFSR) to realize layered encryption. We proved formally and confirmed numerically that such an optical onion network can be perfectly private and secure. The paper aimed at providing practical foundations for privacy-enhancing optical network technologies, and as such presents work in progress. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} In video surveillance and automotive systems, fisheye cameras~\cite{miyamoto1964fel} that possess an ultra wide field of view (FOV) of 180 degrees and beyond are often employed. With such an FOV, a single camera is able to capture the entire hemisphere it is facing and thus proves advantageous in surveillance systems~\cite{surveillance}, where tracking~\cite{surveillance2} and detection are important, or in automotive systems to aid the driver~\cite{auto1, auto2, gehrig}. In contrast to pinhole cameras, fisheye cameras produce images with strong radial distortions that cause straight lines in the scene to appear curved in the captured images. While a single fisheye camera already has its advantages, using more cameras offers even more possibilities for detection~\cite{fisheyerobot, pedestrian, fisheyestereo2} and other applications. Surround views can be synthesized from multiple fisheye cameras~\cite{liu2008birdseyeview, surroundview}, for instance. Fisheye stereo is commonly dealt with in the field of computer vision, where abundant literature exists on the topic, e.\,g.,~\cite{gehrig2, fisheyestereo, trinocular}, to name only a few publications. A common task in the context of stereo camera setups is the extraction of depth by estimating the disparity between the two available views. When conventional pinhole cameras are replaced with fisheye cameras, new problems arise during disparity estimation, as the image characteristics differ significantly due to the non-perspective projection function. Fig.~\ref{fig:motivation} shows two views, each obtained from a fisheye camera. The most obvious observation is that the offset between the two views is no longer purely horizontal. While the displacement of the cameras is indeed purely horizontal, the same cannot be said for the image content. The radially distorted objects also change shape as well as vertical position between views, which complicates stereo matching and depth map generation. \begin{figure}[t] \centering \psfrag{a1}[cr][tc]{{\color{brownish}Left view}} \psfrag{a2}[cl][tc]{{\color{brownish}Right view}} \psfrag{a3}[cc][tc]{{\color{brownish}Intermediate view}} \centerline{\includegraphics[width=0.95\columnwidth]{figures/motivation}} \vspace{-0.1cm} \caption{Fisheye views as obtained by a stereo camera setup. Using disparity estimation, arbitrary intermediate views can be synthesized.} \label{fig:motivation} \vspace{-0.2cm} \end{figure} In this paper, we investigate block matching for fisheye disparity estimation and adapt it so as to take into account the projection function of the fisheye lens. The proposed disparity estimation builds upon our motion estimation method for fisheye video sequences~\cite{eichenseer2015motionfish, eichenseer2016motioncalib}, where we could show that incorporating knowledge about the fisheye properties considerably improves the estimation results. We demonstrate that our adapted disparity estimation method is able to produce more accurate disparity maps compared to the conventional approach by synthesizing intermediate views, for which improved visual results are achieved. An objective evaluation against ground truth data substantiates the visual examples. A similar approach was introduced in~\cite{vision1}, where the plane-sweeping algorithm was adapted for fisheye stereo. In contrast to that, we focus on a simple, versatile approach that finds use in many signal processing and video coding applications.
The remainder of this paper is structured as follows. Section~\ref{sec:disparity} briefly describes disparity estimation based on block matching and how we employ it. In Section~\ref{sec:fisheye}, we introduce the proposed disparity estimation via fisheye-adapted block matching. Section~\ref{sec:simulation} provides the simulation setup and results, while Section~\ref{sec:conclusion} concludes the paper. \section{Disparity Estimation via Horizontal Block Matching} \label{sec:disparity} For setups with conventional cameras that more or less follow the pinhole model, a straightforward approach to estimate the disparity between two views is block matching, which also finds wide use in motion estimation and temporal prediction applications as well as in hybrid video coding. Assuming rectified views, the block matching can even be restricted to one dimension by making use of epipolar geometry~\cite{hartley2003mvgeo} (e.\,g., the horizontal dimension in case of left and right views) and to one direction (from left to right, for instance). This results in much fewer candidate blocks to be tested when compared to traditional two-dimensional block matching. Given two views of a scene, typically a left view $I_\text{left}(m,n)$ and a right view $I_\text{right}(m,n)$, with $m$ and $n$ being the vertical and horizontal spatial coordinates, respectively, the disparity between the two images can be used to estimate one view from the other: \begin{equation} \tilde I_\text{right}(m,n) = I_\text{left}(m,n+d)\: , \end{equation} where $\tilde I_\text{right}(m,n)$ describes the estimated right view and $d$ corresponds to the disparity map entry at position $(m,n)$. The disparity map itself is denoted as $D(m,n)$ and has the same dimensions as the two views. In this paper, the disparity map is given from the right view to the left view. To determine the disparity for each pixel $(m,n)$, we employ block matching in a pixel-wise manner, i.\,e., for each pixel, a support block is defined around the pixel, which is then used for the disparity estimation. We call the margin around one pixel the support width $w$ in this paper. The relation between support width $w$ and block size $b\times b$ is given by $b = 2w+1$. The search range $s$ denotes the maximum offset to be tested during disparity estimation and thus comprises the horizontal offsets in the range of $[0,s]$. Fig.~\ref{fig:sota} provides a visualization. For each candidate block within the search range, the current block to be estimated in the right view is compared with the corresponding block in the reference (left) view, shifted by the horizontal offset candidate. Using the sum of squared differences (SSD) for its simplicity, the error between these two blocks is minimized to find the optimum candidate that describes the disparity between the blocks. The optimum candidate thus obtained is then only stored for the center pixel of the block and corresponds to the final entry $d$ in $D(m,n)$ at position $(m,n)$. The disparity $d$ thus describes the offset between the block to be matched in the right view and the best match found in the left view. Repeating this process for each pixel creates a dense disparity map. 
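\par For illustration, the pixel-wise horizontal block matching described above can be sketched as follows; this is a minimal, unoptimized Python/NumPy sketch whose function and variable names are our own and not taken from any reference implementation:
\begin{verbatim}
import numpy as np

def disparity_map(left, right, w=8, s=256):
    """Dense disparity from the right to the left view via SSD."""
    H, W = right.shape
    right_p = np.pad(right, w, mode='edge')
    # Pad the reference view so that every candidate offset in
    # [0, s] stays inside the array.
    left_p = np.pad(left, ((w, w), (w, w + s)), mode='edge')
    D = np.zeros((H, W), dtype=np.int32)
    b = 2 * w + 1                     # block size b x b
    for m in range(H):
        for n in range(W):
            block = right_p[m:m + b, n:n + b]
            best_ssd, best_d = np.inf, 0
            for d in range(s + 1):    # horizontal candidates only
                cand = left_p[m:m + b, n + d:n + d + b]
                ssd = np.sum((block - cand) ** 2)
                if ssd < best_ssd:
                    best_ssd, best_d = ssd, d
            D[m, n] = best_d          # stored for the center pixel
    return D
\end{verbatim}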
\begin{figure}[t] \vspace{0.1cm} \centering \psfrag{a1}[tc][tc]{{\color{brownish}Left view}} \psfrag{a2}[tc][tc]{{\color{brownish}Right view}} \psfrag{a3}[tc][tc]{$s$} \psfrag{a4}[tc][tc]{$w$} \psfrag{a5}[tc][tc]{$b$} \psfrag{a6}[bc][bc]{$d$} \psfrag{a7}[bc][bc]{$n$} \psfrag{a8}[br][br]{$m$} \centerline{\includegraphics[width=0.89\columnwidth]{figures/sota}} \vspace{-0.2cm} \caption{Horizontal disparity estimation using a support block of size $b\times b$ to determine the disparity $d$ for the red pixel at position $(m,n)$. The area covered by the candidate blocks to be tested is shown by the dashed lines.} \label{fig:sota} \vspace{-0.3cm} \end{figure} \subsection*{Intermediate View Synthesis} With the disparity from the right view to the left view available, an arbitrary intermediate view can be generated. In the simplest case, the view that would result from a camera placed exactly in the middle between the two actual cameras is synthesized. The disparity from the right view to the intermediate view is then exactly half the previously estimated disparity and the intermediate view can thus be calculated by a pixel-wise horizontal shift of the right view: \begin{equation} I_\text{intermediate}\left(m,n+\frac{1}{2}d\right) = I_\text{right}(m,n)\: , \end{equation} followed by a hole-filling algorithm. Please note that this simple approach can be easily extended towards using both views for the calculation of the intermediate view. To that end, a second disparity map describing the pixel-wise offsets from the left view to the right view would also be necessary. This is not considered in the scope of this paper, however. \vspace{0.2cm} \section{Disparity Estimation via Fisheye-Adapted Block Matching} \label{sec:fisheye} \vspace{0.2cm} Disparity estimation via block matching works well for perspective images. For fisheye images, however, the horizontal offset is not sufficient to describe the disparity (cf. Fig.~\ref{fig:motivation}). Storing the vertical component as well would result in two disparity maps or a pixel-wise motion vector field containing both horizontal and vertical offsets. To retain the representation of the disparity by means of a single disparity map so as to also save bits for storage and transmission, and, more importantly, to generate more accurate disparity maps, we therefore propose a novel adaptation to the previously described disparity estimation that takes into consideration the fisheye projection function. We again assume that the two views are rectified, i.\,e., in the perspective representations of the two views, two corresponding points have identical vertical coordinates and the optical axes of the two cameras are parallel. For our proposed method, we build upon the overlapping block-matching method described in Section~\ref{sec:disparity}. The one-dimensional block search is not conducted directly in the fisheye image, however, but on projected pixel coordinates. The adaptation described in the following is a novel variant of the fisheye motion estimation method described in~\cite{eichenseer2015motionfish}, which no longer requires a costly hybridization. With the image center serving as the origin, the pixel positions of the fisheye image are represented in polar coordinates $(r_\text{f},\phi_\text{f})$.
The fisheye projection function is described by a function $p(.)$ that relates the incident angle of light $\theta$ (measured against the optical axis) to the resulting position on the image plane $r_\text{f}$ (measured against the image center): $r_\text{f} = p(\theta)$. This position $r_\text{f}$ corresponds to the radius of the polar representation. A common fisheye model, which we make use of here, is the equisolid angle fisheye projection function given by: \begin{equation} \label{eq:equisolid} r_\text{f} = p(\theta) = 2f\sin(\theta /2)\: , \end{equation} where $f$ denotes the focal length of the fisheye lens. Solving~(\ref{eq:equisolid}) for $\theta$ and putting it into the pinhole model $r_\text{p} = f\tan(\theta)$ then gives the projection to perspective coordinates: \begin{equation} \label{eq:bw} r_\text{p} = f\tan\left(2 \arcsin\left(\frac{r_\text{f}}{2f}\right)\right)\quad \text{and}\quad\: \phi_\text{p} = \phi_\text{f} \: . \end{equation} Note that this operation is solely based on pixel positions and as such, only the coordinates are manipulated. This means that no actual distortion correction is performed, which would require interpolation of the luminance values and also result in a significant loss of image content. This is true for all steps of the adaptation so that all information available in the fisheye views is retained. In the next step, the obtained perspective polar coordinates $(r_\text{p},\phi_\text{p})$ are transformed to Cartesian coordinates $(m_\text{p},n_\text{p})$. A shift by the horizontal offset candidate $d_i$ (defined within the search range $s$) yields the coordinates $(m_\text{p},n_\text{p}+d_i)$, which are transformed back to polar coordinates $(r_{\text{p},d_i},\phi_{\text{p},d_i})$. Re-projecting these shifted coordinates back into the fisheye domain by employing: \begin{equation} \label{eq:fw} r_{\text{f},d_i} = 2f\sin\left(\frac{1}{2}\arctan\left(\frac{r_{\text{p},d_i}}{f}\right)\right)\:\: \text{and}\:\:\: \phi_{\text{f},d_i} = \phi_{\text{p},d_i}\: , \end{equation} and transforming them to a Cartesian representation then yields the coordinates with which to extract the luminance values to be compared against the reference block. For more details on the luminance value extraction from the reference image, the interested reader is referred to~\cite{eichenseer2015motionfish}. Using the SSD to minimize the error between the estimated block and the reference block finally yields the optimum candidate which is stored as the disparity map entry $d$ for pixel $(m,n)$. This procedure is repeated for each pixel of the fisheye image. Please note that the re-projection also only manipulates the pixel coordinates. There are no actual distortion corrected images involved, as this would result in either images of infinite dimensions or a significant loss in terms of FOV. In~\cite{eichenseer2016motioncalib}, a compensation for ultra wide angles ($\theta > \pi/2$) is described. This strategy is also included here; for details, please refer to the original publication. By employing our proposed disparity estimation method, it is possible to get more accurate estimation results while still only having to store the horizontal displacement in the disparity map. In contrast to our previous work, the proposed adaptation for disparity estimation is less costly as it requires no hybridization and is further limited to a one-dimensional search without causing a loss in quality. 
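\par As an illustration of (\ref{eq:bw}) and (\ref{eq:fw}), the two coordinate projections can be sketched as follows; this is a minimal Python/NumPy sketch in which only pixel coordinates are manipulated, and the compensation for ultra wide angles ($\theta > \pi/2$) from~\cite{eichenseer2016motioncalib} is omitted:
\begin{verbatim}
import numpy as np

def fisheye_to_perspective(m_f, n_f, f):
    # Equisolid fisheye radius to perspective radius,
    # r_p = f * tan( 2 * arcsin( r_f / (2 f) ) );
    # coordinates are taken relative to the image center.
    r_f = np.hypot(m_f, n_f)
    phi = np.arctan2(m_f, n_f)        # the angle is preserved
    theta = 2.0 * np.arcsin(r_f / (2.0 * f))
    r_p = f * np.tan(theta)
    return r_p * np.sin(phi), r_p * np.cos(phi)

def perspective_to_fisheye(m_p, n_p, f):
    # Re-projection into the fisheye domain,
    # r_f = 2 f * sin( 0.5 * arctan( r_p / f ) ).
    r_p = np.hypot(m_p, n_p)
    phi = np.arctan2(m_p, n_p)
    r_f = 2.0 * f * np.sin(0.5 * np.arctan(r_p / f))
    return r_f * np.sin(phi), r_f * np.cos(phi)
\end{verbatim}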
In the following, we describe how the obtained disparity map can be used for synthesizing intermediate fisheye views. \begin{figure}[t] \small \vspace{0.1cm} \centering \psfrag{a1}[cc][tc]{{\color{brownish}Intermediate view}} \psfrag{a2}[cc][tc]{{\color{brownish}Right view}} \psfrag{a5}[cc][tc]{{\color{brownish}Left view}} \psfrag{a3}[tc][tc]{$(m,n)$} \psfrag{a4}[tc][tc]{$(m_\frac{d}{2},n_\frac{d}{2})$} \psfrag{a7}[bc][bc]{$n$} \psfrag{a8}[br][br]{$m$} \centerline{\includegraphics[width=\columnwidth]{figures/proposed}} \vspace{-0.2cm} \caption{Intermediate fisheye view synthesis. The shift by $d/2$ is calculated in the perspective domain and translates to a two-dimensional offset (orange) in the fisheye domain.} \vspace{-0.5cm} \label{fig:proposed} \end{figure} \subsection*{Intermediate Fisheye View Synthesis} To save storage space, it is sufficient to store the perspective horizontal offset between the left fisheye view and the right fisheye view for each pixel. As the actual offset is expressed in both the horizontal and the vertical dimension, the projections (\ref{eq:bw}) and (\ref{eq:fw}) between the fisheye and the perspective domain must again be employed so as to re-obtain the actual two-dimensional fisheye offset. In order to generate an intermediate fisheye view, the dense disparity map previously calculated can be used as follows. For each fisheye pixel position $(m,n)$, the polar coordinates $(r_\text{f},\phi_\text{f})$ are obtained and projected to the perspective domain according to (\ref{eq:bw}), where we can then shift the perspective pixel coordinates $(m_\text{p},n_\text{p})$ by half the disparity, i.\,e., $d/2$. Just like before, $d$ describes the disparity map entry for position $(m,n)$. After the re-projection of the shifted perspective coordinates to the fisheye domain according to (\ref{eq:fw}), we obtain the polar fisheye coordinates $(r_{\text{f},d/2},\phi_{\text{f},d/2})$, which are further transformed into Cartesian coordinates $(m_{d/2},n_{d/2})$. That way, the actual target positions for warping the luminance values are obtained and can be used for the view synthesis: \begin{equation} I_\text{intermediate}(m_\frac{d}{2},n_\frac{d}{2}) = I_\text{right}(m,n)\: . \end{equation} Fig.~\ref{fig:proposed} provides a visualization that illustrates how an object changes shape and position between views and how a horizontal offset between perspective views translates to a two-dimensional offset between fisheye views. An equivalent representation of the perspective disparity is given in blue. In the following, we evaluate our proposed disparity estimation method in the context of intermediate view synthesis for synthetically generated and real-world fisheye images.
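\par Before turning to the evaluation, the warping step for the intermediate fisheye view synthesis can be sketched as follows; this simplified Python/NumPy sketch rounds the target positions instead of applying a hole-filling interpolation, reuses the two projection functions sketched earlier, and all names are illustrative:
\begin{verbatim}
import numpy as np

def synthesize_intermediate(right, D, f):
    # Shift by half the disparity in the perspective domain and
    # re-project; at duplicate targets the last-written value wins.
    H, W = right.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    m, n = np.mgrid[0:H, 0:W]
    m_p, n_p = fisheye_to_perspective(m - cy, n - cx, f)
    m_f, n_f = perspective_to_fisheye(m_p, n_p + D / 2.0, f)
    mt = np.clip(np.round(m_f + cy).astype(int), 0, H - 1)
    nt = np.clip(np.round(n_f + cx).astype(int), 0, W - 1)
    out = np.zeros_like(right)
    out[mt, nt] = right
    return out
\end{verbatim}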
\section{Simulation Setup and Results} \label{sec:simulation} \begin{table}[h] \small \caption{Test sequences and frames used.} \label{tab:setup} \vspace{-0.2cm} \centering \renewcommand\arraystretch{0.9} \begin{tabularx}{\columnwidth}{p{0.35cm}p{1.5cm}cc} \toprule &\textbf{Sequence} & \textbf{Frame numbers} & \textbf{Frame offsets to right}\\ & & \textbf{of left views} & \textbf{views (base lines)}\\ \midrule \multirow{6}{2cm}{\rotatebox[]{90}{Synthetic}}&\textit{Clips} & 10--30 & 2, 4, 6 \\ &\textit{Pencils} & 20--40 & 2, 4, 6 \\ &\textit{Street} & 400--420 & -2, -4, -6 \\ &\textit{PoolA} & 30--50 & 2, 4, 6 \\ &\textit{LivingroomC} & 20--40 & 10, 20, 30 \\ &\textit{HallwayD} & 40--60 & 2, 10, 20 \\ \addlinespace \multirow{6}{1cm}{\rotatebox[]{90}{Real-World}}&\textit{LibraryB} & 100--120 & 2, 10, 20\\ &\textit{ClutterA} & 100--120 & 40, 70, 100 \\ &\textit{ClutterB} & 100--120 & 40, 70, 100 \\ &\textit{LectureB} & 100--120 & -10, -20, -30 \\ &\textit{DriveD} & 400--420 & -2, -6, -10 \\ &\textit{DriveE} & 130--150 & -2, -4, -6 \\ \bottomrule \end{tabularx} \vspace{-0.5cm} \end{table} \begin{table*}[p] \small \caption{Intermediate view synthesis results (luminance PSNR in dB) averaged over the number of views that were generated per sequence. Results are given for three frame offsets (cf. Table~\ref{tab:setup}), denoted as small, medium, and large base line.} \label{tab:psnr} \vspace{-0.15cm} \centering \renewcommand\arraystretch{0.95} \begin{tabularx}{\textwidth}{p{1.28cm}cc|ccc|ccc|ccc} \toprule & & & \multicolumn{3}{c}{Small Base Line} & \multicolumn{3}{c}{Medium Base Line} & \multicolumn{3}{c}{Large Base Line}\\ \textbf{Sequence} & \hspace{-0.05cm}\textbf{\#Views}\hspace{-0.15cm} & $w$/$s$ & \textbf{BM-IVS} & \textbf{Fisheye-IVS} & $\Delta$ & \textbf{BM-IVS} & \textbf{Fisheye-IVS} & $\Delta$ & \textbf{BM-IVS} & \textbf{Fisheye-IVS} & $\Delta$ \\ \midrule \textit{Clips} & 21 & 8/256 & 20.22 & 28.93 & 8.71 & 15.28 & 22.44 & 7.16 & 13.65 & 19.89 & 6.24 \\ \textit{Pencils} & 21 & 8/256 & 34.64 & 37.25 & 2.61 & 29.47 & 31.34 & 1.87 & 26.73 & 28.05 & 1.32 \\ \textit{Street} & 21 & 8/256 & 21.57 & 26.28 & 4.71 & 20.39 & 23.33 & 2.94 & 19.35 & 21.78 & 2.42 \\ \textit{PoolA} & 21 & 8/256 & 34.83 & 36.74 & 1.91 & 30.51 & 32.00 & 1.49 & 28.48 & 29.33 & 0.86 \\ \textit{LivingroomC} & 21 & 8/256 & 32.71 & 37.49 & 4.78 & 27.73 & 32.99 & 5.26 & 25.36 & 29.85 & 4.49 \\ \textit{HallwayD} & 21 & 8/256 & 34.45 & 36.76 & 2.31 & 26.19 & 31.85 & 5.67 & 23.11 & 28.83 & 5.73 \\ \addlinespace \textit{LibraryB} & 21 & 8/256 & 38.14 & 39.19 & 1.05 & 28.35 & 31.78 & 3.42 & 23.63 & 26.94 & 3.30\\ \textit{ClutterA} & 21 & 8/256 & 32.96 & 33.71 & 0.75 & 31.40 & 33.36 & 1.97 & 27.12 & 29.17 & 2.05\\ \textit{ClutterB} & 21 & 8/256 & 31.68 & 32.53 & 0.85 & 28.75 & 30.01 & 1.26 & 25.52 & 28.29 & 2.77\\ \textit{LectureB} & 21 & 8/256 & 27.90 & 29.96 & 2.06 & 24.38 & 27.80 & 3.42 & 22.55 & 26.09 & 3.54\\ \textit{DriveD} & 21 & 8/256 & 31.52 & 32.94 & 1.43 & 27.53 & 29.33 & 1.79 & 25.35 & 27.23 & 1.89\\ \textit{DriveE} & 21 & 8/256 & 27.62 & 29.14 & 1.52 & 24.33 & 26.71 & 2.38 & 22.27 & 23.26 & 0.99\\ \bottomrule \end{tabularx} \end{table*} To evaluate our proposed disparity estimation and view synthesis methods, six synthetically generated as well as six actually captured real-world sequences with an FOV of 185 degrees are used. All sequences are part of the publicly available fisheye data set introduced in \cite{eichenseer2016dataset}. 
Since this data set was captured with a single camera, we simulate a stereo setup by including only sequences with a purely horizontal camera motion in the test set. As the camera motion is uniform in the considered synthetic sequences, left and right views can be selected and ground truth intermediate views are immediately available for evaluation purposes. For the real-world sequences, the camera motion may have been less stable, thus providing slightly less accurate ground truth images, but we will still show that the proposed method outperforms the reference. Table~\ref{tab:setup} summarizes the sequences that are part of the test set. The top six sequences are synthetic, the bottom six were captured with a fisheye camera, details on which can be found in~\cite{eichenseer2016dataset}. The frame number is given for each left view that was included in the test. The right view is obtained by adding the given frame offset to the frame number of the left view. Please note that a positive frame offset is given for all sequences that exhibit a left-to-right camera motion. In contrast, a negative frame offset is given for those sequences that feature a camera motion from right to left. For each sequence, 21 view pairs are tested. Three frame offsets are used per pair, simulating a small, medium, and large base line between the two cameras. In accordance with \cite{eichenseer2015motionfish} and \cite{eichenseer2016motioncalib}, we use SSD as a dissimilarity metric; if desired, SSD can easily be substituted by another metric. For the intermediate view synthesis, we synthesize the exact middle frame between the left view and the right view. We use Delaunay triangulation followed by cubic interpolation to obtain integer target positions both in the conventional block matching method and in the proposed approach. This is mainly to avoid disadvantageous rounding operations as $(m_{d/2},n_{d/2})$ may take non-integer values, but it simultaneously serves as the hole-filling algorithm. In this work, we do not yet handle occlusions and disocclusions in either the reference or the proposed method. If a certain integer target position is written to more than once, the last warped value is kept. The conventional intermediate view synthesis based on disparity estimation via block matching is referred to as BM-IVS. Our proposed method is called Fisheye-IVS. For both methods, we compared the synthesized intermediate views against their ground truth views and used the luminance PSNR to objectively evaluate the quality of the result. The PSNR was computed excluding the black peripheral areas beyond the 185 degree border, i.\,e., those sensor regions not hit by light. Table~\ref{tab:psnr} summarizes the PSNR results for the entire test set consisting of the twelve sequences and three base line settings. Blocks of $17\times 17$ pixels were used, so that a support margin of $w = 8$ pixels was defined around each pixel position. The one-dimensional search range was set to $s=256$, so that horizontal candidate offsets in the range $[0, 256]$ were tested for each block. Evidently, the proposed Fisheye-IVS outperforms BM-IVS in all instances and achieves significant gains without adding a second dimension to the disparity information. PSNR values decrease for larger base lines, as the search range may be too short to accurately capture the disparity and occlusions are more pronounced for the wider spacings.
Sequences with a fast camera motion (particularly \textit{DriveD} and \textit{DriveE} which were captured from a driving car) automatically result in rather large base lines. To substantiate the objective results, Fig.~\ref{fig:visualres} provides representative visual examples for the large base line settings of \textit{HallwayD} and \textit{LibraryB}. For a real stereo setup that uses two distinct cameras, additional factors such as diverging internal camera settings and brightness inconsistencies may also have to be accounted for when performing an intermediate view synthesis. This does not directly or exclusively affect the fisheye adaptation, however, so that results similar to those shown in this paper may be expected. \begin{figure*}[p] \centering \psfrag{a1}[cB][cB]{\textbf{Left view}} \psfrag{a2}[cB][cB]{\textbf{Right view}} \psfrag{a3}[cB][cB]{\textbf{Intermediate view}} \psfrag{a4}[cB][cB]{\textbf{BM-IVS}} \psfrag{a5}[cB][cB]{\textbf{Fisheye-IVS}} \centerline{\includegraphics[width=\textwidth]{figures/visualresults5}} \caption{Visual examples for the intermediate views as obtained by BM-IVS and Fisheye-IVS. First row: \textit{HallwayD}, frames 50 (left view), 70 (right view), and 60 (ground truth intermediate view). Third row: \textit{LibraryB}, frames 100 (left view), 120 (right view), and 110 (ground truth intermediate view). Second and fourth row: detail examples of the images above.} \label{fig:visualres} \end{figure*} \section{Conclusion} \label{sec:conclusion} In this paper, we analyzed the effects of a conventional disparity estimation method based on horizontal block matching on fisheye images. As the inherent radial distortion of fisheye images causes a significant deterioration of the estimation results, we proposed a novel adaptation that exploits knowledge about the fisheye projection function. Using a fisheye-to-perspective projection, the pixel coordinates are transformed to their perspective representation, for which a horizontal search is able to find better matches and thus create a more accurate disparity map. In doing so, no actual distortion correction of the fisheye images had to be performed and neither was it necessary to provide two-dimensional disparity information as would otherwise be needed for fisheye views. In contrast to our previous work, both hybridization and searching in two dimensions were avoided. Intermediate view synthesis was employed on both synthetic and real-world fisheye images to show the benefits of our adapted approach. Significant luminance PSNR gains were achieved for the entire test set and further reflected by means of visual examples. Our proposed disparity estimation method can be easily adapted to any kind of distortion or deviation from the pinhole model, provided the projection function is known; furthermore, other dissimilarity measures may be easily incorporated. Current research includes analyses of view synthesis based on more than one reference view, the inclusion of calibration information to improve the results for the real-world sequences, occlusion handling, and adapting our scheme towards more sophisticated approaches. Further points of interest include depth information in the context of fisheye data as well as multi-view coding and frame rate up-conversion for fisheye sequences. \section*{Acknowledgment} This work was supported by the Research Training Group 1773 “Heterogeneous Image Systems”, funded by the German Research Foundation (DFG). \bibliographystyle{IEEEtran}
\section{Introduction} \IEEEPARstart{I}{mage} classifiers using deep neural networks demonstrate a high level of performance \cite{hu2018squeeze,simonyan2014very}, but they lack human interpretability; they provide accurate classification results but the reasoning behind them is not accessible. This black-box property hinders their prediction from convincing a human user. Therefore, making a neural network interpretable is a prerequisite for high-stakes applications. A popular approach for explaining the prediction of an image classifier is to generate an \textit{attribution} map \cite{shrikumar2017learning,sundararajan2017axiomatic}, which is the quantified contribution of each input pixel to the prediction. Overlapping the map with an input image offers an intuitive and easily interpretable visualization that highlights a salient region of the image. To estimate an input feature's importance to the classifier output, it is reasonable to compare the outputs \textit{with} and \textit{without} that feature. If removing feature $x_i$ from input $\mathbf{x}$ results in a significant change in the classifier output, it can be concluded that $x_i$ plays an important role in the classifier prediction $p_{\theta}(Y \,|\, \mathbf{x})$ for a label $Y$. A group of recent explanation methods called \textit{perturbation-based methods}~ \cite{zeiler2014visualizing,zintgraf2017visualizing} takes this approach. First, the methods calculate a prediction for a partially removed input ($p_{\theta}(Y \,|\, \xvec_{\setminus i})$) and compare it with the original prediction $p_{\theta}(Y \,|\, \mathbf{x})$ to quantify the contribution of $x_i$. More formally, a perturbation-based method consists of the following two steps: \begin{enumerate}[leftmargin=1.5cm, label=Step \arabic*:] \item Calculate $p_{\theta}(Y \,|\, \xvec_{\setminus i})$ \item Calculate $m_i=\textrm{Diff} \left (p_{\theta}(Y \,|\, \mathbf{x}), p_{\theta}(Y \,|\, \xvec_{\setminus i}) \right )$, \end{enumerate} where the $\textrm{Diff}(\cdot)$ function compares predictions with and without feature $x_i$, and the specific function varies among methods. The quantified difference between the two predictions becomes $m_i$, which is a contribution of $x_i$. \input{figtex/1_Image_ex.tex} \input{figtex/1_PerturbationMethod.tex} The existing perturbation-based methods~\cite{zeiler2014visualizing,zintgraf2017visualizing} proposed approaches for each step, but their approaches for Step 1 have a major problem in that the estimation is inaccurate. They compute $p_{\theta}(Y \,|\, \xvec_{\setminus i})$ by substituting $x_i$ with other values, but the replaced image is then out-of-distribution (OoD) from the training distribution of a classifier. Previous studies~\cite{hendrycks2016baseline,liang2017enhancing} discovered that the predicted probability of the classifier rapidly drops for OoD images. Consequently, the removal of an uninformative feature can lead to a large change in the classifier output, and the feature would falsely receive high attributions. Inspired by our work, Kim~\textit{et al.}~\cite{kim2020interpretation} critiqued the same problem in the interpretation methods for natural language processing (NLP) models, and their approach has been accepted in the NLP field. To overcome the OoD problem, we perform Step 1 by implementing accurate marginalization by employing a powerful generative model. 
Consequently, the attribution map becomes more \textit{faithful}~\cite{ribeiro2016should} in terms of a quantitative evaluation metric~\cite{petsiuk2018rise}. Following the precise estimation in Step 1 and drawing inspiration from information theory, we propose a more theory-grounded quantification scheme in Step 2. Based on the theoretical background, our method also provides a class-independent saliency analysis, which is unavailable in the existing methods. Through a two-fold improvement of a perturbation-based method, we propose two attribution maps---an information gain \textbf{(IG) map} and a point-wise mutual information \textbf{(PMI) map}---which are class-\textit{independent} and class-\textit{specific} explanations, respectively. Fig.~\ref{fig:example} presents examples. \section{Background} \subsection{Attribution method} Given an input image $\mathbf{x} \in \mathbb{R}^{\rm{H} \times \rm{W} \times \rm{C}}$, the classifier predicts its label $y$ ($\rm{H}$, $\rm{W}$, and $\rm{C}$ are the height, width, and the number of channels in the image). By regarding the unknown label $Y$ as a random variable, the classifier predicts it by calculating a posterior distribution, $p_{\theta}(Y \,|\, \mathbf{x})$. For a specific class $y_c$ (typically the classifier's top prediction), to provide a human-interpretable rationale for the prediction $p_{\theta}(y_c \,|\, \mathbf{x})$, the \textit{attribution method} generates an attribution map, $\mathbf{m} \in \mathbb{R}^{\rm{H} \times \rm{W}}$. An element of $\mathbf{m}$, $m_i$, quantifies the contribution of pixel $x_i$ to the prediction; however, its definition and range vary among the different methods. Starting from the Saliency map~\cite{simonyan2013deep}, backprop-based methods~\cite{bach2015pixel,smilkov2017smoothgrad,springenberg2014striving} including Integrated Gradients~\cite{sundararajan2017axiomatic} and Grad-CAM~\cite{selvaraju2017grad} have been suggested to generate an attribution map using backpropagation. Although they are sufficiently fast to facilitate real-time results, Dabkowski~\textit{et al.}~\cite{dabkowski2017real} argued that their quality is limited and suggested the generation of a mask-like attribution map. Following the work of Dabkowski~\textit{et al.}~\cite{dabkowski2017real}, mask-based methods including FIDO~\cite{chang2018explaining}, Meaningful Perturbation~\cite{fong2017interpretable}, Extremal Perturbation~\cite{fong2019understanding}, and RISE~\cite{petsiuk2018rise} aim to find a mask that covers a relevant region of the target class. They applied an image perturbation with a mask and optimized the mask with their own objective functions. Meanwhile, Shapley-based methods~\cite{lundberg2017unified,frye2020shapley} take a game-theoretic approach to calculate feature importance using the Shapley value~\cite{shapley201617}. More details regarding the backprop-based, mask-based, and Shapley-based methods are provided in the Appendix. \subsection{Perturbation-based method} Since the work by Zeiler \textit{et al.}~\cite{zeiler2014visualizing}, perturbation-based methods define an attribution of an input feature as the difference between the classifier output with and without that feature. They compute $p_{\theta}(Y \,|\, \xvec_{\setminus i})$ for each feature $x_i$ (Step 1) and measure the discrepancy from the original prediction, $p_{\theta}(Y \,|\, \mathbf{x})$ (Step 2). Fig.~\ref{fig:perturbation_methods} depicts the overall flow of the perturbation-based methods.
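\par To make the two steps concrete, the classic occlusion scheme with a heuristic gray reference value can be sketched as follows; this is a minimal PyTorch sketch, where \texttt{model} is assumed to map an image batch to class probabilities, and all names are illustrative:
\begin{verbatim}
import torch

def occlusion_map(model, x, y_c, patch=8, ref=0.5):
    # x: (C, H, W) image; y_c: class index of interest.
    _, H, W = x.shape
    attr = torch.zeros(H, W)
    with torch.no_grad():
        p_full = model(x[None])[0, y_c]        # p(y_c | x)
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                x_occ = x.clone()
                # Step 1 (heuristic): replace the patch by gray.
                x_occ[:, i:i + patch, j:j + patch] = ref
                p_occ = model(x_occ[None])[0, y_c]
                # Step 2: Diff = p(y_c | x) - p(y_c | x w/o patch).
                attr[i:i + patch, j:j + patch] = p_full - p_occ
    return attr
\end{verbatim}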
\input{figtex/1_PerturbationMethodStep1} In Step 1, a direct computation of $p_{\theta}(Y \,|\, \xvec_{\setminus i})$ is infeasible because typical classifiers cannot predict for a partially removed input. Zeiler~\textit{et al.}~\cite{zeiler2014visualizing} approximated the computation by substituting $x_i$ for a reference value $x_{\rm{ref}}$: \begin{equation} \label{eq:reference_value} p_{\theta}(Y \,|\, \xvec_{\setminus i}) \approx p_{\theta}(Y \,|\, \xvec_{\setminus i}, x_{\rm{ref}}), \end{equation} where the authors used a gray value for $x_{\rm{ref}}$, as depicted in Fig.~\ref{figure:perturbation_methods_step1}(a). Although such a heuristic substitution has been widely used \cite{dabkowski2017real,fong2017interpretable,shrikumar2017learning}, selecting the reference value is non-trivial. This is because unlike the MNIST dataset, the background of which is black, many datasets (\textit{e}.\textit{g}.~ImageNet) have no uninformative color; a gray pixel indicates a gray color, not ``no information." In such cases, a replacement with $x_{\rm{ref}}$ introduces a new flaw in that it leads to an OoD problem. In Step 2, the authors used $\textrm{Diff}=p_{\theta}(y_c \,|\, \mathbf{x})-p_{\theta}(y_c \,|\, \xvec_{\setminus i})$, which directly subtracts two probabilities. The authors' choice was somewhat heuristic because a subtraction of two probabilities has no sound theoretical background. An accurate estimation of $p_{\theta}(Y \,|\, \xvec_{\setminus i})$ can be achieved by using marginalization as in (\ref{eq:marginalization}), where multiple values of $\tilde{x}_i$ are sampled from the distribution of $X_i$ (a random variable corresponding to $x_i$). \begin{equation} \label{eq:marginalization} p_{\theta}(Y \,|\, \xvec_{\setminus i}) = \mathbb{E}_{\tilde{x}_i \sim p(X_i \,|\, \xvec_{\setminus i})}\left [p_{\theta}(Y \,|\, \xvec_{\setminus i}, \tilde{x}_i) \right ]. \end{equation} The prediction difference analysis (PDA) \cite{zintgraf2017visualizing} proposed by Zintgraf~\textit{et al.}~follows this approach with the assumption that $x_i$ follows a Gaussian distribution, \textit{i}.\textit{e}., $p(X_i \,|\, \xvec_{\setminus i})=\mathcal{N}(\cdot \,|\, \xvec_{\setminus i})$. The expectation in (\ref{eq:marginalization}) is further approximated using Monte Carlo (MC) sampling with sample number $N=10$, as illustrated in Fig.~\ref{figure:perturbation_methods_step1}(b). However, we show that the Gaussian distribution does not model the image distribution accurately, and thus PDA still suffers from the OoD problem, as in Fig.~\ref{figure:gaussian_ood}; the images substituted by samples from the Gaussian distribution are clearly OoD. Hence, the resulting attribution map cannot distinguish authentic evidence for the classifier. In Step 2, the authors~\cite{zintgraf2017visualizing} used a weight of evidence: $\textrm{Diff}=\textrm{logodds}(p_\theta(y_c \,|\, \mathbf{x}))-\textrm{logodds}(p_\theta(y_c \,|\, \xvec_{\setminus i}))$, where $\textrm{logodds}(p)=\log(p/(1-p))$. By contrast, we utilize point-wise mutual information and information gain, and their definitions and strengths are described in the following section. \input{figtex/2_Gaussian_OoD.tex} \subsection{Information-theoretic analysis} \label{section:information_theory} Given the random variables $X$ and $Y$ with their realizations $x$ and $y$, information theory discloses how they are interconnected.
Specifically, point-wise mutual information (PMI) between two events is a measure of the extent to which one event \{$X=x$\} provokes the other \{$Y=y$\}: $\textup{PMI}(x;y)=\log(p(y \,|\, x)/p(y))$. PMI has a positive or negative value when one event triggers or suppresses another event, respectively. Therefore, by investigating the sign and magnitude of PMI, we can infer if an observation of \{$X=x$\} is evidence for (positive) or against (negative) event \{$Y=y$\}. We now consider an event \{$X=x$\} and a random variable $Y$. If the observation of $x$ brings about a considerable change in the probability distribution of $Y$, the observation is considered informative. Therefore, we quantify the information as a Kullback--Leibler (KL) divergence between the posterior and prior distributions of $Y$ and refer to it as information gain (IG): $\textup{IG}(Y, x) = \textup{D}_\textup{KL}\left(p(Y \,|\, x) \parallel p(Y) \right)$. IG is minimized at zero when observing $x$ provides no information regarding $Y$, while a large IG value denotes a significant amount of \textit{Shannon's information}. Therefore, investigating the magnitude of IG helps us estimate the informativeness of an event. We utilize these two information-theoretic measures, PMI and IG, to quantify the contribution of each feature. \input{figtex/2_PatchSampler_InD.tex} \input{figtex/3_OverallFlow.tex} \section{Method} In this study, we propose a model-agnostic \textit{perturbation-based} method. The method visually explains the prediction of any black-box classifier using two building blocks---enhanced marginalization (Section~\ref{section:marginalization}) and information-theoretic pixel attribution (Section~\ref{section:pixel_attribution}). The two components correspond to Step 1 and Step 2 of the perturbation-based methods, respectively. \subsection{Enhanced marginalization} \label{section:marginalization} To perform the marginalization expressed in (\ref{eq:marginalization}), we should sample input features, and thus a generative model $p(X_i \,|\, \xvec_{\setminus i})$ is necessary. If the generative model does not model the distribution correctly, it makes the perturbed images ($\left[\xvec_{\setminus i}, \tilde{x}_i\right]$ in (\ref{eq:marginalization})) OoD. Chang \textit{et al.}~\cite{chang2018explaining} partially addressed the OoD problem by replacing inputs using the CA-GAN \cite{yu2018generative} output, namely, they used $x_{\rm{ref}}=\textrm{CA-GAN}(\xvec_{\setminus i})$ for computing (\ref{eq:reference_value}). Nonetheless, their approach remains far from marginalization because CA-GAN in-fills the input deterministically without sampling. To model the complex distribution of pixels and sample multiple pixel values, we deploy a powerful generative model consisting of a neural network, referred to as ``PatchSampler". Because a pixel is strongly dependent on its surroundings, PatchSampler approximates $p(X_i \,|\, \xvec_{\setminus i}) \approx p(X_i \,|\, \widehat{\xvec}_{\setminus i})$, where $\widehat{\xvec}_{\setminus i}$ is the neighborhood of the pixel $x_i$. $p(X_i \,|\, \widehat{\xvec}_{\setminus i})$ is then modeled as $p_\phi(X_i \,|\, \widehat{\xvec}_{\setminus i})$ using a neural network comprising a series of convolutional layers with parameters $\phi$. We trained PatchSampler in advance using the training data of the classifier. Compared to the Gaussian model in Fig.~\ref{figure:gaussian_ood}, PatchSampler models the distribution far more accurately, as shown in Fig.~\ref{figure:enhanced_modeling}.
As a result, the computation in Step 1 becomes more accurate, and the OoD problem is prevented. The resulting attribution maps depicted in Fig.~\ref{figure:enhanced_modeling} show a dramatic improvement both qualitatively and quantitatively. As the PatchSampler was trained using the ImageNet training set, which is a large general image dataset, the trained PatchSampler is expected to apply to classifiers for other natural image datasets. Evaluating (\ref{eq:marginalization}) requires a classifier feedforward $p_{\theta}(Y \,|\, \xvec_{\setminus i}, x_i)$ for all possible values of $x_i$, which demands a large number of computations. We mitigate this computational issue by using the MC approximation with sample number $N$: \begin{equation} \label{eq:ours} p_{\theta}(Y \,|\, \xvec_{\setminus i}) \approx \frac{1}{N}\sum_{\tilde{x}_i \sim p_\phi(\cdot \,|\, \widehat{\xvec}_{\setminus i})}^{N}p_{\theta}(Y \,|\, \xvec_{\setminus i}, \tilde{x}_i). \end{equation} Although a larger $N$ yields a better approximation, we empirically show that a small $N$ provides visually and numerically indistinguishable results in the Appendix. Given a class $y_c$, an input feature $x_i$ obtains a high attribution when the corresponding $p_{\theta}(y_c \,|\, \xvec_{\setminus i})$ is far from $p_{\theta}(y_c \,|\, \mathbf{x})$. This happens under the following two circumstances. First, $x_i$ should be relevant to class $y_c$. Second, the probable values of $X_i$ should be diverse, so that observing $\{ X_i=x_i \}$ makes the posterior different from the prior. In other words, pixels receive high attribution when they are both \textit{relevant} to $y_c$ and \textit{unpredictable} \cite{zintgraf2017visualizing}. However, pixels are highly correlated with their neighborhoods and are extremely predictable. By contrast, patches are more uncertain and less predictable. Since a pixel is a patch of unit size, we regard the contribution of a patch instead of a pixel for generality. We let $x_i$ be a patch of size $K \times K$ with a hyperparameter $K$. Accordingly, PatchSampler models a joint distribution of a $K \times K$ patch conditioned on its surrounding $3K \times 3K$ patch. For $K > 1$, patch-wise calculated attributions were equally distributed to every pixel in each patch. Throughout the research, we used $K=8$ as it provides easily understandable attribution maps. However, the most suitable value of $K$ differs for each input image. In the Appendix, we provide an example of the attribution maps generated using various values of $K$. One can try various values of $K$ to obtain a more intuitive visualization depending on the size of the object. \subsection{Information-theoretic pixel attribution} \label{section:pixel_attribution} Given an image $\mathbf{x}$ and its (unknown) label, we consider their corresponding random variables, $\mathbf{X}$ and $Y$. The classifier prediction $p_{\theta}(Y \,|\, \mathbf{x})$ is the posterior distribution of $Y$. Starting from a prior distribution, the classifier makes a final prediction based on the information from the input observation. To quantify the contribution of the features, it is reasonable to measure the amount of information provided by each feature. Accordingly, our method outputs two attribution maps, $\mathbf{m}_{\rm{pmi}}$ (PMI map) and $\mathbf{m}_{\rm{ig}}$ (IG map) $\in \mathbb{R}^{\rm{H} \times \rm{W}}$, for two different questions: ``How much does each pixel support a specific class $y_c$?" and ``How informative is each pixel?"
\subsubsection{PMI map} First, assume that we are interested in how much pixel $x_i$ accounts for class $y_c$. In other words, given the other part of the image, how much does an observation \{$X_i=x_i$\} trigger the predicted event \{$Y=y_c$\}? Such a notion is captured by PMI between the two events conditioned on $\xvec_{\setminus i}$. For a given pixel, its PMI for a class $y_c$ is computed as follows: \begin{equation} \label{eq:pmi} \textup{PMI}(y_c;x_i \,|\, \xvec_{\setminus i})=\log \left (\frac{p_\theta(y_c \,|\, \xvec_{\setminus i}, x_i)}{p_\theta(y_c \,|\, \xvec_{\setminus i})} \right). \end{equation} It is noteworthy that the numerator in (\ref{eq:pmi}) is the prediction for the original input. For every pixel, we calculate each PMI value using (\ref{eq:pmi}) and the result constitutes a \textbf{PMI map}: $\mathbf{m}_{\rm{pmi}}^{i}(y_c,\mathbf{x}) \overset{\underset{\mathrm{def}}{}}{=} \textup{PMI}(y_c;x_i \,|\, \xvec_{\setminus i})$. Pixels with positive PMI support $y_c$, whereas those with negative PMI oppose $y_c$. Note that a PMI map can be calculated for any class $y_c$, not necessarily the top-1 class. \subsubsection{IG map} To quantify the informativeness of a pixel $x_i$ regardless of a specific class, we estimate the IG between an observation \{$X_i=x_i$\} and $Y$ given $\xvec_{\setminus i}$ using (\ref{eq:ig}). \begin{align*} \label{eq:ig} \textup{IG}(Y, x_i \,|\, \xvec_{\setminus i}) &= \textup{D}_\textup{KL} \left (p_\theta(Y \,|\, \xvec_{\setminus i}, x_i) \parallel p_\theta(Y \,|\, \xvec_{\setminus i}) \right) \\ &= \mathbb{E}_{y_c \sim p_\theta(Y \,|\, \mathbf{x})}\left [\log \left (\frac{p_\theta(y_c \,|\, \xvec_{\setminus i}, x_i)}{p_\theta(y_c \,|\, \xvec_{\setminus i})} \right) \right ] \\ &= \numberthis \mathbb{E}_{y_c \sim p_\theta(Y \,|\, \mathbf{x})}\left [\textup{PMI}(y_c;x_i \,|\, \xvec_{\setminus i})\right ]. \end{align*} Likewise, the \textbf{IG map} is defined as $\mathbf{m}_{\rm{ig}}^{i}(\mathbf{x}) \overset{\underset{\mathrm{def}}{}}{=} \textup{IG}(Y, x_i \,|\, \xvec_{\setminus i})$. Notably, the IG is an expectation of the PMI over all possible $y_c$ in (\ref{eq:ig}). This allows the calculation of IG by obtaining the PMI for every class $y_c$ and calculating the weighted sum with the predicted probability $p_\theta(y_c \,|\, \mathbf{x})$. Because a single feed-forward of a classifier calculates $p_\theta(y_c \,|\, \mathbf{x})$ for every $y_c$, an IG map can be obtained at only a marginal cost given the PMI maps. Pixels with high IG attribution are informative to the classifier and thus salient. The IG map is not class-specific and is complementary to a class-specific PMI map. In particular, when multiple classes are probable so that no class-specific explanations are sufficient to describe the behavior of the classifier, only the IG map can provide an acceptable interpretation. Note that such class-independent explanations are not available in most existing methods, including perturbation-based methods~\cite{zeiler2014visualizing,zintgraf2017visualizing}. In the calculation of Eqs.~(\ref{eq:pmi}) and (\ref{eq:ig}), we add a small constant ($\epsilon=10^{-13}$) inside the logarithm for numerical stability. In summary, our method generates two attribution maps, namely a PMI map and an IG map, which provide class-specific and class-independent explanations, respectively. Fig.~\ref{fig:big_picture} shows an overview of the proposed method. The pseudo-code of our method is provided in the Appendix.
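\par In addition to the pseudo-code in the Appendix, the per-patch computation of (\ref{eq:ours}), (\ref{eq:pmi}), and (\ref{eq:ig}) can be sketched as follows; in this minimal Python/NumPy sketch, \texttt{classifier} is assumed to return the class-probability vector and \texttt{patch\_sampler} to draw patch values conditioned on the surrounding pixels, and all names are illustrative rather than taken from our implementation:
\begin{verbatim}
import numpy as np

EPS = 1e-13  # small constant inside the log for numerical stability

def patch_attributions(classifier, patch_sampler, x, i, j, K=8, N=8):
    p_full = classifier(x)                    # p(Y | x)
    p_marg = np.zeros_like(p_full)
    for _ in range(N):                        # MC marginalization
        x_tilde = x.copy()
        x_tilde[i:i + K, j:j + K] = patch_sampler(x, i, j, K)
        p_marg += classifier(x_tilde) / N     # p(Y | x without patch)
    # PMI per class: log( p(y_c | x) / p(y_c | x without patch) ).
    pmi = np.log((p_full + EPS) / (p_marg + EPS))
    # IG: expectation of the PMI under the posterior p(Y | x).
    ig = float(np.sum(p_full * pmi))
    return pmi, ig
\end{verbatim}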
\input{figtex/3_PMImap_ex.tex} \input{figtex/3_PMImap_Top12} \subsubsection{Comparison to Learning to Explain~\cite{l2x}} Inspired from an information-theoretic perspective, Learning to Explain (L2X) tries to explain a prediction by selecting a set of informative input features. L2X trains a selector model that predicts a set of features with a high variational lower bound of mutual information. Unlike PDA~\cite{zintgraf2017visualizing} and our method, L2X does not consider the distribution of a feature and uses a variational approximation with a heuristic black replacement. For this reason, MNIST was the only image dataset on which the authors demonstrated their method. Taking the pixel distribution into account, we attempt to explain a classifier trained with datasets of any kind. Moreover, we directly quantify the information of each feature so that our method can deduce a solution for their problem, but not vice versa. \section{Results and Discussion} First, we present the interpretation results of a classifier prediction using the proposed method. Second, we compare the proposed method both quantitatively and qualitatively to the existing methods. Third, we present the strengths of the proposed method. Finally, we analyze the proposed method in detail. More results regarding the hyperparameters and the implementation details are provided in the Appendix. Throughout the experiments, we interpreted the predictions of the VGG19~\cite{simonyan2014very} classifier for various images. Besides this specific classifier, our method applies to any other model, and the explanation results for other image classifiers are also provided in the Appendix. \input{figtex/3_IGmap_ex.tex} \input{figtex/4_AllComparison.tex} \subsection{Interpretation results} \label{section:interpretation_results} \subsubsection{PMI maps} In Fig.~\ref{figure:PMImap_ex}, we provide examples of the PMI maps for the classifier prediction. The images are classified as \textit{corkscrew}, and their PMI maps clearly indicate that the corkscrews are the reason for this prediction. When an image contains multiple objects, the PMI map of each class can visualize the influence of each object on the corresponding class. For example, the PMI maps in Fig.~\ref{fig:pmi2_scubadiver}(a) suggest that the scuba diver supports the \textit{scuba diver} class, but contradicts the rival class, \textit{coral reef}, and vice versa. Meanwhile, evidence for each class is not always mutually exclusive, particularly when the classes are similar. In Fig.~\ref{fig:pmi2_scubadiver}(b), the face of the snow leopard is supporting evidence for both the \textit{tiger} and \textit{snow leopard} classes. \subsubsection{IG maps} In Fig.~\ref{fig:useful_igmap}, the classifier failed to pick a dominant class; the top-3 predicted classes were \textit{zucchini} ($p=0.20$), \textit{slug} ($p=0.18$), and \textit{acorn} ($p=0.14$). In such cases, class-specific attribution maps are hardly helpful, and class-independent explanations produced by the IG map are the only remaining option. In Fig.~\ref{fig:useful_igmap}, the IG map indicates that the classifier was trying to classify the herb at the bottom.
\input{figtex/4_DeletionCurve.tex} \input{figtex/4_DeletionAUC_merged.tex} \input{figtex/4_DeletionAUC_generative.tex} \subsection{Comparison to the existing methods} \label{section:comparison} \subsubsection{Quantitative comparison} \label{section:quantitative_comparison} In this section, we examine the \textit{faithfulness}~\cite{ribeiro2016should} of our attribution method through a quantitative analysis. A faithful explanation is one that correctly explains the behavior of the classifier~\cite{ribeiro2016should}. Deletion AUC~\cite{petsiuk2018rise}, the metric to be used, measures the area under the prediction probability curve as pixels with high attributions are gradually removed. A low AUC implies a steep decrease in prediction, thus indicating that the explanation correctly captures the relevant area for the predicted class. To gradually remove pixels, the authors in \cite{petsiuk2018rise} masked them using gray values. In addition to this, we tried in-filling the pixels with CA-GAN~\cite{yu2018generative}. For example, given an image in Fig.~\ref{fig:deletion_curves}(a), the PMI map for the \textit{green mamba} class is provided in Fig.~\ref{fig:deletion_curves}(b). Fig.~\ref{fig:deletion_curves}(c) shows the predicted probability curve as the pixels with high scores are gradually removed. In this example, the deletion curve of the PMI map shows the steepest drop, and hence the PMI map provides the explanation with the highest faithfulness. Deletion AUCs~\cite{petsiuk2018rise} are compared between maps from closely related methods (perturbation-based and mask-based methods) and are presented in Fig.~\ref{fig:deletion_scores}. The PMI maps yield the lowest average Deletion AUCs in both the gray and CA-GAN~\cite{yu2018generative} in-fill experiments. The results confirmed the improved faithfulness of the PMI map over the perturbation-based baselines. As the IG map is a class-independent attribution map, it does not necessarily yield a low AUC. Interestingly, the AUC for the IG map is quite low, and we postulate that this is because the regions that discriminate among the most probable classes deliver significant information to the classifier overall as well. \subsubsection{PatchSampler} \label{section:patchsampler_result} The key contribution of this study is the deployment of PatchSampler to accurately perform marginalization in Step 1. The Deletion AUCs presented in Fig.~\ref{fig:deletion_scores_generative} show that adopting PatchSampler as a generative model lowers both Deletion AUC metrics compared to using heuristic values. We can conclude that the use of PatchSampler improves the correctness of the attribution maps. When another type of generative model, CA-GAN~\cite{yu2018generative}, is used, the Deletion AUCs slightly increase. CA-GAN models the image distribution quite well and prevents the OoD problem, but it cannot accurately implement Eq.~(\ref{eq:marginalization}) because it cannot sample multiple pixel values for marginalization. Moreover, Fig.~\ref{figure:patches} presents example patches substituted by the baseline methods and PatchSampler, and it is evident that the patches replaced by PatchSampler appear more natural. When quantitatively compared, PatchSampler yields a much lower FID score~\cite{heusel2017gans} (the lower the better) than the Gaussian used in PDA~\cite{zintgraf2017visualizing}. Based on the above experimental evidence, adopting PatchSampler appears to achieve our goal of keeping the perturbed images in-distribution.
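\par For reference, the Deletion AUC computation used in this quantitative comparison can be sketched as follows; this minimal Python/NumPy sketch uses the gray in-fill variant (the CA-GAN variant would in-paint the removed pixels instead), and all names are illustrative:
\begin{verbatim}
import numpy as np

def deletion_auc(classifier, x, y_c, attribution, step=0.01, ref=0.5):
    # Remove pixels in order of decreasing attribution and integrate
    # the predicted probability curve; a lower AUC is better.
    order = np.argsort(attribution.ravel())[::-1]
    n_step = max(1, int(step * order.size))
    x_cur = x.copy()
    probs = [classifier(x_cur)[y_c]]
    for k in range(0, order.size, n_step):
        rows, cols = np.unravel_index(order[k:k + n_step],
                                      attribution.shape)
        x_cur[rows, cols] = ref   # gray in-fill of the removed pixels
        probs.append(classifier(x_cur)[y_c])
    return float(np.trapz(probs, dx=1.0 / (len(probs) - 1)))
\end{verbatim}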
One might suspect that explaining a black-box model by using PatchSampler, which is another black-box model, hurts the trustworthiness of the explanation. However, we can easily inspect the samples from PatchSampler and understand their impact on the attribution calculation. Therefore, we believe that the explanation provided by the PMI and IG maps confines the non-interpretable part to an acceptable level. \input{figtex/4_Patches.tex} \input{figtex/4_OtherComparison.tex} \subsubsection{Visual assessment} \label{section:visual_comparison} We present two example images and attribution maps for the corresponding classifier predictions in Fig.~\ref{fig:comparison_all}. In the hockey puck image (Fig.~\ref{fig:comparison_all} top), the proposed PMI and IG maps clearly identify the relevant object, the hockey puck. Compared with our method, PDA~\cite{zintgraf2017visualizing} assigns excessive attribution to irrelevant areas such as the background. This is due to the OoD problem incurred by the Gaussian modeling of images. A bell cote image (Fig.~\ref{fig:comparison_all} bottom) is an example showing that the PMI map can offer negative evidence. The PMI map pinpoints the bell as supporting evidence and the clock as negative evidence. It transpired that the clock supports another class, \textit{analog clock}. Note that, except for PDA~\cite{zintgraf2017visualizing}, which is a perturbation-based method, the other methods only visualize supporting evidence. \input{figtex/4_NegativeEvidence.tex} \subsubsection{Comparison to the methods in other categories} In this section, we present a comparison with attribution methods other than those compared in Section~\ref{section:comparison}. In Fig.~\ref{fig:other_comparison}, we provide the attribution maps generated using the Saliency map~\cite{simonyan2013deep}, e-LRP~\cite{bach2015pixel}, KernelSHAP~\cite{lundberg2017unified}, and LIME~\cite{ribeiro2016should} for the same images in Fig.~\ref{fig:comparison_all}. Backprop-based methods generate extremely scattered attribution maps and provide low visibility. KernelSHAP~\cite{lundberg2017unified} and LIME~\cite{ribeiro2016should} use super-pixels as the unit of perturbation, and hence their attribution maps have a block-like structure. \input{figtex/4_DeletionAUC_other.tex} They highlight the key objects in the images (the hockey puck and the bell), albeit with redundant regions. Quantitatively compared, the average CA-GAN~\cite{yu2018generative} Deletion AUCs~\cite{petsiuk2018rise} for 5,000 randomly selected ImageNet images in Fig.~\ref{fig:deletion_other} show that the baseline methods result in a higher AUC than the PMI map, meaning that their maps indicate the evidence of the classifier less accurately than our method. \subsection{Strengths} \subsubsection{Negative evidence} \label{section:negative_evidence} Explanations become more descriptive when they provide contrary as well as supporting evidence. In Fig.~\ref{fig:negative_evidence}, the PMI map explains the rationales behind the prediction using negative evidence. Negative evidence facilitates a more in-depth understanding of a classifier. However, mask-based and most backprop-based methods do not support such analysis because they only provide supporting regions. To verify whether the negative evidence truly has a negative influence on the classifier prediction, we conducted the following experiment.
\input{figtex/4_PMI2_IGmap.tex} \input{figtex/4_SanityChecks.tex} \input{figtex/4_NegativeRemove.tex}
Fig.~\ref{fig:negative_remove} shows the average top-1 predicted probabilities for ImageNet test images as portions of the negative evidence are gradually removed. The result shows that the classifier gains confidence as the negative evidence is removed. This implies that the negative evidence provided by our method is indeed negative.
\subsubsection{Class-independent explanation} \label{section:useful_igmap}
Almost all existing methods~\cite{chang2018explaining,dabkowski2017real,fong2019understanding,sundararajan2017axiomatic,zeiler2014visualizing,zintgraf2017visualizing} only provide class-specific explanations. However, when multiple classes are probable, it is difficult to select an appropriate class-specific map to interpret the prediction. In Fig.~\ref{fig:igmap_and_unit}(a), VGG19~\cite{simonyan2014very} predicted the image as a \textit{bassoon} ($p$ = 0.08), a \textit{stethoscope} ($p$ = 0.07), and a \textit{horn} ($p$ = 0.07) and failed to identify a dominant class. The IG map implies that the classifier focused on the stethoscope. Unlike the other methods, the IG map can provide an overall explanation of the region on which the classifier based its decision.
\subsubsection{Implication of an attribution} \label{section:implication}
Mask-based methods~\cite{chang2018explaining,dabkowski2017real,fong2019understanding,fong2017interpretable} propose \textit{how to calculate} their own attribution, without a clear definition of \textit{what} the attribution is. They quantify the relative importance of input features, but the value itself has no clear meaning. In contrast, the PMI and IG maps measure the actual amount of \textit{Shannon's information} delivered in \textit{bits} by each feature (Fig.~\ref{fig:igmap_and_unit}(b)). Therefore, our method provides a theoretically meaningful, yet intuitive, explanation.
\subsection{Additional remarks}
\subsubsection{Sanity Checks} \label{section:sanity}
Adebayo~\textit{et al.}~\cite{adebayo2018sanity} proposed two Sanity Checks that any valid attribution method should pass. They test whether an attribution map is sensitive to both the training data and the classifier parameters. The IG map showed a clear sensitivity to the classifier parameters, as in Fig.~\ref{figure:sanity}, and both PMI and IG maps passed the two Sanity Checks. More results are provided in the Appendix.
\input{figtex/4_Swing.tex}
\subsubsection{Biased data} \label{section:clever_hans}
In Fig.~\ref{figure:clever_hans}, the PMI maps report that the person riding a swing is a stronger supporting feature of the class \textit{swing} than the swing itself. This is because the majority of the training data for the class \textit{swing} contains a swing with a person riding it, as shown in Fig.~\ref{figure:clever_hans}. Thus, the classifier learned a strong correlation between the target class and the presence of a person. Examining a trained classifier using PMI maps can therefore be beneficial for improving generalization and reducing unwanted data bias.
\subsubsection{Text cues}
In Fig.~\ref{fig:bandaid}(a), VGG19~\cite{simonyan2014very} classified the test images as the \textit{Band Aid} class, and the corresponding PMI maps suggest that the predictions are based on the red ``BAND-AID'' text in the images. This observation can be attributed to the frequent occurrence of similar text in the training data. The police van images in Fig.~\ref{fig:bandaid}(b) are another example.
\input{figtex/4_BandaidRemove.tex}
To verify whether the text functions as a critical cue for classification, we observe the predicted probability as the letters in the text are removed one by one. Fig.~\ref{fig:bandaid_remove} shows that erasing a couple of letters yields a significant drop in the predicted probability. This signifies that the classifier indeed responds to the text in the image. However, the evidence is still too limited to conclude that the classifier can \textit{read}. Rather, the classifier seems to have learned the low-level patterns, as those texts are visually similar.
\input{figtex/4_Animals.tex}
\subsubsection{Classifying animals} \label{section:animals}
ImageNet has a number of animal classes, and a well-trained classifier should capture the unique appearance of each animal class. In Fig.~\ref{figure:animals}, we provide PMI maps of VGG19~\cite{simonyan2014very} for several animal classes. The classifier successfully captured some distinctive features. For many animals, the facial areas are more discriminative than the other body parts.
\subsubsection{Explaining different classifiers}
Since our method is model-agnostic, it can explain a classifier of any kind, including non-differentiable ones. The PMI maps for four popular image classifiers are presented in Fig.~\ref{fig:various_models}. The four classifiers made the same predictions for each image (provided to the left of the input images), and the PMI maps are generated for those classes. For some images, the classifiers focused on similar cues (1st row), while for the other images, they made decisions based upon different areas (2nd and 3rd rows).
\input{figtex/4_VariousModels.tex}
\subsubsection{Adversarial robustness} \label{section:adversarial_robustness}
Ghorbani \textit{et al.}~\cite{ghorbani2019interpretation} proposed two attack methods (iterative and random perturbation attacks) for feature-based interpretations. The iterative attack performs gradient descent on the input image in the direction that maximally changes the explanation, and the authors were able to manipulate the explanation significantly while leaving the input image almost unchanged. However, gradient signals are unattainable with our method because the sampling node that follows PatchSampler blocks backpropagation. Only the random attack, which the authors reported to have weaker attack performance, is feasible, and hence our method is more robust against adversarial attacks than backprop-based methods such as the Saliency map~\cite{simonyan2013deep} and its variants~\cite{shrikumar2017learning,sundararajan2017axiomatic}.
\section{Conclusion}
\subsection{Contributions}
In this study, we proposed a new visual explanation method based on information theory and showed its five advantages over the existing methods. We developed a novel approach that analyzes the input and its label as two random variables, thereby suggesting the use of theory-backed attribution methods, namely the PMI and IG maps. The improved marginalization and the use of theory-backed attribution calculation schemes provided easily interpretable and strongly convincing attribution maps. Ribeiro \textit{et al.}~\cite{ribeiro2016should} reported a trade-off between the interpretability and faithfulness of an explanation. The most faithful explanation (\textit{i}.\textit{e}., the parameters themselves) lacks interpretability, and a more understandable explanation inevitably simplifies the classifier behavior, thus losing its faithfulness.
Hence, refining an attribution map by averaging out the noise \cite{smilkov2017smoothgrad} or forcing attributions to cluster or suppressing artifacts using a regularizer~\cite{chang2018explaining,fong2017interpretable} gains human interpretability in exchange for faithfulness. By contrast, we performed no heuristic refinement of the visual appearance, yet obtained easily interpretable results.
\subsection{Expandability}
Although PMI and IG maps are defined and provided for image data herein, the notions of marginalizing the input feature and measuring PMI and IG are applicable to other domains such as language modeling~\cite{devlin2019bert} and tabular data~\cite{hwang2019hexagan} using appropriate generative models.
\section{Appendix}
\subsection{Pseudo-code} \input{9__pseudo_code.tex}
\subsection{Effect of patch size}
\begin{minipage}{\textwidth} \begin{figure}[H] \centering \includegraphics[width=0.85\linewidth]{figures/Figure_VariousK_v6_small.pdf} \caption{\textbf{Attribution maps with varying patch size, $K$.} VGG19~\cite{simonyan2014very} classified the image as a \textit{dowitcher} with $p=0.999$. As $K$ increases, our method generates more interpretable attribution maps. By examining these maps with a proper $K$, one can easily understand that the model based its decision on the bird's beak and facial region. } \label{fig:Various_K} \end{figure} \end{minipage}
\subsection{Effect of MC sample number}
\begin{minipage}{\textwidth} \begin{figure}[H] \centering \includegraphics[width=0.85\linewidth]{figures/Figure_VariousN_v6_small.pdf} \caption{\textbf{Attribution maps with varying MC sample number, $N$.} Attribution maps for $N \leq 128$ are visually indistinguishable from maps generated using $N=1024$. They are almost identical in terms of the Pearson correlation coefficient ($\rho$) as well. Accordingly, we chose $N=8$ as a compromise between computational efficiency and approximation accuracy and used this value throughout the study. } \label{fig:Various_N} \end{figure} \end{minipage} \clearpage
\subsection{Sanity Checks}
\begin{turn}{90} \noindent \begin{minipage}{0.95\textheight} \begin{figure}[H] \vspace{-25pt} \centering \includegraphics[width=0.85\linewidth]{figures/Figure_randomWeight_v3_small.pdf} \caption{ \textbf{An input image (a), results from the parameter randomization test~\cite{adebayo2018sanity} (b, c), and the label randomization test~\cite{adebayo2018sanity} (d).} In the parameter randomization test, the weights of VGG19~\cite{simonyan2014very} are randomized from the top to the bottom layers. PMI and IG maps for each randomized classifier are depicted in (b). As each layer is successively randomized, both attribution maps become dissimilar to the original explanation. The Pearson correlation coefficient and Spearman's rank coefficient between each map and the original maps are plotted in (c). PMI and IG maps for a fully randomized classifier are completely different from the original maps; both coefficients are close to zero. Thus, they passed the first test. In the label randomization test, VGG19 is retrained to fit the ImageNet training images with random labels. Both attribution maps for the deformed classifier look completely different from the original maps. They imply that no pixels provided useful information \textit{overall} nor supported the \textit{ostrich} class. Indeed, a numerical comparison using the Pearson correlation coefficient ($\rho$) and Spearman's rank coefficient ($r_s$) confirms this point. Therefore, PMI and IG maps passed both Sanity Checks~\cite{adebayo2018sanity}.
} \label{fig:rand_param} \end{figure} \end{minipage} \end{turn} \clearpage
\section{Implementation Details}
\subsection{Environments}
The proposed method is implemented using TensorFlow~\cite{abadi2016tensorflow} version 1.12, on a machine consisting of an Intel i7-6850K CPU and a GeForce GTX 1080 Ti GPU.
\subsection{Running time}
The generation of a pair of PMI and IG maps of VGG19~\cite{simonyan2014very} for one input image requires less than thirty minutes using a single GPU. Note that the running time is linear in $N$, and reducing $N$ to a quarter of the used value (\textit{i}.\textit{e}., $N=2$) gives almost identical attribution maps. Moreover, additional approximations, such as increasing the patch stride, can further reduce the computation time to a couple of minutes.
\subsection{PatchSampler}
PatchSampler predicts the pixel values of a patch from a larger patch surrounding it. We model PatchSampler as a stack of convolutional (conv.) layers as follows: conv9-256, conv9-256, maxpool2, crop out 2 pixels from each edge, conv8-256, conv8-256, conv8-256, conv8-256, conv8-256, and conv8-768. Every conv.\ and maxpool operation uses a stride of one (conv9-256 denotes a conv.\ layer with kernel size 9 yielding 256 feature maps). The channels in the last layer are grouped into three, one group per RGB channel, each representing 256 possible values. Therefore, a softmax activation is applied to each group of channels. The output of PatchSampler is thus three groups of 256-class categorical distributions, one for each pixel and each RGB channel. PatchSampler is then trained to predict the pixel values using the ImageNet training set and a multi-class classification loss. All convolutional layers except for the last layer are followed by a LeakyReLU~\cite{maas2013rectifier} activation with $\alpha=0.2$.
\subsection{Classifiers}
The parameter weights of the analyzed classifiers were downloaded from the PyTorch model zoo\footnote{\url{https://pytorch.org/docs/stable/model_zoo.html\#module-torch.utils.model_zoo}}.
\subsection{Methods}
The implementations of the other attribution maps were downloaded from various online sources as follows. Prediction Difference Analysis (PDA)~\cite{zintgraf2017visualizing}: \url{https://github.com/lmzintgraf/DeepVis-PredDiff}; Saliency map (Gradients)~\cite{simonyan2013deep}, Integrated Gradients~\cite{sundararajan2017axiomatic}, and $\epsilon$-LRP~\cite{bach2015pixel}: \url{https://github.com/marcoancona/DeepExplain}; Grad-CAM~\cite{selvaraju2017grad}: \url{https://github.com/jacobgil/pytorch-grad-cam}; Real Time Image Saliency~\cite{dabkowski2017real}: \url{https://github.com/PiotrDabkowski/pytorch-saliency}; Meaningful Perturbation~\cite{fong2017interpretable}: \url{https://github.com/jacobgil/pytorch-explain-black-box}; FIDO~\cite{chang2018explaining}: \url{https://github.com/zzzace2000/FIDO-saliency}; Extremal Perturbation~\cite{fong2019understanding}: \url{https://github.com/facebookresearch/TorchRay}; RISE~\cite{petsiuk2018rise}: \url{https://github.com/eclique/RISE}. Occlusion~\cite{zeiler2014visualizing} was implemented by us. The SSR loss was used for the generation of FIDO maps because it is reported to be less susceptible to artifacts than the SDR loss~\cite{chang2018explaining}.
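For illustration, the following is a minimal PyTorch re-implementation sketch of the PatchSampler architecture described above (the original was implemented in TensorFlow 1.12; the use of unpadded (``valid'') convolutions and the exact input patch size are our assumptions):

\begin{verbatim}
import torch
import torch.nn as nn

class PatchSamplerSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.act = nn.LeakyReLU(0.2)
        self.conv9a = nn.Conv2d(3, 256, kernel_size=9)     # conv9-256
        self.conv9b = nn.Conv2d(256, 256, kernel_size=9)   # conv9-256
        self.pool = nn.MaxPool2d(kernel_size=2, stride=1)  # maxpool2, stride 1
        self.convs8 = nn.ModuleList(
            [nn.Conv2d(256, 256, kernel_size=8) for _ in range(5)])
        self.head = nn.Conv2d(256, 768, kernel_size=8)     # conv8-768, no act.

    def forward(self, x):          # x: (B, 3, H, W) surrounding patch
        x = self.act(self.conv9a(x))
        x = self.act(self.conv9b(x))
        x = self.pool(x)
        x = x[:, :, 2:-2, 2:-2]    # crop out 2 pixels from each edge
        for conv in self.convs8:
            x = self.act(conv(x))
        logits = self.head(x)      # (B, 768, h, w)
        b, _, h, w = logits.shape
        # group the 768 channels into 3 x 256: one categorical
        # distribution over pixel values per RGB channel
        return torch.softmax(logits.view(b, 3, 256, h, w), dim=2)
\end{verbatim}

Training then amounts to a per-pixel, per-channel 256-class cross-entropy against the ground-truth pixel values of the central patch.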
\section{Additional Remarks}
\subsection{Backprop-based method}
Since Simonyan \textit{et al.}~\cite{simonyan2013deep} first suggested the Saliency map to visualize attribution using the gradient of the class score with respect to the input, successive works have improved its visual quality by reducing the noise with averaging~\cite{smilkov2017smoothgrad} and by using integration~\cite{sundararajan2017axiomatic}. In addition, some works have changed the back-propagation rule by using various heuristic methods such as Deconv~\cite{zeiler2014visualizing}, Layer-wise Relevance Propagation (LRP)~\cite{bach2015pixel}, and Guided Back Propagation~\cite{springenberg2014striving}, thereby producing visually appealing results. Because these methods calculate an attribution map through several backpropagations, they are called backprop-based methods. As indicated by Dabkowski~\textit{et al.}~\cite{dabkowski2017real}, the backprop-based methods are fast enough to be real-time, but their quality is limited. Moreover, some of these methods fail the Sanity Checks~\cite{adebayo2018sanity} by showing invariance to the classifier parameters and to the training data. The methods that fail are therefore inadequate in many domains. Moreover, Nie~\textit{et al.}~\cite{nie2018theoretical} theoretically showed that Deconv~\cite{zeiler2014visualizing} and Guided Back Propagation~\cite{springenberg2014striving} simply generate maps similar to the input image, rather than providing an explanation of the classifier.
\subsection{Mask-based method}
Dabkowski \textit{et al.}~\cite{dabkowski2017real} attempted to find an attribution map having a different implication. Their attribution map is a mask that covers a relevant region of the target object. They generated the mask in real-time by training a mask-generating model in advance. In another approach \cite{fong2017interpretable,chang2018explaining,fong2019understanding}, similar masks were optimized iteratively using gradient descent, with additional loss terms used to force the mask to cluster and to suppress high-frequency components. Such masks tend to cover the entire object in the image rather than only the most decisive portion within the object. For example, given an image of a \textit{car}, mask-based methods cover the entire car area, whereas in other approaches \cite{zeiler2014visualizing,zintgraf2017visualizing}, a high score will be assigned only to the most salient subregions, such as the wheels.
\subsection{Shapley-based method}
The Shapley value~\cite{shapley201617} quantifies the contribution of each player in a coalitional game. Following this game-theoretic approach, Lundberg~\textit{et al.}~\cite{lundberg2017unified} adopted the Shapley value to estimate feature importance. They estimate the expected amount of prediction change if each feature is masked out under all possible contexts. Frye~\textit{et al.}~\cite{frye2020shapley} take a similar approach while using a generative model to keep the masked data on the data manifold. Their use of a generative model has a motivation similar to that of PatchSampler in the perturbation-based methods. However, Kumar~\textit{et al.}~\cite{kumar2020problems} argued that the Shapley value is inappropriate as an attribution method.
\section{Introduction}
The image of the world offered by quantum mechanics leaves us with a sense of dizziness. It is hard to make sense of the discreteness and the indetermination revealed by the theory. Different reactions to this vertigo lead to different ways of understanding what the theory tells us about reality, namely different ``interpretations'' of the theory. Two opposite attitudes are possible. One is to try to bend discreteness and indetermination into an underlying hypothetical continuous and determined reality. For instance, the Many-Worlds interpretation assigns ontological value to the quantum states, which are continuous and always determined; while the de Broglie--Bohm, or pilot-wave, interpretation assigns ontological value to a commuting algebra of preferred variables such as positions of particles, which are also assumed to be continuous and always determined. All these are interpretations based on an ontology of objects, or entities, with properties. The other possible attitude is to take discreteness and indetermination at their face value, and study their consequences. The relational interpretation of quantum mechanics \cite{Rovelli:1995fv} starts from this second position. It does not make use of an ontology of entities that always have properties, but rather an ontology of relations, where the \emph{properties} (of relata) are only determined at discrete interaction times and are always relative to both interacting systems\footnote{The relationality of relational quantum mechanics has been compared with ontic structuralism by Candiotto and Dorato \cite{Candiotto:2017,Dorato:2020}. The metaphysical implications of relational quantum mechanics, and the association with the more general structuralist framework, still require an in-depth investigation. In this respect, of particular interest are the positions developed by Esfeld, French and Ladyman \cite{Esfeld:2004,FrenchLadyman:2003}.}. This is a rather radical metaphysical position: it places relations, rather than objects, or substances, at the center of the metaphysical conception. Articulating such a position has its difficulties: How to think of relations before relata? How to preserve the objectivity of our representations, if properties turn out to be so relative? What guarantees the commensurability between perspectives? There are answers to these questions \cite{FrenchKrause:2006,LadymanRoss:2007,Rovelli:2017sky}, but they involve a radical rethinking of the conceptual basis of all our representations of reality. So, why should we embark on this arduous journey, when more pacifying readings of quantum theory, compatible with a more naive realism, are available? After all, different interpretations offer coherent frameworks for understanding the content of quantum mechanics and interpreting the reality around us. Internal coherence is a necessary condition for a consistent interpretation of quantum mechanics, but it is insufficient to help us choose between different interpretations. One possibility to settle this problem is to delegate the answer to the future: some interpretations may turn out to be more fruitful. This is for instance how the debate on whether or not it is better to consider the Earth to be the center of the Universe (a non-empirical question!) was settled: one option turned out to be definitely more fruitful. It has been argued that quantum gravity might be easier within one interpretation than another.
Or perhaps a future theory superseding quantum mechanics will require one particular interpretation \cite{Smolin,Valentini:2021izg}. In all these cases, however, the new results against which to evaluate current interpretations are not yet available. Here I want, instead, to investigate a different strategy for evaluating interpretations of quantum theory: their coherence with the conceptual frameworks of {\em other} physical theories that best capture our recent advances in understanding the physical world. I argue below that the relationality that characterizes the relational interpretation of quantum mechanics is in fact not so unconventional after all. Rather, it characterizes modern physics. My aim is to provide in this way a more solid foundation for the idea that relationality is central to quantum mechanics: through the analysis of how relationality is present, perhaps in a transversal way, in virtually all aspects of contemporary physics. In fact, I shall argue that the relationality at the base of quantum mechanics is already present in classical mechanics: by making this classical relationality explicit, we better situate the emergence of the more subtle relationality of the quantum case. To this end, we need to look at classical theory from a modern perspective, in particular using the language of symmetries and gauge theories. This allows us to create a natural bridge with quantum mechanics in its relativistic version. When we talk about interpretations of quantum mechanics, it is misleading to restrict ourselves to the non-relativistic domain: we must consider the compatibility with quantum field theory, with Yang--Mills theories and with gravity. Relationality offers a key to do so. I also discuss the specific connection between quantum theory and the relativistic theory of the gravitational field. The problematic aspects of this connection can be resolved by using a common language: that of totally constrained systems. This can serve as common ground for understanding the foundational problems of quantum mechanics, gravity, and the role of symmetries/gauge, within a common conceptual framework. If these steps are carried out carefully, then the image of a fundamental conceptual structure for understanding reality at our present level of knowledge opens up: that of covariant quantum fields. This is what quantum gravity is about: a quantum description of the gravitational field must follow from a covariant description of quantum fields in full generality. Physics forces us towards engaging metaphysically in a specific direction: everything that exists is quantum, everything that exists is covariant. I argue below that this is clarified by seeing that everything quantum is relational, and everything covariant is relational. \\[1em] \begin{center} \begin{tabular}{@{} rcl @{}} \hline \\ ~particles + fields & & ~~spacetime \\[.7em] { \bf Quantum Theory} ~ $\Downarrow$ \hspace{8mm} & & \hspace{6mm} $\Downarrow$ ~ { \bf General Relativity} \\[.5em] quantum fields & ~$\myarrow[-45]$ \hskip2cm $\myarrow[-135]$~ & covariant field \\ & covariant quantum fields &\\[2em] \hline \end{tabular} \end{center} \vspace{2mm}
\section{Relationality in quantum mechanics}
Taking relations as fundamental in quantum mechanics implies a change in the ontology that does not prevent us from being realists. The relational interpretation is a realistic one: when a self-adjoint operator, which encodes the physical properties of a system, assumes a certain eigenvalue, this corresponds to an element of reality.
Rovelli refers to these elements of reality of relational quantum mechanics as {\em facts} (see Rovelli's article in this volume), or {\em events}. Like the events in relativity, quantum events are about physical systems in interaction. We may label these systems as {\em observer} or {\em observed}, but subjectivity, agents, mind, idealism, phenomenology, etcetera, play no role in this interpretation. These facts actually have a clear correspondence with the ontology of classical mechanics. In the quantum formalism, the {\em observables} correspond by definition to the measurable quantities of classical physics. On the other hand, the relational interpretation is characterized by the fact that these {\em facts} are understood as intrinsically relational: they are real, but their actualization as real is always relative to both systems interacting when the value of a measurable quantity is determined. A fact can be true or actualized with respect to a system (which plays the abstract role of observer or measuring apparatus) and also not be true with respect to another. Reality and relationality are therefore inextricably linked. We attribute existence to a system from its possessing certain properties: location, speed, energy, angular momentum, area, volume... In quantum mechanics, we realize that it is not possible to speak of any of these properties except in a relational manner. Each property is determined by a relationship between systems. When this relationship does not materialize, the property is not determined. In a Galilean system, in order to define the speed of an object we must have a reference system with respect to which the speed is measured; different reference systems associate a different speed to the same object. If no reference system is defined, the object does not have any ``speed'' property. There cannot be a notion of ``speed'', for instance, associated with the universe as a whole. In relational quantum mechanics this principle is the foundation of the ontology of the theory: the elements of reality, the facts, are aspects of a relationship, and take place in interactions. The ontological priority of the interactions pervades the whole structure of what we call real\footnote{Notice that in an ontology of relations it is still possible to refer to {\em relata} in a meaningful way. For instance here we have used the notion of systems and we will talk about objects such as particles and fields. All of them, it is argued, have a relational nature. But, as a structuralist would say, it does not follow from logical principles that they cannot be objects of predication \cite{Saunders:2003}.}. In particular, interactions determine our notion of locality. In contemporary physics we emphasize the fact that interactions are local. But the notion of interaction is more primitive than that of localization. Nonetheless, as we shall see, it is precisely the locality of the interactions that saves us from some apparent paradoxes of quantum mechanics. The prototype of these paradoxes is the EPR one \cite{Smerlak:2006gi,Martin-Dussaud:2018kmh}. Two spatially separated systems ({\em observers}) A and B interact with ({\em measure}) two entangled particles, one each. This determines a fact relative to A and a fact relative to B. A paradox arises only from the assumption that what is an element of reality for A is also an element of reality for B, and vice versa. A and B may have an element of reality in common only when a local interaction occurs between them.
We cannot consider a fact for A, or a fact for B, as absolute. We can, however, introduce a third observer C, with respect to which there will be some element of reality regarding the comparison of the two, but only provided that C is interacting (hence locally interacting) with both A and B.
\section{The relationality of symmetries}
In modern physics, interactions are largely encoded in symmetries. The symmetries of a system determine the possible interactions that such a system can have with another system. Symmetries therefore capture the potential of interactions among systems. The apparent arbitrariness that often appears in the definition of the symmetries of a system reflects different possibilities for the system to couple to another system. For instance, general relativity formulated in a tetrad formalism has an additional local symmetry with respect to Einstein's metric formulation, which captures the possibility of coupling the theory to oriented local objects such as fermions. In particular, from this perspective, gauge symmetries do not represent redundant superstructures. They do not just express an indeterminism that needs to be eliminated to get a deterministic theory, or a redundancy in any other sense. The apparent arbitrariness has its origin in the relationship between gauge and relationality. A gauge transformation is a mathematical redundancy only when we consider systems in isolation. The coupling of the system with other systems can well be given by (gauge-invariant) interactions that couple {\em gauge degrees of freedom} of one system with {\em gauge degrees of freedom} of the other. In this coupling, new physical degrees of freedom are born. For instance, the Maxwell potential is redundant in the dynamics of the electromagnetic field alone, but is needed in coupling the field to some charged fields such as an electron. The Maxwell potential is not just a redundant mathematical addition to reality: it is the handle through which the electromagnetic field couples to electrons. Notice that what is relevant, what captures the essence of physical reality, is the coupling between systems, not what we identify as the system. Two systems coupled to each other cancel their respective gauge redundancies: by coupling a gauge-dependent quantity of one system to a gauge-dependent quantity of the other system, we give rise to a gauge-independent physical interaction. This observation leads to an important distinction regarding observable quantities. We refer to gauge-dependent quantities as {\em partial observables} \cite{Rovelli:2001bz}, and to gauge-independent quantities as {\em complete observables} in the sense of Dirac. Both kinds of quantities are associated with operators whose eigenvalues correspond to elements of reality. Both are associated with relative facts. In this sense, partial observables and complete observables have the same ontological status. The difference, on the other hand, is clear-cut: partial observables can be measured, but cannot be predicted by the dynamical equations alone, while gauge-independent observables can be measured and also predicted \cite{Rovelli:2013bf}. In this sense, as Dirac noted, only the latter lead to determinism in the theory. The indeterminacy of the evolution of the former simply reflects the fact that their value depends on the dynamical evolution of another system whose equations of motion are not considered.
For instance, the Einstein equations do not uniquely determine the evolution of the metric tensor, because a measurement of this tensor is always relative to a specific (say, material) reference frame, whose equations of motion are in general not included in Einstein's equations alone.
\section{Relationality in quantum field theory}
A striking example of relationality is provided by the notion of particle in quantum field theory. While some presentations of quantum field theory rely heavily on the notion of particle taken as fundamental \cite{Weinberg}, it is also very well known that the number of particles present in a given quantum field theory state depends on the reference system. On a generic curved spacetime, in particular, there is no unique notion of number of particles. Physically, different particle detectors count different numbers of particles. Mathematically, in the absence of global Poincar\'e invariance there is no natural Fock structure in the (nevertheless well-defined) Hilbert space of states. The existence of a particle can be true with respect to one system but not with respect to another. Different detectors probe different bases in the same Hilbert space. When a detector measures a certain number of particles, we say that the existence of these particles is an element of reality. But the point above makes clear that this is a relational reality: it is the number of particles {\em with respect to the interaction with that detector}. \vskip1em
\section{Relationality in general relativity}
The relational nature of space and time has long been debated. General relativity, while defining space and time as manifestations of the gravitational field, has a structure that is deeply relational \cite{Vidotto:2013qf}. Dynamical objects are not localized with respect to a fixed background but only with respect to one another. Notice how the collection of dynamical objects includes the gravitational field itself. The very structure of spacetime is built upon contiguity relations, namely the property of spacetime regions being ``next to one another''. But in the case of the gravitational field, saying that different regions are contiguous to one another through their boundaries means exactly that these regions are interacting. Alternatively, when we couple general relativity to the matter of a material reference system, the components of the gravitational field with respect to the directions defined by this system are gauge-invariant quantities of the coupled system; but they are gauge-dependent quantities of the gravitational field, measured with respect to a given external frame. In this case, a prototypical example of a partial observable is time: a quantity that we routinely determine (looking at a clock) but cannot predict from the dynamics of the system.
\section{Relationality in Quantum Gravity}
The relational aspect of spatio-temporal localization that characterizes general relativity and the relational aspect of quantum mechanics that is emphasized by its relational interpretation combine surprisingly well, precisely thanks to the fact that interactions are local. This combination provides a solid conceptual structure for quantum gravity \cite{Vidotto:2013qf}. In fact, locality is a major discovery of twentieth-century physics: interactions at a distance of the Newtonian kind do not seem to be part of our world. They are only approximate descriptions of reality.
In the standard model of particle physics, as well as in general relativity, things can interact only when they ``touch'': all interactions are local. This means that objects in interaction must be in the same place: interaction requires localization and localization requires interaction. To be in interaction corresponds to being adjacent in spacetime and vice versa: the two reduce to one another. In other words, the fact that interactions are local means that they require spacetime contiguity. But the contrary is also true: the only way to ascertain that two objects are contiguous is by means of having them interact. Therefore we can identify the {\em Heisenberg cut}, which defines the separation with respect to which (relative) facts are realized in quantum theory, with the boundary of the spacetime regions that define the (relative) localization in general relativity. \begin{table*}[h] \begin{center} \begin{tabular}{ccc} {\bf Quantum relationalism} & {}\hskip3cm{} & {\bf Einstein's relationalism} \\ Systems interact with other systems & $\longleftrightarrow$& Systems are located wrt other systems\\ Interaction $\Rightarrow$ Localization &$\longleftrightarrow$ & Localization $\Rightarrow$ Interaction \end{tabular} \end{center} \label{default} \end{table*}
By bringing the two perspectives together, we obtain the boundary formulation of quantum gravity \cite{Oeckl:2003vu,Oeckl:2005bv}: the theory describes processes and their interactions. The manner in which a process affects another is described by the state in the Hilbert space associated with its boundary. The probabilities of one or another outcome are given by the transition amplitudes associated with the bulk, and obtained from the matrix elements of the projector on the solutions of the Wheeler-DeWitt equation. Let us make this more concrete. Consider a process such as the scattering of some particles at CERN. If we want to take into account the gravitational field, we need to include it as part of the system. In doing quantum gravity, the gravitational field (or spacetime) is part of the system. Distance and time measurements are field measurements like the others in general relativity: they are part of the boundary data of the problem. Thinking in terms of functional integrals, we have to sum over all possible histories, but also over all possible geometries associated with a given finite spacetime region. To compute a transition amplitude, we fix the boundary data of the process. In a scattering process, these can be the positions of the particles at the initial and final times. These positions are defined by rods and clocks. These measure geometrical information, and geometrical information is given by the gravitational field. The transition amplitudes depend on the value of all fields on the boundary, including the gravitational field. They do not depend on further variables such as a time and a position. These are coded in the boundary gravitational field, which contains the information about how much time has elapsed and about the distances between the particles. Geometrical and temporal data are encoded in the boundary state, because this includes the state of the gravitational field, which is the state of spacetime. \\[1em] \begin{center} \includegraphics[height=50mm]{processo} \\ \end{center} This structural identification is in fact much deeper. As noticed, the most remarkable aspect of quantum theory is that the boundary between processes can be moved at will. Final total amplitudes are not affected by displacing the boundary between ``observed system'' and ``observing system''.
The same is true for spacetime: boundaries are arbitrarily drawn in spacetime. The physical theory is therefore a description of how arbitrary partitions of nature affect one another. Because of locality and because of gravity, these partitions are at the same time splits into subsystems and partitions of spacetime. A spacetime is a process, a state is what happens at its boundary \cite{Rovelli:2014ssa}. This clarifies that in quantum gravity a process is a spacetime region. Relational quantum mechanics describes systems in interaction. What defines the system and when is it interacting? For spacetime, a process is simply a region of spacetime. Spacetime is a quantum mechanical process once we do quantum gravity. This now helps us to understand how to do quantum gravity. Notice that from this perspective quantum gravitational processes are defined locally, without any need to invoke asymptotic regions. Summarizing: {\small \begin{center} {\bf Spacetime Quantum Dynamics} ~~~~~~~~~~~~~~~~~~~~~~~~ \\[1mm] \begin{tabular}{rcl} \hline \\ Processes &~~~~~~~~~~~~~~~~~~ $\longrightarrow$ ~~~~~~~~~~~~~~~~~~ & Spacetime Regions\\[1em] States &$\longrightarrow$ & Boundaries (Spatial Regions) \\[1em] Probability &$\longrightarrow$& Transition Amplitudes\\[1em] Discreteness &$\longrightarrow$& Spacetime Quanta \\[1em] \hline \\ \\ \end{tabular} \end{center} \label{default}}
\section{Conclusion: The relational nature of contemporary physics}
The debate on the interpretation of quantum mechanics is far from having reached a consensus. Addressing it is unavoidable in order to answer the question of ``what does exist?'', as far as current physics tells us. But considering this a question related to quantum mechanics alone deprives us of some fundamental conceptual inputs that come from the core of the picture of the world revealed by contemporary physics. I have described the lesson of quantum mechanics from the perspective of relational quantum mechanics. General relativity, quantum field theory, and quantum gravity are compatible with this point of view and support it. Gauge theories and quantum field theories have a deep relational core: gauge degrees of freedom are handles for interactions with other systems. Even the particles of quantum field theory, which in an ontology of objects we would be tempted to call fundamental objects, are in fact relative, not absolute, entities. Locality reveals a deep structural analogy between the relations on which quantum mechanics is based and those on which spacetime is based. Quantum gravity makes this connection completely explicit. In quantum gravity a process is not in a spacetime region: a process \emph{is} a spacetime region. Analogously, a state is not somewhere in space: it \emph{is} the description of the way two processes interact, or two spacetime regions pass information to one another. Vice versa, a spacetime region \emph{is} a process: it is like a Feynman sum of everything that can happen between its boundaries. \vspace{2mm} The resulting relational ontology, compatible with quantum mechanics as well as with the rest of our current physical theories, is a minimalistic one. There is no necessity to attribute an ontological role to states, nor to some mysterious hidden variables: only facts, or events, are part of the ontology. It is also a ``lighter'' ontology: facts are \emph{sparse} and \emph{relative}.
This means for instance that particles exist only in interactions, not in between, and exist only with respect to the system they are in relation with, not with respect to the rest of the universe. One may ask: what happens \emph{between} two interactions? In between, there are other interactions of the field: these interactions are what gives sense to the expression ``in between''. We can regard a particle that appears here and then there as some interaction made by the field: what then defines the identity of the particle and its story? Only regularities in the interactions. In fact we may think, if we wish, that there is no particle, only correlated interactions \cite{Hume}. These correlations are such that if I measure the field here now and later on there, I obtain correlated values. This is what we mean by saying that there is the same particle. There are just manifestations of a field. A field exists through its interactions. This stance weakens \emph{usual} realism, but makes it compatible with our current empirical knowledge and spares us pernicious paradoxes. Relational realism, it should be stressed, is not in any way a form of relativism: going relational does not weaken the reality of the world. If there are only interactions that are intrinsically relational, there is no absolute reality with respect to which the relational events are ``less real''. Relationalism should not be confused here with a form of subjectivism, which can lead to solipsism. The relations we consider are among any physical systems in interaction, not among subjects or agents that require conscious agency. Conscious agents are a peculiar case among the different systems. Systems can acquire and store information about one another: here information should be understood as physical correlation, without any necessary epistemic connotation. This leads us to think of relations in a completely physical way, discarding a possible reading of the restriction to relations as only epistemically motivated (as, for instance, in epistemic structural realism). An interpretation of relations that restricts them to be only epistemic would require the assumption of a hypothetical non-relational underlying substance, not accessible to our knowledge: such a move seems circular and redundant, not adding any clarity to our understanding of the world. In particular, for the sake of philosophy of science, it appears as a useless epicycle. On the other hand, by embracing a relational perspective, we may be able to leave a monolithic reality for a richer kaleidoscopic one: one that requires an epistemology in which the notion of objectivity is pluralistic and perspectival \cite{Barad:2007,Massimi:2022}.
\section{Introduction} 3D reconstruction is one of the fundamental problems of computer vision and a cornerstone of augmented and virtual reality. Concurrently with steady progress towards real-time photo-realistic rendering of 3D environments in game engines, the last few decades have seen great strides towards photo-realistic 3D reconstruction. A recent achievement in this direction is the discovery of a fairly general formulation for representing radiance fields \cite{Mildenhall_2020_NeRF,liu2020neural,martin2021nerf,Schwarz2020NEURIPS,zhang2020nerf++,yu2021pixelnerf,trevithick2020grf,bi2020neural,srinivasan2021nerv,niemeyer2021giraffe,sucar2021imap}. Neural radiance fields are remarkably versatile for reconstructing real-world objects with high-fidelity \emph{geometry} and \emph{appearance}. But static appearance is only the first step: it ignores how an object moves and interacts with its environment. 4D reconstruction tackles this problem in part by incorporating the time dimension: with more intricate capture setups and more data, we can reconstruct objects over time---but can only re-play the captured sequences. Today, in the age of mixed reality, a photo-realistically reconstructed object might still destroy immersion if it is not ``physically realistic'' because \emph{the object cannot be interacted with.} (For example, if a soft object appears as rigid as the rocks next to it when stepped on.) By building on advances in computer vision and physics simulation, we begin to tackle the problem of physically-realistic reconstruction and create \emph{Virtual Elastic Objects}: virtual objects that not only look like their real-world counterparts but also behave like them, even when subject to novel interactions. For the first time, this allows for full-loop reconstruction of deforming elastic objects: from capture, to reconstruction, to simulation, to interaction, to re-rendering. Our core observation is that with the latest advances in 4D reconstruction using neural radiance fields, we can both capture radiance and deformation fields of a moving object over time, and re-render the object given novel deformation fields. That leaves as the main challenge the core problem of capturing an object's physics from observations of its interactions with the environment. With the right representation that jointly encodes an object's geometry, deformation, and material behavior, compatible with both differentiable physical simulation and the deformation fields provided by 4D reconstruction algorithms, we can use these deformation fields to provide the necessary supervision to learn the material parameters. But even with this insight, multiple challenges remain to create Virtual Elastic Objects. We list them together with our technical contributions:\\ \noindent\textbf{1) Capture.} To create VEOs, we need to collect data that not only contains visual information but also information about physical forces. We present the new \textbf{PLUSH} dataset containing occlusion-free 4D recordings of elastic objects deforming under known controlled force fields. To create this dataset, we built a multi-camera capture rig that incorporates an air compressor with a movable, tracked nozzle. More details can be found in Sec.~\ref{ssec:capture}. 
\\ \noindent\textbf{2) Reconstruction.} VEOs~do not require any prior knowledge about the geometry of the object to be reconstructed; the reconstruction thus must be template-free and provide full 4D information (\ie, a 3D reconstruction and deformation information over time). We extend Non-rigid Neural Radiance Fields~\cite{tretschk2021nonrigid} with novel losses, and export point clouds and point correspondences to create the data required to supervise learning material behavior using physical simulation. We provide further details in Sec.~\ref{ssec:recon}.\\ \noindent\textbf{3) Simulation.} Crucially for creating realistic interactive objects, a physical simulation is required, both to optimize for an unknown object's physical parameters and to generate deformations of that object in response to novel interactions. We implement a differentiable quasi-static simulator that is particle-based and is compatible with the deformation field data provided by our 4D reconstruction algorithm. We present the differentiable simulator and explain how we use it to obtain physical parameters in Sec.~\ref{ssec:learning_material}, and describe simulations of novel interactions in Sec.~\ref{ssec:interaction}.\\ \noindent\textbf{4) Rendering.} Since we convert from a neural representation of the captured object's geometry to a point cloud reconstructing the object's physical properties, we require a function that allows rendering the object given new simulated deformations of the point cloud. We introduce a mapping function that enables us to use deformed point clouds instead of continuous deformation fields to alter the ray casting for the Neural Radiance Fields we used for the original reconstruction. Further details on re-rendering can be found in Sec.~\ref{ssec:rendering}. \section{Related Work} Our work integrates together multiple areas of computer vision, computer graphics, and simulation. \noindent\textbf{Recovering Elastic Parameters for 3D Templates.} A number of prior works estimate material parameters of a pre-scanned 3D template by tracking the object over time from depth input. Wang~\etal~\cite{wang2015deformation} were among the first to tackle tracking, rest pose estimation, and material parameter estimation from multi-view depth streams. They adopt a gradient-free downhill simplex method for parameter fitting, and can only optimize a limited number of material parameters. Objects built from multiple types of materials cannot be faithfully captured without manual guidance or prior knowledge of a part decomposition. Hahn~\etal~\cite{Hahn:2019} learn an inhomogeneous viscoelastic model from recordings of motion markers covering the object. Recently, Weiss~\etal~\cite{weiss2020correspondence} infer homogeneous linear material properties by tracking deformations of a given template with a single depth camera. In contrast to these methods, ours jointly reconstructs not just object deformations and physics \emph{without a need for depth input or markers} but also geometry and appearance \emph{without a need for a template}. Our formulation can model inhomogeneous, nonlinear materials without prior knowledge or annotations. \noindent\textbf{Dynamic Reconstruction.} Reconstructing non-rigid objects from a video sequence is a long-standing computer vision and graphics problem~\cite{Zhang2003SpacetimeSS, tung_complete_2009}. Shape-from-Template methods deform a provided template using RGB~\cite{yu2015direct} or RGB-D data~\cite{zollhofer2014real}. 
DynamicFusion~\cite{Newcombe_2015_CVPR} is a model-free, real-time method for reconstructing general scenes from a single RGB-D video. When reliable 2D correspondences are available from optical flow, non-rigid structure-from-motion (NRSfM) can be used to reconstruct the 3D geometry~\cite{Agudo2014OnlineDN, grassmanian_2018}, perhaps even using physics-based priors~\cite{agudo2015sequential}. There are also image-based approaches that do not yield a true 3D scene~\cite{yoon2020novel,Bemana2020xfields}. Recently, reconstruction using neural representations has become more common. Whereas OccupancyFlow~\cite{niemeyer2019occupancy} requires 3D supervision, Neural Volumes~\cite{Lombardi:2019} reconstructs a dynamic scene from multi-view input only, but does not compute temporal correspondences. See a recent survey on neural rendering~\cite{Tewari2020NeuralSTAR} for more. Neural Radiance Fields~\cite{Mildenhall_2020_NeRF}, the seminal work of Mildenhall~\etal, lays the groundwork for several follow-up reconstruction methods that extend it to dynamic scenes~\cite{Li2021, park2021hypernerf,attal2021torf,pumarola2020d,park2021nerfies,li2021neural,du2021nerflow,Gaofreeviewvideo,xian2021space,Lombardi_2021_MVP}. In this work, we assume multi-view RGB video input with known camera parameters and foreground segmentation masks and so extend Non-Rigid Neural Radiance Fields (NR-NeRF)~\cite{tretschk2021nonrigid}. \noindent\textbf{Data-Driven Physics Simulation.} Much recent research has explored the potential of machine learning to enhance or even replace traditional physics-based simulation. Learning natural laws from data without any priors has been shown to be possible for a few simple physical systems~\cite{schmidt2009distilling}, but the computational cost scales exponentially with the complexity of the system, and remains intractable for real-world problems. For simulating elastic objects specifically, one line of work replaces traditional mesh kinematics with a learned deformation representation to improve performance: Fulton~\etal~\cite{Fulton:2018} use an autoencoder to learn a nonlinear subspace for elastic deformation, and Holden~\etal~\cite{Holden:2019} train a neural network to predict the deformation of cloth using a neural subspace. Some methods use neural networks to augment coarse traditional simulations with fine details~\cite{deepWrinkles,geng2020coercing}. Another line of work uses data to fit a parameterized material model to observed deformations. This idea has been successfully applied to muscle-actuated biomechanical systems such as human faces~\cite{Kadlecek:2019,Srinivasan:2021}, learning the rest pose of an object in zero gravity~\cite{chen2014anm}, the design of soft robotics~\cite{hu2019chainqueen, hu2020difftaichi}, and motion planning with frictional contacts~\cite{Geilinger:2020,du2021diffpd}. Yang~\etal~\cite{Yang2017} learn physical parameters for cloth by analysing the wrinkle patterns in video. While all of these methods learn physical parameters from data, our method is unique in requiring no template or other prior knowledge about object geometry to reconstruct and re-render novel deformations of an object. \noindent\textbf{Meshless Simulation.} Meshless physics-based simulation emerged as a counterpart to traditional mesh-based methods \cite{muller2004} and is ideal for effects such as melting or fracture \cite{muller2004,pauly2004meshless}.
These methods were later extended to support oriented particles and skinning \cite{muller2011solid,gilles2011frame,macklin2014unified}. Another extension of point-based simulations consists of incorporating a background Eulerian grid, which enables more efficient simulation of fluid-like phenomena \cite{stomakhin2013material,jiang2017anisotropic}.
\section{Method}
\subsection{Capture} \label{ssec:capture}
To create a physically accurate representation of an object, we first need to record visual data of its deformation under known physical forces. For recording, we use a static multi-view camera setup consisting of 19 OpenCV AI-Kit Depth (OAK-D) cameras\footnote{\url{https://store.opencv.ai/products/oak-d}}, each containing an RGB and two grey-scale cameras (note that \methodname~does not use the stereo camera data to infer classical pairwise stereo depth). They represent an affordable, yet surprisingly powerful solution for volumetric capture. In particular, their on-board H265 encoding capability facilitates handling the amount of data produced during recording (5.12GB/s uncompressed). Since the cameras lack a lens system with zoom capabilities, we keep them close to the object to optimize the pixel coverage and re-configure the system depending on object size. The maximum capture volume has a size of roughly $(30\,\text{cm})^3$. We put a black sheet around it to create a dark background, with the exception of five stage lights that create a uniform lighting environment. In addition to the images, we also need to record force fields on the object surface. This raises a problem: if a prop is used to exert force on the capture subject, the prop becomes an occluder that interferes with photometric reconstruction. We solved this problem when capturing our \textbf{PLUSH} dataset by actuating the object using transparent fishing line and a compressed air stream; see Sec.~\ref{sec:dataset} for further details.
\subsection{4D Reconstruction} \label{ssec:recon}
Given the captured video of an object deforming under external forces, we need 4D reconstruction to supply a temporally-coherent point cloud that can be used to learn the object material properties. To that end, we use NR-NeRF~\cite{tretschk2021nonrigid}, which extends the static reconstruction method NeRF~\cite{Mildenhall_2020_NeRF} to the temporal domain. NeRF learns a volumetric scene representation: a coordinate-based Multi-Layer Perceptron (MLP) $\mathbf{v}(\mathbf{x}) = (o, \mathbf{c})$ that regresses geometry (opacity $o(\mathbf{x})\in\mathbb{R}$) and appearance (RGB color $\mathbf{c}(\mathbf{x})\in\mathbb{R}^3$) at each point $\mathbf{x}$ in 3D space. At training time, the weights of $\mathbf{v}$ are optimized through 2D supervision by RGB images with known camera parameters: for a given pixel of an input image, the camera parameters allow us to trace the corresponding ray $\mathbf{r}(s)$ through 3D space. We then sample the NeRF at $|S|$ points $\{\mathbf{r}(s)\in\mathbb{R}^3\}_{s \in S}$ along the ray, and use a volumetric rendering equation to accumulate the samples front-to-back via weighted averaging: $\Tilde{\mathbf{c}} = \sum_{s\in S} \alpha_s \mathbf{c}(\mathbf{r}(s))$ (\ie, alpha blending with alpha values $\{\alpha_s\in\mathbb{R}\}_s$ derived from the opacities $\{o_s\}_s$). A reconstruction loss encourages the resulting RGB value $\Tilde{\mathbf{c}}$ to be similar to the RGB value of the input pixel.
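For concreteness, the following is a minimal NumPy sketch of this front-to-back accumulation along a single ray; the exponential form of the alpha values follows the standard NeRF formulation, which we assume here since the exact weighting is not spelled out above.

\begin{verbatim}
import numpy as np

def composite_ray(opacities, colors, deltas):
    # opacities: (|S|,) densities o_s along the ray
    # colors:    (|S|, 3) RGB samples c(r(s))
    # deltas:    (|S|,) distances between consecutive samples
    alpha = 1.0 - np.exp(-opacities * deltas)       # per-sample opacity
    # transmittance: fraction of the ray reaching sample s unoccluded
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha                         # the alpha_s in the text
    return (weights[:, None] * colors).sum(axis=0)  # accumulated RGB
\end{verbatim}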
On top of the static geometry and appearance representation $\mathbf{v}$ (the \emph{canonical model}), NR-NeRF models deformations explicitly via a jointly learned ray-bending MLP $\mathbf{b}(\mathbf{x},\mathbf{l}_t) = \mathbf{d}$ that regresses a 3D offset $\mathbf{d}$ for each point in space at time $t$. ($\mathbf{l}_t$ is an auto-decoded latent code that conditions $\mathbf{b}$ on the deformation at time $t$.) When rendering a pixel at time $t$ with NR-NeRF, $\mathbf{b}$ is queried for each sample $\mathbf{r}(s)$ on the ray in order to deform it into the canonical model: $(o,\mathbf{c}) = \mathbf{v}\left[\mathbf{r}(s) + \mathbf{b}(\mathbf{r}(s),\mathbf{l}_t)\right]$. Unlike NR-NeRF's monocular setting, we have a multi-view capture setup. We thus disable the regularization losses of NR-NeRF and only use its reconstruction loss.

\noindent\textbf{Extensions.} We improve NR-NeRF in several ways to adapt it to our setting. The input videos contain background, which we do not want to reconstruct. We obtain foreground segmentations for all input images via image matting~\cite{Lin2021} together with a hard brightness threshold. During training, we use a background loss $L_\mathit{background}$ to discourage geometry along rays of background pixels. When later extracting point clouds, we need opaque samples on the inside of the object as well. However, we find that $L_\mathit{background}$ leads the canonical model to prefer empty space even inside the object. We counteract this effect with a density loss $L_\mathit{density}$ that raises the opacity of point samples of a foreground ray that are `behind' the surface, while emptying out the space in front of the surface with $L_\mathit{foreground}$. During training, we first build a canonical representation by pretraining the canonical model on a few frames and subsequently using it to reconstruct all images. Our capture setup provides not only RGB streams but also grey-scale images. We use these for supervision as well. In practice, we use a custom weighted combination of these techniques for each sequence to get the best reconstruction.

\noindent\textbf{Point Cloud Extraction.} To extract a temporally-consistent point cloud from this reconstruction, we require a forward deformation model, which warps from the canonical model to the deformed state at time~$t$. However, NR-NeRF's deformation model~$\mathbf{b}$ is a backward warping model: it deforms each deformed state into the canonical model. We therefore jointly train a coordinate-based MLP $\mathbf{w}$ to approximate the inverse of $\mathbf{b}$. After training, we need to convert the reconstruction from its continuous MLP format into an explicit point cloud. To achieve that, we cast rays from all input cameras and extract points from the canonical model that are at or behind the surface and whose opacity exceeds a threshold. These points can then be deformed from the canonical model into the deformed state at time $t$ via $\mathbf{w}$. We thus obtain a 4D reconstruction in the form of a 3D point cloud's evolving point positions $\{P_t\}_t$, which are in correspondence across time. To keep the computational cost of the subsequent reconstruction steps feasible, we downsample the point cloud to 9--15$k$ points if necessary.
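The objective that ties $\mathbf{w}$ to $\mathbf{b}$ is not spelled out above, so we only sketch one natural choice here: a cycle-consistency loss that asks the composition of backward and forward warps to reproduce the input points. The callables and names below are illustrative assumptions, and gradients would be obtained with an automatic differentiation framework.

\begin{verbatim}
import numpy as np

def inverse_consistency_loss(b, w, latent, deformed_pts):
    """Train w to invert b: deformed -> canonical -> deformed = identity.

    b(points, latent): (N, 3) offsets into the canonical model
    w(points, latent): (N, 3) offsets into the deformed state
    deformed_pts:      (N, 3) sample points in the deformed state at time t
    """
    canonical = deformed_pts + b(deformed_pts, latent)  # backward warp
    redeformed = canonical + w(canonical, latent)       # forward warp back
    return np.mean(np.sum((redeformed - deformed_pts) ** 2, axis=-1))
\end{verbatim}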
\subsection{Learning Material Parameters}
\label{ssec:learning_material}
Before we can simulate novel interactions with a captured object, we need to infer its physical behavior. Given that we have no prior knowledge of the object, we make several simplifying assumptions about its mechanics, with an eye towards minimizing the complexity of the physical model while remaining flexible enough to capture heterogeneous objects built from multiple materials.

First, we assume a \emph{spatially varying, isotropic nonlinear Neo-Hookean material model} for the object. Neo-Hookean elasticity approximates the behavior of many real-world materials well, including rubber and many types of plastic, and is popular in computer graphics applications because its nonlinear stress-strain relationship guarantees that no part of the object can invert to have negative volume, even if the object is subjected to arbitrarily large and nonlinear deformations. Moreover, Neo-Hookean elasticity admits a simple parameterization: a pair of \lame parameters $(\mu_i, \lambda_i)\in\mathbb{R}^2$ at each point $i$ of the point cloud $P$.

Second, we assume that the object deforms \emph{quasistatically} over time: at each point in time, the internal elastic forces exactly balance gravity and applied external forces. The quasistatic assumption greatly simplifies learning material parameters, and is valid so long as inertial forces in the captured video sequences are negligible (or, equivalently, so long as external forces change sufficiently slowly over time that there is no secondary motion, which is true for the air stream and string actuations in our \textbf{PLUSH} dataset).

\paragraph{Overview.} We first formulate a differentiable, mesh-free \emph{forward} physical simulator that is tailored to work directly with the (potentially noisy) reconstructed point cloud. This forward simulator maps from the point cloud $P_0$ of the object in its \emph{reference pose} (where it is subject to no external forces besides gravity), an assignment of \lame parameters to every point, and an assignment of an external force $\mathbf{f}_i\in\mathbb{R}^3$ to each point on the object surface, to the deformed position $\mathbf{y}_i\in\mathbb{R}^3$ of every point in the point cloud after the object equilibrates against the applied forces. Next, we learn the \lame parameters that match the object's observed behavior by minimizing a loss function $\mathbf{L}$ that sums, over all times $t$, the distance between $\mathbf{y}_i$ and the corresponding target position of the point in the 4D point cloud $P_t$.

\paragraph{Quasistatic Simulation.} To compute the equilibrium positions $\mathbf{y}_i$ of the points in $P$ for given external loads and material parameters, we solve the variational problem
\begin{equation}
\argmin_\mathbf{y} \mathbf{E}(\mathbf{y}),
\label{eq:equilibrium}
\end{equation}
where $\mathbf{E}$ is the total energy of the physical system, capturing both the elastic energy of deformation as well as the work done on the system by external forces. In what follows, we derive the expression for $\mathbf{E}$ and discuss how to solve Eq.~\ref{eq:equilibrium}.

Following M\"{u}ller \etal~\cite{muller2004}, we adopt a mesh-free, point-based discretization of elasticity to perform forward simulation. For every point $\mathbf{x}_i$ in the reference point cloud $P_0$, we define a neighborhood $\mathcal{N}_i$ containing the 6 nearest neighbors of $\mathbf{x}_i$ in $P_0$. For any given set of deformed positions $\mathbf{y}_j$ of the points in $\mathcal{N}_i$, we estimate the strain within the neighborhood in the least-squares sense. More specifically, the local material deformation gradient $\mathbf{F}_i\in\mathbb{R}^{3\times 3}$ maps the neighborhood $\mathcal{N}_i$ from the reference to the deformed state:
\begin{equation}
\label{eq:deformation_gradient}
\mathbf{F}_i(\mathbf{x}_i-\mathbf{x}_j) \approx \mathbf{y}_i-\mathbf{y}_j \quad \forall \mathbf{x}_j \in \mathcal{N}_i.
\end{equation}
For neighborhoods with more than three points, Eq.~\ref{eq:deformation_gradient} is over-determined, and we hence solve for $\mathbf{F}_i$ in the least-squares sense, yielding the closed-form solution
\begin{equation}
\mathbf{F}_i = \mathbf{Y}_i \mathbf{W}_i \mathbf{X}_i^T (\mathbf{X}_i \mathbf{W}_i \mathbf{X}_i^T)^{-1},
\end{equation}
where the $j$-th columns of $\mathbf{X}_i$ and $\mathbf{Y}_i$ are $\mathbf{x}_i - \mathbf{x}_j$ and $\mathbf{y}_i - \mathbf{y}_j$, respectively, and $\mathbf{W}_i$ is a diagonal matrix of weights depending on the distance from $\mathbf{x}_j$ to $\mathbf{x}_i$~\cite{muller2004}.
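The closed-form solve translates directly into code; the following sketch computes $\mathbf{F}_i$ for one neighborhood (names and array layout are illustrative):

\begin{verbatim}
import numpy as np

def deformation_gradient(x_i, x_nbrs, y_i, y_nbrs, weights):
    """Weighted least-squares deformation gradient F_i of one neighborhood.

    x_i, y_i:       (3,) reference / deformed position of point i
    x_nbrs, y_nbrs: (K, 3) reference / deformed positions of its neighbors
    weights:        (K,) distance-based weights (diagonal of W_i)
    """
    X = (x_i - x_nbrs).T          # (3, K), j-th column is x_i - x_j
    Y = (y_i - y_nbrs).T          # (3, K), j-th column is y_i - y_j
    W = np.diag(weights)
    A = X @ W @ X.T               # X W X^T, symmetric (3, 3)
    B = Y @ W @ X.T               # Y W X^T, (3, 3)
    # F = B A^{-1}; since A is symmetric, solve A F^T = B^T instead.
    return np.linalg.solve(A, B.T).T
\end{verbatim}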
The elastic energy of the object can be computed from the classic Neo-Hookean energy density~\cite{ogden1984non}:
\begin{equation}
\label{eq:neoHookean}
\Psi_\mathit{NH}^i = \frac{\mu_i}{2}(I_c - 3) - \mu_i \log J + \frac{\lambda_i}{2}(J-1)^2,
\end{equation}
where $I_c$ is the trace of the right Cauchy-Green tensor $\mathbf{F}_i^T\mathbf{F}_i$, and $J$ is the determinant of $\mathbf{F}_i$. $\mu_i$ and $\lambda_i$ are the \lame parameters assigned to point $i$. The total elastic energy is then
\begin{equation}
\mathbf{E}_\mathit{NH} = \sum_i V_i \Psi_\mathit{NH}^i,
\end{equation}
where $V_i\in\mathbb{R}$ approximates the volume of $\mathcal{N}_i$. We must also include in Eq.~\ref{eq:equilibrium} the virtual work done by the external force field:
\begin{equation}
\mathbf{E}_W = \sum_i \mathbf{f}_i \cdot \mathbf{y}_i,
\label{eq:virtualwork}
\end{equation}
where $\mathbf{f}_i$ is the force applied to point $i$ (here, the force of the air stream on the boundary). If we measured the tension in the fishing lines, we could also include the forces they exert on the object in Eq.~\ref{eq:virtualwork}. But since a fishing line is effectively inextensible relative to the object we are reconstructing, we instead incorporate the fishing lines as soft constraints on the positions of the points $Q\subset P$ attached to the lines: we assume that at time $t$, points in $Q$ should match their observed positions in $P_t$, and formulate an attraction energy
\begin{equation}
\mathbf{E}_A = \alpha \sum_{q \in Q} \lVert \mathbf{y}_q - \mathbf{x}_q^* \rVert^2,
\end{equation}
where $\mathbf{x}_q^*$ is the position of the point corresponding to $\mathbf{y}_q$ in $P_t$, and $\alpha$ is a large penalty constant. We found that this soft constraint formulation works better in practice than alternatives such as enforcing $\mathbf{y}_q = \mathbf{x}_q^*$ as a hard constraint. The total energy in Eq.~\ref{eq:equilibrium} is thus $\mathbf{E} = \mathbf{E}_\mathit{NH} + \mathbf{E}_W + \mathbf{E}_A$, which we minimize using Newton's method. Since Newton's method can fail when the Hessian $\mathbf{H}$ of $\mathbf{E}$ is not positive-definite, we perform a per-neighborhood eigen-decomposition of $\mathbf{H}$ and replace all eigenvalues smaller than a threshold $\epsilon>0$ with $\epsilon$; this is a well-known technique to improve the robustness of physical simulations~\cite{teran2005robust}.
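Both the energy density and the eigenvalue clamp are short in code. A minimal sketch, assuming $\det \mathbf{F} > 0$ and symmetric per-neighborhood Hessian blocks:

\begin{verbatim}
import numpy as np

def neo_hookean_density(F, mu, lam):
    """Psi_NH of one neighborhood for deformation gradient F (3, 3)."""
    J = np.linalg.det(F)
    Ic = np.trace(F.T @ F)  # trace of the right Cauchy-Green tensor
    return 0.5 * mu * (Ic - 3.0) - mu * np.log(J) \
        + 0.5 * lam * (J - 1.0) ** 2

def project_to_pd(H_block, eps=1e-6):
    """Clamp eigenvalues of a symmetric Hessian block to at least eps,
    so Newton's method always sees a positive-definite matrix."""
    eigvals, eigvecs = np.linalg.eigh(H_block)
    return eigvecs @ np.diag(np.maximum(eigvals, eps)) @ eigvecs.T
\end{verbatim}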
We also use a line search to ensure stability, and we handle position constraints at points where the capture subject touches the ground; see the supplemental material for further implementation details.

\paragraph{Material Reconstruction.} Given the 4D point cloud $P_t$ and the forces $\{\mathbf{f}_i\}_i$ acting on the object, we use our forward simulator to learn the \lame parameters that best explain the observed deformations. More specifically, at each time $t$ we define the loss
\begin{equation}
\mathbf{L}_t = \sum_{i \in \partial \Omega} \lVert \mathbf{y}_{t,i} - \mathbf{x}_{t,i}^* \rVert^2,
\end{equation}
where $\partial\Omega$ denotes the surface points of the object, $\mathbf{x}_{t,i}^*$ is the position of point $i$ in $P_t$, and $\mathbf{y}_{t,i}$ is the output of the forward simulation. We use an $\ell_2$ loss to strongly penalize outliers, which would otherwise jeopardize the reconstruction quality. We choose a training subsequence $T$ of 20--50 frames from the input during which the air stream sweeps over most of the surface, so that we have some reference deformation for each part of the object, and compute the desired \lame parameters by minimizing the sum of the loss over all $t \in T$ using the gradient-based Adam optimizer~\cite{KingmaB14}:
\begin{equation}
\mu^*, \lambda^* = \argmin_{\mu, \lambda} \sum_{t\in T} \mathbf{L}_t.
\end{equation}
It is not trivial to back-propagate through the Newton solve for $\mathbf{y}_{t,i}$, even if we ignore the line search and assume a fixed number of Newton iterations $K$. The gradient of $\mathbf{y}$ with respect to the \lame parameters ($\mu$, for instance) can be computed using the chain rule:
\begin{equation}
\label{eq:devLoss}
\frac{\partial \mathbf{L}}{\partial \mu} = \frac{\partial \mathbf{L}}{\partial \mathbf{y}^K}\frac{\partial \mathbf{y}^K}{\partial \mu},
\end{equation}
and, for any $1 \leq k \leq K$,
\begin{align}
\frac{\partial \mathbf{y}^k}{\partial \mu} &= \frac{\partial \mathbf{y}^{k-1}}{\partial \mu} -\left(\frac{\partial\mathbf{H}_{k-1}^{-1}}{\partial \mu} + \frac{\partial\mathbf{H}_{k-1}^{-1}}{\partial \mathbf{y}^{k-1}}\frac{\partial \mathbf{y}^{k-1}}{\partial \mu}\right)\nabla\mathbf{E}_{k-1} \notag\\
& -\mathbf{H}_{k-1}^{-1}\left(\frac{\partial\nabla\mathbf{E}_{k-1}}{\partial \mu} + \frac{\partial\nabla\mathbf{E}_{k-1}}{\partial \mathbf{y}^{k-1}}\frac{\partial \mathbf{y}^{k-1}}{\partial \mu}\right).
\end{align}
To avoid an exponentially-large expression tree, we approximate the derivative of the $k$-th Newton iterate $\mathbf{y}^k$ by neglecting the higher-order derivatives of the Hessian and of the energy gradient with respect to the previous position update:
\begin{align*}
\frac{\partial \mathbf{y}^k}{\partial \mu} \approx \frac{\partial \mathbf{y}^{k-1}}{\partial \mu} -\frac{\partial\mathbf{H}_{k-1}^{-1}}{\partial \mu} \nabla\mathbf{E}_{k-1} -\mathbf{H}_{k-1}^{-1} \frac{\partial\nabla\mathbf{E}_{k-1}}{\partial \mu}.
\end{align*}
Although the higher-order terms are not guaranteed to be negligible, this approximation provides a sufficiently high-quality descent direction for all examples we tested. To improve performance and to capture hysteresis in cases where $\mathbf{E}$ has multiple local minima at some times $t$, we warm-start the Newton optimization at time $t$ using the solution from time $t-1$.
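For intuition about what such gradients look like at a fully converged solve, one can instead differentiate the optimality condition $\nabla\mathbf{E}(\mathbf{y}^*,\mu)=\mathbf{0}$ directly, which yields the implicit-function-theorem sensitivity $\partial\mathbf{y}^*/\partial\mu = -\mathbf{H}^{-1}\,\partial\nabla\mathbf{E}/\partial\mu$. The sketch below implements that single linear solve; it is an alternative view we include for context, not the per-iterate accumulation described above.

\begin{verbatim}
import numpy as np

def equilibrium_sensitivity(H, dgradE_dmu):
    """d y*/d mu at a converged equilibrium (implicit function theorem).

    H:           (3N, 3N) Hessian of E at the equilibrium y*
    dgradE_dmu:  (3N,) derivative of grad_y E w.r.t. mu at (y*, mu)
    """
    return -np.linalg.solve(H, dgradE_dmu)
\end{verbatim}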
\subsection{Novel Interactions}
\label{ssec:interaction}
Given a reconstructed \methodname, we can use the same physical simulator employed for material inference to re-simulate the captured object under novel interactions. New force fields can easily be introduced by modifying $\mathbf{f}_i$ in the energy $\mathbf{E}_W$. Other possible interactions include changing the direction of gravity, adding contact forces that let multiple objects interact, or allowing manipulation of the object with mixed-reality tools. We demonstrate the feasibility of re-simulating novel interactions by implementing a simple penalty energy to handle contact between a \methodname~and a secondary object, represented implicitly as a signed distance field $d:\mathbb{R}^3\to\mathbb{R}$. The penalty energy is given by:
\begin{align}
\Psi_c(\mathbf{y}) &= \begin{cases} \alpha_c d(\mathbf{y})^2 & \text{if } d(\mathbf{y}) < 0\\ 0 & \text{otherwise,}\\ \end{cases} \\
\mathbf{E}_c &= \sum_i V_i \Psi_c(\mathbf{y}_i),
\end{align}
where $\alpha_c$ is chosen large enough to prevent visually-noticeable penetration of the VEO by the secondary object.

\subsection{Rendering}
\label{ssec:rendering}
We can now interact freely with the \methodname~in a physically plausible manner. Hence, we can close the full loop and realistically render the results of simulated novel interactions using neural radiance fields. While we used $\mathbf{b}$ for deformations during the reconstruction, we are now given a new deformed state induced by a discrete point cloud: a canonical reference point cloud $P_0 = \{ \mathbf{x}^0_{s} \}_s$ and its deformed version $S_d = \{ \mathbf{y}^d_{s} \}_s$. To replace $\mathbf{b}$, which bends straight rays into the canonical model, we need to obtain a continuous backward-warping field from this point cloud. To that end, we interpolate the deformation offsets $\mathbf{d}^b_s = \mathbf{x}^0_{s} - \mathbf{y}^d_{s}$ at a 3D sample point $\mathbf{p}^d$ in deformed space using inverse distance weighting (IDW):
\begin{equation}
\mathbf{p}^c = \mathbf{p}^d + \sum_{s \in \mathcal{N}} \frac{w_{s}}{\sum_{s'\in\mathcal{N}} w_{s'} } \mathbf{d}^b_{s},
\label{eq:idw}
\end{equation}
where $\mathcal{N}$ are the $K=5$ nearest neighbors of $\mathbf{p}^d$ in $S_d$, and $w_{s} = w'_{s} - \min_{s'\in \mathcal{N}} w'_{s'}$ with $w'_{s}= \lVert \mathbf{p}^d - \mathbf{y}^d_{s} \rVert^{-1}$. We can then sample the canonical model at $\mathbf{p}^c$ as before: $(o,\mathbf{c}) = \mathbf{v}(\mathbf{p}^c)$. To remove spurious geometry, we set $o(\mathbf{x})=0$ for all $\mathbf{x}$ that are farther than some threshold from $S_d$. Thus, we can now bend straight rays into the canonical model and render the interactively deformed state of the object in a realistic fashion. When needed, we can upsample the point cloud from the simulation to make it denser; unlike for rendering, this requires forward warping.
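A direct sketch of Eq.~\ref{eq:idw} for a single sample point follows. A production implementation would use a spatial acceleration structure for the neighbor query, and the small guards against division by zero are our own additions.

\begin{verbatim}
import numpy as np

def idw_backward_warp(p_d, deformed_pts, canonical_pts, k=5):
    """Warp one deformed-space sample p_d into the canonical model.

    deformed_pts:  (N, 3) simulated positions y^d_s (the cloud S_d)
    canonical_pts: (N, 3) reference positions x^0_s (the cloud P_0)
    """
    offsets = canonical_pts - deformed_pts        # d^b_s = x^0_s - y^d_s
    dists = np.linalg.norm(deformed_pts - p_d, axis=1)
    nbrs = np.argsort(dists)[:k]                  # K nearest neighbors in S_d
    w = 1.0 / np.maximum(dists[nbrs], 1e-9)       # w'_s = 1 / ||p^d - y^d_s||
    w = w - w.min()                               # w_s = w'_s - min_s' w'_s'
    total = w.sum()
    if total < 1e-12:                             # degenerate neighborhood
        return p_d.copy()
    return p_d + (w[:, None] * offsets[nbrs]).sum(axis=0) / total
\end{verbatim}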
\section{Results}
\subsection{Dataset}
\label{sec:dataset}
\begin{figure*}
\centering
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{c c c c c c c}
\includegraphics[height=2cm]{figures/dataset/babyalien.png} & \includegraphics[height=2cm]{figures/dataset/dino_rainbow.png} & \includegraphics[height=2cm]{figures/dataset/dino_blue.png} & \includegraphics[height=2cm]{figures/dataset/dino_green.png} & \includegraphics[height=2cm]{figures/dataset/fish.png} & \includegraphics[height=2cm]{figures/dataset/leaf.png} & \includegraphics[height=2cm]{figures/dataset/serpentine.png} \\
\footnotesize Baby Alien (179g, 41s)$^\|$ & \footnotesize Dino Rainbow (672g, 37s)$^\|$ & \footnotesize Dino Blue (148g, 55s)$^\|$ & \footnotesize Dino Green (76g, 42s) & \footnotesize Fish (282g, 65s)$^\|$ & \footnotesize Leaf (58g, 32s) & \footnotesize Serpentine (54g, 40s)$^*$\\
\cline{6-7}
\includegraphics[height=2cm]{figures/dataset/mrseal.png} & \includegraphics[height=1.2cm]{figures/dataset/pillow_sea.png} & \includegraphics[height=2cm]{figures/dataset/pony.png} & \includegraphics[height=2cm]{figures/dataset/dog.png} & \includegraphics[height=2cm]{figures/dataset/sponge.png} & \multicolumn{1}{|l}{\includegraphics[height=2.5cm,trim={10cm 5cm 10cm 0cm},clip]{figures/val_deformations/baby_alien.png}} & \includegraphics[height=2.5cm,trim={10cm 5cm 10cm 0cm},clip]{figures/val_deformations/pony.png} \\
\footnotesize Mr. Seal (444g, 53s)$^\|$ & \footnotesize Pillow (406g, 42s) & \footnotesize Pony (197g, 51s)$^*$ & \footnotesize Dog (213g, 67s)$^\|$ & \footnotesize Sponge (21g, 46s) & \footnotesize Baby Alien \lame $\mu$ & \footnotesize Pony \lame $\mu$
\end{tabular}
}
\caption{\textit{The \textbf{PLUSH} dataset} consists of 12 items from everyday life: a pillow, a sponge, and several plushies. $\|$ indicates that we recorded extremity motion for the object; * indicates that the recording has significant second-order motion. We additionally provide the mass and recording duration for each object. \textbf{Lower right:} \textit{Lam\'e parameter visualizations for Baby Alien and Pony.} Colors tending towards purple show a softer region; colors tending towards green and yellow show a harder region. Our method clearly identifies different material properties on the objects, for example the arms and ears of the Baby Alien, and the mane and tail of the Pony.}
\label{fig:dataset}
\end{figure*}
The \textbf{PLUSH} dataset consists of 12 soft items encountered in everyday life (see Fig.~\ref{fig:dataset}): a pillow, a sponge, and various plush toys. We chose items that are made of soft (and in some cases heterogeneous) material and that have complex geometry and rich texture and color, enabling successful background subtraction, 4D reconstruction, and tracking. Our strategy for applying external forces is based on the observation that our chosen objects consist of \emph{bulk volumes} (such as the body of a plush toy) along with \emph{flexible extremities} (such as its ears and fingers). We move object extremities using transparent fishing line, and we use a stream of compressed air to exert force on bulk volumes. The nozzle position and stream direction must be tracked during video capture to provide the direction and magnitude of the forces acting on the object at every point in time. Of the 19 cameras in our capture rig, we use three to track the nozzle using an attached ArUco marker~\cite{Garrido-Jurado2016,Romero-Ramirez2018}.
Using this system, we generate multi-part video sequences for each capture subject, where we sequentially actuate the fishing lines (when applicable), followed by sweeping the air stream over the object. We record between 32s and 67s of video for each object, at a frame rate of 40\,FPS.

\subsection{Virtual Elastic Objects}
\begin{table}
\centering
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{c|c|c|c}
Object & average (mm) & 95\% (mm) & max (mm) \\
\hline
Baby Alien & 3.8 & 14.4 & 29.3 \\
\rowcolor{Gray} Fish & 1.1 & 6.6 & 18.5 \\
Leaf & 0.4 & 1.1 & 9.8 \\
\rowcolor{Gray} Mr. Seal & 0.4 & 1.9 & 171.9 \\
Pillow & 1.5 & 7.8 & 18.35 \\
\rowcolor{Gray} Dog & 1.7 & 7.5 & 28.8 \\
Sponge & 0.2 & 1.8 & 15.8 \\
\rowcolor{Gray} Dino Rainbow & 4.0 & 14.6 & 171.4 \\
Dino Blue & 5.5 & 56.0 & 105.8\\
\rowcolor{Gray} Dino Green & 6.2 & 68.4 & 132.0 \\
Pony & 21.1 & 164.3 & 204.9 \\
\rowcolor{Gray} Serpentine & 7.5 & 43.1 & 94.7 \\
\hline \hline
Average* & 2.5 & 18.0 & 70.2 \\
Average & 4.4 & 32.3 & 83.5 \\
\hline
\end{tabular}
}
\caption{\textit{$\ell_2$ distance between simulated and reconstructed point clouds on the test set.} We report the average distance per point per frame, the 95th percentile of the average point distances of all frames, and the maximum distance over all points. Average* excludes the data from Pony and Serpentine.}
\label{tab:distance}
\end{table}
For each of the 12 examples, we create a VEO using 20--50 frames from the reconstruction and evaluate on the remaining 500--1500 frames. To evaluate the quality of the reconstructed parameters, we use the $\ell_2$ distance between the surface points of the VEO and the reconstructed point cloud from the captured data. For all examples except the Baby Alien, we use the external force field data obtained using the air stream. For the Baby Alien, we specifically use the arm and ear motion to demonstrate the versatility of our method in this scenario. We present the results in Tab.~\ref{tab:distance}. The error is relatively small for all objects, which shows that our method is applicable to objects with different geometries and can learn the corresponding material parameters even for heterogeneous objects. Larger errors are observed for objects with a thin and tall component (see the last four rows of the table). This error is largely caused by tracking inaccuracies of the nozzle: even slight inaccuracies can cause large errors when, for example, the neck of the dinosaur moves while the recorded air stream barely or never touches the object.
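For reference, the following is a plausible recipe for the three statistics in Tab.~\ref{tab:distance}; the exact aggregation is an assumption on our part.

\begin{verbatim}
import numpy as np

def point_cloud_error_stats(sim_frames, rec_frames):
    """Distance statistics between simulated and reconstructed clouds.

    sim_frames, rec_frames: equal-length lists of (N, 3) arrays of
    corresponding point positions, one pair per test frame.
    """
    per_frame = [np.linalg.norm(s - r, axis=1)
                 for s, r in zip(sim_frames, rec_frames)]
    frame_means = np.array([d.mean() for d in per_frame])
    return (frame_means.mean(),               # average distance
            np.percentile(frame_means, 95),   # 95th pct of frame averages
            max(d.max() for d in per_frame))  # max over all points
\end{verbatim}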
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/baseline_comparison/figure_v3.png}
\caption{\textit{Baseline comparison with a homogeneous material.} We show two postures for both material settings, overlaid over the ground-truth point cloud in purple. The homogeneous parameters have been optimized. The inhomogeneous model has clear advantages over the homogeneous model: the core body posture is better simulated, and the arms and floppy ears are better posed.}
\label{fig:errorMap}
\vspace*{0.2cm}
\end{figure}
\noindent\textbf{Inhomogeneous Material.} An important feature of our method is that it can identify different material parameters for different parts of the object (cf.\ Fig.~\ref{fig:dataset}, lower right). This is crucial for building a detailed physics model with no prior knowledge of the object. Moreover, our method can reliably learn the `effective' softness of the material even in places with unreliable tracking, for example thin geometric structures close to joints. In the case of the Baby Alien, our method learns that the ears and arms are softer than the other body parts; the mane and tail of the Pony are softer, even though these regions are very hard to track. Both reconstructions match the properties of their real counterparts.

We compare our method, which assumes independent material parameters at every point, against a baseline with a single global material parameter. We train the baseline model with the exact same procedure as before, but learn just one $\mu$ and one $\lambda$ for the energy in Eq.~\ref{eq:neoHookean}. As shown in Fig.~\ref{fig:errorMap}, our inhomogeneous model is visually indistinguishable from the ground-truth point cloud, while the homogeneous baseline model has a larger error. The homogeneous model fails to capture the exact movements at the arms and ears of the Baby Alien, and instead distributes the deformation evenly across the ears and arms.
\begin{figure}
\centering
\includegraphics[width=4cm,trim={10cm 8cm 10cm 7cm},clip]{figures/alien_unseen/ears.png}
\includegraphics[width=4cm,trim={10cm 5cm 10cm 7cm},clip]{figures/alien_unseen/hand.png}
\vspace*{-0.2cm}
\caption{\textit{Simulation of the Baby Alien in poses unseen in the dataset.} Using the material model and simulator, our method generalizes well to these asymmetric postures for ears and arms; we only observe symmetric forward and backward motions during training.}
\label{fig:novel}
\end{figure}
\noindent\textbf{Generalization to Novel Poses.} A strength of the underlying physics simulator is its ability to generalize to scenarios that are not encountered in the training set. We show different simulated poses of the Baby Alien in Fig.~\ref{fig:novel}, such as pulling the ears in opposite directions and moving just a single arm. These deformations are particularly challenging for purely data-driven methods, since both ears and arms only move synchronously in the training data.
\begin{figure}
\centering
\includegraphics[width=0.23\textwidth,trim={0cm 4cm 0cm 10cm},clip]{figures/renders_deformation/dino_blue_cloud.png}
\includegraphics[width=0.23\textwidth,trim={0cm 4cm 0cm 10cm},clip]{figures/renders_deformation/dog_cloud.png}\\
\includegraphics[width=0.23\textwidth]{figures/renders_deformation/dino_blue_render.png}
\includegraphics[width=0.23\textwidth]{figures/renders_deformation/dog_render.png}
\caption{\textit{Rendering of the Dino Blue and Dog VEOs during interactions with secondary objects.} The dinosaur neck bends correctly, and dents form on the Dog's back.}
\label{fig:collision-and-render}
\end{figure}
\noindent\textbf{Interaction with Virtual Objects.} The physical model of the object enables interactions with all kinds of virtual items. Fig.~\ref{fig:collision-and-render} shows the one-way-coupled interaction of the learned elastic objects with other virtual items.

\noindent\textbf{Rendering.} Our pipeline ends with re-rendering an object under novel interactions not seen during training. Fig.~\ref{fig:collision-and-render} contains renderings of the Dino Blue and Dog objects, including interactions with two virtual objects. Tab.~\ref{tab:rerender} contains quantitative results, where we compare the renderings obtained from the reconstructed point clouds (which are used for supervision when learning the material parameters) and the simulated point clouds.
The former thus provide a soft upper bound on the quality that the simulator can achieve. We find that the simulator results are very close to those from the reconstructed point clouds. Thus, both the quantitative and qualitative results show that our approach is able to synthesize renderings of novel deformed states in a realistic manner.
\begin{table}
\centering
\renewcommand{\arraystretch}{0.9}
\setlength{\tabcolsep}{2pt}
\resizebox{0.47\textwidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|}
 & \multicolumn{6}{|c|}{Simulated} & \multicolumn{6}{|c|}{Reconstructed} \\
 & \multicolumn{3}{|c|}{Not Masked} & \multicolumn{3}{|c|}{Masked} & \multicolumn{3}{|c|}{Not Masked} & \multicolumn{3}{|c|}{Masked} \\
Object & PSNR & SSIM & LPIPS & PSNR & SSIM & LPIPS & PSNR & SSIM & LPIPS & PSNR & SSIM & LPIPS \\
\hline
Baby Alien & 18.40 & 0.734 & 0.255 & 21.17 & 0.840 & 0.174 & 18.75 & 0.747 & 0.249 & 21.92 & 0.853 & 0.167 \\
\rowcolor{Gray} Fish & 19.75 & 0.692 & 0.239 & 22.55 & 0.808 & 0.173 & 20.03 & 0.701 & 0.235 & 22.96 & 0.818 & 0.169 \\
Leaf & 25.14 & 0.901 & 0.091 & 27.32 & 0.935 & 0.065 & 25.19 & 0.901 & 0.091 & 27.37 & 0.935 & 0.065 \\
\rowcolor{Gray} Mr. Seal & 20.61 & 0.697 & 0.240 & 24.03 & 0.801 & 0.180 & 20.65 & 0.698 & 0.239 & 24.11 & 0.802 & 0.180 \\
Pillow & 21.45 & 0.743 & 0.223 & 23.18 & 0.806 & 0.174 & 21.92 & 0.760 & 0.218 & 23.84 & 0.823 & 0.169 \\
\rowcolor{Gray} Dog & 18.98 & 0.751 & 0.206 & 24.68 & 0.904 & 0.104 & 19.05 & 0.757 & 0.203 & 25.24 & 0.912 & 0.100 \\
Sponge & 21.94 & 0.846 & 0.130 & 26.99 & 0.925 & 0.070 & 21.92 & 0.846 & 0.130 & 27.01 & 0.925 & 0.070 \\
\rowcolor{Gray} Dino Rainbow & 18.64 & 0.754 & 0.302 & 23.87 & 0.839 & 0.232 & 20.22 & 0.778 & 0.281 & 26.21 & 0.859 & 0.213 \\
Dino Blue & 18.48 & 0.702 & 0.244 & 20.70 & 0.848 & 0.160 & 19.56 & 0.726 & 0.227 & 22.06 & 0.871 & 0.143 \\
\rowcolor{Gray} Dino Green & 18.94 & 0.779 & 0.190 & 21.49 & 0.863 & 0.135 & 20.46 & 0.794 & 0.180 & 23.59 & 0.879 & 0.121 \\
Pony & 16.54 & 0.758 & 0.245 & 19.20 & 0.859 & 0.163 & 19.31 & 0.798 & 0.200 & 24.65 & 0.906 & 0.108 \\
\rowcolor{Gray} Serpentine & 18.22 & 0.798 & 0.181 & 21.39 & 0.903 & 0.111 & 19.95 & 0.813 & 0.162 & 23.14 & 0.916 & 0.091 \\
\hline \hline
Average* & 20.23 & 0.760 & 0.212 & 23.60 & 0.857 & 0.145 & 20.78 & 0.771 & 0.205 & 24.43 & 0.868 & 0.140 \\
Average & 19.76 & 0.763 & 0.212 & 23.05 & 0.861 & 0.147 & 20.58 & 0.777 & 0.201 & 24.34 & 0.875 & 0.133 \\
\end{tabular}
}
\caption{\emph{Rendering evaluation.} We report the classic error metrics PSNR and SSIM~\cite{wang2004image} (SSIM ranges from $-1$ to $+1$; higher is better for both), and the learned perceptual metric LPIPS~\cite{zhang2018unreasonable} (0 is best). We use deformed point clouds to render deformed states of the canonical model, see Sec.~\ref{ssec:rendering}. We use either the point cloud $P_t$ that the reconstruction (Sec.~\ref{ssec:recon}) provides directly (`Reconstructed') or the point cloud that the simulator provides after learning the material parameters (Sec.~\ref{ssec:learning_material}, `Simulated'). We report two versions: we either apply the segmentation masks of the input images to the rendered image to remove all artifacts that spill over onto the background (`Masked') or we do not (`Not Masked'). As in Tab.~\ref{tab:distance}, Average* excludes Pony and Serpentine. Note that the values on the reconstructed point cloud are a (soft) upper bound for what the simulator can achieve.
The simulated results are close to the reconstructed results, demonstrating that the learned material parameters yield deformation fields that allow us to re-render the object as well as the reconstruction can.}
\label{tab:rerender}
\end{table}

\section{Limitations}
\noindent\textbf{Artifacts.} Due to the sparse camera setup (16 cameras for $360^\circ$ coverage), we found NeRF unable to reconstruct viewpoint-dependent effects, leading to artifacts around specular regions like eyes. Furthermore, the air compressor leads to quickly oscillating surfaces (\eg, the fins of the fish), which pose a challenge for reconstruction and material parameter estimation, and impact calibration. These issues affect the extracted point clouds as well as the final renderings (artifacts are visible in Fig.~\ref{fig:collision-and-render}); we manually removed the resulting background clutter in the point clouds. The physical simulator, by contrast, turned out to be remarkably robust to noise and can run with any reconstructed point cloud with temporal correspondences.

\noindent\textbf{Known Forces.} The simulator requires the forces impacting the object during capture to be known. This limits the variety of forces that can be applied, and hence the kinds of objects that are compatible with the presented method. We expect an extension to handle unknown forces to be a challenging but exciting direction for future work. Finding good force priors could be a viable approach in this direction.

\section{Conclusion}
We introduced a novel, holistic problem setting: estimating the physical parameters of a general deformable object from RGB input and known physical forces, and realistically rendering its physically plausible response to novel interactions. We further proposed Virtual Elastic Objects as a solution and demonstrated their ability to synthesize deformed states that differ greatly from the observed deformations. Our method leverages a physical simulator that is able to estimate plausible physical parameters from a 4D reconstruction of a captured object. Finally, we showed that these deformed states can be re-rendered with high quality. We hope that the presented results and the accompanying dataset will inspire and enable future work on reconstructing and re-rendering \emph{interactive} objects.
\newpage \section{Results} \subsection{Dataset} \label{sec:dataset} \begin{figure*} \centering \resizebox{0.95\textwidth}{!}{ \begin{tabular}{c c c c c c c} \includegraphics[height=2cm]{figures/dataset/babyalien.png} & \includegraphics[height=2cm]{figures/dataset/dino_rainbow.png} & \includegraphics[height=2cm]{figures/dataset/dino_blue.png} & \includegraphics[height=2cm]{figures/dataset/dino_green.png} & \includegraphics[height=2cm]{figures/dataset/fish.png} & \includegraphics[height=2cm]{figures/dataset/leaf.png} & \includegraphics[height=2cm]{figures/dataset/serpentine.png} \\ \footnotesize Baby Alien (179g, 41s)$^\|$ & \footnotesize Dino Rainbow (672g, 37s)$^\|$ & \footnotesize Dino Blue (148g, 55s)$^\|$ & \footnotesize Dino Green (76g, 42s) & \footnotesize Fish (282g, 65s)$^\|$ & \footnotesize Leaf (58g, 32s) & \footnotesize Serpentine (54g, 40s)$^*$\\ \cline{6-7} \includegraphics[height=2cm]{figures/dataset/mrseal.png} & \includegraphics[height=1.2cm]{figures/dataset/pillow_sea.png} & \includegraphics[height=2cm]{figures/dataset/pony.png} & \includegraphics[height=2cm]{figures/dataset/dog.png} & \includegraphics[height=2cm]{figures/dataset/sponge.png} & \multicolumn{1}{|l}{\includegraphics[height=2.5cm,trim={10cm 5cm 10cm 0cm},clip]{figures/val_deformations/baby_alien.png}} & \includegraphics[height=2.5cm,trim={10cm 5cm 10cm 0cm},clip]{figures/val_deformations/pony.png} \\ \footnotesize Mr. Seal (444g, 53s)$^\|$ & \footnotesize Pillow (406g, 42s) & \footnotesize Pony (197g, 51s)$^*$ & \footnotesize Dog (213g, 67s)$^\|$ & \footnotesize Sponge (21g, 46s) & \footnotesize Baby Alien \lame $\mu$ & \footnotesize Pony \lame $\mu$ \end{tabular} } \caption{\textit{The \textbf{PLUSH} dataset} consists of 12 items from everyday life: a pillow, a sponge and several plushies. $\|$ indicates that we recorded extremity motion for the object, * indicates that the recording has significant second order motion. We additionally provide the mass and recording duration for each object. \textbf{Lower right:} \textit{Lam\'e parameter visualizations for Baby Alien and Pony.} Colors tending towards purple show a softer region, colors tending towards green and yellow a harder region. Our method clearly identifies different material properties on the objects, for example the arms and ears for the Baby Alien, and the mane and tail of the Pony.} \label{fig:dataset} \end{figure*} The \textbf{PLUSH} dataset consists of 12 soft items encountered in everyday life (see Fig.~\ref{fig:dataset}): a pillow, a sponge, and various plush toys. We chose items that are composed of soft (and in some cases, heterogeneous) material, complex geometry, and rich texture and color to enable successful background subtraction, 4D reconstruction and tracking. Our strategy for applying external forces is based on the observation that our chosen objects consist of \emph{bulk volumes} (such as the body of a plush toy) along with \emph{flexible extremities} (ears and fingers of the toy). We move object extremities by using transparent fishing line, and we use a stream of compressed air to exert force on bulk volumes. The nozzle position and stream direction must be tracked during video capture to provide the direction and magnitude of forces acting on the object at every point in time. Of the 19 cameras in our capture rig, we use three to track the nozzle using an attached ArUco marker~\cite{Garrido-Jurado2016,Romero-Ramirez2018}. 
Using this system, we generate multi-part video sequences for each capture subject, where we sequentially actuate the fishing lines (when applicable) followed by sweeping the air stream over the object. We record between 32s and 67s of video for each object, at a frame rate of 40FPS. \subsection{Virtual Elastic Objects} \begin{table} \centering \resizebox{0.4\textwidth}{!}{ \begin{tabular}{c|c|c|c} Object & average (mm) & 95\% (mm) & max (mm) \\ \hline Baby Alien & 3.8 & 14.4 & 29.3 \\ \rowcolor{Gray} Fish & 1.1 & 6.6 & 18.5 \\ Leaf & 0.4 & 1.1 & 9.8 \\ \rowcolor{Gray} Mr. Seal & 0.4 & 1.9 & 171.9 \\ Pillow & 1.5 & 7.8 & 18.35 \\ \rowcolor{Gray} Dog & 1.7 & 7.5 & 28.8 \\ Sponge & 0.2 & 1.8 & 15.8 \\ \rowcolor{Gray} Dino Rainbow & 4.0 & 14.6 & 171.4 \\ Dino Blue & 5.5 & 56.0 & 105.8\\ \rowcolor{Gray} Dino Green & 6.2 & 68.4 & 132.0 \\ Pony & 21.1 & 164.3 & 204.9 \\ \rowcolor{Gray} Serpentine & 7.5 & 43.1 & 94.7 \\ \hline \hline Average* & 2.5 & 18.0 & 70.2 \\ Average & 4.4 & 32.3 & 83.5 \\ \hline \end{tabular} } \caption{\textit{$\ell_2$ distance of simulated point clouds compared with reconstructed point clouds on the test set.} We record the average distance per point per frame, the 95th percentile of average point distances of all frames, and the maximum distance of all points. Average* excludes the data from Pony and Serpentine.} \label{tab:distance} \end{table} For each of the 12 examples, we create a VEO using 20-50 frames from the reconstruction and evaluate on the remaining 500-1500 frames. We use the $\ell_2$ distance between the surface points of the VEO to the reconstructed point cloud from the captured data to evaluate the quality of the reconstructed parameters. For all examples except for the Baby Alien we use the external force field data obtained using the air stream. For the Baby Alien, we specifically use the arm and ear motion to demonstrate the versatility of our method in this scenario. We present the results in Tab.~\ref{tab:distance}. The error is relatively small for all objects, which shows that our method is applicable to objects with different geometries, and can learn the corresponding material parameters even for heterogeneous objects. Larger errors are observed for objects with a thin and tall component (see the last 4 rows of the table). This error is largely caused by tracking inaccuracies of the nozzle: even slight inaccuracies can cause large errors when, for example, the neck of the dinosaur moves while the recorded air stream direction does not, or barely, touch the object. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figures/baseline_comparison/figure_v3.png} \caption{\textit{Baseline comparison with a homogeneous material.} We show two postures for both material settings, overlayed over the ground truth point cloud in purple. The homogeneous parameters have been optimized. The inhomogeneous model has clear advantages over the homogeneous model: the core body posture is better simulated, and the arms and floppy ears are better posed.} \label{fig:errorMap} \vspace*{0.2cm} \end{figure} \noindent\textbf{Inhomogeneous Material.} An important feature of our method is that it can identify different material parameters for different parts of the object (c.t. Fig.~\ref{fig:dataset}, lower right). This is crucial for building a detailed physics model with no prior knowledge of the object. 
Even more, our method can reliably learn `effective' softness of the material even in places with unreliable tracking, for example thin geometrical structures close to joints. In case of Baby Alien, our method learns that the ears and arms are softer compared to the other body parts; the mane and tail of the Pony are softer, even though these regions are very hard to track. Both reconstructions match the properties of their real counterparts. We show a comparison between our method that assumes an independent material parameter on all points with a baseline with only one global material parameter. We trained the baseline model with the exact same procedure as before but learn just one $\mu$ and one $\lambda$ for the energy in Eq.~\ref{eq:neoHookean}. As shown in Fig.~\ref{fig:errorMap}, our inhomogeneous model is visually indistinguishable from the ground truth point cloud, while the homogeneous baseline model has a larger error. The homogeneous model fails to capture the exact movements at the arms and ears of the Baby Alien, but instead distributes the deformation evenly at the ears and arms. \begin{figure} \centering \includegraphics[width=4cm,trim={10cm 8cm 10cm 7cm},clip]{figures/alien_unseen/ears.png} \includegraphics[width=4cm,trim={10cm 5cm 10cm 7cm},clip]{figures/alien_unseen/hand.png} \vspace*{-0.2cm} \caption{\textit{Simulation of Baby Alien in poses unseen in the dataset.} Using the material model and simulator our method generalizes well to these asymmetric postures for ears and arms; we only observe symmetric forward and backward motions during training.} \label{fig:novel} \end{figure} \noindent\textbf{Generalization to Novel Poses.} The strength of the underlying physics simulator is the ability to generalize to scenarios that are not encountered in the training set. We show different simulated poses of the Baby Alien in Fig.~\ref{fig:novel}, such as pulling the ears in opposite directions, and moving just one single arm. This deformation is particularly challenging for purely data-driven methods since both ears and arms only move synchronously in the training data. \begin{figure} \centering \includegraphics[width=0.23\textwidth,trim={0cm 4cm 0cm 10cm},clip]{figures/renders_deformation/dino_blue_cloud.png} \includegraphics[width=0.23\textwidth,trim={0cm 4cm 0cm 10cm},clip]{figures/renders_deformation/dog_cloud.png}\\ \includegraphics[width=0.23\textwidth]{figures/renders_deformation/dino_blue_render.png} \includegraphics[width=0.23\textwidth]{figures/renders_deformation/dog_render.png} \caption{\textit{Rendering of the Dino Blue and Dog VEOs during interactions with secondary objects.} The dinosaur neck bends correctly, and dents are forming on the Dog's back.} \label{fig:collision-and-render} \end{figure} \noindent\textbf{Interaction with Virtual Objects.} The physical model of the object enables interactions with all kinds of different virtual items. Fig.~\ref{fig:collision-and-render} shows the one-way coupled interaction of the learned elastic objects with other virtual items. \noindent\textbf{Rendering} Our pipeline ends with re-rendering an object under novel interactions not seen during training. Fig.~\ref{fig:collision-and-render} contains renderings of the Dino Blue and Dog objects, including interactions with two virtual objects. Tab.~\ref{tab:rerender} contains quantitative results, where we compare the renderings obtained from the reconstructed point clouds (which are used for supervision when learning the material parameters) and the simulated point clouds. 
The former thus provides a soft upper bound of the quality that the simulator can achieve. We find that the simulator results are very close to those from the reconstructed point clouds. Thus, both the quantitative and qualitative results show that our approach is able to synthesize renderings of novel deformed states in a realistic manner. \iftrue \begin{table} \centering \renewcommand{\arraystretch}{0.9} \setlength{\tabcolsep}{2pt} \resizebox{0.47\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|} & \multicolumn{6}{|c|}{Simulated} & \multicolumn{6}{|c|}{Reconstructed} \\ & \multicolumn{3}{|c|}{Not Masked} & \multicolumn{3}{|c|}{Masked} & \multicolumn{3}{|c|}{Not Masked} & \multicolumn{3}{|c|}{Masked} \\ Object & PSNR & SSIM & LPIPS & PSNR & SSIM & LPIPS & PSNR & SSIM & LPIPS & PSNR & SSIM & LPIPS \\ \hline Baby Alien & 18.40 & 0.734 & 0.255 & 21.17 & 0.840 & 0.174 & 18.75 & 0.747 & 0.249 & 21.92 & 0.853 & 0.167 \\ \rowcolor{Gray} Fish & 19.75 & 0.692 & 0.239 & 22.55 & 0.808 & 0.173 & 20.03 & 0.701 & 0.235 & 22.96 & 0.818 & 0.169 \\ Leaf & 25.14 & 0.901 & 0.091 & 27.32 & 0.935 & 0.065 & 25.19 & 0.901 & 0.091 & 27.37 & 0.935 & 0.065 \\ \rowcolor{Gray} Mr. Seal & 20.61 & 0.697 & 0.240 & 24.03 & 0.801 & 0.180 & 20.65 & 0.698 & 0.239 & 24.11 & 0.802 & 0.180 \\ Pillow & 21.45 & 0.743 & 0.223 & 23.18 & 0.806 & 0.174 & 21.92 & 0.760 & 0.218 & 23.84 & 0.823 & 0.169 \\ \rowcolor{Gray} Dog & 18.98 & 0.751 & 0.206 & 24.68 & 0.904 & 0.104 & 19.05 & 0.757 & 0.203 & 25.24 & 0.912 & 0.100 \\ Sponge & 21.94 & 0.846 & 0.130 & 26.99 & 0.925 & 0.070 & 21.92 & 0.846 & 0.130 & 27.01 & 0.925 & 0.070 \\ \rowcolor{Gray} Dino Rainbow & 18.64 & 0.754 & 0.302 & 23.87 & 0.839 & 0.232 & 20.22 & 0.778 & 0.281 & 26.21 & 0.859 & 0.213 \\ Dino Blue & 18.48 & 0.702 & 0.244 & 20.70 & 0.848 & 0.160 & 19.56 & 0.726 & 0.227 & 22.06 & 0.871 & 0.143 \\ \rowcolor{Gray} Dino Green & 18.94 & 0.779 & 0.190 & 21.49 & 0.863 & 0.135 & 20.46 & 0.794 & 0.180 & 23.59 & 0.879 & 0.121 \\ Pony & 16.54 & 0.758 & 0.245 & 19.20 & 0.859 & 0.163 & 19.31 & 0.798 & 0.200 & 24.65 & 0.906 & 0.108 \\ \rowcolor{Gray} Serpentine & 18.22 & 0.798 & 0.181 & 21.39 & 0.903 & 0.111 & 19.95 & 0.813 & 0.162 & 23.14 & 0.916 & 0.091 \\ \hline \hline Average* & 20.23 & 0.760 & 0.212 & 23.60 & 0.857 & 0.145 & 20.78 & 0.771 & 0.205 & 24.43 & 0.868 & 0.140 \\ Average & 19.76 & 0.763 & 0.212 & 23.05 & 0.861 & 0.147 & 20.58 & 0.777 & 0.201 & 24.34 & 0.875 & 0.133 \\ \end{tabular} } \caption{\emph{Rendering evaluation.} We report the classic error metrics PSNR and SSIM~\cite{wang2004image} ($-1$ to $+1$), where higher is better for both, and the learned perceptual metric LPIPS~\cite{zhang2018unreasonable} (0 is best). We use deformed point clouds to render deformed states of the canonical model, see Sec.~\ref{ssec:rendering}. We use both the point cloud $P_t$ that the reconstruction (Sec.~\ref{ssec:recon}) provides directly (`Reconstructed') or the point cloud that the simulator provides after learning the material parameters (Sec.~\ref{ssec:learning_material}, `Simulated'). We report two versions: we either apply the segmentation masks of the input images to the rendered image to remove all artifacts that spill-over onto the background (`Masked') or we do not (`Not Masked'). Note that the values on the reconstructed point cloud are a (soft) upper bound for what the simulator can achieve. 
The simulated results are close the reconstructed results, demonstrating that the learned material parameters yield deformation fields that allow to re-render the object as well as the reconstruction can. } \label{tab:rerender} \end{table} \fi \section{Limitations} \noindent\textbf{Artifacts.} Due to the sparse camera setup (16 cameras for 360 degree coverage), we found NeRF unable to reconstruct viewpoint dependent effects, leading to artifacts around specular regions like eyes. Furthermore, the air compressor leads to quickly oscillating surfaces (\eg, the fins of the fish), which pose a challenge for reconstruction and material parameter estimation, and impacts calibration. These issues impact the extracted point clouds as well as the final renderings (artifacts visible in Fig.~\ref{fig:collision-and-render}); we manually removed resulting background clutter in the point clouds. The physical simulator turned out to be remarkably robust towards noise and can run with any reconstructed point cloud with temporal correspondences. \noindent\textbf{Known Forces.} The simulator requires the forces impacting the object during capture to be known. This limits the variety of forces that can be applied and hence the kind of objects that are compatible with the presented method. We expect an extension to handle unknown forces to be a challenging but exciting direction for future work. Finding good force priors could be a viable approach in this direction. \section{Related Work} Our work integrates together multiple areas of computer vision, computer graphics, and simulation. \noindent\textbf{Recovering Elastic Parameters for 3D Templates.} A number of prior works estimate material parameters of a pre-scanned 3D template by tracking the object over time from depth input. Wang~\etal~\cite{wang2015deformation} were among the first to tackle tracking, rest pose estimation, and material parameter estimation from multi-view depth streams. They adopt a gradient-free downhill simplex method for parameter fitting, and can only optimize a limited number of material parameters. Objects built from multiple types of materials cannot be faithfully captured without manual guidance or prior knowledge of a part decomposition. Hahn~\etal~\cite{Hahn:2019} learn an inhomogeneous viscoelastic model from recordings of motion markers covering the object. Recently, Weiss~\etal~\cite{weiss2020correspondence} infer homogeneous linear material properties by tracking deformations of a given template with a single depth camera. In contrast to these methods, ours jointly reconstructs not just object deformations and physics \emph{without a need for depth input or markers} but also geometry and appearance \emph{without a need for a template}. Our formulation can model inhomogeneous, nonlinear materials without prior knowledge or annotations. \noindent\textbf{Dynamic Reconstruction.} Reconstructing non-rigid objects from a video sequence is a long-standing computer vision and graphics problem~\cite{Zhang2003SpacetimeSS, tung_complete_2009}. Shape-from-Template methods deform a provided template using RGB~\cite{yu2015direct} or RGB-D data~\cite{zollhofer2014real}. DynamicFusion~\cite{Newcombe_2015_CVPR} is a model-free, real-time method for reconstructing general scenes from a single RGB-D video. 
When reliable 2D correspondences are available from optical flow, non-rigid structure-from-motion (NRSfM) can be used to reconstruct the 3D geometry~\cite{Agudo2014OnlineDN, grassmanian_2018}, perhaps even using physics-based priors~\cite{agudo2015sequential}. There are also image-based approaches that do not yield a true 3D scene~\cite{yoon2020novel,Bemana2020xfields}. Recently, reconstruction using neural representations have become more common. Whereas OccupancyFlow~\cite{niemeyer2019occupancy} requires 3D supervision, Neural Volumes~\cite{Lombardi:2019} reconstructs a dynamic scene from multi-view input only, but does not compute temporal correspondences. See a recent survey on neural rendering~\cite{Tewari2020NeuralSTAR} for more. Neural Radiance Fields~\cite{Mildenhall_2020_NeRF}, the seminal work of Mildenhall~\etal, lays the groundwork for several follow-up reconstruction methods that extend it to dynamic scenes~\cite{Li2021, park2021hypernerf,attal2021torf,pumarola2020d,park2021nerfies,li2021neural,du2021nerflow,Gaofreeviewvideo,xian2021space,Lombardi_2021_MVP}. In this work, we assume multi-view RGB video input with known camera parameters and foreground segmentation masks and so extend Non-Rigid Neural Radiance Fields (NR-NeRF)~\cite{tretschk2021nonrigid}. \noindent\textbf{Data-Driven Physics Simulation.} Much recent research has explored the potential of machine learning to enhance or even replace traditional physics-based simulation. Learning natural laws from data without any priors has been shown possible for a few simple physics systems\cite{schmidt2009distilling}, but the computational cost scales exponentially with the complexity of the system, and remains intractable for real-world problems. For simulating elastic objects specifically, one line of work replaces traditional mesh kinematics with a learned deformation representation to improve performance: Fulton~\etal~\cite{Fulton:2018} use an autoencoder to learn a nonlinear subspace for elastic deformation, and Holden~\etal~\cite{Holden:2019} train a neural network to predict the deformation of cloth using a neural subspace. Some methods use neural networks to augment coarse traditional simulations with fine details~\cite{deepWrinkles,geng2020coercing}. Another line of work uses data to fit a parameterized material model to observed deformations. This idea has been successfully applied to muscle-actuated biomechanical systems such as human faces~\cite{Kadlecek:2019,Srinivasan:2021}, learning the rest pose of an object in zero gravity~\cite{chen2014anm}, the design of soft robotics~\cite{hu2019chainqueen, hu2020difftaichi}, and motion planning with frictional contacts~\cite{Geilinger:2020,du2021diffpd}. Yang~\etal~\cite{Yang2017} learn physical parameters for cloth by analysing the wrinkle patterns in video. While all of these methods learn physical parameters from data, our method is unique in requiring no template or other prior knowledge about object geometry to reconstruct and re-render novel deformations of an object. \noindent\textbf{Meshless Simulation.} Meshless physics-based simulation emerged as a counter-part to traditional mesh-based methods \cite{muller2004} and is ideal for effects such as melting or fracture \cite{muller2004,pauly2004meshless}. These methods have been later extended to support oriented particles and skinning \cite{muller2011solid,gilles2011frame,macklin2014unified}. 
Another extension of point-based simulations consists in incorporating a background Eulerian grid, which enables more efficient simulation of fluid-like phenomena \cite{stomakhin2013material,jiang2017anisotropic}. \section{Introduction} 3D reconstruction is one of the fundamental problems of computer vision and a cornerstone of augmented and virtual reality. Concurrently with steady progress towards real-time photo-realistic rendering of 3D environments in game engines, the last few decades have seen great strides towards photo-realistic 3D reconstruction. A recent achievement in this direction is the discovery of a fairly general formulation for representing radiance fields \cite{Mildenhall_2020_NeRF,liu2020neural,martin2021nerf,Schwarz2020NEURIPS,zhang2020nerf++,yu2021pixelnerf,trevithick2020grf,bi2020neural,srinivasan2021nerv,niemeyer2021giraffe,sucar2021imap}. Neural radiance fields are remarkably versatile for reconstructing real-world objects with high-fidelity \emph{geometry} and \emph{appearance}. But static appearance is only the first step: it ignores how an object moves and interacts with its environment. 4D reconstruction tackles this problem in part by incorporating the time dimension: with more intricate capture setups and more data, we can reconstruct objects over time---but can only re-play the captured sequences. Today, in the age of mixed reality, a photo-realistically reconstructed object might still destroy immersion if it is not ``physically realistic'' because \emph{the object cannot be interacted with.} (For example, if a soft object appears as rigid as the rocks next to it when stepped on.) By building on advances in computer vision and physics simulation, we begin to tackle the problem of physically-realistic reconstruction and create \emph{Virtual Elastic Objects}: virtual objects that not only look like their real-world counterparts but also behave like them, even when subject to novel interactions. For the first time, this allows for full-loop reconstruction of deforming elastic objects: from capture, to reconstruction, to simulation, to interaction, to re-rendering. Our core observation is that with the latest advances in 4D reconstruction using neural radiance fields, we can both capture radiance and deformation fields of a moving object over time, and re-render the object given novel deformation fields. That leaves as the main challenge the core problem of capturing an object's physics from observations of its interactions with the environment. With the right representation that jointly encodes an object's geometry, deformation, and material behavior, compatible with both differentiable physical simulation and the deformation fields provided by 4D reconstruction algorithms, we can use these deformation fields to provide the necessary supervision to learn the material parameters. But even with this insight, multiple challenges remain to create Virtual Elastic Objects. We list them together with our technical contributions:\\ \noindent\textbf{1) Capture.} To create VEOs, we need to collect data that not only contains visual information but also information about physical forces. We present the new \textbf{PLUSH} dataset containing occlusion-free 4D recordings of elastic objects deforming under known controlled force fields. To create this dataset, we built a multi-camera capture rig that incorporates an air compressor with a movable, tracked nozzle. More details can be found in Sec.~\ref{ssec:capture}. 
\\ \noindent\textbf{2) Reconstruction.} VEOs~do not require any prior knowledge about the geometry of the object to be reconstructed; the reconstruction thus must be template-free and provide full 4D information (\ie, a 3D reconstruction and deformation information over time). We extend Non-rigid Neural Radiance Fields~\cite{tretschk2021nonrigid} with novel losses, and export point clouds and point correspondences to create the data required to supervise learning material behavior using physical simulation. We provide further details in Sec.~\ref{ssec:recon}.\\ \noindent\textbf{3) Simulation.} Crucially for creating realistic interactive objects, a physical simulation is required, both to optimize for an unknown object's physical parameters and to generate deformations of that object in response to novel interactions. We implement a differentiable quasi-static simulator that is particle-based and is compatible with the deformation field data provided by our 4D reconstruction algorithm. We present the differentiable simulator and explain how we use it to obtain physical parameters in Sec.~\ref{ssec:learning_material}, and describe simulations of novel interactions in Sec.~\ref{ssec:interaction}.\\ \noindent\textbf{4) Rendering.} Since we convert from a neural representation of the captured object's geometry to a point cloud reconstructing the object's physical properties, we require a function that allows rendering the object given new simulated deformations of the point cloud. We introduce a mapping function that enables us to use deformed point clouds instead of continuous deformation fields to alter the ray casting for the Neural Radiance Fields we used for the original reconstruction. Further details on re-rendering can be found in Sec.~\ref{ssec:rendering}. \section{Method} \subsection{Capture} \label{ssec:capture} To create a physically accurate representation of an object, we first need to record visual data of its deformation under known physical forces. For recording, we use a static multi-view camera setup consisting of 19 OpenCV AI-Kit Depth (OAK-D) cameras\footnote{\url{https://store.opencv.ai/products/oak-d}}, each containing an RGB and two grey-scale cameras (note that \methodname~does not use the stereo camera data to infer classical pairwise stereo depth). They represent an affordable, yet surprisingly powerful solution for volumetric capture. In particular, their on-board H265 encoding capability facilitates handling the amount of data produced during recording (5.12GB/s uncompressed). Since the cameras lack a lens system with zoom capabilities, we keep them close to the object to optimize the pixel coverage and re-configure the system depending on object size. The maximum capture volume has a size of roughly $30\text{cm}^3$. We put a black sheet around it to create a dark background with the exception of five stage lights that create a uniform lighting environment. In addition to the images, we also need to record force fields on the object surface. This raises a problem: if a prop is used to exert force on the capture subject, the prop becomes an occluder that interferes with photometric reconstruction. We solved this problem when capturing our \textbf{PLUSH} dataset by actuating the object using transparent fishing line and a compressed air stream; see Sec.~\ref{sec:dataset} for further details. 
\subsection{4D Reconstruction} \label{ssec:recon} Given the captured video of an object deforming under external forces, we need 4D reconstruction to supply a temporally-coherent point cloud that can be used to learn the object material properties. To that end, we use NR-NeRF~\cite{tretschk2021nonrigid}, which extends the static reconstruction method NeRF~\cite{Mildenhall_2020_NeRF} to the temporal domain. NeRF learns a volumetric scene representation: a coordinate-based Multi-Layer Perceptron (MLP) $\mathbf{v}(\mathbf{x}) = (o, \mathbf{c})$ that regresses geometry (opacity $o(\mathbf{x})\in\mathbb{R}$) and appearance (RGB color $\mathbf{c}(\mathbf{x})\in\mathbb{R}^3$) at each point $\mathbf{x}$ in 3D space. At training time, the weights of $\mathbf{v}$ are optimized through 2D supervision by RGB images with known camera parameters: for a given pixel of an input image, the camera parameters allow us to trace the corresponding ray $\mathbf{r}(s)$ through 3D space. We then sample the NeRF at $|S|$ points $\{\mathbf{r}(s)\in\mathbb{R}^3\}_{s \in S}$ along the ray, and use a volumetric rendering equation to accumulate the samples front-to-back via weighted averaging: $\Tilde{\mathbf{c}} = \sum_{s\in S} \alpha_s \mathbf{c}(\mathbf{r}(s))$ (\ie, alpha blending with alpha values $\{\alpha_s\in\mathbb{R}\}_s$ derived from the opacities $\{o_s\}_s$). A reconstruction loss encourages the resulting RGB value $\Tilde{\mathbf{c}}$ to be similar to the RGB value of the input pixel. On top of the static geometry and appearance representation $\mathbf{v}$ (the \emph{canonical model}), NR-NeRF models deformations explicitly via a jointly learned ray-bending MLP $\mathbf{b}(\mathbf{x},\mathbf{l}_t) = \mathbf{d} $ that regresses a 3D offset $\mathbf{d}$ for each point in space at time $t$. ($\mathbf{l}_t$ is an auto-decoded latent code that conditions $\mathbf{b}$ on the deformation at time $t$.) When rendering a pixel at time $t$ with NR-NeRF, $\mathbf{b}$ is queried for each sample $\mathbf{r}(s)$ on the ray in order to deform it into the canonical model: $(o,\mathbf{c}) = \mathbf{v}\left[\mathbf{r}(s) + \mathbf{b}(\mathbf{r}(s),\mathbf{l}_t)\right]$. Unlike NR-NeRF's monocular setting, we have a multi-view capture setup. We thus disable the regularization losses of NR-NeRF and only use its reconstruction loss. \noindent\textbf{Extensions.} We improve NR-NeRF in several ways to adapt it to our setting. The input videos contain background, which we do not want to reconstruct. We obtain foreground segmentations for all input images via image matting~\cite{Lin2021} together with a hard brightness threshold. During training, we use a background loss $L_\mathit{background}$ to discourage geometry along rays of background pixels. When later extracting point clouds, we need opaque samples on the inside of the object as well. However, we find that $L_\mathit{background}$ leads the canonical model to prefer empty space even inside the object. We counteract this effect with a density loss $L_\mathit{density}$ that raises the opacity of point samples of a foreground ray that are `behind' the surface, while emptying out the space in front of the surface with $L_\mathit{foreground}$. During training, we first build a canonical representation by pretraining the canonical model on a few frames and subsequently using it to reconstruct all images. Our capture setup not only provides RGB streams but also grey-scale images. We use these for supervision as well. 
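For concreteness, the ray-bending and compositing steps above can be sketched in a few lines of Python (a minimal sketch with hypothetical \texttt{canonical\_mlp} and \texttt{bend\_mlp} callables; deriving $\alpha_s$ from $o_s$ via accumulated transmittance is one standard choice and is an assumption here, not necessarily the exact rule used in our implementation):
\begin{verbatim}
import numpy as np

def render_ray(canonical_mlp, bend_mlp, latent_t, samples, deltas):
    # Bend each deformed-space sample point into the canonical model.
    pts = np.array([x + bend_mlp(x, latent_t) for x in samples])
    out = [canonical_mlp(p) for p in pts]        # (opacity, color) pairs
    o = np.array([v[0] for v in out])
    c = np.array([v[1] for v in out])
    # One standard way to turn opacities into alphas (an assumption):
    alpha = 1.0 - np.exp(-np.maximum(o, 0.0) * deltas)
    # Front-to-back compositing: weight = alpha * transmittance so far.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = alpha * trans
    return (weights[:, None] * c).sum(axis=0)    # composited RGB
\end{verbatim}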
In practice, we use a custom weighted combination of the losses described above for each sequence to get the best reconstruction. \noindent\textbf{Point Cloud Extraction.} In order to extract a temporally-consistent point cloud from this reconstruction, we require a forward deformation model, which warps from the canonical model to the deformed state at time~$t$. However, NR-NeRF's deformation model~$\mathbf{b}$ is a backward warping model: it deforms each deformed state into the canonical model. We therefore jointly train a coordinate-based MLP $\mathbf{w}$ to approximate the inverse of $\mathbf{b}$. After training, we need to convert the reconstruction from its continuous MLP format into an explicit point cloud. To achieve that, we cast rays from all input cameras and extract points from the canonical model that are at or behind the surface and whose opacity exceeds a threshold. These points can then be deformed from the canonical model into the deformed state at time $t$ via $\mathbf{w}$. We thus obtain a 4D reconstruction in the form of a 3D point cloud's evolving point positions $\{P_t\}_t$, which are in correspondence across time. To keep the computational cost of the subsequent reconstruction steps feasible, we downsample the point cloud to 9--15$k$ points if necessary. \subsection{Learning Material Parameters} \label{ssec:learning_material} Before we can simulate novel interactions with a captured object, we need to infer its physical behavior. Given that we have no prior knowledge of the object, we make several simplifying assumptions about its mechanics, with an eye towards minimizing the complexity of the physical model while also remaining flexible enough to capture heterogeneous objects built from multiple materials. First, we assume a \emph{spatially varying, isotropic nonlinear Neo-Hookean material model} for the object. Neo-Hookean elasticity well-approximates the behavior of many real-world materials, including rubber and many types of plastic, and is popular in computer graphics applications because its nonlinear stress-strain relationship guarantees that no part of the object can invert to have negative volume, even if the object is subjected to arbitrarily large and nonlinear deformations. Moreover, Neo-Hookean elasticity admits a simple parameterization: a pair of \lame parameters $(\mu_i, \lambda_i)\in\mathbb{R}^2$ at each point $i$ of the point cloud $P$. Second, we assume that the object deforms \emph{quasistatically} over time: that at each point in time, the internal elastic forces exactly balance gravity and applied external forces. The quasistatic assumption greatly simplifies learning material parameters, and is valid so long as inertial forces in the captured video sequences are negligible (or equivalently, so long as external forces change sufficiently slowly over time that there is no secondary motion, which is true for the air stream and string actuations in our \textbf{PLUSH} dataset). \paragraph{Overview.} We first formulate a differentiable, mesh-free \emph{forward} physical simulator that is tailored to work directly with the (potentially noisy) reconstructed point cloud.
This forward simulator maps from the point cloud $P_0$ of the object in its \emph{reference pose} (where it is subject to no external forces besides gravity), an assignment of \lame parameters to every point, and an assignment of an external force $\mathbf{f}_i\in\mathbb{R}^3$ to each point on the object surface, to the deformed position $\mathbf{y}_i\in\mathbb{R}^3$ of every point in the point cloud after the object equilibrates against the applied forces. Next, we learn the \lame parameters that match the object's observed behavior by minimizing a loss function $\mathbf{L}$ that sums, over all times $t$, the distance between $\mathbf{y}_i$ and the corresponding target position of the point in the 4D point cloud $P_t$. \paragraph{Quasistatic Simulation.} To compute the equilibrium positions $\mathbf{y}_i$ of the points in $P$ for given external loads and material parameters, we solve the variational problem \begin{equation} \argmin_\mathbf{y} \mathbf{E}(\mathbf{y}), \label{eq:equilibrium} \end{equation} where $\mathbf{E}$ is the total energy of the physical system, capturing both the elastic energy of deformation as well as work done on the system by external forces. In what follows, we derive the expression for $\mathbf{E}$, and discuss how to solve Eq.~\ref{eq:equilibrium}. Following M\"{u}ller \etal~\cite{muller2004}, we adopt a mesh-free, point-based discretization of elasticity to perform forward simulation. For every point $\mathbf{x}_i$ in the reference point cloud $P_0$, we define a neighborhood $\mathcal{N}_i$ containing the 6 nearest neighbors of $\mathbf{x}_i$ in $P_0$. For any given set of deformed positions $\mathbf{y}_j$ of the points in $\mathcal{N}_i$, we estimate strain within the neighborhood in the least-squares sense. More specifically, the local material deformation gradient $\mathbf{F}_i\in\mathbb{R}^{3\times 3}$ maps the neighborhood $\mathcal{N}_i$ from the reference to the deformed state: \begin{equation} \label{eq:deformation_gradient} \mathbf{F}_i(\mathbf{x}_i-\mathbf{x}_j) \approx \mathbf{y}_i-\mathbf{y}_j \quad \forall \mathbf{x}_j \in \mathcal{N}_i. \end{equation} For neighborhoods containing more than three points, Eq.~\ref{eq:deformation_gradient} is over-determined, and we hence solve for $\mathbf{F}_i$ in the least-squares sense, yielding the closed-form solution: \begin{equation} \mathbf{F}_i = \mathbf{Y}_i \mathbf{W}_i \mathbf{X}_i^T (\mathbf{X}_i \mathbf{W}_i \mathbf{X}_i^T)^{-1}, \end{equation} where the $j$-th columns of $\mathbf{X}_i$ and $\mathbf{Y}_i$ are $\mathbf{x}_i - \mathbf{x}_j$ and $\mathbf{y}_i - \mathbf{y}_j$, respectively, and $\mathbf{W}_i$ is a diagonal matrix of weights depending on the distance from $\mathbf{x}_j$ to $\mathbf{x}_i$~\cite{muller2004}. The elastic energy of the object can be computed from the classic Neo-Hookean energy density~\cite{ogden1984non}: \begin{equation} \label{eq:neoHookean} \Psi_\mathit{NH}^i = \frac{\mu_i}{2}(I_c -3 ) - \mu_i \log J + \frac{\lambda_i}{2}(J-1)^2, \end{equation} where $I_c$ is the trace of the right Cauchy-Green tensor $\mathbf{F}_i^T\mathbf{F}_i$, and $J$ is the determinant of $\mathbf{F}_i$. $\mu_i$ and $\lambda_i$ are the \lame parameters assigned to point $i$. The total elastic energy is then: \begin{equation} \mathbf{E}_\mathit{NH} = \sum_i V_i \Psi_\mathit{NH}^i, \end{equation} where $V_i\in\mathbb{R}$ approximates the volume of $\mathcal{N}_i$.
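A minimal NumPy sketch of the per-neighborhood computation (hypothetical names; $\mathbf{X}_i$, $\mathbf{Y}_i$, and $\mathbf{W}_i$ as defined above):
\begin{verbatim}
import numpy as np

def neighborhood_energy(Xi, Yi, Wi, mu_i, lam_i):
    # Xi, Yi: 3 x m matrices whose j-th columns are x_i - x_j and
    # y_i - y_j; Wi: m x m diagonal matrix of distance-based weights.
    Fi = Yi @ Wi @ Xi.T @ np.linalg.inv(Xi @ Wi @ Xi.T)  # least-squares F_i
    J = np.linalg.det(Fi)                    # local volume change
    Ic = np.trace(Fi.T @ Fi)                 # trace of C = F^T F
    # Neo-Hookean energy density with per-point Lame parameters:
    return 0.5*mu_i*(Ic - 3.0) - mu_i*np.log(J) + 0.5*lam_i*(J - 1.0)**2
\end{verbatim}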
We also need to include in Eq.~\ref{eq:equilibrium} the virtual work done by the external force field: \begin{equation} \mathbf{E}_W = \sum_i \mathbf{f}_i \cdot \mathbf{y}_i, \label{eq:virtualwork} \end{equation} where $\mathbf{f}_i$ is the force applied to point $i$ (the force of the air stream on the boundary). If we measured the tension in the fishing lines, we could also include the forces they exert on the object in Eq.~\ref{eq:virtualwork}. But since a fishing line is effectively inextensible relative to the object we are reconstructing, we instead incorporate the fishing lines as soft constraints on the positions of the points $Q\subset P$ attached to the lines: we assume that at time $t$, points in $Q$ should match their observed positions in $P_t$, and formulate an attraction energy: \begin{equation} \mathbf{E}_A = \alpha \sum_{q \in Q} \lVert \mathbf{y}_q - \mathbf{x}_q^* \rVert^2, \end{equation} where $\mathbf{x}_q^*$ is the position of the point corresponding to $\mathbf{y}_q$ in $P_t$, and $\alpha$ is a large penalty constant. We found that this soft constraint formulation works better in practice than alternatives such as enforcing $\mathbf{y}_q = \mathbf{x}_q^*$ as a hard constraint. The total energy in Eq.~\ref{eq:equilibrium} is thus $\mathbf{E} = \mathbf{E}_\mathit{NH} + \mathbf{E}_W + \mathbf{E}_A$, which we minimize using Newton's method. Since Newton's method can fail when the Hessian $\mathbf{H}$ of $\mathbf{E}$ is not positive-definite, we perform a per-neighborhood eigen-decomposition of $\mathbf{H}$ and replace all eigenvalues that are smaller than a threshold $\epsilon>0$ with $\epsilon$; note that this is a well-known technique to improve robustness of physical simulations~\cite{teran2005robust}. We also make use of a line search to ensure stability and handling of position constraints at points where the capture subject touches the ground; see the supplemental material for further implementation details. \paragraph{Material Reconstruction.} Given the 4D point cloud $P_t$ and forces acting on the object $\{\mathbf{f}_i\}_i$, we use our forward simulator to learn the \lame parameters that best explain the observed deformations. More specifically, at each time $t$ we define the loss: \begin{equation} \mathbf{L}_t = \sum_{i \in \partial \Omega} \lVert \mathbf{y}_{t,i} - \mathbf{x}_{t,i}^* \rVert^2, \end{equation} where $\mathbf{x}_{t,i}^*$ is the position of point $i$ in $P_t$, and $\mathbf{y}_{t,i}$ is the output of the forward simulation. We use an $\ell_2$ loss because it strongly penalizes outliers, which would otherwise jeopardize the reconstruction quality. We choose a training subsequence $T$ of 20--50 frames from the input where the impact of the air stream roughly covers the surface so that we have some reference for each part of the object, and compute the desired \lame parameters by minimizing the sum of the loss over all $t \in T$ using the gradient-based Adam optimizer~\cite{KingmaB14}: \begin{equation} \mu^*, \lambda^* = \argmin_{\mu, \lambda} \sum_{t\in T} \mathbf{L}_t. \end{equation} It is not trivial to back-propagate through the Newton solve for $\mathbf{y}_{t,i}$, even if we ignore the line search and assume a fixed number of Newton iterations $K$.
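Before deriving these gradients, the outer loop of the material reconstruction can be sketched as follows (a minimal sketch with hypothetical names; \texttt{simulate} stands for the differentiable quasistatic solver, whose gradients are the approximate ones derived next):
\begin{verbatim}
import torch

def fit_lame(P_t, forces, simulate, n_points, steps=500):
    # Spatially varying Lame parameters, uniformly initialized (assumption).
    mu  = torch.full((n_points,), 1.0e4, requires_grad=True)
    lam = torch.full((n_points,), 1.0e4, requires_grad=True)
    opt = torch.optim.Adam([mu, lam], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        # Sum of per-frame losses L_t over the training subsequence.
        loss = sum(((simulate(mu, lam, f) - x_star) ** 2).sum()
                   for f, x_star in zip(forces, P_t))
        loss.backward()
        opt.step()
    return mu.detach(), lam.detach()
\end{verbatim}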
The gradient of $\mathbf{y}$ with respect to the \lame parameters ($\mu$ for instance) can be computed using the chain rule: \begin{equation} \label{eq:devLoss} \frac{\partial \mathbf{L}}{\partial \mu} = \frac{\partial \mathbf{L}}{\partial \mathbf{y}^K}\frac{\partial \mathbf{y}^K}{\partial \mu}, \end{equation} and, for any $1 \leq k \leq K$, \begin{align} \frac{\partial \mathbf{y}^k}{\partial \mu} &= \frac{\partial \mathbf{y}^{k-1}}{\partial \mu} -\left(\frac{\partial\mathbf{H}_{k-1}^{-1}}{\partial \mu} + \frac{\partial\mathbf{H}_{k-1}^{-1}}{\partial \mathbf{y}^{k-1}}\frac{\partial \mathbf{y}^{k-1}}{\partial \mu}\right)\nabla\mathbf{E}_{k-1} \notag\\ & -\mathbf{H}_{k-1}^{-1}\left(\frac{\partial\nabla\mathbf{E}_{k-1}}{\partial \mu} + \frac{\partial\nabla\mathbf{E}_{k-1}}{\partial \mathbf{y}^{k-1}}\frac{\partial \mathbf{y}^{k-1}}{\partial \mu}\right). \end{align} To avoid an exponentially large expression tree, we approximate the derivative of the $k$th Newton iterate $\mathbf{y}^k$ by neglecting the higher-order derivatives of the Hessian and of the gradient of the energy with respect to the previous position update: \begin{align*} \frac{\partial \mathbf{y}^k}{\partial \mu} \approx \frac{\partial \mathbf{y}^{k-1}}{\partial \mu} -\frac{\partial\mathbf{H}_{k-1}^{-1}}{\partial \mu} \nabla\mathbf{E}_{k-1} -\mathbf{H}_{k-1}^{-1} \frac{\partial\nabla\mathbf{E}_{k-1}}{\partial \mu}. \end{align*} Although it is not guaranteed that the higher-order terms are always negligible, this approximation provides a sufficiently high-quality descent direction for all examples we tested. To improve performance and to capture hysteresis in cases where $\mathbf{E}$ has multiple local minima at some times $t$, we warm-start the Newton optimization at time $t$ using the solution from time $t-1$. \subsection{Novel Interactions} \label{ssec:interaction} Given a reconstructed \methodname, we can use the same physical simulator used for material inference to re-simulate the captured object subject to novel interactions. New force fields can easily be introduced by modifying $\mathbf{f}_i$ in the energy $\mathbf{E}_W$. Other possible interactions include changing the direction of gravity, adding contact forces to allow multiple objects to mutually interact, or allowing manipulation of the object using mixed-reality tools. We demonstrate the feasibility of re-simulating novel interactions by implementing a simple penalty energy to handle contact between a \methodname~and a secondary object, represented implicitly as a signed distance field $d:\mathbb{R}^3\to\mathbb{R}$. The penalty energy is given by: \begin{align} \Psi_c(\mathbf{y}) &= \begin{cases} \alpha_c d(\mathbf{y})^2 & \text{if } d(\mathbf{y}) < 0\\ 0 & \text{otherwise,}\\ \end{cases} \\ \mathbf{E}_c &= \sum_i V_i \Psi_c(\mathbf{y}_i), \end{align} where $\alpha_c$ is chosen large enough to prevent visually noticeable penetration of the VEO by the secondary object. \subsection{Rendering} \label{ssec:rendering} We are able to interact freely with the \methodname~in a physically plausible manner. Hence, we can close the full loop and realistically render the results of simulated novel interactions using neural radiance fields. While we used $\mathbf{b}$ for deformations during the reconstruction, we are now given a new deformed state induced by a discrete point cloud: a canonical reference point cloud $P_0 = \{ \mathbf{x}^0_{s} \}_s$ and its deformed version $S_d = \{ \mathbf{y}^d_{s} \}_s$.
We need to obtain a continuous backward-warping field from that point cloud in order to replace $\mathbf{b}$, which bends straight rays into the canonical model. To that end, we interpolate the deformation offsets $\mathbf{d}^b_s = \mathbf{x}^0_{s} - \mathbf{y}^d_{s}$ at a 3D sample point $\mathbf{p}^d$ in deformed space using inverse distance weighting (IDW): \begin{equation} \mathbf{p}^c = \mathbf{p}^d + \sum_{s \in \mathcal{N}} \frac{w_{s}}{\sum_{s'\in\mathcal{N}} w_{s'} } \mathbf{d}^b_{s}, \label{eq:idw} \end{equation} where $\mathcal{N}$ are the $K=5$ nearest neighbors of $\mathbf{p}^d$ in $S_d$, and $w_{s} = w'_{s} - \min_{s'\in \mathcal{N}} w'_{s'} $ with $w'_{s}= \lVert \mathbf{p}^d - \mathbf{y}^d_{s} \rVert^{-1}$. We can then sample the canonical model at $\mathbf{p}^c$ as before: $(o,\mathbf{c}) = \mathbf{v}(\mathbf{p}^c)$. To remove spurious geometry, we set $o(\mathbf{x})=0$ for all $\mathbf{x}$ that are farther than some threshold from $S_d$. Thus, we can now bend straight rays into the canonical model and render the interactively deformed state of the object in a realistic fashion. When needed, we can upsample the point cloud from the simulation to make it denser; unlike rendering, this upsampling requires forward rather than backward warping.
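A minimal sketch of this backward warp (hypothetical names; SciPy is assumed only for the nearest-neighbor query):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def backward_warp(p_def, P0, Sd, K=5):
    # K nearest neighbors of the deformed-space sample point in S_d.
    dist, idx = cKDTree(Sd).query(p_def, k=K)
    w_prime = 1.0 / np.maximum(dist, 1e-9)   # w'_s = ||p^d - y^d_s||^{-1}
    w = w_prime - w_prime.min()              # farthest neighbor gets weight 0
    d = P0[idx] - Sd[idx]                    # offsets d^b_s = x^0_s - y^d_s
    return p_def + (w[:, None] * d).sum(axis=0) / max(w.sum(), 1e-9)
\end{verbatim}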
\section{} During the past several years, a variety of experiments ranging from NMR \cite{martindale,itoh} and penetration depth \cite{hardy} studies to ARPES \cite{shen,ding} and Josephson phase interference measurements \cite{wohlman,tsuei} have provided clear evidence for $d_{x^2-y^2}$-pairing in the high $T_c$ cuprates. This type of pairing was in fact predicted from a variety of theoretical studies on Hubbard and $t$-$J$\ models in which a short range Coulomb potential leads to a near neighbor exchange interaction and short range antiferromagnetic correlations \cite{reports}. Thus, in spite of the differences in the interpretations of some of these calculations, one might have concluded that the basic mechanism which is responsible for pairing in the cuprates arises from the antiferromagnetic exchange interaction and the short range exchange correlations. However, there is far from a consensus on this, and a variety of different basic models and pairing mechanisms have been proposed\cite{batlogg}. In the traditional low temperature superconductors one could see an image of the phonon density of states $F(\omega)$ in the frequency dependence of the gap $\Delta(\omega)$ \cite{schrieffer}. One also had a clear isotope effect in some of the simpler materials and Chester \cite{chester} showed that in this case the superconducting condensation energy could be related to the change in the ion kinetic energy. Thus, while the kinetic energy of the electrons is increased in the superconducting state relative to the normal state, the decrease in the ion lattice energy is sufficient to give the condensation energy. This provided a further link between the electron lattice interaction and the pairing mechanism in the traditional superconductors. Now, in the high $T_c$ cuprates, we believe that one can see the image of the $k$-dependence of the interaction in $\Delta(k)$ and that this supports the Hubbard and $t$-$J$\ pictures \cite{scalapino,holestructures}. However, as noted, this remains an open question and it would be useful to look for the analogue of the decrease in lattice energy and the condensation energy. From density matrix renormalization group studies of the $t$-$J$\ model \cite{holestructures}, we know that while the kinetic energy of a pair of holes is increased relative to having two separate holes, the exchange energy is reduced. Thus, if the short range antiferromagnetic spin lattice correlations play a similar role to the ion lattice in the traditional low temperature superconductors, the condensation energy would be proportional to the change in the exchange energy between the normal and superconducting states. Here we examine this and look for its possible experimental consequences. Unfortunately, just as in the case of the traditional electron-phonon systems where the fractional change in the lattice energy between the normal and superconducting ground states is small, of order $T_c^2/\mu_F \omega_D$, and hence hard to detect, here we find that the fractional change in the exchange energy, of order $T_c^2/\mu_F J$, will also be difficult to observe. Nevertheless, on a formal level it is interesting to contrast the relationship between the superconducting condensation energy and the change in the exchange energy with a recent proposal by Leggett\cite{leggett} in which he argues that the condensation energy arises from a change in the long-wavelength Coulomb energy associated with the mid-infrared dielectric response. 
Our basic idea originated from the results of numerical density matrix renormalization group calculations~\cite{holestructures} for the $t$-$J$\ model. The $t$-$J$\ Hamiltonian in the subspace in which there are no doubly occupied sites is given by \begin{equation} H = - t \sum_{\langle ij \rangle s} ( c_{is}^{\dagger}c_{js} + {\rm h.c.}) + J \sum_{\langle ij \rangle} ( {\bf S}_{i} \! \cdot \! {\bf S}_{j} - \frac{n_i n_j}{4} ) . \label{tj-ham} \end{equation} Here $ij$ are near-neighbor sites, $s$ is a spin index, $\vec S_i = (c^\dagger_{is}\vec \sigma_{ss'} c_{is'})/2$ and $c^\dagger_{is}$ are electron spin and creation operators, and $n_i= c^\dagger_{i\uparrow}c_{i\uparrow} + c^\dagger_{i\downarrow}c_{i\downarrow}$. The near-neighbor hopping and exchange interactions are $t$ and $J$. We have calculated the ground state energy of Eq. (1) for zero ($E_0$), one ($E_1$), and two ($E_2$) holes. For $J/t=0.35$ we find, for an $8\times8$ system, that the binding energy of a pair of holes is \begin{equation} \Delta_B = 2 E_1 - (E_2 + E_0) = 0.23 J . \end{equation} We also find that the dominant contribution to this binding comes from the change in the exchange energy \begin{equation} 2 \langle J \sum_{\langle ij \rangle} \vec S_i \cdot \vec S_{j} \rangle_1 - \left ( \langle J \sum_{\langle ij \rangle} \vec S_i \cdot \vec S_{j} \rangle_2 + \langle J \sum_{\langle ij \rangle} \vec S_i \cdot \vec S_{j} \rangle_0 \right ) . \label{twoa} \end{equation} Here 0, 1, and 2 refer to the number of holes in the ground state. The pair binding energy can be used in a simple estimate of $T_c$: if we relate the superconducting gap to the binding energy via $2 \Delta = \Delta_B$, and assume that $2 \Delta/kT_c \approx 6$, we find $T_c \approx 0.04 J/k$. Taking $J=1500$~K, this gives $T_c \approx 60$~K, a quite reasonable value. Now, it is clear that superconductivity in the cuprates is a much more complicated phenomenon than this simple picture of pair binding. For example, even in the $t$-$J$ model, we find that with a finite concentration of holes, domain walls form, rather than pairs~\cite{energetics}. However, the formation of domain walls in the $t$-$J$ model is also driven largely by the exchange energy. Therefore, it is reasonable to assume that whatever the precise mechanism of superconductivity in the cuprates, energetically it is driven by the exchange interaction. Based upon this and in analogy with the electron-phonon case, we suggest that if the basic interaction which is responsible for pairing in the high $T_c$ cuprates is the antiferromagnetic exchange, the condensation energy will be proportional to the change in the exchange energy between the normal and superconducting phases \begin{equation} \frac{\alpha H^2_c(T)\Omega_0}{8\pi} = J\left(\langle\vec S_i \cdot \vec S_{i+x} + \vec S_i \cdot \vec S_{i+y}\rangle_N - \langle\vec S_i \cdot \vec S_{i+x}+ \vec S_i \cdot \vec S_{i+y} \rangle_S\right) . \label{three} \end{equation} Here $H_c(T)$ is the thermodynamic critical field at temperature $T$, $\Omega_0$ is the unit cell volume per $CuO_2$, and $\alpha$ is a factor of order 1. Note that both expectation values in Eq. (4) are also taken at temperature $T$, with the subscript $N$ referring to a nominal normal state and $S$ to the superconducting state. Thus one needs to be able to extrapolate the normal state data to temperatures $T<T_c$.
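The arithmetic of the $T_c$ estimate above is easily checked (a sketch in units with $k_B=1$):
\begin{verbatim}
J = 1500.0            # exchange coupling in kelvin (k_B = 1 units)
delta_B = 0.23 * J    # pair binding energy, identified with 2*Delta
Tc = delta_B / 6.0    # assuming 2*Delta / (k_B T_c) ~ 6
print(Tc)             # ~57 K, i.e. roughly 60 K
\end{verbatim}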
For the $t$-$J$\ model we have \cite{hubbard} \begin{equation} \left\langle\vec S_i \cdot \vec S_j\right\rangle = 3\int \ \frac{d^2q}{(2\pi)^2} \int_0^\infty\ \frac{d\omega}{\pi}\ {\rm Im}\ \chi(q,\omega) \cos\left[\vec q\cdot (\vec i-\vec j) \right] \label{one} \end{equation} where $\chi(q,\omega)$ is the magnetic susceptibility at temperature $T$. For $\vec i$ equal to $\vec j$ we have the sum rule \begin{equation} (1-x) S(S+1) = 3 \int\frac{d^2q}{(2\pi)^2}\ \int_0^\infty\ \frac{d\omega}{\pi} \ {\rm Im}\ \chi(q,\omega) \label{two} \end{equation} with $S= 1/2$, and $x$ the hole doping. Using Eqs.~(\ref{one}) and (\ref{two}), we can write Eq.~(\ref{three}) in the form \begin{equation} \frac{\alpha H_c^2(T)\Omega_0}{8\pi} = 3J\ \int \ \frac{d^2q}{(2\pi)^2}\ \int_0^\infty\ \frac{d\omega}{\pi} \ \left({\rm Im}\ \chi_S(q,\omega) - {\rm Im}\ \chi_N(q,\omega)\right) \left(2-\cos q_x-\cos q_y\right) . \label{four} \end{equation} In Eq.~(\ref{four}), we have added a constant 2 using the sum rule Eq.~(\ref{two}). The form factor $2-\cos q_x -\cos q_y$ favors large momentum transfers $q_x \sim q_y \sim \pi$ and the energy scale is set by $\omega \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} J$. For the optimally doped or possibly the overdoped materials, it may be that ${\rm Im}\ \chi_N(q,\omega)$ has reached its ``low temperature normal form'' at temperatures above $T_c$. In this case, one could extract it from neutron scattering data for $T>T_c$. Then, using low temperature $T\ll T_c$ data for ${\rm Im}\ \chi_S(q,\omega)$ in Eq. (\ref{four}), one would obtain the condensation energy. Because $H^2_c(0)\Omega_0/8\pi J\sim 10^{-3}$, it will require extremely careful neutron scattering measurements to check Eq.~(\ref{four}). Furthermore, one will have to be satisfied that the normal state measurements taken at temperatures above $T_c$ can be extrapolated to a temperature which is low compared to $T_c$. Clearly, this will be difficult. However, on a formal level, it is interesting to contrast the content of Eq.~(\ref{four}) with the recent proposal by Leggett \cite{leggett}. He takes the point of view that the pairing mechanism is associated with the long-wavelength Coulomb energy and relates the condensation energy to a change in the dielectric function between the normal and superconducting state. He then argues that the important contributions are associated with momentum transfers which are small compared to $\pi$ and energy transfers in the mid-infrared region, 0.1 to 1.5~eV. Now, it is certainly true that if one goes all the way back, the Coulomb energy is responsible for the exchange interaction we have focused on. However, having integrated out the short range part of the Coulomb interaction to arrive at an exchange interaction $J\sim 4t^2/U$, we conclude from Eq.~(\ref{four}) that the important part of the pairing interaction is associated with large momentum transfers $q\sim(\pi/a, \pi/a)$ and energies less than or of order $J\sim 0.1$~eV. Thus, contrary to ref.~\cite{leggett}, where one seeks to find a relationship between the condensation energy and the change in the {\it dielectric} response between the normal and superconducting state in the small momentum and higher energy 0.1--1.5~eV regime, we suggest that the condensation energy is related to changes in the {\it magnetic} spin response at large momentum transfer and energies $\omega\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} J$.
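As an illustration of how Eq.~(\ref{four}) might be confronted with data, the following sketch (hypothetical interface: \texttt{im\_chi\_S} and \texttt{im\_chi\_N} return ${\rm Im}\,\chi(q,\omega)$ interpolated from measurements on a frequency grid) evaluates the double integral numerically:
\begin{verbatim}
import numpy as np

def condensation_energy(im_chi_S, im_chi_N, J, nq=64, nw=200, wmax=2.0):
    qs = np.linspace(-np.pi, np.pi, nq, endpoint=False)   # BZ grid
    ws = np.linspace(0.0, wmax * J, nw)                   # frequency grid
    total = 0.0
    for qx in qs:
        for qy in qs:
            ff = 2.0 - np.cos(qx) - np.cos(qy)            # form factor
            dchi = im_chi_S(qx, qy, ws) - im_chi_N(qx, qy, ws)
            total += ff * np.trapz(dchi, ws)
    # d^2q/(2 pi)^2 contributes 1/nq^2 per grid point; overall 3J/pi.
    return 3.0 * J * total / (np.pi * nq**2)
\end{verbatim}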
Thus, it would be very interesting if it were possible to confirm or contradict the relationship between the change in $\langle \vec S_i \cdot \vec S_{i+\hat x}\rangle$ between the normal and superconducting states and the superconducting condensation energy given by Eqs.~(\ref{three}) and (\ref{four}). \vskip.50in \centerline{ACKNOWLEDGMENTS} We thank A.J.~Leggett and S.C.~Zhang for interesting discussions. DJS acknowledges support from the Department of Energy under grant DE-FG03-85ER-45197, and SRW acknowledges NSF support under grant DMR95-09945. DJS would also like to acknowledge the Program on Correlated Electrons at the Center for Material Science at Los Alamos National Laboratory.
\section{Introduction} The relativistic direct interaction theory arises from the expectation that the dynamics of an interacting particle system can be constructed in a consistent Poincar\' e-invariant way without introducing the notion of the field as an independent object with its own degrees of freedom \cite{[1],[2],[3],[4],[5]}. At present the principal possibility of such a theory is evident in the classical and quantum domains. Its application to the description of particle systems is most effective when processes of radiation and particle creation may be neglected. Among the various more or less equivalent approaches to the construction of the relativistic direct interaction theory, the single-time Lagrangian formalism \cite{[1],[13],[10]}, proposed by Professor Gaida more than twenty years ago, seems to be the most convenient for the consideration of the general problem of relativistic dynamics, as well as for the investigation of various approximations. This formalism has been extended to an arbitrary form of relativistic dynamics \cite{[14]} defined geometrically by means of space-like foliations of the Minkowski space \cite{[15],[10],nlmp}. The conditions of Poincar\' e invariance were reformulated in an arbitrary form of dynamics, and a wide class of exact solutions to the equations expressing these conditions was established for the interactions originally described by a Fokker-type action. The transition from the classical Lagrangian to the Hamiltonian description allows one to consider the relativistic effects in the statistical and quantum mechanical properties of the particle systems. The purpose of the present paper is to review some relatively recent generalizations and specifications of this development. The transition from a non-relativistic interacting particle system to its relativistic counterpart, which on a more formal level can be considered as the replacement of the Galilei group by the Poincar\'e group as a symmetry group of the system, leads to profound changes in the structure of the theory. Within the Lagrangian formalism such a change manifests itself in the necessity of using interaction Lagrangians depending on derivatives of infinitely high order: in the general case the exact relativistic Lagrangian must be defined on the infinite order jet space \cite{[13]}. This fact is the Lagrangian counterpart of the famous no-interaction theorem in Hamiltonian relativistic mechanics \cite{Car63} and shares with the latter a common cause lying in the very structure of the Poincar\'e group. It also reflects the time non-locality inherent to relativistic interactions. All the aforementioned exact solutions of the Poincar\'e-invariance conditions corresponding to time-symmetric Fokker-type actions have this kind of non-locality in any form of relativistic mechanics \cite{[8],[9],nlmp}. Although several methods of dealing with such systems have been elaborated (expansions in various parameters \cite{[1],[11],[12]}, transition to the center-of-mass variables \cite{gyt}), it is evident that such a drastic change in the structure of the mechanical description leads to serious difficulties in the physical interpretation of the formalism, as well as in proving its mathematical consistency. But there are important exceptions to the general rule. If the form of dynamics defines a simultaneity relation in the Poincar\' e-invariant way (i.e.
the Poincar\' e group transforms simultaneous events into simultaneous ones), then the corresponding invariance conditions of the Lagrangian description allow a large class of exact solutions containing derivatives of any finite order (not less than unity). Particularly, in such forms of dynamics we can construct in closed form a variety of nontrivial interaction Lagrangians depending on first-order derivatives. This fact was first established for an $N$-particle system in the two-di\-men\-si\-o\-nal space-time $\M_2$ within the framework of the front form of dynamics \cite{[20]}. Then it was extended to the case of a two-particle system in the four-dimensional Minkowski space $\M_4$ by means of isotropic forms of dynamics with simultaneity between the events of particle world lines defined by a light cone \cite{dt}. The existence of such ``standard'' relativistic Lagrangians brings the problem of describing such systems within the scope of the usual analytical (and, probably, quantum) mechanics. It allows the formulation of various exact models of relativistic direct interactions which admit more or less explicit investigation. Such models are the main subject of this paper. It is organized as follows. In section~2 we begin by introducing the notion of the form of relativistic dynamics within the framework of the Lagrangian formalism. The general features of the relativistic Lagrangian description in a two-dimensional version of the front form of dynamics and in isotropic forms of dynamics are discussed in sections~3 and~4, respectively. The Fokker-type action integrals which correspond to time-asymmetric interactions are considered in section~5. Section~6 is devoted to the construction of the Hamiltonian formalism with constraints in the isotropic form of dynamics. On this basis, in section~7 we investigate in the most explicit form the motions of two particles under the influence of time-asymmetric scalar, vector, and other interactions of physical interest. The limiting case of straight-line motions of such systems is considered in section~8 within the framework of the front form of dynamics. Finally, in section~9 we present certain exactly solvable relativistic quantum models of interacting particle systems in the two-dimensional space-time. \input{iso1} \input{iso2} \input{iso3} \input{iso4} \section{Conclusion} We have considered the class of isotropic forms of dynamics which admit the construction of a variety of exactly solvable relativistic models of interacting particle systems. Most of the models originate from Fokker-type actions with the time-asymmetric (retarded or advanced) Green function of the d'Alembert equation. These models reflect not only the relativistic kinematics but also certain field-theory aspects of the particle interaction. They demonstrate the complexity of relativistic particle dynamics in comparison with its non-relativistic counterpart. A detailed study of such dynamics is possible because the considered forms of dynamics allow a reformulation of the theory in terms of various formalisms and approaches, both three-dimensional and manifestly covariant four-dimensional. The physical meaning of time-asymmetric interactions is not so clear. Nevertheless, the corresponding models may be regarded as a first step toward an approximation scheme for finding solutions of more physically acceptable models, for example, the Wheeler-Feynman electrodynamics and the related theories.
Particularly, in the linear approximation in the coupling constant, the time-asymmetric, time-symmetric, and purely retarded (field) approaches yield the same result. On the other hand, exact solutions of such models provide a better understanding of the special features of relativistic interactions and of the interrelations between various descriptions of relativistic interacting particles.\\ We wish to thank Yuriy Kluchkovsky, S.~N.~Sokolov, V.~I.~Lengel, and J.~Llosa for stimulating discussions. The ideas and influence of the late Professor Roman Gaida are evident throughout all the reported investigations. \input{iso5} \end{document} \section{Geometrical concept of the forms of dynamics} Let us consider a dynamical system consisting of $N$ interacting point particles. It is convenient to describe the evolution of this system in the 4-dimensional Minkowski space ${\mathbb M}_4$ with coordinates $x^{\mu}$, $\mu =0,1,2,3$. We use the metric $\| \eta _{\mu \nu}\| ={\rm diag}(1,-1,-1,-1)$. The motion of the particles is described by the world lines $\gamma _a:{\mathbb R}\rightarrow{\mathbb M}_4$, $a=1,...,N$, which can be parametrized by arbitrary parameters $\tau _{a}$. In coordinates we have \begin{equation} \gamma _a:\tau _a\mapsto x_a^{\mu}(\tau _{a}). \end{equation} The velocity of light is taken to be unity. Since in a Poincar\' e-invariant theory no particle can move with a velocity greater than the velocity of light, the world lines $\gamma_{a}$ must be time-like lines, and the tangent vectors \begin{equation} u_a^{\mu}=\frac{\d x_a^{\mu}}{\d\tau_a} \end{equation} obey the inequality \begin{equation} u_a^2\equiv \eta _{\mu \nu}u_a^{\mu}u_a^{\nu}\equiv u_a\cdot u_a>0. \end{equation} It is well known that all the physical information about the motion of the system is contained in the world lines $\gamma _{a}$ considered as unparametrized paths in the Minkowski space. Therefore, freedom in the choice of the parameters ${\tau _a}$ may be used for the simplification of the description. Particularly, we can choose a common parameter $t$ for all the world lines of the $N$-particle system. This parameter is defined by a set of $N$ relations of the following general form: \begin{equation} \Phi _a(x_1(t),\ldots ,x_N(t),u_1(t),\ldots ,u_N(t),t)=0. \end{equation} The geometrical concept of the forms of relativistic dynamics, originating with Dirac \cite{[14],[17]}, can be introduced within the framework of the single-time Lagrangian or Hamiltonian descriptions in the following way \cite{[15],[10],nlmp}. Let us consider the foliation $\Sigma = \{ \Sigma _t|t\in {\mathbb R}\}$ of the Minkowski space ${\mathbb M}_4$ by the hypersurfaces $\Sigma _t$ with the equation \begin{equation}\label{1.2} t=\sigma (x),\qquad t\in {\mathbb R}. \end{equation} We require that every hypersurface $\Sigma _{t} = \{ x\in {\mathbb M}_{4}|\sigma (x)=t\}$ must intersect the world lines $\gamma _{a}$ of all the particles at one and only one point \begin{equation} x_{a}(t)=\gamma _{a}\bigcap \Sigma _{t} . \end{equation} This allows us to consider $t$ as an evolution parameter of the system \cite{[17],[18]}. In a Poincar\' e-invariant theory, when we consider only time-like world lines, the hypersurfaces (2.5) must be space-like or isotropic, \begin{equation} \eta _{\mu \nu}(\partial ^{\mu}\sigma )(\partial ^{\nu}\sigma )\geq 0, \end{equation} where $\partial ^{\mu}=\partial /\partial x_{\mu}$.
Then we have $\partial _0\sigma >0$, and the hypersurface equation (2.5) has the unique solution $x^0=\varphi (t,{\bf x})$, where ${\bf x}=(x^i)$, $i=1,2,3$. Therefore, the constraint $x_{a}(t)\in \Sigma _{t}$ enables us to determine the zeroth component of $x_a(t)$ in terms of $t$ and $x_a^i(t)$. The parametric equations (2.1) of the world lines of the particles in the given form of dynamics are as follows: \begin{equation} x^0=\varphi (t,{\bf x}_a(t))\equiv \varphi _a ,\qquad x^i=x^i_a(t). \end{equation} The evolution of the system is determined by $3N$ functions $t\mapsto x_a^i(t)$. They may be considered as representatives (in some local chart) of the sections $s:{\mathbb R}\rightarrow{\mathbb F},t\mapsto (t,x_a^i(t))$ of the trivial fibre bundle $\pi :{\mathbb F}\rightarrow {\mathbb R}$ with $3N$-dimensional fibre space $\cM={\mathbb R}^{3N}$ \cite{28}. The latter constitutes the configuration space of our system. Three Dirac forms of relativistic dynamics correspond to the following hypersurfaces (2.5): $x^0=t$ (instant form), $x^0-x^3=t$ or $x^0+x^3=t$ (front form), and $\eta _{\mu \nu}x^{\mu}x^{\nu} =t^2$ (point form). Other examples may be found in \cite{[15]}. Now we assume that the evolution of the system under consideration is completely determined by specifying the action functional \begin{equation} S=\int \d tL. \end{equation} The Lagrangian function $L:J^{\infty}\pi \rightarrow {\mathbb R}$ is defined on the infinite order jet space of the fibre bundle $\pi :{\mathbb F}\rightarrow {\mathbb R}$ with the standard coordinates $x_a^{i(s)}$ \cite{[22],[24]}. The values of these coordinates for the section $s:t\mapsto (t,x_a^i(t))$ belonging to the corresponding equivalence class from $J^{\infty}\pi$ are $x_a^{i(s)}(t)=\d^sx_a^i(t)/\d t^s$ , $s=0,1,2,\ldots$. The variational principle $\delta S=0$ with the action (2.9) gives the Euler-Lagrange equations of motion \begin{equation} \sum _{s=0}^{\infty}(-D)^{s}\frac {\partial L} {\partial x_{a}^{i(s)}}=0, \end{equation} where $D$ is the total time derivative operator \begin{equation} D=\sum _{a}\sum _{s=0}^{\infty}x_{a}^{i(s+1)}\frac {\partial }{\partial x_{a}^{i(s)}}+ \frac{ \partial }{\partial t} . \end{equation} Let us consider an arbitrary $r$-parametric Lie group ${\cal G}$ acting on ${\mathbb M}_4$ by the point transformations $g:{\mathbb M}_4\rightarrow {\mathbb M}_4$: \begin{equation} x^{\mu}\mapsto (gx)^{\mu}=x^{\mu}+ \lambda ^{\alpha}\zeta _{\alpha}^{\mu}(x) +o(\lambda) , \end{equation} where $\lambda ^{\alpha}$, $\alpha =1,\ldots ,r$ are the parameters of the group. The vector fields \begin{equation} {\cal X}_{\alpha}=\zeta _{\alpha}^{\mu}\partial _{\mu} \end{equation} satisfy the commutation relations of the Lie algebra of group ${\cal G}$, \begin{equation} [{\cal X}_{\alpha},{\cal X}_{\beta}]=c_{\alpha \beta}^{\gamma}{\cal X}_{\gamma}, \qquad \alpha ,\beta ,\gamma =1,\ldots ,r , \end{equation} with the structure constants $c_{\alpha \beta}^{\gamma}$. The action (2.12) of group ${\cal G}$ on ${\mathbb M}_4$ can be easily extended to the world lines $\gamma _a$ by the rule: \begin{equation} \gamma _a\mapsto g\gamma _a=\{ gx|x\in {\rm Im}\gamma _a\}. \end{equation} But in the given form of dynamics the world lines $\gamma _a$ are determined by the functions $t\mapsto x_{a}^{i}(t)$ or, in other words, by sections $s$ of the bundle $\pi$. Therefore, (2.15) induces an action of group ${\cal G}$ on $J^{\infty}\pi $ by the Lie-B\"acklund transformations \cite{[22],[23],[24]}.
As was shown in \cite{[15]}, the generators of such transformations have the form: \begin{equation} X_{\alpha}=\sum _{a}\sum _{s=0}^{\infty}(D^{s}\xi _{a\alpha}^{i}) \frac {\partial }{\partial x_{a}^{i(s)}} , \end{equation} where \begin{equation} \xi _{a\alpha}^{i}=\zeta _{a\alpha}^{i}-v_{a}^{i}\eta _{a\alpha} , \end{equation} and \begin{equation} \zeta _{a\alpha}^{i}=\zeta _{\alpha}^{i}(t,{\bf x_{a}}) ,\qquad \eta _{a\alpha}=({\it X}_{\alpha}\sigma)(t,{\bf x_{a}}) ,\qquad v_{a}^{i}=x_{a}^{i(1)} . \end{equation} The Lie-B\"acklund vector fields (2.16) obey the same commutation relations as (2.14), \begin{equation} [X_{\alpha},X_{\beta}]=c_{\alpha \beta}^{\gamma}X_{\gamma}, \end{equation} and commute with the total time derivative (2.11) \begin{equation} [X_{\alpha},D]=0 . \end{equation} For the Poincar\' e group we have the following ten vector fields corresponding to the natural action of ${\cal P}(1,3)$ on ${\mathbb M}_4$: \begin{equation} {\cal X}_{\mu}^{T}=\partial _{\mu} , \end{equation} \begin{equation} {\cal X}_{\mu \nu}^{L}=x_{\mu}\partial _{\nu}-x_{\nu}\partial _{\mu} , \end{equation} with the commutation relations \begin{equation} [{\cal X}_{\mu}^{T},{\cal X}_{\nu}^{T}]=0 , \end{equation} \begin{equation} [{\cal X}_{\mu}^{T},{\cal X}_{\rho \sigma}^{L}]= \eta _{\mu \rho}{\cal X}_{\sigma}^{T}- \eta _{\mu \sigma}{\cal X}_{\rho}^{T} , \end{equation} \begin{equation} [{\cal X}_{\mu \nu}^{L},{\cal X}_{\rho \sigma}^{L}]= \eta _{\nu \rho}{\cal X}_{\mu \sigma}^{L}+ \eta _{\mu \sigma}{\cal X}_{\nu \rho}^{L}- \eta _{\mu \rho}{\cal X}_{\nu \sigma}^{L}- \eta _{\nu \sigma}{\cal X}_{\mu \rho}^{L} . \end{equation} Thus, we obtain the following realization of the Poincar\' e algebra in terms of Lie-B\"acklund vector fields (2.16): \begin{equation} X^{T}_{\mu}=\sum _{a}\sum _{s=0}^{\infty}D^{s} [\delta _{\mu}^{i}-v^{i}_{a}\sigma _{a\mu}] \frac {\partial }{\partial x_{a}^{i(s)}} , \end{equation} \begin{equation} X^{L}_{\mu \nu}=\sum _{a}\sum _{s=0}^{\infty}D^{s} [x_{a\mu}\delta _{\nu}^{i}-x_{a\nu}\delta _{\mu}^{i}- v^{i}_{a}(x_{a\mu}\sigma _{a\nu}-x_{a\nu}\sigma _{a\mu})] \frac {\partial }{\partial x_{a}^{i(s)}} , \end{equation} where we must use (2.8) for the elimination of $x^0_a$, and we denote \begin{equation} \sigma _{a\mu}\equiv (\partial _{\mu}\sigma )(t, {\bf x}_{a}) . \end{equation} Making use of the hypersurface equation (2.5) we find: \begin{equation} \sigma _{a0}=(\partial \varphi _{a}/\partial t)^{-1} \equiv \varphi _{at}^{-1}, \end{equation} \begin{equation} \sigma _{ai}=-\varphi _{at}^{-1}(\partial \varphi _{a}/\partial x_{ai}) \equiv -\varphi _{at}^{-1}\varphi _{ai} . \end{equation} It is convenient to introduce the vector fields \begin{equation} {\cal H}=-X^T_0,\quad {\cal P}_{i}=X^{T}_{i},\quad {\cal J}_{i}=-\frac {1}{2}\varepsilon _{ijk}X^{L}_{jk},\quad {\cal K}_{i}=X^L_{i0}, \end{equation} obeying the following commutation relations: \begin{equation} [{\cal H},{\cal P}_{i}]=0 ,\qquad [{\cal P}_{i},{\cal P}_{j}]=0 ,\qquad [{\cal H},{\cal J}_{i}]=0 ,\qquad [{\cal P}_{i},{\cal J}_{k}]=-\varepsilon _{ikl}{\cal P}_{l} , \end{equation} \begin{equation} [{\cal J}_{i},{\cal J}_{k}]=-\varepsilon _{ikl}{\cal J}_{l} ,\qquad [{\cal K}_{i},{\cal J}_{k}]=-\varepsilon _{ikl}{\cal K}_{l} ,\qquad [{\cal K}_{i},{\cal K}_{j}]=\varepsilon _{ijk}{\cal J}_{k} , \end{equation} \begin{equation} [{\cal H},{\cal K}_{i}]={\cal P}_{i} ,\qquad [{\cal P}_{i},{\cal K}_{j}]=\delta _{ij}{\cal H} .
\end{equation} Inserting (2.29), (2.30) into (2.26), (2.27), we obtain the realization of the Poincar\'e algebra which is convenient for the consideration of the symmetries of a single-time three-dimensional Lagrangian description \cite{[15]}: \begin{equation} {\cal H}=\sum _{a}\sum _{s=0}^{\infty}D^{s} [v^{i}_{a}\varphi ^{-1}_{at}] \frac {\partial }{\partial x_{a}^{i(s)}} , \end{equation} \begin{equation} {\cal P}_{i}=\sum _{a}\sum _{s=0}^{\infty}D^{s} [\delta ^{j}_{i}+v^{j}_{a}\varphi _{ai}\varphi ^{-1}_{at}] \frac {\partial }{\partial x_{a}^{j(s)}} , \end{equation} \begin{equation} {\cal J}_{i}=\varepsilon _{ikl}\sum _{a}\sum _{s=0}^{\infty}D^{s} [x^{k}_{a}(\delta ^{j}_{l}+v^{j}_{a}\varphi _{al}\varphi ^{-1}_{at})] \frac {\partial }{\partial x_{a}^{j(s)}} , \end{equation} \begin{equation} {\cal K}_{i}=\sum _{a}\sum _{s=0}^{\infty}D^{s} [-\varphi _{a}\delta ^{j}_{i}+v^{j}_{a}(x_{ai}-\varphi _{a}\varphi _{ai}) \varphi ^{-1}_{at}] \frac {\partial }{\partial x_{a}^{j(s)}} . \end{equation} The symmetry of the Lagrangian description of an interacting particle system under group ${\cal G}$ means the invariance of the Euler-Lagrange equations (2.10) under the corresponding Lie-B\"acklund transformations generated by the vector fields (2.16). The sufficient conditions for the symmetry under the Poincar\' e group have the form \cite{[13],[10]}: \begin{equation} X_{\alpha}L=D\Omega _{\alpha},\qquad \alpha =1,\ldots ,10 , \end{equation} with auxiliary functions $\Omega _{\alpha}$ satisfying the consistency relations \begin{equation} X_{\alpha}\Omega _{\beta}-X_{\beta}\Omega _{\alpha}= c_{\alpha \beta}^{\gamma}\Omega _{\gamma} . \end{equation} An important corollary of the symmetry conditions (2.39), (2.40) for an arbitrary $r$-parametric Lie group is the existence of $r$ conservation laws \begin{equation} DG_{\alpha}=0 ,\qquad \alpha =1,\ldots ,r , \end{equation} for quantities $G_{\alpha}$ which can be explicitly determined in terms of the Lagrangian function $L$ and the auxiliary functions $\Omega _{\alpha}$. This statement, which is well known as the Noether theorem, follows immediately from the identity \cite{[22],[23]} \begin{equation} X_{\alpha}L=\sum _{a}\xi ^{i}_{a\alpha}{\cal E}_{ai}L+D\sum _{a} \sum _{s=0}^{\infty}\pi _{ai,s}D^{s}\xi ^{i}_{a\alpha} , \end{equation} which holds for an arbitrary Lie-B\"acklund vector field (2.16). Here, \begin{equation} \pi _{ai,s}=\sum _{n=s}^{\infty}(-D)^{n-s} \frac {\partial L}{\partial x_{a}^{i(n+1)}} \end{equation} are the Ostrogradskyj momenta. Making use of the identity (2.42) in the symmetry conditions (2.39), one readily checks that for solutions of the Euler-Lagrange equations (2.10) the conservation laws (2.41) hold with \begin{equation} G_{\alpha}=\sum _{a}\sum _{s=0}^{\infty}\pi _{ai,s}D^{s}\xi ^{i}_{a\alpha}- \Omega _{\alpha} . \end{equation} In the general case the Poincar\' e-invariance conditions forbid the existence of interaction Lagrangians which are defined on the jet-space $J^r\pi $ with some finite $r$ (for example, with $r=1$). This leads to serious difficulties in the physical interpretation of the formalism, and, in fact, makes it impossible to obtain a closed form of the corresponding Hamiltonian functions. In the following we shall consider some exceptions to this rule. The first is offered by the front form of dynamics in the two-dimensional Minkowski space. In this case there exists a wide class of interaction Lagrangians for an $N$-particle system, which are defined on the first-order jet-space $J^1\pi $ \cite{[20]}.
The second consists in the consideration of a more general definition of the form of dynamics than (2.5). \section{Front form of dynamics in ${\mathbb M}_2$} In the two-dimensional space-time ${\mathbb M}_2$ the front form of dynamics corresponds to the foliation of ${\mathbb M}_2$ by isotropic hyperplanes (i.e., lines): \begin{equation}\label{s-1} x^0+x=t. \end{equation} In this form of dynamics, the Poincar\' e-invariance conditions for an $N$-particle system allow the existence of interaction Lagrangians which do not contain derivatives higher than the first order. The general form of such a Lagrangian function including only pairwise interactions is given by \cite{[20]}: \begin{equation} \label{s-2} L =-\sum_am_ak_a + \sumab r_{ab}V_{ab}(r_{ab}k_a^{-1}, r_{ab}k_b^{-1}), \end{equation} where $k_a = \sqrt {1-2v_a}$, $r_{ab}\equiv x_a-x_b$, $a,b=\overline{1,N}$, and $V_{ab}$ are arbitrary functions of the indicated arguments. As a result of the Poincar\' e invariance of the Lagrangian function \re{s-2}, there exist three conserved quantities: the energy $E$, the total momentum $P$, and the center-of-inertia integral of motion $K$. They have the form \cite{[20]}: \begin{eqnarray} \label{s-3} E&=&\sum_{a=1}^{N}v_{a}\frac{\partial L}{\partial v_{a}}- L,\quad P=\sum_{a=1}^{N}\frac{\partial L}{\partial v_{a}}-E, \nonumber\\ K&=&-tP+\sum_{a=1}^{N}x_a\frac{\partial L}{\partial v_{a}}. \end{eqnarray} The existence of the interaction Lagrangians \re{s-2} permits one to trace quite easily the relations between various formalisms of relativistic dynamics and to find out special features of relativistic particle systems. In spite of the fact that the Lagrangian function (\ref{s-2}) does not contain higher derivatives and the transition to the Hamiltonian description is a usual Legendre transformation, the investigation of exactly solvable models shows some new features which do not occur in non-relativistic mechanics. In classical mechanics, the Lagrangian function is determined on the tangent bundle $T\cM$, $L:T\cM\to \R$ \cite{28}. If the configuration space $\cM$ is diffeomorphic to ${\mathbb R}^N$, then the tangent bundle is a trivial one: $T\cM={\mathbb R}^N\times{\mathbb R}^N$. This means that a single chart with coordinates $(x_1,...,x_N, v_1,...,v_N)$ covers the whole of $T\cM$. For the Lagrangian \re{s-2} the configuration space $\cM$ coincides with the whole ${\mathbb R}^N$ or at least with a disconnected union of open sets in ${\mathbb R}^N$. Hence, one can expect that there should not be any complications connected with the global structure. But the Lagrangian function (\ref{s-2}) is determined on a submanifold $\cQ _f$ of $T\cM$. This submanifold is defined by the inequalities \begin{equation}\label{s-2.8} v_a < 1/2, \end{equation} which reflect the time-like character of the particle world lines in ${\mathbb M}_2$. The submanifold $\cQ _f$ does not have the structure of a tangent bundle. Moreover, we must restrict the Lagrangian description to a smaller region than $T\cM$ for another reason. The Hamilton principle $\delta S=0$ leads to Euler-Lagrange equations if the Hessian is positive definite: \begin{equation}\label{s-3-1} {\sf h}={\rm det}||\pl^2L/\pl v_a\pl v_b||>0. \end{equation} For the Lagrangian function \re{s-2} the Hessian is, in general, a complicated function of the coordinate differences and velocities: ${\sf h}={\sf h}(r_{ab},k_c)$. Therefore, inequality \re{s-3-1} defines an open region $\cQ \subset T\cM\approx {\mathbb R}^{2N}$.
This region also does not have the structure of a tangent bundle and for a free-particle system coincides with $\cQ _f$. This would be unimportant if the system moved inside the region (3.5) and did not reach the boundary \begin{equation} \partial\cQ =\{(x_a,v_a)\in T\cM \,|\, {\sf h}=0 \ {\rm or}\ {\sf h}^{-1}=0\}. \end{equation} A difficulty arises, however, when the system reaches the points of the boundary region (singular points) at a finite value of the evolution parameter $t$ \cite{Sh}. The theorem of existence and uniqueness for the Euler-Lagrange differential equations breaks down at singular points, and the Lagrangian system is not defined there. Therefore, we cannot prolong the evolution of the system beyond the critical points within the framework of the basic Lagrangian description. The way of overcoming this difficulty is offered by the Hamiltonian description. It is well known that the Legendre transformation is a differentiable mapping ${\pounds}:T\cM\to T^*\cM$. The transition from the Lagrangian \re{s-2} to the Hamiltonian formalism may be performed by the usual Legendre transformation. But this transformation is a diffeomorphism only in the region $\cQ$. It maps the open region $\cQ \subset {\mathbb R}^{2N}$ to the open region $\pounds\cQ\subset T^\ast \cM\approx{\mathbb R}^{2N}$. The Hamiltonian description is equivalent to the Lagrangian one only in the region $\pounds\cQ$ \cite{28}. In a strict sense the motion in the Hamiltonian case is well defined on $\pounds\cQ$ only. In other words, we should consider $\pounds\cQ$ as the whole phase space of the system. After the Legendre transformation is performed, the conserved quantities \re{s-3} become canonical generators of the Poincar\' e group ${\cal P}(1,1)$: \begin{eqnarray} \label{s-4} P_+&=&\sum ^N_{a=1}p_a ,\qquad K=\sum ^N_ {a=1}x_ap_a,\\ \label{s-5} P_-&=& \sum ^N_{a=1} \frac {m^2_a}{p_a} + \frac {1}{P_+} V(rp_b, r_{1c}/r)~. \end{eqnarray} They satisfy the following Poisson bracket relations: \begin{equation} \label{s-6} \{ P_{+},P_{-}\} =0 ,\qquad \{ K,P_{\pm}\}=\pm P_{\pm} . \end{equation} Here we have introduced the quantities $P_{\pm}=E\pm P$, which are more convenient in the front form. The classical total mass squared function $M^{2}=P_{+}P_{-}$ has vanishing Poisson brackets with all the generators (\ref{s-4}), (\ref{s-5}). If we deal with the Lagrangian region $\pounds\cQ $ within the Hamiltonian description, we shall obtain the same results as in the Lagrangian case. For systems which reach the points of $\partial\cQ$, the Lagrangian description leads to disconnected segments of world lines \cite{MST86}. To obtain the whole evolution of such systems we have to determine the motion of the system beyond the Lagrangian region. In the following we shall demonstrate for certain relativistic models that the Hamiltonian formalism permits one to prolong the evolution of the system beyond singular points and obtain smooth world lines in $\M_2$, as well as in the four-dimensional space-time $\M_4$ (see sections 8 and 7.1, respectively). \section{Isotropic forms of dynamics} For a two-particle system in ${\mathbb M}_4$ the class of isotropic forms of dynamics corresponds to the following definition of simultaneity between the events of particle world lines \cite{dt}: \begin{equation}\label{3.1} [x_1(t)-x_2(t)]^2=0 \end{equation} with the supplementary condition \begin{equation}\label{3.2} {\rm sgn}[x^0_1(t)-x^0_2(t)]=\epsilon, \end{equation} where $\epsilon =\pm 1$.
Such a description has been developed within the framework of the predictive relativistic mechanics in a series of papers by K\"unzle \cite{k1,k2,k3}. The idea of this definition of simultaneity was formulated in the classic work of Van Dam and Wigner \cite{vvw}. In the context of relativistic Lagrangian and Hamiltonian mechanics, descriptions based on equation~(\ref{3.1}) were elaborated in \cite{dt,Duv96,Duv97}. Equations~(\ref{3.1}) and (\ref{3.2}) determine the difference of the zeroth components: \begin{equation}\label{3.3} x^0_1(t)-x^0_2(t)=\epsilon |{\bf x}_1(t)-{\bf x}_2(t)|. \end{equation} For the definition of the value of the common evolution parameter $t$ we choose the relation \begin{equation}\label{3.4} \sigma \left (\frac {x_1(t)+x_2(t)}{2}\right )=t, \end{equation} where $\sigma (x)$ is the same function as in the definition of the geometrical forms of dynamics (\ref{1.2}). Therefore, we have \begin{equation}\label{3.5} \frac {x^0_1(t)+x^0_2(t)}{2}= \varphi \left (t, \frac {{\bf x}_1(t)+{\bf x}_2(t)}{2}\right ), \end{equation} and \begin{equation}\label{3.6} x^0_1=\varphi (t, {\bf y})+ \frac 12\epsilon |{\bf r}|, \quad x^0_2=\varphi (t, {\bf y})- \frac 12\epsilon |{\bf r}|. \end{equation} Here and henceforth the variables $y^{\mu }\equiv (x^{\mu }_1+x^{\mu }_2)/2$ and $r^{\mu }\equiv x^{\mu }_1-x^{\mu }_2$ are used. If we put $\varphi (t,{\bf y})=t$ as in the instant form of dynamics, we obtain \begin{equation}\label{3.7} x^0_a=t+\frac 12(-1)^{\bar a}\epsilon |{\bf r}|, \qquad a = 1,2;\ \ \bar a \equiv 3-a. \end{equation} These relations have been used in \cite{k1,k2}. When we choose $\sigma (x)$ as in the front form $[\varphi (t, {\bf y})=t-y^3]$, we obtain \begin{equation}\label{3.8} x^0_a=t+ y^3 +\frac 12 (-1)^{\bar a}\epsilon |{\bf r}|. \end{equation} In the two-dimensional space-time (\ref{3.8}) reduces to the geometrical definition of the front form provided $\epsilon ={\rm sgn}(x_2-x_1)$. The general structure of the Lagrange function is again determined by the Poincar\'e-invariance conditions. Their formulation requires the realization of the algebra ${\mathfrak p}(1,3)$ by the Lie-B\"acklund vector fields (2.16). In paper \cite{dt} it was shown that the components of the corresponding fields have the form (2.17), where \begin{equation}\label{3.13} \zeta ^i_{a\alpha }=\zeta ^i_{\alpha }[x_a(t)] \end{equation} and \begin{eqnarray}\label{3.11} \eta _{a\alpha }&=&\frac 12 [\zeta ^{\nu }_{\alpha }(x_1)+ \zeta ^{\nu }_{\alpha }(x_2)] \partial_{\nu }\sigma \left (\frac {x_1+x_2}{2}\right ) \nonumber \\ &=&(\zeta ^{\nu }_{\alpha }\partial_{\nu }\sigma ) \left (\frac {x_1+x_2}{2}\right )=\eta _{\alpha }(t,{\bf y}). \end{eqnarray} All the zeroth components here must be excluded by means of relations \re{3.6}. Note that $\eta _{\alpha }$ is independent of the particle labels. It is a matter of simple calculation to verify that such vector fields satisfy the commutation relations (2.19). The Poincar\'e-invariance conditions have the form (2.39), where we can put \begin{equation}\label{3.14} \Omega _{\alpha }=- \eta _{\alpha }L. \end{equation} Such a choice of the auxiliary functions $\Omega _{\alpha }$ enables the invariance conditions to be expressed in the form: \begin{equation}\label{3.15} {\hat X}_{\alpha }L + LD\eta _{\alpha }=0, \end{equation} where the vector fields \begin{equation}\label{3.15a} {\hat X}_{\alpha }=X_{\alpha } + \eta _{\alpha }D \end{equation} generate the point transformations of the extended configuration space ${\mathbb F}={\mathbb R}\times \cM$.
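Relations (\ref{3.7}) can be checked directly; a tiny numerical sketch (Python; the instant-form choice $\varphi(t,{\bf y})=t$ is from the text, while the sample values are our own) confirms that the zeroth components enforce the light-cone simultaneity (\ref{3.1}):
\begin{verbatim}
# Numeric check of (3.7) against (3.1), instant form phi(t,y) = t.
# Sample positions are arbitrary; only the identity matters.
import numpy as np

rng = np.random.default_rng(0)
t, eps = 1.7, 1
x1, x2 = rng.normal(size=3), rng.normal(size=3)   # spatial positions
r = x1 - x2
x10 = t + 0.5 * eps * np.linalg.norm(r)           # a=1: bar a = 2, (-1)^2 = +1
x20 = t - 0.5 * eps * np.linalg.norm(r)           # a=2: bar a = 1, (-1)^1 = -1
print((x10 - x20)**2 - r @ r)                     # -> 0.0 up to rounding
\end{verbatim}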
As in the case of the front form of dynamics in ${\mathbb M}_2$, equations~(\ref{3.15}) allow a large class of exact solutions depending on the derivatives of any finite order. If we suppose that the Lagrangian contains only the first derivatives, i.e. it is defined on the space $J^1\pi$, we obtain \begin{equation}\label{3.17} \eta _{\alpha }\frac{\partial L}{\partial t}+ \sum _{a=1}^2\left ( \zeta ^i_{a\alpha }\frac{\partial L}{\partial x_a^i}+ (D\zeta ^i_{a\alpha } - v^i_aD\eta _{\alpha })\frac{\partial L}{\partial v_a^i}\right ) +LD\eta _{\alpha}=0, \end{equation} where $\zeta ^i_{a\alpha }\equiv \zeta ^i_{\alpha }[x_a(t)]$. The general solution to these equations can be presented in the form \cite{dt}: \begin{equation}\label{3.18} L=\vartheta F(\sigma _1,\sigma _2,\omega ), \end{equation} where $$ \vartheta = (x_1^{\mu }-x_2^{\mu })u_{1\mu}= (x_1^{\mu }-x_2^{\mu })u_{2\mu}= \epsilon |{\bf r}|D\varphi (t, {\bf y})-{\bf r}\cdot \dot {\bf y}; $$ $$ \Gamma _a^{-2}=u_a^{\mu}u_{a\mu}= \left (D\varphi (t, {\bf y})-\frac 12 (-1)^a\epsilon {\bf n}\cdot {\bf v} \right )^2-v_a^2; $$ $$ \qquad {\bf n}\equiv {\bf r}/r,\qquad r\equiv |{\bf r}|,\qquad {\bf v}\equiv {\bf v}_1- {\bf v}_2; $$ $$ \sigma _a=\Gamma _a\vartheta = r^{\nu }\hat u_{a\nu}, \qquad \hat u_{a\nu}\equiv u_{a\nu} /\sqrt{u_a^2}; $$ $$ \omega =\Gamma _1\Gamma _2\left [(D\varphi (t, {\bf y}))^2 -{\bf v}_1\cdot {\bf v}_2-\frac 14({\bf n}\cdot {\bf v})^2\right ]= \hat u_{1\nu}\hat u_2^{\nu}, $$ with $F$ an arbitrary (smooth) function of three variables. In the front form of dynamics in ${\mathbb M}_2$ we have $\vartheta = r$, $\Gamma _a= (1 - 2v_a)^{-1/2}=k_a^{-1}$, and $\omega $ is a function of the invariants $\sigma _1$, $\sigma _2$: \begin{equation}\label{3.20} \omega =\frac 12\left (\frac {\Gamma _1}{\Gamma _2}+ \frac {\Gamma _2}{\Gamma _1}\right )= \frac 12\left (\frac {\sigma _1}{\sigma _2}+ \frac {\sigma _2}{\sigma _1}\right ). \end{equation} Invariance conditions (\ref{3.15}) lead to the conservation laws for the quantities (2.44). In our case they have the form: \begin{equation}\label{3.21} G_{\alpha }=\sum_{a=1}^2(\zeta ^i_{a\alpha } - v^i_a\eta _{\alpha }) \frac{\partial L}{\partial v_a^i} - \Omega _{\alpha }. \end{equation} Taking into account (\ref{3.14}), they can be expressed as \begin{equation}\label{3.22} G_{\alpha }=\sum_{a=1}^2\zeta ^i_{a\alpha } \frac{\partial L}{\partial v_a^i} - \eta _{\alpha }H, \end{equation} where \begin{equation}\label{3.23} H=\sum_{a=1}^2v^i_a\frac{\partial L}{\partial v_a^i} - L. \end{equation} Let us introduce the Poincar\'e-invariant functions: \begin{equation}\label{3.24} A_a=\sigma ^2_a\frac{\partial F}{\partial \sigma _a}+ (\omega \sigma _a-\sigma_{\bar a})\frac{\partial F}{\partial \omega }, \end{equation} \begin{equation}\label{3.25} B_a=\sigma ^2_a\frac{\partial F}{\partial \sigma _a}+ (\omega \sigma _a+\sigma_{\bar a})\frac{\partial F}{\partial \omega }. \end{equation} They are not independent: \begin{equation}\label{3.26} \sigma _1(A_1 - B_1)=\sigma _2(A_2- B_2). \end{equation} In terms of these functions we have: $$ \frac{\partial L}{\partial v_a^i}= \frac 12(r\varphi _i(t,{\bf y}) -\epsilon r_i)\tilde F + v_{ai}\Gamma _a\sigma _a \left(\sigma _a\frac{\partial F}{\partial \sigma _a} +\omega \frac{\partial F}{\partial \omega }\right )- v_{bi}\Gamma_{\bar a}\sigma _a\frac{\partial F}{\partial \omega } $$ \begin{equation}\label{3.27} -\frac 12\varphi _i(t,{\bf y})(\Gamma _1u^0_1A_1+\Gamma _2u^0_2A_2)+ \frac 12(-1)^a\epsilon n_i(\Gamma _1u^0_1B_1-\Gamma _2u^0_2B_2),
\end{equation} where \begin{equation}\label{3.28} \tilde F=F+\sum _{a=1}^2\sigma _a\frac{\partial F}{\partial \sigma _a }. \end{equation} The function (\ref{3.23}) is easily found to be \begin{equation}\label{3.29} H=\varphi _t(t,{\bf y})(-r\tilde F+\Gamma _1u^0_1A_1+\Gamma _2u^0_2A_2). \end{equation} Explicitly, the integrals of motion (\ref{3.22}) are given by $$ G_{\alpha }=-\zeta ^0_{\alpha }(t,{\bf y})\varphi _t(t,{\bf y})^{-1}H+ \zeta ^i_{\alpha }(t,{\bf y})(-\epsilon r_i\tilde F+\Gamma _1v_{1i}A_1+ \Gamma _2v_{2i}A_2) + $$ \begin{equation}\label{3.30} +\frac 12(\zeta ^i_{1\alpha }-\zeta ^i_{2\alpha })[\Gamma _2(\epsilon n_iu^0_2- v_{2i})B_2 -\Gamma _1(\epsilon n_iu^0_1- v_{1i})B_1]. \end{equation} Inserting the expressions for the functions $\zeta ^{\nu }_{\alpha }$ which correspond to the generators (2.21), (2.22) of the Poincar\'e group, we obtain the following formulae for the conserved energy $E$, momentum ${\bf P}$, angular momentum ${\bf J}$ and the center-of-inertia integral of motion ${\bf K}$: \begin{equation}\label{3.31} E=\varphi ^{-1}_tH= -r\tilde F+\Gamma _1u^0_1A_1+\Gamma _2u^0_2A_2, \end{equation} \begin{equation}\label{3.32} {\bf P}=-\epsilon {\bf r}\tilde F+\Gamma _1{\bf v}_1A_1+\Gamma _2{\bf v}_2A_2, \end{equation} \begin{equation}\label{3.33} {\bf J}={\bf y\times P} +\frac 12 {\bf r\times }(\Gamma _1{\bf v}_1B_1- \Gamma _2{\bf v}_2B_2), \end{equation} \begin{equation}\label{3.34} {\bf K}={\bf y}E - \varphi (t,{\bf y}){\bf P} -\frac 12[\Gamma _2({\bf r}u^0_2-\epsilon r{\bf v}_2)B_2- \Gamma _1({\bf r}u^0_1-\epsilon r{\bf v}_1)B_1]. \end{equation} We note that the expressions (\ref{3.31}), (\ref{3.32}) can be united into a 4-vector of momentum $P_{\mu }$, just as equations~(\ref{3.33}), (\ref{3.34}) represent a 4-tensor of angular momentum $J_{\mu \nu }$: \begin{equation}\label{3.35} P_{\mu }=\epsilon r_{\mu }\tilde F-\hat{u}_{1\mu }A_1-\hat{u}_{2\mu }A_2, \end{equation} \begin{equation}\label{3.36} J_{\mu \nu }=\frac 12 \left (y_{\nu }P_{\mu }-y_{\mu }P_{\nu }- r_{\nu }(\hat{u}_{1\mu }B_1-\hat{u}_{2\mu }B_2) +r_{\mu }(\hat{u}_{1\nu }B_1-\hat{u}_{2\nu }B_2)\right ). \end{equation} Here \begin{equation}\label{3.37} E=-P_0, \quad J_i=\epsilon _{ilk}J^{lk},\quad K_i=J_{0i}, \end{equation} and the zeroth components of the 4-vectors $x_a$ and $\hat{u}_a$ must be excluded with the help of relations (\ref{3.6}). The structure of the integrals of motion (\ref{3.35}), (\ref{3.36}) agrees with the results of Refs.~\cite{k1,k2}, which were derived within the framework of the predictive relativistic dynamics. The ten integrals of motion can be used to reduce the integration of the equations of motion to quadratures. But it is more convenient to perform such a reduction by means of the techniques of constrained Hamiltonian mechanics. \section{Fokker-type action and single-time Lagrangians} One of the possible ways to specify the form of the arbitrary functions entering the general solution of the Poincar\'e-invariance conditions is a comparison with the Fokker-type relativistic mechanics \cite{[4],[5],[6]}, the oldest attempt to construct a relativistic direct interaction theory related to the field description. It is based on a manifestly Poincar\'e-invariant variational principle formulated in terms of the four-dimensional coordinates and velocities of the particles. Such a variational principle was first introduced for the electromagnetic interaction by Schwarzschild, Tetrode, and Fokker at the beginning of the twentieth century and was developed by various authors (see Refs.
\cite{[1],[5],[6]} and references therein). Later this description was extended to other relativistic interactions. The equations of motion following from such a variational principle explicitly satisfy the demand of relativistic invariance and can be compared with the corresponding field theory expressions. However, this approach is not free of difficulties on both the physical and mathematical levels. The cost for a manifestly Poincar\'e-invariant four-dimensional description is the necessity to use a many-time formalism, which complicates the physical interpretation of its results. Mathematically, it is hard to justify the derivation of the equations of motion from action integrals which are, in general, divergent because the integration is carried out over the whole length of the particle world lines \cite{[4]}. Within the framework of Fokker-type mechanics the dynamics of a relativistic particle system is specified in a manifestly Poincar\'e- and reparametrization-invariant way on the basis of the variational principle $\delta S=0$ with the action being given by \begin{equation}\label{2} S=S_f-S_{int} , \end{equation} where \begin{equation} S_f=-\sum_{a}m_a\int \d\tau _a\sqrt {u_a^2} \end{equation} corresponds to a free-particle system and \begin{equation} S_{int}=\sumab \int \d\tau _a\int \d\tau _b \Lambda _{ab}(x_a,x_b,u_a,u_b) \end{equation} determines two-particle interactions. Here $\Lambda_{ab}$ are some functions depending on the four-dimensional particle coordinates $x_{a}^{\mu}$ and on the first derivatives $u_{a}^{\mu}$. They have the form \cite{[6],Ram73}: \begin{equation} \Lambda_{ab}=\sqrt {u_{a}^{2}u_{b}^{2}}U_{ab}(x_{a},x_{b}, \hat u_{a},\hat u_{b}), \end{equation} where $\hat u_{a}^{\mu}=u_{a}^{\mu}/\sqrt {u_{a}^{2}}$ and the function $U_{ab}$ (which we shall call the {\it Fokker potential\/}) depends on the following set of two-body Lorentz scalars: \begin{equation} \varrho _{ab}=(x_{a}-x_{b})^{2},\qquad \sigma _{ab}=(x_{a}-x_{b})\cdot \hat u_{a},\qquad \omega _{ab}=\hat u_{a}\cdot \hat u_{b}; \end{equation} that is, \begin{equation} U_{ab}=U_{ab}(\varrho _{ab},\sigma _{ab},\sigma _{ba},\omega _{ab}). \end{equation} In papers \cite{[8],[9]} it was shown that many-time Fokker-type action integrals can be transformed into single-time actions with non-local Lagrangians depending on the three-dimensional coordinates of the particles and on all the derivatives of the coordinates with respect to the parameter $t$. Such Lagrangians provide us with a useful tool for the consideration of various approximations \cite{[8],[9],[10]}, as well as for the transition to the predictive relativistic mechanics and the Hamiltonian formalism \cite{[11],[12]}. It was demonstrated \cite{[9]} that non-local Lagrangians corresponding to manifestly Poincar\'e-invariant action integrals satisfy the Poincar\'e-invariance conditions within the framework of the three-dimensional Lagrangian description of interacting particle systems \cite{[13]}. The conservation laws which follow from such an invariance were investigated via the Noether theorem. Moreover, the non-local single-time Lagrangians which are found on the basis of the Fokker-type action integrals represent, in a closed form, a wide class of solutions of equations~(2.29), which express the requirements of the invariance of the Lagrangian description of particle systems under the Poincar\'e group \cite{[9],nlmp}.
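For reference, the two-body scalars $\varrho_{ab}$, $\sigma_{ab}$, $\omega_{ab}$ defined above are straightforward to compute; the following sketch (Python with {\tt numpy}; the signature $(+,-,-,-)$ and all sample values are our own conventions, not fixed by the text) merely illustrates the definitions:
\begin{verbatim}
# Sketch of the two-body Lorentz scalars entering the Fokker potential.
# Metric signature (+,-,-,-) and sample data are our own assumptions.
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])        # Minkowski metric

def mdot(a, b):
    return a @ ETA @ b

def fokker_scalars(xa, xb, ua, ub):
    ua_hat = ua / np.sqrt(mdot(ua, ua))       # hat u_a = u_a / sqrt(u_a^2)
    ub_hat = ub / np.sqrt(mdot(ub, ub))
    r = xa - xb
    rho_ab   = mdot(r, r)                     # (x_a - x_b)^2
    sigma_ab = mdot(r, ua_hat)                # (x_a - x_b) . hat u_a
    sigma_ba = mdot(-r, ub_hat)               # (x_b - x_a) . hat u_b
    omega_ab = mdot(ua_hat, ub_hat)           # hat u_a . hat u_b
    return rho_ab, sigma_ab, sigma_ba, omega_ab

xa, xb = np.array([0.0, 1.0, 0.0, 0.0]), np.array([0.5, -1.0, 0.0, 0.0])
ua, ub = np.array([1.0, 0.3, 0.0, 0.0]), np.array([1.0, -0.2, 0.0, 0.0])
print(fokker_scalars(xa, xb, ua, ub))
\end{verbatim}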
If $U_{ab}$ happens to have the special form \begin{equation} U_{ab}=e_ae_b\omega _{ab}\delta (\varrho _{ab}) , \end{equation} then action (\ref{2}) describes the electromagnetic interaction of charges $e_a$ within the framework of the Tetrode-Fokker-Wheeler-Feynman electrodynamics. Such an approach has been extended to the interactions which are mediated by massive scalar and vector fields \cite{[4],[6],Ram73}: \begin{equation} {\rm the~scalar~case }\qquad U_{ab}=g_ag_bG^{sym}(\varrho _{ab}) , \end{equation} \begin{equation} {\rm the~vector~case }\qquad U_{ab}=g_ag_b\omega _{ab}G^{sym}(\varrho _{ab}) . \end{equation} In the above, $g_a$ is the coupling constant of particle $a$ and $G^{sym}(x)=G^{sym}(x^2)$ is the time-symmetric Green function of the Klein-Gordon equation \begin{equation} (\Box+\kappa ^2)G^{sym}(x)=4\pi \delta (x) , \end{equation} where $\Box\equiv \eta _{\mu \nu}\partial ^\mu \partial ^\nu$ and $\kappa $ is the mass of the field quanta. Explicitly, we have \begin{equation} G^{sym}(x)=\delta (x^2)-\Theta (x^2) \frac {\kappa}{2\sqrt{x^2}}J_1(\kappa \sqrt{x^2}) , \end{equation} where $\Theta (x)$ is the Heaviside step function and $J_1(x)$ is the Bessel function of order~1. There exists a wider class of physically important Fokker-type integrals which permit a field-theoretical interpretation of the interaction between particles. It corresponds to Fokker potentials of the following form: \begin{equation}\label{5.2} U_{ab} =g_ag_bf(\omega _{ab})G(\varrho _{ab}), \end{equation} where $f(\omega)$ depends upon the tensor structure of the fields mediating the interaction, and $G(x)$ is a symmetric Green function of the relevant wave equation. In the case of massless fields $G(x) = \delta (x^2)$. In particular, for interactions mediated by a massless field with given helicity $\lambda =\pm n$ we have \cite{Tre98} \begin{equation} f(\omega )= T_n(\omega). \end{equation} One more example is a model of confinement interaction \cite{ri}, for which \begin{equation} U_{ab} =g_ag_b\sigma _{ab}\sigma_{ba}\delta (\varrho_{ab}). \end{equation} Equivalently, this model can be presented in the form (\ref{5.2}) with $f(\omega )=\omega$ and the Green function $G(x)$ replaced by the ``phenomenological propagator'' $\Theta (x^2)$. Generally, the Fokker-type action with a time-symmetric Green function leads to Lagrangians which are non-local in time and to integro-differential or difference-differential equations of motion. This makes the analysis of particle motions a complicated task (except for the case of circular motion, when the solution may be constructed explicitly \cite{Shi63,And70}). An interesting possibility to obtain ordinary differential equations of motion is to replace $G$ on the right-hand side of equation~(\ref{5.2}) by the retarded (advanced) Green function of the d'Alembert equation \cite{[7]}: \begin{equation} G_{\epsilon}(x)=2\Theta(\epsilon x^0)\delta(x^2), \qquad \epsilon= \pm 1. \end{equation} In the case of a two-particle system this choice corresponds to the model with the following particle interaction: the advanced field of the first particle acts on the second particle and the retarded field of the second particle acts on the first particle. Such interactions correspond to the exact solutions of the Poincar\'e-invariance conditions considered above in the front and isotropic forms of dynamics \cite{[20],dt,Duv97}.
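As a side note, the massive Green function $G^{sym}$ above is easy to explore numerically; a minimal sketch (Python with {\tt scipy}; the sample values of $\kappa$ are ours) evaluates the smooth inside-the-cone tail that distinguishes the massive case from the massless one:
\begin{verbatim}
# Tail of G^sym inside the light cone (x^2 > 0):
#   -(kappa / (2 sqrt(x^2))) J_1(kappa sqrt(x^2)).
# The delta(x^2) term on the cone itself is not representable numerically.
import numpy as np
from scipy.special import j1

def gsym_tail(x2, kappa):
    s = np.sqrt(x2)
    return -0.5 * kappa * j1(kappa * s) / s

for kappa in (0.5, 1.0, 2.0):       # sample masses (our choice)
    print(f"kappa={kappa}: tail at x^2=1 is {gsym_tail(1.0, kappa):+.4f}")
# As kappa -> 0 the tail vanishes (J_1(z) ~ z/2), recovering G = delta(x^2).
\end{verbatim}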
In such models a one-to-one correspondence of points of the two particle world lines appears naturally, namely, of those points which satisfy the {\em light cone condition}: \begin{equation}\label{5.4} r^2 = 0 , \quad \epsilon r^0>0,\qquad {\rm i.e.,} \qquad \epsilon r^0 = |{\bf r}|. \end{equation} This correspondence allows one to reduce the Fokker-type integral to a manifestly covariant single-time action, \begin{equation}\label{5.5} S_I = \int\!\!d\tau\,(L + \lambda r^2), \end{equation} where the Lagrange multiplier $\lambda$ is introduced to take into account condition (\ref{5.4}) as a holonomic constraint (the boundary constraint $\epsilon r^0 > 0$ is also understood). An action of this kind occurs when the Fokker potential has a more general structure: \begin{equation} U = \tilde f(\omega, \sigma_1, \sigma_2)G_{\epsilon}(r), \qquad \sigma_1 \equiv \sigma_{12}, \quad \sigma_2 \equiv \sigma_{21}. \end{equation} The relevant Lagrangian function reads: \begin{equation}\label{5.7} L = - \sum^2_{a=1}m_a \lr{a} - \frac{\lr{1}\lr{2}}{\midl\dot y\cdot r\midr} \tilde f, \end{equation} where the dot denotes a derivative with respect to the parameter $\tau$. This yields a sufficiently wide class of two-particle {\em time-asymmetric} models. Their study would not be successful without an appropriate Hamiltonian description. \section{Hamiltonian description in the isotropic form of dynamics} The Lagrangian description in the configuration space ${\mathbb M}_4^2$ allows a natural transition to the manifestly covariant Hamiltonian description with constraints \cite{Dir50,[7]}. The corresponding phase space ${\rm T}^*\IM_4^2$ is 16-dimensional, with the Poisson brackets $[...,...]$. They have a standard form in terms of the covariant coordinates $x^{\mu}_a$ and the conjugate momenta defined in the usual manner: \begin{equation} p_{a\mu} = \partial L/\partial\dot x^{\mu}_a. \end{equation} Since the Lagrangian \re{5.7} and the constraint \re{5.4} are Poincar\'e-invariant, there exist ten Noether integrals of motion, \begin{equation} P_{\mu} = \sum^2_{a=1} p_{a\mu},\qquad J_{\mu\nu} = \sum^2_{a=1} \left(x_{a\mu}p_{a\nu} - x_{a\nu}p_{a\mu}\right). \end{equation} In the Hamiltonian description these $P_{\mu}$ and $J_{\mu\nu}$ are generators of the canonical realization of the Poincar\'e group. By virtue of the parametric invariance of the action (\ref{5.5}), the Lagrangian (\ref{5.7}) is singular. Hence the canonical Hamiltonian vanishes, while the dynamics of the system is determined by the {\em dynamical} constraint of the following general form: \begin{equation} \phi(P^2,~p_\bot^2, ~P \cdot r, ~p_\bot \cdot r) = 0, \end{equation} which appears together with the holonomic constraint (\ref{5.4}); here $p_{\bot\mu} \equiv p_{\mu} - r_{\mu}P \cdot p/P \cdot r$, while $P_{\mu}$ and $p_{\mu} = \ha(p_{1\mu} - p_{2\mu})$ are the canonical momenta conjugate to $y^{\mu}$ and $r^{\mu}$, respectively. Both constraints are of the first class, and they unambiguously determine the particle dynamics in ${\mathbb M}_4$ (i.e. the particle world lines). Since no secondary constraints occur, the system possesses 12 physically essential degrees of freedom. In order to single them out explicitly, two subsidiary {\em gauge fixing} constraints are needed. They can be given in the general form: \begin{equation} \chi (y,r,P,p_\bot,t) = 0, \qquad [\chi ,\phi ] \not= 0, \quad \partial \chi /\partial t \not= 0, \end{equation} \begin{equation} \psi (y,r,P,p) = 0, \qquad [\psi , r^2] \not= 0.
\end{equation} These constraints permit one to eliminate the redundant time-like variables $x^0_a$ and the corresponding momenta $p_{a0}$, and then to pass to the three-dimensional Hamiltonian description. The gauge fixing constraints do not influence the dynamics of the model, but their choice determines specific features of the final description, namely, the reduced phase space $\IP$ (as a submanifold of ${\rm T}^*\IM_4^2$), the induced Poisson brackets, and a possible choice of variables in terms of which these brackets take the canonical form. The explicit form of observables (i.e. the covariant particle positions, the generators of the Poincar\'e group, etc.), being functions of the canonical variables of the space $\IP$, also depends on the choice of the gauge fixing constraints. Thus, using the arbitrariness of this choice one can effectively influence the structure of the final description. A special choice of the constraint (6.4) in the form \begin{equation} \chi = \chi (y,r,P_0,\tau), \end{equation} allows one to avoid the well-known {\em no--interaction theorem} \cite{Car63}, that is, to pass to such a three-dimensional Hamiltonian description of time-asymmetric models in which the spatial covariant particle positions $x_a^i\ (a=1,2;\ i=1,2,3)$ become canonical variables. The three-dimensional Hamiltonian description in terms of covariant variables is desirable in various respects. For example, it simplifies the introduction of the interaction with external fields and allows a position representation on the quantum-mechanical level. But this description is not convenient for solving the two-body problem, because it does not provide a relevant separation of external and internal degrees of freedom. Another choice of the gauge fixing constraint (6.4), \begin{equation} \chi = y^0 + {\rm tr}(\Lambda\,\Omega\,\partial\Lambda^{\rm T}\!/ \partial P_0) - \tau = 0, \end{equation} where \begin{equation} \Omega_{\mu\nu} \equiv r_\mu p_\nu - r_\nu p_\mu, \end{equation} $|P| \equiv \sqrt{P^2}$, and the matrix $\|\Lambda(P/|P|)^\nu_{\ \mu}\| \in {\cS\cO(1,3)}$, $\Lambda^\mu_{\ \nu}P^\nu = \delta_0^\mu|P|$, describes the Lorentz transformation into the centre-of-mass (CM) reference frame, leads to a three-dimensional Hamiltonian description within the framework of the Bakamjian-Thomas model \cite{B-T53,D-K92,D-K93}. Within this description the ten generators of the Poincar\'e group $P_{\mu}$, $J_{\mu\nu}$, as well as the covariant particle positions $x_a^{\mu}$, are functions of the canonical variables ${\bf Q,~P}, ~\brho, ~\bpi$. The only arbitrary function entering the expressions for the canonical generators is the total mass $\midl P \midr= M(\brho, \bpi)$ of the system, which determines its internal dynamics. For time-asymmetric models this function is defined by the mass-shell equation \cite{dt,Duv97} which can be derived from the dynamical constraint via the following substitution of arguments on the l.-h.s.\ of (6.3): \begin{equation} \hspace{-2.5em} P^2\!\to M^2,~~p_\bot^2\!\to -\bpi^2, ~~P\!\cdot\!r\!\to\!\epsilon M\rho,~~ p_\bot\!\cdot\!r\!\to - \bpi\!\cdot\!\brho;~~~{\rm here}~~ \rho \equiv \midl \brho \midr\!. \end{equation}% Due to the Poincar\'e invariance of the description, it is sufficient to choose the CM reference frame in which ${\bf P=0,~Q=0}$. Accordingly, $P_0 = M$, $J_{0i}=0~ (i=1,2,3)$, and the components $S_i \equiv \frac{1}{2} \varepsilon_i^{\ jk}J_{jk}$ form a 3-vector of the total spin of the system (internal angular momentum), which is an integral of motion.
At this point the problem is reduced to a rotation-invariant problem for an effective single particle, which is integrable in terms of polar coordinates, \begin{equation} \brho = \rho{\bf e}_\rho,~~~~\bpi = \pi_\rho{\bf e}_\rho + S{\bf e}_{\varphi}/\rho. \end{equation} Here $S \equiv {}\midl {\bf S} \midr$; the unit vectors ${\bf e}_\rho$, ~${\bf e_{\varphi}}$ are orthogonal to ${\bf S}$; together with ${\bf S}$ they form a right-oriented triplet and can be decomposed in terms of the Cartesian unit vectors ${\bf i,~j}$: \begin{equation} {\bf e}_\rho = {\bf i} \cos \varphi + {\bf j} \sin \varphi,~~~~ {\bf e}_{\varphi} = - {\bf i} \sin \varphi + {\bf j} \cos \varphi, \end{equation} where $\varphi$ is the polar angle. The corresponding quadratures read: \begin{eqnarray} t - t_0 = \inta \d\rho\ \partial\pi_\rho(\rho,M,S)/\partial M,&& \\ \varphi - \varphi_0 = - \inta \d\rho\ \partial\pi_\rho(\rho,M,S)/\partial S,&& \end{eqnarray} where $t$ is the evolution parameter fixed by constraint (6.7) in the CM reference frame, and the radial momentum $\pi_\rho$ as a function of $\rho,~M,~S$ is defined by the mass-shell equation written down in terms of these variables, \begin{equation} \phi\left(M^2,\ - \bpi^2,\ \epsilon M\rho,\ -\bpi\cdot\brho\right) \equiv \phi\left(M^2,\ - \pi_\rho^2 - \frac{S^2}{\rho^2},\ \epsilon M\rho,\ -\pi_\rho\rho\right) = 0. \end{equation} The solution of the problem given in terms of canonical variables enables one to obtain the particle world lines in the Minkowski space using the following formulae \cite{dt,Duv97}: \begin{equation} x_a^0 = t + \frac{1}{2}(-)^{\bar a} \epsilon\rho, \end{equation} \begin{equation} {\bf x}_a = \frac{1}{2}(-)^{\bar a}\brho + \epsilon\rho\frac{\bpi}{M} \equiv \left(\frac{1}{2}(-)^{\bar a} + \epsilon \frac{\pi_\rho}{M} \right)\rho {\bf e}_\rho + \epsilon \frac{S}{M} {\bf e}_{\varphi}. \end{equation} In particular, the vector ${\bf r} = {\bf x}_1 - {\bf x}_2 = \brho$ characterizes the relative motion of the particles. \section{Time-asymmetric models of particle interactions with long-range and confining potentials} The explicit form of $\phi$ (6.3) depends in a complicated manner on the choice of the original Fokker potential. Its construction is the main difficulty which occurs in the analysis of time-asymmetric models. Let us split the function $\phi$ into two parts: \begin{equation} \phi_f + \phi_{int}=0, \end{equation} where \begin{equation} \phi_f= \frac{1}{4}P^2 - \frac{1}{2}(m^2_1 + m_2^2) + (m_1^2 - m_2^2) \frac{p_\bot \cdot r}{P \cdot r} + p_\bot^2 \end{equation} corresponds to a free-particle system, and $\phi_{int}$ is to be found. Hereafter we refer to $\phi_{int}$ as the {\em Hamiltonian potential}. Only a few cases are known in which the function $\phi_{int}$ can be constructed explicitly. They correspond to the three-parametric Fokker potential \begin{equation} U = U_s + U_v + U_c = (\alpha_s + \alpha_v\omega + \alpha_c\sigma_1\sigma_2)G_{\epsilon}(r)~, \end{equation} where $\alpha_s,\alpha_v,\alpha_c$ are arbitrary constants. The first and the second terms on the r.-h.s. of equation (7.3) correspond to the scalar and vector field-type interactions with the coupling constants $\alpha_s$ and $\alpha_v$, respectively, and the third term describes the confinement interaction (when $\alpha_c>0$). In the non-relativistic limit this model leads to the potential $U^{(0)} = (\alpha_s+\alpha_v)/r + \alpha_c r$, where $r$ is the distance between the particles.
The corresponding Hamiltonian potential has the form: \begin{eqnarray} \phi _{int} & = & -\,\frac{2\alpha_s m_{1}m_{2} + \alpha_v (P^{2}-m_{1}^{2}- m_{2}^{2})}{\epsilon P\cdot r} - 2\alpha_c\left(\frac{b_1 b_2}{\epsilon P\cdot r} - \alpha_v\right) \nonumber \\ && -\,(\alpha_s ^2 -\alpha_v ^2) \frac {2\alpha_s m_1m_2 + (b_{1}-\alpha_v)m_{2}^{2}+(b_{2}-\alpha_v)m_{1}^{2}} {\epsilon P\cdot r\bigl((b_{1}-\alpha_v)(b_{2}-\alpha_v) -\alpha_s^2\bigr)}\,, \end{eqnarray} where \begin{equation} b_a \equiv \epsilon(\ha P\cdot r + \na p_\bot\!\cdot r). \end{equation} It is worth noting that the interactions are combined in terms of the Hamiltonian potential in a non-linear manner. For other Fokker potentials approximation methods (such as a coupling-constant expansion) should be applied for the Hamiltonization procedure. In particular, for the time-asymmetric analogue of the Fokker potential (5.12) the Hamiltonian potential in the second-order approximation in the coupling constant $\alpha=g_1g_2$ reads: \begin{equation} \phi_{int} = - \frac{2m_1m_2}{\epsilon P \cdot r}\alpha f(\nu) - \frac{\alpha^2h(\nu)}{\epsilon P \cdot r} \left(\frac{m_1^2}{b_1} + \frac{m_2^2}{b_2} \right) +O(\alpha^3), \end{equation} where \begin{equation} h(\nu) \equiv \bigl(f(\nu) - \nu f'(\nu)\bigr)^2 - \bigl(f'(\nu)\bigr)^2 \end{equation} and \begin{equation} \nu \equiv \frac{P^2 - m_1^2 - m_2^2}{2m_1m_2}. \end{equation} We note that particular cases of (5.12) are the Fokker potentials which correspond to the particle interaction via massless linear tensor fields of an arbitrary rank (see equation (5.13)) and their superpositions. Below we consider some of the most interesting features of the time-asymmetric models described in this section. \subsection{Vector and scalar models} We begin with the vector and scalar time-asymmetric interactions. These models are based on the Fokker-type integrals (5.1) with the Fokker potentials $U = U_v$ and $U = U_s$, respectively (see (7.3)). Both the scalar and vector models were partly considered earlier (the former in the two-dimensional space-time only) \cite{Sta70,R-H70,Ste85,MST86,[18],k2,Fah81}. Our results, obtained by means of both the Lagrangian and especially the Hamiltonian formulations of these models, complete the analysis of their classical dynamics. The vector and scalar time-asymmetric models present two-body problems lying near the borderline of those problems whose solution can be presented in a closed form. Much of the analysis can be carried out analytically. In particular, the turning points and other points important for the integration are solutions of third- and fourth-order algebraic equations, while the quadratures cannot be expressed even in terms of elliptic or other special functions and thus require numerical work. For simplicity here we limit ourselves to the case of equal particle rest masses $m_0$. In the non-relativistic limit the vector and scalar interactions reduce to the Coulomb interaction with the coupling constant $\alpha$ (namely, $\alpha_v$ and $\alpha_s$, respectively). Thus, it is convenient to present the specific features of the vector and scalar models in comparison with the non-relativistic Coulomb system. The variety of solutions to the equations of motion of a two-particle system consists of a 12--parametric family. The Poincar\'e transformations (which form a 10--parametric group) change only the motion of the system considered as a whole. This motion does not reflect specific features of the models.
Here we do not distinguish solutions which differ from one another by the Poincar\'e transformations. So, non-equivalent solutions form a two--parametric family. It can be parametrized by the values of the total mass $M$ (or the energy $E$ in the non-relativistic case) and the spin (internal angular momentum) $S$, the pair of integrals of motion. Thus, the variety of all possible solutions is reduced to some subset of the ($M,S$)--plane. We note that the parameters $m_0$ and $|\alpha|$ become inessential when using $m_0$ and $r_0 \equiv \midl\alpha\midr/m_0$ as units of measurement for momentum-- and position--like variables, respectively. We also introduce the dimensionless integrals of motion $\mu = \ha M/m_0$ and $\sigma = S/\midl\alpha\midr$. For convenience we shall speak about various solutions (namely, phase trajectories, particle trajectories and world lines) as if each of them is placed at the corresponding point of the ($\mu,\sigma$)--plane. First of all we shall consider the vector model. Qualitatively different types of the phase trajectory (three top graphs of figure 1) correspond to three different domains $\cD(+)$, $\cD(-)$ and $\cD(0)$ of the ($\mu,\sigma$)--plane (the bottom graph of figure 1). \begin{figure}[p] \input fig1a.pic \input fig1b.pic \input fig1c.pic \input fig1.pic \vspace{-2.5in} \hangindent=2.6in\hangafter=0 \noindent {\small{\sf Figure 1.} Vector model. Various domains of the ($\mu,\sigma$)-plane (bottom graph) and the corresponding types of the phase trajectory (three top graphs).\\ Curve $\cal F$ on the ($\mu,\sigma$)-plane is determined by the parametric equations:\\ $$\sigma^2 = \frac{p(p+2)^2}{2p^2+5p+4},$$ $$\mu^2 = \frac{p(2p^2+5p+4)}{2(p+4)^2},$$ $$p\in[0,\infty[.$$}\vspace{-0.5in} \end{figure} \begin{figure}[p] \input fig2.pic \vspace{-2.2in} \hangindent=3.2in\hangafter=0 \noindent {\small{\sf Figure 2.} Vector model, $\alpha<0$ (attraction). Regular particle trajectories for various values of $\mu$, $\sigma$.\\ a) $\cal D(+)$: $\mu = 1.01$, $\sigma = 1.0$;\\ b) $\cal D(-)$: $\mu = 0.95$, $\sigma = 0.68$;\\ c) $\cal D(-)$: $\mu = 0.05$, $\sigma = 0.01$.\\ \vspace*{0.8in} } \hangindent=-4.0in\hangafter=0 \noindent {\small{\sf Figure 3.} Vector model. Pathological world lines $\gamma_+^a$.\\ Critical points:\\ $\Box$ is a turning point;\\ $\bullet$ is a collision point;\\ $\circ$ is start/end of the evolution. } \vspace{-1.3in} \input fig3.pic \vspace{-0.2in} \end{figure} The number and the position of the $(\mu,\sigma)$--domains on the ($\mu,\sigma$)--plane are roughly in accordance with the non-relativistic case, while the phase trajectories are more complicated: they consist of several disconnected branches. This means that several solutions of the Hamiltonian equations of motion exist at the same values of the integrals of motion $\mu,\sigma$. Only one branch, namely, $\gamma_-^a$ for the attraction case ($\alpha<0$) and $\gamma_-^r$ for the repulsion case ($\alpha>0$), is {\em regular}, {\em i.e.} it is a relativistic analogue of the phase trajectory of the Coulomb system and coincides with the latter in a weakly relativistic domain of the ($\mu,\sigma$)--plane ({\em i.e.}, $\mu\approx1$, and $\sigma\gg1$ for the attraction case). If $\mu>1$ ($\cal D(+)$ domain of the ($\mu,\sigma$)--plane), both $\gamma_-^a$ and $\gamma_-^r$ exist and correspond to unbounded particle trajectories which are analogous to the hyperbolas of the Coulomb problem.
We note that the particle trajectories $\gamma_-^a$ have a loop-like shape at the points of the ($\mu,\sigma$)--plane which are close to $\mu=1$, $\sigma=1$ (figure 2a). This effect becomes more evident for the bounded states $\gamma_-^a$ (existing in $\cal D(-)$), where it appears as a perihelion advance (figure 2b); the states $\gamma_-^r$ of the repulsion case disappear in this domain. In the ultrarelativistic case $\mu\to0$ (the lower left corner of $\cal D(-)$) the particles ``stick'' together, so that the distance between them becomes far less than the distance to the centre of mass (figure 2c). On curve ${\cal F}$ the regular states correspond to a circular motion of the particles, so that in domain ${\cal D}(0)$ (below curve $\cal F$) regular motion is forbidden. The remaining branches $\gamma_+^0$, $\gamma_+^+$, $\gamma_+^-$, and $\gamma_+^r$ of the phase trajectory do not have non-relativistic counterparts. They exist and are qualitatively similar on the whole ($\mu,\sigma$)--plane. These branches present a rather strange motion of particles, so that the sign of $\alpha$ does not characterize the interaction as attractive or repulsive. Moreover, it turns out natural to sew up the three branches $\gamma_+^0$, $\gamma_+^+$, and $\gamma_+^-$ into a single one $\gamma_+^a$ (this is shown in figure 1 for the phase trajectory in $\cD(0)$), so that the resulting motion is as follows: the particles move from an infinite separation to a collision, pass through one another and recede to a distance $\sim r_0$, draw closer to one another, collide again, and recede to an infinite distance (figure 3). Branches $\gamma_+^r$ and $\gamma_+^a$ and the corresponding world lines are {\em pathological} in the sense that the velocities of massive particles tend asymptotically to the speed of light. Besides, these solutions contain critical points (namely, collision and turning points) at which massive particles reach, but do not exceed, the speed of light in a finite time. Nevertheless, the particle world lines turn out to be smooth at these points and everywhere else. Another specific feature of the pathological states is that the evolution of the particles is spread over a semi-infinite interval of the coordinate time, while the evolution parameter covers the whole real axis (figure 3). The scalar model is more intricate, especially for an attractive interaction. There are more qualitatively different types of the phase trajectory, which correspond to a larger number of ($\mu,\sigma$)--domains and which consist of more branches (figure 4). Among them only one branch is regular, {\em i.e.} analogous to the Coulomb phase trajectory. It exists in the domains ${\cal D}(1\pm;1)$. Bounded states (in ${\cal D}(1-;1)$) present a motion of particles with a perihelion retardance (unlike the advance in the vector model). They disappear below curve ${\cal F}$, $\sigma>1/\sqrt{5}$, on which the particle trajectories become circular. \begin{figure}[p] \input fig4a.pic \vspace{0.2in} \input fig4b.pic \vspace{0.2in} \input fig4c.pic \vspace{0.2in} \input fig4.pic \vspace{-4.0in} \hangindent=3.5in\hangafter=0 \noindent {\small{\sf Figure 4.} Scalar model.
Various domains of the ($\mu,\sigma$)-plane (bottom graph) and the corresponding types of the phase trajectory (six top graphs).\\ The curves $\cal F$, $\cal J$, ${\cal H}_\pm$, and ${\cal X}_\pm$ on the ($\mu,\sigma$)-plane are defined by the equations: \begin{eqnarray} {\cal F}:&\mu^2 = \frac{27\sigma}{2\left((3+\sigma^2)^{3/2} + \sigma(9-\sigma^2)\right)},&\nonumber\\ {\cal J}:&\mu^2 = \frac{1}{2(1-\sigma^2)},&\nonumber\\ {\cal H}_\pm:&\mu = \frac{1}{2(1\pm\sigma)},&\nonumber\\ {\cal X}_\pm:&\mu = \left(\sqrt{1+\sigma^2}\pm\sigma\right)/2.&\nonumber \end{eqnarray} } \vspace{1in} \end{figure} \begin{figure}[t] \vspace{-0.35in} \input fig5.pic \vspace{2.3in} {\small{\sf Figure 5.} Scalar model, $\alpha<0$ (attraction). Various types of bounded particle trajectories.\\ a) ${\cal D}(1-;2):\ \mu=0.95,\ \sigma=0.68;$ b) ${\cal D}(1-;3):\ \mu=0.9,\ \sigma=0.5;$\\ c) ${\cal D}(1-;3)$ near ${\cal J}:\ \mu=0.75,\ \sigma=0.3;$ d) ${\cal D}(2)$ near ${\cal H_-}:\ \mu=0.6,\ \sigma=0.16666;$\\ e) ${\cal D}(4;2):\ \mu=0.3,\ \sigma=0.15.$} \end{figure} In contrast to the case of the vector model, the domain of regular states is bounded not only from below, but also from the left, where motion is not forbidden. The border lines ${\cal X}_+$ and ${\cal J},\ \mu>\sqrt{5/8}$, indicate no special changes in the particle motion except the appearance of critical points (which correspond to reaching the speed of light) on the particle world lines. These {\em provisionally regular} states exist in the domains ${\cal D}(1\pm;2)$ and ${\cal D}(1\pm;3)$. The effect of the perihelion retardance grows for them (figure 5a), especially in the domain ${\cal D}(1\pm;3)$; here the particles move as if they attract one another at a large distance, while at a small distance $\sim r_0$ each particle repels the other by a very (but not absolutely) hard core. The particles bounce back off this core at the speed of light, but their world lines are smooth at this critical point (figure 5b). Approaching curve ${\cal J}$, $\sigma<1/\sqrt{5}$, the particle trajectories tend (as in the regular case) to circular ones, but in a very strange manner: the particles rebound more and more frequently (figure 5c), so that on the limiting circular trajectories (which correspond to ${\cal J}$ itself) the set of critical points becomes everywhere dense. Apart from the regular or provisionally regular states (which present a reasonable behaviour of the particles on the whole) and the pathological ones (which are roughly similar to those in the vector model), the attraction (i.e. $\alpha<0$) scalar model possesses some {\em exotic} states which correspond to a bounded particle motion at a relative distance of order $r_0$. These states exist in the domains ${\cal D}(2)$, ${\cal D}(3\pm)$, and ${\cal D}(4;1)-{\cal D}(4;2)$, i.e., far from the weakly relativistic domain, and thus they have no non-relativistic analogues. For example, in the domain ${\cal D}(2)$ the particles move as if each particle repels the other by the hard exterior of an empty core (figure 5d); in the domains ${\cal D}(4;1)-{\cal D}(4;2)$ the trajectory of one of the particles always lies inside the trajectory of the other particle (figure 5e). The variety of solutions described above is obtained within the Hamiltonian formulation of the vector and scalar models. Within the framework of the Lagrangian formalism only the regular solutions can be reconstructed completely.
Besides, this framework partially recovers the provisionally regular solutions, namely, some segments of world lines between the critical points. Other solutions disappear within the Lagrangian formalism. In our consideration the Lagrangian formalism is primary with respect to the Hamiltonian one. Thus, one might conclude at first sight that non-Lagrangian solutions have no physical meaning. On the other hand, the Hamiltonian formulation of the models is an important link toward their quantization, and non-Lagrangian solutions may contribute to the resulting quantum-mechanical picture. These complicated questions are discussed in more detail in sections 8 and 9.1, where we study the classical and quantum mechanics of the vector and scalar models in ${\mathbb M}_2$. \subsection{Scalar--vector model} The purely vector and scalar time-asymmetric models are computationally cumbersome and present a rather intricate particle dynamics. The case of an arbitrary superposition of the scalar and vector interactions is not expected to be simpler (though it is also solvable). This follows from the complicated structure of the Hamiltonian potential (see equation (7.4) with $\alpha_c=0$). In a special case of superposition, $\alpha_v =\kappa\alpha_s \equiv\kappa \alpha, \ \ \kappa =\pm 1$, the second term of $\phi _{int}$ (7.4) vanishes. This structure of the dynamical constraint simplifies the dynamics of the model to a great extent and makes it mathematically similar to the dynamics of a non-relativistic system with the Coulomb interaction. In this case one can expect the existence of an additional integral of motion, the relativistic analogue of the Runge-Lenz vector. Actually, it is easy to guess the structure of this integral of motion working within the framework of manifestly covariant Hamiltonian mechanics \cite{Duv96}. For this purpose it is convenient to simplify the free-particle term $\phi _f$ (7.2) of the dynamical constraint, whose cumbersome form obscures the subsequent treatment of the model and is of a descriptional rather than dynamical origin. Let us perform the canonical transformation $(y^\mu ,\ \ P_\mu ,\ \ r^\mu ,\ \ p_\mu )\longmapsto (z^\mu ,\ \ P_\mu ,\ \ r^\mu ,\ \ q_\mu )$, \begin{equation} q_{\mu} = p_{\mu } - \frac {m_{1}^2 - m_{2}^2}{2P^2}P_{\mu }, \qquad z^{\mu } = y^{\mu } + \frac {m_{1}^2 - m_{2}^2}{2P^2} \Bigl( r^{\mu } - 2\frac {P\cdot r}{P^2}P^{\mu }\Bigr) , \end{equation} (the variables $r^\mu $ and $P_\mu $ remain unchanged). In terms of the new variables the dynamical constraint takes the form: \begin{equation} \phi = \frac {1}{4}P^2 - \frac {1}{2}(m_{1}^2 + m_{2}^2) + \frac {(m_{1}^2 - m_{2}^2)^2}{4P^2} + q_\bot^2 -\frac{\alpha \bigl(P^2 - (m_1 - \kappa m_2)^2\bigr)}{\epsilon P\cdot r} = 0, \end{equation} where \begin{equation} q_{\bot\mu} \equiv P^{\nu}\Xi _{\nu \mu}/P\cdot r,\quad \Xi _{\mu \nu} = r_\mu q_\nu - r_\nu q_\mu ;\quad q_\bot\!\cdot P\equiv 0. \end{equation} Then, it is easy to verify that the relativistic analogue of the Runge-Lenz vector has the following form: \begin{equation} R_\mu = \Pi _\mu ^\nu \Bigl( q_\bot^\lambda \Xi_{\lambda \nu} + \frac{\alpha \bigl(P^2 - (m_1 - \kappa m_2)^2\bigr)} {2\epsilon P\cdot r}r_\nu \Bigr) , \end{equation} where $\Pi _\mu ^\nu \equiv \delta _\mu ^\nu - P_\mu P^\nu /P^2$. It is indeed an integral of motion, i.e.
\begin{equation} \lbrack R_\mu ,\phi \rbrack \approx 0, \qquad \lbrack R_\mu ,r^2\rbrack = 0 \end{equation} and satisfies the relations: \begin{equation} \lbrack R_\mu ,P_\nu \rbrack = 0 , \qquad \ \ \lbrack R_\mu ,J_{\lambda \sigma } \rbrack = - \eta _{\mu \lambda} R_\sigma + \eta _{\mu \sigma} R_\lambda , \end{equation} \begin{equation} \lbrack R_\mu ,R_\nu \rbrack \approx \Bigl( \frac {1}{4}P^2 - \frac {1}{2}(m_{1}^2 + m_{2}^2) + \frac {(m_{1}^2 - m_{2}^2)^2}{4P^2}\Bigr) \Pi _\mu ^\lambda \Pi _\nu ^\sigma J _{\lambda \sigma }, \end{equation} where the Dirac symbol $\approx $ denotes a weak equality. The relations (7.14)--(7.15) are similar to those obtained for the Runge--Lenz vector of simple relativistic oscillator and Coulomb models in \cite{DVN90}. These relations are essentially nonlinear and thus their group theoretical treatment is complicated. In the present paper we limit our study to the case of the CM reference frame, in which the corresponding Poisson bracket relations can be linearized. For this purpose we reformulate (as in the previous cases) the present time-asymmetric model within the framework of the Bakamjian-Thomas model. Then the Runge-Lenz vector becomes $R_\mu = (0,\bf R)$, where \begin{equation} {\bf R} = \bpi \times {\bf S} + g(M)\brho/\rho, \end{equation} $\bf S = \brho \times \bpi$ is the spin of the system, and the total mass satisfies the equation \begin{equation} d(M) - \bpi^2 - 2g(M)/\rho = 0. \end{equation} Here \begin{equation} d(M) \equiv \frac{1}{4M^2}\Bigl( M^2 - (m_1 + m_2)^2\Bigr) \Bigl( M^2 - (m_1 - m_2)^2\Bigr) , \end{equation} \begin{equation} g(M) \equiv \frac{\alpha}{2M}\Bigl( M^2 - (m_1 - \kappa m_2)^2\Bigr) . \end{equation} Besides, in the CM reference frame the covariant particle positions are the following functions of the canonical variables: \begin{equation} {\bf x}_a = \frac{(-)^{\bar a}}{2}\Bigl( 1 + \frac{m^2_{\bar a} - m^2_a}{M^2}\Bigr)\brho + \epsilon\rho\frac{\bpi}{M}, \qquad a = 1,2;\ \ \bar a \equiv 3-a. \end{equation} The Poisson bracket relations for the internal angular momentum (spin) of the system $\bf S$ and the Runge-Lenz vector $\bf R$ are similar to those in the non-relativistic Coulomb problem: \begin{equation} \lbrace S_i,S_j\rbrace = \varepsilon_{ij}^{\ \ k}S_k,\ \ \lbrace R_i,S_j\rbrace = \varepsilon_{ij}^{\ \ k}R_k,\ \ \lbrace R_i,R_j\rbrace = -d(M)\varepsilon_{ij}^{\ \ k}S_k. \end{equation} Indeed, when $d(M) = 0$, equations (7.21) are the relations for the generators of the Euclidean group $\cE(3)$. In the case $d(M) \not= 0$ the $S_i$ and the normalized ${\hat R}_i \equiv R_i/\sqrt {\vert d\vert }$ generate the group $\cS\cO(4)$, when $d(M) < 0$, and the group $\cS\cO(1,3)$, when $d(M) > 0$. Taking into account equation (7.21) we obtain the following cases for the algebra of internal symmetries:\\ \hspace*{1cm}$\gs\go(4)\ \ \ $ for $\vert m_1 - m_2\vert < M < m_1 + m_2$, \hspace*{1cm}$\gge(3)\ \ \ \ \ $ for $M = \vert m_1 - m_2\vert $ and $M = m_1 + m_2$, \hspace*{1cm}$\gs\go(1,3)\ $ for $0 < M < \vert m_1 - m_2\vert $ and $M > m_1 + m_2$.\\ The existence of the Runge--Lenz vector makes it possible to obtain both the relative and the particle trajectories, traced by the vectors $\brho$ and ${\bf x}_a$, respectively, without integration. First we note that these trajectories are plane curves lying in the plane orthogonal to the spin of the system, i.e. ${\bf{\brho\cdot S}} = {\bf{x}}_a{\bf{\cdot S}} = 0$. The vector $\bf R$ lies in the same plane, i.e. ${\bf{R\cdot S}} = 0$.
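These statements, together with the step performed next, rest on elementary vector algebra; a minimal {\tt sympy} sketch (our own notation: $\brho$, $\bpi$ generic 3-vectors, $g$ an arbitrary scalar) confirms them for the vector (7.16):
\begin{verbatim}
# Sympy check of the vector algebra behind (7.16): with S = rho x pi and
# R = pi x S + g*rho/|rho|, one has R.S = 0 and R.rho = g|rho| + S^2.
import sympy as sp

r1, r2, r3, p1, p2, p3, g = sp.symbols('r1 r2 r3 p1 p2 p3 g', real=True)
rho = sp.Matrix([r1, r2, r3])
pi  = sp.Matrix([p1, p2, p3])
S   = rho.cross(pi)
rho_abs = sp.sqrt(rho.dot(rho))
R   = pi.cross(S) + g*rho/rho_abs

print(sp.simplify(R.dot(S)))                               # -> 0
print(sp.simplify(R.dot(rho) - (g*rho_abs + S.dot(S))))    # -> 0
\end{verbatim}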
Multiplying equation (7.16) by $\brho$ one can obtain the relation: \begin{equation} {\bf{R\cdot\brho}} = g\rho + S^2, \end{equation} where $S \equiv \vert \bf{S}\vert$. Let $\varphi $ be the angle between $\bf R$ and $\brho$, i.e. ${\bf{R\cdot\brho}} = R\rho\cos \varphi$. Then equation (7.22) can be reduced to the canonical equation of a conic section \begin{equation} p/\rho = e\cos{\varphi} - {\rm{sgn}}\hspace*{0.05cm}g \end{equation} with the following canonical parameter $p$ and eccentricity $e$: \begin{equation} p = \frac{S^2}{\vert g\vert} = \frac{2MS^2} {\vert \alpha \vert \vert M^2 -(m_1 - \kappa m_2)^2\vert },\ \ \ \ e = \frac{R}{\vert g\vert} = \sqrt{1 + \frac{S^2}{\alpha ^2} \hspace*{0.06cm}\frac{M^2 - (m_1 + \kappa m_2)^2} {M^2 - (m_1 - \kappa m_2)^2}}. \end{equation} Searching for the equations of the particle trajectories is a similar but somewhat more complicated task. Let us define the vectors: \begin{equation} {\bf r}_a \equiv {\bf x}_a - c_a{\bf R}, \end{equation} where \begin{equation} c_a = \frac{2(-)^{\bar a}}{(M + m_{\bar a})^2 - m_a^2}. \end{equation} Then, one can obtain the relations \begin{equation} (-)^{\bar a}{\bf{R\cdot r}}_a = gr_a + \frac{m_{\bar a}}{M}S^2, \end{equation} which are similar to equation (7.22) and hence can be written down as follows: \begin{equation} p_a/r_a = e\cos \varphi_a - {\rm{sgn}}\hspace*{0.05cm}g, \end{equation} where $\varphi_a$ are the angles between $(-)^{\bar a}\bf R$ and ${\bf r}_a$. Equations (7.28) describe the particle trajectories as conic sections of the same shape as the relative trajectory, i.e. with the same eccentricity $e$ (7.24) but with other canonical parameters $p_a = \frac{m_{\bar a}} {M}p$. The foci of these conic sections are shifted with respect to the centre of mass by the vectors $c_a\bf R$. In contrast, the non-relativistic particle trajectories have a common focus located at the centre of mass. \subsection{Models with higher rank tensor interactions} As was pointed out above, among the time-asymmetric field-type models only those corresponding to the (arbitrary) superposition of scalar and vector interactions permit exact Hamiltonization. In the case when the rank of the field $n \ge 2$, the transition to the Hamiltonian description and the construction of quadratures can be done by means of an expansion in the coupling constant. The structure of the second-order Hamiltonian potential (7.6) is common to linear field-type interactions of various tensor dimensions. It specifies the sort of interaction by the functions $f(\nu)$ and $h(\nu)$, which depend on the integral of motion $\nu$ only. Moreover, the nonlinear gravitational interaction can also be described (at least in the slow-motion approximation) by this potential (7.6) (see \cite{Tur82,Duv96P}) with \begin{eqnarray} &f_{gr}(\nu) = 2 \nu^2-1,&\\ &h_{gr}(\nu) = - 2(2 \nu^2+1),& \end{eqnarray} and $\alpha_{gr} = -\Upsilon m_1m_2$, where $\Upsilon$ is the gravitational constant. It is possible to integrate the two-body problem treating $f$ and $h$ as arbitrary functions of first and second order, respectively. We note that in the second-order approximation the quadratures for the present case can be expressed in terms of elementary functions. For bounded states they lead to a relative motion trajectory of a very simple form, \begin{equation} 1/\rho\ =\ \midl a\midr\ +\ b\cos\left((1-\delta)\varphi\right)~~~~~~(b < \midl a \midr), \end{equation} where $a$, $b$, and $\delta$ are functions of the integrals of motion.
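The perihelion-advance formulas quoted below hinge on the value $h(1)$; a quick {\tt sympy} check (assuming $f=T_n$ as in (5.13), with $h(\nu)$ as in (7.7)) gives $h(1)=1-2n^2$:
\begin{verbatim}
# Check of h(1) for f = T_n (Chebyshev polynomial, cf. (5.13)) and
# h(nu) = (f - nu f')^2 - (f')^2, cf. (7.7): one gets h(1) = 1 - 2 n^2.
import sympy as sp

nu = sp.symbols('nu')
for n in range(1, 6):
    f = sp.chebyshevt(n, nu)
    h = (f - nu*sp.diff(f, nu))**2 - sp.diff(f, nu)**2
    print(n, sp.simplify(h.subs(nu, 1)), 1 - 2*n**2)   # the two columns agree
\end{verbatim}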
Equation (7.31) describes an ellipse which precesses with the perihelion advance \begin{equation} \Delta \varphi\ =\ 2\pi \delta\ =\ -\pi\alpha^2 h(1)/S^2. \end{equation} In the case of a linear purely tensor interaction of arbitrary rank $n$ the perihelion advance $\Delta \varphi$ can be calculated by means of the formulae (7.7), (5.12)--(5.13), \begin{equation} \Delta \varphi\ =\ \pi(2n^2-1)(g_1g_2/S)^2. \end{equation} For the gravitational interaction, using (7.30), we obtain \begin{equation} \Delta \varphi\ =\ 6\pi( \Upsilon m_1m_2/S)^2. \end{equation} The spatial particle trajectories calculated by means of (7.16) turn out to be more intricate than the relative trajectory, which is a typical feature of time-asymmetric models. Nevertheless, their analysis leads to the same value of the perihelion advance. We note that these relations for the perihelion advance agree with those obtained within various quasirelativistic approaches to the relativistic direct interactions \cite{Dar20,Fic50,Rya79,[1],Yar90}. \subsection{Confinement models} Our simplest version of a confinement model \cite{Duv98} is based on the Fokker potential $U_c$ (see equation (7.3)), the time-asymmetric counterpart of which is proposed in \cite{ri}. This model could be regarded as a classical relativisation of the primitive quarkonium model with the linear non-relativistic potential. Of course, the relativisation of any non-relativistic system is not unique. There exists in the literature a wide variety of relativistic versions of the potential confinement model. The present model has a number of features which are expected for models of this kind but which usually are not realized together. 1. The model is a self-consistent relativistic two-particle model. The quantities in terms of which it is built have a clear physical meaning. Solutions of this model are free of critical points and lead to timelike particle world lines. 2. It is well known that a non-relativistic potential model with the linear potential leads to a Regge trajectory with the unsatisfactory asymptote $M \sim S^{2/3}$. Here we do not propose a quantum version of the present model, but we estimate the Regge trajectory as follows. Usually the Regge trajectories in potential models are calculated in the oscillator approximation \cite{L-S89}. Then, the leading Regge trajectory originates from classical mechanics: it coincides with the curve of circular motions on the ($M,S$)--plane. In our case this curve is described by the following equation: \begin{equation} S = \frac{M^2(1 - 4m_0^2/M^2)^{3/2}}{6\sqrt{3}\alpha_c} \end{equation} (we consider the case of equal particle rest masses $m_0$). In the ultrarelativistic limit $M\to\infty$ it leads to the desirable linear asymptote: \begin{equation} M^2 \approx 6\sqrt{3}\alpha_c S. \end{equation} It is remarkable that this asymptote is achieved only when relativity is taken into account. 3. The present model permits an interpretation of the interaction in terms of some classical fields. This follows from the fact that the Fokker potential $U_c$ can be transformed into an equivalent form, \begin{equation} \tilde U_c = - 2\alpha_c\omega D_\epsilon(x), \end{equation} where the function $D_\epsilon(x)$, \begin{equation} D_\epsilon(x) = \ha\Theta(\epsilon x^0) \Theta(x^2), \end{equation} is the fundamental solution of the equation \begin{equation} \Box^2 D_\epsilon(x) = 4\pi\delta(x).
\end{equation} Thus, the interaction of particles can be considered as mediated by a vector field obeying a fourth-order equation. Gauge invariant nonlinear equations of this kind arise when considering the behaviour of the gluon propagator in the infrared region \cite{AAB82}. Static solutions of such equations are used in a sort of bag model of confinement \cite{Ale88}. \begin{figure}[ht] \input fig6.pic \vspace{0.2in} {\small{\sf Figure 6.} Confinement model. Classical Regge trajectories at various ratios of the coupling constants and the rest mass.} \end{figure} The simplest version of the relativistic confinement model can be appropriate for the description of light mesons, for which the confinement interaction dominates. To include heavy mesons in the consideration as well, one can modify the present model by adding to $U_c$ the usual vector potential $U_v$ (with an appropriate coupling constant $\alpha_v<0$) \cite{Duv98}. In the non-relativistic limit this mixture leads to the well known potential $U^{(0)} = -\midl\alpha_v\midr/r + \alpha_c r$. The resulting model becomes appreciably cumbersome but still remains solvable. Pathological solutions which occur in this model can be unambiguously separated from its regular solutions (which are free of critical points). As an illustration we present the classical Regge trajectories for various ratios of the coupling constants and the rest mass (figure 6). We note that all the trajectories tend asymptotically to straight lines. Moreover, the vector correction does not influence their asymptotic behaviour, which is still described by equation (7.36). \section{Vector and scalar models in $\M_2$} \setcounter{figure}{6} \renewcommand{\thefigure}{\arabic{figure}} The analysis of the vector and scalar time-asymmetric models in the four-dimensional space-time $\M_4$ was carried out in the previous section for the case of equal particle masses. The cumbersome form of the expressions and the large set of possible motions obscure the physical understanding of the obtained results. In the two-dimensional space-time $\M_2$ the analysis of the dynamics becomes considerably simpler, even for different particle masses. The dynamics in $\M_2$ seems to correspond to the motions with the inner angular momentum (spin) $S=0$. But, as it turns out, the limit $S\to 0$ is a singular one. Therefore, the consideration of the dynamics of such models in the two-dimensional Minkowski space $\M_2$ appears to be interesting. The Fokker-type action integral with a time-asymmetric variant of the Fokker potential \re{5.2} in the front form in $\M_2$ leads to the Lagrangian \cite{[20]} \beql{s-7} L=-\sum_{a=1}^{2} m_{a}k_{a} - \frac{\al k_{1}k_{2}f(\om)}{r} ,\qq r>0, \eeq where \beql{s-8} \om=\fr{1}{2}\left(\frac{k_1}{k_{2}}+\frac{k_{2}}{k_{1}} \right). \eeq The existence of three integrals of motion, which for the Lagrangian \re{s-7} have the form \begin{eqnarray} \lab{s-9} \!P_+& =&\fr{m_1}{k_1} + \fr{m_2}{k_2} - \fr{\al B (\om)}{ r }~, \\ \lab{s-10} P_-& =& m_1 k_1 + m_2k_2, \qqqq\qqq\qqqq\qq\\ \lab{s-11} K& =& - t(P_+ + P_-)/2 - \sul{a=1}{2}\fr{x_am_a}{k_a}-\nn\\ &&-\ \fr{\al}{r} \left[ \left(\fr{x_1k_2}{k_1}+\fr{x_2k_1}{k_2}\right)f+ \fr12\left(\fr{k_1}{k_2}-\fr{k_2}{k_1}\right) \left(\fr{x_1k_2}{k_1}-\fr{x_2k_1}{k_2}\right)f' \right], \end{eqnarray} where \beql{s-11-1} B(\om) = 2 \left(-\om f + (\om^2 - 1)f' \right), \eeq permits one to reduce the solutions of the Euler-Lagrange equations to quadratures \cite{MST86}.
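As a consistency check, the following minimal {\tt sympy} sketch (our own variable names; we take the representative choices $f(\om)=\om^\ell$, $\ell=0,1,2$) reproduces \re{s-9} and \re{s-10} from the Lagrangian \re{s-7} via $P_\pm = E\pm P$ with $E$, $P$ of \re{s-3}:
\begin{verbatim}
# Sympy check of (s-9), (s-10) for the Lagrangian (s-7) with f = omega^l:
# P_+ = E + P = m1/k1 + m2/k2 - alpha*B(omega)/r,
# P_- = E - P = m1*k1 + m2*k2  (the interaction drops out of P_-).
import sympy as sp

m1, m2, al, r, v1, v2 = sp.symbols('m1 m2 alpha r v1 v2', positive=True)
k1, k2 = sp.sqrt(1 - 2*v1), sp.sqrt(1 - 2*v2)
om = (k1/k2 + k2/k1)/2

for l in range(0, 3):
    f  = om**l
    fp = l*om**(l - 1) if l > 0 else sp.Integer(0)   # f'(omega)
    B  = 2*(-om*f + (om**2 - 1)*fp)                  # B(omega), cf. (s-11-1)
    L  = -m1*k1 - m2*k2 - al*k1*k2*f/r
    E  = v1*sp.diff(L, v1) + v2*sp.diff(L, v2) - L
    P  = sp.diff(L, v1) + sp.diff(L, v2) - E
    print(l,
          sp.simplify(E + P - (m1/k1 + m2/k2 - al*B/r)),  # -> 0  (s-9)
          sp.simplify(E - P - (m1*k1 + m2*k2)))           # -> 0  (s-10)
\end{verbatim}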
The investigation of two-particle models with time-asymmetric field-like interactions (see \cite{MST86,Sh}) shows that for some values of the parameters the system reaches the boundary of the Lagrangian region $\pl\cQ=\{(x_a,x_b,v_a,v_b)\in\R^{4}\,|\,{\sf h}_f=0~\mbox{or}~{\sf h}_f^{-1}=0\}$. An exception is the repulsion case ($\al>0$) with the total mass of the system $M>m_1+m_2=m$, where $m_1, m_2$ are the particle rest masses. Then the system does not reach the singular points and the world lines are smooth timelike curves in ${\mathbb M}_2$ \cite{Ste85,R-H70}. For other values of the parameters, the Hamiltonian description allows one to continue the evolution of the system beyond the critical points and, as a result, to obtain continuous world lines in the following way \cite{Sh}. The Legendre transformation $\pounds$ associated with the Lagrangian \re{s-7} with $f(\om)=\om^\cl$, $\cl=0,1,2,\ldots$, has the form: \begin{equation} \lab{s-13} p_{a}=\frac{\partial L}{\partial v_a}= \frac{m_{a}}{k_{a}}+\frac{\alpha}{2r}\left( 1+\ell+(1-\ell ) \frac{k_{\bar a}^{2}} {k_{a}^{2}}\right) \om^{\ell -1}. \end{equation} Here $a=1,2$ and ${\bar a}=3-a$. In the scalar ($\cl=0$) and vector ($\cl=1$) cases it is possible to solve equations \re{s-13} with respect to the velocities and to obtain from the expressions for the conserved quantities \re{s-3} the generators of the Lie algebra of the Poincar\'e group ${\cal P}(1,1)$ in explicit form \cite{26-1,Sh}. Separation of the external and internal motions is carried out by the choice \beql{s-14} P_{+}=p_1+p_2~,~~Q = K/ P_{+}~;~~\{Q,P_+\}=1 \eeq of new external canonical variables. As internal variables we choose \begin{equation} \lab{s-15} \xi =\frac{m_{2}p_{1}-m_{1}p_{2}}{P_{+}} ,\quad q=r\frac{P_{+}}{m} ; \quad \{ q,\xi \} =1 , \end{equation} where $m=m_{1}+m_{2}$. Then the Hamiltonian equations of motion become \begin{eqnarray} \lab{s-2.27} \dot Q = \frac{1}{2} - \frac{M^2}{2P_+^2},\quad \dot P_+ = 0,\quad\\ \lab{s-2.28} \dot q = \frac{1}{2P_+} \frac{\partial M^2}{\partial\xi},\quad \dot\xi =-\frac{1}{2P_+}\frac{\partial M^2}{\partial q}. \end{eqnarray} Solving equations \re{s-13} with respect to the velocities and substituting the solutions into the expression for the Hessian, we obtain from \re{s-12} inequalities which define the image $\pounds\cQ$ of the Lagrangian region $\cQ$ under the Legendre transformation \re{s-13}. The external canonical variable $P_+$ is an integral of motion, and the Hessian does not depend on the external variable $Q$. Thus, all the singularities of the Hessian are expressed in terms of the inner variables, and we can transform \re{s-12} into an inequality which defines the region $\widetilde{\pounds\cQ}$ in the inner phase space: $\widetilde{\pounds\cQ}\subset\R^2$. In the scalar case, if $\al>0$, the region $\widetilde{\pounds\cQ}$ of the phase plane corresponds to the region $q>0$ restricted by the curves $y_1,~y_2$ (see figure 7), which are defined by the equations: \begin{eqnarray} \lab{s-2.24} y_1:-m_1 \xi + m_1m_2 + m_2 \al/q=0~,\nn\\ \\ y_2: m_2 \xi + m_1m_2 + m_1 \al/q=0~.
\nonumber \end{eqnarray} If $\al<0$, then $\widetilde{\pounds\cQ}$ lies between the curves $y_1,~y_2$ to the right of their intersection point. \begin{figure*}[p] \input fig7.pic \caption{Scalar interaction. Phase trajectories (continuous curves): $(m_2-m_1)/m=0.2;~ M/m=1.2$. {\bf a)}: $\alpha >0$, {\bf b)}: $\alpha <0$. Dashed curves $ y_1, y_2$ correspond to the singularity of the Hessian. } \end{figure*} \begin{figure*}[p] \input fig8.pic \caption{Vector interaction. Phase trajectories (continuous curves): $(m_2-m_1)/m=0.2;~ M/m=1.2$. {\bf a)}: $\alpha >0$, {\bf b)}: $\alpha <0$. Dashed curves $ y_1, y_2$ correspond to the singularity of the Hessian. } \end{figure*} In the vector case, if $\al<0$, the region $\widetilde{\pounds\cQ}$ corresponds to the region bounded by the curves ${\tilde y}_1 , ~{\tilde y}_2 ,~q=0$ (see figure 8), which are defined by the equations: \begin{eqnarray} \lab{s-2.25} \tilde y_1 : m_1 + \xi - {\al}/{q}=0~,\nn\\ \\ \tilde y_2 : m_2 - \xi - {\al}/{q}=0~. \nn \end{eqnarray} If $\al>0$, then the indicated region lies between the curves ${\tilde y}_1, ~{\tilde y}_2$ to the right of their intersection point. The intersection points of the phase trajectories and the curves $y_1$, $y_2$ (${\tilde y}_1,~{\tilde y}_2$) correspond to the case when one of the particles reaches the speed of light: $\k_1 =0$ or $\k_2 =0$. To construct smooth world lines in $\M_2$ it is necessary to consider the inner motion in more detail. It is determined by the mass-shell equation \begin{equation} \lab{s-2.19} (\xi{-} \xi_M)^2{=} \fr{ (\nu^2{-} 1)m^2m_1^2m_2^2q^2 {-}2 \al M^2 m_1m_2m \nu^\cl q {+} ({-}1)^{\cl{+}1}M^4 \al^2 } {M^4q^2}, \end{equation} where \beql{s-2.20} \xi_M = \frac{(M^2{-}m^2)(m_2{-}m_1)}{2M^2},\quad \nu = \frac{M^2{-}m^2_1{-}m^2_2}{2m_1m_2}~. \eeq \begin{figure*}[h,t] \input fig9.pic \vspace*{15mm} \caption{World lines in $\M_2$ for an unbounded motion: $(m_2-m_1)/m=0.2,~ M/m=1.2~, \alpha <0$. {\bf a)}: scalar interaction (Stephas case \protect\cite{Ste85}); {\bf b)}: vector interaction (Rudd and Hill case \protect\cite{R-H70}).} \end{figure*} \begin{figure*}[p] \input fig10.pic \vspace*{15mm} \caption{ Scalar interaction. World lines in $\M_2$ for an unbounded motion: $(m_2-m_1)/m=0.2,~ M/m=1.2~, \alpha <0$.} \end{figure*} \begin{figure*}[p] \input fig11.pic \caption{Vector interaction. World lines in $\M_2$ for an unbounded motion: $(m_2-m_1)/m=0.2,~ M/m=1.2~, \alpha <0$.} \end{figure*} We assume that equation \re{s-2.19} holds in the whole phase plane $\R^2$. The motion is possible in the region where \beql{s-2.21} {\sf D}_\cl= (\nu^2 - 1)m^2m_1^2m_2^2q^2 - 2 \al M^2 m_1m_2m \nu^\cl q + (-1)^{\cl+1}M^4 \al^2 \eeq is non-negative. Then we see that for a bounded motion $q$ belongs to the interval $[q_1, q_2]$, where $q_1$, $q_2$ are the real solutions of the quadratic equation ${\sf D}_\cl=0$: \beql{s-2.22} \hspace*{-8mm} q_1 = \frac{2\alpha M^2(-1)^{\ell+1}}{(M^2-(m_1-m_2)^2)m}, \quad q_2=\frac{2\alpha M^2}{(M^2-m^2)m}. \eeq In this manner we obtain the phase trajectories which lead to smooth world lines in $\M_2$ for all values of the total mass of the system $M>0$ and both signs of the coupling constant $\al$ \cite{Sh}.
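The turning points \re{s-2.22} can be checked directly. The following SymPy sketch (our verification, not part of the original text) confirms that $q_1$ and $q_2$ are the roots of ${\sf D}_\cl$ from \re{s-2.21} in both the scalar and vector cases:
\begin{verbatim}
import sympy as sp

q, al, M, m1, m2 = sp.symbols('q alpha M m_1 m_2', positive=True)
m = m1 + m2
nu = (M**2 - m1**2 - m2**2) / (2*m1*m2)

for l in (0, 1):                          # l = 0: scalar, l = 1: vector
    D = ((nu**2 - 1)*m**2*m1**2*m2**2*q**2
         - 2*al*M**2*m1*m2*m*nu**l*q
         + (-1)**(l + 1)*M**4*al**2)
    q1 = 2*al*M**2*(-1)**(l + 1) / ((M**2 - (m1 - m2)**2)*m)
    q2 = 2*al*M**2 / ((M**2 - m**2)*m)
    for root in (q1, q2):
        assert sp.simplify(D.subs(q, root)) == 0
    print(f"l = {l}: q1 and q2 of (s-2.22) are the roots of D_l")
\end{verbatim}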
Using the phase trajectory equation \re{s-2.19} and solving equations \re{s-2.27}, \re{s-2.28}, we obtain parametric equations for the world lines in $\M_2$: \beql{s-2.31} x_1^0(q)=t(q)-x_1(q)~,~~x_2^0(q)=t(q)-x_2(q)~; \eeq \begin{eqnarray} \lab{s-2.32} x_1(q) = {K}/{P_+} + \left({m_2 - \xi (M^2, q)} \right)q/{P_+}~,\nn\\ \\ x_2(q) = {K}/{P_+} -\left({m_1 + \xi (M^2, q)} \right)q/{P_+}~.\nn \end{eqnarray} \vspace*{6mm} Figures 7 and 8 show examples of the phase trajectories for the scalar (figure 7) and vector (figure 8) interactions. Figures 9--11 show the corresponding smooth world lines. Unlike the scalar case, the vector case admits particle collisions. At the collision points $(q=0)$ the particles mutually exchange their positions (figure 8,~b) and the phase trajectories break up. The motion along smooth world lines corresponds to jumps along the momentum axis $-\infty \to \infty$ ($\infty \to -\infty $). \section{Quantum models in $\M_2$} In this section we consider a number of exactly solvable quantum-mechanical models which follow from certain quantization procedures applied to their classical counterparts. We construct a quantum description for the classical time-asymmetric scalar and vector models investigated above, as well as for classical models for which the Lagrangian description is not known. \subsection{Vector and scalar interactions} The classical two-particle system with time-asymmetric scalar and vector interactions can be quantized in a purely algebraic way \cite{26-1} by regarding the Lie algebra $\gs\go(2,1)$ as the basic algebraic structure. Let us introduce the following functions of the canonical variables: \begin{eqnarray} \lab{s-2.24-1} J_{0}&=&\frac{1}{2}\left( \Delta q\xi ^{2}+\frac{q}{\Delta}+ \fr{\al^2 \Delta (\al_0^2-\al_1^2)}{q}\right) ,\nn\\ J_{1}&=&\frac{1}{2}\left( \Delta q\xi ^{2}-\frac{q}{\Delta}+ \fr{\al^2 \Delta (\al_0^2-\al_1^2)}{q}\right),\\ J_{2}&=&q\xi~,\nn \end{eqnarray} where ${\Delta}$ is an arbitrary constant. Under the Poisson bracket they span the Lie algebra $\gs\go$(2,1): \begin{equation}\lab{s-2.28-1} \{ J_{0},J_{1}\} =J_{2} ,\qquad \{ J_{1},J_{2}\} =-J_{0} ,\qquad \{ J_{2},J_{0}\} =J_{1} . \end{equation} Then the mass-shell equation \re{s-2.19} takes the form: \beql{s-2.25-1} J+C_\cl=0. \eeq The quantity \beql{s-2.26-1} J=aJ_0+bJ_1+dJ_2 \eeq is an element of the Lie algebra of the group $\cS\cO(2,1)$, and we use the following notation: \begin{eqnarray} \lab{s-2.27-1} a{=}\fr{M^{2}}{\Delta}{+}\Delta m_{1}m_{2}(m^{2}{-}M^{2}),\quad b{=}\fr{M^{2}}{\Delta} {-}\Delta m_{1}m_{2}(m^{2}{-}M^{2}), \nonumber\\ d{=}(m_{2}{-}m_{1})(m^{2}{-}M^{2}) ,\quad C_{\cl}{=}2\alpha mm_{1}m_{2}\nu^\cl.\qq \end{eqnarray} It appears natural to demand that the structure of the linear relation on the Lie algebra $\gs\go$(2,1) be preserved after quantization. Then, replacing the functions \re{s-2.24-1} with Hermitian operators obeying the commutation relations of the $\gs\go$(2,1) Lie algebra \begin{equation}\lab{s-3.2} [\hat J_{0},\hat J_{1}] =i\hat J_{2} ,\qquad [\hat J_{1},\hat J_{2}] =-i\hat J_{0} ,\qquad [\hat J_{2},\hat J_{0}] =i\hat J_{1} , \end{equation} we obtain the quantum-mechanical equation: \beql{s-3.3} (\hat J+C_\cl)|\psi\rangle =0 . \eeq This equation was considered in \cite{26-1} as the basic one for the quantum-mechanical problem.
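The classical brackets \re{s-2.28-1} underlying this construction are straightforward to verify. A short SymPy sketch (ours; the constant $C$ stands for $\al^2(\al_0^2-\al_1^2)$, whose value is immaterial for the algebra) computes the Poisson brackets of \re{s-2.24-1} in the inner phase space $(q,\xi)$:
\begin{verbatim}
import sympy as sp

q, xi, Delta, C = sp.symbols('q xi Delta C', positive=True)
# C stands for alpha^2*(alpha_0^2 - alpha_1^2); its value is immaterial

J0 = sp.Rational(1, 2)*(Delta*q*xi**2 + q/Delta + C*Delta/q)
J1 = sp.Rational(1, 2)*(Delta*q*xi**2 - q/Delta + C*Delta/q)
J2 = q*xi

def pb(A, B):   # Poisson bracket in the inner phase space (q, xi)
    return sp.diff(A, q)*sp.diff(B, xi) - sp.diff(A, xi)*sp.diff(B, q)

assert sp.simplify(pb(J0, J1) - J2) == 0
assert sp.simplify(pb(J1, J2) + J0) == 0
assert sp.simplify(pb(J2, J0) - J1) == 0
print("{J0,J1} = J2,  {J1,J2} = -J0,  {J2,J0} = J1")
\end{verbatim}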
On the basis of equation \re{s-3.3} one can obtain, in a purely algebraic way, the mass spectrum \begin{equation}\lab{s-3.15} (M_{n}^{\pm})_{\cl}^{2}=m_{1}^{2}+m_{2}^{2}\pm 2m_{1}m_{2} \bigl( 1-(-1)^{\cl}\alpha^{2}/n^{2}\bigr) ^{(-1)^{\cl}/2} , \end{equation} where \beql{s-3.12} n=(-1+\sqrt{1+4(-1)^\cl\alpha^{2}})/2+s ,~ s=1,2,... \eeq The branch $(M_{n}^{+})_{\cl}^{2}$ has the correct non-relativistic limit. Expansion to order $1/c^2$ gives the following correction to the energy spectrum: \begin{eqnarray} \lab{s-3.18} E \approx - \fr{m_1m_2\al^2}{2ms^2 \hbar^2} - \fr{\al^4m_1m_2} {4m \hbar^4 s^4c^2} \left[ \left( 1-4\cl + \fr{m_1m_2}{m^2} \right) \fr{1}{2} - 4s(-1)^\cl \right], \nonumber\\~s=1,2,\ldots~. \end{eqnarray} In the single-particle limit $(m_1/m_2\to 0)$ we obtain \begin{equation}\lab{s-3.19} E=m_1\Bigl( 1-(-1)^\ell\alpha ^2/n^2\Bigr) ^{(-1)^\ell /2}-m_1 , \end{equation} which agrees with the one-particle problem in an external scalar or vector field for states with zero orbital quantum number. The vector-type mass spectrum agrees with the result obtained by Barut on the basis of the infinite-component wave equation \cite{B-R}. The existence of an additional algebraic structure of the mass-shell equation permits one to quantize the classical problem without the ambiguities typical of relativistic mechanics \cite{NTSH}. Furthermore, such a quantization method allows one to avoid difficulties connected with the choice of a particular representation (coordinate, momentum, etc.), which is especially important for field-type interactions because of the intricate global structure of the Hamiltonian description (see above). \subsection{Relativistic Hamiltonian models in $\M_2$} Considering the field-type models, we started from the Lagrangian description. It is, however, also possible to construct a number of exactly solvable models directly within the framework of the Hamiltonian description \cite{39,40,89}. Contrary to the models based on the Fokker-type action integral, relativistic Hamiltonian models are not connected with field theory. Nevertheless, they are of interest for a variety of reasons. They can describe phenomenological aspects of the inner structure of mesons and baryons \cite{79,61}. Besides, these models can be useful for the verification of different approximation methods, and may be considered as approximations of more realistic models. This is significant for the explanation of relativistic effects in the well-established non-relativistic oscillator-like quark models of hadrons. The standard quantization procedure consists in the transition from a set of canonical generators to a set of Hermitian operators which determine a unitary representation of the Poincar\'e group. So, in the case of two-dimensional space-time we must put in correspondence with the canonical generators of $\cP(1,1)$ the Hermitian operators $\hK,\hP_+,\hP_-$ in some Hilbert space which satisfy the following bracket relations: \begin{equation} \lab{s-16} [ \hP_{+},\hP_{-}] =0 ,\qquad [ \hK,\hP_{\pm}]=\pm i \hP_{\pm} . \end{equation} This determines the squared total mass operator $\hM^2=\hP_+\hP_-$, and the quantum problem is reduced to the eigenvalue problem \cite{39,40,89}: \beql{s-17} \hM^2\psi=M_{n,\la}^2\psi. \eeq Among the variety of known prescriptions for such a transition we choose the Weyl quantization rule \cite{8}.
For this quantization method it is necessary that the inequalities \beql{s-18} p_a>0 \eeq typical of the front form be satisfied. We note that these conditions are destroyed by field-like interactions. The wave functions $\psi (p)=\langle p|\psi\rangle$ describing the physical (normalized) states in the front form of dynamics constitute the Hilbert space $ {\cal H}_N^F ={\cal L}^2 (\R_+^N ,d\mu _N^F )$ with the inner product \cite{39,40,89}: \begin{equation} \lab{s-19} (\psi _1,\psi ) = \int \d\mu ^F_N (p)\psi ^\ast _1(p) \psi (p), \end{equation} where \begin{equation} \lab{s-20} \d\mu ^F_N (p) = \prod ^N_ {a=1} \frac {\d p_a}{2p_a}\Theta (p_a) \end{equation} is a Poincar\'e-invariant measure and $\Theta(p_a)$ is the Heaviside function. According to the Weyl rule we get the following operators \cite{39,40,89}: \begin{equation} \lab{s-21} \hat P_+=\sum ^N_{a=1}p_a ,\quad \hat K = i \sum_{a=1} ^N p_a \partial / \partial p_a ,\quad \hat P_- = \hat M^2 / \hat P_+ \q , \end{equation} which are Hermitian with respect to the inner product \re{s-19}. They determine a unitary realization of the group ${\cal P}(1,1)$ on the Hilbert space ${\cal H}_N^F$. Here $\hat M$ is determined by \begin{equation} \lab{s-22} \hat M^2=\hat M_f^2 + \hat V, \end{equation} where $\hat M_f^2$ is the free-particle part of the squared mass operator: \begin{equation}\lab{s-22-1} \hat M_f^2=\hat P_+\sum_{a=1}^N\frac{m_a^2}{p_a}. \end{equation} The operator $\hat V$ is an integral operator \begin{equation} \lab{s-23} ( \hat V \psi)(p) = \int \d \mu_N^F (p)V (p,p') \psi (p') \end{equation} with the kernel \begin{eqnarray} \lab{s-24} V(p,p') = \left[ \prod _{d=1}^N \sqrt {4p_dp_d'} \right] \delta (P_+-P_+') \int _{- \infty} ^{\infty} V \left( r \frac {p_b +p_b'}{2}; \frac {r_{1c}}{r} \right) \times \nonumber \\ \times \exp\left[i \sum_ {a=2}^N r_{1a}(p_a-p_a') \right] \prod_ {a=2} ^N \frac {\d r_{1a}}{2 \pi}. \end{eqnarray} The general properties of the Weyl transformation \cite{8} ensure that in the classical limit these operators correspond to the functions \re{s-4}, \re{s-5}. The evolution of the quantum system is described in the front form of dynamics by the Schr\"odinger-type equation \begin{equation} \lab{s-25} {\rm i}\frac{\partial\Psi}{\partial t}=\hat H\Psi, \end{equation} where $\Psi\in{\cal H}_N^F$ and \begin{equation} \lab{s-26} \hat H=\frac{1}{2}(\hat P_++\hat P_-)=\frac{1}{2}(\hat P_++\hat M^2/\hat P_+). \end{equation} Putting $\Psi =\chi (t,P_+)\psi$, where $\psi$ is a function of some Poincar\'e-invariant inner variables, we obtain a stationary eigenvalue problem for the operator $\hat M^2$. In this way a number of exactly solvable two-particle systems were considered in Refs.~\cite{39,40}. For a two-particle system it is convenient to introduce the following Poincar\'e-invariant inner momentum variable \cite{79} \beql{s-1.40} \e = (p_1 - p_2)/2P_+~, \eeq which is linearly related to the variable $\xi= (m_2 - m_1)/2 + m \e$. Then the interaction part of the squared total mass of the system $V$ takes the form: \beql{s-1.41} V(rp_1, rp_2) = F(\rho, \e)~,~~~~~~\rho=rP_+~. \eeq The conditions \re{s-18} lead to the inequality $|\e| < 1/2$.
The Hilbert space $\cH_2^F$ decomposes into the tensor product $ \cH_2^F = h_{int} \otimes \cH_{ext}^F~, $ where the ``inner'' and ``external'' spaces are realized, respectively, by functions $\psi (\e )$ and $\chi (P_+)$ with the inner products \begin{eqnarray} \lab{s-1.46} (\psi_1, \psi) = \fr{1}{2} \int \limits_{-1/2}^{1/2} \fr{\d \e}{1/4 - \e^2} \psi_1^* (\e ) \psi (\e )~, \\ \lab{s-1.47} (\chi_1, \chi) = \int \limits_0^{\infty} \fr{\d P_+}{2P_+}\chi_1^*(P_+) \chi(P_+)~. \end{eqnarray} The operator $\hat M^2$ acts nontrivially only on $h_{int}$. It is an integral operator determined by the rule: \begin{eqnarray} \lab{s-1.48} (\hat M^2 \psi)(\e ) = \left( \fr{2m_1^2}{1+2 \e} + \fr{2m_2^2}{1-2 \e} \right) \psi (\e ) + \nonumber \\ \\ + \int \limits_{-1/2}^{1/2} \d \e' \sqrt{\fr{1-4 \e^2} {1-4 \e'^2}}W(\e, \e' ) \psi (\e')~, \nonumber \end{eqnarray} where the kernel $W(\e , \e')$ has the form: \beql{s-1.49} W(\e , \e') = \fr{1}{2 \pi} \int \limits_{- \infty}^{\infty} \d\rho F \left(\rho, \fr{\e + \e'}{2} \right) {\rm e}^{-i\rho(\e - \e')}~. \eeq The structure of the operator $\hat M^2$ coincides with the one-dimensional variant of the corresponding expression in Ref.~\cite{79}, but in the present treatment the kernel $W(\e , \e')$ is directly related to the classical interaction potential $V$. It is convenient to pass from the functions $\psi(\e)$ with the inner product \re{s-1.46} to the functions \beql{s-1.50} \vp (\e ) = \fr{\psi (\e )}{\sqrt{1/2 - 2 \e^2}} \eeq with the inner product \beql{s-1.51} (\vp_1, \vp) = \int \limits_{-1/2}^{1/2} \d \e \vp_1^* (\e ) \vp (\e )~. \eeq The latter differs from the non-relativistic product only by the limits of integration. The action of $\hat M^2$ on the function $\vp(\e)$ is defined by the equation \beql{s-1.52} (\hat M^2 \vp )(\e ) = \left( \fr{2m_1^2}{1 + 2 \e} + \fr{2m_2^2}{1 - 2 \e} \right) \vp (\e ) + \int \limits_{-1/2}^{1/2} \d \e' W(\e, \e' ) \vp (\e' )~. \eeq Let us consider two simple examples. 1.~\underline{\textit{$\de$-like potential}}. Let us put $F(\rho,\e)= \al\de(\rho)$, $\al={\rm const}$. Then the equation for $\vp(\e)$ has the form \cite{39}: \beql{s-1.108} \left( M^2-\fr{m_1^2}{1/2+\e}-\fr{m_2^2}{1/2-\e}\right)\vp(\e)= \fr{\al}{2\pi}\intl{-1/2}{1/2}\d\e'\vp(\e')~. \eeq Putting \beql{s-1.109} \intl{-1/2}{1/2}\d\e\vp(\e)=C~(\ne 0) \eeq we get from \re{s-1.108} \beql{s-1.110} \vp(\e)=\fr{\al C}{2\pi} \left( M^2-\fr{m_1^2}{1/2+\e}-\fr{m_2^2}{1/2-\e}\right)^{-1}~. \eeq Substituting \re{s-1.110} into \re{s-1.109}, we obtain the equation \beql{s-1.111} \fr{2\pi}{\al}=\intl{-1/2}{1/2}\d\e\left( M^2-\fr{m_1^2}{1/2+\e}-\fr{m_2^2}{1/2-\e}\right)^{-1}~, \eeq which determines the eigenvalues of $M^2$ for bound states. \begin{figure}[h] \input fig12.pic \caption{$\de$-like potential. $f_1(\la)$ is a graph of the right-hand side of equation \protect\re{s-1.113}; $f_2(\la)$ is a graph of the right-hand side of equation \protect\re{s-1.114}.} \end{figure} Let us consider the case of equal particle masses $(m_1=m_2=m/2)$. Then we get from \re{s-1.111} \beql{s-1.112} \fr{2\pi M^2}{\al}=1-\fr{m^2}{2M}\intl{-M}{M}\fr{\d x}{x^2+m^2-M^2}~. \eeq If $M<m$, putting $M=m\sin \la,~0\le\la\le\pi/2$, we come to the following transcendental equation for $\la$: \beql{s-1.113} \fr{2\pi m^2}{\al}= \left( 1-\fr{2\la}{\sin 2\la}\right) \sin^{-2}\la\equiv f_1(\la) ~. \eeq The graph of the right-hand side of this equation (figure 12) shows that it has a unique solution for $-3\pi m^2<\al<0$. This corresponds to attraction. The energy of the bound state has the proper non-relativistic limit.
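As a numerical illustration (ours; $m=1$ and the attractive coupling $\al=-4$, which lies in the interval $(-3\pi m^2,0)$, are sample values), the unique root of \re{s-1.113} can be found with a standard bracketing solver:
\begin{verbatim}
# Solve (s-1.113) for the delta-like potential with equal masses.
import numpy as np
from scipy.optimize import brentq

m, alpha = 1.0, -4.0            # sample values; alpha in (-3*pi*m**2, 0)

def f1(lam):
    return (1.0 - 2.0*lam/np.sin(2.0*lam)) / np.sin(lam)**2

lam = brentq(lambda x: f1(x) - 2.0*np.pi*m**2/alpha, 1e-6, np.pi/2 - 1e-6)
M = m*np.sin(lam)
print(f"lambda = {lam:.4f},  M/m = {M:.4f}")   # roughly 0.96 and 0.82
\end{verbatim}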
It is interesting to point out that a bound state also exists in the case of strong repulsion. If $M>m$, one can put $M=m\ch\la,~\la>0$. Then from \re{s-1.112} we obtain the following equation: \beql{s-1.114} \fr{2\pi m^2}{\al}= \left( 1+\fr{2\la}{\sh 2\la}\right) \ch^{-2}\la\equiv f_2(\la) ~, \eeq which has a unique solution if $\al>\pi m^2$ (figure 12). This solution does not have a non-relativistic limit. 2.~\underline{\textit{Oscillator potential}}. Let us consider an interaction with a quadratic dependence on the coordinates of the following type: \beql{s-1.88} V=\om_0^2r^2p_1p_2= \om_0^2(1/4-\e^2)\rho^2,~\om_0 \in \R. \eeq Then equation \re{s-17} transforms into an ordinary differential equation of the hypergeometric type \cite{39,40}: \begin{eqnarray} \lab{s-1.89} \left(\fr{1}{4}-\e^2\right)\vp''(\e)-2\e\vp'(\e)+\qqqq\qqqq\qqqq\qqq\nn\\ +\left[-\fr{1}{2}+\fr{1}{\om_0^2} \left( M_n^2-\fr{m_1^2}{1/2+\e}-\fr{m_2^2}{1/2-\e}\right)\right]\vp(\e)=0 \end{eqnarray} with the boundary conditions \beql{s-1.86} \lim\limits _{\e\to\pm 1/2}u(\e)\vp(\e)=0,~ \lim\limits _{\e\to\pm 1/2}u(\e)\vp'(\e)=0~. \eeq Equation \re{s-1.89} leads to the mass spectrum \beql{s-1.96} M_n^2=\left[ m+\om_0(n+1/2)\right]^2+\fr{\om_0^2}{4}. \eeq Its nontrivial solutions, which are bounded and square-integrable on the interval $(-1/2,~1/2)$, have the form: \beql{s-1.97} \vp_n(\e)=C_n\left(\fr12+\e\right) ^{m_1/\om_0}\left(\fr12-\e\right)^{m_2/\om_0} P_n^{(2m_2/\om_0,2m_1/\om_0)}(2\e)~. \eeq In equation \re{s-1.97}, $P_n^{(2m_2/\om_0,2m_1/\om_0)}(2\e)$ are Jacobi polynomials \cite{45-2} and $C_{n}$ are normalization constants. In the non-relativistic limit $\hbar\om_0 /mc^2\to 0,~M_n\to m+E_n/c^2$ we obtain the well-known wave functions in the momentum representation and the non-relativistic energy spectrum of the harmonic oscillator: $E=\hbar\om_0(n+1/2)$.
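This limit is immediate to check symbolically. The following SymPy sketch (our check, in units with $\hbar=c=1$) expands the spectrum \re{s-1.96} for small $\om_0$:
\begin{verbatim}
# Expansion of M_n from (s-1.96) in omega_0: the O(omega_0) term is the
# harmonic-oscillator spectrum E_n = omega_0*(n + 1/2).
import sympy as sp

w0, m = sp.symbols('omega_0 m', positive=True)
n = sp.symbols('n', integer=True, nonnegative=True)

Mn = sp.sqrt((m + w0*(n + sp.Rational(1, 2)))**2 + w0**2/4)
E = sp.series(Mn - m, w0, 0, 2).removeO()
print(E)   # -> omega_0*(n + 1/2), possibly printed as omega_0*n + omega_0/2
\end{verbatim}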
It is also possible to construct, within the framework of the two-dimensional variant of the front form, an exactly solvable quantum-mechanical $N$-particle model with the oscillator-like interaction \begin{equation} \lab{s-27} V=\om_0^2\sum_{a<b}r_{ab}^2p_ap_b. \end{equation} Function \re{s-27} gives an $N$-particle generalization of the two-particle interaction \re{s-1.88}, as well as one of the possible relativistic generalizations of the $N$-particle oscillator potential. For this system, by means of the Weyl quantization rule, one can also reduce the eigenvalue problem \re{s-17} to a differential equation. The system with the interaction \re{s-27} has $N-2$ additional integrals of motion which mutually commute and provide the exact integrability of the system in the classical case. They depend nontrivially on products of coordinate and momentum variables \cite{89}. Therefore, in general, the quantization procedure can destroy the commutation relations between these quantities and, as a result, the integrability of the quantum problem. The Weyl quantization rule, however, transforms the classical additional integrals of motion into a set of quantum integrals of motion in involution. This permits one to solve the eigenvalue problem exactly and to obtain the eigenfunctions and eigenvalues of $\hM^2$ (see \cite{89}): \begin{equation} \lab{s-28} M_n^2=\left[\sum_{a=1}^Nm_a+\om_0\sum_{b=1}^{N-1}(n_b+1/2) \right]^2+ \frac{N-1}{4}\om_0^2. \end{equation} The interaction function \re{s-27} may be generalized by adding terms which are linear in the coordinates \begin{equation} \lab{s-29} V\to {\tilde V}=V+\al \sum_{a<b}r_{ab}(p_a-p_b). \end{equation} Such a system also has additional integrals of motion and permits exact solutions in the quantum case \cite{89}. Thus, the Weyl quantization rule preserves the commutation relations of the Poincar\'e group $\cP(1,1)$, as well as the additional symmetries which are responsible for the integrability of this model \cite{89}. As was shown in \cite{NTSH} using the example of a two-particle oscillator-like model in the two-dimensional variant of the front form, the Weyl quantization is not the only quantization rule with this property. The application of different quantization rules preserving the commutation relations of $\cP(1,1)$ may result in different observables, such as the mass spectrum of the system \cite{NTSH}.
\section{Introduction} \label{sect:intro} Recently, many reading comprehension datasets, such as HotpotQA~\cite{yang2018hotpotqa} and WikiHop~\cite{welbl2018constructing}, that require compositional reasoning over several disjoint passages have been introduced. This style of compositional reasoning, also referred to as multi-hop reasoning, first requires finding the correct set of passages relevant to the question and then the answer span in the selected set of passages. Most of these datasets are collected via crowdsourcing, which makes the evaluation of such models heavily reliant on the quality of the collected held-out sets. Crowdsourced datasets often present only a partial picture of the underlying data distribution. Learning complex latent sequential decisions, like multi-hop reasoning, to answer a given question under such circumstances is marred by numerous biases, such as annotator bias~\cite{geva2019we}, label bias~\citep{dua2020benefits,gururangan2018annotation}, survivorship bias~\citep{min2019compositional,jiang2019avoiding}, and ascertainment bias~\cite{jia2017adversarial}. As a result, testing model performance on such biased held-out sets becomes unreliable, as the models exploit these biases and learn shortcuts to get the right answer without learning the right way to reason. \newsavebox{\mybox} \newenvironment{display}{ \fontsize{10pt}{12pt}\selectfont% \begin{lrbox}{\mybox}% \begin{minipage}[][18\baselineskip][t]{18.5\baselineskip} }{ \end{minipage} \end{lrbox}\fbox{\usebox{\mybox}} } \begin{figure}[t] \begin{display} \fontsize{8pt}{9pt}\selectfont \textbf{Question:} The 2011-12 VCU Rams men's basketball team, led by third year head coach Shaka Smart, represented the university which was founded in what year? \\ \textbf{Gold Answer:} 1838 \\ \\ \textbf{Passage 1:} The \hllime{2011-12 VCU Rams men's basketball team represented Virginia Commonwealth University} during the 2011-12 NCAA Division I men's basketball season... \\ \\ \textbf{Passage 2:} \hllime{Virginia Commonwealth University (VCU)} is a public research university located in Richmond, Virginia. \hllime{VCU was founded in 1838} as the medical department of Hampden-Sydney College, becoming the Medical College of Virginia in 1854... \\ \\ \hllime{\textbf{Prediction:} 1838} \\ \\ \textbf{Adversarial context from~\cite{jiang2019avoiding}:} \\ \hlpink{Dartmouth University} is a public research university located in Richmond, Virginia. \hlpink{Dartmouth was founded in 1938} as the medical department of Hampden-Sydney College, becoming the Medical College of Virginia in 1854... \\ \\ \hlpink{ \textbf{New Prediction:} 1938} \\ \end{display} \caption{Example from HotpotQA, showing the reasoning chain for answering the question (in green) and an adversarial context (in pink) introduced by~\citet{jiang2019avoiding}, which confuses the model, causing it to change its prediction because it did not learn the right way to reason.} \label{fig:hotpot_example} \end{figure} Consider an example from HotpotQA in Figure~\ref{fig:hotpot_example}, where the latent entity ``Virginia Commonwealth University'' can be used by the model~\cite{jiang2019avoiding} to bridge the two relevant passages (highlighted in green) from the original dev set and correctly predict the answer ``1838''. However, upon adding an adversarial context (highlighted in pink) to the pool of contexts, the model prediction changes to ``1938'', implying that the model did not learn the right way to reason.
This is because the discriminatively trained passage selector exploits lexical cues like ``founded'' in the second passage and does not pay attention to the complete question. The absence of such adversarial contexts during training allows the model to find incorrect reasoning paths. In this work, we propose a generative context pair selection model, which reasons through the data generation process of how a specific question could have been constructed from a given pair of passages. We show that our proposed model is comparable in performance to state-of-the-art systems, with a minimal drop in performance on the adversarial held-out set. Our generative passage selector shows an improvement of 4.9\% in Top-1 accuracy compared to the discriminatively trained passage selector on the adversarial dev set. \section{Generative Passage Selection} Given a set of contexts $C = \{c_0, c_1, \ldots, c_N\}$, the goal of multi-hop question answering is to combine information from multiple context passages to identify the answer span $a$ for a given question $q$. In \emph{two-hop} QA, the goal is to identify the \emph{pair} of contexts, from all possible pairs $\psi = \{(c_i, c_j): c_i \in C, c_j \in C\}$, that is appropriate for answering the question. Existing models for multi-hop question answering~\cite{tu2020select,chen2019multi} consist of two components: a \emph{discriminative passage selection} model and an \emph{answering model}. Passage selection identifies which pairs of contexts are relevant for answering the given question, i.e., it estimates $p(c_{ij}|q,\psi)$. This is followed by the answering model, which extracts the answer span given a context pair and the question ($p(a|q, c_{ij})$). These are combined as follows: \begin{equation} p(a|q,\psi) = \sum_{c_{ij}} p(a|q, c_{ij}) p(c_{ij}|q,\psi) \end{equation} The discriminative passage selector learns to select a set of contexts by conditioning on the question representation. This learning process does not encourage the model to pay attention to the entire question, which can result in ignoring parts of the question and thus learning spurious correlations. To predict the answer at test time, we do not sum over all pairs of contexts, but instead use the top-scoring pair to answer the question\footnote{Summing over all context pairs, or maintaining a beam of highly ranked pairs, did not yield much higher performance; in particular, it was not worth the additional computation cost.}. In other words, we use \emph{passage selection} to pick the best context pair $c^*_{ij}$, which is used by the answering module to get the answer, $a^* = \argmax p(a|q, c^*_{ij})$. \subsection{Model Description} We propose a joint question-answering model which learns $p(a,q|\psi)$ instead of $p(a|q,\psi)$. This joint question-answer model can be factorized into a generative passage selector and a standard answering model as: \begin{equation} p(a,q|\psi) = \sum_{c_{ij}} p(a|q, c_{ij}) p(q|c_{ij}) p(c_{ij}|\psi) \end{equation} First, a prior, $p(c_{ij}|\psi)$, over the context pairs establishes a measure of compatibility between passages in a particular dataset. Then, a conditional generation model, $p(q|c_{ij})$, establishes the likelihood of generating the given question from a selected pair of passages. Finally, a standard answering model, $p(a|q, c_{ij})$, estimates the likely answer distribution given a question and context pair. The first two terms (prior and conditional generation) can be seen as a generative model that chooses a pair of passages from which the given question could have been constructed. The answering model can be instantiated with any existing state-of-the-art model, such as a graph neural network~\cite{tu2020select,shao2020graph}, entity-based chain reasoning~\cite{chen2019multi}, etc. The process at test time is identical to that with discriminative passage selection, except that the context pairs are scored by taking the entire question into account, $c^*_{ij} = \argmax_{c_{ij}} p(q|c_{ij}) p(c_{ij}|\psi)$; a minimal sketch of this scoring rule is given below.
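The following sketch (our illustration, not the released implementation) makes this scoring rule concrete with a T5 model from the \texttt{transformers} library; \texttt{prior\_logprobs} is a hypothetical stand-in for the log-scores produced by the trained prior network, and \texttt{t5-base} stands in for a fine-tuned checkpoint.
\begin{verbatim}
import itertools
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")        # assumed fine-tuned
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

def question_logprob(question, c_i, c_j):
    """log p(q | c_ij): log-likelihood of the question under the
    decoder, given the concatenated context pair as encoder input."""
    enc = tok(c_i + " " + c_j, return_tensors="pt", truncation=True)
    labels = tok(question, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    return -out.loss.item() * labels.size(1)        # undo the per-token mean

def best_pair(question, contexts, prior_logprobs):
    """c* = argmax over pairs of log p(q|c_ij) + log p(c_ij|psi)."""
    pairs = itertools.combinations(range(len(contexts)), 2)
    return max(pairs, key=lambda ij: question_logprob(
        question, contexts[ij[0]], contexts[ij[1]]) + prior_logprobs[ij])
\end{verbatim}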
\subsection{Model Learning} We use a pre-trained T5-based~\cite{raffel2019exploring} encoder-decoder model for obtaining contextual representations, which are further trained to estimate all the individual probability distributions. For learning the generative model, we train the prior, $p(c_{ij}|\psi)$, and the conditional generation model, $p(q|c_{ij}, \psi)$, jointly. First, the prior network projects the concatenated contextualized representations, $r_{ij}$, of the starting and ending tokens of the concatenated contexts $(c_i; c_j)$ from the encoder to obtain un-normalized scores, which are then normalized across all context pairs via a softmax operator. The loss function increases the likelihood of the gold context pair relative to all possible context pairs. \begin{align} r_{ij} &= encoder(c_i;c_j)\\ s_{ij} &= W^{1 \times d} (r_{ij}[start]; r_{ij}[end]) \end{align} The conditional question generation network gets contextual representations for the context-pair candidates from the encoder and uses them to generate the question via the decoder. We define the objective to increase the likelihood of the question for the gold context pair and the unlikelihood~\cite{welleck2019neural} for a sampled set of \emph{negative} context pairs (Eq.~\ref{eq:gen_loss}): \begin{align} \mathcal{L}(\theta) = & \sum_{t=1}^{|question|} \log p(q_t|q_{<t}, c_{gold}) \nonumber\\ &+ \sum_{n \in |neg. pairs|} \sum_{t=1}^{|question|} \log (1 - p(q_t|q_{<t}, c_{n})) \label{eq:gen_loss} \end{align}
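A PyTorch-style sketch of this objective (our paraphrase of Eq.~\ref{eq:gen_loss}; it assumes the decoder's per-token logits for the gold and negative context pairs have already been computed with teacher forcing) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def generation_loss(gold_logits, neg_logits_list, question_ids):
    """gold_logits: (T, V) decoder logits for the gold context pair;
    neg_logits_list: list of (T, V) logits for sampled negative pairs;
    question_ids: (T,) gold question token ids."""
    idx = question_ids.unsqueeze(1)
    # likelihood of the question under the gold context pair
    ll = F.log_softmax(gold_logits, dim=-1).gather(1, idx).sum()
    # unlikelihood (Welleck et al., 2019) under the negative pairs
    ul = 0.0
    for neg_logits in neg_logits_list:
        p = F.softmax(neg_logits, dim=-1).gather(1, idx)
        ul = ul + torch.log1p(-p.clamp(max=1 - 1e-6)).sum()
    return -(ll + ul)   # negate the objective for gradient descent
\end{verbatim}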
\section{Analysis} \subsection{Context pairs vs. Sentences} Some context selection models for HotpotQA use a multi-label classifier that chooses the top-$k$ sentences~\cite{fang2019hierarchical,clark2017simple}, which results in more limited inter-document interaction than context pairs. To compare these two input types, we construct a multi-label sentence classifier $p(s|q,C)$ that selects relevant sentences. This classifier projects a concatenated sentence and question representation, followed by a sigmoid, to predict if the sentence should be selected. This model has better performance than the context-pair selector but is more biased (Table~\ref{tab:results_ctxtype}). \begin{table}[htp] \centering \small \begin{tabular}{lcc} \toprule \bf{Model} & \bf{Original} & \bf{Adversarial}\\ \midrule \textbf{Discriminative Selectors}\\ Passage, $p(c_{ij}|q, \psi)$ & 95.3 & 96.3 \\ Sentence, $p(s|q, C)$ & 97.6 & 90.9 \\ \addlinespace \textbf{Generative Selectors}\\ Passage, $p(q|c_{ij}, \psi)p(c_{ij}|\psi)$ & 97.5 & 96.3 \\ Sentence, $p(q|s,C)p(s|C)$ & 90.6 & 89.2 \\ Multi-task, $p(q,s|c_{ij}, \psi)p(c_{ij}|\psi)$ & 98.1 & 97.2\\ \bottomrule \end{tabular} \caption{\textbf{Passages vs Sentences:} Passage selection accuracy for models with different context inputs on the development and adversarial sets of HotpotQA. } \label{tab:results_ctxtype} \end{table} We performed similar experiments with the generative model. Along with the \emph{passage} selection model, we train a generative \emph{sentence} selection model by first selecting a set of sentences with Gumbel softmax and then generating the question given the set of sentences. Given that the space of sentence sets is much larger than that of context pairs, the generative sentence selector does not perform well (Table~\ref{tab:results_ctxtype}). To further improve the performance of the generative selector, we add an auxiliary loss term that predicts the relevant sentences in the context pair, $p(q,s|c_{ij}, \psi)$, along with selecting the context pair in a multi-task setting. We see slight performance improvements from using relevant sentences as an additional supervision signal. \section{Experiments and Results} We experiment with two popular multi-hop datasets: HotpotQA~\cite{yang2018hotpotqa} and WikiHop~\cite{welbl2018constructing}. Most SOTA passage selection modules for HotpotQA use a RoBERTa-based~\cite{liu2019roberta} classifier to select the top-$k$ passages given the question, which has an accuracy of $\sim$94.5\%~\cite{tu2020select}. We used a T5-based standard passage selector, $p(c_{ij}|q, \psi)$, as our baseline, which provides performance comparable to the SOTA passage selector (Table \ref{tab:results_rank}). \begin{table}[tb] \centering \small \begin{tabular}{lcc} \toprule \multirow{2}{*}{\bf{Dataset}} & \bf{Standard Selector} & \bf{Generative Selector} \\ & $p(c_{ij}|q, \psi)$ & $p(q|c_{ij}) p(c_{ij}|\psi)$ \\ \midrule \textbf{HotpotQA} & 95.3 & 97.5 \\ \textbf{WikiHop} & 96.8 & 97.2 \\ \bottomrule \end{tabular} \caption{\textbf{Passage selection accuracy:} Accuracy with which the passage pair ($c^*_{ij}$) selected by each technique matches the oracle pair ($c_{gold}$) on the original development set.} \label{tab:results_rank} \end{table} We also use a simple T5-based answering model with performance comparable to SOTA answering models to illustrate the effect of our generative selector on end-to-end model performance. The \emph{oracle} EM/F1 of our answering model, $p(a|q, c_{gold})$, on HotpotQA and WikiHop are 74.5/83.5 and 76.2/83.9, respectively. The overall EM/F1 on WikiHop with the generative model is 73.5/80.2. \subsection{Adversarial Evaluation} We use an existing adversarial set~\cite{jiang2019avoiding} for HotpotQA to test the robustness of the model's multi-hop reasoning capabilities given a confusing passage. This helps measure, quantitatively, the degree of biased correlations learned by the model. In Table~\ref{tab:results_adv}, we show that the standard discriminative passage selector has a much larger performance drop ($\sim$4\%) than the generative selector ($\sim$1\%) on the adversarial dev set~\cite{jiang2019avoiding}, showing that the generative selector is less biased and less affected by conservative changes~\cite{ben2010impossibility} to the data distribution. We can also see in Table~\ref{tab:results_adv} that SOTA models \cite{tu2020select,fang2019hierarchical}, which use the standard passage selector, also have a larger F1 drop when applied to the adversarial set. Table~\ref{tab:sample_questions} shows that the generator was able to generate multi-hop style questions using both contexts.
\begin{table}[tb] \centering \small \begin{tabular}{lcccc} \toprule \multirow{2}{*}{\bf Models} & \multicolumn{2}{c}{\bf Original} & \multicolumn{2}{c}{\bf Adversarial}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} & Acc & F1 & Acc & F1 \\ \midrule \textbf{Standard Selector} & 95.3 & 79.5 & 91.4 & 76.0 \\ \textbf{Generative Selector} & \bf 97.5 & 81.9 & \bf 96.3 & \bf 80.1 \\ \midrule \footnotesize{\citet{tu2020select}} & 94.5 & 80.2 & - & 61.1\\ \footnotesize{\citet{fang2019hierarchical}} & - & \bf 82.2 & - & 78.9\\ \bottomrule \end{tabular} \caption{\textbf{Performance on Adversarial Data:} Passage selection accuracy and end-to-end QA F1 on the original and adversarial sets~\cite{jiang2019avoiding} of HotpotQA. The results of \citet{tu2020select} and \citet{fang2019hierarchical} are taken from \citet{perez2020unsupervised}.} \label{tab:results_adv} \end{table} \begin{table*}[t] \small \centering \begin{tabular}{p{.18\textwidth}p{.77\textwidth}} \toprule \textbf{Context 1, $c_i$:} & The America East Conference is a collegiate athletic conference affiliated with the NCAA Division I, whose members are located mainly in the Northeastern United States. The conference was known as the Eastern College Athletic Conference-North from 1979 to 1988 and the North Atlantic Conference from 1988 to 1996. \\\addlinespace[1mm] \textbf{Context 2, $c_j$:} & The Vermont Catamounts men's soccer team represents the University of Vermont in all NCAA Division I men's college soccer competitions. The team competes in the America East Conference. \\ \addlinespace[1mm] \textbf{Original Question, $q$:} & the vermont catamounts men's soccer team currently competes in a conference that was formerly known as what from 1988 to 1996? \\ \midrule \textbf{Generated Questions: $p(q|c_{ij},\psi)$} & the vermont catamounts men's soccer team competes in what collegiate athletic conference affiliated with the ncaa division i, whose members are located mainly in the northeastern united states? \\ \addlinespace[1mm] & the vermont catamounts men's soccer team competes in a conference that was known as what from 1979 to 1988? \\ \addlinespace[1mm] & the vermont catamounts men's soccer team competes in a conference that was known as what from 1988 to 1996? \\ \bottomrule \end{tabular} \caption{ Sample questions generated by using the question generation decoder with top-k sampling show that the generative model is able to construct (reason about) possible multi-hop questions given a context pair. } \label{tab:sample_questions} \end{table*} \section{Conclusion} We have presented a generative formulation of context pair selection in multi-hop question answering models. Because the context selection model is encouraged to \emph{explain} the entire question, it is less susceptible to bias, performing substantially better on adversarial data than existing methods that use discriminative selection. Our proposed model is simple to implement and can be used with \emph{any} existing (or future) answering model; we will release code to support this integration. Since context pair selection scales quadratically with the number of contexts, it is not ideal for scenarios that involve a large number of possible contexts. However, it allows for deeper inter-document interaction than other approaches that use summarized document representations. With more reasoning steps, selecting relevant documents given only the question becomes challenging, increasing the need for inter-document interaction.
\clearpage \section{Ethical Considerations} This paper focuses on biases found in question answering models that make their reasoning capabilities brittle. It uses an existing method of testing model performance on an adversarial held-out set as an evaluation metric. This work does not deal with any social impacts of biases in natural language processing systems. \section{Related work} Most passage selection models for HotpotQA and WikiHop's distractor-style setup employ RoBERTa-based context selectors conditioned on the question~\cite{tu2020select,fang2019hierarchical}. In an ideal scenario, the absence of the latent entity in the question should not allow the selection of all the oracle passages. However, the high performance of these systems can be attributed to existing bias in HotpotQA~\cite{jiang2019avoiding,min2019compositional}. Another line of work dynamically updates a working memory to re-rank the set of passages at each hop~\cite{das2019multi}. With the release of datasets like SearchQA~\cite{dunn2017searchqa}, TriviaQA~\cite{joshi2017triviaqa}, and NaturalQuestions~\cite{kwiatkowski2019natural}, a lot of work has been done on open-domain passage retrieval, especially in the full Wikipedia setting. However, these questions do not necessarily require multi-hop reasoning. A series of works has tried to match a document-level summarized embedding to the question~\cite{seo2018phrase,karpukhin2020dense,lewis2020retrieval} to obtain the relevant answers. In generative question answering, a few works~\cite{lewis2018generative,dos2020beyond} have used a joint question answering approach on a single context. \end{document}
\section*{Abstract} Recently, the Whitham and capillary-Whitham equations were shown to accurately model the evolution of surface waves on shallow water~\cite{Trillo,bidWh}. In order to gain a deeper understanding of these equations, we compute periodic, traveling-wave solutions to both and study their stability. We present plots of a representative sampling of solutions for a range of wavelengths, wave speeds, wave heights, and surface tension values. Finally, we discuss the role these parameters play in the stability of the solutions. \section{Introduction} The dimensionless Korteweg-deVries equation (KdV) including surface tension, \begin{equation} u_t+u_x+\Big{(}\frac{1}{6}-\frac{T}{2}\Big{)}u_{xxx}+2uu_x=0, \end{equation} is an asymptotic approximation to the surface water-wave problem in the small-amplitude, long-wavelength limit. The variable $u=u(x,t)$ represents dimensionless surface displacement, $t$ represents the dimensionless temporal variable, $x$ represents the dimensionless spatial variable, and $T\ge0$ represents the dimensionless coefficient of surface tension (the inverse of the Bond number). This equation only accurately reproduces the unidirectional, linear phase velocity of the full water wave problem for a small range of wavenumbers near zero. In order to address this issue, Whitham~\cite{Whitham,Whithambook} proposed a generalization of KdV that is now known as the Whitham equation for water waves. In dimensionless form, this equation is given by \begin{equation} u_t+\mathcal{K}u_x+2uu_x=0, \label{Whitham} \end{equation} where $\mathcal{K}$ is the Fourier multiplier defined by the symbol \begin{equation} \widehat{\mathcal{K}f}(k)=\sqrt{\big{(}1+T k^2\big{)}\frac{\tanh(k)}{k}}~\hat{f}(k). \label{K} \end{equation} We refer to equation (\ref{Whitham}) with $T=0$ as the Whitham equation and (\ref{Whitham}) with $T>0$ as the capillary-Whitham, or cW, equation. Equation (\ref{Whitham}) reproduces the unidirectional phase velocity of the water wave problem with $T\ge0$. In summarizing some of the recent work on these equations, we focus on the results that are most directly related to the work we present below. Ehrnstr\"om \& Kalisch~\cite{EK} proved the existence of and computed periodic, traveling-wave solutions to the Whitham equation. Sanford {\emph{et al.}}~\cite{WhithamStability} and Johnson \& Hur~\cite{MatVera} established that large-amplitude, periodic, traveling-wave solutions of the Whitham equation are unstable, while small-amplitude, periodic, traveling-wave solutions are stable if their wavelength is long enough. Moldabayev {\emph{et al.}}~\cite{Moldabayev} presented a scaling regime in which the Whitham equation can be derived from the water wave problem and compared its dynamics with those from other models including the Euler equations. Hur~\cite{WhithamBreaking} proved that solutions to the Whitham equation will break provided that the initial condition is sufficiently asymmetric. Deconinck \& Trichtchenko~\cite{BernardOlga} proved that the unidirectional nature of the Whitham equation causes it to miss some of the instabilities of the Euler equations. Dinvay {\emph{et al.}}~\cite{Dinvay} extended the work of Moldabayev {\emph{et al.}}~\cite{Moldabayev} to include surface tension and showed that the capillary-Whitham equation gives a more accurate reproduction of the free-surface problem than the KdV and Kawahara (fifth-order KdV) equations.
Trillo {\emph{et al.}}~\cite{Trillo} compared Whitham predictions with measurements from laboratory experiments and showed that the Whitham equation provides an accurate model for the evolution of initial waves of depression, especially when nonlinearity plays a significant role. Finally, Carter~\cite{bidWh} compared predictions with another set of laboratory measurements and showed that the Whitham and capillary-Whitham equations both more accurately model the evolution of long waves of depression than do the KdV and Serre (Green-Naghdi) equations. The remainder of the paper is outlined as follows. Section \ref{TWStabSection} describes the solutions we examine, their properties, and the linear stability calculations. Section \ref{Numerics} contains plots of solutions to the Whitham and capillary-Whitham equations, plots of the corresponding stability spectra, and a discussion of these results. This section contains the main results of the paper. Section \ref{Summary} concludes the paper by summarizing our results. \section{Traveling waves and their stability} \label{TWStabSection} We consider periodic, traveling-wave solutions of the form \begin{equation} u(x,t)=f(x-ct), \label{TWansatz} \end{equation} where $f$ is a smooth, periodic function with period $L$. Ehrnstr\"om \& Kalisch~\cite{EK} proved that the Whitham equation admits solutions of this form, and Remonato \& Kalisch~\cite{RemonatoKalisch} computed a variety of cW solutions of this form. Substituting (\ref{TWansatz}) into (\ref{Whitham}) and integrating once gives \begin{equation} -cf+\mathcal{K}f+f^2=B, \label{TWEquation} \end{equation} where $B$ is the constant of integration. This equation is invariant under the transformation \begin{equation} f\rightarrow f+\gamma,~~~~~ c\rightarrow c+2\gamma,~~~~~ B\rightarrow B+\gamma(1-c-\gamma). \label{relations} \end{equation} Therefore, we consider the entire family of solutions of the form given in equation (\ref{TWansatz}) by considering only solutions with zero mean, that is, solutions such that \begin{equation} \int_0^Lf(z)~dz=0. \label{ZeroMeanDef} \end{equation} In order to study the stability of these solutions, we change variables to a moving coordinate frame by introducing the coordinates $z=x-ct$ and $\tau=t$. In the moving coordinate frame, the cW equation is given by \begin{equation} u_{\tau}-cu_z+\mathcal{K}u_z+2uu_z=0. \label{movingWhitham} \end{equation} We consider perturbed solutions of the form \begin{equation} u_{\text{pert}}(z,\tau)=f(z)+\epsilon w(z,\tau)+\mathcal{O}(\epsilon^2), \label{pertAnsatz} \end{equation} where $f(z)$ is a zero-mean, periodic, traveling-wave solution of the cW equation (i.e.~a stationary solution of (\ref{movingWhitham})), $w(z,\tau)$ is a real-valued function, and $\epsilon$ is a small, positive constant. Substituting (\ref{pertAnsatz}) into (\ref{movingWhitham}) and linearizing gives \begin{equation} w_\tau-cw_z+\mathcal{K}w_z+2fw_z+2f_zw = 0. \label{linearizedWhitham} \end{equation} Without loss of generality, assume \begin{equation} w(z,\tau) = W(z)\mbox{e}^{\lambda\tau}+c.c., \label{wForm} \end{equation} where $W$ is a complex-valued function, $\lambda$ is a complex constant, and $c.c.$ denotes complex conjugate. Substituting (\ref{wForm}) into (\ref{linearizedWhitham}) and simplifying gives \begin{equation} (c-2f)W^{\prime}-2f^{\prime}W-\mathcal{K}W^{\prime}=\lambda W, \label{evalProb2} \end{equation} where prime means derivative with respect to $z$. In operator form, equation (\ref{evalProb2}) can be written as \begin{equation} \mathcal{L}W=\lambda W,~~~~{\text{where}}~~~\mathcal{L}=(c-2f)\partial_z-2f^{\prime}-\mathcal{K}\partial_z. \label{evalProb} \end{equation} We are interested in finding the set of $\lambda$ that lead to bounded solutions of (\ref{evalProb}). In other words, we are interested in finding the spectrum, $\sigma$, of the operator $\mathcal{L}$. The spectrum determines the stability of the solutions. If $\sigma(\mathcal{L})$ has no elements with positive real part, then the solution is said to be spectrally stable. If $\sigma(\mathcal{L})$ has one or more elements with positive real part, then the solution is said to be unstable. Since the capillary-Whitham equation is Hamiltonian, see Hur \& Pandey~\cite{HurPandey}, $\sigma(\mathcal{L})$ is symmetric under reflections across both the real and imaginary axes. We use this fact as one check on our numerical results.
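In practice, $\sigma(\mathcal{L})$ is approximated with the Fourier-Floquet-Hill method discussed in Section \ref{Numerics}. A minimal sketch of such a computation (our illustration; it assumes the Fourier coefficients of $f$ and its wave speed $c$ have been computed by the traveling-wave solver) uses the fact that $\mathcal{L}W=\partial_z\left[(c-2f)W-\mathcal{K}W\right]$, so the truncated operator becomes a matrix acting on the Fourier modes $\mbox{e}^{{\rm{i}}(n+\mu)z}$ for each Floquet parameter $\mu$:
\begin{verbatim}
# Fourier-Floquet-Hill sketch for the spectrum of
#   L W = d/dz [ (c - K)W - 2 f W ],
# assuming f is 2*pi-periodic with Fourier coefficients fhat[j]
# (a dict mapping integer j to a complex coefficient) and speed c.
import numpy as np

def symbol_K(k, T):
    """Symbol K(k) = sqrt((1 + T k^2) tanh(k)/k), with K(0) = 1."""
    k = np.asarray(k, dtype=float)
    out = np.ones_like(k)
    nz = k != 0
    out[nz] = np.sqrt((1 + T*k[nz]**2)*np.tanh(k[nz])/k[nz])
    return out

def ffh_eigenvalues(fhat, c, T, mu, N):
    """Eigenvalues of the (2N+1)x(2N+1) truncation at Floquet exponent mu."""
    n = np.arange(-N, N + 1)
    k = n + mu
    conv = np.array([[2.0*fhat.get(r - s, 0.0) for s in n] for r in n])
    L = 1j*k[:, None]*(np.diag(c - symbol_K(k, T)) - conv)
    return np.linalg.eigvals(L)
\end{verbatim}
Sweeping $\mu$ over a fine grid in $(-1/2,1/2]$ and collecting the eigenvalues approximates $\sigma(\mathcal{L})$; the solution is deemed spectrally stable when no eigenvalue has a positive real part (to within numerical tolerance).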
\section{Numerical results} \label{Numerics} In this section, we present plots of periodic, traveling-wave solutions of the Whitham and capillary-Whitham equations along with their stability spectra. The solutions were computed using a generalization of the method presented by Ehrnstr\"om \& Kalisch~\cite{EK}. Following the work of Sanford {\emph{et al.}}~\cite{WhithamStability}, the stability of these solutions was computed by the Fourier-Floquet-Hill method of Deconinck \& Kutz~\cite{DK}. \subsection{The Whitham equation} In order to best understand the role surface tension plays in the stability of periodic, traveling-wave solutions to the capillary-Whitham equation, we begin by reviewing results from the Whitham equation (i.e.~the zero surface tension case). Hur \& Johnson~\cite{MatVera} proved that all small-amplitude Whitham solutions with $k<1.145$ (where $k$ is the wavenumber of the solution) are stable while all small-amplitude solutions with $k>1.145$ are unstable. Sanford {\emph{et al.}}~\cite{WhithamStability} numerically corroborated this result, presented numerical results that suggest that all large-amplitude Whitham solutions are unstable, and established that $2\pi$-periodic, traveling-wave solutions with ``small'' wave heights are spectrally stable while those with ``large'' wave heights are unstable. Figure \ref{Wsolns} contains plots of four $2\pi$-periodic solutions to the Whitham equation with moderate wave heights. As the wave height, $H$, of the solution increases, so does the solution's wave speed, $c$. Figure \ref{Wsolnsstab} contains plots of the stability spectra corresponding to these solutions. The spectrum of the solution in Figure \ref{Wsolns}(a), see Figure \ref{Wsolnsstab}(a), lies entirely on the imaginary axis and therefore this solution is spectrally stable. Further simulations (not included) show that all solutions with smaller wave heights (with period $2\pi$) are also spectrally stable. The spectra corresponding to the other three solutions all include eigenvalues with positive real parts and therefore they are unstable. As the wave height of the solution increases, the maximum instability growth rate (i.e.~the real part of the eigenvalue with maximal real part) also increases. All of these spectra include the ``figure 8'' associated with the modulational (Benjamin-Feir) instability. \begin{figure} \centering \includegraphics[width=12cm]{Whithamsolnplots.eps} \caption{Plots of four moderate wave height, $2\pi$-periodic, zero-mean solutions of the Whitham equation.
The wave speeds and wave heights of these solutions are (a) $c=0.89236$, $H=0.17148$, (b) $c=0.92685$, $H=0.29901$, (c) $c=0.96612$, $H=0.43667$, and (d) $c=0.97249$, $H=0.47203$. Note that the vertical scale is different in each of the plots.} \label{Wsolns} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{Whithamsolnspectra.eps} \caption{Spectra of the solutions shown in Figure \ref{Wsolns}. Note that both the horizontal and vertical scales vary from plot to plot.} \label{Wsolnsstab} \end{figure} Whitham~\cite{Whithambook} conjectured that the Whitham equation admits a highest traveling-wave solution and that it is nonsmooth. Recently, Ehrnstr\"om and Wahl\'en~\cite{EhrnstromWahlen} proved this conjecture. Figure \ref{WsolnsLarge} includes plots of six solutions that are somewhat near this highest wave. The inset plots are zooms of the solutions near their peaks and show that all of the solutions we consider are smooth. Note that it is computationally expensive to study solutions near the highest wave due to the number of Fourier modes required to accurately resolve the solutions. To our knowledge, this is the first time that the stability of solutions of the Whitham equation with wave heights this large has been studied. Figure \ref{WsolnsLargestab} includes the stability spectra corresponding to the solutions in Figure \ref{WsolnsLarge}. All six of these solutions are unstable. As wave height (or wave speed) increases, the maximal instability growth rate increases. The stability spectra undergo two bifurcations as the wave height increases. The first bifurcation is shown in Figure \ref{WsolnsLargestab}(a) and the second is shown in Figure \ref{WsolnsLargestab}(b). The first occurs when the top part of the figure 8 bends down and touches the bottom part. (See the transition from the spectrum in Figure \ref{Wsolnsstab}(d) to the blue spectrum in Figure \ref{WsolnsLargestab}(a).) This causes the (vertical) figure 8 to transition into a horizontal figure 8 inside of a vertical ``peanut''. (See the orange spectrum in Figure \ref{WsolnsLargestab}(a).) The second bifurcation occurs when the horizontal figure 8 collapses toward the origin and the peanut pinches off into two ovals centered on the $\mathcal{R}(\lambda)$ axis. (See Figure \ref{WsolnsLargestab}(b).) Note that the two yellow ``dots'' near $\lambda=\pm0.32$ are actually small ovals. As wave height increases even further, these ovals decrease in diameter and move further away from the $\mathcal{I}(\lambda)$ axis. Exactly what happens to the stability spectra as the wave height approaches the maximal wave height remains an open question. \begin{figure} \centering \includegraphics[width=12cm]{WhithamsolnsLarge.eps} \caption{Figures (a) and (b) each contain three plots of large wave height, $2\pi$-periodic, zero-mean solutions of the Whitham equation. The solutions are very similar and essentially lie on top of one another. The inset plots are zooms of the intervals surrounding the peaks of the solutions.
The wave speeds and heights of these solutions, in order of increasing speed, are (a) blue $c=0.97266$, $H=0.47330$; orange $c=0.97276$, $H=0.47405$; and yellow $c=0.97351$, $H=0.48007$; and (b) blue $c=0.97451$, $H=0.50058$; orange $c=0.97501$, $H=0.499599$; and yellow $c=0.97596$, $H=0.52013$.} \label{WsolnsLarge} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{WhithamsolnspectraLarge.eps} \caption{Spectra of the solutions shown in Figure \ref{WsolnsLarge}.} \label{WsolnsLargestab} \end{figure} \subsection{The capillary-Whitham equation} In this subsection, we study periodic, traveling-wave, zero-mean solutions of the cW equation and their stability. Due to the massive number of solutions this equation admits, our study is not meant to be exhaustive. We present plots of solutions and their stability spectra and end with a discussion that summarizes our observations. Note that the solutions presented herein cannot be directly compared with those of Remonato \& Kalisch~\cite{RemonatoKalisch} because we required the solutions to have zero mean while they did not. However, the two sets of solutions are related by equation (\ref{relations}). We begin by justifying the values we selected for the capillarity/surface tension parameter, $T$. The Fourier multiplier $\mathcal{K}$ undergoes a bifurcation at $T=\frac{1}{3}$. When $T=0$, $\mathcal{K}$ decreases monotonically to zero as the wavenumber of the solution, $k>0$, increases. When $T\in(0,\frac{1}{3})$, $\mathcal{K}$ achieves a unique local minimum at some wavenumber $k^*\in(0,\infty)$. When $T>\frac{1}{3}$, $\mathcal{K}$ increases monotonically for all $k>0$ and therefore there is no local minimum. Because of this behavior, we selected $T=0.2,~\frac{1}{3},$ and $0.4$. Additionally, we study solutions for $T\approx0.1582$ (see Section \ref{specialT} for details). Figure \ref{DispPlots} contains plots of $\mathcal{K}$ versus $k$ for each of these $T$ values and demonstrates the bifurcation. \begin{figure} \centering \includegraphics[width=12cm]{DispPlots.eps} \caption{Plots of $\mathcal{K}(k)$ versus $k$ for each of the five values of $T$ examined herein.} \label{DispPlots} \end{figure} \subsubsection{Surface tension parameter $T=0.2$} When $T=0.2$, we were able to compute solutions with wavenumbers greater than $k\approx1.9$, but were not able to compute solutions with wavenumber less than $k\approx1.9$. This is interesting because $k^*\approx1.9$. (Recall that $k^*$ is the location of the local minimum of $\mathcal{K}$ when $T<\frac{1}{3}$.) Figure \ref{fig:twoBifurcation} includes a portion of the wave height versus wave speed bifurcation diagram for this case. It includes the bifurcation branches corresponding to the $k=2,\dots, 6$ solutions as well as an additional branch that splits off from the $k=4$ branch when $H>0$. The colored dots correspond to solutions that are examined in more detail below. Unlike the Whitham ($T=0$) case, the diagram shows that as the speed of the solutions decreases, the wave height of the solutions increases. Additionally, it is unclear if any of the branches have upper bounds. \begin{figure} \begin{center} \includegraphics[width=12cm]{02bif-withrepsets.eps} \caption{A portion of the wave height versus wave speed bifurcation diagram for the cW equation with $T=0.2$.
The colored dots correspond to solutions that are examined in Figures \ref{fig:twokTwoSolutions}-\ref{fig:twokOneSolutions}.} \label{fig:twoBifurcation} \end{center} \end{figure} Figure \ref{fig:twokTwoSolutions}(a) includes plots of four different $k=2$ (i.e.~period $\pi$), traveling-wave solutions to the cW equation with $T=0.2$. Unlike solutions of the Whitham equation, these solutions are waves of depression instead of waves of elevation. As the wave height increases, the solution speed decreases. Although the bifurcation diagram suggests that a maximal wave height does not exist, we were not able to prove this, numerically or otherwise. As wave speed decreases, the solutions appear to be approaching the sum of a pair of negative delta functions. Figure \ref{fig:twokTwoSolutions}(b) contains plots of the corresponding linear stability spectra. All four of these solutions are unstable. The figure 8 has switched from being vertical (in the Whitham case) to horizontal (in the cW case). Just as with solutions to the Whitham equation, the maximal instability growth rate of these solutions increases as their wave heights increase. Additional numerical simulations (not shown) establish that all small-amplitude, traveling-wave solutions with $k=2$ and $T=0.2$ are unstable. \begin{figure} \begin{center} \includegraphics[scale=0.4]{twokTwoSolutions.eps} \includegraphics[scale=0.4]{twokTwoStability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T=0.2$ and $k=2$ and (b) their stability spectra.} \label{fig:twokTwoSolutions} \end{center} \end{figure} Figure \ref{fig:twokFiveSolutions}(a) includes plots of four $k=5$, traveling-wave solutions to the cW equation with $T=0.2$. Figure \ref{fig:twokFiveSolutions}(b) shows the corresponding stability spectra. These four solutions have approximately the same wave heights as the four $k=2$ solutions shown in Figure \ref{fig:twokTwoSolutions}(a). Other than their period, the $k=5$ solutions are qualitatively similar to those with $k=2$. The solution with the smallest wave height (blue) is spectrally stable. This is qualitatively different from what happens in the $T=0$ case where all small-amplitude solutions with $k>1.145$ are unstable. This suggests that surface tension provides a stabilizing effect to small-amplitude solutions with higher wavenumbers in this case. The three solutions with larger wave height are unstable and the maximal instability growth rate increases with wave height. Additional numerical simulations (not shown) establish that solutions with $k\in(2,5)$ have similar properties to the solutions presented in Figures \ref{fig:twokTwoSolutions}-\ref{fig:twokFiveSolutions}. There exists a $k^\dagger\in(2,5)$ where the small-amplitude solutions switch from being unstable to stable. \begin{figure} \begin{center} \includegraphics[scale=0.4]{twokFiveSolutions.eps} \includegraphics[scale=0.4]{twokFiveStability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T=0.2$ and $k=5$ and (b) their stability spectra.} \label{fig:twokFiveSolutions} \end{center} \end{figure} Figure \ref{fig:twokOneSolutions} contains plots of four representative solutions from the bifurcation branch that splits off from the $k=4$ branch. These solutions are qualitatively different from the solutions examined above, but have approximately the same wave heights as the solutions in Figures \ref{fig:twokTwoSolutions}(a) and \ref{fig:twokFiveSolutions}(a).
Unsurprisingly, the stability spectra are also qualitatively different from those examined above. The spectrum of each solution includes a horizontal figure 8 centered at the origin. (In the figure, these appear as horizontal lines along $\mathcal{I}(\lambda)=0$ due to scaling.) Each spectrum has six additional ``bubbles'' centered on the $\mathcal{I}(\lambda)$ axis. (Only four of the blue bubbles are easily visible due to the scaling used.) Surprisingly, there does not appear to be a simple relationship between wave height and the maximum instability growth rate. The solution with the smallest wave height (the blue solution) has the largest maximum instability growth rate. \begin{figure} \begin{center} \includegraphics[scale=0.4]{twokOneSolutions.eps} \includegraphics[scale=0.4]{twokOneStability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T=0.2$ from the solution branch that does not touch the horizontal axis in Figure \ref{fig:twoBifurcation} and (b) their stability spectra.} \label{fig:twokOneSolutions} \end{center} \end{figure} \subsubsection{Surface tension parameter $T=\frac{1}{3}$} Figure \ref{fig:thirdBifurcation} includes a portion of the bifurcation diagram for $T=\frac{1}{3}$. The colored dots correspond to solutions that are examined in more detail below. These solutions have approximately the same wave heights as the colored solutions examined in other sections. The bifurcation diagram shows that as wave speed decreases, wave height increases for all branches (that we examined). \begin{figure} \begin{center} \includegraphics[width=12cm]{33bif-withrepsets} \caption{A portion of the bifurcation diagram for the capillary-Whitham equation with $T=\frac{1}{3}$. The colored dots correspond to solutions that are examined in more detail below.} \label{fig:thirdBifurcation} \end{center} \end{figure} Figures \ref{fig:thirdkOneSolutions} and \ref{fig:thirdkTwoSolutions} include plots of $k=1$ and $k=2$ solutions and their stability spectra for $T=\frac{1}{3}$. All eight of these solutions are unstable and their spectra are shaped like horizontal figure 8s. As wave height increases, the maximal instability growth rate also increases. The $k=2$ solutions have larger instability growth rates than the corresponding $k=1$ solutions with the same wave height. Figure \ref{fig:thirdkFiveSolutions} includes plots of the $k=5$ solutions and their stability spectra for $T=\frac{1}{3}$. Other than their periods, these solutions appear to be qualitatively similar to the $k=1$ and $k=2$ solutions. However, their stability spectra lie completely on the $\mathcal{I}(\lambda)$ axis. This means that all four of these solutions, regardless of their wave height, are spectrally stable. This is quite a surprising result; it suggests that, for this value of $T$, there are bands and gaps in $k$ space where periodic, traveling-wave solutions to the cW equation are stable/unstable.
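The location of the local minimum $k^*$ of $\mathcal{K}$ plays a recurring role in this discussion. For readers who wish to reproduce the quoted values of $k^*$, the following minimal Python sketch locates the minimum numerically, assuming the commonly used full-dispersion symbol $\mathcal{K}(k)=\sqrt{(1+Tk^{2})\tanh(k)/k}$ (the precise definition used in this paper appears earlier in the text; the function names are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def K(k, T):
    # Assumed capillary-Whitham multiplier: sqrt((1 + T k^2) tanh(k)/k).
    return np.sqrt((1.0 + T * k**2) * np.tanh(k) / k)

def kstar(T):
    # Local minimum k* of K, which exists for 0 < T < 1/3.
    return minimize_scalar(lambda k: K(k, T),
                           bounds=(1e-3, 20.0), method='bounded').x

for T in (0.2, 0.1582):
    print(T, kstar(T))
# The output rounds to k* ~ 1.9 for T = 0.2 and k* ~ 2.29 for
# T ~ 0.1582, matching the values quoted in the text.
\end{verbatim}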
\begin{figure} \begin{center} \includegraphics[scale=0.4]{thirdkOneSolutions.eps} \includegraphics[scale=0.4]{thirdkOneStability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T=\frac{1}{3}$ and $k=1$ and (b) their stability spectra.} \label{fig:thirdkOneSolutions} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.4]{thirdkTwoSolutions.eps} \includegraphics[scale=0.4]{thirdkTwoStability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T=\frac{1}{3}$ and $k=2$ and (b) their stability spectra.} \label{fig:thirdkTwoSolutions} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.4]{thirdkFiveSolutions.eps} \includegraphics[scale=0.4]{thirdkFiveStability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T=\frac{1}{3}$ and $k=5$ and (b) their stability spectra.} \label{fig:thirdkFiveSolutions} \end{center} \end{figure} \subsubsection{Surface tension parameter $T=0.4$} Figure \ref{fig:fourBifurcation} includes a portion of the bifurcation diagram for $T=0.4$. The colored dots correspond to solutions that are examined in more detail below. These solutions have approximately the same wave heights as the colored solutions examined in other sections. The bifurcation diagram shows that as wave speed decreases, wave height increases for all branches (that we examined). \begin{figure} \begin{center} \includegraphics[width=12cm]{04bif-withrepset} \caption{A portion of the bifurcation diagram for the cW equation with $T=0.4$. The colored dots correspond to solutions that are examined in more detail below.} \label{fig:fourBifurcation} \end{center} \end{figure} Figure \ref{fig:fourkOneSolutions} includes plots of four $k=1$ solutions to the cW equation with $T=0.4$ and their stability spectra. All four of these solutions are unstable and the growth rate of the instabilities increases as the wave height of the solution increases. Figure \ref{fig:fourkTwoSolutions} shows that all four $k=2$ solutions are stable while Figure \ref{fig:fourkFiveSolutions} shows that all four $k=5$ solutions are unstable. This suggests that, for this value of $T$, there are bands and gaps in $k$ space where periodic, traveling-wave solutions to the cW equation are stable/unstable. These bands and gaps are likely related to the bands and gaps in the $T=\frac{1}{3}$ case, but we were not able to find a simple relationship between them. 
\begin{figure} \begin{center} \includegraphics[scale=0.4]{fourkOneSolutions.eps} \includegraphics[scale=0.4]{fourkOneStability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T=0.4$ and $k=1$ and (b) their stability spectra.} \label{fig:fourkOneSolutions} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.4]{fourkTwoSolutions.eps} \includegraphics[scale=0.4]{fourkTwoStability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T=0.4$ and $k=2$ and (b) their stability spectra.} \label{fig:fourkTwoSolutions} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.4]{fourkFiveSolutions.eps} \includegraphics[scale=0.4]{fourkFiveStability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T=0.4$ and $k=5$ and (b) their stability spectra.} \label{fig:fourkFiveSolutions} \end{center} \end{figure} \subsubsection{Surface tension parameter $T\approx0.1582$} \label{specialT} Remonato \& Kalisch~\cite{RemonatoKalisch} presented the following formula, which allows $T$ values to be chosen so that solutions corresponding to any two $k$ values will have the same wave speed in the small-amplitude limit: \begin{equation} T=T(k_{1},k_{2}) = \frac{k_{1}\tanh(k_{2})-k_{2}\tanh(k_{1})}{k_{1}k_{2}(k_{1}\tanh(k_{1})-k_{2}\tanh(k_{2}))}. \end{equation} Using this formula with $k_1=1$ and $k_2=4$ gives $T\approx0.1582$, which is the final $T$ value we examine (this value is checked numerically below). A portion of the corresponding bifurcation diagram is included in Figure \ref{fig:onefourBifurcation}. That the $k=1$ and $k=4$ solutions have the same speed in the small-amplitude limit is reflected in the two branches leaving the same point on the $c$ axis near $c=0.94$. These branches correspond to the $k=4$ solutions and to a $k=(1,4)$ branch. Here the notation $k=(a,b)$ means that the solution is composed of a linear combination of the $k=a$ and $k=b$ wavenumbers in the small-amplitude limit. We were unable to isolate the $k=1$ solution. This may be related to the fact that $1<k^*\approx2.29$ when $T\approx0.1582$. Similarly, we were not able to compute solutions on the $k=2$ branch in this case. \begin{figure} \begin{center} \includegraphics[scale = 0.6]{onefour-bif-with-rep-set} \caption{A portion of the bifurcation diagram for the cW equation with $T\approx 0.1582$. The colored dots correspond to solutions that are examined in more detail below.} \label{fig:onefourBifurcation} \end{center} \end{figure} Figure \ref{fig:onefourkFourSolutions} shows that solutions on the $k=4$ branch with small wave height are stable, while those with large wave height are unstable. These solutions do not have the striking delta-function-like wave-of-depression form that the cW solutions presented above have. As wave height increases, the growth rates of the instabilities also increase. \begin{figure} \begin{center} \includegraphics[scale=0.4]{onefivek4Solutions.eps} \includegraphics[scale=0.4]{onefivek4Stability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T\approx0.1582$ and $k=4$ and (b) their stability spectra.} \label{fig:onefourkFourSolutions} \end{center} \end{figure} Figure \ref{fig:onefourkOneFourSolutions} includes plots of four representative $k=(1,4)$ solutions and their stability spectra. These solutions do not have the delta-function-like form of the majority of the other cW solutions.
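As a brief aside, the resonance formula above is easy to verify numerically; the following minimal sketch (the function name is ours) reproduces the value of $T$ used throughout this subsection:
\begin{verbatim}
import numpy as np

def T_resonant(k1, k2):
    # Formula of Remonato & Kalisch quoted above: the value of T for
    # which the k1 and k2 modes travel at the same small-amplitude speed.
    num = k1 * np.tanh(k2) - k2 * np.tanh(k1)
    den = k1 * k2 * (k1 * np.tanh(k1) - k2 * np.tanh(k2))
    return num / den

print(T_resonant(1.0, 4.0))   # 0.15816..., i.e. T ~ 0.1582
\end{verbatim}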
The spectra of these $k=(1,4)$ solutions are similar to those in Figure \ref{fig:twokOneSolutions}(b). This is likely due to the fact that both of these sets of solutions have more than one dominant Fourier mode. Each spectrum has a horizontal figure 8 centered at the origin and six bubbles centered on the $\mathcal{I}(\lambda)$ axis. The solution with the largest wave height is not the most unstable solution. \begin{figure} \begin{center} \includegraphics[scale=0.4]{onefivek14Solutions.eps} \includegraphics[scale=0.4]{onefivek14Stability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T\approx0.1582$ and $k=(1,4)$ and (b) their stability spectra.} \label{fig:onefourkOneFourSolutions} \end{center} \end{figure} Figure \ref{fig:onefourBifurcation} shows that there is a secondary branch that splits off from the $k=5$ branch at $H\approx0.39$. As expected, the solutions along this branch do not have a single dominant wavenumber. The solutions on this branch are $k=(1,5)$ solutions until the branch curves around and heads upward. The solutions after this turning point are $k=(1,5,6)$ solutions. Figure \ref{fig:onefourkOneFiveSolutions} includes plots of four $k=(1,5)$ solutions and their stability spectra. All four of these solutions are unstable and have complicated spectra. Figure \ref{fig:onefourkOneFiveSixSolutions} includes plots of four representative $k=(1,5,6)$ solutions and shows that they are unstable. All four solutions have horizontal figure 8s centered at the origin and four bubbles centered along the $\mathcal{I}(\lambda)$ axis. The solution with the smallest wave height is the most unstable. \begin{figure} \begin{center} \includegraphics[scale=0.4]{onefivek15Solutions.eps} \includegraphics[scale=0.4]{onefivek15Stability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T\approx0.1582$ and $k=(1,5)$ and (b) their stability spectra.} \label{fig:onefourkOneFiveSolutions} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.4]{onefivek156Solutions.eps} \includegraphics[scale=0.4]{onefivek156Stability.eps} \caption{Plots of (a) four representative solutions of the cW equation with $T\approx0.1582$ and $k=(1,5,6)$ and (b) their stability spectra.} \label{fig:onefourkOneFiveSixSolutions} \end{center} \end{figure} \section{Summary} \label{Summary} Bottman \& Deconinck~\cite{BD} proved that all traveling-wave solutions of the KdV equation are stable. This is quite different from the Whitham equation, where all large-amplitude solutions are unstable~\cite{WhithamStability} and only small-amplitude solutions with a wavenumber larger than $k=1.145$ are unstable~\cite{MatVera,WhithamStability}. We began by examining large-amplitude, periodic, traveling-wave solutions to the Whitham equation (zero surface tension). We found that all such solutions are unstable and that their stability spectra undergo two bifurcations as wave height increases. Next, we examined periodic, traveling-wave solutions to the capillary-Whitham equation with four different $T$ values. As expected, we found that the cW solutions and their stability were more diverse than in the Whitham equation case. Most of the solutions we examined were waves of depression. In contrast, all periodic, traveling-wave solutions to the Whitham equation are waves of elevation. We found that as wave height increases, wave speed decreases (and can become negative).
We were not able to determine if the cW equation admits a solution with maximal wave height. In contrast, the Whitham equation has a solution with maximal wave height. We computed periodic, traveling-wave solutions with wavenumbers $k>k^*$, where $k^*$ is the location of the local minimum of the Fourier multiplier $\mathcal{K}$. We were not able to compute any periodic, traveling-wave solutions with $k<k^*$. In addition to computing a variety of solutions with a single dominant wavenumber, we computed four families of solutions that had multiple dominant wavenumbers. We examined the stability of all of the cW solutions we computed. We found that some were stable and others were unstable. There appear to be bands and gaps in $k$ and $T$ space that separate small- and large-amplitude solutions to the cW equation by stability. The exact structure of these bands and gaps remains an open question. If the solutions had a single dominant wavenumber, then the maximal instability growth rate increased with wave height. If the solutions had multiple dominant wavenumbers, there was not a simple relationship between instability growth rates and wave heights. We found regions of $k$ and $T$ space where all solutions appeared to be stable, regardless of wave height. Finally, we found some solutions for which the modulational (Benjamin-Feir) instability was the dominant instability, and others for which a different instability was dominant.

We thank Mats Ehrnstr\"om, Vera Hur, Mat Johnson, and Logan Knapp for helpful discussions. This material is based upon work supported by the National Science Foundation under grant DMS-1716120.
\section{Introduction} In the framework of perturbative QCD, in the Regge kinematics, particle interaction is described by the exchange of reggeized gluons which emit and absorb real gluons with certain production vertexes (``Lipatov vertexes'')~\cite{bfkl}. Pomeron interaction leads to their splitting. Emission of real gluons from split reggeized gluons is described by vertexes introduced by J.~Bartels (``Bartels vertexes'')~\cite{bartels}. Originally both types of vertexes were calculated directly from the relevant simple Feynman diagrams in the Regge kinematics. Later a powerful effective action formalism was proposed by L.N.~Lipatov~\cite{lipatov}, which considers reggeized and normal gluons as independent entities from the start and thus allows one to calculate all QCD diagrams in the Regge kinematics automatically, in a systematic and self-consistent way. However, the resulting expressions are 4-dimensional and need reduction to the final 2-dimensional transverse form. In the paper of two co-authors of the present paper (M.A.B. and M.I.V.)~\cite{bravyaz} it was demonstrated that the diffractive amplitude for the production of a real gluon calculated by means of the Lipatov effective action, after integration over the longitudinal variables, goes over into the expression obtained via the Lipatov and Bartels vertexes. However, in the process of reduction to the transverse form a certain prescription had to be used to give sense to divergent integrals. In this paper we generalize these results to the more complicated case of production of two real gluons with a large difference in rapidity. This case is of importance in view of the contradiction between the results obtained by Yu.~Kovchegov and K.~Tuchin~\cite{kovt}, on the one hand, and J.~Bartels, M.~Salvadore and G.P.~Vacca~\cite{barsv}, on the other, for the inclusive cross-section of gluon production in the Regge kinematics. Analysis of these results requires comparing expressions for the two-gluon production amplitude in the Lipatov-Bartels and dipole pictures. The study of this amplitude in the Lipatov effective action formalism is thus a valuable test of the presently used expressions. Our results demonstrate that the Lipatov effective action leads to the standard expression for the two-gluon production amplitude with the Lipatov and Bartels vertexes, provided the same prescription for the longitudinal integration is used as in~\cite{bravyaz}. \section{The set of diagrams} Our purpose is to study the amplitude for the production of two gluons in a diffractive collision on a colorless target. To simplify, we restrict ourselves to the case when both colliding particles are quarks. This will introduce an infrared divergence in the final integrations over the transferred transverse momenta, which would be absent for realistic colorless participants. However, our final goal is only to obtain the amplitudes at fixed transverse variables, to be able to compare with the corresponding expression in the Lipatov-Bartels formalism. For this particular purpose using quarks as the projectile and target is sufficient, and it substantially reduces the number of diagrams to study. Contributions to the process we study start at the perturbative order $g^6$, to which we limit ourselves here. All relevant diagrams can then be split into three groups, shown in Figs.~\ref{Fig1}, \ref{Fig2} and \ref{Fig3}. Reggeized gluons are shown by wavy lines.
The first group (Fig.~\ref{Fig1}) consists of diagrams in which one gluon (the harder) is emitted before the reggeized gluon splits into two and the other at the splitting vertex. The second group (Fig.~\ref{Fig2}) consists of diagrams in which the harder gluon is emitted at the splitting vertex. Finally, the third group (Fig.~\ref{Fig3}) is represented by diagrams in which the reggeized gluons do not split at all. Note that the contribution from the diagrams in which the harder gluon is emitted before splitting and the softer after splitting is equal to zero, since the splitting vertex without emission of a real gluon vanishes due to signature conservation. We denote the momentum of the incident quark by $k$ and that of the target quark by $l$. Their final momenta are $k'$ and $l'$ respectively. We assume that $k_-=k_\perp=l_+=l_\perp=0$. The momenta of the emitted gluons are $p_1$ and $p_2$ with $p_{1+}\gg p_{2+}$. For all longitudinal components we use the definition $a_{\pm}=a_{0}\pm a_{3}$, so that $ab=\frac{1}{2}a_{+}b_{-}+\frac{1}{2}a_{-}b_{+}+(ab)_{\perp}$. In order to have uniform notations for all diagrams, we use the following definitions of the various transferred momenta: \begin{equation} q=k-k',\ \ q_{2}=q-p_{1}-q_{1},\ \ q_{3}=q-p_{1}-p_{2}-q_{1},\ \ q_{4}=q-p_{1},\ \ q_{5}=q-q_{1},\ \ q_{6}=q-q_{2}\, , \label{1-6} \end{equation} where $q_{1}$ is a loop momentum. In the Regge kinematics we have: \[ \sqrt{s}=k_{+}\approx k'_{+}\gg p_{1+}\sim q_{+}\gg p_{2+} \sim q_{4+}\gg l'_{+}\] \[ \sqrt{s}=l_{-}\approx l'_{-}\gg p_{2-}\gg p_{1-} \sim -q_{4-}\gg k'_{-} \] \begin{equation} \label{9} q_{5+}\ll \sqrt{s}, q_{6+}\ll\sqrt{s}, q_{1-}\ll\sqrt{s}, q_{2-}\ll\sqrt{s} \, . \end{equation} We recall that in the Regge kinematics non-zero transverse momentum components are assumed to be much smaller than the longitudinal ones. \section{Diagram of Fig. \ref{Fig1}} \begin{figure}[h] \leavevmode \centering{\epsfysize=0.3\textheight\epsfbox{1.eps}} \caption{Diagram of type 1} \label{Fig1} \end{figure} The only diagram of type 1 is shown in Fig. \ref{Fig1}. We denote the wave functions of the projectile and target as $\bar{u}(k')$, $u(k)$ and $\bar{w}(l')$, $w(l)$ respectively. The factors describing the projectile and target quarks are, respectively, \begin{equation} \label{11} ig\bar{u}(k')\frac{\gamma_{+}}{2}t^{a}u(k) \end{equation} and \begin{equation} \label{12} (ig)^2\bar{w}(l')t^{b_{3}}\frac{\gamma_{-}}{2} \frac{i(\hat{l}+\hat{q_{1}})}{(l+q_{1})^2+i0} \frac{\gamma_{-}}{2}t^{b_{1}}w(l)\, . \end{equation} We can use (\ref{9}) to simplify \begin{equation} \label{13} \gamma_{-}(\hat{l}+\hat{q_{1}})\gamma_{-}= \frac{1}{2}\gamma_{-}(l_{-}+q_{1-})\gamma_{+}\gamma_{-}= 2\gamma_{-}(\sqrt{s}+q_{1-})\approx 2\sqrt{s}\gamma_{-}\, . \end{equation} We shall be interested in the diffraction process when the target does not change its colour, so that the $t$-channel coupled to the target is colourless. We therefore introduce a projector onto the colourless target state \begin{equation} P_{b_1b_3|b'_1b'_3}=\frac{\delta_{b_1b_3}\delta_{b'_1b'_3}}{N_c^2-1}\, . \label{19} \end{equation} Acting on the target quark colours it gives a factor \begin{equation} \frac{1}{N_c^2-1}\delta_{b'_1b'_3}t^{b'_3}t^{b'_1}=\frac{1}{2N_c}\, . \label{colour} \end{equation} The diagram of Fig. \ref{Fig1} is built from vertexes studied previously.
The lower vertex is the Reggeon $\to$ 2 Reggeons + Particle (``effective'') vertex obtained in~\cite{bravyaz}: \begin{equation} \begin{split} \frac{ig^{2}f^{b_{3}cd}f^{b_{4}b_{1}d}}{(q_{4}-q_{1})^{2}} \left[ q_{4+}(q_{4}\varepsilon^{*}_{2})_{\bot}+ \frac{q_{4}^{2}}{q_{1-}} \left(-((q_{4}-q_{1})\varepsilon^{*}_{2})_{\bot} +\frac{(q_{4}-q_{1})^{2}}{p_{2\bot}^{2}}(p_{2}\varepsilon^{*}_{2})_{\bot} \right)\right] \\ +\frac{ig^{2}f^{b_{1}cd}f^{b_{4}b_{3}d}}{(q_{4}-q_{3})^{2}} \left[ q_{4+}(q_{4}\varepsilon^{*}_{2})_{\bot}+\frac{q_{4}^{2}}{q_{3-}} \left(-((q_{4}-q_{3})\varepsilon^{*}_{2})_{\bot} +\frac{(q_{4}-q_{3})^{2}}{p_{2\bot}^{2}}(p_{2}\varepsilon^{*}_{2})_{\bot} \right)\right]\, . \label{15} \end{split} \end{equation} The third term in each square bracket is the contribution of the so-called ``induced'' vertex, which comes from the expansion of the $P$-exponential in the effective action~\cite{lipatov}. The upper vertex is the well-known Reggeon $\to$ Reggeon $+$ particle (``Lipatov'') vertex, which we write as: \begin{equation} \label{16} igf^{ab_{4}e}q_{\bot}^{2}\Big(L_{\perp}(q,p_1)\varepsilon^{*}_{1}\Big)\, , \end{equation} where we define the transverse vector \begin{equation} L_\nu(q,p)=\frac{q_{\bot\nu}}{q^2_{\bot}}-\frac{p_{\bot\nu}}{p^2_{\bot}}\, . \label{16a} \end{equation} The effective vertex consists of two parts proportional to $f^{b_3cd}f^{b_4b_1d}$ and $f^{b_{1}cd}f^{b_{4}b_{3}d}$, each containing three terms. In both cases the convolution of the vertex colour factors with the target colour factor (\ref{colour}) gives the final overall colour factor \begin{equation} \label{20} \frac{1}{2N_c}f^{ab_{4}e}f^{b_{1}cd}f^{b_{4}b_{1}d}t^a= \frac{1}{2}f^{aec}t^{a}\, . \end{equation} To reduce the contribution of the diagram to the 2-dimensional form we have to integrate over the longitudinal variables in the loop. This integration does not involve the four reggeon propagators, as they are purely transverse, \begin{equation} \label{13a} D^{ab}(q)=-i\frac{2\delta_{ab}}{q^{2}_{\bot}} \end{equation} and thus contribute a totally transverse factor \begin{equation} \label{14} \frac{16}{q^2_{\bot}q^2_{4\bot}q^2_{1\bot}q^2_{3\bot}}\, . \end{equation} The effective vertex generates three kinds of terms with longitudinal components, proportional to \begin{equation} \label{21} q_{4+}(q_{4}\varepsilon^{*}_{2})_{\bot}\, , \end{equation} \begin{equation} \label{22} \frac{q_{4}^{2}}{q_{i-}}(-((q_{4}-q_{i}) \varepsilon^{*}_{2})_{\bot}),\ \ i=1,3 \end{equation} and \begin{equation} \label{24} \frac{q_{4}^{2}}{q_{i-}}\frac{(q_{4}-q_{i})^{2}} {p_{2\bot}^{2}}(p_{2}\varepsilon^{*}_{2})_{\bot},\ \ i=1,3\, . \end{equation} Combined with the denominator from the quark propagator, the first two kinds of terms lead to longitudinal integrals of two forms \begin{equation} J_1(k_1,k_2)= \frac{1}{2i} \int\frac{dq_{1-}}{2\pi}\int\frac{dq_{1+}}{2\pi} \frac{1}{(k_1^2+i0)(k_2^2+i0)} \label{form1} \end{equation} and \begin{equation} J_2(k,k_1,k_2)= \frac{1}{2i} \int\frac{dq_{1-}}{2\pi}\int\frac{dq_{1+}}{2\pi}\frac{1}{k_-} \frac{1}{(k_1^2+i0)(k_2^2+i0)}\ , \label{form2} \end{equation} where $k$, $k_1$ and $k_2$ are some linear functions of the integration momentum $q_1$. The first terms, proportional to (\ref{21}), lead to the integrals \begin{equation} \label{26} I_1=J_1(q_4-q_1,l+q_1) \end{equation} and \begin{equation} \label{27} I_2=J_1(q_4-q_3,l+q_1)\, .
\end{equation} The second terms (\ref{22}) combined with the target quark denominator lead to the integrals \begin{equation} \label{28} I_3=J_2(q_1,q_4-q_1,l+q_1) \end{equation} and \begin{equation} \label{29} I_4=J_2(q_3, q_4-q_3,l+q_1)\, . \end{equation} The third terms (\ref{24}) combined with the target quark propagator give the integrals \begin{equation} \frac{1}{2i} \int\frac{dq_{1-}}{2\pi} \int\frac{dq_{1+}}{2\pi} \frac{1}{q_{1-}}\frac{1}{(l+q_1)^2 +i0} \label{26a} \end{equation} and \begin{equation} \frac{1}{2i} \int\frac{dq_{1-}}{2\pi}\int\frac{dq_{1+}}{2\pi} \frac{1}{q_{3-}}\frac{1}{(l+q_1)^2 +i0}\ . \label{26b} \end{equation} In these formulas $q_3=q-p_1-p_2-q_1$. The first four integrals (\ref{26})--(\ref{29}) are calculated in the Appendix. The last integrals (\ref{26a}), (\ref{26b}) are formally divergent. The same integrals were also found in the simpler case of single-gluon production in~\cite{bravyaz}. There it was noted that if a prescription is imposed to calculate the integral in the principal value sense, then the integral vanishes and the result turns out to be in agreement with the standard Lipatov-Bartels approach in terms of ordinary Feynman diagrams. Relying on this conclusion, we impose the same rule of calculation in this study as well and consequently neglect these integrals altogether. The result found in the Appendix for the sum of the first two integrals is \begin{equation} \label{30} I_1+I_2= \frac{i}{4 q_{4+}\sqrt{s}}\, . \end{equation} Attaching the remaining factor from the effective vertex, we find the total contribution to the diagram from the terms (\ref{21}) as \begin{equation} \label{33} \frac{i(q_{4}\varepsilon^{*}_{2})_{\bot}}{4\sqrt{s}}\ . \end{equation} For the second terms, the factors (\ref{22}) are different for the two parts of the effective vertex. If we change the variable of the loop integration $q_{1}\to q_{3}=q-p_{1}-p_{2}-q_{1}$ only in the contribution of the second part, then the factors (\ref{22}) become equal and we need to calculate the sum of integrals $I_3$ and $I_4=J_2(q_1, q_4-q_1,l'-q_1)$. The sum of integrals is found to be \begin{equation} I_3+I_4=\frac{i}{4\sqrt{s}(q_{4}-q_{1})^2_{\bot}}\ . \label{33a} \end{equation} Combined with the remaining factor from the effective vertex, they give the contribution from the terms (\ref{22}) as \begin{equation} \label{34} -\frac{iq_{4\bot}^{2}}{4\sqrt{s}(q_{4}-q_{1})^2_{\bot}} ((q_{4}-q_{1})\varepsilon^{*}_{2})_{\bot}\ . \end{equation} Summing (\ref{33}) and (\ref{34}) we find the final result for the diagram of type 1 in the form \begin{equation} \label{35} g^{6}\cdot\bar{u}(k')\gamma_{+}\Delta_{1}u(k)\cdot \bar{w}(l')\gamma_{-}w(l)\, , \end{equation} where \begin{equation} \label{36} \Delta_{1}=\frac{i}{2}f^{aec}t^{a}\cdot \int \frac{d^{2}q_{1\bot}}{(2\pi)^2}\frac{1}{q^2_{1\bot}} \frac{1}{q^2_{3\bot}}\left(L_{\bot}(q,p_{1})\varepsilon^{*}_{1}\right) \left(B_{\bot}(q_{4},q_2)\varepsilon^{*}_{2}\right)\, . \end{equation} Here we denoted \begin{equation} \label{37} B_{\nu}(q,p)=\frac{q_{\bot\nu}}{q^{2}_{\bot}}- \frac{p_{\bot\nu}}{p^{2}_{\bot}} \end{equation} with \begin{equation} q_{2}=q_{4}-q_{1}=q-p_{1}-q_{1},\ \ q_{3}=q-p_{1}-p_{2}-q_{1}\, . \end{equation} This is the momentum part of the well-known Bartels vertex~\cite{bartels}; it has the same functional form as the momentum part (\ref{16a}) of the Lipatov vertex. Expression (\ref{36}) is exactly the one found for the configuration of the diagram in Fig. \ref{Fig1} in the Lipatov-Bartels formalism using the transverse space approach from the start.
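Since the nesting of momentum arguments in (\ref{36}) is easy to get wrong, it may help to spell out the transverse structure explicitly. The following Python sketch (ours, not part of the original calculation) encodes the momentum parts (\ref{16a}) and (\ref{37}) with Euclidean transverse two-vectors; in the text $q_\bot^2$ carries the opposite sign, which affects only overall factors, not the structure:
\begin{verbatim}
import numpy as np

def L(q, p):
    # Momentum part of the Lipatov vertex, Eq. (16a):
    # L(q,p) = q/q^2 - p/p^2 for transverse two-vectors q, p.
    return q / np.dot(q, q) - p / np.dot(p, p)

# Eq. (37): the momentum part B of the Bartels vertex has the same
# functional form, only evaluated at shifted arguments.
B = L

def delta1_integrand(q1, q, p1, p2, e1, e2):
    # Transverse part of the integrand in Eq. (36):
    # (1/q1^2)(1/q3^2) (L(q,p1).e1) (B(q4,q2).e2),
    # with q4 = q - p1, q2 = q4 - q1 and q3 = q - p1 - p2 - q1.
    q4 = q - p1
    q2 = q4 - q1
    q3 = q - p1 - p2 - q1
    return (np.dot(L(q, p1), e1) * np.dot(B(q4, q2), e2)
            / (np.dot(q1, q1) * np.dot(q3, q3)))
\end{verbatim}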
\section{Diagrams of Fig. \ref{Fig2}} \begin{figure}[t] \leavevmode \centering{\epsfysize=0.25\textheight\epsfbox{2.eps}} \caption{Diagrams of type 2} \label{Fig2} \end{figure} The two diagrams of type 2 are shown in Fig. \ref{Fig2}. The softer gluon is now emitted from inside the loop. The structure of the diagrams is similar to the previous case, except that the effective vertex is now at a larger rapidity than the Lipatov vertex. The target factor for the diagram Fig. \ref{Fig2}.1 is \begin{equation} \label{39} \bar{w}(l')t^{b_{3}}\frac{ig\gamma_{-}} {2}\frac{i(\hat{l}+\hat{q_{1}})}{(l+q_{1})^2+i0} \frac{ig\gamma_{-}}{2}t^{b_{1}}w(l) \end{equation} and for the diagram Fig. \ref{Fig2}.2 is \begin{equation} \label{39a} \bar{w}(l')t^{b_{1}}\frac{ig\gamma_{-}} {2}\frac{i(\hat{l'}-\hat{q_{1}})}{(l'-q_{1})^2+i0} \frac{ig\gamma_{-}}{2}t^{b_{3}}w(l)\, . \end{equation} Similarly to (\ref{13}): \begin{equation} \label{40} \gamma_{-}(\hat{l'}-\hat{q_{1}})\gamma_{-} \approx 2\sqrt{s}\gamma_{-}\ . \end{equation} The reggeon propagators give a factor \begin{equation} \label{41} \frac{16}{q^2_{\bot}q^2_{1\bot}q^2_{2\bot}q^2_{3\bot}}\ . \end{equation} For the first diagram Fig. \ref{Fig2}.1 the effective vertex is \begin{eqnarray} \frac{ig^{2}f^{b_{1}ed}f^{ab_{2}d}}{(q-q_{2})^{2}} \left[ q_{+}(q\varepsilon^{*}_{1})_{\bot}+\frac{q^{2}}{q_{2-}} \left(-((q-q_{2})\varepsilon^{*}_{1})_{\bot} +\frac{(q-q_{2})^{2}}{p_{1\bot}^{2}}(p_{1}\varepsilon^{*}_{1})_{\bot}\right) \right] \nonumber \\ + \frac{ig^{2}f^{b_{2}ed}f^{ab_{1}d}}{(q-q_{1})^{2}} \left[ q_{+}(q\varepsilon^{*}_{1})_{\bot}+\frac{q^{2}}{q_{1-}} \left(-((q-q_{1})\varepsilon^{*}_{1})_{\bot} +\frac{(q-q_{1})^{2}}{p_{1\bot}^{2}}(p_{1}\varepsilon^{*}_{1})_{\bot}\right) \right]\ . \label{42} \end{eqnarray} and the Lipatov vertex is \begin{equation} \label{43} igf^{b_{2}b_{3}c}q_{2\bot}^{2} \left(L_{\bot}(q_2,p_2)\varepsilon^{*}_{2}\right)= igf^{b_{2}b_{3}c}q_{2\bot}^{2} \left( \frac{(q_{2}\varepsilon^{*}_{2})_{\bot}}{q^{2}_{2\bot}} -\frac{(p_{2}\varepsilon^{*}_{2})_{\bot}}{p^{2}_{2\bot}} \right)\ . \end{equation} For the second diagram Fig. \ref{Fig2}.2 the effective vertex is the same (\ref{42}), since it is invariant under interchange of the two lower reggeons, and the Lipatov vertex is again (\ref{43}). The projection of the reggeons coupled to the target onto the colorless state supplies the same factor (\ref{19}) as for the diagram of Fig.~\ref{Fig1}. Its convolution with the other color factors for both diagrams of Fig.~\ref{Fig2}, however, gives different results for the two parts of the effective vertex. For the first part we get \begin{equation} \label{47} \frac{1}{2N_{c}}f^{b_{1}ed}f^{ab_{2}d}f^{b_{2}b_{1}c}t^a =-\frac{1}{4}f^{aec}t^{a} \end{equation} and for the second part we get the opposite sign \begin{equation} \label{48} \frac{1}{2N_{c}}f^{b_{2}ed}f^{ab_{1}d}f^{b_{2}b_{1}c}t^a =\frac{1}{4}f^{aec}t^{a}\, . \end{equation} Further calculations are quite similar to those for the diagram in Fig.~\ref{Fig1}, except that now we have to consider the two parts of the effective vertex separately. In the following we choose the integration variable to be $q_{1}$. Consider the diagram Fig. \ref{Fig2}.1. The first terms in the two parts of the effective vertex lead, respectively, to the integrals \begin{equation} I_5=J_1(q-q_2,l+q_1)=0 \end{equation} and \begin{equation} I_6=J_1(q-q_1,l+q_1)=\frac{i}{4 q_{+}\sqrt{s}}\, . \end{equation} Notice that $I_5$ enters with the color factor (\ref{47}) and $I_6$ with the color factor (\ref{48}).
For the second diagram Fig. \ref{Fig2}.2 the integrals for the first terms of the effective vertex are \begin{equation} I_7=J_1(q-q_2,l'-q_1)=\frac{i}{4 q_{+}\sqrt{s}} \end{equation} and \begin{equation} I_8=J_1(q-q_1,l'-q_1)=0 \end{equation} and the color factors are (\ref{47}) and (\ref{48}) respectively. Since the contributions of the two non-zero integrals enter with opposite signs, in the sum of the two diagrams in Fig. \ref{Fig2} we get zero. The last terms in the effective vertex again formally diverge. Using our principal-value prescription, we set them to zero. So in the end only the second terms in the effective vertex give a non-zero contribution. For the first diagram in Fig. \ref{Fig2} they lead to the longitudinal integrals $ I_9=J_2(q_2,q-q_2,l+q_1) $ and $ I_{10}=J_2(q_1,q-q_1,l+q_1). $ For the second diagram in Fig. \ref{Fig2} one has to change $l+q_1\to l'-q_1$: $ I_{11}=J_2(q_2,q-q_2,l'-q_1) $ and $ I_{12}=J_2(q_1,q-q_1,l'-q_1). $ These integrals are similar to those we calculated for the diagram in Fig. \ref{Fig1}. The color factor (\ref{47}) accompanies $I_9$ and $I_{11}$. Summing them, we obtain \begin{equation} -\frac{1}{4}f^{aec}t^{a}(I_9+I_{11})= -\frac{1}{4}f^{aec}t^{a}\frac{i}{4 (q-q_{2})^{2}_{\bot}}\, . \label{48d} \end{equation} Similarly, the second part gives \begin{equation} \frac{1}{4}f^{aec}t^{a}(I_{10}+I_{12})= \frac{1}{4}f^{aec}t^{a}\frac{i}{4 (q-q_{1})^{2}_{\bot}}\, . \label{48e} \end{equation} Taking into account that the vertex (\ref{43}) is the same for both diagrams, we obtain for the sum of the diagrams of Fig.~\ref{Fig2}: \begin{equation} \label{56} g^{6}\cdot\bar{u}(k')\gamma_{+}\Delta_{2}u(k) \cdot\bar{w}(l')\gamma_{-}w(l)\, , \end{equation} where \begin{eqnarray} \label{57} \Delta_{2}=\frac{i}{4}f^{aec}t^{a}\cdot\int \frac{d^{2}q_{1\bot}}{(2\pi)^2}\frac{1}{q^2_{1\bot}} \frac{1}{q^2_{3\bot}} \left[ (B_{\bot}(q,q_{5})\varepsilon^{*}_{1}) -(B_{\bot}(q,q_{6})\varepsilon^{*}_{1}) \right] \left(L_{\bot}(q_{2},p_{2})\varepsilon^{*}_{2}\right)\, . \end{eqnarray} The definition of $B_{\nu}$ was given in (\ref{37}). This result also agrees with the Lipatov-Bartels formalism. \section{Diagrams of Fig. \ref{Fig3}} \begin{figure}[t] \leavevmode \centering{\epsfysize=0.8\textheight\epsfbox{3.eps}} \caption{Diagrams of type 3} \label{Fig3} \end{figure} The diagrams in Fig.~\ref{Fig3} divide into two parts: those with emission of the two gluons from the same reggeon (diagrams 1, 2, 5 and 6) and those with emission from different reggeons (diagrams 3, 4, 7 and 8). These two parts have different structures of the Lipatov vertexes. Consider diagram 1. The factors coupled to the target and projectile quarks are: \begin{equation} \label{58} \bar{w}(l')t^{a_{1}}\frac{ig\gamma_{-}} {2}\frac{i(\hat{l'}-\hat{q_{1}})}{(l'-q_{1})^2+i0} \frac{ig\gamma_{-}}{2}t^{b_{3}}w(l) \end{equation} and \begin{equation} \label{59} \bar{u}(k')t^{a_{1}}\frac{ig\gamma_{+}} {2}\frac{i(\hat{k'}+\hat{q_{1}})}{(k'+q_{1})^2+i0} \frac{ig\gamma_{+}}{2}t^{a_{2}}u(k)\, . \end{equation} As before, we simplify \begin{equation} \label{60} \gamma_{-}(\hat{l'}-\hat{q_{1}})\gamma_{-}\approx 2\sqrt{s}\gamma_{-}\ , \end{equation} \begin{equation} \label{61} \gamma_{+}(\hat{k'}+\hat{q_{1}})\gamma_{+}\approx 2\sqrt{s}\gamma_{+}\ . \end{equation} The reggeon propagators are: \begin{equation} \label{62} \frac{16}{q^2_{1\bot} q^2_{2\bot} q^2_{3\bot} q^2_{5\bot}}\, .
\end{equation} The two Lipatov vertexes are: \begin{equation} \label{63} igf^{a_{2}b_{2}e}q_{5\bot}^{2} \left( \frac{(q_{5}\varepsilon^{*}_{1})_{\bot}}{q^{2}_{5\bot}} -\frac{(p_{1}\varepsilon^{*}_{1})_{\bot}}{p^{2}_{1\bot}} \right)\, , \end{equation} \begin{equation} \label{64} igf^{b_{2}b_{3}c}q_{2\bot}^{2} \left( \frac{(q_{2}\varepsilon^{*}_{2})_{\bot}}{q^{2}_{2\bot}} -\frac{(p_{2}\varepsilon^{*}_{2})_{\bot}}{p^{2}_{2\bot}} \right)\, . \end{equation} After the projection of the reggeons coupled to the target onto the colorless state we obtain the following color structure: \begin{equation} \label{65} \frac{1}{2N_{c}}f^{a_{2}b_{2}e}f^{b_{2}a_{1}c}t^{a_{1}}t^{a_{2}}\ . \end{equation} Diagram 5 differs from this one only in the target quark propagator, in which $l'-q_1\to l+q_1$. Diagrams 2 and 6 have a different color structure \begin{equation} \label{66} \frac{1}{2N_{c}}f^{a_{2}b_{2}e}f^{b_{2}a_{1}c}t^{a_{2}}t^{a_{1}}\ . \end{equation} It is convenient to combine the momentum parts of these four diagrams. In order to do so, we split $t^{a_{1}}t^{a_{2}}$ into symmetric and antisymmetric parts: \begin{equation} \label{68} t^{a_{1}}t^{a_{2}}=\frac{1}{2} \{t^{a_1},t^{a_2}\}+\frac{1}{2}[t^{a_{1}},t^{a_{2}}] \end{equation} to obtain the colour factors \begin{equation} \label{69} \frac{1}{4N_{c}}f^{a_{2}b_{2}e}f^{b_{2}a_{1}c}[t^{a_{1}},t^{a_{2}}] =\frac{i}{8}f^{aec}t^{a} \end{equation} \begin{equation} \label{70} \frac{1}{4N_{c}}f^{a_{2}b_{2}e}f^{b_{2}a_{1}c}\{t^{a_{1}},t^{a_{2}}\} =-\frac{1}{4N_c}\delta^{ce}-\frac{1}{8}d^{aec}t^{a}\ . \end{equation} Thus for the antisymmetric part we use (\ref{69}), with an extra minus sign for diagrams 2 and 6. For the symmetric part we use (\ref{70}) for all four diagrams. The longitudinal integrals for diagrams 1, 2, 5 and 6 are, respectively, \begin{equation} I_{13}=J_1(k'+q_{1},l'-q_1)=\frac{i}{4s}\, , \end{equation} \begin{equation} I_{14}=J_1(k-q_{1},l+q_1)=\frac{i}{4s}\, , \end{equation} \begin{equation} I_{15}=J_1(k'+q_{1},l+q_1)=0\, , \end{equation} \begin{equation} I_{16}=J_1(k-q_{1},l'-q_1)=0\, . \end{equation} As we observe, the antisymmetric part completely vanishes. For the symmetric part of the sum of diagrams 1, 2, 5 and 6 we obtain \begin{equation} \label{75} 2i\left(-\frac{\delta^{ce}}{4N_{c}}-\frac{d^{aec}}{8}t^{a}\right)\cdot\int \frac{d^{2}q_{1\bot}}{(2\pi)^2}\frac{1}{q^2_{1\bot}} \frac{1}{q^2_{3\bot}} \left(L_{\bot}(q_{5},p_{1})\varepsilon^{*}_{1}\right) \left(L_{\bot}(q_{2},p_{2})\varepsilon^{*}_{2}\right)\, . \end{equation} The calculation of diagrams 3, 4, 7 and 8 is completely similar. The total result for all diagrams in Fig. \ref{Fig3} is \begin{equation} \label{76} g^{6}\cdot\bar{u}(k')\gamma_{+}\Delta_{3}u(k)\cdot \bar{w}(l')\gamma_{-}w(l)\, , \end{equation} where \begin{eqnarray} \label{77} \Delta_{3}=2i\left(-\frac{1}{4N_{c}}\delta^{ce}-\frac{1}{8}d^{aec}t^{a}\right) \cdot \int\frac{d^{2}q_{1\bot}}{(2\pi)^2}\frac{1}{q^2_{1\bot}} \frac{1}{q^2_{3\bot}} \left(L_{\bot}(q_{2},p_{2})\varepsilon^{*}_{2}\right) \nonumber \\ \left[ (L_{\bot}(q_{5},p_{1})\varepsilon^{*}_{1}) -(L_{\bot}(q_{6},p_{1})\varepsilon^{*}_{1}) \right]\, . \end{eqnarray} This expression is exactly the one obtained in the standard Lipatov-Bartels transverse space approach.
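The colour identities (\ref{69}) and (\ref{70}) can also be checked mechanically. Below is a minimal numerical sketch for the SU(2) reduction, where $f^{abc}=\epsilon^{abc}$, $d^{abc}=0$ and $N_c=2$ (the general-$N_c$ case works the same way, only with the SU($N_c$) structure constants; variable names are ours):
\begin{verbatim}
import numpy as np
from itertools import product

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
t = [m / 2 for m in sig]           # t^a = sigma^a / 2
Nc = 2

def eps(i, j, k):                  # Levi-Civita symbol, indices 0,1,2
    return (i - j) * (j - k) * (k - i) / 2

idx = list(product(range(3), repeat=3))
for e, c in product(range(3), repeat=2):
    # Eq. (69): antisymmetric part.
    lhs = sum(eps(a2, b2, e) * eps(b2, a1, c)
              * (t[a1] @ t[a2] - t[a2] @ t[a1])
              for a1, a2, b2 in idx) / (4 * Nc)
    rhs = sum(1j / 8 * eps(a, e, c) * t[a] for a in range(3))
    assert np.allclose(lhs, rhs)
    # Eq. (70): symmetric part; d^{aec} = 0 for SU(2).
    lhs = sum(eps(a2, b2, e) * eps(b2, a1, c)
              * (t[a1] @ t[a2] + t[a2] @ t[a1])
              for a1, a2, b2 in idx) / (4 * Nc)
    rhs = -np.eye(2) * (e == c) / (4 * Nc)
    assert np.allclose(lhs, rhs)
print("Eqs. (69) and (70) hold for SU(2).")
\end{verbatim}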
\section{Conclusions} Using the Lipatov effective field theory, we have generalized the results of~\cite{bravyaz} to the case when in the diffractive process two real soft gluons are emitted with a large distance between their rapidities. The Reggeon $\to$ 2 Reggeons+Particle vertex involved in the process was taken from~\cite{bravyaz}. The general structure of the amplitude we find corresponds to what has been known from the direct calculation of standard Feynman diagrams. To check the full correspondence we performed the longitudinal integrations. The difficulties encountered are the same as with single gluon emission. They require the imposition of a certain rule, which reduces to taking certain integrals in the principal value sense. With this rule obeyed, the expression found for the production amplitude completely coincides with the one obtained by using the Lipatov and Bartels vertexes in the transverse space from the start. It remains to be seen, however, if this result stays true when the target changes its colour. Such a process is an important part of the inclusive soft gluon production, which is now under careful study in view of the contradiction between the results found in the Lipatov-Bartels and dipole pictures, mentioned in the introduction. We leave this problem for future studies. \section{Acknowledgements} This work has been partially supported by grants RNP 2.1.1/1575 of the Education and Science Ministry of Russia and RFFI 09-02-01327a. \section{Appendix. Calculation of longitudinal integrals} The typical longitudinal integral of the form (\ref{form1}) is \begin{equation} \label{79} I_1=J_1(q_4-q_1,l+q_1)= \frac{1}{2i}\int\frac{dq_{1-}}{2\pi}\int\frac{dq_{1+}}{2\pi} \frac{1}{[(q_{4}-q_{1})^2+i0]}\frac{1}{[(l+q_{1})^{2}+i0]}\ . \end{equation} The standard procedure for calculating such integrals is to use the fact that the longitudinal components of the reggeon momentum $q_{1\pm}$ can be neglected as compared to the large longitudinal components of the particles to which the reggeon is coupled, that is, $q_{1+}$ is to be neglected as compared to $q_{4+}$ and $q_{1-}$ as compared to $l_-$. Should we follow this procedure, the integral $I_1$ would factorize into two independent integrals over $q_{1+}$ and $q_{1-}$, but both of them would be divergent at large $q_{1\pm}$. Below we demonstrate that this procedure can still be applied, not to separate integrals like (\ref{79}), but to the sum of the integrals coming from the direct and crossed terms in our expression, suitably transformed to achieve convergence. To be able to calculate separate integrals of our type we resort to a slightly different procedure, in which the condition that $q_{1\pm}$ are small is imposed not from the start but after integration over one of the longitudinal momenta. Of course, our procedure is fully equivalent to the standard one applied to convergent integrals and gives identical results. As a function of $q_{1+}$ the integrand has two poles, \begin{equation} q_{1+}=q_{4+}-\frac{(q_{4}-q_{1})^{2}_{\bot}+i0}{q_{1-}} \label{pole1} \end{equation} and \begin{equation} q_{1+}=-\frac{(q_{1}+l)^{2}_{\bot}+i0}{q_{1-}+l_-}\ . \label{pole2} \end{equation} A non-zero result is obtained only if the two poles in $q_{1+}$ are on opposite sides of the real axis. This determines the limits of the integration over $q_{1-}$. In the Regge kinematics in any case $q_{1-}\ll l_-$, so that the limits are \begin{equation} \label{82} -l_{-}<q_{1-}<0\, .
\end{equation} Thus, taking the residue at (\ref{pole1}), we get an integral over $q_{1-}$: \begin{equation} \label{83} \frac{1}{4\pi}\int_{-l_{-}}^{0}dq_{1-}\frac{1}{D_1}\ , \end{equation} where \begin{equation} D_1=q_{1-}^{2}q_{4+}+q_{1-}(l_{-}q_{4+}- (q_{4}-q_{1})^{2}_{\bot}+q_{1\perp}^{2})-l_{-}(q_{4}-q_{1})^{2}_{\perp}-i0\, . \label{denom} \end{equation} The integral over $q_{1-}$ could be directly calculated as it stands. However, such a calculation would be incorrect, since it does not take into account the kinematical conditions which are to be fulfilled for the propagating reggeons. In fact, we have to require that both longitudinal components of the reggeon momenta are small compared with the transverse components. Otherwise the longitudinal momenta have to be kept in the reggeon propagator and, if large, will correspond to kinematics quite different from the Regge one. So we have to restrict the integration in (\ref{79}) to the region \begin{equation} |q_{1+}q_{1-}|\ll|q_{1\perp}^2|\, . \label{cond} \end{equation} In our case from (\ref{pole1}) we have \begin{equation} q_{1+}q_{1-}=q_{4+}q_{1-}-(q_4-q_1)_{\perp}^2\, , \end{equation} so that condition (\ref{cond}) transforms into \begin{equation} |q_{4+}q_{1-}-(q_4-q_1)_{\perp}^2|\ll|q_{1\perp}^2| \ . \label{cond1} \end{equation} This implies that the integration in $q_{1-}$ is to be restricted to a narrow interval around the point where the left-hand side of (\ref{cond1}) vanishes. With small values of $q_{1-}$ of this order, we have to drop all terms except those that contain the large factor $l_-$, so that we get \begin{equation} D_1(q_{1-})=l_-\Big(q_{4+}q_{1-}-(q_4-q_1)_{\perp}^2 -i0 \Big)\ , \end{equation} and according to (\ref{cond1}) the integration should run over a small interval around the point where $D_1(q_{1-})=0$. This means that in effect \begin{equation} \frac{1}{D_1(q_{1-})}= \frac{i\pi}{l_-}\delta\Big(q_{4+}q_{1-}-(q_4-q_1)_{\perp}^2\Big)\ . \end{equation} Notice that $q_{4+}\approx p_{2+}>0$. Since $(q_4-q_1)_{\perp}^2<0$ and in the integration region also $q_{4+}q_{1-}<0$, the $\delta$-function gives a nonzero contribution and we obtain \begin{equation} I_1=\frac{i}{4 l_- q_{4+}}\ . \label{i1} \end{equation} Now take the integral (\ref{27}). First changing the integration variable to $q_3$ and then renaming it $q_1$, we find \begin{equation} \label{80} I_2=J_1(q_4-q_1,l'-q_1)= \frac{1}{2i}\int\frac{dq_{1-}}{2\pi}\int\frac{dq_{1+}}{2\pi} \frac{1}{[(q_{4}-q_{1})^2+i0]}\frac{1}{[(l'-q_{1})^{2}+i0]}\ . \end{equation} The two poles in $q_{1+}$ are now the old one (\ref{pole1}) and \begin{equation} q_{1+}=l'_{+}+\frac{(l'-q_1)_{\perp}^2+i0}{l'_- -q_{1-}} \ . \end{equation} Now the integration region in $q_{1-}$ is \begin{equation} 0<q_{1-}<l'_- \ . \end{equation} We get the integral \begin{equation} \label{83a} I_2=-\frac{1}{4\pi}\int_{0}^{l'_-}dq_{1-}\frac{1}{D_2}\, , \end{equation} where \begin{equation} D_2(q_{1-})=q_{1-}^{2}(q_{4+}-l'_+)-q_{1-}(l'_{-}q_{4+}+ (q_{4}-q_{1})^{2}_{\bot}-(l'-q_{1})^{2}_{\bot} +{l'}_{\perp}^2)+l'_{-}(q_{4}-q_{1})^{2}_{\bot}+i0 \end{equation} and the minus sign is due to the fact that the pole (\ref{pole1}) now lies in the upper half-plane.
According to our estimates, in the assumed kinematical conditions only the term in $D_2$ in which $q_{1-}$ is multiplied by $l'_- q_{4+}$ is to be kept, so that \begin{equation} D_2(q_{1-})=l'_-\Big((q_4-q_1)^2-q_{1-}q_{4+} +i0\Big) \label{d2} \end{equation} and according to (\ref{cond1}) we have to integrate over $q_{1-}$ in the small interval around the point where $D_2(q_{1-})=0$. But now in (\ref{d2}) the right-hand side never vanishes, since in the brackets both terms are negative in the integration region. So we find \begin{equation} I_2=0 \label{i2} \end{equation} and the result (\ref{30}) follows. The integrals of the second form (\ref{form2}), $I_3$ and $I_4$, contain an extra factor $q_{1-}$ in the denominator as compared to $I_1$ and $I_2$. On the formal level this leads to a divergence of these two integrals at the point $q_{1-}=0$. However, in the sum $I_3 +I_4$ this divergence cancels. Indeed, using our approximate expressions for $D_1$ and $D_2$ valid in the region (\ref{cond1}), we find \[ I_3+I_4= \frac{1}{4\pi}\int_{0}^{l_{-}}\frac{dq_{1-}}{q_{1-}} \Big(-\frac{1}{D_1(-q_{1-})}-\frac{1}{D_2(q_{1-})}\Big) \]\begin{equation}= \frac{1}{4\pi l_-}\int_{0}^{l_{-}}\frac{dq_{1-}}{q_{1-}} \Big( \frac{1}{q_{4+}q_{1-}+(q_4-q_1)_{\perp}^2 +i0}- \frac{1}{(q_4-q_1)_{\perp}^2-q_{1-}q_{4+} +i0}\Big)\ . \label{i34} \end{equation} Obviously the integrand is not singular at $q_{1-}=0$. We have to integrate this expression in the small interval around the points where $D_1=0$ or $D_2=0$. However, as we have seen, the denominator $D_2$ never vanishes. So in (\ref{i34}) we can drop the second term and in the first term change \[ \frac{1}{q_{4+}q_{1-}+(q_4-q_1)_{\perp}^2 +i0}\to -i\pi\delta\Big(q_{4+}q_{1-}+(q_4-q_1)_{\perp}^2\Big)\ , \] which gives \begin{equation} I_3+I_4=\frac{i}{4l_- (q_4-q_1)_{\perp}^2} \label{i34a} \end{equation} that is Eq. (\ref{33a}). The rest of the longitudinal integrals can be calculated in a similar manner. Now we are going to demonstrate that one can also calculate our integrals in the standard manner, factorizing them into two independent ones over $q_{1\pm}$. Take the integral $I_1$. As mentioned, one cannot neglect $q_{1+}$ in the first denominator and $q_{1-}$ in the second without losing convergence. To preserve convergence, we consider the sum of the integrals (\ref{26}) and (\ref{27}): \begin{equation} I_1+I_2=\frac{1}{2i}\int\frac{dq_{1-}}{2\pi}\int\frac{dq_{1+}}{2\pi} \Big\{\frac{1}{(q_{4}-q_{1})^2+i0}+\frac{1}{(q_4-q_3)^2+i0}\Big\}\frac{1}{(l+q_{1})^{2}+i0} \ . \label{i121} \end{equation} Here $q_3=q_4-p_2-q_1$. One observes that convergence in $q_{1-}$ is improved. In order to achieve the same with respect to $q_{1+}$, we first pass to integration over $q_3$ with $q_1=q_4-p_2-q_3$ and then rename $q_3\to q_1$ to obtain \begin{equation} I_1+I_2=\frac{1}{2i}\int\frac{dq_{1-}}{2\pi}\int\frac{dq_{1+}}{2\pi} \Big\{\frac{1}{(q_{4}-q_{1})^2+i0}+\frac{1}{(q_4-q_3)^2+i0}\Big\}\frac{1}{(l+q_{3})^{2}+i0} \ . \label{i122} \end{equation} Taking half the sum of (\ref{i121}) and (\ref{i122}), we finally find \[ I_1+I_2=\frac{1}{4i}\int\frac{dq_{1-}}{2\pi}\int\frac{dq_{1+}}{2\pi} \Big\{\frac{1}{(q_{4}-q_{1})^2+i0}+\frac{1}{(q_4-q_3)^2+i0}\Big\} \]\begin{equation} \times\Big\{\frac{1}{(l+q_{1})^{2}+i0} +\frac{1}{(l+q_{3})^{2}+i0}\Big\}\ . \label{i122a} \end{equation} Now both factors have enough convergence to put $q_{1+}=0$ in the first one and $q_{1-}=0$ in the second. The integral then factorizes into two:
\begin{equation} I_1+I_2=\frac{1}{4i}I_+I_- \label{i123} \end{equation} where \[ I_-=\int\frac{dq_{1-}}{2\pi}\Big\{\frac{1}{q_{4+}(q_{4-}-q_{1-})+(q_4-q_1)_\perp^2+i0}+ \frac{1}{q_{4+}(p_{1-}+q_{1-})+(q_4-q_3)_\perp^2+i0}\Big\} \]\begin{equation} =-i\frac{1}{q_{4+}} \label{i12-} \end{equation} and \begin{equation} I_+=\int\frac{dq_{1+}}{2\pi}\Big\{\frac{1}{l_-q_{1+}+q_{1\perp}^2+i0}+ \frac{1}{l_-(q_{4+}-p_{2+}-q_{1+})+q_{3\perp}^2+i0}\Big\}=-i\frac{1}{l_-}\ . \label{i12+} \end{equation} Obviously the result (\ref{i123}) is identical to the sum of (\ref{i1}) and (\ref{i2}) calculated previously by a different method. Integrals with $1/q_{1-}$ in the denominator can also be calculated by the standard method, provided one eliminates the singularity at $q_{1-}=0$. In fact, the sum of (\ref{28}) and (\ref{29}) can be rewritten as \begin{equation} I_3+I_4=\frac{1}{2i}\int\frac{dq_{1-}}{2\pi}\int\frac{dq_{1+}}{2\pi} \frac{1}{q_{1-}}\frac{1}{(q_{4}-q_{1})^2+i0}\Big\{\frac{1}{(l+q_{1})^{2}+i0} +\frac{1}{(l+q_{3})^{2}+i0}\Big\} \ , \label{i341} \end{equation} where, as before, $q_3=q_4-p_2-q_1$. Now we can safely put $q_{1+}=0$ in the first factor and $q_{1-}=0$ in the brackets without losing convergence. The integral again factorizes into two: \begin{equation} I_3+I_4=\frac{1}{2i}I_+I_{1-} \label{i342} \end{equation} where $I_+$ is the same as before, given by (\ref{i12+}), and \begin{equation} I_{1-}=\int \frac{dq_{1-}}{2\pi}\frac{1}{q_{1-}}\ \frac{1}{q_{4+}(q_{4-}-q_{1-})+(q_4-q_1)_\perp^2+i0}\ . \end{equation} Here we can safely neglect the term $q_{4+}q_{4-}$ in the denominator, since this product is small compared to the squares of the transverse momenta. The singularity at $q_{1-}=0$ then becomes spurious. Indeed, changing $q_{1-}\to -q_{1-}$ and taking half the sum, we get \begin{equation} I_{1-}=\frac{1}{2}\int \frac{dq_{1-}}{2\pi}\frac{1}{q_{1-}}\Big\{\frac{1}{-q_{4+}q_{1-}+(q_4-q_1)_\perp^2+i0}- \frac{1}{q_{4+}q_{1-}+(q_4-q_1)_\perp^2+i0}\Big\}\ . \label{i34-} \end{equation} The bracket vanishes at $q_{1-}=0$, so that there is no singularity at this point. Taking the residue in the upper half-plane, we find \begin{equation} I_{1-}=\frac{-i}{2(q_4-q_1)_\perp^2}\ , \end{equation} so that (\ref{i342}) again coincides with (\ref{i34a}) calculated in our previous manner.
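As a final cross-check on the prescriptions used above, the factorized integral (\ref{i12+}) can also be tested numerically by keeping the $+i0$ terms finite. Below is a minimal Python sketch (all parameter values are illustrative only; recall that the transverse squares are negative in our conventions):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

lm, Q = 1.0, 0.3            # l_- and q_{4+} - p_{2+} (illustrative)
q1t2, q3t2 = -1.0, -2.0     # q_{1 perp}^2 and q_{3 perp}^2
eps, R = 1e-3, 1e4          # finite +i0 and a symmetric cutoff

def f(q):
    # Each term alone diverges like log R; the sum converges.
    return (1.0 / (lm * q + q1t2 + 1j * eps)
            + 1.0 / (lm * (Q - q) + q3t2 + 1j * eps)) / (2 * np.pi)

poles = [-q1t2 / lm, Q + q3t2 / lm]   # near-singular points
re = quad(lambda q: f(q).real, -R, R, points=poles, limit=200)[0]
im = quad(lambda q: f(q).imag, -R, R, points=poles, limit=200)[0]
print(re + 1j * im)   # ~ -1j, in agreement with I_+ = -i/l_-
\end{verbatim}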
\def\AdS#1{AdS$_{#1}$} \def\({\left(} \def\){\right)} \def\[{\left[} \def\]{\right]} \title{Time-independent wormholes} \author{Zicao Fu, Donald Marolf, and Eric Mefford} \affiliation{Department of Physics, University of California, Santa Barbara, CA 93106, USA} \emailAdd{[email protected]} \emailAdd{[email protected]} \emailAdd{[email protected]} \abstract{We study two-sided static wormholes with an exact Killing symmetry that translates both mouths of the wormhole toward the future. This differs from the familiar Kruskal wormhole, whose time translation is future-directed only in one asymptotic region and is instead past-directed in the other. Our spacetimes are solutions to Einstein-Hilbert gravity sourced by scalar domain walls. Explicit examples are found in the thin wall approximation. More generally, we show that such spacetimes can arise in the presence of scalar fields with potentials that are $C^1$ but not $C^2$ and find examples numerically. However, solutions with an exact such Killing symmetry are forbidden when the scalar potential is smooth. Finally, we consider the mutual information of boundary regions associated with such wormholes in AdS/CFT. Although the interiors of our solutions are unstable, we find that even mutual informations between opposite boundaries are already thermalized at any finite $t$ in the sense that they agree with the $t\rightarrow \infty$ limit of results from the familiar AdS-Kruskal solution.} \begin{document} \maketitle \section{Introduction} \label{sec:Intro} The familiar Kruskal wormhole has an exact Killing symmetry often called a time-translation. But as illustrated in figure \ref{fig:Conformal} (left) for the asymptotically AdS case, this symmetry displaces one asymptotic region forward in time while shifting the other asymptotic region toward the past. As a result, non-local quantities that compare the two boundaries do in fact change under the asymptotic symmetry that shifts both boundaries toward the future. Such quantities are commonly studied in AdS/CFT and include both boundary-to-boundary two-point functions and mutual informations between the two boundaries. The resulting time-evolutions were described in e.g. \cite{Fidkowski:2003nf} and \cite{Hartman:2013qma}. Below, we explore whether Einstein-Hilbert gravity coupled to familiar matter sources might allow wormholes with a Killing symmetry that translates {\it both} ends in the same direction. Since topological censorship \cite{Friedman:1993ty,Galloway:1999bp} requires wormholes to have horizons, and since the Killing symmetry must resemble a flat-space boost transformation near the horizon bifurcation surface, such spacetimes should have conformal diagrams resembling figure \ref{fig:Conformal} (right), or more generally should have Killing horizons with an even number of bifurcation surfaces in the $t=0$ hypersurface. \begin{figure}[t] \centerline{ \includegraphics[width=0.257\textwidth]{penrosenowormhole.pdf}\includegraphics[width=0.5\textwidth]{penrosewormhole2.pdf} } \caption{Sketches of conformal diagrams for the familiar two-sided Kruskal-AdS wormhole (left) and what we call time-independent wormholes (right). In the Kruskal case the Killing symmetry moves one boundary forward in time while shifting the other backward.
But on the right the Killing symmetry acts as a future-directed time-translation on both boundaries. On the left, the Killing horizon has only a single bifurcation surface, while the Killing horizon of the right figure has two (red dots). Both spacetimes have $Z_2$ reflection symmetries about the dotted vertical lines. On the left, this reflection changes the sign of the time translation Killing field, while it leaves the Killing field invariant on the right.} \label{fig:Conformal} \end{figure} For simplicity, we study wormholes with spherical symmetry. Birkhoff's theorem then forbids vacuum solutions of this form in Einstein-Hilbert gravity. Physically, the issue is that the interior of the wormhole tends to collapse, destroying the presumed-static region shown in the middle of the wormhole at right in figure \ref{fig:Conformal}. We solve this problem by coupling gravity to a scalar field. The repulsive gravity generated by either positive-tension scalar domain walls or positive scalar potentials (which effectively act as local positive cosmological constants) allows the desired static region to exist. Section \ref{sec:Thin} constructs and studies such asymptotically-AdS solutions in the thin wall approximation. The resulting spacetimes are similar in many ways to the single-asymptotic-region black holes with de Sitter interiors found in \cite{Fidkowski:2003nf}. Interestingly, the holographic mutual information between the two boundaries always vanishes when considering regions smaller than half of either boundary. We then consider spacetimes sourced by smooth scalar fields in section \ref{sec:Thick}. We show that time-independent wormhole solutions exist when the scalar potential $V(\phi)$ is chosen to behave like $\phi^2 (\log \phi)^3$ near a local minimum; i.e., while the solutions are smooth, the scalar potentials are only $C^1$ as functions of $\phi$. Examples are constructed numerically. Again, the holographic mutual information between the two boundaries always vanishes when considering regions smaller than half of either boundary. That singular potentials are required is shown in appendix \ref{sec:NoGo}; scalar fields with smooth potentials cannot support our time-independent wormholes. We close with some final discussion in section \ref{sec:Disc}. In particular, we comment on the status of such solutions with respect to gauge/gravity duality and also with respect to recent discussions of the possible role of complexity in gauge/gravity duality \cite{Stanford:2014jda,Roberts:2014isa,Brown:2015bva,Brown:2015lvg}. \section{Thin Wall Solutions} \label{sec:Thin} We begin in section \ref{sec:cut} by constructing thin wall versions of the time-independent wormholes shown at right in figure \ref{fig:Conformal}. We then briefly analyze the holographic mutual information defined by these wormholes in section \ref{sec:MIthin} and note that in a certain sense they are already thermalized at any finite $t$. \subsection{A cut and paste construction} \label{sec:cut} It is straightforward to assemble the desired time-independent wormholes by cutting two copies of Kruskal-AdS (fig. \ref{fig:Conformal} left) along a timelike surface defined by an orbit of the symmetry group (a constant $r$ surface) and then sewing the two larger pieces together along a thin positive-tension domain wall. This domain wall then becomes the dotted line in the right diagram of figure \ref{fig:Conformal} and is left invariant under the reflection symmetry.
To proceed, recall the $D$-dimensional AdS-Schwarzschild metric \begin{equation} ds^2=-\left(1-\frac{\omega _DM}{r^{D-3}}+\frac{r^2}{\ell ^2}\right)dt^2+\frac{1}{1-\frac{\omega _DM}{r^{D-3}}+\frac{r^2}{\ell ^2}}dr^2+r^2d\Omega ^2, \end{equation} where $\omega _D=\frac{16\pi G_D}{\left(D-2\right)S_{D-2}}$ and $S_{D-2}=\frac{2\pi ^\frac{D-1}{2}}{\Gamma \left(\frac{D-1}{2}\right)}$. A timelike constant $r$ surface has unit normal $n^a=\sqrt{1-\frac{\omega _DM}{r^{D-3}}+\frac{r^2}{\ell ^2}}\left(\frac{\partial }{\partial r}\right)^a$. Its extrinsic curvature $K_{ab}=\frac{1}{2}\pounds _nh_{ab}$ is thus \begin{equation} \label{eq:ExtrCurv} K_{ab} dx^a dx^b=\frac{1}{2}\sqrt{1-\frac{\omega _DM}{r^{D-3}}+\frac{r^2}{\ell ^2}}\left[-\left((D-3)\frac{\omega _DM}{r^{D-2}}+\frac{2r}{\ell ^2}\right)dt^2 +2rd\Omega^2\right]. \end{equation} We wish to consider relativistic domain walls with surface stress tensor $\hat T_{ab}=-\sigma h_{ab}$ in terms of the (constant) tension $\sigma$ and the induced metric $h_{ab}$. Here we use the conventions of \cite{Wald:1984rg} in which $h_{ab}$ is a degenerate tensor in the full spacetime such that $h^a_{~b}$ is the projector onto the vector space tangent to the wall. The full stress-energy tensor $T_{ab}$ is proportional to $\hat T_{ab}$, but contains an extra delta-function localizing the stress-energy on the wall. Given the $Z_2$ symmetry of figure \ref{fig:Conformal} (right), the Israel junction conditions (see e.g. \cite{Misner:1974qy}) require $\hat T_{ab} \propto K_{ab}$, and thus $\frac{g_{tt}}{g_{\Omega \Omega }}=\frac{K_{tt}}{K_{\Omega \Omega }}$. This relation is satisfied if and only if \begin{equation} \label{eq:sew} r_\text{wall}^{D-3}=\frac{D-1}{2}\omega _DM. \end{equation} The junction condition then gives $K_{ab} = \frac{4\pi G_D\sigma }{D-2}h_{ab}$ so that \begin{equation} \sigma = \frac{D-2}{4\pi G_Dr_\text{wall}}\sqrt{1 - \frac{\omega_D M}{r_\text{wall}^{D-3}} + \frac{r_\text{wall}^2}{\ell ^2}} \end{equation} is positive as desired. This completes our construction of thin-wall solutions corresponding to figure \ref{fig:Conformal} (right). However, we note in passing that a similar analysis indicates that our solutions are unstable. This is to be expected, as the interior of our wormhole remains static only due to a delicate balance between the gravitational attraction of the black hole and the gravitational repulsion of the domain wall. Indeed, maintaining the $Z_2$ reflection symmetry and spherical symmetry but allowing the wall to move with time on a surface $r = R(\tau)$, the Israel junction conditions imply an equation of motion \begin{equation} \label{eq:tIJC} 2\sqrt{f(R)+\dot R^2}=\frac{8\pi G_D\sigma }{D-2}R, \end{equation} for $f(r)=1-\frac{\omega _DM}{r^{D-3}}+\frac{r^2}{\ell ^2}$ and $\dot R$ the derivative of $R$ with respect to proper time $\tau$ along the shell. Here the first-order nature of the equation is a consequence of restricting to solutions with $Z_2$ symmetry. Squaring \eqref{eq:tIJC} and linearizing it around the static solution \eqref{eq:sew}, we obtain \begin{equation} \left(\frac{d}{d\tau }\delta R\right)^2=\left(D-3\right)4^\frac{1}{D-3}\left(\left(D-1\right)\omega _DM\right)^\frac{-2}{D-3}\delta R^2+O(\delta R^3), \end{equation} so the static solution is unstable on the timescale \begin{equation} \tau =\sqrt{\frac{1}{D-3}4^\frac{-1}{D-3}\left(\left(D-1\right)\omega _DM\right)^{\frac{2}{D-3}}}.
\end{equation} \subsection{Mutual Information and Thermalization} \label{sec:MIthin} As noted in the introduction, physical quantities defined by the geometry of our wormhole must be independent of time. This includes the (leading order) holographic mutual information defined by the Ryu-Takayanagi (RT) \cite{Ryu:2006bv,Ryu:2006ef} or the covariant Hubeny-Rangamani-Takayanagi (HRT) \cite{Hubeny:2007xt} prescriptions. While -- as will be discussed in section \ref{sec:Disc} -- the derivations of \cite{Lewkowycz:2013nqa} and \cite{Dong:2016hjy} need not apply to our spacetime, it is nevertheless of interest to investigate what these prescriptions would predict. In particular, we will see that -- despite the instability noted above -- in a sense these mutual informations (and indeed the entropies of all boundary regions) appear to already be thermalized at any finite $t$. We note that such leading-order holographic mutual informations are of more interest in our context than boundary-to-boundary correlators, as the latter depend on the choice of quantum state for light bulk fields as well as on the classical background geometry. Since we have not constructed our spacetimes as stationary points of a path integral, there is no preferred choice for this quantum state. And due to the large causal shadow between the two event horizons of our time-independent wormholes, we are free to choose the light bulk fields in the left asymptotic region to be completely uncorrelated with those in the right asymptotic region, so that all connected correlators vanish when evaluated with one argument on the right boundary and another on the left. Because the spacetime is not globally static, the RT prescription does not strictly apply. Nevertheless, in a spacetime with time-reversal symmetry, the maximin construction of \cite{Wall:2012uf} guarantees the HRT surface to be the minimal surface within the $t=0$ slice (i.e., within the hypersurface invariant under $t \rightarrow -t$), as one would expect from the RT prescription\footnote{\label{VH}Since this surface is minimal on the $t=0$ slice, its area can be no larger than that of the maximin surface. But the time-reversal symmetry means that this minimal surface is also an extremal surface in the full spacetime. It can therefore have area no smaller than the maximin surface, as the latter agrees with the area of the smallest extremal surface. We thank Veronika Hubeny for pointing this out to us.}. We wish to study surfaces anchored both to a region $A_R$ of the right boundary and also to a corresponding region $A_L$ of the left boundary, such that $A_R, A_L$ are interchanged by the $Z_2$ symmetry of reflection across the wall. In order to compute the entropy $S_{A_LA_R}$ of $A_L \cup A_R$, we must correctly identify the minimal surface. We first consider the case where $A_R$ and $A_L$ are each precisely half of the $t=0$ sphere at the AdS boundary (note that our solutions correspond to `global' Schwarzschild-AdS). Referring to $A_L, A_R$ as the `northern' hemispheres (whose boundaries are thus the equator of the sphere), it is then clear that the smallest connected surface anchored to both $A_L$ and $A_R$ is the surface defined by taking the equator of the sphere at each $r$. As shown below, it suffices for our purposes to compute the area of the portion of this surface inside our wormhole.
Noting that the radius $r_{0}$ of the event horizon is defined by \begin{equation} 1-\frac{\omega _DM}{r_{0}^{D-3}}+\frac{r_{0}^2}{\ell ^2} =0, \end{equation} and introducing $\tilde r = r/\ell$ and $\tilde r_{0} = r_{0}/\ell$, this area satisfies \begin{equation} \label{eq:extrathin} \frac{A_\text{connected, \ inside}}{A_\text{EH}}=\frac{2}{\sqrt{\pi }}\frac{\Gamma \left(\frac{D-1}{2}\right)}{\Gamma \left(\frac{D-2}{2}\right)}\int_{1}^{\left[\frac{D-1}{2}\left(1+\tilde r_{0}^2\right)\right]^{\frac{1}{D-3}}}\frac{\hat{r}^{D-3}}{\sqrt{1-\frac{1+\tilde r_{0}^2}{\hat{r}^{D-3}}+\left(\tilde r_{0} \hat{r}\right)^2}}d\hat{r}, \end{equation} where we have normalized the quantity by dividing by the area of either event horizon. The area of the full minimal connected surface is then $A_\text{connected} = A_\text{connected, \ inside} + A_\text{connected, \ Kruskal}$ where $A_\text{connected, \ Kruskal}$ is the area of the minimal connected surface in the AdS-Kruskal geometry of figure \ref{fig:Conformal} (left). \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{MutualInfo.pdf} \caption{The functions \eqref{eq:extrathin} for $D=4$ (red), $D=5$ (orange), $D=6$ (yellow), $D=7$ (green), $D=8$ (cyan), $D=9$ (blue), and $D=10$ (purple).} \label{fig: MutualInfo} \end{figure} For general $D$ the integral \eqref{eq:extrathin} can be performed numerically. But for $D=5$ it can be performed exactly to obtain \begin{equation} \label{eq:extrathin5d} \frac{A_{\text{connected, inside, }D=5}}{A_\text{EH}}=\frac{2{{\tilde r}_{0}}\sqrt{1+{\tilde r}_{0}^{2}}\left( 1+2{\tilde r}_{0}^{2} \right)-\ln \left( 1+2{{\tilde r}_{0}}\left( {{\tilde r}_{0}}+\sqrt{1+{\tilde r}_{0}^{2}} \right) \right)}{\pi {\tilde r}_{0}^{3}}. \end{equation} As shown in figure \ref{fig: MutualInfo}, \eqref{eq:extrathin} and \eqref{eq:extrathin5d} are increasing functions of $\tilde r_0$, which are larger than $1.6$ for all $\tilde r_0$ (at least for $4 \le D \le 10$). In particular, $\frac{A_\text{connected, \ inside}}{A_\text{EH}}>1$. However, as usual we must also consider the smallest disconnected surface anchored on $A_L, A_R$ and compare its area to that of the connected surface. Let us first study a single connected component, say the one anchored to $A_L$. One example of a surface satisfying these boundary conditions is the surface $\Sigma_0$ shown in figure \ref{fig:figure1} which consists of the northern hemisphere of the bifurcation surface for the left event horizon together with the equators of all $t=0$ spheres in the left asymptotic region. In other words, outside the horizon it coincides with the connected surface studied above anchored to both $A_L$ and $A_R$. So the area of the left component of the actual minimal surface must be less than that of $\Sigma_0$. \begin{figure}[t] \centering \includegraphics[scale=.4]{hemisphereminimal.pdf} \includegraphics[scale=.4]{hemisphere2.pdf} \caption{On the left, $\Sigma_1$ is the minimal surface for a hemisphere of the boundary with a black hole (red, dotted) in the bulk. The surface $\Sigma_0$ on the right necessarily has larger area than $\Sigma_1$. This surface contains a piece (the straight segments along the equator) that is part of the connected surface passing through the wormhole; the other piece lies on the black hole horizon.} \label{fig:figure1} \end{figure} Adding together the two components, the area of the minimal disconnected surface must satisfy \begin{equation} A_\text{disconnected} \le A_\text{connected, \ Kruskal} + A_\text{EH}.
\end{equation} The observation that \eqref{eq:extrathin} and \eqref{eq:extrathin5d} are larger than $1$ then implies $A_\text{disconnected} < A_\text{connected}$. The HRT surface is thus disconnected and, due to e.g. the barrier theorems of \cite{Engelhardt:2013tra}, lies entirely outside the horizons. The mutual information $I(A_L:A_R)$ is then just what would be obtained from surfaces outside the horizon of AdS-Kruskal (fig. \ref{fig:Conformal} left), and $I(A_L:A_R)$ vanishes. Furthermore, the positivity and monotonicity of HRT mutual information derived in \cite{Wall:2012uf} then imply vanishing mutual information $I(A_L:A_R)$ for any subsets $A_L, A_R$ of the northern hemispheres, whether or not such $A_L,A_R$ are related by the $Z_2$ symmetry. In fact, since $A_L \cup A_R$ is homologous to its complement, the same argument shows that the HRT surface for $S_{A_LA_R}$ is again disconnected (and lies entirely outside the horizon) whenever $A_L, A_R$ both {\it contain} the entire southern hemisphere. So here too $I(A_L:A_R)$ is what would be obtained from surfaces outside the horizon of AdS-Kruskal (fig. \ref{fig:Conformal} left), though due to the homology constraint $I(A_L:A_R)$ no longer vanishes. Equivalently \cite{Hartman:2013qma}, we may say in both cases that $I(A_L:A_R)$ for the time-independent wormhole agrees with that for the $t \rightarrow +\infty$ limit of AdS-Kruskal. Though there remain certain cases that we have not checked, it is thus natural to conjecture the same to be true of arbitrary $A_L, A_R$, and thus for the entropies of arbitrary boundary regions. But the $t\rightarrow +\infty$ limit of AdS-Kruskal is naturally interpreted as a thermalized state. So if our conjecture is true, then despite the instability found in section \ref{sec:cut}, as measured by such entropies we find that our time-independent wormhole is already thermalized at any finite time $t$. \section{Smooth Solutions} \label{sec:Thick} Having constructed time-independent wormholes using thin shells, it is natural to ask if similar solutions can be sourced by smooth scalar fields $\phi$. We shall now show that they can, but with an interesting twist. While the solutions are completely smooth, the scalar potential $V(\phi)$ is not. Indeed, near the AdS minimum $\phi_0$, our $V(\phi)$ will behave like $(\phi-\phi_0)^2 [\ln(\phi-\phi_0)]^3.$ We demonstrate the existence of such solutions analytically and construct a particular example numerically. Appendix \ref{sec:NoGo} then gives a general argument that spherically symmetric time-independent wormholes cannot be sourced by scalar fields with smooth potentials. Our smooth solution will bear a strong similarity to our domain wall solution, in that it will be precisely $D$-dimensional AdS-Schwarzschild outside the horizon and also in the region where the time-translation Killing field is spacelike. In those regions our scalar field will be constant and will sit at a minimum of its potential. The scalar will deviate from this minimum only in the central diamond of figure \ref{fig:Conformal} (right), which in section \ref{sec:Thin} contained the domain wall; we refer to this diamond as the wormhole below. Smoothness then requires that all derivatives of $\phi$ vanish at the boundaries of the wormhole. The wormhole should enjoy both spherical and time-translation symmetry.
As a result, any smooth metric in this region may be written \begin{equation} \label{eq:wmet} ds^2 = -f(r)dt^2 + \frac{dr^2}{f(r)} + S(r)^2d\Omega_{D-2}^2, \end{equation} where we require $f$ to vanish linearly at the wormhole boundaries to give a smooth bifurcate horizon. Imposing the $Z_2$ reflection symmetry of figure \ref{fig:Conformal} (right), we may set $r=0$ at the fixed points of this reflection. It then suffices to study the metric only on the right half of the spacetime. We take this to be $r > 0$, with the wormhole boundary at $r=r_h$. Note that \eqref{eq:wmet} and these choices still allow the freedom to perform a constant rescaling $(t,r,r_h,f) \rightarrow (\alpha t, r/\alpha , r_h/\alpha, f/\alpha^{2})$ without changing the geometry. For later reference, we note that AdS-Schwarzschild in these coordinates has \begin{equation} S_{AS}(r) = r,\quad f_{AS}(r) = \frac{r^2}{\ell^2} + 1 - \left(\frac{r_h^2}{\ell^2}+1\right)\left(\frac{r_h}{r}\right)^{D-3}, \label{eq:AdSSchwarz} \end{equation} with AdS boundary at $r\to \infty$. From now on, we set $\ell=1$ so that \begin{equation} \label{eq:AdSmin} V(\phi(r_h)) = \Lambda_{AdS} = -\frac{(D-1)(D-2)}{2}. \end{equation} In these coordinates, the equation of motion for a single minimally coupled scalar field reads \begin{equation} f\phi''+\left[(D-2)\frac{fS'}{S}+f'\right]\phi' = \frac{dV}{d \phi}, \label{eq:scalar} \end{equation} and the nontrivial $tt$, $rr$, and sphere-sphere components of the Einstein equation (with $8\pi G=1$) may be combined to write \begin{equation} \begin{split} (D-2)S''&=-S\phi'^2, \\ \left(\frac{S'}{S}\right)f' -\frac{D-3}{S^2}\left(1-f(S')^2\right) &= \frac{2}{D-2}T_r^r = \frac{1}{D-2}(f\phi'^2-2V(\phi)),\\ f'' +(D-3)\left(\left(\frac{S'}{S}\right)f'+\frac{1}{S^2}-\left(\frac{S'^2}{S^2}\right)f\right) &= \frac{2}{D-2}T_\Theta^\Theta =-\frac{1}{D-2}\left(2V(\phi)+f\phi'(r)^2\right). \label{eq:Einstein} \end{split} \end{equation} As usual, \eqref{eq:scalar} follows from \eqref{eq:Einstein} due to the Bianchi identity, so it suffices to consider \eqref{eq:Einstein}. Rather than choose a form for $V(\phi)$ and solve for the resulting $\phi(r)$, we find it convenient to proceed in analogy with section 4 of \cite{Herdeiro:2015waa} and to posit $\phi(r)$. We then take the middle equation from \eqref{eq:Einstein} as the definition of $V(\phi)$. The requirement that all derivatives of $\phi(r)$ vanish at $r_h$ motivates us to choose the form \begin{equation} \phi(r) = b\tanh\left(\frac{kr}{r_h^2-r^2}\right). \label{eq:scalarprofile} \end{equation} This leaves us with a pair of second order ODEs (the first and last of \eqref{eq:Einstein}) to solve for $f(r), S(r)$. The $Z_2$ reflection symmetry requires the boundary conditions \begin{equation} S'(0) = f'(0) = 0. \label{eq:midpoint} \end{equation} We also wish to impose two boundary conditions at $r_h$. The first of these is simply $f(r_h)=0$. Using \eqref{eq:AdSmin} and our definition of $V(\phi)$ (the middle equation in \eqref{eq:Einstein}) gives the second: \begin{equation} \label{eq:hBC2} \frac{df}{dr}|_{r=r_h} = \frac{1}{S'(r_h)S(r_h)}\left((D-1)\;S(r_h)^2 + (D-3)\right).
\end{equation} We note that \eqref{eq:hBC2} guarantees the surface gravity at the wormhole boundary to match that of AdS-Schwarzschild with horizon radius $S(r_h)$ if we rescale $(t,r,r_h,k,f) \rightarrow (\alpha t, r/\alpha , r_h/\alpha, \alpha k, f/\alpha^{2})$ to set $S'(r_h)=1.$ With this understanding, the redshift factor $f$, the sphere size $S$, and their first derivatives with respect to $r$ are then continuous at $r=r_h$. So long as $S(r_h) \neq 0$, the ODEs \eqref{eq:Einstein} then guarantee continuity of all derivatives and the geometry matches smoothly to AdS-Schwarzschild as desired. However, it will be convenient for our later numerics to first choose $b,k,r_h$ arbitrarily and only later to rescale in this manner. The following argument suffices to prove that the desired solutions exist. So long as $S> 0$, it is clear that our ODEs have no singular points. Furthermore, since we take $\phi(r)$ as given, the first ODE is a homogeneous equation for $S(r)$ alone. Using only $S'(0)=0$, it is then clear that the resulting one-parameter family of solutions for $S(r)$ will have $S > 0$ on $[0,r_h]$ so long as we choose $b$ sufficiently small for given $k, r_h$. For each such $S(r)$, the second equation is a regular linear second-order ODE for $f(r)$, so there is a unique solution $f(r)$ satisfying $f'(0)=0$ and $f(r_h)=0$. Taking the remaining free parameter to be $S(0)$, and noting that scaling $S(0) \rightarrow \beta S(0)$ induces the scalings $S(r), f(r), V(r) \rightarrow \beta S(r), \beta^{-2} f(r), \beta^{-2} V(r)$, we may then choose $\beta$ so as to both satisfy \eqref{eq:hBC2} and make $S(0)$ positive. Thus smooth time-independent wormholes of this form exist so long as $b$ is sufficiently small. Figure \ref{fig:solutions} displays numerical solutions for $f(r)$, $S(r)$ and $V(\phi)$ in $D=4$ with \begin{equation} b = 1, \quad k = 2.05768, \quad S(r_h)=1, \end{equation} where for numerical convenience we have chosen $r_h=1$. \begin{figure}[t] \centering \includegraphics[scale=.3]{scalar1.pdf} \includegraphics[scale=.3]{potential1.pdf} \centering \includegraphics[scale=.3]{f1.pdf} \includegraphics[scale=.3]{s1.pdf} \caption{Above we plot the numerical solutions to the Einstein-scalar system for the scalar field profile (\ref{eq:scalarprofile}) with $b = 1$, $k=2.05768$. Note that, having set $\ell=1$, the potential goes to $V=\Lambda_{AdS}$ at the horizons ($r=\pm 1$).} \label{fig:solutions} \end{figure} It now remains to discuss $V(\phi)$. Since $f, S$ are smooth, our definition of $V$ via \eqref{eq:Einstein} guarantees that $V$ is a smooth function of $r$. The ansatz \eqref{eq:scalarprofile} then implies that $V(\phi)$ is smooth for $\phi \in (-b,b)$. But the behavior at the minimum $\phi = b$ must be determined by expanding $S,f,\phi$ near $r=r_h$. To simplify this calculation we now set $r_h=1$ to find \begin{equation} \begin{split} \phi &\approx b(1-2e^{-k/(1-r)}),\\ \phi' &= b\left(\frac{kr}{1-r^2}\right)'\text{sech}^2\left(\frac{kr}{1-r^2}\right) \approx 2bk\frac{e^{-k/(1-r)}}{(1-r)^2},\\ \phi'' &= b\left[\left(\frac{kr}{1-r^2}\right)''-2\left[\left(\frac{kr}{1-r^2}\right)'\right]^2 \text{tanh}\left(\frac{kr}{1-r^2}\right)\right]\text{sech}^2\left(\frac{kr}{1-r^2}\right) \\ &\approx b\left[\frac{4k}{(1-r)^3} - \frac{2k^2}{(1-r)^4}\right]e^{-k/(1-r)}. \end{split} \end{equation} and \begin{equation} \begin{split} S^2f &\approx - f'(1)S(1)^2(1-r)+...\\ (S^2f)' &\approx f'(1)S(1)^2 +...
\end{split} \end{equation} Using \eqref{eq:scalar} then yields \begin{equation} \frac{dV}{d\phi} = f'(1)\log^{2}\left(\frac{b-\phi}{2b}\right)\left(\log\left(\frac{b-\phi}{2b}\right)-1\right)\frac{b-\phi}{k}+\mathcal{O}[(b-\phi)^2], \end{equation} or \begin{equation} \begin{aligned} V(\phi) = & \Lambda_{AdS} + \frac{f'(1)b}{4k}\left(\frac{b-\phi}{2b}\right)^2\left[-5+10\log\left(\frac{b-\phi}{2b}\right)\right. \\ & \left.-10\log^{2}\left(\frac{b-\phi}{2b}\right) + 4\log^{3}\left(\frac{b-\phi}{2b}\right)\right] +\mathcal{O}[(b-\phi)^3]. \label{eq:potential} \end{aligned} \end{equation} \begin{figure}[t] \centering \includegraphics[scale=.4]{numericalvsanalytical.pdf} \caption{A comparison of the numerical results (y-axis) for $\frac{dV}{d\phi}$ to analytic results (x-axis). We have plotted our result (blue) on a log-log plot against a line (red) with slope 1 to show agreement over 4 orders of magnitude. The values of $b$, $k$, and $r_h$ are the same as in fig. \ref{fig:solutions}.} \label{fig:nvsa} \end{figure} So our potential is not smooth at its minimum. Instead, $\frac{d^2V}{d\phi^2}$ has a logarithmic singularity, indicating that interactions remain important near the horizon. As shown in figure \ref{fig:nvsa}, this result is consistent with our numerics. We remark that the singularity in our potential is not just an artifact of our particular construction. Indeed, appendix \ref{sec:NoGo} demonstrates -- even when the requirement of a pure AdS-Schwarzschild exterior is dropped -- that time-independent spherically-symmetric wormholes cannot be sourced by scalar fields with smooth potentials. \subsection{HRT entropies} \label{sec:ThickHRT} Finally, we investigate the holographic HRT mutual information between the two boundaries of our smooth time-independent wormhole. Here we consider the particular numerical solution displayed in figure \ref{fig:solutions}. As in section \ref{sec:MIthin}, we begin by choosing $A_L,A_R$ to each be the northern hemisphere of the respective boundary at $t=0$. Repeating the steps described there, and since the solution is just AdS$_4$-Kruskal outside the horizon, we focus on the area $A_{\text{connected,\ inside}}$ of the surface defined by taking the equator of each sphere inside the horizon. Interestingly, as in the thin-wall case, we find $A_{\text{connected,\ inside}} > A_{EH}$ for all values of $k,b$ that we have explored -- and indeed even for other functional forms of $\phi(r)$ such as $\phi(r) = b\tanh\left(\frac{kr}{(r_h^2-r^2)^{c_2}}\right)^{c_1}$ with $c_1, c_2$ integer constants. So as in section \ref{sec:MIthin} we conjecture that for general $A_L, A_R$ the HRT entropy $S_{A_LA_R}$ agrees with the $t \rightarrow +\infty$ limit of AdS-Kruskal and that, despite a likely instability analogous to that found for the domain wall solutions, in this sense our time-independent wormholes are already thermalized at any finite $t$. Typical results for $A_{\text{connected,\ inside}}$, $A_{EH}$ are shown in figure \ref{fig:areas} for the profile \eqref{eq:scalarprofile}. One might expect that for large $k$ our smooth solutions approximate the thin-shell solutions of section \ref{sec:Thin}. At least so far as these areas are concerned, the plot indicates that the agreement is already quite good for any $b$ at $k \sim 1$. Indeed, different scalar profiles in this regime that lead to the same $A_{EH}$ also have nearly identical $A_{\text{connected,\ inside}}$.
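As an aside on the numerics, a minimal illustration of the first step of the construction above -- integrating the homogeneous equation $(D-2)S''=-S\phi'^2$ from \eqref{eq:Einstein} for the profile \eqref{eq:scalarprofile} -- can be written in a few lines of Python. This is only a sketch under illustrative parameter choices, not the code used to produce figures \ref{fig:solutions} and \ref{fig:areas}; the linear equation for $f(r)$ can be integrated analogously before imposing \eqref{eq:hBC2} via the rescalings discussed above.
\begin{verbatim}
# Sketch only: integrate the first Einstein equation, (D-2) S'' = -S phi'^2,
# for phi(r) = b tanh(k r / (r_h^2 - r^2)), with the Z_2 condition S'(0) = 0.
# All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

D, b, k, rh = 4, 1.0, 2.05768, 1.0
S0 = 1.0   # S(0); fixed a posteriori by the rescaling freedom

def phi_prime(r):
    u = k * r / (rh**2 - r**2)
    sech = 2.0 * np.exp(-u) / (1.0 + np.exp(-2.0 * u))  # overflow-safe for u >= 0
    return b * k * (rh**2 + r**2) / (rh**2 - r**2)**2 * sech**2

def rhs(r, y):
    S, dS = y
    return [dS, -S * phi_prime(r)**2 / (D - 2)]

# phi'(r) vanishes faster than any power as r -> r_h, so stopping just
# short of the wormhole boundary loses nothing.
sol = solve_ivp(rhs, [0.0, 0.999 * rh], [S0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1], sol.y[1, -1])   # S and S' at the wormhole boundary
\end{verbatim}
For sufficiently small $b$ such an integration confirms $S>0$ on $[0,r_h]$, in line with the existence argument given above.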
\begin{figure}[t] \centering \includegraphics[scale=.6]{minimalsurfaces.pdf} \caption{A comparison of the areas $A_{\text{connected, \ inside}}$ of minimal surfaces inside the wormhole to the area $A_{EH}$ of the corresponding black hole. Using the profile in (\ref{eq:scalarprofile}), each curve corresponds to a fixed value of $b$ ($b=.87$ (green), $b=1$ (red), $b=1.2$ (blue)) while $k$ is varied from .2 to 2.1. For each $b$ there are two branches of solutions which join around $k\sim .7$. We have also plotted the $D=4$ solution from fig. \ref{fig: MutualInfo} in brown, which is seen to coincide with the top branch of our solutions for each $b$; in particular, while the brown curve is hidden by the top branches of the colored curves across much of the figure, it remains visible at both the lower left and upper right ends. All solutions lie above the dashed line, which plots $A_{\text{connected, \ inside}} =A_{EH}$, so the minimal surfaces are disconnected for hemispheres on the boundary of AdS.} \label{fig:areas} \end{figure} \section{Discussion} \label{sec:Disc} We have constructed time-independent spherically symmetric AdS-wormholes sourced by both thin-shell domain walls and smooth scalar fields with potentials $V(\phi)$ that are $C^1$ but not $C^2$. The time-translation in such spacetimes translates both wormhole mouths forward in time, instead of shifting them in opposite directions as in familiar AdS-Kruskal black holes. Interestingly, the results of figure \ref{fig:areas} indicate that, at least for some purposes, the thin-shell solutions become good approximations to the smooth solutions when the parameter $k$ in \eqref{eq:scalarprofile} satisfies $k \gtrsim 1.$ As shown in appendix \ref{sec:NoGo}, the non-smooth potential $V(\phi)$ is critical to the construction, as there can be no precisely time-independent solutions of this kind when the scalar potential $V$ is a smooth function of the scalar field $\phi$. This feature may be related to the expectation that -- even when they exist -- the interiors of such wormholes will be unstable. The instability was identified explicitly in the thin-shell case. Nevertheless, as discussed in sections \ref{sec:MIthin} and \ref{sec:ThickHRT}, at least for a large set of boundary regions the HRT entropies are already thermalized at any finite $t$ without the above instability having been triggered. By this we mean that the result agrees with that obtained from familiar AdS-Kruskal in the limit $t\rightarrow \infty$. This was shown in particular for many cases where the boundary region contains pieces on both boundaries, so that the same result holds for cross-boundary mutual informations similar to those studied in \cite{Hartman:2013qma}. Indeed, we conjecture that it holds for all such entropies and mutual informations. Should one be able to find a stable version of our time-independent wormholes, a feature of this sort would be an interesting consistency check on whether dual gauge theory states thermalize in a universal way. Such computations raise the question of whether our wormholes can have gauge theory duals in some version of gauge/gravity duality. One question involves the dual description of the logarithm at the minimum of the potentials used in section \ref{sec:Thick}. But leaving this aside for now, we might ask if our wormholes define stationary points of Euclidean path integrals in analogy with \cite{Maldacena:2001kr}. At least in the thin-wall context, it is clear that the answer is negative.
Constructing a Euclidean thin-wall stationary point amounts to solving an ODE for the Euclidean motion of the wall within Euclidean AdS-Schwarzschild. Since at $t=0$ the wall sits at \eqref{eq:sew} with zero velocity, it must do so for all Euclidean time. But since shifting Euclidean time by half a period takes one to the opposite side of the Lorentzian horizon, this is incompatible with the requirement that the wall exist only inside the wormhole and not outside. It would be interesting to determine whether a similar argument applies to our smooth wormhole solutions built from non-smooth potentials. Finally, we briefly mention the recent discussions of the possible role of complexity in gauge/gravity duality in \cite{Stanford:2014jda,Roberts:2014isa,Brown:2015bva,Brown:2015lvg} and the conjectures that gauge-theory complexity is related either to the volume of maximal slices or the action of certain regions in the bulk geometry. In our case, even the renormalized volume of a maximal slice that extends from one boundary to the other is strictly infinite. Although the renormalized volume of the $t=0$ slice will be finite, for maximal slices there is no analogue of the argument in footnote \ref{VH}. Indeed, in our case it is clear that a surface of arbitrarily large renormalized volume can be obtained by following an orbit of the Killing field in the regions of figure \ref{fig:Conformal} (right) in which the time-translation Killing field is spacelike. In particular, the volume of such surfaces grows without bound as the surface nears the topmost point of the dotted line in figure \ref{fig:Conformal} (right). Similarly, the action of the spacetime region inside the wormhole (say, defined as in \cite{Lehner:2016vdi}) should diverge due to the required integral over time and the (non-compact) time-translation symmetry. Interestingly, assuming the wormhole to be unstable as in section \ref{sec:Thin} and choosing a perturbation that collapses the interior even at a very late time would result in finite actions and volumes of maximal surfaces at any given time $t$, though the resulting breaking of time-translation symmetry would also cause these quantities to grow with time. Indeed, at late times the growth in such quantities should be dominated by the region near the outermost horizon and so will proceed precisely as in AdS-Kruskal. In contrast, with an instability that causes the wormhole interior to expand, both the relevant actions and volumes of maximal slices will continue to diverge. It would be interesting to understand better the meaning of such divergences in the context of the conjectures of \cite{Stanford:2014jda,Roberts:2014isa,Brown:2015bva,Brown:2015lvg}. \section*{Acknowledgements} We thank Gary Horowitz, Henry Maxfield, Brian Swingle, and Ying Zhao for useful discussions. DM and ZF were supported in part by the Simons Foundation and by funds from the University of California. EM was supported in part by NSF grant PHY-1504541.
\section{Introduction} \label{sec:intro} The realisation that there may be more to the universe than meets the eye is one of the most profound developments in 20$^{\textrm{th}}$ Century astronomy and astrophysics. However, while there are now multiple lines of evidence that dark matter outweighs baryonic matter by a ratio of approximately 5:1 \cite{Ade:2015xua}, we have few clues regarding the physical nature of dark matter. Much theoretical and experimental effort has focused on WIMP models, motivated by their consistency with supersymmetric extensions to the Standard Model and their relatively simple dynamics. However, advanced direct-detection experiments are putting increasingly tight constraints on the WIMP parameter space \cite{Tan:2016zwf, Akerib:2016vxi} and $\Lambda$CDM cosmology with simple, pressureless, noninteracting dark matter (a class including simple WIMP scenarios) is potentially at odds with observations at small astrophysical scales \cite{Bull:2015stt}. The potential shortcomings of simple cold dark matter scenarios motivate investigations of more novel dark matter candidates. In particular, ultralight dark matter (ULDM), also known as fuzzy dark matter (FDM) or BEC dark matter, is an increasingly well-studied possibility; for a recent review of the potential advantages and characteristic attributes of this scenario see Ref.~\cite{Hui:2016ltb}. ULDM models are well motivated by fundamental theories possessing approximate shift symmetries such as the theory of the QCD axion \cite{Kim:2008hd, Marsh:2015xka}. Moreover, ULDM can naturally resolve the small-scale problems of $\Lambda$CDM as the Heisenberg uncertainty principle suppresses gravitational collapse on length scales shorter than the de Broglie wavelength of the ULDM particle. In this regime the mass of the ULDM particle becomes correlated with astrophysical observables; if it is on the order of $10^{-22}$~eV, structure is suppressed at kiloparsec scales and below \cite{Hu:2000ke}. Given the presence of a fundamental lengthscale, the behaviour of ULDM is more complex than that of simple dark matter scenarios whose cosmologically relevant interactions are purely gravitational. Physically, the effective short-scale pressure and condensate-like properties of ULDM create new dynamical possibilities for ULDM scenarios, such as purely pressure-supported soliton-like solutions \cite{Marsh:2015wka} and superposition or interference during interactions between condensate-like halos \cite{Schwabe:2016rze}. Consequently, modelling dark matter dynamics in ULDM scenarios is more challenging than in simple cold dark matter models, but is critical to understanding the physical consequences of ULDM models. In the non-relativistic regime, the dynamics of ULDM can be reduced to the Schr{\"o}dinger-Poisson system, where the squared magnitude of the complex variable $\psi$ gives the local number density of ULDM quanta, while the Poisson equation determines the local gravitational potential. Many approaches have been taken to this problem, including both modifications of existing cosmological simulation codes and the development of new codes specifically designed for ULDM systems. One widely used approach is the Madelung fluid formulation of the Schr{\"o}dinger-Poisson system \cite{Madelung1926}, which has a quantum pressure term that can be treated numerically in a variety of ways.
In Ref.~\cite{Zhang:2016uiy}, the cosmological code \textsc{gadget} \cite{Springel:2005mi} is modified to treat the quantum pressure as an effective particle-particle interaction, and the resulting code, \textsc{axion-gadget}, is publicly available \cite{axion-gadget}. Ref.~\cite{Nori:2018hud} modifies a non-public extension of \textsc{gadget}, \textsc{p-gadget3}, to treat the quantum pressure term via smoothed-particle hydrodynamics (SPH) routines. The SPH approach is also used in Ref.~\cite{Mocz:2015sda}, while a particle-mesh approach was implemented in \cite{Veltmaat:2016rxo}. N\textsc{yx} \cite{Almgren:2013sz} was modified in \cite{Schwabe:2016rze} to study merging ULDM solitonic cores, G\textsc{alacticus} \cite{Benson:2010kx} was modified in \cite{Du:2016zcv} to study the effects of tidal stripping and dynamical friction on ULDM halos, \textsc{arepo} \cite{Springel:2009aa} was modified in \cite{Mocz:2017wlg} to study the core-mass relationship and turbulence characteristics of ULDM halos, and \textsc{gamer} \cite{Schive:2009hw, gamer} was modified in \cite{Schive:2014dra} to perform a detailed study of structure formation in ULDM cosmologies. While a large number of public codes can solve conventional dark matter scenarios, \textsc{axion-gadget} is the only currently available solver for ULDM dynamics. This paper introduces \textsc{PyUltraLight}\xspace, a stand-alone Python-based pseudospectral Schr{\"o}dinger-Poisson solver, and demonstrates that it reproduces many of the key findings of more complicated cosmological simulation codes within a desktop computing environment. We anticipate that as a publicly available resource, \textsc{PyUltraLight}\xspace\ will serve as a valuable cross-check on more complex implementations, provide a basis for further development of such codes within the computational cosmology community, and facilitate explorations of ULDM dynamics. \textsc{PyUltraLight}\xspace is based on a symmetrised-split-step (leapfrog) solver for the time evolution, and uses a pseudospectral Fourier algorithm to solve the Poisson equation for the gravitational potential at each step.\footnote{A similar methodology was described in Ref.~\cite{Paredes:2015wga}; at the time of writing this code has not been released. Spectral methods are often used to solve the Poisson equation in large scale structure simulations, while the {\sc PSpectre} code \cite{Easther:2010qz} provides a pseudospectral solver for the evolution of the inflaton and fields coupled to it during parametric resonance and preheating after inflation \cite{Amin:2010dc,Amin:2011hj,Zhou:2013tsa}.} This algorithm has $2^{nd}$ order accurate time integration steps and sub-percent level energy conservation, while the wavefunction normalisation is conserved to machine precision. In a pseudospectral code, linear differential operators are computed by direct multiplication in the Fourier domain, while non-linear terms are evaluated in position space; consequently, \textsc{PyUltraLight}\xspace is free from noise associated with spatial derivatives computed via finite-differencing. There is a necessary computational cost associated with the Fourier and inverse Fourier transforms, but these transforms are optimised in \textsc{PyUltraLight}\xspace through the use of the pyFFTW pythonic wrapper around the C-based FFTW subroutine library \cite{pyfftw,fftw}. As the FFTW libraries offer full parallelisation, \textsc{PyUltraLight}\xspace is currently designed to take advantage of multiple cores on a user PC or shared-memory environment.
Full MPI compatibility has not yet been implemented, as we have not found a need to run simulations in a distributed-memory cluster environment; however, future releases may address this possibility. This paper is organised as follows. We first provide a short review of ULDM physics, including a derivation of the Schr{\"o}dinger-Poisson equations from the underlying scalar-field Lagrangian. We then describe their implementation in \textsc{PyUltraLight}\xspace, before moving on to describe testing and verification procedures applied to the code. We reproduce a selection of results from a variety of recent ULDM simulations and discuss the energy conservation and accuracy as a function of spatial resolution. \section{The Physics of ULDM}\label{sec:physics} \subsection{The Schr{\"o}dinger-Poisson System} The existence of an extremely light scalar field, minimally coupled to gravity, is the central premise on which ULDM models are predicated. Within the ULDM framework, it is proposed that this scalar field exists as a Bose-Einstein condensate, described by a single wavefunction which is governed by the coupled Schr{\"o}dinger-Poisson differential equations. We begin by deriving this system of equations as a non-relativistic weak-field limit of a more general theory. We start from the action functional for a scalar field, $\phi$, minimally coupled to gravity and in the absence of self-interactions, \begin{equation}\label{eq:action} S=\int \frac{d^4x}{\hbar}\sqrt{-g}\bigg\{-\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi-\frac{m^2}{2\hbar^2}\phi^2\bigg\}, \end{equation} where we have taken $c=1$ but retain factors of $\hbar$ at this stage; the sign of the kinetic term follows from the $(-,+,+,+)$ signature used below. Applying the variational principle to this action yields the Euler-Lagrange equation \begin{equation}\label{eq:e-l} \frac{1}{\sqrt{-g}}\partial_\mu\big[\sqrt{-g}g^{\mu\nu}\partial_\nu\phi\big]-\frac{m^2}{\hbar^2}\phi=0. \end{equation} We evaluate equation \ref{eq:e-l} using linear perturbation theory, adopting the perturbed FRW metric in the Newtonian gauge: \begin{equation}\label{eq:pFRW} ds^2=-\big(1+2\Phi(\vec{r},t)\big)dt^2+a(t)^2\big(1-2\Phi(\vec{r},t)\big)d\vec{r}^{\,2}. \end{equation} To linear order in $\Phi$ we obtain \begin{equation}\label{eq:linear} \Ddot{\phi}-\frac{\big(1+4\Phi\big)}{a(t)^2}\nabla^2\phi+3H\Dot{\phi}-4\dot{\Phi}\dot{\phi}+\big(1+2\Phi\big)\frac{m^2}{\hbar^2}\phi=0, \end{equation} where $H=\Dot{a}(t)/a(t)$. At late times in an expanding universe, $H\ll m/\hbar$ and it is sufficient to set $H=0$ and $a(t)=1$ in equation \ref{eq:linear}. This is a good approximation even at relatively high redshifts, including the epochs of early structure formation. Alternatively, if we consider a non-expanding universe, these equalities are true by definition. In either case, we can remove the third term in equation \ref{eq:linear}. The resulting equation can then be analysed using WKB methods in the non-relativistic regime.\footnote{For a detailed explication of the WKB approximation in the non-relativistic limit, see \cite{Young2015}.} This allows us to write an ansatz solution for the field $\phi$: \begin{equation}\label{eq:ansatz} \phi=\frac{\hbar}{\sqrt{2}m}\big(\psi e^{-imt/\hbar}+\psi^* e^{imt/\hbar}\big), \end{equation} where $\psi$ is assumed to be slowly varying in the sense that $m\vert\psi\vert\gg\hbar\vert\dot{\psi}\vert$, $m\vert\dot{\psi}\vert\gg\hbar\vert\Ddot{\psi}\vert$, $m\vert\psi\vert\gg\hbar\vert\nabla{\psi}\vert$, and $m\vert\nabla{\psi}\vert\gg\hbar\vert\nabla^2{\psi}\vert$.
Since $\Phi$ is sourced by $\psi$, we also have that $m\vert\Phi\vert\gg\hbar\vert\dot{\Phi}\vert$. Direct substitution of the ansatz solution into equation \ref{eq:linear}, discarding heavily suppressed terms, yields \begin{equation}\label{eq:schrodinger} i\hbar\Dot{\psi}=-\frac{\hbar^2}{2m}\nabla^2\psi+m\Phi\psi. \end{equation} We have thus shown that $\psi$ satisfies the Schr{\"o}dinger equation in this limit; it is interpreted as the macroscopic wavefunction of a Bose-Einstein condensate. It follows that the particle number density of the condensate is given by $\vert\psi\vert^2$, so its mass density is simply $m\vert\psi\vert^2$. The local gravitational potential thus satisfies the Poisson equation, \begin{equation}\label{eq:poisson} \nabla^2\Phi=4\pi G m \vert\psi\vert^2, \end{equation} where $G$ is Newton's gravitational constant. The coupled equations \ref{eq:schrodinger} and \ref{eq:poisson} together form the nonlinear Schr{\"o}dinger-Poisson system which describes the dynamics of ULDM in the non-relativistic regime. While equations \ref{eq:schrodinger} and \ref{eq:poisson} are valid for open boundary conditions, \textsc{PyUltraLight}\xspace is designed to solve the Schr{\"o}dinger-Poisson system under periodic boundary conditions. In this case the correct form of equation \ref{eq:poisson} is \begin{equation}\label{eq:poisson_periodic} \nabla^2\Phi=4\pi G m \big(\vert\psi\vert^2-\langle\vert\psi\vert^2\rangle\big), \end{equation} where we subtract the average density from the right hand side of the Poisson equation. The form of equation \ref{eq:poisson_periodic} is a consequence of Gauss' law and the fact that the surface integral of the gradient of the field around the perimeter of the simulation grid is identically zero when periodic boundary conditions are imposed \cite{Dabo:2008}. \subsection{Field Rescalings} It is helpful to recast the Schr{\"o}dinger-Poisson system (equations \ref{eq:schrodinger} and \ref{eq:poisson}) in terms of adimensional quantities. In keeping with Refs.~\cite{Schive:2014hza,Paredes:2015wga} we introduce length, time, and mass scales as follows: \begin{align} \CMcal{L}&=\left(\frac{8\pi\hbar^2}{3 m^2H_0^2\Omega_{m_0}}\right)^{\frac{1}{4}}\approx121\left(\frac{10^{-23}\operatorname{eV}}{m}\right)^{\frac{1}{2}}\operatorname{kpc},\label{eq:length}\\ \CMcal{T}&=\left(\frac{8\pi}{3 H_0^2\Omega_{m_0}}\right)^{\frac{1}{2}}\approx75.5 \operatorname{Gyr},\label{eq:time}\\ \CMcal{M}&=\frac{1}{G}\left(\frac{8\pi}{3 H_0^2\Omega_{m_0}}\right)^{-\frac{1}{4}}\left(\frac{\hbar}{m}\right)^{\frac{3}{2}}\approx 7\times 10^7\left(\frac{10^{-23}\operatorname{eV}}{m}\right)^{\frac{3}{2}}\operatorname{M}_{\odot},\label{eq:mass} \end{align} where $m$ is the mass of the ultralight scalar field, $H_0$ is the present-day Hubble parameter, $G$ is Newton's gravitational constant and $\Omega_{m_0}$ is the present-day matter fraction of the energy density of the universe. We recast equations \ref{eq:schrodinger} and \ref{eq:poisson} in terms of the dimensionless quantities \begin{equation}\label{eq:dimensionless} t'=\frac{t}{\CMcal{T}},\quad \vec{x}^{\,'}=\frac{\vec{x}}{\CMcal{L}},\quad \Phi'=\frac{m\CMcal{T}}{\hbar}\Phi,\quad \psi'=\CMcal{T}\sqrt{mG}\psi.
\end{equation} Dropping the primes for notational convenience, we see that the coupled differential equations of the Schr{\"o}dinger-Poisson system under periodic boundary conditions reduce to \begin{align} i\Dot{\psi}(\vec{x},t)&=-\frac{1}{2}\nabla^2\psi(\vec{x},t)+\Phi(\vec{x},t)\psi(\vec{x},t),\label{eq:s-adim}\\ \nabla^2\Phi(\vec{x},t)&=4\pi\big(\vert\psi(\vec{x},t)\vert^2-\langle\vert\psi(\vec{x},t)\vert^2\rangle\big),\label{eq:p-adim} \end{align} where it is understood that all quantities involved are dimensionless. We can recover dimensionful quantities via the ``dictionary'' provided by equations \ref{eq:length} to \ref{eq:mass}. For example, the integrated mass of the system, $M_{tot}$, is given by \begin{equation}\label{eq:integrated-mass} M_{tot}=\CMcal{M}\int d^3x\vert\psi\vert^2. \end{equation} Likewise, the mass density at any point is given by \begin{equation}\label{eq:density} \rho=\CMcal{M}\CMcal{L}^{-3}\vert\psi\vert^2. \end{equation} By dimensional analysis, we can easily restore dimensionful units to any of the quantities calculated by the code. In particular, in the following sections it is to be understood that \begin{equation} E=\CMcal{M}\CMcal{L}^2\CMcal{T}^{-2}\ E_{code}, \quad v=\CMcal{L}\CMcal{T}^{-1}\ v_{code}, \end{equation} where $E$ and $v$ represent energy and velocity, respectively. \textsc{PyUltraLight}\xspace\ works internally with these dimensionless quantities but can receive initial conditions and generate output in physical units. Henceforth, we will often refer to $\vert\psi\vert^2$ as the density, where it is understood that this is in fact a dimensionless quantity related to the physical mass density via the constant of proportionality given by equation \ref{eq:density}. \section{Algorithm and Implementation}\label{sec:implementation} In this section we discuss the methodology used to calculate the dynamics of the adimensional Schr{\"o}dinger-Poisson system (equations \ref{eq:s-adim} and \ref{eq:p-adim}) given user-specified initial conditions. We introduce the symmetrised split-step Fourier method, and schematically describe how the system is evolved at each timestep. \subsection{Dynamical Evolution}\label{sec:dynamics} Dynamical evolution within \textsc{PyUltraLight}\xspace\ progresses via a symmetrised split-step Fourier process on an $N\times N\times N$ grid with periodic spatial boundary conditions. To understand this method, first consider the exact expression for the unitary time evolution of the wavefunction according to equation \ref{eq:s-adim}, namely \begin{equation}\label{eq:exact-time-ev} \psi(\vec{x},t+h)=\mathcal{T}\operatorname{exp}\left[-i\int_t^{t+h}dt'\left\{-\frac{1}{2}\nabla^2+\Phi(\vec{x},t')\right\}\right]\psi(\vec{x},t), \end{equation} where $\mathcal{T}$ is the time-ordering symbol. For a sufficiently small timestep $h$, the trapezoidal rule gives the approximation \begin{equation} \int_t^{t+h}dt'\Phi(\vec{x},t')\approx \frac{h}{2}\Big(\Phi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)+\Phi(\vec{x},t)\Big). \end{equation} We can therefore write the approximate form of equation \ref{eq:exact-time-ev} as \begin{equation}\label{eq:approx-time-ev} \psi(\vec{x},t+h)\approx \operatorname{exp}\left[i\frac{h}{2}\Big(\nabla^2-\Phi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)-\Phi(\vec{x},t)\Big)\right]\psi(\vec{x},t). \end{equation} Note that the exponential in equation \ref{eq:approx-time-ev} omits the time-ordering symbol, and is only equivalent to its time-ordered counterpart to order $h^2$.
The linear differential operator in equation \ref{eq:approx-time-ev} acts naturally in Fourier space, while the nonlinear potential term is simplest to evaluate in position space. By splitting the exponential we can evaluate each term in its natural domain. Such a splitting is valid when the timestep is small, and is represented as \begin{equation}\label{eq:split-step} \psi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)\approx\operatorname{exp}\left[-\frac{ih}{2}\Phi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)\right]\operatorname{exp}\left[\frac{ih}{2}\nabla^2\right]\operatorname{exp}\left[-\frac{ih}{2}\Phi(\vec{x},t)\right]\psi(\vec{x},t). \end{equation} This splitting can be understood as follows: first, a half timestep is taken in which only the nonlinear potential operator acts, followed by a full timestep in the linear term. The potential field is then updated, and a final half timestep in the nonlinear term is performed. Using the Baker-Campbell-Hausdorff formula to express the product of exponentials in equation \ref{eq:split-step} as a single exponential and keeping only terms to order $h^2$ we find: \begin{equation}\label{eq:BCH} \operatorname{exp}\left[i\frac{h}{2}\Big(\nabla^2-\Phi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)-\Phi(\vec{x},t)\Big)+\frac{h^2}{8}\Big[\nabla^2,\Phi(\vec{x},t)\Big]-\frac{h^2}{8}\Big[\nabla^2,\Phi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)\Big]\right]. \end{equation} Making use of the fact that $\Phi(\vec{x},t+h)\approx\Phi(\vec{x},t)+h\Dot{\Phi}(\vec{x},t)$ we see that the commutators in equation \ref{eq:BCH} cancel at $\mathcal{O}(h^2)$ and the expression matches equation \ref{eq:approx-time-ev}, with the dominant error term appearing at $\mathcal{O}(h^3)$. Evaluation of equation \ref{eq:split-step} within \textsc{PyUltraLight}\xspace thus proceeds as follows: Initially, the nonlinear term acts in position space for one half-timestep. The result is Fourier transformed, and a full timestep is taken with the differential operator applied in the Fourier domain. The potential field is then updated in accordance with equation \ref{eq:p-adim}. After an inverse Fourier transform a final half timestep is taken with the updated nonlinear term acting in position space to give the new $\psi$ field configuration. This procedure is known as the symmetrised split-step Fourier method, and is used widely in fields such as nonlinear fiber optics \cite{Agrawal2013, Zhang:2008, Sinkin2003}. The algorithm can be represented schematically as \begin{equation}\label{eq:schematic-psi} \psi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)=\operatorname{exp}\left[-\frac{ih}{2}\Phi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)\right]\CMcal{F}^{-1} \operatorname{exp}\left[\frac{-ih}{2}k^2\right]\CMcal{F}\operatorname{exp}\left[-\frac{ih}{2}\Phi(\vec{x},t)\right]\psi(\vec{x},t), \end{equation} where the order of operations runs from right to left, $\CMcal{F}$ and $\CMcal{F}^{-1}$ denote the discrete Fourier transform and its inverse, and $k$ is the wavenumber in the Fourier domain. The potential field is updated following the inverse Fourier transform in equation \ref{eq:schematic-psi}, via \begin{equation}\label{eq:schematic-phi} \Phi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)=\CMcal{F}^{-1}\left(-\frac{1}{k^2}\right)\CMcal{F}\ 4\pi\vert\psi(\vec{x},t_{i})\vert^2, \end{equation} where $\psi(\vec{x},t_{i})$ is the field configuration at this halfway point in the full timestep.
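For concreteness, the update described by equations \ref{eq:schematic-psi} and \ref{eq:schematic-phi} can be sketched in a few lines of Python. The sketch below uses plain NumPy FFTs rather than the pyFFTW interface discussed below, and the function name and arguments are illustrative rather than \textsc{PyUltraLight}\xspace's actual interface:
\begin{verbatim}
# Sketch only: one symmetrised split-step update of psi on an N^3
# periodic box of side L (code units); not PyUltraLight's actual interface.
import numpy as np

def split_step(psi, h, L):
    N = psi.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    k2_safe = k2.copy()
    k2_safe.flat[0] = 1.0   # avoid 0/0 at k = 0; the mode is zeroed below

    def potential(field):
        phi_k = -4.0 * np.pi * np.fft.fftn(np.abs(field)**2) / k2_safe
        phi_k.flat[0] = 0.0   # project out the k = 0 mode
        return np.real(np.fft.ifftn(phi_k))

    psi = np.exp(-0.5j * h * potential(psi)) * psi   # half step in Phi
    psi = np.fft.ifftn(np.exp(-0.5j * h * k2) * np.fft.fftn(psi))   # kinetic step
    psi = np.exp(-0.5j * h * potential(psi)) * psi   # half step, updated Phi
    return psi
\end{verbatim}
Composing $n$ such updates and merging the adjacent half steps in the potential yields the optimisation described next.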
We explicitly set the $k=0$ Fourier mode to zero prior to the final inverse Fourier transform; as a consequence there is no need to subtract the global average density from the local value in equation \ref{eq:schematic-phi}, in contrast to equation \ref{eq:poisson_periodic}. The final operation in equation \ref{eq:schematic-psi} only changes the phase of $\psi$, so we could replace $\psi(\vec{x},t_{i})$ with $\psi(\vec{x},t\hspace*{-.2em}+\hspace*{-.2em}h)$ in equation \ref{eq:schematic-phi} with no change in meaning. \textsc{PyUltraLight}\xspace makes an additional simplification to the symmetrised split-step Fourier method by combining the consecutive half-steps in the nonlinear term into a single full step. Consequently, only the first and last operations involve actual half steps. Schematically this becomes \begin{equation} \psi(t\hspace*{-.2em}+\hspace*{-.2em}nh)=\operatorname{exp}\left[+\frac{ih}{2}\Phi\right]\left(\prod^n\operatorname{exp}\left[-ih\Phi\right]\operatorname{exp}\left[\frac{ih}{2}\nabla^2\right]\right)\operatorname{exp}\left[-\frac{ih}{2}\Phi\right]\psi(t), \end{equation} where $\Phi$ is updated at each step via equation \ref{eq:schematic-phi}; attention is drawn to the sign difference between the first and last operators. From a computational perspective, the numerical Fourier transforms are likely to be the rate-limiting step in any pseudospectral code. In \textsc{PyUltraLight}\xspace the discrete Fourier transform (DFT) and its inverse are implemented via pyFFTW, a pythonic wrapper for the C-based FFTW subroutine library which efficiently implements both real and complex DFTs \cite{pyfftw,fftw,Frigo2005}. This allows \textsc{PyUltraLight}\xspace to combine the flexibility of a notebook-based modelling tool with the efficiency of a carefully tuned, compiled numerical library. FFTW is fully parallelised and its support for multithreading is inherited by pyFFTW and accessed within \textsc{PyUltraLight}\xspace; the number of threads used by the pyfftw.FFTW class is determined by the Python multiprocessing routines, which are used to ascertain the number of available CPU cores. In addition, \textsc{PyUltraLight}\xspace uses the NumExpr package to parallelise operations on array objects within the simulation \cite{numexpr}. \subsection{Initial Conditions: Soliton Profiles}\label{sec:soliton-profiles} \textsc{PyUltraLight}\xspace specifies the initial dark matter configuration as a superposition of an arbitrary number of solitonic halos, with arbitrary (user-defined) velocities and phases. This is necessarily an idealisation, given that realistic dark matter halos will not map directly to the solitonic solutions, but it provides an excellent ``playground'' in which to explore ULDM dynamics, and the initialisation routines within \textsc{PyUltraLight}\xspace can be easily augmented to accommodate a wider range of scenarios. The initial field configuration is built by loading a NumPy array file encoding a solitonic solution to the Schr{\"o}dinger-Poisson system, together with the corresponding position, mass, velocity, and phase parameters, each specified by the user within the accompanying Jupyter notebook. In practice, only a finite range of halo masses can be supported within a given simulation -- the radius of a solitonic halo is inversely proportional to its mass, so resolving a light halo interacting with a very massive halo would require an extremely fine spatial mesh.
However, \textsc{PyUltraLight}\xspace also allows the user to specify a fixed, external potential which does not take part in the dynamics. At this point only a central $1/r$ potential is supported, but this could easily be generalised. It should be noted that because \textsc{PyUltraLight}\xspace enforces periodic boundary conditions, care must be taken in cases where solitons approach the boundaries of the simulation grid. If a soliton were to cross the boundary during a simulation in which a Newtonian central potential is implemented, the forces exerted during the crossing would be unphysical. For studies of orbital stability this is unlikely to cause any problems, as in these circumstances material collapses toward the centre of the simulation grid rather than crossing the boundaries. However, the user should ensure that solitons are initialised sufficiently far from the boundary for the purposes of each simulation on a case-by-case basis. In situations where a significant portion of the total mass is expected to be ejected, such as the merger of multiple solitons to form a larger halo, care should be taken to ensure that mass ejected above the escape velocity is not recaptured as it re-enters the grid from the other side. For studies of this kind, an absorbing sponge at the grid boundaries is perhaps more suitable than periodic boundary conditions, though this has not been implemented in \textsc{PyUltraLight}\xspace at this stage. The soliton profile used to generate the initial conditions in \textsc{PyUltraLight}\xspace is found by first imposing spherical symmetry in the Schr{\"o}dinger-Poisson equations and assuming time independence in the radial density profile \cite{Paredes:2015wga}: \begin{equation} \psi(\vec{x},t)\rightarrow e^{i\beta t} f(r), \quad \Phi(\vec{x},t)\rightarrow \varphi(r), \end{equation} where $r=\vert\vec{x}\vert$. Introducing $\tilde{\varphi}(r):=\varphi(r)+\beta$, equations \ref{eq:s-adim} and \ref{eq:p-adim} reduce to \begin{align} 0 &= -\frac{1}{2}f^{\prime\prime}(r)-\frac{1}{r}f^\prime(r)+\tilde{\varphi}(r)f(r)\label{eq:s-spherical}\\ 0 &= \tilde{\varphi}^{\prime\prime}(r)+\frac{2}{r}\tilde{\varphi}^\prime(r)-4\pi f(r)^2\label{eq:p-spherical} \end{align} where primes denote derivatives with respect to $r$. Note that this system contains no arbitrary constants, so the underlying profile is effectively universal and is loaded as a pre-computed array by \textsc{PyUltraLight}\xspace, rather than computed from scratch with each code execution. The soliton profile NumPy array file is included with \textsc{PyUltraLight}\xspace; however, an auxiliary program {\sc soliton\texttt{\_}solution.py} is also supplied, from which this array can be generated. It uses a fourth-order Runge-Kutta algorithm to solve the coupled profile equations. We set $\left. f(r) \right|_{r=0}=1$, while smoothness requires that the first derivatives of $f(r)$ and $\tilde{\varphi}(r)$ vanish at the origin. We then use the shooting method to search for solutions of $f(r)$ and $\varphi(r)$ satisfying the boundary conditions $\lim_{r\rightarrow\infty}\varphi(r)=0$ and $\lim_{r\rightarrow\infty}f(r)=0$, varying $\left. \tilde{\varphi}(r)\right|_{r=0}$ until we obtain a solution of $f(r)$ which approaches zero at the maximal specified radius, $r_m$. The value of $\beta$ is then calculated by assuming that $\varphi(r)$ goes as $-c/r$ at large radii, where $c$ is a constant.
Under this assumption, we can write \begin{equation} \tilde{\varphi}(r_m)=-\frac{c}{r_m}+\beta, \quad c=r_m^2\tilde{\varphi}^\prime(r_m). \end{equation} We thus obtain the full solution $\psi(\vec{x},t)=e^{i\beta t}f(r)$. Having initially chosen $\left. f(r)\right|_{r=0}=1$, we may then generalise to $\left. f(r)\right|_{r=0}=\alpha$, where $\alpha$ is an arbitrary positive real number. It is easily verified that if $e^{i\beta t}f(r)$ is a solution to the spherically symmetric Schr{\"o}dinger-Poisson system, then $g(r,t)$ is also a solution, where \begin{equation} g(r,t)=e^{i\alpha\beta t}\,\alpha f(\sqrt{\alpha}\,r). \end{equation} We thus have a family of spherically symmetric soliton solutions to the dimensionless Schr{\"o}\-dinger-Poisson system; the dimensionless soliton mass is proportional to $\sqrt{\alpha}$ and the full width at half maximum is proportional to $1/\sqrt{\alpha}$. Since the size of the soliton scales inversely with the mass, the most massive soliton in the solution puts a lower bound on the required spatial resolution. The Schr{\"o}dinger equation is not trivially form invariant under Galilean boosts, but we can enforce Galilean covariance through the addition of a velocity-dependent phase factor, \begin{equation}\label{eq:covariant-solution} \psi(\vec{x},t)=\alpha f\big(\sqrt{\alpha}\vert\vec{x}-\vec{v}t\vert\big)e^{i\left(\alpha\beta t+\vec{v}\cdot\vec{x}-\frac{1}{2}\vert\vec{v}\vert^2t\right)}. \end{equation} To construct the initial field configuration, \textsc{PyUltraLight}\xspace loads the NumPy array encoding the radial profile $f(r)$ for the $\left. f(r)\right|_{r=0}=1$ case. Equation \ref{eq:covariant-solution} is then used to transform this solution into soliton(s) with position, mass, and velocity specified by the user via the accompanying Jupyter notebook. The user may also add an additional constant phase factor if desired. \subsection{Choosing the Timestep} The Courant-Friedrichs-Lewy (CFL) condition is an upper bound on the timestep (as a function of grid-spacing) that must be satisfied by many partial differential equation solvers based on finite-differencing \cite{Ajaib:2013oua} and is often cited in numerical analyses of ULDM via the Schr{\"o}dinger-Poisson system, see e.g. Ref.~\cite{Schwabe:2016rze}. However, the CFL condition expresses a causality constraint, and is generally only strictly applicable to hyperbolic PDEs, whereas the Schr{\"o}dinger-Poisson system has only a first-order time derivative, even though it is effectively the nonrelativistic limit of the Klein-Gordon equation. Moreover, because \textsc{PyUltraLight}\xspace computes spatial derivatives via a pseudospectral method, it is technically unconditionally stable \cite{Taha:1984jz}. Our split-step algorithm is second order in the timestep, and the choice of timestep will always be an empirical tradeoff between computational cost and convergence to the apparent limit in which the step is arbitrarily small. Consequently, the user is encouraged to validate their choice of timestep via case-by-case convergence testing. The default timestep in \textsc{PyUltraLight}\xspace is fixed with reference to the fluid interpretation of the Schr{\"o}dinger-Poisson system \cite{Hui:2016ltb}. This interpretation recasts the Schr{\"o}dinger-Poisson system in the form of the Madelung equations \cite{Suarez:2011yf}, which are a hydrodynamical representation of the system.
The first step is to define \begin{equation}\label{eq:madelung} \psi\equiv\sqrt{\rho}e^{i\theta}, \quad \vec{v}\equiv\boldsymbol{\nabla}\theta, \end{equation} and to treat $\vec{v}$ as a fluid velocity. From this perspective, if the phase difference between two adjacent grid points exceeds $\pi$ the fluid will appear to move ``backwards'' across the grid. We thus set the default timestep, $\Delta t$, so that fluid travelling at this maximum velocity traverses one grid space, $\Delta x$, per timestep, or \begin{equation}\label{eq:timestep} \Delta t = \frac{(\Delta x)^2}{\pi}. \end{equation} This is a choice, rather than a strict constraint on $\Delta t$. However, if the ``fluid'' approaches velocities where the phase appears to switch direction, the configuration is approaching the point where the simulation grid is too coarse to fully resolve the dynamics. Hence, a timestep much smaller than this value may offer little practical advantage. However, in some cases the breakdown may occur in regions of the simulation volume that are of little physical interest, and the user is free to choose a larger timestep via the \texttt{step\_factor} parameter in the Jupyter notebook. Alternatively, Ref.~\cite{Mocz:2017wlg} fixes the timestep by ensuring that neither of the unitary operators in Equation \ref{eq:schematic-psi} leads to a phase change of more than $\pi$ for a single grid point over one timestep. However, because the pseudo-spectral algorithm does not compare the phase of a single gridpoint at different points in time, this choice of timestep is not a requirement for stability. This method gives the following constraints: \begin{equation} \Delta t < \min\left[\frac{2\pi}{\Phi_{max}}, \frac{2(\Delta x)^2}{\pi}\right], \end{equation} where the second of these constraints is generally the stricter of the two, and is equivalent to our default choice of timestep up to a multiplicative factor of $\mathcal{O}(1)$. Our experience is that specifying the timestep via Equation \ref{eq:timestep} is suitable for the majority of simulation scenarios, and we explore convergence in more detail in Section \ref{sec:resolution}. \section{ULDM Dynamics with \textsc{PyUltraLight}\xspace}\label{sec:test} In this section we validate \textsc{PyUltraLight}\xspace by reproducing results from previous studies of ULDM dynamics, demonstrating interference effects and effective repulsive forces arising from the wavelike nature of ULDM. In addition we study the evolution of the velocity field of a solitonic core orbiting within a Newtonian central potential, showing that the stable orbital configuration is an irrotational Riemann-S ellipsoid. Finally, we demonstrate that \textsc{PyUltraLight}\xspace delivers sub-percent level energy conservation for a selection of dynamical scenarios. \subsection{Interference Patterns During Soliton Collisions}\label{sec:interference} \begin{figure} \includegraphics[trim={0 0 0 0.9cm},clip, scale=0.9]{interference_patterns} \caption{Comparison of theoretical and numerical density profiles at time of maximal interference for head-on collision of two solitons with mass ratio $\mu=2$ and no relative phase difference. The solitons have dimensionless masses 5 and 10, with an initial separation of 4 code units and relative velocity of 20 code units. The simulation resolution is $256^3$ in a box of side-length 8.} \label{fig:interference} \end{figure} The outcomes of ULDM soliton collisions depend critically on whether the total energy of the isolated binary system is positive or negative.
With a positive total energy the solitons pass through each other, emerging largely undisturbed from their initial configurations, and the wavefunctions describing the solitons are superposed during the collision, yielding distinctive interference patterns. Following \cite{Schwabe:2016rze}, we consider the head-on collision of two solitons with mass ratio $\mu=2$ and high relative velocity. While we work in dimensionless code units, it should be noted that a dimensionful velocity can be restored from the code velocity by multiplying by $\CMcal{L}\CMcal{T}^{-1}$, where $\CMcal{L}$ and $\CMcal{T}$ are the scale parameters defined in Equations \ref{eq:length} and \ref{eq:time}. This simple case of a head-on soliton collision can be treated approximately. Starting from equation \ref{eq:covariant-solution} we write the total wavefunction of the binary system in terms of dimensionless quantities along the collision axis as \begin{align} \psi(x,t)=& \ \alpha_1 f\big(\sqrt{\alpha_1}\vert x-x_1-v_1 t\vert\big)e^{i\left(\alpha_1\beta t+v_1(x-x_1)-\frac{1}{2}v_1^2 t+\delta\right)}\nonumber\\ &+\alpha_2 f\big(\sqrt{\alpha_2}\vert x-x_2-v_2 t\vert\big)e^{i\left(\alpha_2\beta t+v_2(x-x_2)-\frac{1}{2}v_2^2 t\right)}, \end{align} where $x_1$ and $x_2$ are the initial central positions of the solitons, $v_1$ and $v_2$ are the soliton velocities, $\delta$ is a constant relative phase term and $\alpha_2 = 4\, \alpha_1$, parameterising the density profiles as discussed in Section \ref{sec:soliton-profiles}. For convenience we set $v_1=-v_2$ and $x_1=-x_2$. We expect that the interference effects will be maximised when the two components of the wavefunction are centred at the same location, such that $x_1+v_1t=-x_2-v_2t=0$. This corresponds to a time $t_{o}=\vert x_1/v_1\vert = \vert x_2/v_2\vert$, where in this simplified model we do not account for distortions caused by the accelerating or compactifying effects that the gravitational interaction has on the soliton profiles as they approach one another. The dimensionless density is then given by \begin{align}\label{eq:predicted-interference} \vert\psi(x,t_{o})\vert^2= \ \alpha_1^2\bigg[&f(\sqrt{\alpha_1}x)^2+16f(2\sqrt{\alpha_1}x)^2+\nonumber\\ &8f(\sqrt{\alpha_1}x)f(2\sqrt{\alpha_1}x)\operatorname{cos}\left(-3\alpha_1\beta\left\vert\frac{x_1}{v_1}\right\vert+2v_1x+\delta \right)\bigg]. \end{align} Figure \ref{fig:interference} shows the dimensionless density profile at the time of maximal interference for two solitons with mass ratio 2 and phase difference $\delta=0$. The numerical result obtained using \textsc{PyUltraLight}\xspace closely matches the theoretical prediction of equation \ref{eq:predicted-interference}. Small disparities between the numerical and theoretical profiles may be attributed to the effect of gravitational contraction not included in the theoretical prediction of equation \ref{eq:predicted-interference} and to a small offset in the true time of maximal interference due to the solitons accelerating as they fall together. We do not expect an exact match, but we have verified that \textsc{PyUltraLight}\xspace qualitatively reproduces the wave interference effects of the ULDM model. With the exception of \cite{Schwabe:2016rze}, few studies of ULDM dynamics have investigated the interference patterns generated by colliding solitons in this way. In some cases, this is because the algorithm employed to simulate the dynamics is not capable of reproducing such effects.
An example of this is given in \cite{Veltmaat:2016rxo}, where it is demonstrated that the coarse-grained nature of the particle-mesh method renders the algorithm incapable of reproducing detailed interference patterns such as those shown here. \subsection{Effective Forces From Destructive Interference} \begin{figure} \includegraphics[width=1.\textwidth, trim={0 0 0 0},clip]{phase_comparison} \caption{Head-on collisions between solitons of mass 20, initial relative velocity 20, and initial separation 1.2 in code units. Plots show contours of constant density; time progresses from left to right across each row and is indicated in code units for each frame. The upper panel shows solitons initially in phase; the effective repulsive force generated by a $\pi$ phase shift can be seen in the lower panel.} \label{fig:repulsion} \end{figure} As demonstrated in \cite{Paredes:2015wga}, the wavelike properties of ULDM give rise to effective forces which can dramatically affect the dynamics of core collisions. These effective forces arise as a result of interference phenomena, rather than because of any local interactions the ULDM model might incorporate. Figure \ref{fig:repulsion} shows the results of a head-on collision between two solitons, where in one instance the solitons have no initial phase difference, and in the other instance a phase difference of $\pi$ is applied in the initial conditions. In this simulation, solitons of mass 20 are initialised with relative velocity 20 and initial separation 1.2 (code units). The solitons are allowed to collide, and contours of the density profile along the plane of symmetry are displayed. In one case (top) there is no phase offset between the initial solitons, while in the second the phases differ by $\pi$. In the latter case, the phase shift creates an effective repulsive force between the two solitons. It can be seen in the second frame that as the solitons approach one another, the $\pi$ phase shift results in a slowing of the approach accompanied by a deformation of the density profile, acting so as to avoid contact between the solitons. By contrast, in the case where there is no phase shift, the solitons readily collide and merge to form a single contiguous density profile prior to re-separating. Further discussion of this phenomenon and its possible observational consequences can be found in \cite{Paredes:2015wga}. \vspace{1em} \subsection{Tidal Disruption of Solitons Orbiting a Central Potential}\label{sec:disruption} \begin{figure} \includegraphics[width=1.\textwidth,trim=1cm 1cm 0 1cm,clip]{riemann} \caption{Contours of constant density for a solitonic core after one revolution around a central potential. Contours are superimposed upon the internal velocity field (with the bulk motion subtracted). This velocity field is qualitatively that of an irrotational Riemann-S ellipsoid, and the deformation of the density profile of the soliton along the radial direction (red arrow) is visible. The host:satellite mass ratio is approximately 55; simulation resolution is $256^3$.} \label{fig:riemann} \end{figure} \textsc{PyUltraLight}\xspace allows the inclusion of a static potential equivalent to a point-mass at the centre of the simulation region. There is no backreaction on this mass as a result of the ULDM dynamics, and its ``mirror images'' in the periodic copies of the simulation volume are not accounted for within the overall gravitational potential.
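In code, including such a potential simply amounts to adding a fixed $-M/r$ term to the self-consistent gravitational potential before each nonlinear phase update. The sketch below is a minimal version of this, in which the softening parameter is our illustrative regularisation choice rather than necessarily the scheme used internally.
\begin{verbatim}
import numpy as np

def total_potential(phi_self, M, x, y, z, eps=1e-6):
    """Add a fixed central point-mass potential -M/r (code units,
    grid centred on the origin) to the self-gravity potential;
    eps softens the singularity at r = 0 (illustrative choice)."""
    r = np.sqrt(x**2 + y**2 + z**2)
    return phi_self - M / np.maximum(r, eps)
\end{verbatim}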
While a potential of this form does not necessarily accurately emulate that which we might expect from a realistic galaxy or dark matter halo, it provides a starting point for a study of the stability of satellite dark matter halos orbiting a much larger object. In particular, this includes the investigation of lifespans of dwarf satellite galaxies orbiting much larger objects (including the Milky Way), which are a key to understanding whether ULDM models can resolve the so-called missing satellites problem \cite{Weinberg:2013aya}. An extensive study of the tidal disruption of ULDM solitonic cores orbiting a central potential has recently been undertaken in \cite{Du:2018zrg} and we reproduce just one of their results here. To do this, we again adopt the definitions of equation \ref{eq:madelung}, namely \begin{equation}\label{eq:madelung_repeated} \psi\equiv\sqrt{\rho}e^{i\theta}, \quad \vec{v}\equiv\boldsymbol{\nabla}\theta, \end{equation} where we are working in dimensionless code units. Using these definitions, the Schr{\"o}dinger-Poisson system can be recast in terms of hydrodynamical quantities in the so-called Madelung representation. The Madelung equations resemble the continuity and Euler equations of classical fluid dynamics, with the addition of a `quantum pressure' term accounting for resistance against gravitational collapse. The Madelung formalism is discussed in detail in \cite{Suarez:2011yf, Suarez:2015uva, Johnston:2009wz, Kopp:2017hbb}. Because this hydrodynamical formulation defines the fluid velocity as the gradient of the phase of the field $\psi$, problems arise when $\psi=0$, where the phase is not well defined. Because of this issue, the Madelung and Schr{\"o}dinger representations are not strictly equivalent unless a quantisation condition is imposed, as discussed in \cite{Wallstrom:1994fp}. We do not consider the subtleties of the Madelung representation here, as it is sufficient for our purposes to consider the fluid velocity in the region of a solitonic core, where no field nodes are present.\footnote{It should be noted that, restoring dimensionful units, the fluid velocity $\vec{v}$ is related to the usual quantum mechanical probability current $\vec{j}$ through $\vert\psi\vert^2\vec{v} = \vec{j} = \frac{\hbar}{2mi}\left[\psi^*\nabla\psi-\psi\nabla\psi^*\right]$.} For a discussion of the possible remedies to the `nodes problem', the reader is referred to Chapter 15.3 of \cite{Wyatt:2005uc}. Where the Madelung representation is well defined, i.e. where the phase is a smoothly varying function, the velocity field of the Schr{\"o}dinger-Poisson system is strictly irrotational, $\nabla\times\vec{v}=0$. If a radially symmetric soliton is initialised in a circular orbit around a Newtonian potential, there will be initial transient behaviour as the spherical profile becomes elongated along the radial direction of the central potential. Meanwhile, the velocity field corresponding to the overall orbital motion of the soliton will be superposed with the internal velocity field, combining so as to produce a net flow with vanishing curl. The family of Riemann-S ellipsoids describes non-axisymmetric uniformly rotating bodies whose internal velocity fields have vanishing curl \cite{Chandrasekhar1965}. Therefore, it is the characteristic internal velocity field of a Riemann-S ellipsoid which we expect to arise during our simulation of a soliton orbiting a central mass.
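Numerically, a velocity field of this kind can be extracted from $\psi$ without differentiating a wrapped phase by using the probability current, since $\vec{v}=\operatorname{Im}(\psi^*\nabla\psi)/\vert\psi\vert^2$ coincides with $\boldsymbol{\nabla}\theta$ wherever the phase is smooth. The following is a minimal sketch; the density floor is an illustrative guard against near-nodes, where the velocity is ill-defined.
\begin{verbatim}
import numpy as np

def madelung_velocity(psi, dx, floor=1e-12):
    """Velocity field v = Im(psi* grad psi)/|psi|^2 in code units,
    equal to grad(theta) where the phase is smooth. Uses central
    differences via np.gradient; `floor` regularises near-nodes."""
    rho = np.abs(psi)**2
    grads = np.gradient(psi, dx)   # list of d(psi)/dx_i along each axis
    return [np.imag(np.conj(psi) * g) / np.maximum(rho, floor)
            for g in grads]        # [vx, vy, vz]
\end{verbatim}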
It is found in \cite{Du:2018zrg} that an initially spherical solitonic core without self-rotation will gradually spin up to form a tidally-locked ellipsoid with an irrotational internal velocity field when orbiting a host mass. We reproduce this result using \textsc{PyUltraLight}\xspace. Figure \ref{fig:riemann} shows the internal velocity field of a solitonic satellite after one complete revolution around a host mass. The soliton has become elongated along the radial line connecting it to the host, indicating that it is tidally locked, while the velocity field within the tidal radius is visibly irrotational and bears the qualitative hallmarks of the Riemann-S ellipsoid as presented in Figure 2 of \cite{RindlerDaller:2011kx}. It should be noted that the wider velocity field is not expected to be accurately predicted in a simulation of this kind, though the field within the tidal radius is well-modelled. This is because the initial soliton density profile is defined only out to a given cutoff radius, beyond which the $\psi$ field value is set identically to zero. As mentioned previously, the Madelung hydrodynamical formulation of the Schr{\"o}dinger-Poisson system is not valid where $\psi=0$. Because of this, we focus primarily on the internal velocity field within the high density region of the solitonic core. As we have seen, in this region \textsc{PyUltraLight}\xspace is able to accurately reproduce the expected velocity field characteristics. \section{Convergence and Validation} \subsection{Energy Conservation}\label{sec:energy} \begin{figure} \includegraphics[width=1.1\textwidth,trim=2.5cm 0 0 1cm,clip]{combined_energy_and_density1} \caption{Left: Evolution of the density profile of a solitonic core at equally spaced times as it undergoes tidal disruption in a potential centred at the red cross. Right: Evolution of the components of the total energy of the system; times correspond to the labeled density profile snapshots. All quantities are in dimensionless code units.} \label{fig:combined_1} \end{figure} \begin{figure} \includegraphics[width=0.9\textwidth,trim=0 0 2cm 2cm,clip]{egy_m=22.eps} \caption{Evolution of the components of the total energy of a binary soliton system with each soliton in an elliptical orbit around the common centre of mass.} \label{fig:binary} \end{figure} \begin{figure} \includegraphics[width=0.9\textwidth,trim=0 0.5cm 0 0,clip]{Total_Energy_Change.eps} \caption{Energy conservation as a function of simulation resolution for a pair of solitons orbiting their common centre of mass. $\Delta E_{total}$ is the difference between the current total energy and the initial total energy for the configuration. The ratio of this difference to the initial integrated energy is plotted on the $y$ axis for each resolution.} \label{fig:energy_change} \end{figure} \begin{figure} \includegraphics[width=0.9\textwidth,trim=0 0.5cm 0 0,clip]{Total_Energy_Change_Disruption.eps} \caption{Energy conservation as a function of simulation resolution for a soliton undergoing significant tidal disruption in a Newtonian central potential. $\Delta E_{total}$ is the difference between the current total energy and the initial total energy for the configuration. The ratio of this difference to the initial integrated energy is plotted on the $y$ axis for each resolution.} \label{fig:energy_change_2} \end{figure} Physically, we expect that the overall energy in the system will be conserved.
This provides a test on the numerical performance of \textsc{PyUltraLight}\xspace, and we find that even at relatively low spatial resolution we see sub-percent level energy conservation for all the dynamical scenarios considered here. In this Section we express the energy of the Schr{\"o}dinger-Poisson system in terms of the variables $\psi$ and $\Phi$ and discuss its decomposition into individual constituents calculated separately within the code. We then present results for a variety of configurations. We begin by defining a suitable action which yields the full Schr{\"o}dinger-Poisson system through its corresponding Euler-Lagrange equations. We find that variation of \begin{equation}\label{eq:sp-action} S=\int dt\int_{\mathbb{R}^3} d^3x \ -\bigg\{\frac{1}{8\pi}\vert\nabla\Phi\vert^2+\Phi\vert\psi\vert^2+\frac{1}{2}\vert\nabla\psi\vert^2+\frac{i}{2}(\psi\Dot{\psi}^*-\Dot{\psi}\psi^*)\bigg\} \end{equation} with respect to $\Phi$, $\psi^*$ and $\psi$ yields equations \ref{eq:p-adim}, \ref{eq:s-adim}, and the conjugate of equation \ref{eq:s-adim}, respectively. The integrand of equation \ref{eq:sp-action} is the Lagrangian density, $\mathcal{L}$, from which we can derive the conserved energy in the usual way: \begin{equation} E_{tot}=\int_{\mathbb{R}^3}d^3x \ \bigg\{\frac{\partial \mathcal{L}}{\partial \Dot{\psi}}\Dot{\psi}+\frac{\partial \mathcal{L}}{\partial \Dot{\psi}^*}\Dot{\psi}^*+\frac{\partial \mathcal{L}}{\partial \Dot{\Phi}}\Dot{\Phi}-\mathcal{L}\bigg\}. \end{equation} Evaluating this expression, we obtain: \begin{align} E_{tot}&=\int_{\mathbb{R}^3}d^3x \ \bigg\{\frac{1}{8\pi}\vert\nabla\Phi\vert^2+\Phi\vert\psi\vert^2+\frac{1}{2}\vert\nabla\psi\vert^2\bigg\}\\ &=\int_{\mathbb{R}^3}d^3x \ \bigg\{\frac{1}{8\pi}\nabla\cdot(\Phi\nabla\Phi)-\frac{1}{8\pi}\Phi\nabla^2\Phi+\Phi\vert\psi\vert^2+\frac{1}{2}\nabla\cdot(\psi^*\nabla\psi)-\frac{1}{2}\psi^*\nabla^2\psi\bigg\}\\ &=\int_{\mathbb{R}^3}d^3x \ \bigg\{\frac{1}{2}\Phi\vert\psi\vert^2-\frac{1}{2}\psi^*\nabla^2\psi\bigg\}.\label{eq:energy-tot} \end{align} where in the last step we have used the divergence theorem to discard the total-derivative terms, as well as the Poisson equation (\ref{eq:p-adim}) to perform simplifications. Because we are working with the dimensionless quantities defined in equation \ref{eq:dimensionless}, it is easy to see that this quantity is related to the physical energy through multiplication by a constant factor of $\CMcal{L}^5\CMcal{T}^{-4}G^{-1}$. It should be noted that equation \ref{eq:energy-tot} is not equivalent to the expectation value of the Schr{\"o}dinger Hamiltonian, which is itself not a conserved quantity of the Schr{\"o}dinger-Poisson system and is given by \begin{equation} \langle\hat{H}\rangle=\int_{\mathbb{R}^3}d^3x \ \bigg\{\Phi\vert\psi\vert^2-\frac{1}{2}\psi^*\nabla^2\psi\bigg\}. \end{equation} The two terms in equation \ref{eq:energy-tot} are calculated separately within the code. The first term is the gravitational potential energy of the Schr{\"o}dinger-Poisson system, $E_{GP}$. As discussed in \cite{Hui:2016ltb}, the second term may be decomposed into contributions which may be considered separately as kinetic and `quantum' energies, $E_K$ and $E_Q$. However, for our purposes it is sufficient to consider only their combined contribution.
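Both contributions can be evaluated spectrally alongside the main evolution loop. The following is a minimal sketch, with \texttt{k2} the squared-wavenumber grid constructed as in the earlier split-step sketch and \texttt{dV} the grid cell volume; the names are illustrative.
\begin{verbatim}
import numpy as np

def energies(psi, phi, k2, dV):
    """The two contributions to the conserved energy of equation
    (energy-tot), in code units (illustrative sketch)."""
    e_gp = 0.5 * np.sum(phi * np.abs(psi)**2) * dV       # gravitational term
    lap_psi = np.fft.ifftn(-k2 * np.fft.fftn(psi))       # spectral Laplacian
    e_kq = -0.5 * np.real(np.sum(np.conj(psi) * lap_psi)) * dV  # kinetic + quantum
    return e_gp, e_kq
\end{verbatim}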
When \textsc{PyUltraLight}\xspace includes the central potential of a point mass located at the centre of the simulation grid there are additional energy contributions; in this case the gravitational potential energy from self-interactions is calculated separately from the gravitational potential energy due to the central potential. Figures \ref{fig:combined_1} and \ref{fig:binary} demonstrate energy conservation for two scenarios. The first case shows the evolution of the energy of a single soliton undergoing significant tidal disruption within a Newtonian central potential. For this simulation a soliton of mass 12 in code units was initialised at a radial distance of 3 code units from the centre of a Newtonian central potential generated by a central mass of 1000 code units. As the soliton is disrupted, the kinetic energy increases, while the gravitational energy due to the central potential decreases, as expected. Meanwhile, the gravitational potential energy from self-interactions gradually increases toward zero as the disruption continues and the ULDM halo is spread over a greater volume. In this case the sum of the individual energy components is conserved to within $10^{-4}\,\%$ at a resolution of $256^3$. Figure \ref{fig:binary} demonstrates the evolution of the energy of a binary system of solitons in elliptical orbits around their common centre of mass over three orbital periods. In dimensionless code units, the soliton masses are 22, the initial separation is 2, and the initial relative velocity is 3.6. At points of closest approach the kinetic energy increases as the solitons speed up, while the potential energy due to self-interaction decreases commensurately such that the total energy is conserved. In this scenario no central potential has been included. As the solitons reach the first point of closest approach, they become slightly deformed, exciting oscillatory modes which are manifest in the figure as small scale oscillations superposed on the global behaviour. Figure \ref{fig:energy_change} demonstrates the relationship between the total integrated energy and the grid resolution for the same binary system of solitons used to generate Figure~\ref{fig:binary}. The vertical axis shows the ratio of the deviation in the total energy to the initial value of the energy, where the deviation is measured as the difference between the current and initial values. Energy is conserved at sub-percent level even at low resolutions ($96^3$), and increasing grid resolution greatly improves accuracy. Figure \ref{fig:energy_change_2} demonstrates the improvement in energy conservation with increasing grid resolution for a single soliton tidally disrupted in a Newtonian central potential, with the same setup as used in Figure \ref{fig:combined_1}. Namely, a single soliton of mass 12 code units is initialised at a distance of 3 code units from a central mass, $M=1000$. The initial velocity of the soliton is $\sqrt{M/r}$ where $r$ is the radial distance of the soliton from the central mass. The duration of the simulation is 0.5 code units so that the soliton undergoes significant tidal disruption as demonstrated in Figure \ref{fig:combined_1}. While we see that energy is conserved at sub-percent level even for $64^3$ grid resolution, the qualitative behaviour of the mass density distribution in this case is not correct, so we conclude that this resolution is insufficient for convergence despite good energy conservation. This highlights the importance of a multifaceted approach to convergence testing.
At $256^3$, energy is conserved to one part in $10^{6}$. \subsection{Spatial and Temporal Resolution}\label{sec:resolution} We now examine the convergence of the $\psi$ field configuration as a function of spatial resolution and timestep in a typical simulation. We initialise \textsc{PyUltraLight}\xspace with two diametrically opposed solitons orbiting a large Newtonian central potential, running until the solitons are tidally disrupted, as shown in Figure \ref{fig:disruption}. \begin{figure} \includegraphics[width=1.\textwidth,trim=0 0 0 0,clip]{first_and_last} \caption{The configuration used to test the sensitivity of solutions to the spatial and temporal resolution. Two solitons of mass $m=20$ are initialised at radial distances $r=2$ from a central mass with $M=1000$ moving in opposite directions with initial speeds $\vert v\vert=\sqrt{M/r}$, corresponding to clockwise orbits around the central mass. The box size is 10, while the total duration is 0.25 (all quantities in code units). Time runs from left to right. } \label{fig:disruption} \end{figure} To examine the sensitivity of the $\psi$ field configuration to the spatial resolution, we first run at $256^3$ with the default timestep. We then re-run at resolutions from $64^3$ to $320^3$ with the timestep fixed to the $256^3$ value and downsample the final outputs to $64^3$. We sort the resulting values by the density at the corresponding spatial location, and plot differences in the phase and the magnitude of $\psi$ relative to the values of the $320^3$ run as shown in Figure \ref{fig:spatial} (bottom). The convergence is poor at $64^3$, but improves with resolution, to the point that there is little difference between the $256^3$ and $320^3$ cases. To examine the sensitivity of the $\psi$ field configuration to the timestep, we take the same default simulation at $256^3$, and then compare this to runs with timesteps 0.1, 10, and 50 times the default and down-sample the final output arrays to $64^3$. We sort the array values in order of the $\psi$ field magnitude in the run with the smallest timestep and in Figure \ref{fig:temporal} we show the difference in the phase and magnitude of $\psi$ as a function of the timestep. The difference between the results with the default timestep and a value 10 times smaller is negligible, and there is reasonable agreement between the default case and those with the timestep boosted by a factor of 10. However, when the timestep is increased by a factor of $50$ the accuracy of both the phase and magnitude data is significantly reduced. Figure \ref{fig:comparison} shows profiles of the density through the simulation volume, as a function of spatial resolution and timestep. Each plot represents the density profile down the axis of symmetry of the initial configuration (vertical axis in Figure \ref{fig:disruption}) after approximately half a revolution around the central potential, or $t=0.28$ code units -- slightly after the final frame in Figure \ref{fig:disruption} -- when the solitons have become distorted due to tidal forces, but are not yet completely disrupted. We see that as the timestep is varied from 0.1 to 50 times the default value, the results with the default and the shorter timestep are virtually indistinguishable, and results are still reasonably accurate at 10 times the default timestep, with small deviations at high densities. However, the results are significantly distorted at 50 times the default timestep.
We also see that as the spatial resolution is decreased from $320^3$ to $64^3$, the lowest resolution performs poorly, but there is good convergence at resolutions of $192^3$ and above. \begin{figure} \includegraphics[width=1.\textwidth,trim=0 1cm 0 2cm,clip]{combined2} \caption{Top: Deviation of the phase of $\psi$ compared to the highest resolution result ($320^3$). Field values are arranged in order of increasing magnitude from left to right. A slight improvement in phase convergence can be seen for higher density regions to the right. Bottom: Improving convergence of $\vert\psi\vert$ with increased spatial resolution for the simulation shown in Figure \ref{fig:disruption}.} \label{fig:spatial} \end{figure} \clearpage \begin{figure} \includegraphics[width=1.\textwidth,trim=0 1cm 0 2cm,clip]{combined} \caption{Top: Phase deviation of $\psi$, relative to solution with timestep 0.1 times default, sorted by the density. There is excellent agreement with the default timestep, and reasonable convergence at steps up to $10$ times the default, with better accuracy in high density regions. Bottom: Difference in magnitude of $\psi$, relative to the solution with timestep 0.1 times default. Again we see good convergence with the default timestep, and tolerable agreement in high density regions when the step is no more than a factor of $10$ larger than the default.} \label{fig:temporal} \end{figure} \clearpage \begin{figure} \includegraphics[width=1.\textwidth,trim=0 1.2cm 0 2.0cm,clip]{comparison} \caption{Top: Effect of decreasing the spatial resolution on the density profile at half a revolution. Bottom: Effect of increasing the timestep on the density profile at half a revolution.} \label{fig:comparison} \end{figure} \clearpage \section{Discussion and Outlook} \textsc{PyUltraLight}\xspace is an accurate, flexible and easy-to-use tool for studying the dynamics of ultralight dark matter governed by the Schr{\"o}dinger-Poisson system of equations. The code makes use of a pseudospectral symmetrised split-step Fourier methodology, in which all spatial derivatives are treated via explicit multiplication in the Fourier domain, thereby avoiding difficulties associated with finite-differencing methods. Energy conservation within \textsc{PyUltraLight}\xspace is excellent, at sub-percent level for simulations run at $128^3$, with even better performance as resolution is increased. The code captures complex phenomena resulting from the wave-like properties of ultralight dark matter, including the interference patterns arising during high-velocity collisions of solitonic cores, and effective forces observed in cases where the colliding cores are out of phase. These phenomena can be clearly observed at relatively low spatial resolution, avoiding the need for high-performance computing infrastructure to study the fundamental behaviour of ULDM systems in simple configurations. This makes \textsc{PyUltraLight}\xspace a useful tool for investigating the dynamics of ULDM systems. \textsc{PyUltraLight}\xspace is Python-based, and as such is particularly simple to understand and use. The accompanying Jupyter notebook allows for the efficient adjustment of simulation parameters, and offers a useful browser interface for quick visualisation of simulation results. While Python-based, the code makes use of low-level language resources, namely the FFTW libraries through the use of the Pythonic pyFFTW wrapper, and will operate at $\sim 80\%$ efficiency on a 16-core desktop workstation, so the convenience of Python does not come at a significant cost in computational performance.
The current implementation of \textsc{PyUltraLight}\xspace is already a useful tool for simulating ULDM systems and exploring their dynamics. However, there is much scope for improvement. In particular, future releases may incorporate a variable timestep and more sophisticated physics, including explicit self-interactions in the axion sector or additional matter components. Augmented versions of the code may also include higher-order generalisations of the pseudo-spectral method, such as those used in \cite{Levkov:2018kau}. \textsc{PyUltraLight}\xspace is publicly available under a BSD license. \acknowledgments We thank Xiaolong Du, Lam Hui, David Marsh, Nathan Musoke, Jens Niemeyer, and Chanda Prescod-Weinstein for valuable discussions on axion / ULDM cosmology, and thank Miro Erkintalo for advice on Schr{\"o}dinger-Poisson solvers in optical systems. We acknowledge support from the Marsden Fund of the Royal Society of New Zealand, and the use of the New Zealand eScience Infrastructure (NeSI) high-performance computing facilities, which are funded jointly by NeSI's collaborator institutions and through the Ministry of Business, Innovation \& Employment's Research Infrastructure programme \url{https://www.nesi.org.nz}.
\section{Introduction} We consider pure SU(2) Yang-Mills theory at finite temperature in the confinement phase $T<T_c$, which is a crude approximation of Quantum Chromodynamics (QCD). We study a model of dyons, in particular the quark antiquark free energy, with the aim of better understanding the phenomenon of confinement. In this work we mainly focus on technical difficulties associated with the long-range nature of the dyon potentials. Physical aspects and conclusions are discussed in detail in \cite{Bruckmann:2009nw,Bruckmann:2011yd} and another talk given at this conference \cite{MMP}. \section{\label{SEC001}The non-interacting dyon model} In Yang-Mills theory observables $O$ are given by the path integral \begin{equation} \label{EQN002} \l<O\r> = \frac1Z \int\mathcal DA\ O[A]\exp\l(-S_{\text{\textrm{YM}}}[A]\r) . \end{equation} Since there are currently no methods to solve this path integral for low-energy observables analytically, one either resorts to numerical lattice gauge theory or to certain simplifying approximations. One such approach, particularly useful for obtaining a qualitative understanding of certain phenomena of Yang-Mills theory and QCD, is the semi-classical approximation. One expands the path integral around classical solutions of the Yang-Mills field equations, for which the action is locally minimized, and which are expected to dominate the path integral. A specific kind of semi-classical model with the capability to generate confinement is based on dyons. Dyons are localized objects carrying electric charge as well as magnetic charge and are named after particles with similar properties \cite{Schwinger:1969ib}. The path integration (\ref{EQN002}) is transformed from field coordinates to dyon collective coordinates and quantum fluctuations, whereby a Jacobian emerges. Part of this Jacobian is the determinant of the so-called moduli space metric. This moduli space metric has been calculated analytically for calorons \cite{Kraan:1998pn}, following \cite{KraanVanBaal1,KraanVanBaal2,LeeLu}, which are pairs of different-kind dyons. A proposal for a metric of an arbitrary number of same-kind and different-kind dyons was made in \cite{Diakonov:2007nv}. Numerical investigations of this generalized metric \cite{Bruckmann:2009nw}, however, indicated certain shortcomings, in particular its non-positive-definiteness, casting severe doubts on its usefulness. Our main interest in this work is to test a numerical method able to treat the dyon long-range potentials in a proper way. Therefore, we study a much simpler model of dyons without any interactions, i.e.\ where the moduli space metric is ignored. The key observable we are studying is the free energy of a static quark antiquark pair at separation $d = \l|\v r-\v r'\r|$, which is given by \begin{equation} F_{Q\bar Q}(d) = -T\ \log\l< P(\v r)P^{\dagger}(\v r')\r>. \end{equation} The Polyakov loop correlator $\l< P(\v r)P^{\dagger}(\v r')\r>$ is obtained as a statistical average in dyon ensembles characterized by the spatial dyon density $\rho$ and the temperature $T$ (we always consider maximally non-trivial holonomy, which seems to be intimately connected to the confinement phase \cite{Gerhold:2006sk}). The dyon ensembles we study are neutral, i.e.\ there is an identical number of dyons and antidyons. Individual dyon configurations of these ensembles are given by the randomly and uniformly chosen dyon positions $\l\{\v r_j\r\}$.
One can show that in such ensembles a Polyakov loop is given by \begin{equation} P(\v r) = -\sin\l(\frac1{2T}\Phi(\v r)\r), \end{equation} where $\Phi$ is the superposition of the $0$-component of the dyon gauge field in the Abelian limit \begin{equation} \label{EQN001} \Phi(\v r) = \sum_j\frac{q_j}{|\v r_j - \v r |} . \end{equation} For a more detailed discussion of these equations and the non-interacting dyon model in general we refer to \cite{Bruckmann:2009nw,Bruckmann:2011yd}. In the following sections we are mainly concerned with evaluating (\ref{EQN001}) numerically, which contains an infinite sum $\sum_j$ over $1 / r$ long-range potentials. \section{Long-range dyon potentials and finite volume effects} The $1/r$ long-range nature of the dyon potential causes severe problems for numerical simulations. One expects that rather large volumes are needed to render finite volume effects negligible. This in turn amounts to a huge number of dyons, to which the required computational resources are proportional. A first attempt to simulate dyon ensembles numerically, which suffers from the problem just mentioned, is described in \cite{Bruckmann:2009nw}. There $n_D$ dyons are considered in a cubic spatial volume of length $L$. Observables are then evaluated in a cubic spatial volume of length $\ell < L$ located at the center of the larger volume (cf.\ \fig{\ref{fig:box_copies}a}). This straightforward method, however, has certain shortcomings: (A) reducing finite volume effects to a moderate level requires $\ell \ll L$, which drastically reduces the volume in which observables can be evaluated; this clearly increases statistical errors; (B) an extrapolation to infinite volume is technically difficult, since it has to be done with respect to two parameters ($\ell$ and $L$); (C) when attractive and repulsive forces between dyons are taken into account, dyons tend to accumulate near the boundary of the large volume, i.e.\ translational invariance is broken severely. \begin{figure}[htb] \centering \begin{minipage}{.47\textwidth} \centering \includegraphics[width=5cm]{bilder/box_mit_ausschnitt}\\ \begin{flushleft} \vspace{-6cm}\textbf{a)}\vspace{4.5cm} \end{flushleft} \vspace{-1.45cm} \Large\textcolor{red}{$\ell\ \ \ \ $}\\ \vspace{0.4cm} \hspace{-1.5cm}\textcolor{blue}{$L$}\\ \end{minipage} \hspace{\fill} \begin{minipage}{.47\textwidth} \centering \includegraphics[width=7cm]{bilder/box_copies_periodic}\\ \begin{flushleft} \vspace{-7.3cm}\textbf{b)}\vspace{6cm} \end{flushleft} \vspace{-2.5cm} \hspace{-1cm}\Large\textcolor{blue}{$L$} \vspace{1.8cm} \end{minipage} \caption{\label{fig:box_copies}\textbf{a)} Dyons in a cubic volume of length $L$ (blue), evaluating observables in a cubic volume of length $\ell < L$ (red). \textbf{b)} Dyons in a volume of length $L$ with periodic boundary conditions.} \end{figure} A better method for treating long-range dyon ensembles, which solves or at least eases the problems mentioned above, is to mimic infinite volume by implementing periodic boundary conditions. One considers a cubic spatial volume of length $L$ filled with $n_D$ dyons (dyon density $\rho=n_D/L^3$). This volume is periodically repeated in all three spatial directions (cf. \fig{\ref{fig:box_copies}b}).
With this setting one expects (A) that finite volume effects are significantly reduced, since the original volume is ``surrounded by infinitely many dyons'' in every spatial direction; (B) an extrapolation to infinite volume only has to be done with respect to one parameter, $L$; (C) interacting dyons will not accumulate near the boundary of the volume, because periodicity implies exact translational invariance. \section{Ewald's method} To implement periodic boundary conditions for our long-range dyon potentials we resort to a method first proposed in the context of condensed matter physics \cite{Ewald:1921} (``Ewald's method''), which is widely used in plasma physics as well. The central quantity, when one is e.g.\ interested in Polyakov loop averages or the quark antiquark free energy, is the potential of $n_D$ dyons in a periodic cubic volume of length $L$, \begin{equation} \Phi(\textbf{r})=\sum_{\textbf{n}\in\mathbb Z^3}\sum_{j=1}^{n_D}\frac{q_j}{\l|\textbf{r}-\textbf{r}_j+\textbf{n} L\r|} \end{equation} (cf.\ also section~\ref{SEC001}). Due to the long-range nature of the individual dyon potentials, $\sum_{\textbf{n}\in\mathbb Z^3} \ldots$ converges far too slowly for any straightforward efficient numerical evaluation. Ewald's method solves this problem by splitting the sum into a short-range part and a long-range part, \begin{align} &\Phi(\textbf{r})=\Phi^{\text{Short}}(\textbf{r}) + \Phi^{\text{Long}}(\v r) \\ &\Phi^{\text{Short}}(\textbf{r})=\sum_{\textbf{n}\in\mathbb Z^3}\sum_{j=1}^{n_D}\frac{q_j}{\l|\textbf{r}-\textbf{r}_j+\textbf{n} L\r|}\ \text{erfc}\l(\frac{\l|\textbf{r}-\textbf{r}_j+\textbf{n} L\r|}{\sqrt{2}\lambda}\r)\\ &\Phi^{\text{Long}}(\v r)=\frac{4\pi}{V}\sum_{\textbf{k}\neq0}\sum_{j=1}^{n_D}\frac{q_j}{k^2}\ e^{i\textbf{k}(\textbf{r}-\textbf{r}_j)}\ e^{-\lambda^2k^2/2} . \end{align} For the long-range part the sum is over momenta $\v k=(2\pi/L) \v m$ with $\v m\in\mathbb Z^3$. The arbitrary parameter $\lambda$ controls the trade-off between the short-range and the long-range part. Both parts converge exponentially fast, due to $\text{erfc}(\l|\textbf{r}-\textbf{r}_j+\textbf{n} L\r| / \sqrt{2}\lambda)$ and $e^{-\lambda^2k^2/2}$, respectively. A detailed and pedagogical derivation of the splitting into a short-range and a long-range part can be found in \cite{leewei}. \subsection{Evaluating the short-range part} To determine Polyakov loop averages, one typically computes the dyon potential at $M\propto V$ points $\v r$ distributed throughout the volume. Due to the exponential suppression by $\text{erfc}(\l|\textbf{r}-\textbf{r}_j+\textbf{n} L\r| / \sqrt{2}\lambda)$, it is sufficient to consider dyons inside a sphere with radius $r_{\max} \propto \lambda$ and center $\v r$, i.e.\ \begin{equation} \Phi^{\text{Short}}(\textbf{r})=\sum_{\textbf{n}, j, |\textbf{r}-\textbf{r}_j+\textbf{n} L| < r_{\max}}\frac{q_j}{\l|\textbf{r}-\textbf{r}_j+\textbf{n} L\r|}\ \text{erfc}\l(\frac{\l|\textbf{r}-\textbf{r}_j+\textbf{n} L\r|}{\sqrt{2}\lambda}\r) . \end{equation} Consequently, the total computational cost is $\mathcal O(V\lambda^3)$. In practice $\lambda^3 \ll V$ is chosen. For an algorithm scaling according to $\mathcal O(V\lambda^3)$ one clearly needs to determine all dyons close to a point $\v r$ without iterating over all $n_D \propto V$ dyons. To this end, one divides the original volume into small cubic subvolumes. Technically, the volume is represented as a list of subvolumes, each of which holds an individual list of the dyons located inside it.
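For concreteness, the following is a minimal Python sketch of the Ewald decomposition, evaluating $\Phi$ at a single observation point $\v r$ (assumed not to coincide with a dyon position). Brute-force loops over periodic images and momenta stand in for the subvolume bookkeeping and cutoff optimization described in the surrounding text; the cutoffs \texttt{n\_img} and \texttt{m\_max} are illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.special import erfc
from itertools import product

def ewald_potential(r, pos, q, L, lam, n_img=2, m_max=6):
    """Phi(r) for n_D charges q at positions pos (shape (n_D, 3))
    in a periodic box of length L, split into the short- and
    long-range parts given in the text (illustrative sketch)."""
    phi_short = 0.0
    for n in product(range(-n_img, n_img + 1), repeat=3):
        d = np.linalg.norm(r - pos + L * np.array(n), axis=1)
        phi_short += np.sum(q / d * erfc(d / (np.sqrt(2.0) * lam)))
    phi_long = 0.0
    for m in product(range(-m_max, m_max + 1), repeat=3):
        if m == (0, 0, 0):
            continue
        k = (2.0 * np.pi / L) * np.array(m)
        k2 = k @ k
        S = np.sum(q * np.exp(-1j * (pos @ k)))   # structure function S(k)
        phi_long += (np.exp(1j * (k @ r)) * S / k2 * np.exp(-lam**2 * k2 / 2)).real
    return phi_short + (4.0 * np.pi / L**3) * phi_long
\end{verbatim}
In a production code the brute-force neighbour loop is replaced by the subvolume lists described above and below.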
For any point $\v r$ one can then easily determine all subvolumes inside or at the boundary of the corresponding sphere with radius $r_{\max}$ and then iterate over the relevant $\mathcal O(\lambda^3)$ dyons. The approximate sphere of subvolume copies (``LEGO$^{\text{\textregistered}}$ sphere'') is shown in \fig{\ref{fig:legosphere}}. Note that the radius of the LEGO$^{\text{\textregistered}}$ sphere{} is larger than $r_{\max}$, to ensure that all dyons inside the ``$r_{\max}$ sphere'' are considered. Since spheres associated with points $\v r$ near the boundary reach into neighboring volumes, one needs to shift the LEGO$^{\text{\textregistered}}$ sphere{} periodically. \subsection{Evaluating the long-range part} Introducing the structure functions $S(\v k) = \sum_{j=1}^{n_D}q_je^{-i\textbf{k}\textbf{r}_j}$ the long-range part can be written according to \begin{align} \Phi^{\text{Long}}(\textbf{r}) &= \frac{4\pi}{V}\sum_{\textbf{k}\neq0}e^{i\textbf{k}\textbf{r}}\frac{e^{-\lambda^2\textbf{k}^2/2}}{\textbf{k}^2}\ S(\textbf{k}). \end{align} Due to the exponential suppression by $e^{-\lambda^2k^2/2}$, it is sufficient to consider momenta $|\v k| < k_{\max} \propto 1 / \lambda$. Consequently, one has to sum over $\mathcal O(L^3/\lambda^3)$ different momenta $\v k$ and the cost for computing the structure functions is $\mathcal O(V^2 / \lambda^3)$. Since the structure functions $S(\v k)$ do not depend on $\v r$, they have to be computed only once for a given set of dyon positions. In a second step the long-range part has to be evaluated at $M\propto V$ points $\v r$, amounting again to a computational cost of $\mathcal O(V^2 / \lambda^3)$. \subsection{Optimizing the trade-off between the short-range and the long-range part} The computational cost for the whole algorithm is minimized, when the scaling of the short-range and the long-range part are identical, i.e.\ if $\mathcal O(V \lambda^3) = \mathcal O(V^2 / \lambda^3)$. This can easily be achieved by choosing $\lambda\propto V^{1/6}\propto\sqrt{L}$, resulting in a total computational cost of $\mathcal O(V^{3/2})$. For a more detailed discussion, in particular of how to determine the optimal value of $\lambda$ for a specific dyon ensemble, we refer to \cite{Bruckmann:2011yd}. \section{\label{sec:numerical_results}Numerical results} The methods discussed in the previous section were used to evaluate the Polyakov loop correlator for many different dyon ensembles (cf.\ \cite{Bruckmann:2011yd} for a detailed discussion). Selected results for the quark antiquark free energy as a function of the separation in the case of non-interacting dyons are shown in \fig{\ref{fig:free_energy_comparison}}. These results correspond to $\rho/T^3=1$ and four different volumes $L\,T\in\{10,20,30,40\}$, which is equivalent to $n_D\in\{1000 \, , \, 8000 \, , \, 27000 \, , \, 64000\}$ (we express all dimensionful quantities in units of the temperature $T$). The curves grow linearly at large quark antiquark separations $d\,T$ and seem to converge to an infinite volume curve with increasing $L\,T$. \begin{figure}[t!]
\begin{minipage}{.49\textwidth} \centering \includegraphics[width=7cm]{bilder/real_space_2}\vspace{0.35cm} \caption[LEGO$^{\text{\textregistered}}$ sphere]{\label{fig:legosphere}The LEGO$^{\text{\textregistered}}$ sphere{}.} \vspace{0.65cm} \end{minipage} \begin{minipage}{.49\textwidth} \centering \includegraphics[height=6cm]{bilder/ewald_F_QQ_a} \caption{\label{fig:free_energy_comparison}The quark antiquark free energy as a function of the separation $d\,T$ for $T^3/\rho=1$ and finite periodic volume of length $L$.} \end{minipage} \end{figure} To determine this infinite volume curve, we perform a linear $\chi^2$-minimizing fit in $1 / L\,T$ for each quark antiquark separation and extrapolate to infinite volume, corresponding to $1/L\,T = 0$ (cf.\ \fig{\ref{fig:free_energy_extrapolation}a}). The colored points at $1/L\,T = 0$ are analytical results derived in \cite{Bruckmann:2011yd}. The numerical extrapolations are in agreement with these analytical results within statistical errors. The infinite volume free energies (both the numerical Ewald result as well as the analytical result) are shown in \fig{\ref{fig:free_energy_extrapolation}b}, again demonstrating that by means of Ewald's method one can reliably and efficiently determine an infinite volume quark antiquark potential in a long-range dyon ensemble. \begin{figure}[tbp] \centering \includegraphics[width=14cm]{bilder/ewald_FQQ_extrapolation} \begin{flushleft} \vspace{-5.6cm} \textbf{a)} \hspace{7.2cm} \textbf{b)} \vspace{4.6cm} \end{flushleft} \caption{\label{fig:free_energy_extrapolation}\textbf{a)} Quark antiquark free energy as a function of the inverse length of the volume $1/L\,T$ for different separations $d\,T$ and extrapolations to $1/L\,T=0$. The solid curves indicate the fitting range, whereas the dashed lines mark the extrapolations. \textbf{b)} Comparison of the numerical and the analytical infinite volume quark antiquark free energy at these separations. All data was obtained using $\rho/T^3=1$. } \end{figure} \section{Generalization of Ewald's method for arbitrary $1 / r^p$ long-range potentials} In the previous section we presented numerical results for non-interacting dyons. A more realistic model would take into account dyon interactions originating from their moduli space metric. The problem of non-positive-definiteness of the moduli space metric proposed in \cite{Diakonov:2007nv} can be cured by only considering two-dyon, but no three-dyon, four-dyon, etc.\ interactions \cite{Bruckmann:2009nw,Maier:2011}. One then obtains an effective dyon action \begin{equation} \label{eq:action} S^{\text{eff}}(\{\v r_k\}) = \frac12\sum_{i=1}^{n_D}\sum^{n_D}_{j=1,j\neq i}\underbrace{\ln\l(1-\frac{2q_iq_j}{\pi T\l|\v r_i - \v r_j\r|}\r)}_{\psi(|\v r_i - \v r_j|)}.
\end{equation} This effective action also contains ``long-range potentials'' $\psi$, which can be expanded in a power series in the inverse dyon separation $1/r$, \begin{equation} \psi(r) = \frac{\#}{r} + \frac{\#}{r^2} + \frac{\#}{r^3} + \ldots \end{equation} The individual terms $1/r^p$, $p\geq 1$, of this series may be treated with a generalization of Ewald's method to arbitrary powers $1/r^p$ (cf.\ e.g.\ \cite{Essmann:1995}), where the short-range and long-range parts are given by \begin{align} \Phi^{\text{Short}}_p(\v r) &= \,\sum_{\v n}\sum_{j=1}^{n_D}\frac{q_j(p)}{|\textbf{r} -\textbf{r}_j+\v nL|^p}\ g_p\l(\frac{|\textbf{r} -\textbf{r}_j+\v nL|}{\sqrt{2}\lambda}\r) \\ \Phi^{\text{Long} }_p(\v r) &= \frac{\pi^{3/2}}{V\l(\sqrt 2\lambda\r)^{p-3}}\sum_{\v k}\sum_{j=1}^{n_D}q_j(p)\,\exp\Big(i\,\v k(\v r-\v r_j)\Big)\ f_p\l(\frac{k\lambda}{\sqrt2}\r), \end{align} where the charge $q_j(p)$ of dyon $j$ may depend on the power $p$. Again exponential convergence is guaranteed, due to \begin{align} g_p(x)&=\frac{2}{\Gamma(p/2)}\int_x^{\infty}s^{p-1}\,\exp\l(-s^2\r)\,\d s, \\ f_p(x)&=\frac{2x^{p-3}}{\Gamma(p/2)}\int_x^{\infty}s^{2-p}\,\exp\l(-s^2\r)\,\d s. \end{align} \section{Summary and Outlook} We applied Ewald's method to simulate non-interacting long-range dyon ensembles numerically. We have demonstrated that this is an efficient method suited to extracting the infinite volume quark antiquark free energy. A generalization of Ewald's method offers the possibility to also simulate interacting dyons. Such investigations might help to understand the effects and implications of the moduli space metric of dyons on the quark antiquark free energy, and their relevance regarding the phenomenon of confinement. Another interesting aspect would be a possible generalization to manifestly non-Abelian and typically non-rotationally invariant objects like regular gauge instantons, merons, or meron pairs, which also seem to yield confinement (cf.\ e.g.\ \cite{Lenz:2003jp,Wagner:2006qn,Zimmermann:2012zi}). An alternative method to enforce periodic boundary conditions for such non-Abelian long-range objects has already been proposed and tested in \cite{Szasz:2008qk}. \section*{Acknowledgements} The authors express their gratitude for financial support by the German Research Foundation (DFG): M.W.\ by the Emmy Noether Programme with grant WA 3000/1-1, and F.B.\ by grant BR 2872/4-2. This work was supported in part by the Helmholtz International Center for FAIR within the framework of the LOEWE program launched by the State of Hesse.
\section{Introduction and main results} \label{sec:intro} Our interest in this work is in modelling the pattern of genetic variation left behind when a gene that is favoured by natural selection `sweeps' through a spatially structured population in a travelling wave. The interaction between natural selection and spatial structure is a classical problem; the novelty of what we propose here is that we replace the simple directional selection considered in the majority of the mathematical work in this area by a model of selection acting on diploid individuals (carrying two copies of the gene in question) that provides a toy model for the dynamics of so-called hybrid zones. Hybrid zones are widespread in naturally occurring populations,~\cite{barton/hewitt:1989}, and there is a wealth of recent empirical work on their dynamics; see~\cite{arntzen:2019} for an example and a brief discussion. In our simple model, we shall suppose that the population is living in one spatial dimension, and that the gene has exactly two forms (alleles), $A$ and $a$, and that type $AA$ individuals are at a selective advantage over $aa$ individuals, but that $Aa$ individuals are at a selective disadvantage relative to both. Our goal is to understand the genealogical trees that describe the relationships between individual genes sampled from the present day population. In the case of directional selection, there is a large body of work, of varying degrees of rigour, that suggests that if we take a sample of favoured individuals from close to the wavefront then, on suitable timescales, their genealogy is described by the so-called Bolthausen-Sznitman coalescent. In our models, where expansion of the favoured type is driven from the bulk of the wave, we shall see that the corresponding object is the classical Kingman coalescent. Before giving a precise mathematical definition of our model in Section~\ref{subsec:modeldefn} and stating our main results in Section~\ref{subsec:mainresults}, we place our work in context. \subsection*{Directional selection: the (stochastic) Fisher-KPP equation} The mathematical modelling of the way in which a genetic type favoured by natural selection spreads through a population that is distributed across space can be traced back at least to Fisher~(\cite{fisher:1937}) and Kolmogorov, Petrovsky \& Piscounov~(\cite{kolmogorov/petrovsky/piscounov:1937}). They introduced the now classical Fisher-KPP equation, \begin{eqnarray} \label{FKPP equation} \frac{\partial p}{\partial t}(t,x)=\frac{m}{2}\Delta p(t,x)+s_0p(t,x)\big(1-p(t,x)\big) \qquad && \text{for }x\in\R, \, t> 0,\\ \nonumber 0\leq p(0,x)\leq 1 \qquad &&\forall x\in\R, \end{eqnarray} as a model for the way in which the proportion $p(t,x)$ of genes that are of the favoured type changes with time. A shortcoming of this equation is that it does not take account of random genetic drift, that is, the randomness due to reproduction in a finite population. The classical way to introduce such randomness is through a Wright-Fisher noise term, so that the equation becomes \begin{equation} \label{stochastic FKPP} dp(t,x)=\frac{m}{2}\Delta p(t,x) dt + s_0p(t,x)\big(1-p(t,x)\big)dt +\sqrt{\frac{1}{\rho_e}p(t,x)\big(1-p(t,x)\big)}W(dt,dx), \end{equation} where $W$ is a space-time white noise and $\rho_e$ is an effective population density. This is a continuous space analogue of Kimura's stepping stone model~\cite{kimura:1953}, with the additional non-linear term capturing selection. 
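For readers who want to experiment, the following minimal sketch (our own illustrative Python code; the parameter values are arbitrary choices, not ones used later in the paper) integrates a finite-difference discretisation of~(\ref{stochastic FKPP}) by an explicit Euler--Maruyama scheme. On a grid of spacing $\Delta x$, the discretised Wright--Fisher noise contributes variance $p(1-p)\Delta t/(\rho_e \Delta x)$ per cell per step.
\begin{verbatim}
import numpy as np

# Explicit Euler--Maruyama sketch of the stochastic Fisher-KPP equation.
# All parameter values below are illustrative.
m, s0, rho_e = 1.0, 1.0, 100.0
L, dx, dt, steps = 400, 0.5, 0.01, 5000
x = np.arange(L) * dx
p = (x < 50).astype(float)          # front-like initial condition

for _ in range(steps):
    lap = (np.roll(p, 1) - 2 * p + np.roll(p, -1)) / dx**2
    drift = 0.5 * m * lap + s0 * p * (1 - p)
    noise_sd = np.sqrt(np.maximum(p * (1 - p), 0.0) * dt / (rho_e * dx))
    p = p + drift * dt + noise_sd * np.random.randn(L)
    p[0], p[-1] = 1.0, 0.0          # pin front-like boundary values
    p = np.clip(p, 0.0, 1.0)        # keep frequencies in [0, 1]

front_position = x[np.argmax(p < 0.5)]   # crude front location
\end{verbatim}
Such a scheme is only a caricature (clipping to $[0,1]$ is a crude substitute for the degeneracy of the Wright--Fisher noise at $p\in\{0,1\}$), but it suffices to observe a noisy travelling front for~(\ref{stochastic FKPP}).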
This equation has the limitation that it only makes sense in one space dimension, but like~(\ref{FKPP equation}) it exhibits travelling wave solutions (\cite{mueller/sowers:1995}) which can be thought of as modelling a selectively favoured type `sweeping' through the population and, consequently, it has been the object of intensive study. From a biological perspective, the power of mathematical models is that they can throw some light on the patterns of genetic variation that one might expect to see in the present day population if it has been subject to natural selection. Neither of the models above is adequate for this task. If it survives at all, one can expect a selectively favoured type to eventually be carried by all individuals in a population and from simply observing that type, we have no way of knowing whether it is fixed in the population as a result of natural selection, or purely by chance. However, in reality, it is not just a single letter in the DNA sequence that is modelled by the equation, but a whole stretch of genome that is passed down intact from parent to offspring, and on which we can expect some neutral mutations to arise. The pattern of {\em neutral} variation can be understood if we know how individuals sampled from the population are related to one another; that is, if we have a model for the genealogical trees relating individuals in a sample from the population. Equation~(\ref{FKPP equation}) assumes an infinite population density everywhere so that a finite sample of individuals will be unrelated; in order to understand genealogies we have to consider~(\ref{stochastic FKPP}). The first step is to understand the effect of the stochastic fluctuations on the forwards in time dynamics of the waves. Any solution to~\eqref{FKPP equation} with a front-like initial condition $p(0,x)$ which decays sufficiently fast as $x\rightarrow \infty$ converges to the travelling wave solution with minimal wavespeed $\sqrt{2ms_0}$ (\cite{uchiyama:1978, bramson:1983}). Since the speed of this travelling wave is determined by the behaviour in the `tip' of the wave, where the frequency of the favoured type is very low, it is very sensitive to stochastic fluctuations. A great deal of work has gone into understanding the effect of those fluctuations on the progress of the `bulk' of the wave (\cite{brunet/derrida:1997, brunet/derrida:2001, vansaarloos:2003, brunet/derrida/mueller/munier:2006, hallatschek/nelson:2008, mueller/mytnik/quastel:2011, berestycki/berestycki/schweinsberg:2013}). The first striking fact is that the wave is significantly slowed by the noise (\cite{brunet/derrida/mueller/munier:2006, mueller/mytnik/quastel:2011}). The second ramification of the noise is that there really is a well-defined `wavefront'; that is, assuming that the favoured type is spreading from left to right in our one-dimensional spatial domain, there will be a rightmost point of the support of the stochastic travelling wave (\cite{mueller/sowers:1995}). Moreover, the shape of the wavefront is well-approximated by a truncated Fisher wave (\cite{brunet/derrida:1997, mueller/mytnik/quastel:2011}). 
If we were to take a sample of favoured individuals from a population evolving according to the analogue of~(\ref{stochastic FKPP}) without space, then, from~\cite{barton/etheridge/sturm:2004}, their genealogy would be given by a `coalescent in a random background'; that is, it would follow a Kingman coalescent but with the instantaneous rate of coalescence of each pair of lineages at time $t$ before the present given by $1/(N_0\overleftarrow{p}(t))$, where $\overleftarrow{p}(t)$ is the proportion of the population that is of the favoured type at time $t$ before the present, and $N_0$ is the total population size. This suggests that in the spatial context, as we trace back ancestral lineages, their instantaneous rate of coalescence on meeting at the point $x$ should be proportional to $1/\overleftarrow{p}(t,x)$. In particular, this means that if several lineages are in the tip at the same time, then they can coalesce very quickly. In fact, principally because $p(t,x)$ is very rough, it is difficult to study the genealogy directly by tracking ancestral lineages and analysing when and where they meet. However, several plausible approximations (at least for the population close to the wavefront) have been proposed for which the frequencies of different types in the population are approximated by~(\ref{stochastic FKPP}) and a consensus has emerged that for biologically reasonable models, over suitable timescales, the genealogy will be determined by a Bolthausen-Sznitman coalescent (\cite{brunet/derrida/mueller/munier:2006, berestycki/berestycki/schweinsberg:2013}). We emphasize that this arises as a further scaling of the Kingman coalescent in a random background. It reflects a separation of timescales. The `multiple merger' events correspond to bursts of coalescence when several lineages are close to the tip of the wave. This then is the third ramification of adding genetic drift to~(\ref{FKPP equation}); the genealogy of a sample of favoured alleles from the wavefront will be dominated by `founder effects', resulting from the fluctuations in the wavefront. The idea is that from time to time a fortunate individual gets ahead of the wavefront, where its descendants can reproduce uninhibited by competition, at least until the rest of the population catches up, by which time they form a significant portion of the wavefront. \subsection*{Other forms of selection: pushed and pulled waves of expansion} The Fisher-KPP equation, and its stochastic analogue~\eqref{stochastic FKPP}, model a situation in which each individual in the population carries one copy of a gene that can occur in one of two types, usually denoted $a$ and $A$ and referred to as alleles. If the type $A$ has a small selective advantage (in a sense to be made more precise when we describe our individual based model below), then in a suitable scaling limit, $p(t,x)$ represents the proportion of the population at location $x$ at time $t$ that carries the $A$ allele. This can also be used as a model for the frequency of $A$ alleles in a diploid population, provided that the advantage of carrying two copies of the $A$ allele is twice that of carrying one. However, natural selection is rarely that simple; here our goal is to model a situation in which there is selection against heterozygotes, that is, individuals carrying one $A$ allele and one $a$ allele, and in which $AA$-homozygotes are fitter than $aa$. 
As we shall explain below, the analogue of the Fisher-KPP equation in this situation takes the form \begin{equation} \label{AC equation} \begin{aligned} \frac{\partial p}{\partial t}(t,x)=\frac{m}{2}\Delta p(t,x)+ s_0 f\big(p(t,x)\big) \qquad & \text{for }x\in\R, \, t> 0, \\ 0\leq p(0,x)\leq 1 \qquad &\forall x\in\R, \\ \text{where }\quad f(p)=p(1-p)(2p-1+\alpha), \qquad & \end{aligned} \end{equation} with $\alpha >0$ a parameter which depends on the relative fitnesses of $AA$, $Aa$ and $aa$ individuals. In the case $\alpha \in (0,1)$, the non-linear term $f$ is bistable (since $f(0)=0=f(1)$, $f'(0)<0$, $f'(1)<0$ and $f<0$ on $(0,(1-\alpha)/2)$, $f>0$ on $((1-\alpha)/2,1)$) and the equation has a unique travelling wave solution given up to translation by the exact form \begin{equation} \label{AC stationary wave} p(t,x)=g\big(x-\alpha \sqrt{\tfrac{ms_0}2}t\big), \quad \text{where } g(y)=\big(1+e^{\sqrt{\frac{2s_0}m}y}\big)^{-1}. \end{equation} For $\alpha \in [1,2)$, the travelling wave solution with minimal wavespeed is also given by~\eqref{AC stationary wave}. In both cases, solutions of~\eqref{AC equation} with suitable front-like initial conditions converge to the travelling wave~\eqref{AC stationary wave}~\cite{fife/mcleod:1977,rothe:1981}. The case $\alpha=0$ corresponds to $AA$ and $aa$ being equally fit, in which case, for suitable initial conditions, there is a stationary `hybrid zone' trapped between two regions composed almost entirely of $AA$ and almost entirely of $aa$ individuals respectively. As observed, for example, by Barton~(\cite{barton:1979}), when $\alpha>2$ the symmetric wavefront of~(\ref{AC stationary wave}) is replaced by an asymmetric travelling wavefront moving at speed $\sqrt{2ms_0(\alpha -1)}$. This transition from symmetric to asymmetric wave corresponds to the transition from a `pushed' wave to a `pulled' wave, notions introduced by Stokes~(\cite{stokes:1976}). Considering the equation~\eqref{AC equation} for general monostable $f$ (i.e.~$f$ satisfying $f(0)=0=f(1)$, $f'(0)>0$, $f'(1)<0$ and $f>0$ on $(0,1)$), the travelling wave solution with minimal wavespeed $c$ is called a pushed wave if $c>\sqrt{2ms_0 f'(0)}$, and is a pulled wave if $c=\sqrt{2ms_0 f'(0)}$. (Here, $\sqrt{2ms_0 f'(0)}$ is the spreading speed of solutions of the linearised equation.) The travelling wave solutions in the bistable case can also be seen as pushed waves (see~\cite{garnier/giletti/hamel/roques:2012}). The natural stochastic version of~\eqref{AC equation}, which was also discussed briefly by Barton~(\cite{barton:1979}), simply adds a Wright-Fisher noise as in~\eqref{stochastic FKPP}. For $\alpha >1$, this is a reparametrisation of an equation considered by Birzu et al.~(\cite{birzu/hallatschek/korolev:2018}). Their model is framed in the language of ecology. Let $n(t,x)$ denote the population density at point $x$ at time $t$. They consider \begin{equation} \label{Birzu equation} dn(t,x)=\frac{m}{2}\Delta n(t,x)dt + n(t,x)r\big(n(t,x)\big)dt +\sqrt{\gamma\big(n(t,x)\big)n(t,x)}W(dt,dx), \end{equation} where $W$ is space-time white noise, $\gamma (n)$ quantifies the strength of the fluctuations, and $r(n)$ is the (density dependent) per capita growth rate. For example, for logistic growth, one would take $r=r_0(1-n/N)$ for some `carrying capacity' $N$. A pushed wave arises when species grow best at intermediate population densities, known as an Allee effect in ecology. 
This effect is typically incorporated by adding a cooperative term to the logistic equation, for example by taking $$r(n)=r_0\left(1-\frac{n}{N}\right)\left(1+\frac{Bn}{N}\right)$$ for some $B>0$. If we write $p=n/N$ and note that $$s_0\left(1-\frac{n}{N}\right)\left(\frac{2n}{N}-1+\alpha\right)= s_0(\alpha-1)\left(1-\frac{n}{N}\right)\left(\frac{2}{\alpha-1}\frac{n}{N}+1 \right),$$ we see that for $\alpha>1$ we can recover~(\ref{Birzu equation}) from a stochastic version of~(\ref{AC equation}) by setting $B=2/(\alpha -1)$ and $r_0=s_0(\alpha -1)$. Birzu et al.~(\cite{birzu/hallatschek/korolev:2018}) define the travelling wave solution with minimal wavespeed to the deterministic equation with this form of $r$ to be pulled if $B\leq 2$, `semi-pushed' if $2<B<4$ and `fully pushed' if $B\ge 4$ (see equation~(7) in~\cite{birzu/hallatschek/korolev:2018} for a more general definition). In our parametrisation this says that the wave is pulled for $\alpha\geq 2$ (as observed by \cite{barton:1979}), semi-pushed for $3/2<\alpha<2$ and fully pushed for $\alpha\leq 3/2$. For $B\leq 2$ the wavespeed is determined by the growth rate in the tip (in particular it is independent of $B$), and just as for the Fisher wave, one can expect the behaviour to be very sensitive to stochastic fluctuations. For $B>2$, the velocity of the wave increases with $B$, and also the region of highest growth rate shifts from the tip into the bulk of the wave. These waves should be much less sensitive to fluctuations in the tip. Moreover, if we follow the ancestry of an allele of the favoured type $A$, that is, we follow an ancestral lineage, then in the pulled case, we expect the lineage to spend most of its time in the tip of the wave, and in contrast, in the pushed case, it will spend more time in the bulk. Indeed, if the shape of the advancing wave is close to that of $g$ in~(\ref{AC stationary wave}) and the speed is close to $\nu=\alpha \sqrt{ms_0/2}$, then we should expect the motion of the ancestral lineage {\em relative to the wavefront} to be approximately governed by the stochastic differential equation \begin{equation} \label{sde for ancestral lineage} dZ_t = \nu dt+\frac{m\nabla g(Z_t)}{g(Z_t)}dt+\sqrt{m}dB_t, \end{equation} where $(B_t)_{t\geq 0}$ is a standard Brownian motion. (We shall explain this in more detail in the context of our model in Section~\ref{heuristics} below.) The stationary measure of this diffusion (if it exists) will be the renormalised speed measure, \begin{equation} \label{defn of pi} \pi(x)=\frac{C}{m}g(x)^2 \exp\big(2\nu x/m \big)= \frac C m e^{\frac{2\nu}m x} (1+e^{\sqrt{\frac{2s_0}m}x})^{-2}. \end{equation} Substituting for the wavespeed, $\nu=\alpha\sqrt{ms_0/2}$, we find that $\pi$ is integrable for $0< \alpha<2$. In other words, the diffusion defined by~(\ref{sde for ancestral lineage}) has a non-trivial stationary distribution when the wave is pushed, but not when it is pulled. The expression~\eqref{defn of pi} appears in equation S28 in~\cite{birzu/hallatschek/korolev:2018}, and earlier in~\cite{roques/garnier/hamel/klein:2012} (where the authors study the deterministic equation~\eqref{AC equation}) and in Theorem~2 of~\cite{garnier/giletti/hamel/roques:2012} (in relation to pushed wave solutions of general reaction-diffusion equations).
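For completeness, here is the short (and standard) one-dimensional diffusion computation behind~\eqref{defn of pi}: a stationary density $\pi$ for~\eqref{sde for ancestral lineage} must have vanishing probability flux, that is
\begin{equation*}
\frac m2 \pi'(x)-\Big(\nu+\frac{m\nabla g(x)}{g(x)}\Big)\pi(x)=0,
\qquad\text{so that}\qquad
\pi(x)\propto \exp\bigg(\frac 2m\int^x\Big(\nu+\frac{m\nabla g(y)}{g(y)}\Big)dy\bigg)
=e^{2\nu x/m}g(x)^2,
\end{equation*}
which is~\eqref{defn of pi} up to the normalising constant, provided the right-hand side is integrable.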
In \cite{birzu/hallatschek/korolev:2018}, through a mixture of simulations and calculations, the authors also conjecture that the behaviour of the genealogical trees of a sample of $A$ alleles from near the wavefront will change at $B=2$ (corresponding to $\alpha=3/2$) from being, on appropriate timescales, a Kingman coalescent for $\alpha \in (0,3/2)$ to being a multiple merger coalescent for $\alpha>3/2$. Our calculation of the stationary distribution only tells us about a single ancestral lineage; to understand why there should be a further transition at $\alpha =3/2$, we need to understand the behaviour of multiple lineages. We seek a `separation of timescales' in which ancestral lineages reach stationarity on a faster timescale than coalescence; c.f.~\cite{nordborg/krone:2002}. Recalling that we are sampling type $A$ alleles from near the wavefront, then just as for the Fisher-KPP case, the instantaneous rate of coalescence of two lineages that meet at the position $x\in\R$ relative to the wavefront should be proportional to the inverse of the density of $A$ alleles at $x$, which we approximate as $1/(2N_0 g(x))$ for a large constant $N_0$ (corresponding to the population density). If $N_0$ is sufficiently large, then the lineages will not coalesce before their spatial positions reach equilibrium, and so the probability that the two lineages are both at position $x$ relative to the wavefront should be proportional to $\pi(x)^2$. This suggests that in this scenario the time to coalescence should be approximately exponential, with parameter proportional to $\int_{-\infty}^\infty \pi(x)^2/g(x)dx$ (this calculation appears in \cite{birzu/hallatschek/korolev:2018} in their equation~S119). This quantity is finite precisely when $\alpha \in (0,3/2)$. If we sample $k$ lineages, one can conjecture that, because of the separation of timescales, once a first pair of lineages coalesces, the additional time until the next merger is the same as if the remaining $k-1$ lineages were started from points sampled independently according to the stationary distribution $\pi$. This then strongly suggests that in the regime $\alpha \in (0,3/2)$, after suitable scaling, the genealogy of a sample will converge to a Kingman coalescent. Although we believe that the suitably timescaled genealogy of lineages sampled from near the wavefront of the advance of the favoured type really will converge to Kingman's coalescent for all $\alpha \in (0,3/2)$, our main results in this article will be restricted to the case $\alpha \in (0,1)$. The difficulty is that for $\alpha >1$, as $x\rightarrow\infty$, the stationary measure $\pi(x)$ does not decay as quickly as the wave profile $g(x)$. Consequently, a diffusion driven by~(\ref{sde for ancestral lineage}) will spend a non-negligible proportion of its time in the region where $g$ is very small, which is precisely where the fluctuations of $p$ about $g$ (or rather fluctuations of $1/p$ about $1/g$) become significant and our approximations break down. For this reason, in what follows, we shall restrict ourselves to the case $\alpha<1$. Unlike the parameter range corresponding to~(\ref{Birzu equation}), in this setting, the growth rate in the tip of the wave is actually negative, and the non-linear term $f$ in~\eqref{AC equation} is bistable. In ecology this would correspond to a strong Allee effect; for us, it means that we can control the time that the ancestral lineage of an $A$ allele spends in the tip of the wave (from which it is repelled). 
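As a purely illustrative numerical check (not used in any proof; we set $m=s_0=1$, so that $\kappa:=\sqrt{2s_0/m}=\sqrt 2$, and the code is our own), one can evaluate $\int_{-\infty}^\infty \pi(x)^2/g(x)\,dx$ by quadrature and watch it grow without bound as $\alpha \uparrow 3/2$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m, s0 = 1.0, 1.0
kappa = np.sqrt(2.0 * s0 / m)

def gpow_exp(x, a, p):
    # g(x)**p * exp(a*kappa*x), computed stably as
    # exp(a*kappa*x - p*log(1 + exp(kappa*x)))
    return np.exp(a * kappa * x - p * np.logaddexp(0.0, kappa * x))

for alpha in (0.5, 1.0, 1.4, 1.49):
    Z, _ = quad(gpow_exp, -np.inf, np.inf, args=(alpha, 2))      # normaliser of pi
    I, _ = quad(gpow_exp, -np.inf, np.inf, args=(2 * alpha, 3))  # int g^3 e^{2a kappa x}
    print(alpha, I / Z**2)   # = int pi^2/g dx; grows as alpha -> 3/2
\end{verbatim}
Near $\alpha=3/2$ the integrand decays only like $e^{(2\alpha-3)\kappa x}$ as $x\rightarrow\infty$, so both the quadrature and the quantity itself degenerate, consistent with the transition conjectured in~\cite{birzu/hallatschek/korolev:2018}.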
In Section~\ref{heuristics} below, we will briefly discuss the case $\alpha \in [1,3/2)$ in the context of our model. \subsection*{Some biological considerations} Our goal is to write down a mathematically tractable, but biologically plausible, individual based model for a population subject to selection acting on diploids, and to show that when suitably scaled the genealogy of a sample from near the wavefront of expansion of $A$ alleles converges to a Kingman coalescent. As we will see below, for this model the proportion of $A$ alleles will be governed by a discrete space stochastic analogue of~(\ref{AC equation}) with $0<\alpha <1$. The model that we define and analyse below will be a modification of a classical Moran model for a spatially structured population with selection in which we treat each allele as an individual. In order to justify this choice, we first follow a more classical approach by considering a variant of a model that is usually attributed to Fisher and Wright, for a large (diploid) population, evolving in discrete generations. First we explain the form of the nonlinearity in~(\ref{AC equation}). For simplicity, let us temporarily consider a population without spatial structure. We are following the fate of a gene with two alleles, $a$ and $A$. Individuals in the population each carry two copies of the gene. During reproduction, each individual produces a very large number of germ cells (containing a copy of all the genetic material of the parent) which then split into gametes (each carrying just one copy of the gene). All the gametes produced in this way are pooled and, if the population is of size $N_0$, then $2N_0$ gametes are sampled (without replacement) from the pool. The sampled gametes fuse at random to form the next generation of diploid individuals. To model selection, we suppose that the numbers of germ cells produced by individuals are in the proportion $1+2\alpha s: 1+(\alpha -1)s:1$ for genetic types $AA$, $Aa$, $aa$ respectively. Here $\alpha \in (0,1)$ is a positive constant and $s>0$ is small, with $(\alpha +1)s<1$. Notice in particular that type $AA$ homozygotes are `fitter' than type $aa$ homozygotes, in that they contribute more gametes to the pool (fecundity selection). Both are fitter than the heterozygotes ($Aa$ individuals). Suppose that the proportion of type $A$ alleles in the population is $w$. If the population is in Hardy-Weinberg proportions, then the proportions of $AA$, $Aa$ and $aa$ individuals are $w^2$, $2w(1-w)$ and $(1-w)^2$ respectively. Hence the proportion of type $A$ in the (effectively infinite) pool of gametes produced during reproduction is \begin{align} \nonumber &\frac{(1+2\alpha s)w^2+\tfrac{1}{2}(1+(\alpha-1)s)2w(1-w)}{1+2\alpha sw^2+(\alpha-1)s \cdot 2w(1-w)}\\ &\quad =(1+\alpha s-s)w+(3-\alpha )sw^2 -2sw^3 +\mathcal O(s^2) \notag \\ \label{change over a single generation 1} &\quad =(1-(\alpha+1)s)w+\alpha s (2w-w^2)+s(3w^2-2w^3)+\mathcal O (s^2)\\ &\quad =w+\alpha s w(1-w) +sw(1-w)(2w-1) +\mathcal O (s^2). \label{change over a single generation 2} \end{align} We will assume that $s$ is sufficiently small that terms of $\mathcal O (s^2)$ are negligible. 
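Since expansions like this are easy to mis-transcribe, here is a standalone numerical check (illustrative only; the values of $w$, $\alpha$ and $s$ are arbitrary) that the exact gamete-pool frequency differs from the first-order expression above by $\mathcal O(s^2)$:
\begin{verbatim}
def exact(w, s, alpha):
    # proportion of type A in the gamete pool, before expanding in s
    num = (1 + 2*alpha*s) * w**2 + 0.5 * (1 + (alpha - 1)*s) * 2*w*(1 - w)
    den = 1 + 2*alpha*s * w**2 + (alpha - 1)*s * 2*w*(1 - w)
    return num / den

def first_order(w, s, alpha):
    return w + alpha*s*w*(1 - w) + s*w*(1 - w)*(2*w - 1)

w, alpha = 0.3, 0.5
for s in (1e-1, 1e-2, 1e-3):
    print(s, abs(exact(w, s, alpha) - first_order(w, s, alpha)))
    # the error shrinks by roughly a factor 100 per decade in s
\end{verbatim}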
If the population were infinite, then the frequency of $A$ alleles would evolve deterministically, and if $s=s_0/K$ for some large $K$, then measuring time in units of $K$ generations, we see that $w$ will evolve approximately according to the differential equation \begin{equation} \label{ODE for w} \frac{dw}{dt}=\alpha s_0w(1-w)+s_0 w(1-w)(2w-1)=s_0w(1-w)(2w-1+\alpha), \end{equation} and we recognise the nonlinearity in~(\ref{AC equation}). The easiest way to incorporate spatial structure into the Wright-Fisher model described above is to suppose that the population is subdivided into demes (islands of population) which we can, for example, take to be the vertices of a lattice, and in each generation a proportion of the gametes produced in a deme is distributed to its neighbours (plausible, for example, for a population of plants). If we assume that this dispersal is symmetric, the population size in each deme is the same, and the proportion of gametes that migrate scales as $1/K$, then this will result in the addition of a term involving the discrete Laplacian to the equation~(\ref{ODE for w}). Since we are interested in understanding the interplay of selection, spatial structure, and random genetic drift, we must consider a finite population. We shall nonetheless assume that the population in each deme is large, so that our assumption that the population is in Hardy-Weinberg equilibrium remains valid. When this assumption is satisfied, to specify the evolution of the proportions of the types $AA$, $Aa$, $aa$, it suffices to track the proportion of $A$ gametes in each deme. Moreover, because we assume that the chosen gametes fuse at random to form the next generation, the genealogical trees relating a sample of alleles from the population can also be recovered from tracing just single types. The only role that pairing of genes in individuals plays is in determining what proportion of the gamete pool will be contributed by a given allele in the parental population. Suppose that the proportion of $A$ alleles in some generation $t$ is $w$ and recall that the population consists of $2N_0$ alleles. The probability that two type $A$ alleles sampled from generation $t+1$ are both descendants of the same parental allele is approximately $1/(2N_0w)$ since $s$ is small, while the probability that three or more are all descended from the same parent is $\mathcal O(1/N_0^2)$. Recalling that $s=s_0/K$ for some large $K$, if now we measure time in units of $K$ generations, the forwards in time model for allele frequencies will be approximated by a stochastic differential equation, $$dw=s_0w(1-w)(2w-1+\alpha)dt+\sqrt{\frac{K}{2N_0}w(1-w)}dB_t,$$ where $(B_t)_{t\geq 0}$ is a Brownian motion, and the genealogy of a sample of type $A$ alleles from our population will be well-approximated by a time-changed Kingman coalescent in which the instantaneous rate of coalescence, when the proportion of type $A$ alleles in the population is $w$, is $K/(2N_0w)$. The Wright-Fisher model is inconvenient mathematically, but we now see that for the purpose of understanding the genealogy, we can replace it by any other model in which, over large timescales, the allele frequencies evolve in (approximately) the same way and in which, as we trace backwards in time, the genealogy of a sample of favoured alleles is (approximately) the same (time-changed) Kingman coalescent. 
This will allow us to replace the discrete generation (diploid) `Wright-Fisher' model by a much more mathematically convenient `Moran model', in which changes in allele frequencies in each deme will be driven by Poisson processes of reproduction events in which exactly one allele is born and exactly one dies. Because our Moran model deals directly with alleles, from now on we shall refer to alleles as {\em individuals}. To understand the form that our Moran model should take, let us first consider the non-spatial setting. Once again we trace $2N_0$ individuals (alleles), but now we label them $1,2,\ldots , 2N_0$. Reproduction events will take place at the times of a rate $2N_0K$ Poisson process. Inspired by~(\ref{change over a single generation 2}), we divide events into three types: neutral events, which will take place at rate $2N_0K(1-(\alpha +1)s)$, events capturing directional selection at rate $2N_0K\alpha s$, and events capturing selection against heterozygosity, at rate $2N_0Ks$. In a neutral event, an ordered pair of individuals is chosen uniformly at random from the population; the first dies and is replaced by an offspring of the second (and this offspring inherits the label of the first individual). At an event corresponding to directional selection, an ordered pair of individuals is chosen uniformly at random from the population; if the type of the second is $A$, then it produces an offspring which replaces the first. At an event corresponding to selection against heterozygosity, an ordered triplet of individuals is picked from the population; if the second and third are of the same type, then the second produces an offspring that replaces the first. (Note that in such an event, the first individual either is replaced by a type $A$ offspring or remains type $A$ if and only if at least two of the three individuals picked were type $A$.) Noting that if $X_1$, $X_2$ and $X_3$ are i.i.d.~Bernoulli($w$) random variables then $$ \mathbb{P}\left(X_1+X_2 \geq 1 \right)=2w-w^2 \quad \text{ and } \quad \mathbb{P}\left(X_1+X_2+X_3 \geq 2 \right)=3w^2-2w^3, $$ and recalling that $s=s_0/K$, using~(\ref{change over a single generation 1}), we see that for large $K$, the proportion of $A$ alleles under this model will be close to that under our time-changed Wright-Fisher model. Moreover, since there is at most one birth event at a time, coalescence of ancestral lineages is necessarily pairwise. If in a reproduction event the parent is type $A$, then the probability that a pair of type $A$ ancestral lineages corresponds to the parent and its offspring (and therefore merges in the event) is $1/(2N_0w(2N_0w-1))$. Since $s$ is very small, the instantaneous rate at which events with a type $A$ parent occur is approximately $2N_0Kw$. Thus, the probability that a particular pair of type $A$ individuals sampled from the population at time $t+\delta t$ are descended from the same type $A$ individual at time $t$ is (up to a lower order error) $K/(2N_0w)\delta t$, and we see that the genealogy under this model will be (up to a small error) the same as under the Wright-Fisher model. In what follows, to avoid too many factors of two, we are going to write $N=2N_0$ for the number of individuals in our Moran model.
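To make the event structure concrete, the following is a minimal simulation sketch of one reproduction event of this non-spatial Moran model (our own illustrative Python code, not part of the construction that follows); the three event types are chosen with probabilities proportional to their rates $1-(\alpha+1)s$, $\alpha s$ and $s$.
\begin{verbatim}
import random

def moran_event(xi, s, alpha):
    # One reproduction event; xi is the list of 0/1 types of the N = 2*N_0
    # individuals (1 = type A).  Events occur at total rate N*K, so
    # repeating this N*K times corresponds to one unit of time.
    N = len(xi)
    u = random.random()
    if u < 1 - (alpha + 1) * s:          # neutral event
        i, j = random.sample(range(N), 2)
        xi[i] = xi[j]
    elif u < 1 - s:                      # directional selection
        i, j = random.sample(range(N), 2)
        if xi[j] == 1:
            xi[i] = xi[j]
    else:                                # selection against heterozygosity
        i, j, k = random.sample(range(N), 3)
        if xi[j] == xi[k]:
            xi[i] = xi[j]
\end{verbatim}
Averaging over a single event, the expected change in the frequency $w=\frac 1N\sum_i \xi_i$ is $\frac sN w(1-w)(2w-1+\alpha)$, up to $\mathcal O(1/N)$ corrections from sampling without replacement; at event rate $NK$ and with $s=s_0/K$, this recovers the drift in~(\ref{ODE for w}).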
We are going to define our (structured) Moran model on $\frac1n \mathbb{Z}$ in such a way that there are $N$ individuals in each site (or deme) and they are indexed by $[N]:=\{1,\ldots,N\}$. We shall denote the type of the $i$th individual at site $x$ at time $t$ by $\xi_t^n(x,i)\in\{0,1\}$, with $\xi_t^n(x,i)=1$ meaning that the individual is type $A$, and $\xi_t^n(x,i)=0$ meaning that the individual is type $a$. For $x\in \frac1n \mathbb{Z}$ and $t\geq 0$, let $$p^n_t(x)=\frac{1}{N}\sum_{i=1}^N \xi^n_t (x,i) $$ be the proportion of type $A$ at $x$ at time $t$. We shall reserve the symbol $x$ for space and $i,j,k$ for the label of an individual. Let \begin{equation} \label{eq:snrndefn} s_n=\frac{2s_0}{n^{2}} \quad \text{ and }\quad r_n= \frac{n^2}{2N}. \end{equation} (Here, $s_n$ is a selection parameter which determines the space scaling needed to see a non-trivial limit, and $r_n$ is a time scaling parameter.) To specify the dynamics of the process, we define four independent families of i.i.d.~Poisson processes. These will govern neutral reproduction, directional selection, selection against heterozygotes and migration respectively. Let $((\mathcal P_t^{x,i,j})_{t\geq 0})_{x\in \frac1n \mathbb{Z}, i \neq j \in [N]}$ be i.i.d.~Poisson processes with rate $r_n(1-(\alpha+1)s_n)$. Let $((\mathcal S_t^{x,i,j})_{t\geq 0})_{x\in \frac1n \mathbb{Z}, i \neq j \in [N]}$ be i.i.d.~Poisson processes with rate $r_n \alpha s_n$. Let $((\mathcal Q_t^{x,i,j,k})_{t\geq 0})_{x\in \frac1n \mathbb{Z}, i,j,k \in [N]\text{ distinct}}$ be i.i.d.~Poisson processes with rate $\frac{1}{N}r_n s_n$. Let $((\mathcal R_t^{x,i,y,j})_{t\geq 0})_{x,y \in \frac1n \mathbb{Z}, |x-y|=n^{-1} , i,j\in[N]}$ be i.i.d.~Poisson processes with rate $m r_n$. For a given initial condition $p^n_0:\frac 1n \mathbb{Z} \rightarrow \frac 1 N \mathbb{Z} \cap [0,1]$, we assign labels to the type $A$ individuals in each site uniformly at random. That is, we define $(\xi^n_0(x,i))_{x\in \frac 1n \mathbb{Z}, i\in [N]}$ as follows. For each $x\in \frac 1n \mathbb{Z}$ independently, take $I_x\subseteq [N]$, where $I_x$ is chosen uniformly at random from $\{A\subseteq [N]:|A|=Np^n_0(x)\}$. For $i\in [N]$, let $\xi^n_0(x,i)=\mathds{1}_{\{i\in I_x\}}$. The process $(\xi^n_t(x,i))_{x\in \frac 1n \mathbb{Z}, i\in [N], t\ge 0}$ evolves as follows. \begin{enumerate} \item If $t$ is a point in $\mathcal P^{x,i,j}$, then at time $t$, the individual at $(x,i)$ is replaced by offspring of the individual at $(x,j)$, i.e.~we let $\xi^n_t(x,i)=\xi^n_{t-}(x,j)$. \item If $t$ is a point in $\mathcal S^{x,i,j}$, then at time $t$, if the individual at $(x,j)$ is type $A$ then the individual at $(x,i)$ is replaced by offspring of the individual at $(x,j)$, i.e.~we let $$\xi^n_t(x,i)= \begin{cases} \xi^n_{t-}(x,j) \quad \text{ if }\xi^n_{t-}(x,j)=1,\\ \xi^n_{t-}(x,i) \quad \text{ otherwise}. \end{cases}$$ \item If $t$ is a point in $\mathcal Q^{x,i,j,k}$, then at time $t$, if the individuals at $(x,j)$ and $(x,k)$ have the same type then the individual at $(x,i)$ is replaced by offspring of the individual at $(x,j)$, i.e.~we let $$\xi^n_t(x,i)= \begin{cases} \xi^n_{t-}(x,j) \quad \text{ if }\xi^n_{t-}(x,j)=\xi^n_{t-}(x,k),\\ \xi^n_{t-}(x,i) \quad \text{ otherwise}. \end{cases}$$ \item If $t$ is a point in $\mathcal R^{x,i,y,j}$, then at time $t$, the individual at $(x,i)$ is replaced by offspring of the individual at $(y,j)$, i.e.~we let $\xi^n_t(x,i)=\xi^n_{t-}(y,j)$. 
\end{enumerate} Ancestral lineages will be represented in the form of a pair with the first coordinate recording the spatial position and the second the label of the ancestor. More precisely, for $T\ge 0$, $t\in [0,T]$, $x_0\in \frac1n \mathbb{Z}$ and $i_0\in [N]$, if the individual at site $y$ with label $j$ is the ancestor at time $T-t$ of the individual at site $x_0$ with label $i_0$ at time $T$, then we let $(\zeta^{n,T}_t(x_0,i_0),\theta^{n,T}_t(x_0,i_0))=(y,j)$. The pair $(\zeta_t^{n,T}(x_0,i_0),\theta_t^{n,T}(x_0,i_0))_{t\in [0,T]}$ is a jump process with $$(\zeta^{n,T}_0(x_0,i_0),\theta^{n,T}_0(x_0,i_0))=(x_0,i_0).$$ For some $t\in (0,T]$, suppose that $(\zeta^{n,T}_{t-}(x_0,i_0),\theta^{n,T}_{t-}(x_0,i_0))=(x,i)$. Then if $T-t$ is a point in $\mathcal P^{x,i,j}$ for some $j\neq i$, we let $(\zeta^{n,T}_t(x_0,i_0),\theta^{n,T}_t(x_0,i_0))=(x,j)$. If instead $T-t$ is a point in $\mathcal S^{x,i,j}$ for some $j\neq i$, we let $$(\zeta^{n,T}_t(x_0,i_0),\theta^{n,T}_t(x_0,i_0))= \begin{cases} (x,j) \quad \text{ if }\xi^n_{(T-t)-}(x,j)=1,\\ (x,i) \quad \text{ otherwise}. \end{cases}$$ If instead $T-t$ is a point in $\mathcal Q^{x,i,j,k}$ for some $j\neq k\in [N]\setminus \{i\}$, we let $$(\zeta^{n,T}_t(x_0,i_0),\theta^{n,T}_t(x_0,i_0))= \begin{cases} (x,j) \quad \text{ if }\xi^n_{(T-t)-}(x,j)=\xi^n_{(T-t)-}(x,k),\\ (x,i) \quad \text{ otherwise}. \end{cases}$$ Finally, if $T-t$ is a point in $\mathcal R^{x,i,y,j}$ for some $y\in \{x-n^{-1}, x+n^{-1}\}$, $j\in[N]$, we let $(\zeta^{n,T}_t(x_0,i_0),\theta^{n,T}_t(x_0,i_0))=(y,j)$. These are the only times at which the ancestral lineage process $(\zeta^{n,T}_s(x_0,i_0),\theta^{n,T}_s(x_0,i_0))_{s\in [0,T]}$ jumps. \subsection{Main results} \label{subsec:mainresults} Recall from~\eqref{AC stationary wave} that $g:\R\rightarrow \R$ is given by \begin{equation} \label{eq:gdefn} g(x)=(1+e^{\sqrt{\frac{2s_0}m}x})^{-1}. \end{equation} In our main results, we will make the following assumptions on the initial condition $p^n_0$, for $b_1,b_2>0$ to be specified later: \begin{align} \label{eq:conditionA} &p^n_0(x)=0\; \forall x\ge N, \quad p^n_0(x)=1\; \forall x\le -N, \notag \\ &\sup_{x\in \frac 1n \mathbb{Z}}|p_0^n(x)-g(x)|\leq b_1 \qquad \text{and }\qquad \sup_{z_1,z_2\in \frac 1n \mathbb{Z},|z_1-z_2|\leq n^{-1/3}}|p^n_0(z_1)-p^n_0(z_2)| \le n^{-b_2}. \tag{A} \end{align} We will assume throughout that there exists $a_0>0$ such that $(\log N)^{a_0}\le \log n$ for $n$ sufficiently large. The idea is that we need $N\gg n\gg 1$, in order that we are close to the deterministic limit, but we do not want $N$ to tend to infinity so quickly that we don't see the effect of the stochastic perturbation at all. For $t\ge 0$, define the position of the random travelling front at time $t$ by letting \begin{equation} \label{eq:muntdefn} \mu^n_t =\sup\{x\in \tfrac 1n \mathbb{Z} : p^n_t(x)\ge 1/2\}. \end{equation} For $t\ge 0$ and $R>0$, let \begin{equation} \label{eq:Gdefn} G_{R,t}=\{(x,i)\in \tfrac 1n \mathbb{Z} \times [N] :|x-\mu^n_t|\le R, \, \xi^n_t(x,i)=1\}, \end{equation} the set of type $A$ individuals which are near the front at time $t$. Our first main result says that if at a large time $T_n$ we sample a type $A$ individual from near the front, then the position of its ancestor relative to the front at a much earlier time $T_n-T'_n$ has distribution approximately given by $\pi$ (as defined in~\eqref{eq:pidefn}). \begin{theorem} \label{thm:statdist} Suppose $\alpha \in (0,1)$ and, for some $a_1>1$, $N\ge n^{a_1}$ for $n$ sufficiently large. 
There exists $b_1>0$ such that for $b_2>0$ and $K_0<\infty$ the following holds. Suppose condition~\eqref{eq:conditionA} holds, $T_n \le N^2$ and $T'_n \rightarrow \infty$ as $n\rightarrow \infty$ with $T_n-T'_n \ge (\log N)^2$. Let $(X_0,J_0)\in \frac 1n \mathbb{Z} \times [N]$ be measurable with respect to $\sigma((\xi^n_{T_n}(x,i))_{x\in \frac 1n \mathbb{Z}, i\in [N]})$ with $(X_0,J_0)\in G_{K_0,T_n}.$ Then $$ \zeta^{n,T_n}_{T'_n}(X_0, J_0) - \mu^n_{T_n-T'_n} \stackrel{d}{\rightarrow} Z \quad \text{as }n\rightarrow \infty, $$ where $Z$ is a random variable with density \begin{equation} \label{eq:pidefn} \pi(x)=\frac{g(x)^2 e^{\alpha \sqrt{\frac {2s_0}m} x}}{\int_{-\infty}^\infty g(y)^2 e^{\alpha \sqrt{\frac {2s_0}m} y}dy}. \end{equation} \end{theorem} Our second main result says that the genealogy of a sample of type $A$ individuals from near the front at a large time $T_n$ is approximately given by a Kingman coalescent (under a suitable time rescaling). \begin{theorem} \label{thm:main} Suppose $\alpha \in (0,1)$ and, for some $a_2>3$, $N \ge n^{a_2}$ for $n$ sufficiently large. There exists $b_1>0$ such that for $b_2>0$, $k_0\in \mathbb{N}$ and $K_0<\infty$, the following holds. Suppose condition~\eqref{eq:conditionA} holds, and take $T_n\in [N,N^2]$. Let $(X_1,J_1), \ldots ,(X_{k_0},J_{k_0})$ be measurable with respect to $\sigma((\xi^n_{T_n}(x,i))_{x\in \frac 1n \mathbb{Z}, i\in [N]})$ and distinct, with $(X_i,J_i) \in G_{K_0,T_n}$ $\forall i \in [ k_0]$. For $i, j\in [k_0],$ let $\tau^{n}_{i,j}$ denote the time at which the $i^{\text{th}}$ and $j^{\text{th}}$ ancestral lineages coalesce, i.e.~let $$ \tau^{n}_{i,j} =\inf\{t\geq 0:(\zeta^{n,T_n}_t(X_i,J_i),\theta^{n,T_n}_t(X_i,J_i)) =(\zeta^{n,T_n}_t(X_j,J_j),\theta^{n,T_n}_t(X_j,J_j))\}. $$ Then $$ \left( \frac{(2m+1) n}{N}\frac{\int_{-\infty}^\infty g(x)^3 e^{2\alpha \sqrt{\frac {2s_0}m} x}dx}{\left( \int_{-\infty}^\infty g(x)^2 e^{\alpha \sqrt{\frac {2s_0}m} x}dx \right)^2}\tau^{n}_{i,j} \right)_{i,j \in [k_0]} \stackrel{d}{\longrightarrow} (\tau_{i,j} )_{i,j \in [k_0]} \quad \text{as }n\rightarrow \infty, $$ where $\tau_{i,j}$ is the time at which the $i^{\text{th}}$ and $j^{\text{th}}$ ancestral lineages coalesce in the Kingman ${k_0}$-coalescent. \end{theorem} \subsection{Strategy of the proof} \label{heuristics} We will show that if $N\gg n$, then if $n$ is large and $T_0$ is not too large, $(p^n_t)_{t\in [0,T_0]}$ is approximately given by the solution of the PDE \begin{equation} \label{eq:PDE} \frac{\partial u}{\partial t}=\tfrac 12 m \Delta u+s_0 u(1-u)(2u-1+\alpha). \end{equation} (Recall from our discussion of a non-spatial Moran model before Section~\ref{subsec:modeldefn} that the non-linear term in~\eqref{eq:PDE} comes from the events corresponding to the Poisson processes $(\mathcal S^{x,i,j})_{x,i,j}$ and $(\mathcal Q^{x,i,j,k})_{x,i,j,k}$. The Laplacian term comes from the Poisson processes $(\mathcal R^{x,i,y,j})_{x,i,y,j}$ which cause migration between neighbouring sites and whose rate was chosen to coincide with the diffusive rescaling.) As noted in~\eqref{AC stationary wave}, $u(t,x):=g(x-\alpha \sqrt{\frac{ms_0}2} t)$ is a travelling wave solution of~\eqref{eq:PDE}. 
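As a quick sanity check, which we record for the reader's convenience, one can verify directly that $u(t,x)=g(x-\nu t)$ with $\nu=\alpha\sqrt{\frac{ms_0}2}$ solves~\eqref{eq:PDE}: writing $\kappa=\sqrt{\frac{2s_0}m}$, the profile~\eqref{eq:gdefn} satisfies $\nabla g=-\kappa g(1-g)$ and $\Delta g=\kappa^2 g(1-g)(1-2g)$, so that, using $\frac m2 \kappa^2=s_0$ and $\nu\kappa=\alpha s_0$,
\begin{equation*}
\tfrac 12 m\Delta g+s_0g(1-g)(2g-1+\alpha)
=s_0g(1-g)\big[(1-2g)+(2g-1+\alpha)\big]
=\alpha s_0\,g(1-g)
=-\nu \nabla g,
\end{equation*}
which is exactly $\frac{\partial u}{\partial t}=\tfrac 12 m \Delta u+s_0 u(1-u)(2u-1+\alpha)$ evaluated along the wave.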
In the case $\alpha \in (0,1)$, work of Fife and McLeod~\cite{fife/mcleod:1977} shows that for a front-like initial condition $u_0$ satisfying $\limsup_{x\rightarrow \infty}u_0(x)<\frac 12 (1-\alpha)$ and $\liminf_{x\rightarrow -\infty}u_0(x)>\frac 12 (1-\alpha)$, the solution of~\eqref{eq:PDE} converges to a moving front with shape $g$ and wavespeed $\alpha \sqrt{\frac{ms_0}2}$. We can use this to show that if $N\gg n$, then for large $n$, with high probability, \begin{equation} \label{eq:heurevent} p^n_t(x)\approx g(x-\mu^n_t)\;\, \forall x\in \tfrac 1n \mathbb{Z},\, t\in [\log N,N^2] \quad \text{ and }\quad \frac{\mu^n_t-\mu^n_s}{t-s}\approx \alpha \sqrt{\tfrac{ms_0}2} \;\, \forall s<t\in [\log N,N^2], \end{equation} where $\mu^n_t$ is the front location defined in~\eqref{eq:muntdefn} (see Proposition~\ref{prop:eventE1}). Suppose the event in~\eqref{eq:heurevent} occurs, and sample a type $A$ individual at time $T_n$ by taking $(X_0,J_0)$ with $\xi^n_{T_n}(X_0,J_0)=1$. We will show that the recentred ancestral lineage process $(\zeta^{n,T_n}_t(X_0,J_0)-\mu^n_{T_n-t})_{t\in [0,T_n]}$ moves approximately according to the diffusion $$ dZ_t =\alpha \sqrt{\tfrac{ms_0}2} dt+\frac{m\nabla g(Z_t)}{g(Z_t)}dt +\sqrt m dB_t, $$ where $(B_t)_{t\ge 0}$ is a Brownian motion (see Lemmas~\ref{lem:vnvbound} and~\ref{lem:qnvnonepoint}). This can be explained heuristically as follows. Observe first that $(\mu^n_{T_n-t}-\mu^n_{T_n-t-s})/s\approx \alpha \sqrt{\frac{ms_0}2}$ for $s>0$. Then if $\zeta^{n,T_n}(X_0,J_0)$ jumps at some time $t$, and $\zeta^{n,T_n}_{t-}(X_0,J_0)=x_0$, the conditional probability that $\zeta^{n,T_n}_t(X_0,J_0)=x_0+n^{-1}$ is $$ \frac{p^n_{T_n-t}(x_0+n^{-1})}{p^n_{T_n-t}(x_0-n^{-1})+p^n_{T_n-t}(x_0+n^{-1})} \approx \frac 12 +\frac 12 \frac{\nabla g(x_0-\mu^n_{T_n-t})}{g(x_0-\mu^n_{T_n-t})}n^{-1}. $$ Finally, the total rate at which $\zeta^{n,T_n}(X_0,J_0)$ jumps is given by $2mr_n N =mn^2$, and the jumps have increments $\pm n^{-1}$. As we observed before in~\eqref{defn of pi}, $(Z_t)_{t\ge 0}$ has a unique stationary distribution given by $\pi$, as defined in~\eqref{eq:pidefn}. In Theorem~\ref{thm:statdist}, we show rigorously that for large $t$, $\zeta^{n,T_n}_t(X_0,J_0)-\mu^n_{T_n-t}$ has distribution approximately given by $\pi$. Theorem~\ref{thm:statdist} is not strong enough to give the precise estimates that we need for Theorem~\ref{thm:main}, and so in fact we prove Theorem~\ref{thm:main} first and then Theorem~\ref{thm:statdist} will follow from results that we have obtained along the way. A pair of ancestral lineages can only coalesce if they are distance at most $n^{-1}$ apart. Take a pair of type $A$ individuals at time $T_n$ by sampling $(X_1,J_1)\neq (X_2,J_2)$ with $\xi^n_{T_n}(X_1,J_1)=1=\xi^n_{T_n}(X_2,J_2)$. Suppose at some time $T_n-t$ that their ancestral lineages are at the same site, i.e.~$\zeta^{n,T_n}_t(X_1,J_1)=x=\zeta^{n,T_n}_t(X_2,J_2)$ for some $x\in \frac 1n \mathbb{Z}$. For $\delta_n >0$ small, on the time interval $[T_n-t-\delta_n,T_n-t]$, each type $A$ individual at $x$ produces offspring at $x$ at rate approximately $r_n N$, and few individuals produce more than one offspring in this interval. Hence the number of pairs of type $A$ individuals at $x$ at time $T_n-t$ which have common ancestors at time $T_n-t-\delta_n$ is approximately $r_n N^2 \delta_n p^n_{T_n-t-\delta_n}(x)$ (see Lemma~\ref{lem:coalCB}).
Therefore, the probability that our pair of lineages coalesce within time $\delta_n$ (backwards in time), which is the same as the probability that it is one such pair, is approximately \begin{equation} \label{eq:heurcoal} \frac{r_n N^2 \delta_n p^n_{T_n-t-\delta_n}(x)}{{N p^n_{T_n-t}(x) \choose 2}} \approx \frac{n^2 \delta_n}{Np^n_{T_n-t}(x)}. \end{equation} Similarly, if $\zeta^{n,T_n}_t(X_1,J_1)=x$ and $\zeta^{n,T_n}_t(X_2,J_2)=x+n^{-1}$ then, since an individual at $x$ produces offspring at $x+n^{-1}$ at rate $mr_nN$ and vice-versa, the probability that the pair of lineages coalesce within time $\delta_n$ is approximately \begin{equation} \label{eq:heurcoal2} \frac{m r_n N^2 \delta_n (p^n_{T_n-t-\delta_n}(x)+p^n_{T_n-t-\delta_n}(x+n^{-1}))}{N p^n_{T_n-t}(x)\cdot Np^n_{T_n-t}(x+n^{-1})} \approx \frac{m n^2 \delta_n}{Np^n_{T_n-t}(x)}. \end{equation} These heuristics suggest that for $x_0\in \frac 1n \mathbb{Z}$, since $\pi(x_0)\pi(x_0+n^{-1})^{-1} \approx 1$ and $\pi(x_0)\pi(x_0-n^{-1})^{-1} \approx 1$, the rate at which the pair of ancestral lineages of $(X_1,J_1)$ and $(X_2,J_2)$ coalesce with the ancestral lineage of $(X_1,J_1)$ at location $x_0$ relative to the front should be approximately \begin{equation*} n^{-2} \pi(x_0)^2 \cdot \frac{n^2}{N g(x_0)}+2n^{-2} \pi(x_0)^2 \cdot \frac{m n^2}{N g(x_0)} = (2m+1)\frac{\pi(x_0)^2}{N g(x_0)}. \end{equation*} Note that for some constants $C_1,C_2>0$, \begin{equation} \label{eq:heurasym} \frac{\pi(x_0)^2}{g(x_0)}\sim C_1 e^{(2\alpha -3)\sqrt{\frac{2s_0}m}x_0}\rightarrow 0 \; \text{as }x_0\rightarrow \infty \quad \text{ and } \quad \frac{\pi(x_0)^2}{g(x_0)}\sim C_2 e^{2\alpha \sqrt{\frac{2s_0}m} x_0}\rightarrow 0 \; \text{as }x_0\rightarrow -\infty . \end{equation} This suggests that coalescence only occurs (fairly) close to the front. If a pair of lineages coalesce close to the front, then the rate at which they subsequently coalesce with any other lineage is $\mathcal O(n^2 N^{-1})$, which suggests that if $N\gg n^2$, their location relative to the front will have distribution approximately given by $\pi$ before any more coalescence occurs. Hence the genealogy of a sample of type $A$ individuals from near the front should be approximately given by a Kingman coalescent with rate $$ \sum_{x_0 \in \frac 1n \mathbb{Z}} (2m+1) \frac{\pi(x_0)^2}{Ng(x_0)}\approx (2m+1) \frac n N \int_{-\infty}^\infty \frac{\pi(y)^2}{g(y)}dy. $$ This result is proved in Theorem~\ref{thm:main} (with the additional technical assumption that $N\gg n^3$). For $\alpha \in [1,2)$, work of Rothe~\cite{rothe:1981} shows that for the PDE~\eqref{eq:PDE}, if the initial condition $u_0(x)$ decays sufficiently quickly as $x\rightarrow \infty$ then the solution converges to a moving front with shape $g$ and wavespeed $\alpha \sqrt{\frac{ms_0}2}$. Moreover,~\eqref{eq:heurasym} holds for any $\alpha \in (0,3/2)$, which suggests that Theorem~\ref{thm:main} should hold for any $\alpha \in (0,3/2)$. The main difficulty in proving the theorem is that $p^n_t(x)^{-1}$ is hard to control when $x-\mu^n_t$ is very large, i.e.~far ahead of the front. This in turn makes it hard to control the motion of ancestral lineages if they are far ahead of the front. For $\alpha \in (0,1)$, the non-linear term $f(u)=u(1-u)(2u-1+\alpha)$ in the PDE~\eqref{eq:PDE} satisfies $f(u)<0$ for $u\in (0,\frac 12 (1-\alpha))$, which means that far ahead of the front, the proportion of type $A$ decays. 
This allows us to show that with high probability, no lineages of type $A$ individuals stay far ahead of the front for a long time (see Proposition~\ref{prop:eventE4}), which then gives us upper bounds on the probabilities of lineages being far ahead of the front at a fixed time (see Proposition~\ref{prop:intip}). A proof of Theorem~\ref{thm:main} for $\alpha \in [1,3/2)$ would require a different method to bound these tail probabilities, along with more delicate estimates on $p^n_t(x)$ for large $x$ in order to apply~\cite{rothe:1981} and ensure that $p^n_t(\cdot)\approx g(\cdot - \mu^n_t)$ with high probability at large times $t$. One of the main tools in the proofs of Theorems~\ref{thm:statdist} and~\ref{thm:main} is the notion of tracers. In population genetics, this corresponds to labelling a subset of individuals by a neutral genetic marker, which is passed down from parent to offspring, and which has no effect on the fitness of an individual by whom it is carried. Such markers allow us to deduce which individuals in the population are descended from a particular subset of ancestors (c.f.~\cite{donnelly/kurtz:1999}). The idea of using these markers, or `tracers', in the context of expanding biological populations goes back at least to Hallatschek and Nelson~\cite{hallatschek/nelson:2008}, and has subsequently been used, for example, by Durrett and Fan~\cite{durrett/fan:2016}, Birzu et al.~\cite{birzu/hallatschek/korolev:2018} and Biswas et al.~\cite{biswas/etheridge/klimek:2018}. The idea is that at some time $t_0$, a subset of the type $A$ individuals are labelled as `tracers'. At a later time $t$, we can look at the subset of type $A$ individuals which are descended from the original set of tracers. In particular, for $0\le t_0 \le t$ and $x_1,x_2 \in \frac 1n \mathbb{Z}$, we can record the proportion of individuals at $x_2$ at time $t$ which are descended from type $A$ individuals at $x_1$ at time $t_0$. This tells us the conditional probability that the time-$t_0$ ancestor of a randomly chosen type $A$ individual at $x_2$ at time $t$ was at $x_1$. For $x_1,x_2 \in \frac 1n \mathbb{Z}$ and $t\ge 0$, and taking $\delta_n>0$ very small, we can also record the number of pairs of type $A$ individuals at $x_1$ and $x_2$ at time $t+\delta_n$ which have the same ancestor at time $t$. This tells us the conditional probability that a randomly chosen pair of type $A$ lineages at $x_1$ and $x_2$ at time $t+\delta_n$ coalesce in the time interval $[t,t+\delta_n]$. In Section~\ref{sec:mainproof}, we will define a `good' event $E$ in terms of these `tracer' random variables, and in Sections~\ref{sec:eventE1}-\ref{sec:eventE4}, we will show that the event $E$ occurs with high probability. In Section~\ref{sec:mainproof}, we will show that conditional on the tracer random variables, if the event $E$ occurs, the locations of ancestral lineages relative to the front approximately have distribution $\pi$ (see Lemma~\ref{lem:fromxixj}), pairs of nearby lineages coalesce at approximately the rates given in~\eqref{eq:heurcoal} and~\eqref{eq:heurcoal2} (see Proposition~\ref{prop:coal}), and we are unlikely to see two pairs of lineages coalesce in a short time (see Proposition~\ref{prop:doublecoal}). We can also prove bounds on the tail probabilities of lineages being far ahead of or far behind the front (see Propositions~\ref{prop:intip} and~\ref{prop:RlogN}). These results combine to give a proof of Theorem~\ref{thm:main}. 
Finally, in Section~\ref{sec:thmstatdist}, we use results from the earlier sections to complete the proof of Theorem~\ref{thm:statdist}. \section{Proof of Theorem~\ref{thm:main}} \label{sec:mainproof} Throughout Sections~\ref{sec:mainproof}-\ref{sec:thmstatdist}, we suppose $\alpha \in (0,1)$. We let \begin{equation} \label{eq:kappanu} \kappa =\sqrt{\frac{2s_0}m} \qquad \text{and}\qquad \nu=\alpha \sqrt{\frac{ms_0}2}. \end{equation} For $k\in \mathbb{N}$, let $[k]=\{1,\ldots,k\}$. For $0\le t_1 \le t_2$ and $x_1,x_2 \in \frac 1n \mathbb{Z}$, let \begin{equation} \label{eq:qt1t2defn} q^n_{t_1,t_2}(x_1,x_2) =\frac 1 N |\{i\in [N] :\xi^n_{t_2}(x_2,i)=1, \, \zeta^{n,t_2}_{t_2-t_1}(x_2,i)=x_1\}|, \end{equation} the proportion of individuals at $x_2$ at time $t_2$ which are type $A$ and are descended from an individual at $x_1$ at time $t_1$. Similarly, for $0\le t_1 \le t_2$ and $x_1\in \R$, $x_2 \in \frac 1n \mathbb{Z}$, let \begin{align} \label{eq:qn+-defn} q^{n,+}_{t_1,t_2}(x_1,x_2) &=\frac 1 N |\{i\in [N]:\xi^n_{t_2}(x_2,i)=1, \, \zeta^{n,t_2}_{t_2-t_1}(x_2,i)\ge x_1\}| \notag \\ \text{and }\quad q^{n,-}_{t_1,t_2}(x_1,x_2) &=\frac 1 N |\{i\in [N] :\xi^n_{t_2}(x_2,i)=1, \, \zeta^{n,t_2}_{t_2-t_1}(x_2,i)\le x_1\}|. \end{align} Fix a large constant $C>2^{13}\alpha^{-2}$, and let \begin{equation} \label{eq:paramdefns} \delta_n = \lfloor N^{1/2} n^2 \rfloor ^{-1} ,\; \epsilon_n =\lfloor (\log N)^{-2}\delta_n ^{-1}\rfloor \delta_n,\; \gamma_n =\lfloor (\log \log N)^4 \rfloor \text{ and } d_n =\kappa^{-1} C\log \log N. \end{equation} For $t\ge 0$, $\ell \in \mathbb{N}$ and $x_1,\ldots ,x_\ell \in \frac 1n \mathbb{Z}$, let \begin{equation} \label{eq:Cntdefn} \begin{aligned} &\mathcal C^n_t(x_1,x_2,\ldots, x_\ell )\\ &=\Big\{(i_1,\ldots, i_\ell) \in [N] ^\ell : (x_j,i_j) \neq (x_{j'},i_{j'}) \, \forall j\neq j' \in [\ell], \; \xi^n_{t+\delta_n}(x_j,i_j)=1 \, \forall j\in [\ell ],\\ &\qquad \qquad \quad (\zeta^{n,t+\delta_n}_{\delta_n}(x_j,i_j),\theta^{n,t+\delta_n}_{\delta_n}(x_j,i_j))=(\zeta^{n,t+\delta_n}_{\delta_n}(x_1,i_1),\theta^{n,t+\delta_n}_{\delta_n}(x_1,i_1)) \, \forall j\in [\ell] \Big\}, \end{aligned} \end{equation} the set of $\ell$-tuples of distinct type $A$ individuals at $x_1,\ldots, x_\ell$ at time $t+\delta_n$ which all have a common ancestor at time $t$. Recall the definition of $\mu^n_t$ in~\eqref{eq:muntdefn}. For $y,\ell>0$, $0\le s \le t$ and $x \in \frac 1n \mathbb{Z}$, let \begin{equation} \label{eq:rnystdefn} r^{n,y,\ell}_{s,t}(x) = \frac 1N \big| \big\{ i \in [N] : \xi^n_t(x,i)=1, \; \zeta^{n,t}_{t'} (x,i) \ge \mu^n_{t-t'}+y \;\, \forall t' \in \ell \mathbb{N}_0 \cap [0,s] \big\} \big|, \end{equation} the proportion of individuals at $x$ at time $t$ which are type $A$ and whose ancestor at time $t-t'$ was at or to the right of $\mu^n_{t-t'}+y$ for each $t' \in \ell \mathbb{N}_0 \cap [0,s]$. Fix $T_n \in [(\log N)^2,N^2]$ and define the sigma algebra \begin{align*} \mathcal F &= \sigma\Big( (p^n_t(x))_{x\in \frac 1n \mathbb{Z}, t\le T_n}, (\xi^n_{T_n}(x,i))_{x\in \frac 1n \mathbb{Z}, i\in [N]}, (q^n_{T_n-t_1,T_n-t_2}(x_1,x_2))_{x_1,x_2\in \frac 1n \mathbb{Z}, t_1,t_2\in \delta_n \mathbb{N}_0, t_2\le t_1 \le T_n},\\ &\hspace{2.5cm} (\mathcal C^n_{T_n-t}(x_1,x_2))_{x_1,x_2 \in \frac 1n \mathbb{Z}, t\in \delta_n \mathbb{N},\, t\le T_n}, (\mathcal C^n_{T_n-t}(x_1,x_2,x_3))_{x_1,x_2,x_3 \in \frac 1n \mathbb{Z}, t\in \delta_n \mathbb{N}, \, t\le T_n}\Big). \end{align*} We now define some `good' events, which occur with high probability, as we will show later.
Take $c_1,c_2>0$ small constants, and $t^*,K\in \mathbb{N}$ large constants, to be specified later. The first event will allow us to show that the probability a lineage at $x_2$ at time $t+\gamma_n$ has an ancestor at $x_1$ at time $t$ is approximately $n^{-1} \pi(x_1-\mu^n_{t})$. For $x_1, x_2 \in \tfrac 1n \mathbb{Z}$ and $0 \le t \le T_n$, define the event $$ A^{(1)}_{t}(x_1,x_2) =\left\{ \left| \frac{q^n_{t,t+\gamma_n}(x_1,x_2)}{p^n_{t+\gamma_n}(x_2)}- n^{-1}\pi(x_1-\mu^n_{t}) \right| \le n^{-1}(\log N)^{-3C} \right\}. $$ The next two events will allow us to control the probability that a lineage is far ahead of, or far behind, the front. For $x_1,x_2\in \frac 1n \mathbb{Z}$ and $0 \le t \le T_n$, define the events \begin{align*} A^{(2)}_t(x_1,x_2) &= \left\{ \frac{q^{n,+}_{t,t+t^*}(x_1,x_2)}{p^n_{t+t^*}(x_2)}\le c_1 e^{-(1+\frac 12 (1-\alpha))\kappa(x_1 -(x_2 -\nu t^*)\vee (\mu^n_{t}+K)+2)} \right\}\\ \text{and} \qquad A^{(3)}_t(x_1,x_2) &= \left\{ \frac{q^{n,-}_{t,t+t^*}(x_1,x_2)}{p^n_{t+t^*}(x_2)}\le c_1 e^{-\frac 12 \alpha \kappa ((x_2 -\nu t^*)-x_1+1)} \right\}. \end{align*} The next two events will give us a useful bound on the probability that a lineage is at the site $x$ at time $t$, conditional on its location at time $t+\epsilon_n$, and will allow us to show that lineages do not move more than distance $1$ in time $\epsilon_n$. For $x \in \frac 1n \mathbb{Z}$ and $0 \le t \le T_n$, define the events \begin{align*} A^{(4)}_t(x) &= \left\{ q^{n}_{t ,t+\epsilon_n}(x,x')\le n^{-1} \epsilon_n^{-1}p^n_{t+\epsilon_n}(x') \, \forall x' \in \tfrac 1n \mathbb{Z} \right\}\\ \text{ and } \quad A^{(5)}_t(x) &= \left\{ q^{n}_{t ,t+\epsilon_n}(x',x)\le \mathds{1}_{|x-x'|\le 1} \, \forall x' \in \tfrac 1n \mathbb{Z} \right\}. \end{align*} The next event will allow us to show that lineages do not move more than distance $(\log N)^{2/3}$ in time $t^*$. For $x\in \frac 1n \mathbb{Z}$ and $0 \le t\le T_n$, define the event $$ A^{(6)}_t(x) = \left\{ q^{n}_{t ,t+k\delta_n}(x',x)\le \mathds{1}_{|x-x'|\le (\log N)^{2/3}} \; \forall k\in [t^* \delta_n^{-1}], x'\in \tfrac 1n \mathbb{Z} \right\}. $$ The next four events will give us estimates on the probability that a pair of lineages at the same site or neighbouring sites coalesce in time $\delta_n$, and bounds on the probabilities that a pair of lineages further apart coalesce, or a set of three lineages coalesce. For $x\in \frac 1n \mathbb{Z}$ and $0 \le t \le T_n $, define the events \begin{align*} B^{(1)}_t(x) &= \left\{ \frac{\big| \, |\mathcal C^n_t (x,x)|-n^2 N \delta_n p^n_t(x)\big|}{n^2 N \delta_n p^n_t(x)}\le 2n^{-1/5} \right\}, \\ B^{(2)}_t(x) &= \left\{ \frac{\big|\, |\mathcal C^n_t (x,x+n^{-1})|-\frac 12 mn^2 N \delta_n (p^n_t(x)+p^n_t(x+n^{-1}))\big|}{\frac 12 mn^2 N \delta_n (p^n_t(x)+p^n_t(x+n^{-1}))}\le 2n^{-1/5} \right\},\\ B^{(3)}_t(x) &= \left\{ \frac{|\mathcal C^n_t (x,x')|}{n^2 N \delta_n p^n_t(x)} \le n^{-1/5}\mathds{1}_{|x-x'|< Kn^{-1}} \; \forall x' \in \tfrac 1n \mathbb{Z} \text{ with } |x'-x|>n^{-1} \right\},\\ \text{and }\quad B^{(4)}_t(x) &= \left\{ \frac{|\mathcal C^n_t (x,y,y')|}{n^2 N \delta_n p^n_t(x)} \le n^{-1/5}\mathds{1}_{|y-x|\vee |y'-x|< Kn^{-1}} \; \forall y,y' \in \tfrac 1n \mathbb{Z} \right\}. \end{align*} Fix $c_0>0$ sufficiently small that $(1+\frac 14 (1-\alpha))(1-2c_0)>1$. 
Let \begin{equation} \label{eq:Dn+-defn} D^+_n =(1/2-c_0)\kappa^{-1} \log (N/n)\quad \text{ and }\quad D^-_n=- 26 \kappa^{-1} \alpha^{-1} \log N \end{equation} and for $t\ge 0$ and $\epsilon \in (0,1)$, recalling~\eqref{eq:paramdefns}, let \begin{equation} \label{eq:Intdefn} I^n_t =\tfrac 1n \mathbb{Z} \cap [\mu^n_t-N^4,\mu^n_t+D_n^+], \; I^{n,\epsilon}_t =\tfrac 1n \mathbb{Z} \cap [\mu^n_t+D_n^-,\mu^n_t+(1-\epsilon) D_n^+] \; \text{and } i^n_t =\tfrac 1n \mathbb{Z} \cap [\mu^n_t-d_n,\mu^n_t+d_n]. \end{equation} We will show that with high probability, a pair of lineages are never both more than $D_n^+$ ahead of the front before they coalesce, and neither lineage is ever more than $|D_n^-|$ behind the front. We now define an event which says that $(p^n_t)_{t\in [0,N^2]}$ is close to a moving front with shape $g$ and wavespeed approximately $\nu$. Let \begin{equation} \label{eq:eventE1} \begin{aligned} E_1=E_1(c_2)&= \Big\{ \sup_{x\in \frac 1n \mathbb{Z}, t\in [\log N,N^2]} |p^n_t(x)-g(x-\mu^n_t)|\le e^{-(\log N)^{c_2}}\Big\}\\ &\qquad \cap \big\{ p^n_t(x) \in [\tfrac 15 g(x-\mu^n_t), 5g(x-\mu^n_t)] \; \forall t \in [\tfrac 12 (\log N)^2,N^2], x\le \mu^n_t+D_n^++2 \big\}\\ &\qquad \cap \big\{ p^n_t(x) \le 5g(D^+_n) \; \forall t \in [\tfrac 12 (\log N)^2,N^2], x\ge \mu_t^n +D^+_n \big\}\\ & \qquad \cap \big\{ |\mu^n_{t+s}-\mu^n_t -\nu s |\le e^{-(\log N)^{c_2}} \; \forall t\in [\log N, N^2],s\in [0,1\wedge (N^2-t)]\big\}\\ &\qquad \cap \big\{ |\mu^n_{\log N } |\le 2\nu \log N \big\}. \end{aligned} \end{equation} Let $T_n^-=T_n-(\log N)^2$ and define the event \begin{align} \label{eq:eventE2} E_2 &=E_2(c_1,t^*,K) \notag \\ &= E'_2 \cap \bigcap_{t\in \delta_n\mathbb{N}_0 \cap [0,T_n^-]} \bigg( \bigcap_{x_1\in i^n_{T_n-t-\gamma_n}, \, x_2 \in i^n_{T_n-t}}A^{(1)}_{T_n-t-\gamma_n}(x_1,x_2) \cap \bigcap_{x\in I^n_{T_n-t-\epsilon_n} }A^{(4)}_{T_n-t-\epsilon_n}(x) \bigg) , \end{align} where \begin{equation} \label{eq:eventE'2} \begin{aligned} E'_2 =E'_2(c_1,t^*,K)&= \bigcap_{t\in \delta_n\mathbb{N}_0 \cap [0,T_n^-]} \bigcap_{x_1\in I^n_{T_n-t-t^*}, \, x_2 \in I^n_{T_n-t}, \, x_1-\mu^n_{T_n-t-t^*}\ge K} A^{(2)}_{T_n-t-t^*}(x_1,x_2)\\ &\quad \cap \bigcap_{t\in \delta_n\mathbb{N}_0 \cap [0,T_n^-]} \bigcap_{x_1\in I^n_{T_n-t-t^*}, \, x_2 \in I^n_{T_n-t}, \, x_1-\mu^n_{T_n-t-t^*}\le -K} A^{(3)}_{T_n-t-t^*}(x_1,x_2)\\ &\quad \cap \bigcap_{t\in \delta_n\mathbb{N}_0 \cap [0,T_n^- +t^*]} \bigcap_{x\in \frac 1n \mathbb{Z}\cap [-N^5,N^5] }(A^{(5)}_{T_n-t-\epsilon_n}(x)\cap A^{(6)}_{T_n-t-\delta_n}(x)). \end{aligned} \end{equation} Define the event \begin{align} \label{eq:eventE3} E_3=E_3(K) &= \bigcap_{t\in \delta_n\mathbb{N}_0 \cap [0,T_n^-]} \bigcap_{x\in I^n_{T_n-t}} \bigcap_{j=1}^4 B^{(j)}_{T_n-t-\delta_n}(x) . \end{align} Finally, we define an event which says that with high probability, no lineages stay distance $K$ ahead of the front for time $K\log N$. Let $$ E_4 = E_4(t^*, K)=\bigcap_{t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]} \left\{\p{r^{n,K,t^*}_{ K\log N , T_n-t}(x)=0 \; \forall x\in \tfrac 1n \mathbb{Z} \Big|\mathcal F}\ge 1-\left( \frac n N \right)^2 \right\}, $$ and let $E=\cap_{j=1}^4 E_j.$ The following result will be proved in Sections~\ref{sec:eventE1}-\ref{sec:eventE4}. \begin{prop} \label{prop:eventE} Suppose for some $a_2>3$, $N\ge n^{a_2}$ for $n$ sufficiently large. Take $c_1>0$. 
There exist $t^*,K \in \mathbb{N}$ (with $K>104 \kappa^{-1} \alpha^{-1} t^*$) and $b_1,c_2>0$ such that for $b_2>0$, if condition~\eqref{eq:conditionA} holds, for $n$ sufficiently large, $$ \p{E^c}\le \frac n N . $$ \end{prop} From now on in this section, we will take $c_1\in (0,1)$ sufficiently small that letting $\lambda = \frac 14 (1-\alpha)$, \begin{equation} \label{eq:cchoice} \begin{aligned} c_1 ((e^{\lambda \kappa}-1)^{-1}e^{\lambda \kappa }+e^{-(1+\lambda)\kappa}(1-e^{-(1+\lambda )\kappa})^{-1})^2 +e^{-2(1+\lambda )\kappa} &< 1, \\ c_1 (e^{\lambda \kappa}-1)^{-1}e^{\lambda \kappa} +e^{-(1+\lambda)\kappa} &< 1,\\ c_1(1+ e^{3\alpha \kappa/4}(e^{\alpha \kappa/4}-1)^{-1})+e^{-\alpha \kappa /4} &<1, \\ \text{ and } \qquad \qquad \qquad e^{-\alpha \kappa /4}+c_1 (1-e^{-\alpha \kappa /4})^{-1}&<e^{-\alpha \kappa /5}, \end{aligned} \end{equation} and then take $t^*$, $K$, $b_1$, $b_2$ and $c_2$ as in Proposition~\ref{prop:eventE}. Take $K_0<\infty$, $k_0 \in \mathbb{N}$ and $(X_1,J_1)$, $(X_2,J_2), \ldots , (X_{k_0},J_{k_0}) \in \frac 1n \mathbb{Z} \times [N]$ measurable with respect to $\sigma((\xi^n_{T_n}(x,i))_{x\in \frac 1n \mathbb{Z}, i\in [N]})$ and distinct, with $(X_i,J_i) \in G_{K_0,T_n}$ $\forall i \in [k_0]$. For $t\in [0, T_n]$ and $i\in [k_0]$, let \begin{equation} \label{eq:zetadefns} \zeta^{n,i}_t = \zeta^{n,T_n}_t(X_i,J_i) \quad \text{ and } \quad \tilde \zeta^{n,i}_t = \zeta^{n,T_n}_t(X_i,J_i)-\mu^n_{T_n-t}, \end{equation} the location of the $i^{\text{th}}$ ancestral lineage at time $T_n-t$, and its location relative to the front. For $i, j \in [k_0]$, let $$ \tau^n_{i,j} = \inf\{t\ge 0: (\zeta^{n,T_n}_t(X_i,J_i),\theta^{n,T_n}_t(X_i,J_i))=(\zeta^{n,T_n}_t(X_j,J_j),\theta^{n,T_n}_t(X_j,J_j))\}, $$ the time at which the $i^{\text{th}}$ and $j^{\text{th}}$ lineages coalesce. For $t \in [0,T_n]$, define the sigma algebra $$ \mathcal F_t =\sigma \big(\mathcal F, \sigma((\zeta^{n,j}_s)_{s\le t, j\in [k_0]},(\mathds{1}_{\tau_{i,j}^n \le s})_{s\le t, i, j \in [k_0]})\big). $$ Then $((\zeta^{n,j}_{k\delta_n})_{j\in [k_0]},(\mathds{1}_{\tau^n_{i,j}\le k\delta_n})_{i,j\in [k_0]})_{k\in \mathbb{N}_0, k\le T_n \delta_n^{-1}}$ is a strong Markov process with respect to the filtration $(\mathcal F_{k\delta_n})_{k\in \mathbb{N}_0, k\le T_n \delta_n^{-1}}$. For $k\in \mathbb{N}_0$, let $t_k=k \lfloor (\log N)^C \rfloor$. For $i,j \in [k_0]$, let \begin{equation} \label{eq:tildetaudefn} \tilde \tau^{n}_{i,j} = \begin{cases} \tau^{n}_{i,j} &\text{if }\tau^{n}_{i,j}\notin (t_k, t_k+2K \log N] \, \forall k \in \mathbb{N}_0 \text{ and }|\tilde \zeta^{n,i}_{\lfloor \tau^{n}_{i,j} \delta_n^{-1} \rfloor \delta_n}|\wedge |\tilde \zeta^{n,j}_{\lfloor \tau^{n}_{i,j} \delta_n^{-1} \rfloor \delta_n}| \le \tfrac{1}{64} \alpha d_n, \\ T_n &\text{otherwise,} \end{cases} \end{equation} i.e. $\tilde \tau^{n}_{i,j}$ only counts coalescence which happens fairly near the front and not too soon after $t_k$ (backwards in time from time $T_n$) for any $k$. Let \begin{equation} \label{eq:betadefn} \beta_n =(1+2m) \frac n N t_1 \frac{\int_{-\infty}^\infty g(y)^3 e^{2\alpha \kappa y} dy}{\left(\int_{-\infty}^\infty g(y)^2 e^{\alpha \kappa y} dy\right)^2} =(1+2m) \frac n N t_1 \int_{-\infty}^\infty \pi(y)^2 g(y)^{-1}dy. \end{equation} Along with Proposition~\ref{prop:eventE}, the following three propositions are the main intermediate results in the proof of Theorem~\ref{thm:main}, and will be proved in Section~\ref{subsec:mainprops}. 
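Note that $t_1=\lfloor (\log N)^C \rfloor$ and the ratio of integrals in~\eqref{eq:betadefn} does not depend on $n$, so $\beta_n$ is of order $\frac nN (\log N)^C$; this is the order of magnitude against which the error terms in Propositions~\ref{prop:tauk} and~\ref{prop:doublecoaltk} below should be compared. Note also that the two expressions for $\beta_n$ in~\eqref{eq:betadefn} agree provided $\pi(y)=g(y)^2 e^{\alpha \kappa y}\big/\int_{-\infty}^\infty g(z)^2 e^{\alpha \kappa z}\,dz$; indeed, in that case
$$
\int_{-\infty}^\infty \pi(y)^2 g(y)^{-1}\,dy =\frac{\int_{-\infty}^\infty g(y)^4 e^{2\alpha \kappa y}g(y)^{-1}\,dy}{\left(\int_{-\infty}^\infty g(z)^2 e^{\alpha \kappa z}\,dz\right)^2} =\frac{\int_{-\infty}^\infty g(y)^3 e^{2\alpha \kappa y}\,dy}{\left(\int_{-\infty}^\infty g(z)^2 e^{\alpha \kappa z}\,dz\right)^2}.
$$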
The first proposition says that if a pair of lineages $i$ and $j$ have not coalesced by time $t_k$, and one of them is not too far from the front, then the probability that $\tilde \tau^n_{i,j}\le t_{k+1}$ is approximately $\beta_n$. \begin{prop} \label{prop:tauk} Suppose for some $a_2>3$, $N\ge n^{a_2}$ for $n$ sufficiently large. On the event $E$, for $i,j \in [k_0]$, $\epsilon \in (0,1)$ and $k\in \mathbb{N}_0$ with $t_{k+1}\le T_n^-$, if $\zeta^{n,i}_{t_k} \wedge \zeta^{n,j}_{t_k} \in I^{n,\epsilon}_{T_n-t_k}$ and $\tau^{n}_{i,j}>t_k$ then \begin{align*} \p{\tilde \tau^{n}_{i,j} \in (t_k, t_{k+1}] \Big| \mathcal F_{t_k}} &=\beta_n (1+\mathcal O((\log N)^{-2})). \end{align*} \end{prop} The second proposition says that two pairs of lineages are unlikely to coalesce in the same time interval $(t_k,t_{k+1}]$. \begin{prop} \label{prop:doublecoaltk} Suppose for some $a_2>3$, $N\ge n^{a_2}$ for $n$ sufficiently large. For $\epsilon \in (0,1)$, there exists $\epsilon '>0$ such that on the event $E$, for $k\in \mathbb{N}_0$ with $t_{k+1}\le T_n^-$ the following holds. For $i,j_1,j_2 \in [k_0]$ distinct, if $ \zeta^{n,\ell }_{t_k} \wedge \zeta^{n,\ell '}_{t_k} \in I^{n,\epsilon}_{T_n-t_k}$ and $\tau^{n}_{\ell, \ell '}>t_k$ $\forall \ell \neq \ell ' \in \{i,j_1,j_2\}$ then \begin{equation} \label{eq:propdoubletk1} \p{\tilde \tau^{n}_{i,j_1}, \tilde \tau^{n}_{i,j_2} \in (t_k,t_{k+1}] \Big| \mathcal F_{t_k} } =\mathcal O(n^{1-\epsilon '} N^{-1}). \end{equation} For $i_1,i_2,j_1,j_2 \in [k_0]$ distinct, if $\zeta^{n,\ell}_{t_k} \wedge \zeta^{n,\ell '}_{t_k}\in I^{n,\epsilon}_{T_n-t_k}$ and $\tau^{n}_{\ell, \ell '}>t_k$ $\forall \ell \neq \ell ' \in \{i_1,i_2,j_1,j_2\}$ then \begin{equation} \label{eq:propdoubletk2} \p{\tilde \tau^{n}_{i_1,j_1}, \tilde \tau^{n}_{i_2,j_2} \in (t_k,t_{k+1}] \Big| \mathcal F_{t_k} } =\mathcal O(n^{1-\epsilon '} N^{-1}). \end{equation} \end{prop} The last proposition says that for a pair of lineages $i$ and $j$, with high probability $\tilde \tau^n_{i,j}= \tau^n_{i,j}$, and at least one of the lineages is fairly near the front until they have coalesced. \begin{prop} \label{prop:tautautilde} Suppose $T_n\ge N$ and, for some $a_2>3$, $N\ge n^{a_2}$ for $n$ sufficiently large. For $\epsilon \in (0,1)$ sufficiently small, for $n$ sufficiently large, on the event $E$, for $i\neq j \in [k_0]$, $$\p{\tau^{n}_{i,j} \neq \tilde \tau^{n}_{i,j} \Big| \mathcal F_0 } \le (\log N)^{-2}$$ and $$ \p{\exists t\in \delta_n \mathbb{N}_0 \cap [0, N n^{-1} \log N] : \zeta^{n,i}_{t} \wedge \zeta^{n,j}_{t}\notin I^{n,\epsilon}_{T_n-t}, \; \tau^{n}_{i,j}>t \Big| \mathcal F_0}\le (\log N)^{-2}. $$ \end{prop} Before proving Propositions~\ref{prop:tauk}-\ref{prop:tautautilde}, we show how they can be combined with Proposition~\ref{prop:eventE} to prove Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}] Let $(B_{i,j,k})_{i<j \in [k_0], k\in \mathbb{N}_0}$ be i.i.d.~Bernoulli random variables with $$\p{B_{i,j,k}=1}=\beta_n,$$ and let $B_{j,i,k}=B_{i,j,k}$ for $i<j \in [k_0]$. For $k\in \mathbb{N}_0$, let $$ P_k = \{i \in [k_0]: \tau^{n}_{i,j}>t_k \; \forall j \in [i-1]\}, $$ the set of lineages at time $T_n-t_k$ which have not coalesced with a lineage of lower index. Take $\epsilon>0$ sufficiently small that Proposition~\ref{prop:tautautilde} holds, and take $\epsilon'>0$ as in Proposition~\ref{prop:doublecoaltk}. Define the event \begin{align*} A_k &= \left\{\zeta^{n,i}_{t_k} \wedge \zeta^{n,j}_{t_k} \in I^{n,\epsilon}_{T_n-t_k} \; \forall i\neq j \in P_k\right\}. 
\end{align*} Take $k\in \mathbb{N}_0$ with $t_{k+1}\le T_n^-$, and suppose the event $E\cap A_k$ occurs. Then by Proposition~\ref{prop:tauk}, for each pair of lineages $i\neq j \in P_k$, $$ \p{\tilde \tau^{n}_{i,j} \in (t_k, t_{k+1}] \Big| \mathcal F_{t_k}}= \beta_n (1+\mathcal O((\log N)^{-2})), $$ and by Proposition~\ref{prop:doublecoaltk}, $$ \p{|\{(i,j):i<j \in P_k \text{ and } \tilde \tau^{n}_{i,j}\in (t_k,t_{k+1}]\}|\ge 2 \Big| \mathcal F_{t_k}} =\mathcal O(n^{1-\epsilon '}N^{-1})=o(\beta_n (\log N)^{-2}) $$ by the definition of $\beta_n$ in~\eqref{eq:betadefn}. Therefore, conditional on $\mathcal F_{t_k}$, we can couple $(\tilde \tau^{n}_{i,j})_{i,j \in P_k}$ and $(B_{i,j,k})_{i<j \in [k_0]}$ in such a way that if $E \cap A_k$ occurs then \begin{equation} \label{eq:mainproof1} \p{\exists i\neq j \in P_k :B_{i,j,k}\ne \mathds{1}_{\tilde \tau^{n}_{i,j} \in (t_k,t_{k+1}]} \Big| \mathcal F_{t_k}} =\mathcal O(\beta_n (\log N)^{-2}). \end{equation} Note that for $n$ sufficiently large, if the event $E$ occurs, then by Proposition~\ref{prop:tautautilde}, \begin{align} \label{eq:mainproof2} \p{\bigcup_{k=0}^{\lfloor Nn^{-1}t_1^{-1} \log N \rfloor } (A_k)^c \Bigg| \mathcal F_0} &\le {{k_0} \choose 2} (\log N)^{-2}. \end{align} Now define $(\sigma^n_{i,j})_{i, j \in [k_0]}$ iteratively as follows. Let $\sigma^n_{i,i}=0$ $\forall i\in [k_0]$. For $k\in \mathbb{N}_0$ and $i\in [k_0]$, let $\pi_k(i)=\min\{i' \in [k_0]:\sigma^n_{i',i}\le t_k\}$. Then for each pair $i,j\in [k_0]$ with $\pi_k(i)\neq \pi_k(j)$, set $\sigma^n_{i,j}=t_{k+1}$ if $B_{\pi_k(i),\pi_k(j),k}=1$; otherwise $\sigma^n_{i,j}>t_{k+1}$. Suppose $\tilde \tau^n_{i,j}=\tau^n_{i,j}$ $\forall i,j \in [k_0]$. For some $k\in \mathbb{N}_0$, suppose $\{(i,j):\tau^n_{i,j}>t_k\}=\{(i,j):\sigma^n_{i,j}>t_k\}$ and $B_{i,j,k}=\mathds{1}_{\tilde \tau^n_{i,j}\in (t_k,t_{k+1}]}$ $\forall i \neq j\in P_k$. Then for $i,j\in [k_0]$ with $\tau^n_{i,j}>t_k$ we have that $\tau^n_{\pi_k(i),i}\le t_k$ and $\tau^n_{\pi_k(j),j}\le t_k$, and so $$ \mathds{1}_{\tau^n_{i,j}\in (t_k,t_{k+1}]}=\mathds{1}_{\tilde \tau^n_{i,j}\in (t_k,t_{k+1}]}=\mathds{1}_{\tilde \tau^n_{\pi_k(i),\pi_k(j)}\in (t_k,t_{k+1}]} =B_{\pi_k(i),\pi_k(j),k}=\mathds{1}_{\sigma^n_{i,j}=t_{k+1}}, $$ since $\pi_k(i),\pi_k(j)\in P_k$. In particular, $\{(i,j):\tau^n_{i,j}>t_{k+1}\}=\{(i,j):\sigma^n_{i,j}>t_{k+1}\}$. By induction, it follows that for $k^*\in \mathbb{N}$, if for each $k\in \{0\}\cup [k^*]$ we have $B_{i,j,k}=\mathds{1}_{\tilde \tau^n_{i,j}\in (t_k,t_{k+1}]}$ $\forall i\neq j\in P_k$ then $$ \{(i,j):\tau^n_{i,j}\in (t_k,t_{k+1}]\}=\{(i,j):\sigma^n_{i,j}=t_{k+1}\} \; \forall k\in \{0\}\cup [k^*]. 
$$ Therefore, if the event $E$ occurs, then by a union bound, \begin{align*} &\p{\exists i, j \in [k_0]: |\tau^{n}_{i,j}-\sigma^n_{i,j}|\ge (\log N)^C \Big|\mathcal F_0}\\ &\le \p{\exists i, j \in [k_0]: \tau^{n}_{i,j} \neq \tilde \tau^{n}_{i,j} \Big|\mathcal F_0}\\ &\quad +\sum_{k=0}^{\lfloor Nn^{-1}t_1^{-1} \log N \rfloor }\p{\{\exists i\neq j \in P_k : B_{i,j,k}\neq \mathds{1}_{\tilde \tau^{n}_{i,j}\in (t_k,t_{k+1}]} \} \cap A_k \Big| \mathcal F_0}\\ &\quad + \p{\bigcup_{k=0}^{\lfloor Nn^{-1}t_1^{-1} \log N \rfloor } (A_k)^c \Bigg| \mathcal F_0} +\p{\exists i,j \in [k_0] : \sigma^n_{i,j}> t_{\lfloor Nn^{-1}t_1^{-1} \log N \rfloor } \Big| \mathcal F_0}\\ &\le 2{{k_0}\choose 2} (\log N)^{-2} +\sum_{k=0}^{\lfloor Nn^{-1}t_1^{-1} \log N \rfloor }\mathcal O(\beta_n (\log N)^{-2}) +{{k_0} \choose 2} (1-\beta_n)^{\lfloor Nn^{-1}t_1^{-1} \log N \rfloor }\\ &=\mathcal O((\log N)^{-1}), \end{align*} where the second inequality follows for $n$ sufficiently large by Proposition~\ref{prop:tautautilde},~\eqref{eq:mainproof1} and~\eqref{eq:mainproof2}, and the last inequality follows by the definition of $\beta_n$ in~\eqref{eq:betadefn}. The result follows easily by Proposition~\ref{prop:eventE} and then by a coupling between $(\beta_n t_1^{-1}\sigma^n_{i,j})_{i,j\in [k_0]}$ and $(\tau_{i,j})_{i,j\in [k_0]}$. \end{proof} \subsection{Proof of Propositions~\ref{prop:tauk},~\ref{prop:doublecoaltk} and~\ref{prop:tautautilde}} \label{subsec:mainprops} The next five results will be used in the proofs of Propositions~\ref{prop:tauk},~\ref{prop:doublecoaltk} and~\ref{prop:tautautilde}. The first three results will also be used in Section~\ref{sec:thmstatdist} in the proof of Theorem~\ref{thm:statdist}. The first result says that a pair of lineages are unlikely to be far ahead of the front, and will be proved in Section~\ref{subsec:tipbulkproofs}. \begin{prop} \label{prop:intip} Suppose for some $a_1>1$, $N\ge n^{a_1}$ for $n$ sufficiently large. For $n$ sufficiently large, on the event $E_1\cap E'_2 \cap E_4$, for $i,j \in [k_0]$, $s\le t \in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$ and $\ell_1, \ell_2 \in \mathbb{N} \cap [K,D^+_n]$, the following holds. If $t-s \ge K \log N$ then \begin{align} \p{\tilde \zeta^{n,i}_t \ge \ell_1, \tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t \Big| \mathcal F_{s} } &\le (\log N)^7 e^{-(1+\frac 14 (1-\alpha))\kappa(\ell_1+\ell_2)} \label{eq:propintipstat1} \\ \text{ and }\quad \p{\tilde \zeta^{n,i}_t \ge \ell_1 \Big| \mathcal F_{s} } &\le (\log N)^3 e^{-(1+\frac 14 (1-\alpha))\kappa\ell_1}. \label{eq:propintipstat2} \end{align} If instead $t-s \in t^* \mathbb{N}_0 \cap [0,K \log N)$ then \begin{align} \p{\tilde \zeta^{n,i}_t \ge \ell_1, \tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t \Big| \mathcal F_{s} } &\le (\log N)^4 e^{(1+\frac 14 (1-\alpha))\kappa(\tilde \zeta^{n,i}_{s}\vee 0 -\ell_1 +\tilde \zeta^{n,j}_{s}\vee 0 - \ell_2)} \label{eq:propintipstat*} \\ \text{ and }\quad \p{\tilde \zeta^{n,i}_t \ge \ell_1 \Big| \mathcal F_{s} } &\le (\log N)^2 e^{(1+\frac 14 (1-\alpha))\kappa(\tilde \zeta^{n,i}_{s}\vee 0 -\ell_1)}. \label{eq:propintipstat3} \end{align} \end{prop} The next result says that lineages are unlikely to be far behind the front, and will be proved in Section~\ref{subsec:behindfront}. \begin{prop} \label{prop:RlogN} Suppose for some $a_1>1$, $N\ge n^{a_1}$ for $n$ sufficiently large. For $n$ sufficiently large, on the event $E_1\cap E'_2$ the following holds. 
For $i\in [k_0]$, \begin{equation} \label{eq:propRlogN1} \p{\exists t\in \delta_n \mathbb{N}_0 \cap[0, T_n^-] : \tilde \zeta^{n,i}_{t}\le D_n^- \Big|\mathcal F_0 } \le N^{-1}. \end{equation} For $i\in [k_0]$ and $s\le t \in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$ with $t-s \ge K \log N$, if $\tilde \zeta^{n,i}_{s } \ge D_n^-$ then \begin{align} \label{eq:propRlogN2} &\p{\tilde \zeta^{n,i}_t \le -d_n \Big| \mathcal F_{s}} \le (\log N)^{2-\frac 18 \alpha C} \quad \text{ and }\quad \p{\tilde \zeta^{n,i}_t \le -\tfrac 1 {64} \alpha d_n +2 \Big| \mathcal F_{s}} \le (\log N)^{2-2^{-9} \alpha^2 C}. \end{align} For $i\in [k_0]$ and $t\in t^*\mathbb{N}_0\cap [0,T^-_n]$, \begin{equation} \label{eq:propRlogN3} \p{\tilde \zeta^{n,i}_t \le -d_n \Big| \mathcal F_{0}} \le (\log N)^{-\frac 18 \alpha C}. \end{equation} \end{prop} The next lemma gives estimates on the probability that a pair of lineages are at a particular pair of sites, and gives bounds on the increments of $\zeta^{n,i}$. \begin{lemma} \label{lem:fromxixj} Suppose for some $a_1>1$, $N\ge n^{a_1}$ for $n$ sufficiently large. For $n$ sufficiently large, the following holds. Suppose the event $E$ occurs. Take $t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$, $i,j \in [k_0]$ and $x_i,x_j \in \frac 1n \mathbb{Z}$. If $x_i,x_j \in i^n_{T_n-t-\gamma_n}$, $\zeta^{n,i}_t, \zeta^{n,j}_t \in i^n_{T_n-t}$ and $\tau^n_{i,j}>t$ then \begin{equation} \label{eq:lemfromxixj1} \p{\zeta^{n,i}_{t+\gamma_n}=x_i, \zeta^{n,j}_{t+\gamma_n}=x_j \Big| \mathcal F_t} =n^{-2} \pi(x_i -\mu^n_{T_n-t-\gamma_n}) \pi(x_j -\mu^n_{T_n-t-\gamma_n}) (1+\mathcal O((\log N)^{-C})). \end{equation} If $x_i,x_j \in I^n_{T_n -t-\epsilon_n}$ and $\tau^n_{i,j}>t$ then \begin{equation} \label{eq:lemfromxixj2} \p{\zeta^{n,i}_{t+\epsilon_n}=x_i, \zeta^{n,j}_{t+\epsilon_n}=x_j \Big| \mathcal F_t} \le 2n^{-2} \epsilon_n^{-2}. \end{equation} Suppose instead the event $E_1 \cap E'_2$ occurs. For $t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$, $i\in [k_0]$ and $t'\in \delta_n \mathbb{N}_0 \cap [t,t+t^*]$, \begin{equation} \label{eq:lemfromxixj3} |\zeta^{n,i}_t- \zeta^{n,i}_{t'}|\le (\log N)^{2/3}, \quad |\zeta^{n,i}_t| \vee |\tilde \zeta^{n,i}_t|\le N^3 \quad \text{ and }\quad |\zeta^{n,i}_t -\zeta^{n,i}_{t+\epsilon_n}|\le 1. \end{equation} \end{lemma} \begin{proof} Suppose the event $E$ occurs and $\tau^n_{i,j}>t$. Then for $s\in \delta_n \mathbb{N}_0 \cap [0,T_n-t]$, \begin{align} \label{eq:lemfromxpf1} \p{\zeta^{n,i}_{t+s}=x_i, \zeta^{n,j}_{t+s}=x_j \Big| \mathcal F_t} =\frac{q^n_{T_n-t-s, T_n-t}(x_i,\zeta^{n,i}_t)}{p^n_{T_n-t}(\zeta^{n,i}_t)} \frac{q^n_{T_n-t-s, T_n-t}(x_j,\zeta^{n,j}_t) - N^{-1} \mathds{1}_{\zeta^{n,i}_t =\zeta^{n,j}_t, \, x_i=x_j}}{p^n_{T_n-t}(\zeta^{n,j}_t)-N^{-1}\mathds{1}_{\zeta^{n,i}_t=\zeta^{n,j}_t}}. \end{align} If $x_i,x_j \in i^n_{T_n-t-\gamma_n}$ and $\zeta^{n,i}_t, \zeta^{n,j}_t \in i^n_{T_n-t}$ then by the definition of the event $E_2$ in~\eqref{eq:eventE2}, the events $A^{(1)}_{T_n-t-\gamma_n}(x_i, \zeta^{n,i}_t)$ and $A^{(1)}_{T_n-t-\gamma_n}(x_j, \zeta^{n,j}_t)$ occur. 
Moreover, $p^n_{T_n-t}(\zeta^{n,j}_t) \ge \frac 15 g(d_n)\ge \frac 1 {10} (\log N)^{-C}$ by the definition of the event $E_1$ in~\eqref{eq:eventE1} and the definition of $d_n$ in~\eqref{eq:paramdefns}, and so \begin{align*} &\p{\zeta^{n,i}_{t+\gamma_n}=x_i, \zeta^{n,j}_{t+\gamma_n}=x_j \Big| \mathcal F_t}\\ &=(n^{-1} \pi(x_i-\mu^n_{T_n-t-\gamma_n})+\mathcal O(n^{-1} (\log N)^{-3C}))\cdot (1+\mathcal O(N^{-1} (\log N)^C))\\ &\quad \cdot (n^{-1} \pi(x_j-\mu^n_{T_n-t-\gamma_n})+\mathcal O(n^{-1} (\log N)^{-3C})+\mathcal O(N^{-1} (\log N)^C)). \end{align*} Since $\pi(x_i -\mu^n_{T_n-t-\gamma_n})^{-1} \vee \pi(x_j -\mu^n_{T_n-t-\gamma_n})^{-1} \le \pi(d_n)^{-1}\vee \pi(-d_n)^{-1}= \mathcal O((\log N)^{2C})$,~\eqref{eq:lemfromxixj1} follows. If $x_i,x_j \in I^n_{T_n-t-\epsilon_n}$ then by the definition of the event $E'_2$ in~\eqref{eq:eventE'2}, the events $A^{(4)}_{T_n-t-\epsilon_n}(x_i)$ and $A^{(4)}_{T_n-t-\epsilon_n}(x_j)$ occur. If $\zeta^{n,i}_t=\zeta^{n,j}_t$ then $p^n_{T_n-t}(\zeta^{n,j}_t)-N^{-1} \ge \frac 12 p^n_{T_n-t}(\zeta^{n,j}_t)$, and so~\eqref{eq:lemfromxixj2} follows from~\eqref{eq:lemfromxpf1}. Suppose now that the event $E_1\cap E_2'$ occurs, and suppose for some $s\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$ that $|\zeta^{n,i}_s|\le N^3$. Then the events $A^{(5)}_{T_n-s-\epsilon_n}(\zeta^{n,i}_s)$ and $\cap_{k\in [t^*\delta_n^{-1}]}A^{(6)}_{T_n-s-k\delta_n}(\zeta^{n,i}_s)$ occur, and so $|\zeta^{n,i}_{s+\epsilon_n}-\zeta^{n,i}_s|\le 1$ and $|\zeta^{n,i}_s-\zeta^{n,i}_{s'}|\le (\log N)^{2/3}$ $\forall s' \in \delta_n \mathbb{N}_0 \cap [s,s+t^*]$. Since $|\tilde \zeta^{n,i}_0|\le K_0$ and $|\zeta^{n,i}_0|\le K_0 +|\mu^n_{T_n}|\le 2\nu N^2$ for $n$ sufficiently large, it follows by an inductive argument that $|\zeta^{n,i}_t|\vee |\tilde \zeta^{n,i}_t|\le N^3$ $\forall t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$, which completes the proof. \end{proof} From now on in Section~\ref{subsec:mainprops}, we will assume for some $a_2>3$, $N\ge n^{a_2}$ for $n$ sufficiently large. We will also need an estimate for the probability that a pair of lineages coalesce in a time interval of length $\delta_n$. \begin{prop} \label{prop:coal} Suppose the event $E$ occurs. Take $t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$ and $x,y \in \frac 1n \mathbb{Z}$ with $|x-y|>n^{-1}$ and $x\in I^n_{T_n-t}$. If $\zeta^{n,i}_t=x=\zeta^{n,j}_t$ and $\tau^n_{i,j}>t$ then \begin{equation*} \p{\tau^n_{i,j}\in (t,t+\delta_n] \big| \mathcal F_t } = \begin{cases} n^2 N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1}\big(1+\mathcal O((\log N)^{-C})\big) \quad &\text{if }x\in i^n_{T_n-t},\\ \mathcal O(n^2 N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1}) &\text{otherwise.} \end{cases} \end{equation*} If instead $\zeta^{n,i}_t=x$, $\zeta^{n,j}_t=x+n^{-1}$ and $\tau^n_{i,j}>t$ then \begin{equation*} \p{\tau^n_{i,j}\in (t,t+\delta_n] \big| \mathcal F_t } = \begin{cases} m n^2 N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1}\big(1+\mathcal O((\log N)^{-C})\big) \quad &\text{if }x\in i^n_{T_n-t},\\ \mathcal O(n^2 N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1}) &\text{otherwise.} \end{cases} \end{equation*} If instead $\zeta^{n,i}_t=x,\zeta^{n,j}_t=y$ and $\tau^n_{i,j}>t$ then \begin{align*} \p{\tau^n_{i,j}\in (t,t+\delta_n] \big|\mathcal F_t } &= \mathcal O(n^{9/5} N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1}\mathds{1}_{|x-y|<Kn^{-1}}). 
\end{align*} \end{prop} \begin{proof} For $t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$ and $x,x' \in \frac 1n \mathbb{Z}$, if $\zeta^{n,i}_t=x$, $\zeta^{n,j}_t=x'$ and $\tau^n_{i,j}>t$, then by the definition of $\mathcal C^n_{T_n-t-\delta_n}(x,x')$ in~\eqref{eq:Cntdefn}, \begin{align*} \p{\tau^n_{i,j}\in (t,t+\delta_n] \big| \mathcal F_t } &= \begin{cases} \frac{|\mathcal C^n_{T_n-t-\delta_n}(x,x')|}{N p^n_{T_n-t}(x) \cdot Np^n_{T_n-t}(x')} \quad &\text{if }x\neq x' ,\\ \frac{|\mathcal C^n_{T_n-t-\delta_n}(x,x)|}{N p^n_{T_n-t}(x) (Np^n_{T_n-t}(x)-1)} \quad &\text{if }x= x' . \end{cases} \end{align*} If $x\in I^n_{T_n-t}$ and $E$ occurs, then by the definition of the event $E_3$ in~\eqref{eq:eventE3}, $\cap_{j=1}^3 B^{(j)}_{T_n-t-\delta_n}(x)$ occurs. Hence \begin{align*} |\mathcal C^n_{T_n-t-\delta_n}(x,x)|&=n^2 N \delta_n p^n_{T_n-t-\delta_n}(x)(1+\mathcal O(n^{-1/5})),\\ |\mathcal C^n_{T_n-t-\delta_n}(x,x+n^{-1})|&=\tfrac 12 m n^2 N \delta_n (p^n_{T_n-t-\delta_n}(x)+p^n_{T_n-t-\delta_n}(x+n^{-1}))(1+\mathcal O(n^{-1/5})),\\ \text{and }\quad |\mathcal C^n_{T_n-t-\delta_n}(x,y)|&=\mathcal O(n^{9/5} N \delta_n) p^n_{T_n-t-\delta_n}(x)\mathds{1}_{|x-y|<Kn^{-1}} \; \forall y\in \tfrac 1n \mathbb{Z} \text{ with }|y-x|>n^{-1}. \end{align*} The result follows by the definition of the event $E_1$ in~\eqref{eq:eventE1}, and since $n^{-1/5}=o((\log N)^{-C})$, $Np^n_{T_n-t}(x)\ge \frac 15 N g(D^+_n)\ge \frac 1 {10} n^{1/2}N^{1/2}$ for $x\in I^n_{T_n-t}$ and $g(d_n+n^{-1})^{-1}=\mathcal O((\log N)^C)$. \end{proof} Finally, we need a bound on the probability that two pairs of lineages coalesce in the same time interval of length $\delta_n$. \begin{prop} \label{prop:doublecoal} Suppose the event $E$ occurs. For $t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$, $x_1 \in i^n_{T_n-t}$, $x_2,x_3 \in \frac 1n \mathbb{Z}$, and $i_1,i_2,i_3\in [k_0]$, if $\zeta^{n,i_k}_t=x_k$ for $k\in \{1,2,3\}$ and $\tau^n_{i_k,i_\ell}>t$ $\forall k\neq \ell \in \{1,2,3\}$ then \begin{align} \label{eq:doublecoalstat1} \p{\tau^n_{i_1,i_2},\tau^n_{i_1,i_3} \in (t,t+\delta_n] \Big| \mathcal F_t} =\mathcal O(n^{9/5}N^{-2} \delta_n (\log N)^{2C} \mathds{1}_{|x_1-x_2|\vee |x_1-x_3|< Kn^{-1}}). \end{align} For $x_1,x_3 \in i^n_{T_n-t}$, $x_2,x_4 \in \frac 1n \mathbb{Z}$ and $i_1,i_2,i_3,i_4 \in [k_0]$, if $\zeta^{n,i_k}_t=x_k$ for $k\in \{1,2,3,4\}$ and $\tau^n_{i_k,i_\ell}>t$ $\forall k\neq \ell \in \{1,2,3,4\}$ then \begin{align}\label{eq:doublecoalstat2} \p{\tau^n_{i_1,i_2},\tau^n_{i_3,i_4} \in (t,t+\delta_n] \Big| \mathcal F_t } = \mathcal O(n^4 N^{-2} \delta_n^2 (\log N)^{2C} \mathds{1}_{|x_1-x_2|\vee |x_3-x_4|<Kn^{-1}}). \end{align} \end{prop} \begin{proof} For the first statement, since $B^{(4)}_{T_n-t-\delta_n}(x_1)$ occurs by the definition of the event $E_3$ in~\eqref{eq:eventE3}, \begin{align*} &\p{\tau^n_{i_1,i_2},\tau^n_{i_1,i_3} \in (t,t+\delta_n] \big| \mathcal F_t}\\ &= \frac{|\mathcal C^n_{T_n-t-\delta_n}(x_1,x_2,x_3)|}{Np^n_{T_n-t}(x_1) (Np^n_{T_n-t}(x_2)-\mathds{1}_{x_1=x_2})( Np^n_{T_n -t}(x_3)-\mathds{1}_{x_1=x_3}-\mathds{1}_{x_2=x_3})}\\ &\le \mathds{1}_{|x_1-x_2|\vee |x_1-x_3|< Kn^{-1}} \frac{6n^{9/5}N^{-2} \delta_n p^n_{T_n-t-\delta_n}(x_1)}{p^n_{T_n-t}(x_1)p^n_{T_n-t}(x_2)p^n_{T_n-t}(x_3) }. \end{align*} By the definition of the event $E_1$ in~\eqref{eq:eventE1} and since $x_1-\mu^n_{T_n-t}\le d_n$ and $g(d_n+Kn^{-1})^{-1}=\mathcal O((\log N)^C)$,~\eqref{eq:doublecoalstat1} follows. 
For the second statement, since $B^{(3)}_{T_n-t-\delta_n}(x_1)$ and $B^{(3)}_{T_n-t-\delta_n}(x_3)$ occur, \begin{align*} &\p{\tau^n_{i_1,i_2},\tau^n_{i_3,i_4} \in (t,t+\delta_n] \big| \mathcal F_t }\\ &\le \frac{|\mathcal C^n_{T_n-t-\delta_n}(x_1,x_2)||\mathcal C^n_{T_n-t-\delta_n}(x_3,x_4)|}{Np^n_{T_n-t}(x_1) (Np^n_{T_n-t}(x_2) -\mathds{1}_{x_1=x_2})(Np^n_{T_n-t}(x_3) -\sum_{j=1}^2\mathds{1}_{x_j=x_3})( Np^n_{T_n-t}(x_4)-\sum_{j=1}^3 \mathds{1}_{x_j=x_4}) }\\ &\le \mathds{1}_{|x_1-x_2|\vee |x_3-x_4| <Kn^{-1} } \frac{24 |\mathcal C^n_{T_n-t-\delta_n}(x_1,x_2)||\mathcal C^n_{T_n-t-\delta_n}(x_3,x_4)| }{N^4 p^n_{T_n-t}(x_1)p^n_{T_n-t}(x_2)p^n_{T_n-t}(x_3)p^n_{T_n-t}(x_4)}. \end{align*} Since $\cap_{j=1}^3 B^{(j)}_{T_n-t-\delta_n}(x_1)$ and $\cap_{j=1}^3 B^{(j)}_{T_n-t-\delta_n}(x_3)$ occur, and $(x_1-\mu^n_{T_n-t})\vee (x_3 -\mu^n_{T_n-t})\le d_n$,~\eqref{eq:doublecoalstat2} follows by the definition of the event $E_1$ in~\eqref{eq:eventE1}. \end{proof} We are now ready to prove Propositions~\ref{prop:tauk}-\ref{prop:tautautilde}. \begin{proof}[Proof of Proposition~\ref{prop:tauk}] Suppose $n$ is sufficiently large that $\gamma_n \le K \log N$. Suppose the event $E$ occurs. Take $t\in \delta_n \mathbb{N} \cap [t_k+2K \log N,t_{k+1})$, and take $x\in\frac 1n \mathbb{Z}$ such that $|x-\mu^n_{T_n-t}|\le \frac 1 {64}\alpha d_n+1$. Then by conditioning on $\mathcal F_t$, \begin{align} \label{eq:tauk*} &\p{\tilde \tau^{n}_{i,j} \in (t,t+\delta_n ], \zeta^{n,i}_{t}=x \Big| \mathcal F_{t_k} } \notag \\ &= \E{\p{\tilde \tau^{n}_{i,j} \in (t,t+ \delta_n ] \Big| \mathcal F_{t}}\mathds{1}_{ \zeta^{n,i}_{t}=x}\mathds{1}_{\tau^{n}_{i,j}>t} \bigg| \mathcal F_{t_k} } \notag \\ &\le \mathbb E \Big[ n^2 N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1} (1+\mathcal O((\log N)^{-C})) \notag \\ &\qquad \qquad \big(\mathds{1}_{\zeta^{n,j}_{t}=x} +m \mathds{1}_{|\zeta^{n,j}_{t}-x|=n^{-1}}+\mathcal O(n^{-1/5}) \mathds{1}_{|\zeta^{n,j}_{t}-x|< Kn^{-1}}\big) \mathds{1}_{ \zeta^{n,i}_{t}=x}\mathds{1}_{\tau^{n}_{i,j}> t} \Big| \mathcal F_{t_k} \Big] \notag \\ &=n^2 N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1} (1+\mathcal O((\log N)^{-C}))\notag \\ &\qquad \Big(\p{\zeta^{n,i}_{t}=x=\zeta^{n,j}_{t}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t_k}} +m\p{\zeta^{n,i}_{t}=x, |\zeta^{n,j}_{t}-x|=n^{-1}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t_k}} \notag \\ &\qquad \qquad +\mathcal O (n^{-1/5}) \p{\zeta^{n,i}_{t}=x, |\zeta^{n,j}_{t}-x| < K n^{-1}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t_k}}\Big), \end{align} where the inequality follows by Proposition~\ref{prop:coal} and the definition of $\tilde \tau^n_{i,j}$. By conditioning on $\mathcal F_{t-\gamma_n}$ and then on $\mathcal F_{t-\epsilon_n}$, \begin{align} \label{eq:tauk**} &\p{\zeta^{n,i}_{t}=x=\zeta^{n,j}_{t}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t_k}} \notag \\ &= {\mathbb E} \Big[\p{\zeta^{n,i}_{t}=x=\zeta^{n,j}_{t}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t -\gamma_n}} \mathds{1}_{\tau^{n}_{i,j}>t -\gamma_n }\mathds{1}_{|\tilde \zeta^{n,i}_{t -\gamma_n}| \vee |\tilde \zeta^{n,j}_{t -\gamma_n}|\le d_n} \Big| \mathcal F_{t_k}\Big] \notag \\ &\qquad + {\mathbb E} \Big[ \p{\zeta^{n,i}_{t}=x=\zeta^{n,j}_{t}, \tau^{n}_{i,j}>t \Big|\mathcal F_{t -\epsilon_n}} \mathds{1}_{\tau^{n}_{i,j}>t -\epsilon_n}\mathds{1}_{|\tilde \zeta^{n,i}_{t -\gamma_n}| \vee |\tilde \zeta^{n,j}_{t -\gamma_n}|> d_n} \Big| \mathcal F_{t_k} \Big]. 
\end{align} For the second term on the right hand side, note that by a union bound, and then by~\eqref{eq:propRlogN2} in Proposition~\ref{prop:RlogN} and~\eqref{eq:propintipstat2} in Proposition~\ref{prop:intip}, and since $\tilde \zeta^{n,i}_{t_k}\wedge \tilde \zeta^{n,j}_{t_k}\ge D_n^-$ by the definition of $I^{n,\epsilon}_{T_n-t_k}$ in~\eqref{eq:Intdefn}, and $t-\gamma_n-t_k\ge K \log N$, \begin{align} \label{eq:tautilde(A)} \p{|\tilde \zeta^{n,i}_{t -\gamma_n}| \vee |\tilde \zeta^{n,j}_{t -\gamma_n}|> d_n \Big| \mathcal F_{t_k}} &\le \p{\tilde \zeta^{n,i}_{t -\gamma_n}\wedge \tilde \zeta^{n,j}_{t -\gamma_n}< -d_n \Big| \mathcal F_{t_k}} +\p{\tilde \zeta^{n,i}_{t -\gamma_n}\vee \tilde \zeta^{n,j}_{t -\gamma_n}> d_n \Big|\mathcal F_{t_k}} \notag \\ &\le 2(\log N)^{2-\frac 18 \alpha C}+2(\log N)^3 e^{-(1+\frac 14 (1-\alpha))\kappa\lfloor d_n \rfloor} \notag \\ &=\mathcal O((\log N)^{3-\frac 18 \alpha C}) \end{align} by the definition of $d_n$ in~\eqref{eq:paramdefns}. Therefore, by~\eqref{eq:tauk**} and by~\eqref{eq:lemfromxixj1} and~\eqref{eq:lemfromxixj2} from Lemma~\ref{lem:fromxixj}, \begin{align*} &\p{\zeta^{n,i}_{t}=x=\zeta^{n,j}_{t}, \tau^n_{i,j}>t \Big|\mathcal F_{t_k}}\\ &\le n^{-2} \pi(x-\mu^n_{T_n-t})^2 \left(1+\mathcal O((\log N)^{-C})\right) +2n^{-2}\epsilon_n^{-2} \cdot \mathcal O((\log N)^{3-\frac 18 \alpha C})\\ &= n^{-2} \pi(x-\mu^n_{T_n-t})^2 (1+\mathcal O((\log N)^{-2})), \end{align*} since $\epsilon_n^{-2}=\mathcal O((\log N)^4)$, $\pi(x-\mu^n_{T_n-t})^{-2}=\mathcal O((\log N)^{\frac 1 {16}\alpha C})$ and we chose $C>2^{13}\alpha^{-2}$, so in particular $\frac 1 {16} \alpha C -7>2$. Hence using the same argument for the other terms on the right hand side of~\eqref{eq:tauk*}, and since $\pi(y-\mu^n_{T_n-t}) =\pi(x-\mu^n_{T_n-t}) (1+\mathcal O(n^{-1}))$ if $|x-y|<Kn^{-1}$, \begin{align*} &\p{\tilde \tau^{n}_{i,j} \in (t,t+\delta_n ], \zeta^{n,i}_{t}=x \Big| \mathcal F_{t_k} }\\ &\le N^{-1}\delta_n (1+2m) g(x-\mu^n_{T_n-t})^{-1} \pi(x-\mu^n_{T_n-t})^2 \left(1+\mathcal O((\log N)^{-2})\right). \end{align*} Note that if $\tilde \tau^n_{i,j} \in (t,t+\delta_n]$ then $|\tilde \zeta^{n,i}_t|\wedge |\tilde \zeta^{n,j}_t |\le \frac 1 {64} \alpha d_n$ by the definition of $\tilde \tau^n_{i,j}$, and $|\tilde \zeta^{n,i}_t-\tilde \zeta^{n,j}_t|< Kn^{-1}$ by Proposition~\ref{prop:coal}, and so for $n$ sufficiently large, we must have $|\tilde \zeta^{n,i}_t|\le \frac 1 {64}\alpha d_n +1$. Letting $\tilde i^n_s =\frac 1n \mathbb{Z} \cap [\mu^n_s-\frac 1 {64}\alpha d_n-1, \mu^n_s+\frac 1 {64}\alpha d_n +1]$ for $s\ge 0$, it follows that \begin{align} \label{eq:taukupper} & \p{\tilde \tau^{n}_{i,j} \in (t_k +2K \log N, t_{k+1}] \Big| \mathcal F_{t_k}} \notag \\ &\le N^{-1} \delta_n (1+2m) \left(1+\mathcal O((\log N)^{-2})\right) \sum_{t\in \delta_n \mathbb{N} \cap [t_k+ 2K \log N, t_{k+1})} \sum_{x\in \tilde i^n_{T_n-t}} g(x-\mu^n_{T_n-t})^{-1} \pi(x-\mu^n_{T_n-t})^{2} \notag \\ &\le \beta_n \left(1+\mathcal O((\log N)^{-2})\right), \end{align} by the definition of $\beta_n $ in~\eqref{eq:betadefn}. For a lower bound, note that for $t\in \delta_n \mathbb{N} \cap [t_k+2K \log N,t_{k+1})$, \begin{align} \label{eq:tauk(B)} &\p{\tilde \tau^{n}_{i,j} \in (t,t+\delta_n ] \Big| \mathcal F_{t_k}} \notag \\ &\geq \sum_{x\in 2(\log N)^{-C} \mathbb{Z}, |x-\mu^n_{T_n-t}|\le \frac 1 {64}\alpha d_n-1} \p{\tilde \tau^{n}_{i,j} \in (t,t+\delta_n ], |\zeta^{n,i}_{t}-x|< (\log N)^{-C} \Big| \mathcal F_{t_k}}.
\end{align} Now for $x\in 2(\log N)^{-C} \mathbb{Z}$ with $ |x-\mu^n_{T_n-t}|\le \frac 1 {64}\alpha d_n-1$, by conditioning on $\mathcal F_t$, \begin{align} \label{eq:taukdagger} &\p{\tilde \tau^{n}_{i,j} \in (t,t+\delta_n ], |\zeta^{n,i}_{t}-x|< (\log N)^{-C} \Big| \mathcal F_{t_k}} \notag \\ & = \E{\p{\tilde \tau^{n}_{i,j} \in (t,t+\delta_n ]\Big| \mathcal F_{t}} \mathds{1}_{\tau^{n}_{i,j}>t}\mathds{1}_{ |\zeta^{n,i}_{t}-x|< (\log N)^{-C}} \Big| \mathcal F_{t_k}} \notag \\ & \geq {\mathbb E} \Big[ n^2 N^{-1} \delta_n g(\zeta^{n,i}_t -\mu^n_{T_n-t})^{-1} (1-\mathcal O((\log N)^{-C})) (\mathds{1}_{\zeta^{n,i}_{t}=\zeta^{n,j}_{t}} +m\mathds{1}_{|\zeta^{n,i}_{t}-\zeta^{n,j}_{t}|=n^{-1}}) \notag \\ &\hspace{9.5cm} \mathds{1}_{\tau^{n}_{i,j}>t}\mathds{1}_{ |\zeta^{n,i}_{t}-x|< (\log N)^{-C}} \Big| \mathcal F_{t_k} \Big] \notag \\ & = n^2 N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1}(1-\mathcal O((\log N)^{-C})) \notag \\ &\qquad \Big(\p{\zeta^{n,i}_{t}=\zeta^{n,j}_{t}, |\zeta^{n,i}_{t}-x|< (\log N)^{-C}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t_k}} \notag \\ &\qquad \qquad +m \p{|\zeta^{n,i}_{t}-\zeta^{n,j}_{t}|=n^{-1}, |\zeta^{n,i}_{t}-x|< (\log N)^{-C}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t_k}} \Big), \end{align} where the inequality follows by Proposition~\ref{prop:coal}. For the first term on the right hand side, by conditioning on $\mathcal F_{t-\gamma_n}$, \begin{align} \label{tauklower*} & \p{\zeta^{n,i}_{t}=\zeta^{n,j}_{t}, |\zeta^{n,i}_{t}-x|< (\log N)^{-C}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t_k}} \notag \\ &\ge {\mathbb E} \Big[ \p{\zeta^{n,i}_{t}=\zeta^{n,j}_{t}, |\zeta^{n,i}_{t}-x|< (\log N)^{-C}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t -\gamma_n}} \mathds{1}_{\tau^{n}_{i,j}>t -\gamma_n} \mathds{1}_{|\tilde \zeta^{n,i}_{t -\gamma_n}| \vee |\tilde \zeta^{n,j}_{t -\gamma_n}| \le d_n} \Big| \mathcal F_{t_k} \Big]. \end{align} By a union bound, if $\tau^n_{i,j}>t-\gamma_n$ then \begin{align} \label{eq:taudagger(*)2} \p{\tau^{n}_{i,j} \le t \Big| \mathcal F_{t -\gamma_n} } &\le \sum_{s\in \delta_n \mathbb{N} \cap [t-\gamma_n ,t)} \p{\tau^{n}_{i,j} \in (s,s+ \delta_n ], \zeta^{n,i}_s \in I^n_{T_n-s} \text{ or }\zeta^{n,j}_s \in I^n_{T_n-s} \Big| \mathcal F_{t -\gamma_n}} \notag \\ &\quad + \p{\exists s\in \delta_n \mathbb{N} \cap [t-\gamma_n, t):\zeta^{n,i}_{s}, \zeta^{n,j}_{s}\notin I^n_{T_n-s}, \, \tau^n_{i,j}>s \Big| \mathcal F_{t -\gamma_n}}. \end{align} Suppose $|\tilde \zeta^{n,i}_{t-\gamma_n}|\vee |\tilde \zeta^{n,j}_{t-\gamma_n}|\le d_n$. Take $s\in \delta_n \mathbb{N} \cap [t-\gamma_n,t)$, and let $I=2\mathbb{Z} \cap [\mu^n_{T_n-s}+(\log N)^{2/3}+K+\nu t^*+3, \mu^n_{T_n-s}+D^+_n]$; then by conditioning on $\mathcal F_{s}$ and using Proposition~\ref{prop:coal}, \begin{align} \label{eq:tauktauupper} &\p{\tau^{n}_{i,j} \in (s,s+ \delta_n ], \zeta^{n,i}_s \in I^n_{T_n-s} \Big| \mathcal F_{t -\gamma_n}} \notag \\ &\le {\mathbb E} \Big[\mathcal O(n^2 N^{-1} \delta_n g(\zeta^{n,i}_s-\mu^n_{T_n-s})^{-1}) \mathds{1}_{|\zeta^{n,i}_{s}-\zeta^{n,j}_{s}|< Kn^{-1}} \mathds{1}_{\tau^{n}_{i,j}>s} \mathds{1}_{\zeta^{n,i}_s \in I^n_{T_n-s}} \Big| \mathcal F_{t -\gamma_n}\Big] \notag \\ &\le \mathcal O(n^2 N^{-1}\delta_n) \sum_{x' \in I} g(x' +1-\mu^n_{T_n-s})^{-1} \mathbb{P}\Big(| \zeta^{n,i}_{s} - x'|\le 1, |\zeta^{n,j}_{s} -x'|\le 2, \tau^{n}_{i,j}>s \Big| \mathcal F_{t -\gamma_n}\Big) \notag \\ &\quad +\mathcal O(n^2 N^{-1}\delta_n g((\log N)^{2/3}+K+\nu t^*+4)^{-1}). \end{align} Take $s'\in [s-t^*,s]$ such that $s'-(t-\gamma_n)\in t^* \mathbb{N}_0$.
Then by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj}, for $x'\in I$, \begin{align*} &\mathbb{P}\Big( |\zeta^{n,i}_{s} - x'|\le 1, |\zeta^{n,j}_{s} - x'|\le 2, \tau^{n}_{i,j}>s \Big| \mathcal F_{t -\gamma_n}\Big)\\ &\le \mathbb{P}\Big( \zeta^{n,i}_{s' } \ge x'-1-(\log N)^{2/3}, \, \zeta^{n,j}_{s'} \ge x'-2-(\log N)^{2/3}, \tau^{n}_{i,j}>s' \Big| \mathcal F_{t -\gamma_n}\Big)\\ &\le (\log N)^4 e^{2(1+\frac 14 (1-\alpha))\kappa(d_n-(x'-3-(\log N)^{2/3}-\mu^n_{T_n-s'}))} \end{align*} by~\eqref{eq:propintipstat*} in Proposition~\ref{prop:intip} (since $s'-(t-\gamma_n)\le \gamma_n\le K \log N$ and we are assuming $\tilde \zeta^{n,i}_{t-\gamma_n}\vee \tilde \zeta^{n,j}_{t-\gamma_n}\le d_n$). Therefore, by~\eqref{eq:tauktauupper}, \begin{align} \label{eq:tauearly*} &\p{\tau^{n}_{i,j} \in (s,s+ \delta_n ], \zeta^{n,i}_s \in I^n_{T_n-s} \Big| \mathcal F_{t-\gamma_n}} \notag \\ &\le \mathcal O(n^2 N^{-1}\delta_n) \Big( \sum_{x'\in I} g(x'+1-\mu^n_{T_n-s})^{-1} (\log N)^{4+4C} e^{4\kappa (\log N)^{2/3}} e^{-2(1+\frac 14 (1-\alpha))\kappa(x'-3-\mu^n_{T_n-s'})} \notag \\ &\hspace{4cm} + 2e^{\kappa ((\log N)^{2/3}+K+\nu t^*+4)}\Big) \notag \\ &= \mathcal O(n^2 N^{-1}\delta_n (\log N)^{4+4C} e^{4\kappa(\log N)^{2/3}} ) \end{align} since $g(y)^{-1} \le 2e^{\kappa y}$ for $y\ge 0$, and by the definition of the event $E_1$ in~\eqref{eq:eventE1}. For the second term on the right hand side of~\eqref{eq:taudagger(*)2}, note that by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj} and by the definition of the event $E_1$, \begin{align*} &\p{\exists s\in \delta_n \mathbb{N} \cap [t-\gamma_n,t): \zeta^{n,i}_s, \zeta^{n,j}_s \notin I^n_{T_n-s},\, \tau^{n}_{i,j}>s \Big| \mathcal F_{t -\gamma_n}}\\ &\le \mathbb{P} \Big(\exists s ' \in [t-\gamma_n,t): s'-(t-\gamma_n) \in t^*\mathbb{N}_0, \tilde \zeta^{n,i}_{s'}\wedge \tilde \zeta^{n,j}_{s'} \ge D^+_n -(\log N)^{2/3}-2\nu t^*, \tau^{n}_{i,j}>s' \Big| \mathcal F_{t-\gamma_n}\Big) \\ &\le (t^*)^{-1}\gamma_n (\log N)^4 e^{2(1+\frac 14 (1-\alpha))\kappa (d_n-(D_n^+-(\log N)^{2/3}-2\nu t^* -1))} \end{align*} by~\eqref{eq:propintipstat*} in Proposition~\ref{prop:intip} and since $\tilde \zeta^{n,i}_{t-\gamma_n} \vee \tilde \zeta^{n,j}_{t-\gamma_n}\le d_n$. Note that $e^{-2(1+\frac 14 (1-\alpha)) \kappa D_n^+}=\left( \frac n N \right)^{(1+\frac 14 (1-\alpha))(1-2c_0)}\le \frac n N$ by~\eqref{eq:Dn+-defn} and our choice of $c_0$. Hence, by~\eqref{eq:tauearly*}, substituting into~\eqref{eq:taudagger(*)2}, \begin{align*} \p{\tau^{n}_{i,j} \le t \Big| \mathcal F_{t -\gamma_n} } &\le \mathcal O(n^2 N^{-1} \gamma_n (\log N)^{4+4C}e^{4\kappa(\log N)^{2/3}}) + \mathcal O(\gamma_n (\log N)^{4+4C} e^{4\kappa(\log N)^{2/3}} nN^{-1})\\ &=\mathcal O(n^{-1-\frac 12 (a_2-3)}), \end{align*} since $N\ge n^{a_2}$ for $n$ sufficiently large, with $a_2>3$. 
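(Indeed, for any fixed $\eta>0$, $\gamma_n (\log N)^{4+4C}e^{4\kappa(\log N)^{2/3}}\le N^\eta$ for $n$ sufficiently large, and taking $\eta=\frac{a_2-3}{2a_2}$ gives
$$
n^2 N^{-1}\cdot N^{\eta}\le n^{2-(1-\eta)a_2}=n^{-1-\frac 12 (a_2-3)};
$$
we will absorb subpolynomial factors of this kind in the same way several times below.)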
Therefore if $|\tilde \zeta^{n,i}_{t-\gamma_n}|\vee |\tilde \zeta^{n,j}_{t-\gamma_n}|\le d_n$ and $\tau^n_{i,j}>t-\gamma_n$, \begin{align} \label{eq:taukstatdist} &\p{\zeta^{n,i}_{t}=\zeta^{n,j}_{t}, |\zeta^{n,i}_{t}-x|< (\log N)^{-C}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t -\gamma_n}} \notag \\ &\ge \p{\zeta^{n,i}_{t}=\zeta^{n,j}_{t}, |\zeta^{n,i}_{t}-x|< (\log N)^{-C} \Big| \mathcal F_{t -\gamma_n}} - \p{\tau^{n}_{i,j} \le t \Big| \mathcal F_{t -\gamma_n} } \notag \\ &\ge \pi(x-\mu^n_{T_n-t})^2 \cdot 2(\log N)^{-C} n^{-1} \left(1-\mathcal O((\log N)^{-C})\right) -\mathcal O(n^{-1-\frac 12 (a_2-3)}) \end{align} by~\eqref{eq:lemfromxixj1} in Lemma~\ref{lem:fromxixj} and since $\pi(y-\mu^n_{T_n-t})=\pi(x-\mu^n_{T_n-t})(1+\mathcal O((\log N)^{-C}))$ if $|y-x|<(\log N)^{-C}$. To bound the other terms in~\eqref{tauklower*}, note first that by a union bound, \begin{align} \label{eq:tauk*3} \p{\tau^{n}_{i,j} \le t -\gamma_n \Big| \mathcal F_{t_k } } &\le \sum_{s\in \delta_n \mathbb{N}_0 \cap [t_k,t-\gamma_n)} \p{\tau^{n}_{i,j} \in (s,s+ \delta_n ], \zeta^{n,i}_s \in I^n_{T_n-s} \text{ or }\zeta^{n,j}_s \in I^n_{T_n-s} \Big| \mathcal F_{t_k}} \notag \\ &\qquad + \p{\exists s'\in \delta_n \mathbb{N}_0 \cap [t_k,t-\gamma_n): \zeta^{n,i}_{s'}\wedge \zeta^{n,j}_{s'} \notin I^n_{T_n-s'} \Big| \mathcal F_{t_k }} . \end{align} By Proposition~\ref{prop:coal}, for $s\in \delta_n \mathbb{N}_0 \cap [t_k,t-\gamma_n)$, \begin{align} \label{eq:tauearly**} \p{\tau^{n}_{i,j} \in (s,s+\delta_n ], \zeta^{n,i}_s \in I^n_{T_n-s} \Big| \mathcal F_{t_k}} &=\E{ \p{\tau^{n}_{i,j} \in (s,s+\delta_n ] \Big| \mathcal F_{s}} \mathds{1}_{\zeta^{n,i}_s \in I^n_{T_n-s}} \Big| \mathcal F_{t_k}} \notag \\ &= \mathcal O(n^2 N^{-1} \delta_n g(D_n^+)^{-1}) \notag \\ &= \mathcal O(n^{3/2}N^{-1/2} \delta_n) \end{align} since $\kappa D_n^+\le \frac 12 \log (N/n)$. For the second term on the right hand side of~\eqref{eq:tauk*3}, by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj} and by the definition of the event $E_1$ in~\eqref{eq:eventE1}, \begin{align*} &\p{\exists s' \in \delta_n \mathbb{N}_0 \cap [t_k, t-\gamma_n): \zeta^{n,i}_{s'}\wedge \zeta^{n,j}_{s'} \notin I^n_{T_n-s'} \Big| \mathcal F_{t_k }}\\ &\le \p{\exists s'\in [t_k,t-\gamma_n): s'-t_k \in t^*\mathbb{N}_0, \tilde \zeta^{n,i}_{s'}\wedge \tilde \zeta^{n,j}_{s'} \ge D^+_n - (\log N)^{2/3}-2\nu t^* \Big| \mathcal F_{t_k }}\\ &\le (t^*)^{-1} t_1 (\log N)^3 e^{(1+\frac 14 (1-\alpha))\kappa((1-\epsilon)D_n^+ -(D^+_n -(\log N)^{2/3}-2\nu t^*-1))} \end{align*} by~\eqref{eq:propintipstat2} and~\eqref{eq:propintipstat3} in Proposition~\ref{prop:intip} and since $\tilde \zeta^{n,i}_{t_k} \wedge \tilde \zeta^{n,j}_{t_k}\le (1-\epsilon)D_n^+$. Hence by~\eqref{eq:tauk*3} and~\eqref{eq:tauearly**}, and since $\kappa(1+\frac 14 (1-\alpha))D^+_n \ge \frac 12 \log (N/n)$ by the definition of $D_n^+$ in~\eqref{eq:Dn+-defn}, \begin{align} \label{eq:taukA} \p{\tau^{n}_{i,j} \le t-\gamma_n \Big| \mathcal F_{t_k } } &\le \mathcal O(t_1 n^{3/2} N^{-1/2})+\mathcal O(t_1 (\log N)^3 e^{2\kappa(\log N)^{2/3}} n^{\epsilon/2} N^{-\epsilon/2})\notag \\ &=\mathcal O(n^{-(\frac 13 (a_2-3) \wedge \epsilon)}). 
\end{align} Therefore, substituting into~\eqref{tauklower*} and using~\eqref{eq:tautilde(A)} and~\eqref{eq:taukstatdist}, \begin{align*} & \p{\zeta^{n,i}_{t}=\zeta^{n,j}_{t}, |\zeta^{n,i}_{t}-x|< (\log N)^{-C}, \tau^{n}_{i,j}>t \Big| \mathcal F_{t_k}} \notag \\ &\ge 2\pi(x-\mu^n_{T_n-t})^2 (\log N)^{-C} n^{-1} \left(1-\mathcal O((\log N)^{-C})\right) (1-\mathcal O(n^{-(\frac 13 (a_2-3)\wedge \epsilon)})-\mathcal O((\log N)^{3-\frac 18 \alpha C})). \end{align*} Since we chose $C>2^{13}\alpha^{-2}$, we have $\frac 18 \alpha C -3>2$. Hence by the same argument for the second term on the right hand side of~\eqref{eq:taukdagger}, and then substituting into~\eqref{eq:tauk(B)}, \begin{align*} &\p{\tilde \tau^{n}_{i,j} \in (t,t+\delta_n ] \Big| \mathcal F_{t_k}}\\ &\geq \sum_{x\in 2(\log N)^{-C} \mathbb{Z}, |x-\mu^n_{T_n-t}|\le \frac 1 {64}\alpha d_n-1} 2(\log N)^{-C} n N^{-1} \delta_n \cdot (1+2m) \frac{\pi(x-\mu^n_{T_n-t})^2} {g(x-\mu^n_{T_n-t})} \left(1-\mathcal O((\log N)^{-2})\right)\\ &=\beta_n t_1^{-1} \delta_n (1-\mathcal O((\log N)^{-2})), \end{align*} since $\frac 1 {32}\alpha^2 C >2$ and $\frac 1 {64}\alpha C>2$, which, together with~\eqref{eq:taukupper}, completes the proof. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:doublecoaltk}] Suppose $n$ is sufficiently large that $2K\log N \ge \epsilon_n$. Suppose the event $E$ occurs. We begin by proving the first statement~\eqref{eq:propdoubletk1}. Take $s<t \in \delta_n \mathbb{N}\cap [t_k+2K \log N, t_{k+1})$. Note that if for some $\ell,\ell'\in [k_0]$, $\tilde \tau^n_{\ell, \ell '}\in (t,t+\delta_n]$ then $|\tilde \zeta^{n,\ell}_t|\wedge |\tilde \zeta^{n,\ell '}_t|\le \frac 1 {64} \alpha d_n$ by the definition of $\tilde \tau^n_{\ell, \ell '}$ in~\eqref{eq:tildetaudefn}, and $|\tilde \zeta^{n,\ell}_t- \tilde \zeta^{n,\ell '}_t|<Kn^{-1}$ by Proposition~\ref{prop:coal}, so in particular $|\tilde \zeta^{n,\ell }_t|\le d_n$. Hence by conditioning on $\mathcal F_{t}$ and applying Proposition~\ref{prop:coal}, \begin{align} \label{eq:doublecoaltk*} &\p{\tilde \tau^{n}_{i,j_1} \in (s,s+\delta_n], \tilde \tau^{n}_{i,j_2} \in (t,t+\delta_n] \Big|\mathcal F_{t_k}} \notag \\ &\le \E{ \mathcal O(n^2 N^{-1}\delta_n g(\tilde \zeta^{n,i}_t)^{-1}) \mathds{1}_{| \tilde \zeta^{n,i}_{t}|\le d_n} \mathds{1}_{\tilde \tau^{n}_{i,j_1}\in (s,s+\delta_n]} \Big|\mathcal F_{t_k}} \notag \\ &\le \mathcal O( n^2 N^{-1} \delta_n (\log N)^C) \p{\tilde \tau^{n}_{i,j_1} \in (s,s+\delta_n] \Big|\mathcal F_{t_k}}. \end{align} By conditioning on $\mathcal F_{s}$ and applying Proposition~\ref{prop:coal}, \begin{align*} &\p{\tilde \tau^{n}_{i,j_1} \in (s,s+\delta_n] \Big|\mathcal F_{t_k}}\\ &\le {\mathbb E} \Big[\mathcal O(n^2 N^{-1} \delta_n g(\tilde \zeta^{n,i}_s)^{-1}) \mathds{1}_{\tau^n_{i,j_1}>s} \mathds{1}_{|\tilde \zeta^{n,i}_{s}|\le d_n} \mathds{1}_{|\zeta^{n,i}_{s}-\zeta^{n,j_1}_{s}|< Kn^{-1}} \Big| \mathcal F_{t_k} \Big]\\ &= \mathcal O( n^2 N^{-1} \delta_n (\log N)^{C}) \p{|\tilde \zeta^{n,i}_{s}|\le d_n, |\zeta^{n,i}_{s}-\zeta^{n,j_1}_{s}|< Kn^{-1}, \tau^n_{i,j_1}>s \Big| \mathcal F_{t_k} }. 
\end{align*} Then since $s-t_k \ge \epsilon_n$, by conditioning on $\mathcal F_{s-\epsilon_n}$, \begin{align} \label{eq:doublecoaltkS} &\p{|\tilde \zeta^{n,i}_{s}|\le d_n, |\zeta^{n,i}_{s}-\zeta^{n,j_1}_{s}|< Kn^{-1}, \tau^n_{i,j_1}>s \Big| \mathcal F_{t_k} } \notag \\ &\le {\mathbb E} \Big[ \p{|\tilde \zeta^{n,i}_{s}|\le d_n, |\zeta^{n,i}_{s}-\zeta^{n,j_1}_{s}|< Kn^{-1}\Big| \mathcal F_{s-\epsilon_n}} \mathds{1}_{\tau^n_{i,j_1}>s-\epsilon_n} \Big| \mathcal F_{t_k} \Big] \notag \\ &\le \E{\sum_{x\in i^n_{T_n-s},y \in \frac 1n \mathbb{Z}, |x-y|< Kn^{-1}} \p{\zeta^{n,i}_s =x, \zeta^{n,j_1}_s=y \Big| \mathcal F_{s-\epsilon_n}} \mathds{1}_{\tau^n_{i,j_1}>s-\epsilon_n} \Bigg| \mathcal F_{t_k}} \notag \\ &\le (2n d_n+1) 2K\cdot 2 n^{-2} \epsilon_n^{-2} \end{align} by~\eqref{eq:lemfromxixj2} in Lemma~\ref{lem:fromxixj}. Hence, by~\eqref{eq:doublecoaltk*}, and by the same argument for the case $s>t$, if $s\neq t \in \delta_n \mathbb{N} \cap [t_k+2K\log N,t_{k+1})$, \begin{align} \label{eq:doublecoalA} &\p{\tilde \tau^{n}_{i,j_1} \in (s,s+\delta_n], \tilde \tau^{n}_{i,j_2} \in (t , t+\delta_n] \Big|\mathcal F_{t_k}} =\mathcal O(n^3 N^{-2}\delta_n^2 (\log N)^{2C+5}). \end{align} By Proposition~\ref{prop:doublecoal}, for $t \in \delta_n \mathbb{N} \cap [t_k+2K\log N,t_{k+1})$, \begin{align} \label{eq:doublecoal1} \p{\tilde \tau^{n}_{i,j_1} , \tilde \tau^{n}_{i,j_2} \in (t,t+\delta_n] \Big|\mathcal F_{t_k}} &=\mathcal O( n^{9/5}N^{-2} \delta_n (\log N)^{2C}) +\p{\tilde \tau^{n}_{i,j_1}\in (t,t+\delta_n ], \tau^{n}_{j_1,j_2} \le t \Big| \mathcal F_{t_k}}. \end{align} By a union bound, and then by conditioning on $\mathcal F_t$ and using Proposition~\ref{prop:coal}, \begin{align*} &\p{\tilde \tau^{n}_{i,j_1} \in (t,t+\delta_n ], \tau^{n}_{j_1,j_2} \in (t-\epsilon_n, t] \Big| \mathcal F_{t_k}}\\ &= \sum_{t'\in \delta_n \mathbb{N}\cap [t-\epsilon_n, t)} \p{\tilde \tau^{n}_{i,j_1} \in (t,t+\delta_n ], \tau^{n}_{j_1,j_2} \in (t',t'+ \delta_n] \Big| \mathcal F_{t_k}} \\ &\le \sum_{t'\in \delta_n \mathbb{N}\cap [t-\epsilon_n, t)} \E{\mathcal O(n^2 N^{-1} \delta_ng(\tilde \zeta^{n,j_1}_t)^{-1})\mathds{1}_{|\tilde \zeta^{n,j_1}_{t}|\le d_n} \mathds{1}_{\tau^{n}_{j_1,j_2}\in (t',t'+\delta_n]} \Big| \mathcal F_{t_k}} \\ &\le \sum_{t'\in \delta_n \mathbb{N}\cap [t-\epsilon_n, t)} \mathcal O(n^2 N^{-1}\delta_n (\log N)^C) \p{\tau^{n}_{j_1,j_2}\in (t',t'+\delta_n] , |\tilde \zeta^{n,j_1}_{t'}|\le d_n+(\log N)^{2/3}+1 \Big| \mathcal F_{t_k}} \end{align*} by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj} and the definition of the event $E_1$ in~\eqref{eq:eventE1}. Then by Proposition~\ref{prop:coal} again, for $t'\in \delta_n \mathbb{N} \cap [t-\epsilon_n,t)$, by conditioning on $\mathcal F_{t'}$, \begin{align*} \p{\tau^{n}_{j_1,j_2}\in (t',t'+\delta_n] , |\tilde \zeta^{n,j_1}_{t'}|\le d_n+(\log N)^{2/3}+1 \Big| \mathcal F_{t_k}} &= \mathcal O(n^2 N^{-1}\delta_n g(d_n+(\log N)^{2/3}+1)^{-1}). \end{align*} Hence \begin{align} \label{eq:doublecoal2} \p{\tilde \tau^{n}_{i,j_1} \in (t,t+\delta_n ], \tau^{n}_{j_1,j_2} \in (t-\epsilon_n, t] \Big| \mathcal F_{t_k}} &=\mathcal O(n^4 N^{-2} \delta_n \epsilon_n (\log N)^{C}e^{2\kappa(\log N)^{2/3}}) \notag \\ &=\mathcal O(n^{1-\frac 12 (a_2-3)}N^{-1} \delta_n).
\end{align} Moreover, by Proposition~\ref{prop:coal}, conditioning on $\mathcal F_t$, and then conditioning on $\mathcal F_{t-\epsilon_n}$, \begin{align} \label{eq:doublecoaltk*2} &\p{\tilde \tau^{n}_{i,j_1} \in (t,t+\delta_n ], \tau^{n}_{j_1,j_2} \le t-\epsilon_n \Big| \mathcal F_{t_k}} \notag \\ &=\E{\mathcal O(n^2 N^{-1} \delta_n g(\tilde \zeta^{n,i}_t)^{-1})\mathds{1}_{\tau^n_{i,j_1}>t} \mathds{1}_{|\tilde \zeta^{n,i}_{t}|\le d_n} \mathds{1}_{|\zeta^{n,i}_{t}-\zeta^{n,j_1}_{t}|< Kn^{-1}} \mathds{1}_{\tau^{n}_{j_1,j_2}\le t -\epsilon_n} \Big| \mathcal F_{t_k}} \notag \\ &\le \mathcal O(n^2 N^{-1} \delta_n(\log N)^C) \notag \\ &\qquad \qquad \cdot {\mathbb E} \Big[ \p{|\zeta^{n,i}_{t}-\zeta^{n,j_1}_{t}| < Kn^{-1}, |\tilde \zeta^{n,i}_{t}|\le d_n \Big| \mathcal F_{t -\epsilon_n}} \mathds{1}_{\tau^n_{i,j_1}>t-\epsilon_n} \mathds{1}_{\tau^{n}_{j_1,j_2} \le t-\epsilon_n} \Big| \mathcal F_{t_k} \Big]. \end{align} By the same argument as in~\eqref{eq:doublecoaltkS}, if $\tau^n_{i,j_1}>t-\epsilon_n$ then \begin{align*} \p{|\zeta^{n,i}_{t}-\zeta^{n,j_1}_{t}| < Kn^{-1}, |\tilde \zeta^{n,i}_{t}|\le d_n \Big| \mathcal F_{t -\epsilon_n}} &\le (2nd_n+1)2K \cdot 2 n^{-2} \epsilon_n^{-2} =\mathcal O(n^{-1} (\log N)^5). \end{align*} By the same argument as in~\eqref{eq:taukA} in the proof of Proposition~\ref{prop:tauk}, $$ \p{\tau^{n}_{j_1,j_2}\le t -\epsilon_n \Big| \mathcal F_{t_k}} =\mathcal O(n^{-(\frac 13 (a_2-3)\wedge \epsilon)}). $$ Hence by~\eqref{eq:doublecoaltk*2}, \begin{align} \label{eq:doublecoal3} \p{\tilde \tau^{n}_{i,j_1} \in (t,t+\delta_n ], \tau^{n}_{j_1,j_2} \le t-\epsilon_n \Big| \mathcal F_{t_k}} &= \mathcal O(n^{1-(\frac 13 (a_2-3)\wedge \epsilon)} N^{-1} \delta_n (\log N)^{C+5}). \end{align} Therefore, by~\eqref{eq:doublecoal1},~\eqref{eq:doublecoal2} and~\eqref{eq:doublecoal3}, \begin{align*} &\p{\tilde \tau^{n}_{i,j_1} , \tilde \tau^{n}_{i,j_2} \in (t,t+ \delta_n ]\Big| \mathcal F_{t_k}}\\ &=\mathcal O( n^{9/5}N^{-2} \delta_n (\log N)^{2C}) +\mathcal O(n^{1-\frac 12 (a_2-3)}N^{-1} \delta_n) +\mathcal O(n^{1-(\frac 13 (a_2-3)\wedge \epsilon)} N^{-1}\delta_n (\log N)^{C+5})\\ &= \mathcal O(n^{1-\frac 12 (\frac 13 (a_2-3)\wedge \epsilon)}N^{-1} \delta_n). \end{align*} Hence, by~\eqref{eq:doublecoalA} and a union bound, and since $N\ge n^3$, \begin{align*} \p{\tilde \tau^{n}_{i,j_1} , \tilde \tau^{n}_{i,j_2} \in (t_k , t_{k+1} ]\Big| \mathcal F_{t_k}} &=\mathcal O(N^{-1} (\log N)^{2C+5}t_1^2)+ \mathcal O(n^{1-\frac 12 (\frac 13 (a_2-3)\wedge \epsilon)}N^{-1} t_1), \end{align*} which completes the proof of the first statement~\eqref{eq:propdoubletk1}. For the second statement~\eqref{eq:propdoubletk2}, by Proposition~\ref{prop:doublecoal}, for $t\in \delta_n \mathbb{N} \cap [t_k +2K \log N, t_{k+1})$, \begin{align*} &\p{\tilde \tau^{n}_{i_1,j_1}, \tilde \tau^{n}_{i_2,j_2} \in (t,t+\delta_n] \Big| \mathcal F_{t_k}}\\ &\le \mathcal O(n^4 N^{-2} \delta_n^2 (\log N)^{2C}) + \sum_{i,j \in \{i_1,i_2,j_1,j_2\}, i\neq j} \p{\tilde \tau^{n}_{i_1,j_1}, \tilde \tau^n_{i_2,j_2} \in (t,t+\delta_n] , \tau^{n}_{i,j}\le t \Big| \mathcal F_{t_k}}. \end{align*} The second statement~\eqref{eq:propdoubletk2} then follows by the same argument as for~\eqref{eq:propdoubletk1}. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:tautautilde}] Suppose the event $E$ occurs. By the definition of $c_0$ before~\eqref{eq:Dn+-defn}, we can take $\epsilon>0$ sufficiently small that $2(1+\frac 14 (1-\alpha))(1-2\epsilon)(\frac 12-c_0)>1$. 
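(Such an $\epsilon$ exists since at $\epsilon=0$ the left hand side is $2(1+\frac 14 (1-\alpha))\cdot \frac 12 (1-2c_0)=(1+\frac 14 (1-\alpha))(1-2c_0)>1$, and it depends continuously on $\epsilon$.)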
For $t\in \delta_n \mathbb{N}_0\cap [0,T_n^-]$ and $x\in I^{n,\epsilon}_{T_n-t}$, by conditioning on $\mathcal F_t$, \begin{align} \label{eq:tildetau*} &\p{\tau^{n}_{i,j} \in (t,t+ \delta_n], \zeta^{n,i}_{t}=x \Big| \mathcal F_0} \notag \\ &=\E{\p{\tau^{n}_{i,j} \in (t,t+\delta_n] \Big| \mathcal F_{t}}\mathds{1}_{\tau^{n}_{i,j}>t}\mathds{1}_{\zeta^{n,i}_{t}=x}\Big|\mathcal F_0} \notag \\ &= \E{\mathcal O( n^2 N^{-1}\delta_n g(x-\mu^n_{T_n-t})^{-1})\mathds{1}_{\tau^{n}_{i,j}>t} \mathds{1}_{|\zeta^{n,j}_{t}-x|<Kn^{-1}}\mathds{1}_{\zeta^{n,i}_{t}=x}\Big|\mathcal F_0} \notag \\ &= \mathcal O(n^2 N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1}) \p{|\zeta^{n,j}_{t}-x|<Kn^{-1}, \zeta^{n,i}_{t}=x, \tau^{n}_{i,j}>t \Big|\mathcal F_0}, \end{align} where the second equality follows by Proposition~\ref{prop:coal}. If $t\ge \epsilon_n$, then for $y\in \frac 1n \mathbb{Z}$ with $|y-x|<Kn^{-1}$, by conditioning on $\mathcal F_{t-\epsilon_n}$, and by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj}, \begin{align} \label{eq:tildetau**} &\p{\zeta^{n,j}_{t}=y, \zeta^{n,i}_{t}=x,\tau^{n}_{i,j}>t \Big|\mathcal F_0} \notag \\ &= \E{\p{\zeta^{n,j}_{t}=y, \zeta^{n,i}_{t}=x, \tau^{n}_{i,j}>t \Big|\mathcal F_{t -\epsilon_n }} \mathds{1}_{\tau^{n}_{i,j}>t -\epsilon_n} \mathds{1}_{|\zeta^{n,j}_{t-\epsilon_n}-y|\le 1}\mathds{1}_{|\zeta^{n,i}_{t-\epsilon_n}-x|\le 1} \Big|\mathcal F_0} \notag \\ &\le 2 n^{-2} \epsilon_n^{-2} \p{|\zeta^{n,j}_{t -\epsilon_n}-x|\le 2,|\zeta^{n,i}_{t -\epsilon_n}-x|\le 1, \tau^{n}_{i,j}>t -\epsilon_n \Big|\mathcal F_0 }, \end{align} for $n$ sufficiently large, by~\eqref{eq:lemfromxixj2} in Lemma~\ref{lem:fromxixj}. For $s\ge 0$, let $$i^{n,-}_s = \tfrac 1n \mathbb{Z} \cap [\mu^n_s+D^-_n, \mu^n_s-\tfrac 1 {64}\alpha d_n] \quad \text{ and } \quad i^{n,+}_s = \tfrac 1n \mathbb{Z} \cap [\mu^n_s+\tfrac 1 {64} \alpha d_n, \mu^n_s+(1-\epsilon)D^+_n].$$ Suppose $x\in i^{n,+}_{T_n-t}$. Since $x\le \mu^n_{T_n-t}+(1-\epsilon)D_n^+$, if $t\ge K \log N+\epsilon_n$ then by~\eqref{eq:propintipstat1} in Proposition~\ref{prop:intip}, \begin{align*} \p{\zeta^{n,j}_{t -\epsilon_n}\ge x- 2,\zeta^{n,i}_{t -\epsilon_n}\ge x- 1, \tau^{n}_{i,j}>t -\epsilon_n \Big|\mathcal F_0 } & \le (\log N)^7 e^{-2(1+\frac 14 (1-\alpha))\kappa(x-3-\mu^n_{T_n-t+\epsilon_n})}. \end{align*} Therefore, by~\eqref{eq:tildetau*} and~\eqref{eq:tildetau**}, if $t\ge K\log N+\epsilon_n$, \begin{align} \label{eq:tildetauB} &\p{\tau^{n}_{i,j} \in (t,t+\delta_n ], \zeta^{n,i}_{t}=x \Big| \mathcal F_0} \notag \\ &\le \mathcal O(n^2 N^{-1} \delta_n g(x-\mu^n_{T_n-t})^{-1}) \cdot 4K n^{-2} \epsilon_n^{-2} \cdot (\log N)^7 e^{-2(1+\frac 14 (1-\alpha))\kappa(x-3-\mu^n_{T_n-t+\epsilon_n})} \notag \\ &=\mathcal O\left( (\log N)^{11} N^{-1} \delta_n e^{-(1+\frac 12 (1-\alpha))\kappa(x-\mu^n_{T_n-t})}\right) \end{align} by the definition of the event $E_1$ in~\eqref{eq:eventE1}, and since $g(z)^{-1}\le 2e^{\kappa z}$ for $z\ge 0$. By~\eqref{eq:tildetau*} and~\eqref{eq:tildetau**}, if $t\ge \epsilon_n$ and $x\in i^{n,-}_{T_n-t}$, \begin{align*} \p{\tau^{n}_{i,j} \in (t,t+\delta_n], \zeta^{n,i}_{t}=x \Big| \mathcal F_0} &=\mathcal O( n^2 N^{-1} \delta_n) \cdot 4K n^{-2} \epsilon_n^{-2} \p{|\zeta^{n,i}_{t -\epsilon_n}-x|\le 1 \Big| \mathcal F_0}.
\end{align*} Therefore, if $t\ge K \log N+\epsilon_n$, \begin{align*} \p{\tau^{n}_{i,j} \in (t,t+\delta_n ], \zeta^{n,i}_{t}\in i^{n,-}_{T_n-t} \Big|\mathcal F_0} &\le \mathcal O(N^{-1} \delta_n \epsilon_n^{-2}) \sum_{x\in i^{n,-}_{T_n-t}} \p{| \zeta^{n,i}_{t-\epsilon_n}-x|\le 1 \Big|\mathcal F_0}\\ &=\mathcal O(nN^{-1} \delta_n \epsilon_n^{-2} (\log N)^{2-2^{-9}\alpha^2 C}) \end{align*} by~\eqref{eq:propRlogN2} in Proposition~\ref{prop:RlogN} and by the definition of the event $E_1$. By~\eqref{eq:tildetauB}, we now have that for $t\in \delta_n \mathbb{N} \cap [ K\log N +\epsilon_n,T_n^-]$, \begin{align} \label{eq:tautautilde4} &\p{\tau^{n}_{i,j} \in (t,t+\delta_n], |\tilde \zeta^{n,i}_{t}|\ge \tfrac 1 {64} \alpha d_n, \zeta^{n,i}_{t} \in I^{n,\epsilon}_{T_n-t} \bigg|\mathcal F_0} \notag \\ &= \mathcal O( n N^{-1}\delta_n (\log N)^{6-2^{-9} \alpha^2 C}) + \mathcal O( N^{-1} \delta_n (\log N)^{11}) \sum_{x\in i^{n,+}_{T_n-t}} e^{-(1+\frac 12 (1-\alpha))\kappa(x-\mu^n_{T_n-t})} \notag \\ &= \mathcal O( n N^{-1}\delta_n (\log N)^{11-2^{-9} \alpha^2 C}). \end{align} For $t\in \delta_n \mathbb{N} \cap [ \epsilon_n,T^-_n]$ and $x\in \frac 1n \mathbb{Z}$ with $|x-\mu^n_{T_n-t}|\le \frac 1 {64}\alpha d_n$, by~\eqref{eq:tildetau*} and~\eqref{eq:tildetau**}, \begin{align*} \p{\tau^{n}_{i,j}\in (t,t+\delta_n], \zeta^{n,i}_{t}=x \Big| \mathcal F_0} &\le \mathcal O(n^2 N^{-1} \delta_n g(\tfrac 1 {64}\alpha d_n)^{-1}) \cdot 4K \epsilon_n^{-2} n^{-2}\\ &=\mathcal O(N^{-1} \delta_n (\log N)^{4+\frac 1 {64}\alpha C}). \end{align*} Therefore, by~\eqref{eq:tautautilde4} and since we chose $C>2^{13}\alpha^{-2}$, for $t\in \delta_n \mathbb{N} \cap [ K \log N+\epsilon_n, T^-_n]$, \begin{align} \label{eq:tautautilde3} \p{\tau^{n}_{i,j} \in (t,t+\delta_n], \zeta^{n,i}_{t} \in I^{n,\epsilon}_{T_n-t} \Big|\mathcal F_0} &=\mathcal O( n N^{-1} \delta_n d_n (\log N)^{4+\frac 1 {64}\alpha C}). \end{align} Now note that for any $t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$, \begin{align} \label{eq:tautautilde2} \p{\tau^{n}_{i,j} \in (t,t+\delta_n], \zeta^{n,i}_{t} \in I^{n,\epsilon}_{T_n-t} \Big| \mathcal F_0} &=\E{\p{\tau^{n}_{i,j} \in (t,t+\delta_n]\Big| \mathcal F_{t}} \mathds{1}_{\zeta^{n,i}_{t} \in I^{n,\epsilon}_{T_n-t}} \Big| \mathcal F_0} \notag \\ &=\mathcal O(n^2 N^{-1}\delta_n g(D_n^+)^{-1}) \end{align} by Proposition~\ref{prop:coal}. 
Finally, by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj}, for $n$ sufficiently large, \begin{align} \label{eq:tautautilde1} & \p{\exists t \in \delta_n \mathbb{N}_0 \cap [0,Nn^{-1} \log N] :\zeta^{n,i}_{t} \wedge \zeta^{n,j}_{t} \notin I^{n,\epsilon}_{T_n-t} , \tau^{n}_{i,j}>t \Big| \mathcal F_0} \notag \\ &\le \p{\exists t \in t^*\mathbb{N}_0 \cap [0,Nn^{-1} \log N] :\tilde \zeta^{n,i}_{t} \wedge \tilde \zeta^{n,j}_{t}\ge (1-2\epsilon)D^+_n , \tau^{n}_{i,j}>t \Big| \mathcal F_0} \notag \\ &\quad + \p{\exists t\in\delta_n \mathbb{N}_0\cap [0,Nn^{-1}\log N]:\tilde \zeta^{n,i}_{t} \wedge \tilde \zeta^{n,j}_{t}\le D_n^- \Big| \mathcal F_0} \notag \\ &\le ( (t^*)^{-1} Nn^{-1} \log N+1) (\log N)^7 e^{2(1+\frac 14 (1-\alpha))\kappa(K_0-(1-2\epsilon)D_n^+ -1)}+2N^{-1} \notag \\ & \le N^{-\epsilon'} \end{align} for some $\epsilon '>0$, where the second inequality follows by~\eqref{eq:propintipstat1} and~\eqref{eq:propintipstat*} in Proposition~\ref{prop:intip} and~\eqref{eq:propRlogN1} in Proposition~\ref{prop:RlogN}, and the last inequality since we chose $\epsilon>0$ sufficiently small that $2(1+\frac 14(1-\alpha))(1-2\epsilon)(\frac 12 -c_0)>1$ and since $\kappa D^+_n =(1/2-c_0)\log (N/n)$. Hence by a union bound, \begin{align} \label{eq:tautautildeconc} &\p{\{\tau^{n}_{i,j}\neq \tilde \tau^{n}_{i,j}\}\cap \{\tau^n_{i,j}\le N n^{-1} \log N\} \Big| \mathcal F_0} \notag\\ &\le \p{\exists t \in \delta_n \mathbb{N}_0 \cap [0, N n^{-1} \log N]: \zeta^{n,i}_{t} \wedge \zeta^{n,j}_{t}\notin I^{n,\epsilon}_{T_n-t}, \tau^n_{i,j}>t \Big| \mathcal F_0} \notag \\ &\quad + \sum_{\{k\in \mathbb{N}_0:t_k \le N n^{-1} \log N\}} \sum_{t\in \delta_n \mathbb{N}_0\cap [t_k,t_k+2K \log N), i'\in \{i,j\}} \p{\tau^{n}_{i,j} \in (t,t+ \delta_n] , \zeta^{n,i'}_{t}\in I^{n,\epsilon}_{T_n-t} \Big| \mathcal F_0} \notag \\ &\quad + \sum_{t\in \delta_n \mathbb{N} \cap [2K\log N, Nn^{-1} \log N], i'\in \{i,j\}} \p{ \tau^{n}_{i,j} \in (t,t+\delta_n], |\tilde \zeta^{n,i'}_{t}|\ge \tfrac 1 {64}\alpha d_n, \zeta^{n,i'}_t \in I^{n,\epsilon}_{T_n-t} \Big| \mathcal F_0 } \notag \\ &\le N^{-\epsilon '}+\mathcal O(n^2 N^{-1}g(D_n^+)^{-1}\log N) + \mathcal O( n N^{-1} d_n (\log N)^{4+\frac 1 {64}\alpha C} \cdot N n^{-1} (\log N)^{2-C})\notag \\ &\qquad +\mathcal O( n N^{-1}(\log N)^{11-2^{-9} \alpha^2 C} \cdot N n^{-1} \log N) \notag \\ &\le \tfrac 12 (\log N)^{-2} \end{align} for $n$ sufficiently large, where the second inequality follows by~\eqref{eq:tautautilde1},~\eqref{eq:tautautilde2},~\eqref{eq:tautautilde3} and~\eqref{eq:tautautilde4}, and the last inequality since we chose $C>2^{13}\alpha^{-2}$ and so $2^{-9}\alpha^2 C -12>2$ and $\frac 12 C-6>2$, and since $g(D_n^+)^{-1}\le 2e^{\kappa D_n^+}=\mathcal O\left( (\frac N n )^{1/2-c_0}\right)$ and $N\ge n^3$. By a union bound and Proposition~\ref{prop:tauk}, for $n$ sufficiently large, \begin{align*} &\p{\tau^n_{i,j}>Nn^{-1} \log N \Big| \mathcal F_0}\\ &\le \p{\exists t\in \delta_n \mathbb{N}_0 \cap [0,Nn^{-1} \log N]: \zeta^{n,i}_t \wedge \zeta^{n,j}_t \notin I^{n,\epsilon}_{T_n-t}, \tau^n_{i,j}>t \Big| \mathcal F_0} +(1-\tfrac 12 \beta_n)^{\lfloor (t_1)^{-1} N n^{-1} \log N \rfloor}\\ &\le \tfrac 12 (\log N)^{-2}, \end{align*} for $n$ sufficiently large, by~\eqref{eq:tautautilde1} and the definition of $\beta_n$ in~\eqref{eq:betadefn}. By~\eqref{eq:tautautilde1} and~\eqref{eq:tautautildeconc}, this completes the proof. 
\end{proof} \subsection{Proof of Proposition~\ref{prop:intip}} \label{subsec:tipbulkproofs} Throughout the rest of Section~\ref{sec:mainproof}, we assume for some $a_1>1$, $N\ge n^{a_1}$ for $n$ sufficiently large. We need two preliminary lemmas for the proof of Proposition~\ref{prop:intip}. The first is an easy consequence of the definition of the event $E'_2$. \begin{lemma} \label{lem:jumpbound} For $n$ sufficiently large, on the event $E_1\cap E'_2$, for $t\in \delta_n \mathbb{N}_0 \cap [0,T^-_n]$, $i,j\in [k_0]$ and $\ell_1 , \ell_2 \in \frac 1n \mathbb{Z}\cap [K,D^+_n]$, if $ \zeta^{n,i}_t, \zeta^{n,j}_t \in I^n_{T_n-t}$, \begin{align*} \p{\tilde \zeta^{n,i}_{t+t^*}\ge \ell_1, \tilde \zeta^{n,j}_{t+t^*}\ge \ell_2 \Big| \mathcal F_t}\mathds{1}_{\tau^n_{i,j}>t} & \le c_1 e^{-(1+\frac 12 (1-\alpha))\kappa(\ell_1+1 -(\tilde \zeta^{n,i}_t \vee K)+\ell_2+1 -(\tilde \zeta^{n,j}_t \vee K))}\\ \text{and }\qquad \p{\tilde \zeta^{n,i}_{t+t^*}\ge \ell_1 \Big| \mathcal F_t} & \le c_1 e^{-(1+\frac 12 (1-\alpha))\kappa(\ell_1+1-(\tilde \zeta^{n,i}_t \vee K))}. \end{align*} \end{lemma} \begin{proof} Write $t'=T_n-(t+t^*)$. By the definition of $q^{n,+}$ in~\eqref{eq:qn+-defn}, and the definition of $\tilde \zeta^{n,i}$ and $\tilde \zeta^{n,j}$ in~\eqref{eq:zetadefns}, for $\ell_1, \ell_2 \in \frac 1n \mathbb{Z}$, if $\tau^n_{i,j}>t $, \begin{align} \label{eq:tracerformula} \p{\tilde \zeta^{n,i}_{t+t^*}\ge \ell_1, \tilde \zeta^{n,j}_{t+t^*}\ge \ell_2 \Big| \mathcal F_t } &\le \frac{q^{n,+}_{t',t'+t^*}(\ell_1+\mu^n_{t'},\zeta^{n,i}_t)}{p^n_{t'+t^*}(\zeta^{n,i}_t)} \frac{q^{n,+}_{t',t'+t^*}(\ell_2+\mu^n_{t' },\zeta^{n,j}_t)}{p^n_{t'+t^*}(\zeta^{n,j}_t)-N^{-1} \mathds{1}_{\zeta^{n,j}_t =\zeta^{n,i}_t}}. \end{align} By the definition of the event $E'_2$ in~\eqref{eq:eventE'2}, for $\ell \in I^n_{t'}$ and $z\in I^n_{t'+t^*}$ with $\ell -\mu^n_{t'}\ge K$, the event $A^{(2)}_{t'}( \ell , z)$ occurs, and so \begin{align*} \frac{q^{n,+}_{t',t'+t^*}(\ell,z)}{p^n_{t'+t^*}(z)} \le c_1 e^{-(1+\frac 12 (1-\alpha))\kappa(\ell -(z-\nu t^*)\vee (\mu^n_{t'}+K)+2)}. \end{align*} Note that by the definition of the event $E_1$ in~\eqref{eq:eventE1}, if $\zeta^{n,j}_t \in I^n_{t'+t^*}$ then $p^n_{t'+t^*}(\zeta^{n,j}_t)\ge \frac 1 {10}\left( \frac n N \right)^{1/2}$. Therefore by~\eqref{eq:tracerformula}, if $\tau^n_{i,j}>t$ and $ \zeta^{n,i}_t, \zeta^{n,j}_t \in I^n_{T_n-t}$, for $\ell_1,\ell_2 \in \frac 1n \mathbb{Z} \cap [K,D_n^+]$, \begin{align} \label{eq:fromtipB} &\p{\tilde \zeta^{n,i}_{t+t^*}\ge \ell_1, \tilde \zeta^{n,j}_{t+t^*}\ge \ell_2 \Big| \mathcal F_t} \notag \\ &\le (1+\mathcal O(N^{-1/2}))c_1^2 e^{-(1+\frac 12 (1-\alpha))\kappa((\ell_1+ \mu^n_{t'})-(\zeta^{n,i}_t -\nu t^*) \vee (\mu^n_{t'}+K)+2+(\ell_2+ \mu^n_{t'})-(\zeta^{n,j}_t -\nu t^*) \vee (\mu^n_{t'}+K)+2)} \notag \\ &\le (1+\mathcal O(N^{-1/2})) c_1^2 e^{-(1+\frac 12 (1-\alpha))\kappa((\ell_1- \tilde \zeta^{n,i}_t\vee K)-t^* e^{-(\log N)^{c_2}}+2 +(\ell_2- \tilde \zeta^{n,j}_t\vee K)-t^* e^{-(\log N)^{c_2}}+2)}, \end{align} since, by the definition of the event $E_1$, $|(\mu^n_{t'}-\nu t^*)-\mu^n_{T_n-t}|\le t^* e^{-(\log N)^{c_2}}$. Since $c_1<1$, the first statement follows by taking $n$ sufficiently large. The second statement follows by the same argument. \end{proof} We now use Lemma~\ref{lem:jumpbound} and an inductive argument to prove the following result. 
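The induction rests on the smallness conditions~\eqref{eq:delta_cond} on $c_1$ recalled at the start of its proof; as $c_1\to 0$ their two left hand sides tend to $e^{-2(1+\lambda)\kappa}<1$ and $e^{-(1+\lambda)\kappa}<1$ respectively, so a sufficiently small $c_1$ always exists. As a purely numerical illustration, the following Python sketch evaluates both left hand sides for hypothetical values of $\alpha$, $\kappa$ and $c_1$ (not taken from the paper).
\begin{verbatim}
import math

# Hypothetical values; lambda = (1 - alpha)/4 as in the proof below.
alpha, kappa, c1 = 0.5, 1.3, 0.01
lam = 0.25 * (1 - alpha)

a = math.exp(lam * kappa) / (math.exp(lam * kappa) - 1)
b = math.exp(-(1 + lam) * kappa) / (1 - math.exp(-(1 + lam) * kappa))

# The two left hand sides of (eq:delta_cond).
lhs_pair = c1 * (a + b) ** 2 + math.exp(-2 * (1 + lam) * kappa)
lhs_single = c1 * a + math.exp(-(1 + lam) * kappa)

print(lhs_pair < 1, lhs_single < 1)   # both True for these values
\end{verbatim}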
\begin{lemma} \label{lem:intip} For $t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$ and $k\in [k_0]$, let \begin{equation} \label{eq:tau+kdefn} \tau^{+,k}_t=\inf\left\{s\ge t: s-t\in t^* \mathbb{N}_0, \tilde \zeta^{n,k}_{s} \ge D^+_n \right\}. \end{equation} Take $i,j \in [k_0]$ and let $\tau^{+}_t = \tau^{+,i}_t \wedge \tau^{+,j}_t \wedge \tau^n_{i,j}$. On the event $E_1\cap E'_2 $, for $s\in [0,T^-_n]$ with $s-t\in t^* \mathbb{N}_0$, for $\ell_1,\ell_2 \in \mathbb{N}\cap [K,D^+_n]$, \begin{align} \label{eq:indhyptail} \p{\tilde \zeta^{n,i}_{s} \ge \ell_1, \tilde \zeta^{n,j}_{s} \ge \ell_2, \tau_t^+\ge s \Big| \mathcal F_t } & \le e^{(1+\frac 14 (1-\alpha))\kappa (\tilde \zeta^{n,i}_t \vee K-\ell_1 + \tilde \zeta^{n,j}_t \vee K-\ell_2)} \\ \text{ and for }i'\in \{i,j\}, \quad \p{\tilde \zeta^{n,i'}_{s} \ge \ell_1 , \tau^{+,i'}_t\ge s \Big| \mathcal F_t } &\le e^{(1+\frac 14 (1-\alpha))\kappa (\tilde \zeta^{n,i'}_t \vee K-\ell_1)} . \label{eq:indhyptail2} \end{align} \end{lemma} \begin{proof} Let $\lambda = \frac 14 (1-\alpha)$, and recall from~\eqref{eq:cchoice} that we chose $c_1>0$ sufficiently small that \begin{equation} \label{eq:delta_cond} \begin{aligned} c_1 ((e^{\lambda \kappa}-1)^{-1}e^{\lambda \kappa }+e^{-(1+\lambda)\kappa}(1-e^{-(1+\lambda )\kappa})^{-1})^2 +e^{-2(1+\lambda )\kappa} &< 1 \\ \text{ and }\quad c_1 (e^{\lambda \kappa}-1)^{-1}e^{\lambda \kappa} +e^{-(1+\lambda)\kappa} &< 1. \end{aligned} \end{equation} The proof is by induction. Take $t' \in [0,T^-_n]$ with $t'-t\in t^*\mathbb{N}_0$, and suppose~\eqref{eq:indhyptail} and~\eqref{eq:indhyptail2} hold for $s=t'$. Let $A=e^{(1+\lambda)\kappa(\tilde \zeta^{n,i}_t \vee K + \tilde \zeta^{n,j}_t \vee K)}$. Note that by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj}, if $\tau^+_t > t'$ then $\zeta^{n,i}_{t'}, \zeta^{n,j}_{t'}\in I^n_{T_n-t'}$. For $\ell_1,\ell_2 \in \mathbb{N} \cap [K, D^+_n]$, let $J_{\ell_1,\ell_2}=\{(k_1,k_2): k_1,k_2 \in \mathbb{N} \cap (K,D^+_n], k_1\le \ell_1 \text{ or } k_2\le \ell_2\}$. Then by Lemma~\ref{lem:jumpbound} and a union bound, \begin{align*} &\p{\tilde \zeta^{n,i}_{t'+t^*} \ge \ell_1, \tilde \zeta^{n,j}_{t'+t^*} \ge \ell_2, \tau^+_t\ge t'+t^* \Big| \mathcal F_t }\\ &\le \sum_{(k_1,k_2)\in J_{\ell_1,\ell_2}} c_1 e^{-(1+2\lambda)\kappa((\ell_1-k_1 )\vee 0+(\ell_2-k_2)\vee 0)} \p{\tilde \zeta^{n,i}_{t'}\in [ k_1,k_1+1) , \tilde \zeta^{n,j}_{t'}\in [k_2,k_2+1), \tau^+_t>t' \Big| \mathcal F_t}\\ &\quad +\sum_{k\in \mathbb{N} \cap (K,D_n^+]} \Big(c_1 e^{-(1+2\lambda)\kappa((\ell_1-k) \vee 0+\ell_2-K)} \p{\tilde \zeta^{n,i}_{t'}\in [k,k+1), \tau^{+,i}_t >t' \Big| \mathcal F_t}\\ &\hspace{4cm} +c_1 e^{-(1+2\lambda)\kappa ((\ell_2-k)\vee 0+\ell_1-K)} \p{\tilde \zeta^{n,j}_{t'}\in [ k,k+1) , \tau^{+,j}_t >t' \Big| \mathcal F_t}\Big) \\ &\quad + c_1 e^{-(1+2\lambda)\kappa (\ell_1-K+\ell_2-K)} +\p{\tilde \zeta^{n,i}_{t'}\ge \ell_1+1, \tilde \zeta^{n,j}_{t'}\ge \ell_2+1, \tau^+_t>t' \Big| \mathcal F_t}\\ &\le \sum_{k_1,k_2\in \mathbb{N} \cap [K ,D^+_n]}Ae^{-(1+\lambda)\kappa (k_1+k_2)}c_1 e^{-(1+2\lambda)\kappa((\ell_1-k_1)\vee 0+(\ell_2-k_2)\vee 0)} +A e^{-(1+\lambda)\kappa (\ell_1+\ell_2+2)} \end{align*} by the induction hypothesis and since by the definition of $A$, $e^{(1+\lambda)\kappa(\tilde \zeta^{n,i'}_t\vee K)}\le A e^{-(1+\lambda)\kappa K}$ for $i'\in \{i,j\}$ and $Ae^{-(1+\lambda)2\kappa K}\ge 1$. 
Therefore \begin{align} \label{eq:lemind1} &\p{\tilde \zeta^{n,i}_{t'+t^*} \ge \ell_1, \tilde \zeta^{n,j}_{t'+t^*} \ge \ell_2, \tau^+_t \ge t'+t^* \Big| \mathcal F_t } \notag \\ &\le A c_1 \left(\sum_{k_1= K}^{\ell_1} e^{-(1+\lambda)\kappa k_1}e^{-(1+2\lambda)\kappa (\ell_1-k_1)} +\sum_{k_1=\ell_1+1}^{\lfloor D^+_n \rfloor }e^{-(1+\lambda)\kappa k_1}\right) \notag \\ &\qquad \cdot \left(\sum_{k_2=K}^{\ell_2} e^{-(1+\lambda) \kappa k_2}e^{-(1+2\lambda)\kappa (\ell_2-k_2)} +\sum_{k_2=\ell_2+1}^{\lfloor D^+_n \rfloor}e^{-(1+\lambda)\kappa k_2}\right) +Ae^{-(1+\lambda)\kappa (\ell_1+\ell_2+2)}. \end{align} Note that \begin{align*} \sum_{k_1=K}^{\ell_1} e^{-(1+\lambda)\kappa k_1}e^{-(1+2\lambda)\kappa (\ell_1-k_1)} <\sum_{k_1=0}^{\ell_1} e^{-(1+2\lambda)\kappa \ell_1}e^{\lambda \kappa k_1} &<e^{-(1+2\lambda)\kappa \ell_1}(e^{\lambda \kappa }-1)^{-1} e^{\lambda \kappa (\ell_1 +1)}\\ &=(e^{\lambda \kappa}-1)^{-1}e^{\lambda \kappa}e^{-(1+\lambda) \kappa \ell_1}. \end{align*} Hence, since $\sum_{k_1=\ell_1+1}^{\lfloor D^+_n \rfloor }e^{-(1+\lambda)\kappa k_1}<(1-e^{-(1+\lambda)\kappa})^{-1} e^{-(1+\lambda)\kappa(\ell_1+1)}$, substituting into~\eqref{eq:lemind1}, \begin{align*} &\p{\tilde \zeta^{n,i}_{t'+t^*} \ge \ell_1, \tilde \zeta^{n,j}_{t'+t^*} \ge \ell_2, \tau_t^+\ge t'+t^* \Big| \mathcal F_t }\\ & \le A e^{-(1+\lambda)\kappa(\ell_1+\ell_2)} \left(c_1 ((e^{\lambda \kappa}-1)^{-1}e^{\lambda \kappa}+e^{-(1+\lambda)\kappa}(1-e^{-(1+\lambda )\kappa})^{-1})^2 +e^{-2(1+\lambda )\kappa}\right)\\ &\le A e^{-(1+\lambda)\kappa(\ell_1+\ell_2)} \end{align*} by~\eqref{eq:delta_cond}. Similarly, letting $A_1=e^{(1+\lambda)\kappa(\tilde \zeta^{n,i}_t \vee K) }$, for $\ell \in \mathbb{N}\cap [K,D^+_n]$, by Lemma~\ref{lem:jumpbound} and a union bound, \begin{align*} \p{\tilde \zeta^{n,i}_{t'+t^*} \ge \ell, \tau^{+,i}_t\ge t'+t^* \Big| \mathcal F_t } &\le \sum_{k\in \mathbb{N} \cap (K, \ell]} c_1 e^{-(1+2\lambda)\kappa(\ell -k )} \p{\tilde \zeta^{n,i}_{t'}\in [ k,k+1), \tau^{+,i}_t>t' \Big| \mathcal F_t}\\ &\qquad +c_1 e^{-(1+2\lambda)\kappa (\ell -K)} +\p{\tilde \zeta^{n,i}_{t'}\ge \ell+1, \tau^{+,i}_t>t' \Big| \mathcal F_t}\\ &\le \sum_{k\in \mathbb{N} \cap [K, \ell]} c_1 e^{-(1+2\lambda)\kappa(\ell -k )} A_1 e^{-(1+\lambda )\kappa k} +A_1 e^{-(1+\lambda )\kappa (\ell+1)} \end{align*} by the induction hypothesis and since $A_1 e^{-(1+\lambda)\kappa K}\ge 1$. Hence \begin{align*} \p{\tilde \zeta^{n,i}_{t'+t^*} \ge \ell, \tau^{+,i}_t\ge t'+t^* \Big| \mathcal F_t } &\le A_1 \left(c_1 e^{-(1+2\lambda)\kappa \ell } (e^{\lambda \kappa}-1)^{-1} e^{\lambda \kappa (\ell +1)}+e^{-(1+\lambda )\kappa (\ell+1)}\right)\\ &= A_1 e^{-(1+\lambda )\kappa \ell} (c_1 (e^{\lambda \kappa}-1)^{-1} e^{\lambda \kappa}+e^{-(1+\lambda )\kappa} )\\ & \le A_1 e^{-(1+\lambda )\kappa \ell} \end{align*} by~\eqref{eq:delta_cond}. By the same argument, $\p{\tilde \zeta^{n,j}_{t'+t^*} \ge \ell, \tau^{+,j}_t\ge t'+t^* \Big| \mathcal F_t } \le e^{(1+\lambda )\kappa (\tilde \zeta^{n,j}_t \vee K- \ell)}$. The result follows by induction. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:intip}] If $t-s\ge K\log N$, for $i'\in \{i,j\}$, let $$ \sigma_{i'} =\inf\{s' : s'-(t-t^*\lfloor (t^*)^{-1}K \log N \rfloor)\in t^* \mathbb{N}_0, \tilde \zeta^{n,i'}_{s'}\le K \}. $$ If instead $t-s < K \log N$ with $t-s\in t^* \mathbb{N}_0$, then let $\sigma_{i'} = s$ for $i'\in \{i,j\}$. Note that in both cases $t-\sigma_{i'}\le K \log N$. Let $\lambda = \frac 14 (1-\alpha)$. 
Condition on $\mathcal F_{\sigma_i \vee \sigma_j}$ and suppose $\sigma_i\le \sigma_j \le t$. Recall the definition of $\tau^{+,i}_{\sigma_j}$ and $\tau^{+,j}_{\sigma_j}$ in~\eqref{eq:tau+kdefn}. Then for $\ell_1,\ell_2 \in\mathbb{N}\cap [K,D_n^+]$, by a union bound and Lemma~\ref{lem:intip}, \begin{align} \label{eq:intipstar} &\p{\tilde \zeta^{n,i}_t \ge \ell_1, \tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t \Big| \mathcal F_{\sigma_i \vee \sigma_j} } \notag \\ &\le e^{(1+\lambda )\kappa (\tilde \zeta^{n,i}_{\sigma_j} \vee K -\ell_1 +\tilde \zeta^{n,j}_{\sigma_j} \vee K-\ell_2)} +\p{\tilde \zeta^{n,i}_t \ge \ell_1, \tau^n_{i,j}> t, \tau^{+,i}_{\sigma_j}\ge t, \tau^{+,j}_{\sigma_j}<t \Big| \mathcal F_{\sigma_i \vee \sigma_j} } \notag \\ &\quad +\p{\tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t, \tau^{+,j}_{\sigma_j}\ge t, \tau^{+,i}_{\sigma_j}<t \Big| \mathcal F_{\sigma_i \vee \sigma_j} } +\p{\tau^n_{i,j}> t, \tau^{+,i}_{\sigma_j}< t, \tau^{+,j}_{\sigma_j}<t \Big| \mathcal F_{\sigma_i \vee \sigma_j} }. \end{align} We now bound the last three terms on the right hand side. Recall that we let $\tau^+_{\sigma_j}=\tau^{+,i}_{\sigma_j}\wedge \tau^{+,j}_{\sigma_j}\wedge \tau^n_{i,j}$. For $s' \in [\sigma_j,t]$ with $s'-\sigma_j \in t^* \mathbb{N}_0$, by conditioning on $\mathcal F_{s'}$, \begin{align*} &\p{\tilde \zeta^{n,i}_t \ge \ell_1, \tau^n_{i,j}> t, \tau^{+,i}_{\sigma_j}\ge t, \tau^{+,j}_{\sigma_j}=s' \Big| \mathcal F_{\sigma_i \vee \sigma_j} } \\ & \le \E{\p{\tilde \zeta^{n,i}_t \ge \ell_1, \tau^{+,i}_{s'} \ge t\Big| \mathcal F_{s'}} \mathds{1}_{\tilde \zeta^{n,j}_{s'} \ge D^+_n, \tau^+_{\sigma_j}=s'} \Big| \mathcal F_{\sigma_i \vee \sigma_j}}\\ &\le \sum_{ \ell '_1=K}^{ \ell_1-1}\p{\tilde \zeta^{n,i}_{s'} \in [ \ell '_1,\ell'_1+1), \tilde \zeta^{n,j}_{s'} \ge D^+_n, \tau^+_{\sigma_j}\ge {s'} \Big| \mathcal F_{\sigma_i \vee \sigma_j}} \cdot e^{(1+\lambda )\kappa (\ell'_1+1-\ell_1)}\\ &\qquad +\p{\tilde \zeta^{n,i}_{s'}\le K,\tilde \zeta^{n,j}_{s'} \ge D^+_n, \tau^+_{\sigma_j}\ge s' \Big| \mathcal F_{\sigma_i \vee \sigma_j}} \cdot e^{(1+\lambda )\kappa(K-\ell_1)}\\ &\qquad + \p{\tilde \zeta^{n,i}_{s'} \ge \ell_1, \tilde \zeta^{n,j}_{s'} \ge D^+_n, \tau^+_{\sigma_j}\ge {s'} \Big| \mathcal F_{\sigma_i \vee \sigma_j}} \end{align*} by~\eqref{eq:indhyptail2} in Lemma~\ref{lem:intip}. Therefore, by Lemma~\ref{lem:intip} again, \begin{align} \label{eq:zetatailtau+*} &\p{\tilde \zeta^{n,i}_t \ge \ell_1, \tau^n_{i,j}> t, \tau^{+,i}_{\sigma_j}\ge t, \tau^{+,j}_{\sigma_j}=s' \Big| \mathcal F_{\sigma_i \vee \sigma_j} } \notag \\ &\le \sum_{\ell '_1=K}^{\ell_1} e^{(1+\lambda )\kappa (\tilde \zeta^{n,i}_{\sigma_j} \vee K-\ell '_1 +\tilde \zeta^{n,j}_{\sigma_j} \vee K -\lfloor D_n^+ \rfloor )} \cdot e^{(1+\lambda )\kappa (\ell'_1+1-\ell_1)} + e^{(1+\lambda )\kappa (\tilde \zeta^{n,j}_{\sigma_j} \vee K -\lfloor D^+_n \rfloor)} \cdot e^{(1+\lambda )\kappa (K-\ell_1)} \notag \\ &\le e^{(1+\lambda )\kappa (\tilde \zeta^{n,i}_{\sigma_j} \vee K +\tilde \zeta^{n,j}_{\sigma_j} \vee K)} (\ell_1 e^{-(1+\lambda)\kappa(\ell_1 +\lfloor D^+_n \rfloor -1)}+e^{-(1+\lambda)\kappa(\ell_1+\lfloor D^+_n \rfloor)}) \notag \\ &\le e^{(1+\lambda )\kappa(\tilde \zeta^{n,i}_{\sigma_j} \vee K +\tilde \zeta^{n,j}_{\sigma_j} \vee K +1)} e^{-(1+\lambda )\kappa (\ell_1+\lfloor D_n^+ \rfloor)}(D^+_n+1), \end{align} since $\ell_1 \le D_n^+$. 
Therefore, for $n$ sufficiently large, since $t-\sigma_j\le K \log N$, \begin{align} \label{eq:propintipA} \p{\tilde \zeta^{n,i}_t \ge \ell_1, \tau^n_{i,j}> t, \tau^{+,i}_{\sigma_j}\ge t, \tau^{+,j}_{\sigma_j}<t \Big| \mathcal F_{\sigma_i \vee \sigma_j} } &\le e^{(1+\lambda )\kappa(\tilde \zeta^{n,i}_{\sigma_j} \vee K - \ell_1 +\tilde \zeta^{n,j}_{\sigma_j} \vee K -\lfloor D_n^+ \rfloor +1)} K \kappa^{-1} (\log N)^2, \end{align} and by the same argument, \begin{align} \label{eq:propintipB} \p{\tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t, \tau^{+,j}_{\sigma_j}\ge t, \tau^{+,i}_{\sigma_j}<t \Big| \mathcal F_{\sigma_i \vee \sigma_j} } &\le e^{(1+\lambda )\kappa (\tilde \zeta^{n,i}_{\sigma_j} \vee K -\lfloor D^+_n \rfloor +\tilde \zeta^{n,j}_{\sigma_j} \vee K -\ell_2 +1)} K \kappa^{-1} (\log N)^2 . \end{align} For the last term on the right hand side of~\eqref{eq:intipstar}, note that for $\sigma_j\le s_1\le s_2\le t$ with $s_1-\sigma_j, s_2 -\sigma_j \in t^* \mathbb{N}_0$, by the same argument as for~\eqref{eq:zetatailtau+*}, \begin{align} \label{eq:zetataildagger} \p{ \tau^n_{i,j}> t, \tau^{+,i}_{\sigma_j} =s_1, \tau^{+,j}_{\sigma_j}=s_2 \Big| \mathcal F_{\sigma_i \vee \sigma_j }} &\le \p{ \tau^n_{i,j}>s_2, \tau^{+,i}_{\sigma_j} =s_1, \tau^{+,j}_{\sigma_j}\ge s_2, \tilde \zeta^{n,j}_{s_2}\ge \lfloor D^+_n \rfloor \Big| \mathcal F_{\sigma_i \vee \sigma_j }} \notag \\ &\le e^{(1+\lambda )\kappa (\tilde \zeta^{n,i}_{\sigma_j} \vee K -\lfloor D_n^+ \rfloor +\tilde \zeta^{n,j}_{\sigma_j} \vee K -\lfloor D_n^+\rfloor +1)} (D^+_n+1), \end{align} and by the same argument~\eqref{eq:zetataildagger} also holds for $s_1 \ge s_2$. Hence by~\eqref{eq:intipstar},~\eqref{eq:propintipA} and~\eqref{eq:propintipB}, for $n$ sufficiently large, if $\sigma_i\le \sigma_j \le t$ then for $\ell_1,\ell_2 \in \mathbb{N} \cap [K, D_n^+]$, \begin{align} \label{eq:propintip1} \p{\tilde \zeta^{n,i}_t \ge \ell_1, \tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t \Big| \mathcal F_{\sigma_i \vee \sigma_j} } &\le e^{(1+\lambda )\kappa(\tilde \zeta^{n,i}_{\sigma_j} \vee 0 -\ell_1 +\tilde \zeta^{n,j}_{\sigma_j} \vee 0 -\ell_2 )}(\log N)^4. \end{align} By a simpler version of the same argument, for $i'\in \{i,j\}$ and $\ell \in \mathbb{N} \cap [K,D_n^+]$, if $\sigma_i \le \sigma_j\le t$ then \begin{align} \label{eq:propintiponel} &\p{\tilde \zeta^{n,i'}_t \ge \ell \Big| \mathcal F_{\sigma_i \vee \sigma_j}}\notag \\ &\le \p{\tilde \zeta^{n,i'}_t \ge \ell, \tau^{+, i'}_{\sigma_j}\ge t \Big| \mathcal F_{\sigma_i \vee \sigma_j}} +\sum_{s'\in [\sigma_j, t), s'-\sigma_j \in t^* \mathbb{N}_0} \p{\tilde \zeta^{n,i'}_{s'} \ge D_n^+, \tau^{+, i'}_{\sigma_j}\ge s' \Big| \mathcal F_{\sigma_i \vee \sigma_j}} \notag \\ &\le (\log N)^2 e^{(1+\lambda) \kappa(\tilde \zeta^{n,i'}_{\sigma_j}\vee 0-\ell)} \end{align} for $n$ sufficiently large, by~\eqref{eq:indhyptail2} in Lemma~\ref{lem:intip}. Since we let $\sigma_i=\sigma_j =s$ in the case $t-s<K\log N$, this completes the proof of~\eqref{eq:propintipstat*} and~\eqref{eq:propintipstat3}. From now on, assume $t-s\ge K \log N$. 
Condition on $\mathcal F_{\sigma_i \wedge \sigma_j}$ and suppose $\sigma_i \wedge \sigma_j = \sigma_i \le t$; then \begin{align} \label{eq:propintipsigmai} &\E{e^{(1+\lambda)\kappa(\tilde \zeta^{n,i}_{\sigma_j}\vee 0)}\mathds{1}_{\tau^{+,i}_{\sigma_i}> \sigma_j}\mathds{1}_{\sigma_j \le t} \Big| \mathcal F_{\sigma_i\wedge \sigma_j}} \notag \\ &\le e^{(1+\lambda)\kappa K} + \sum_{\ell =K}^{\lfloor D^+_n \rfloor} e^{(1+\lambda)\kappa (\ell +1)} \sum_{s'-\sigma_i \in t^*\mathbb{N}_0, \, s'\le t} \p{\tilde \zeta^{n,i}_{s'} \in [\ell, \ell+1) , \tau^{+,i}_{\sigma_i}\ge s' \Big| \mathcal F_{\sigma_i\wedge \sigma_j}} \notag \\ &\le e^{(1+\lambda)\kappa K} + \sum_{\ell =K}^{\lfloor D^+_n \rfloor} e^{(1+\lambda)\kappa (\ell +1)} ((t^*)^{-1}K\log N +1) e^{(1+\lambda)\kappa (\tilde \zeta^{n,i}_{\sigma_i} \vee K-\ell)} \notag \\ &\le e^{(1+\lambda)\kappa (1+K)}K \kappa^{-1} (\log N)^2 \end{align} for $n$ sufficiently large, where the second inequality follows by~\eqref{eq:indhyptail2} in Lemma~\ref{lem:intip} and since $t-\sigma_i \le K \log N$, and the last inequality since $\tilde \zeta^{n,i}_{\sigma_i}\le K$. Therefore, if $\sigma_i \wedge \sigma_j =\sigma_i \le t$, by conditioning on $\mathcal F_{\sigma_i \vee \sigma_j}$, and then by~\eqref{eq:propintip1},~\eqref{eq:propintiponel} and~\eqref{eq:propintipsigmai}, and since $\tilde \zeta^{n,j}_{\sigma_j}\le K$, \begin{align} \label{eq:propintipB2} &\p{\tilde \zeta^{n,i}_t \ge \ell_1, \tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t \Big| \mathcal F_{\sigma_i \wedge \sigma_j} } \notag \\ &\le \E{\p{\tilde \zeta^{n,i}_t \ge \ell_1, \tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t \Big| \mathcal F_{\sigma_i \vee \sigma_j} }\mathds{1}_{\sigma_j \le t}(\mathds{1}_{\tau^{+,i}_{\sigma_i}>\sigma_j}+\mathds{1}_{\tau^{+,i}_{\sigma_i}\le \sigma_j}) \Big| \mathcal F_{\sigma_i \wedge \sigma_j} } \notag \\ &\qquad + \p{\sigma_j >t \big| \mathcal F_{\sigma_i \wedge \sigma_j}} \notag \\ &\le e^{(1+\lambda )\kappa (1+2K)} K \kappa^{-1} (\log N)^2 \cdot (\log N)^4 e^{-(1+\lambda)\kappa (\ell_1+\ell_2)} \notag \\ &\quad +\E{(\log N)^2 e^{(1+\lambda)\kappa (K-\ell_2)} \mathds{1}_{\sigma_j \le t} \mathds{1}_{\tau^{+,i}_{\sigma_i}\le \sigma_j} \Big| \mathcal F_{\sigma_i \wedge \sigma_j}} +\p{ \sigma_j >t \big| \mathcal F_{\sigma_i \wedge \sigma_j}}. \end{align} By~\eqref{eq:indhyptail2} in Lemma~\ref{lem:intip}, if $\sigma_i \wedge \sigma_j = \sigma_i \le t$, then since $\tilde \zeta^{n,i}_{\sigma_i}\le K$, \begin{align} \label{eq:tau+t} \p{\tau^{+,i}_{\sigma_i}\le t \Big| \mathcal F_{\sigma_i \wedge \sigma_j}} &\le \sum_{s'\le t, \, s' -\sigma_i \in t^* \mathbb{N}_0} \p{\tau^{+,i}_{\sigma_i}\ge s', \tilde \zeta ^{n,i}_{s'}\ge D_n^+ \Big| \mathcal F_{\sigma_i \wedge \sigma_j}} \notag \\ &\le ((t^*)^{-1} K \log N+1) e^{(1+\lambda )\kappa (K-\lfloor D^+_n \rfloor)} . 
\end{align} Hence, for $n$ sufficiently large, by a union bound and then by~\eqref{eq:propintipB2} (using the same argument for the case $\sigma_j \le \sigma_i$), \begin{align} \label{eq:propintiptwol} &\p{\tilde \zeta^{n,i}_t \ge \ell_1, \tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t \Big| \mathcal F_{s} } \notag \\ &\le \p{\sigma_i \wedge \sigma_j >t \Big| \mathcal F_{s} } + \E{\p{\tilde \zeta^{n,i}_t \ge \ell_1, \tilde \zeta^{n,j}_t \ge \ell_2, \tau^n_{i,j}> t \Big| \mathcal F_{\sigma_i \wedge \sigma_j}}\mathds{1}_{\sigma_i \wedge \sigma_j \le t} \Big| \mathcal F_{s} } \notag \\ &\le \p{\sigma_i \wedge \sigma_j >t \Big| \mathcal F_{s} }+ \p{\sigma_i \vee \sigma_j >t \Big| \mathcal F_{s} } + \tfrac 12 (\log N)^7 e^{-(1+\lambda )\kappa(\ell_1 +\ell_2)} \end{align} for $n$ sufficiently large. Finally, letting $t'=t-t^* \lfloor (t^*)^{-1}K \log N \rfloor \in \delta_n \mathbb{N}_0 \cap [0,T^-_n]$ with $t'\ge s$, since $(r^{n,K,t^*}_{K\log N, T_n-t'}(x))_{x\in \frac 1n \mathbb{Z}}$ only depends on the Poisson processes $(\mathcal P^{x,i,j})_{x,i,j}$, $(\mathcal S^{x,i,j})_{x,i,j}$, $(\mathcal Q^{x,i,j,k})_{x,i,j,k}$ and $(\mathcal R^{x,i,y,j})_{x,y,i,j}$ in the time interval $[0,T_n-t']$, $$ \p{r^{n,K,t^*}_{ K\log N , T_n- t'}(x)=0 \; \forall x\in \tfrac 1n \mathbb{Z} \Big| \mathcal F_{s}} =\p{r^{n,K,t^*}_{ K\log N, T_n- t'}(x)=0 \; \forall x\in \tfrac 1n \mathbb{Z} \Big| \mathcal F}\ge 1-\left( \frac n N \right)^2 $$ by the definition of the event $E_4$. By the definition of $r^{n,K,t^*}_{K\log N,T_n-t'}(x)$ in~\eqref{eq:rnystdefn}, it follows that $\p{\sigma_i \vee \sigma_j >t \big| \mathcal F_{s} }\le \left( \frac n N \right)^2$. By~\eqref{eq:propintiptwol}, and since $(1+\lambda)\kappa (\ell_1+\ell_2)\le 4(1/2-c_0)\log (N/n)$, this completes the proof of~\eqref{eq:propintipstat1}. By a union bound and then by the same argument as in~\eqref{eq:propintiponel} and since $\tilde \zeta^{n,i}_{\sigma_i}\le K$, \begin{align*} \p{\tilde \zeta^{n,i}_t \ge \ell_1 \Big| \mathcal F_s } &\le \p{\sigma_i >t \Big| \mathcal F_s} + \E{\p{\tilde \zeta^{n,i}_t \ge \ell_1 \Big| \mathcal F_{\sigma_i}} \mathds{1}_{\sigma_i \le t} \Big| \mathcal F_s}\\ &\le \left( \frac n N \right)^2 +(\log N)^2 e^{(1+\lambda) \kappa(K-\ell_1)}, \end{align*} which completes the proof. \end{proof} \subsection{Proof of Proposition~\ref{prop:RlogN}} \label{subsec:behindfront} We first prove two preliminary lemmas, similar to the lemmas in Section~\ref{subsec:tipbulkproofs}. Write $d'_n=\frac 1 {64}\alpha d_n$. \begin{lemma} \label{lem:jumpbulk} For $n$ sufficiently large, on the event $E_1\cap E'_2$, for $t\in \delta_n \mathbb{N}_0 \cap [0,T_n^-]$, $i\in [k_0]$ and $y,y' \le -\frac 12 d'_n$, if $\tilde \zeta^{n,i}_t \ge y$ then $$ \p{\tilde \zeta^{n,i}_{t+t^*}\le y' \Big| \mathcal F_t } \le c_1 e^{-\frac 12 \alpha \kappa(y-y')}. $$ \end{lemma} \begin{proof} Suppose $y'\ge -N^3$. 
For $n$ sufficiently large, by the definition of the event $E_1$ in~\eqref{eq:eventE1}, if $\tilde \zeta^{n,i}_t \ge y$ and $\zeta^{n,i}_t \in I^n_{T_n-t}$, \begin{align*} \p{\tilde \zeta^{n,i}_{t+t^*}\le y' \Big| \mathcal F_t} &\le \p{\zeta^{n,i}_{t+t^*}\le \mu^n_{T_n-t}-\nu t^* +1 +y' \Big| \mathcal F_t}\\ &= \frac{q^{n,-}_{T_n-t-t^*,T_n-t}(\mu^n_{T_n-t}-\nu t^* +1 +y', \tilde \zeta^{n,i}_t +\mu^n_{T_n-t})}{p^n_{T_n-t}(\tilde \zeta^{n,i}_t +\mu^n_{T_n-t})} \\ &\le c_1 e^{-\frac 12 \alpha \kappa (y-y')} \end{align*} since the event $A^{(3)}_{T_n-t-t^*}(n^{-1} \lfloor n (\mu^n_{T_n-t}-\nu t^*+1+y')\rfloor , \zeta^{n,i}_t)$ occurs by the definition of the event $E'_2$ in~\eqref{eq:eventE'2}. If instead $y'<-N^3$ or $\zeta^{n,i}_t \notin I^n_{T_n-t}$ then by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj}, $\p{\tilde \zeta^{n,i}_{t+t^*}\le y' \Big| \mathcal F_t}=0$. \end{proof} We now use Lemma~\ref{lem:jumpbulk} and an induction argument to prove the following result. \begin{lemma} \label{lem:bulktail} On the event $E_1\cap E'_2$, for $t\in \delta_n \mathbb{N}_0 \cap [0,T^-_n]$, $i\in [k_0]$, $k\in \mathbb{N}_0$ and $t' \in [0,T_n^-]$ with $t'-t \in t^*\mathbb{N}_0$, \begin{equation} \label{eq:lembulktailstat} \p{\tilde \zeta^{n,i}_{t'}\le -\tfrac 12 d'_n -k \Big| \mathcal F_t } \le e^{-\frac 14 \alpha \kappa( ( \frac 12 d'_n+\tilde \zeta^{n,i}_t)\wedge 0+ k)}. \end{equation} \end{lemma} \begin{proof} Recall from~\eqref{eq:cchoice} that we chose $c_1>0$ sufficiently small that \begin{equation} \label{eq:lembulktailA} c_1+ c_1 e^{3\alpha \kappa/4}(e^{\alpha \kappa/4}-1)^{-1}+e^{-\alpha \kappa /4}<1. \end{equation} Let $A=e^{-\frac 14 \alpha \kappa ((\frac 12 d'_n+\tilde \zeta^{n,i}_t)\wedge 0)}$. Suppose, for an induction argument, that for some $t' \ge t$ with $t'\in [0,T_n^-]$ and $t'-t \in t^*\mathbb{N}_0$,~\eqref{eq:lembulktailstat} holds for all $k\in \mathbb{N}_0$. Then by Lemma~\ref{lem:jumpbulk}, for $k\in \mathbb{N}_0$, \begin{align*} \p{\tilde \zeta^{n,i}_{t'+t^*}\le -\tfrac 12 d'_n -k \Big| \mathcal F_t } &\le \sum_{k'=0}^k \p{\tilde \zeta^{n,i}_{t'} \in (-\tfrac 12 d'_n-k'-1,-\tfrac 12 d'_n-k'] \Big| \mathcal F_t} c_1 e^{-\frac 12 \alpha \kappa (k-k'-1)}\\ &\qquad +\p{\tilde \zeta^{n,i}_{t'} \le -\tfrac 12 d'_n-k-1 \Big| \mathcal F_t} + c_1 e^{-\frac 12 \alpha \kappa k} \\ &\le \sum_{k'=0}^k A e^{-\frac 14 \alpha \kappa k'}c_1 e^{-\frac 12 \alpha \kappa (k-k'-1)} +A e^{-\frac 14 \alpha \kappa (k+1)} + c_1 e^{-\frac 12 \alpha \kappa k} \end{align*} by our induction hypothesis. Therefore, since $A\ge 1$, \begin{align*} \p{\tilde \zeta^{n,i}_{t'+t^*}\le -\tfrac 12 d'_n -k \Big| \mathcal F_t } &\le A \left( c_1 e^{-\frac 12 \alpha \kappa (k-1)}\sum_{k'=0}^k e^{\frac 14 \alpha \kappa k'}+e^{-\frac 14 \alpha \kappa (k+1)}+ c_1 e^{-\frac 12 \alpha \kappa k}\right)\\ &= A \left(c_1 e^{-\frac 12 \alpha \kappa (k-1)}\frac{e^{\frac 14 \alpha \kappa (k+1)}-1}{e^{\frac 14 \alpha \kappa}-1}+e^{-\frac 14 \alpha \kappa (k+1)}+ c_1 e^{-\frac 12 \alpha \kappa k} \right)\\ &< A e^{-\frac 14 \alpha \kappa k}\left(c_1 e^{\frac 34 \alpha \kappa}(e^{\frac 14 \alpha \kappa}-1)^{-1} +e^{-\frac 14 \alpha \kappa }+c_1 \right)\\ &\le A e^{-\frac 14 \alpha \kappa k} \end{align*} by~\eqref{eq:lembulktailA}. The result follows by induction. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:RlogN}] We begin by proving~\eqref{eq:propRlogN1}. 
For $n$ sufficiently large, by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj} and then by a union bound and Lemma~\ref{lem:bulktail}, and since $\tilde \zeta^{n,i}_0\ge -K_0$, \begin{align*} \p{\exists t \in \delta_n \mathbb{N}_0 \cap[0, T^-_n] : \tilde \zeta^{n,i}_{t}\le D_n^- \Big|\mathcal F_0 } &\le \p{\exists t\in t^* \mathbb{N}_0 \cap [0,T^-_n] : \tilde \zeta^{n,i}_t \le \tfrac 12 D_n^- \Big| \mathcal F_0} \\ &\le( (t^*)^{-1} T^-_n +1) e^{-\frac 14 \alpha \kappa (-\frac 12 D_n^- -\frac 12 d'_n)} \\ & \le N^{-1} \end{align*} for $n$ sufficiently large, since, by~\eqref{eq:Dn+-defn}, $\frac 18 \alpha \kappa D_n^- =-\frac{13}4 \log N$ and since $T_n^-\le N^2$. Note that the last statement~\eqref{eq:propRlogN3} follows directly from Lemma~\ref{lem:bulktail} (since $\tilde \zeta^{n,i}_0\ge -K_0$ and $d_n>d'_n$). We now prove~\eqref{eq:propRlogN2}. Recall from~\eqref{eq:cchoice} that we chose $c_1>0$ sufficiently small that \begin{equation} \label{eq:proplogNB} e^{-\alpha \kappa /4}+c_1 (1-e^{-\alpha \kappa /4})^{-1}<e^{-\alpha \kappa /5}. \end{equation} Let $A\sim \text{Ber}(c_1)$ and $G\sim \text{Geom}(1-e^{-\alpha \kappa /2})$ be independent (with $\p{G\ge k}=e^{-\alpha \kappa k/2}$ for $k\in \mathbb{N}_0$). For $t' \in \delta_n \mathbb{N}_0 \cap [0,T^-_n]$, if $\tilde \zeta^{n,i}_{t'} \le -\frac 12 d'_n$ then by Lemma~\ref{lem:jumpbulk}, for $k\in \mathbb{N}_0$, \begin{equation} \label{eq:propRlogN*} \p{\tilde \zeta^{n,i}_{t'} - \tilde \zeta^{n,i}_{t'+t^*} \ge k \Big|\mathcal F_{t'}}\le c_1 e^{-\frac 12 \alpha \kappa k} \le \p{AG-(1-A)\ge k}. \end{equation} Since $AG-(1-A)\ge -1$,~\eqref{eq:propRlogN*} holds for each $k\in \mathbb{Z}$. Let $(A_j)_{j=1}^\infty$ and $(G_j)_{j=1}^\infty$ be independent families of i.i.d.~random variables with $A_1 \stackrel{d}{=}A$ and $G_1\stackrel{d}{=} G$. Suppose $\tilde \zeta^{n,i}_s \ge D_n^-$ and $t-s\ge K\log N$, and take $s'\in [s,s+t^*]$ such that $t-s'\in t^*\mathbb{N}_0$. For $n$ sufficiently large, by~\eqref{eq:lemfromxixj3} in Lemma~\ref{lem:fromxixj}, we have $\tilde \zeta^{n,i}_{s'}\ge 2D_n^-$. Then using~\eqref{eq:propRlogN*} in the second inequality, \begin{align*} &\p{\tilde \zeta^{n,i}_{s'+\ell t^*}\le -\tfrac 12 d'_n \; \forall \ell \in \{0\}\cup [ 4|D_n^-|] \Big|\mathcal F_{s'}}\\ &\le\mathbb P \Big( \tilde \zeta^{n,i}_{s'+\ell t^*}\le -\tfrac 12 d'_n \; \forall \ell \in \{0\}\cup [ 4|D_n^-|-1], \sum_{j=1}^{4|D_n^-|}(\tilde \zeta^{n,i}_{s'+(j-1)t^*}-\tilde \zeta^{n,i}_{s'+jt^*})\ge 2D_n^- \Big|\mathcal F_{s'} \Big) \\ &\le \p{ \sum_{j=1}^{4|D_n^-|}(A_j G_j - (1-A_j) ) \ge 2 D_n^- }. \end{align*} By Markov's inequality, \begin{align*} \p{ \sum_{j=1}^{4|D_n^-|}(A_j G_j - (1-A_j)) \ge 2 D_n^- } &\le e^{\frac 14 \alpha \kappa \cdot 2 |D_n^-|} \E{e^{\frac 14 \alpha \kappa (A_1 G_1-(1-A_1))}}^{4|D_n^-|}\\ &\le e^{\frac 12 \alpha \kappa |D_n^-|} \left( (1-c_1) e^{-\frac 14 \alpha \kappa }+c_1 \frac{1-e^{-\alpha \kappa /2}}{1-e^{-\alpha \kappa /4}} \right)^{4|D_n^-|}\\ &\le e^{-\frac 3{10} \alpha \kappa |D^-_n|} \end{align*} by~\eqref{eq:proplogNB}. 
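As a numerical aside (not needed for the proof), the moment generating function in the Markov inequality above has the closed form
$$
\E{e^{\theta(AG-(1-A))}}=(1-c_1)e^{-\theta}+c_1\,\frac{1-e^{-\alpha\kappa/2}}{1-e^{\theta-\alpha\kappa/2}}\qquad \text{for }\theta<\tfrac 12 \alpha\kappa,
$$
since $\p{G=k}=e^{-\alpha\kappa k/2}(1-e^{-\alpha\kappa/2})$ for $k\in\mathbb{N}_0$; at $\theta=\frac 14 \alpha\kappa$ this is the expression bounded above. The following Python sketch, with hypothetical parameter values, checks the closed form against direct summation.
\begin{verbatim}
import math

# Hypothetical parameter values (not from the paper).
alpha, kappa, c1 = 0.5, 1.3, 0.002
theta = alpha * kappa / 4
q = math.exp(-alpha * kappa / 2)      # P(G >= k) = q^k for k in N_0

# E[e^{theta*(A*G-(1-A))}] by direct summation over P(G = k) = q^k * (1-q);
# truncation is harmless since q * e^theta = e^{-alpha*kappa/4} < 1.
direct = (1 - c1) * math.exp(-theta) + c1 * (1 - q) * sum(
    (q * math.exp(theta)) ** k for k in range(2000))

closed = (1 - c1) * math.exp(-theta) + c1 * (1 - q) / (1 - q * math.exp(theta))

print(direct, closed)                           # equal up to float rounding
print(closed < math.exp(-alpha * kappa / 5))    # True: the bound used above
\end{verbatim}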
Therefore, since $\alpha \kappa |D^-_n|=26 \log N$ by~\eqref{eq:Dn+-defn}, and since $K\log N>(4|D_n^-|+1)t^*$ by our choice of $K$ in Proposition~\ref{prop:eventE}, \begin{align*} \p{\tilde \zeta^{n,i}_t \le -d_n \Big| \mathcal F_{s}} &\le N^{-7} + \sum_{\ell =0}^{4|D_n^-|}\E{\p{\tilde \zeta^{n,i}_{s' +\ell t^*}\ge -\tfrac 12 d'_n, \tilde \zeta^{n,i}_t \le -d_n \Big| \mathcal F_{s'}}\Big| \mathcal F_{s}}\\ &\le N^{-7} +\sum_{\ell=0}^{4|D_n^-|} e^{-\frac 14 \alpha \kappa \cdot \frac 12 d_n }\\ &\le (\log N)^{2-\frac 18 \alpha C} \end{align*} for $n$ sufficiently large, where the second inequality follows by Lemma~\ref{lem:bulktail} and since $d_n>d'_n$. Since $d'_n =2^{-6}\alpha d_n$, by the same argument, for $n$ sufficiently large, $\p{\tilde \zeta^{n,i}_t \le -d'_n+2 \Big| \mathcal F_{s}} \le (\log N)^{2-2^{-9} \alpha^2 C}$. \end{proof} \section{Event $E_1$ occurs with high probability} \label{sec:eventE1} In this section and the following three sections, we will prove Proposition~\ref{prop:eventE}. We begin with some notation which will be used throughout the rest of the article. For $h:\frac{1}{n}\mathbb{Z} \rightarrow \R$ and $x\in \frac1n \mathbb{Z}$, let $$\nabla _n h(x)=n\left(h(x+n^{-1} )-h(x)\right)$$ and let $$\Delta _n h(x)=n^2\left(h(x+n^{-1} )-2h(x)+h(x-n^{-1} )\right).$$ Define $f:\R \rightarrow \R$ by letting \begin{equation} \label{eq:fdef} f(u)=u(1-u)(2u-1+\alpha). \end{equation} Recall the definition of the event $E_1$ in~\eqref{eq:eventE1}. In this section, we will prove the following result (along with some technical lemmas which will be used in later sections). \begin{prop} \label{prop:eventE1} For $t\ge 0$, let $(u^n_{t,t+s})_{s\ge 0}$ denote the solution of \begin{equation} \label{eq:unttsdef} \begin{cases} \partial_s u^n_{t,t+s}=\tfrac 12 m \Delta_n u^n_{t,t+s}+s_0 f(u^n_{t,t+s}) \quad \text{for }s>0, \\ u^n_{t,t}=p^n_t. \end{cases} \end{equation} For $c_2>0$, define the event \begin{equation} \label{eq:E1'defn} E '_1=E_1 \cap \left\{ \sup_{s\in [0,\gamma_n],x\in \frac 1n \mathbb{Z}}|u^n_{t,t+s}(x)-g(x-\mu^n_t -\nu s)|\le e^{-(\log N)^{c_2}}\; \forall t\in [\log N,N^2]\right\}. \end{equation} Suppose for some $a_1>1$, $N\ge n^{a_1}$ for $n$ sufficiently large. For $\ell \in \mathbb{N}$, for $b_1,c_2>0$ sufficiently small and $b_2>0$, if condition~\eqref{eq:conditionA} holds then for $n$ sufficiently large, $$ \p{(E'_1)^c}\le \left( \frac n N \right)^\ell . $$ \end{prop} From now on in this section, we will assume for some $a_1>1$, $N\ge n^{a_1}$ for $n$ sufficiently large. We will need some more notation; we use notation similar to~\cite{durrett/fan:2016}. For $f_1,f_2:\frac1n \mathbb{Z} \rightarrow \R$, write $$\langle f_1,f_2 \rangle _n:=n^{-1} \sum_{w\in \frac1n \mathbb{Z}} f_1(w) f_2(w).$$ Let $(X^n_t)_{t\geq 0}$ denote a continuous-time simple symmetric random walk on $\frac1n \mathbb{Z}$ with jump rate $n^2$. For $z\in \frac1n \mathbb{Z}$, let $\mathbf P_z(\cdot):=\mathbb{P} \left(\cdot \left| X_0^n=z \right. \right)$. Then for $z, w\in \frac1n \mathbb{Z} $ and $0\le s \le t$, let \begin{equation} \label{eq:phitzdefq} \phi_s^{t,z}(w):= n \mathbf P_z \left( X^n_{m(t-s)}=w \right) . \end{equation} For $a\in \R$, $z,w \in \frac 1n \mathbb{Z}$ and $0\le s \le t$, let \begin{equation} \label{eq:phiadef} \phi^{t,z,a}_s(w)=e^{-a(t-s)}\phi_s^{t,z}(w). \end{equation} Let $(u^n_t)_{t\geq 0}$ denote the solution of \begin{equation} \label{eq:undef} \begin{cases} \partial_t u^n_t &= \tfrac 12 m \Delta_n u^n_t +s_0 f(u^n_t) \quad \text{for }t>0,\\ u^n_0 &= p^n_0. 
\end{cases} \end{equation} We will prove in Proposition~\ref{prop:pnun} below that if $t$ is not too large, $p^n_t$ and $u^n_t$ are close with high probability. By the comparison principle, $u^n_t\in [0,1]$. Since $\partial_s \phi^{t,z}_s +\frac 12 m \Delta_n \phi^{t,z}_s =0$ for $s\in (0,t)$, we have that for $a\in \R$, $z\in \frac 1n \mathbb{Z}$ and $t\ge 0$, by integration by parts, \begin{align*} &\langle u^n_t , \phi^{t,z,a}_t \rangle_n\\ &= \langle u^n_0, \phi^{t,z,a}_0 \rangle_n +\int_0^t \langle u^n_s, \partial_s \phi^{t,z,a}_s \rangle_n ds +\int_0^t \langle u^n_s, \tfrac 12 m \Delta_n \phi^{t,z,a}_s \rangle_n ds +s_0 \int_0^t \langle f(u^n_s), \phi^{t,z,a}_s \rangle_n ds\\ &= e^{-at} \langle p_0^n, \phi_0^{t,z}\rangle _n +\int_0^t e^{-a(t-s)}\langle s_0 f(u^n_s)+a u^n_s,\phi_s^{t,z}\rangle_n ds. \end{align*} Therefore, since $\langle u^n_t , \phi^{t,z,a}_t \rangle_n =u^n_t(z)$, it follows that for $a\in \R$, $z\in \frac 1n \mathbb{Z}$ and $t\ge 0$, \begin{equation} \label{eq:ungreena} u^n_t(z)=e^{-at} \langle p_0^n, \phi_0^{t,z}\rangle _n +\int_0^t e^{-a(t-s)}\langle s_0 f(u^n_s)+a u^n_s,\phi_s^{t,z}\rangle_n ds. \end{equation} Note that by~\eqref{eq:ungreena} with $a=-(1+\alpha)s_0 $, since $f(u)\le (1+\alpha)u$ for $u\in [0,1]$, \begin{equation} \label{eq:uneasybound} u^n_t(z)\le e^{(1+\alpha)s_0 t}\langle p^n_0, \phi^{t,z}_0 \rangle _n. \end{equation} In this section, alongside proving Proposition~\ref{prop:eventE1}, we will prove some preliminary tracer dynamics results which will be used in later sections, so we need some notation for tracer dynamics with an arbitrary initial condition. Take $\mathcal I_0 \subseteq \{(x,i):\xi^n_0(x,i)=1\}.$ Then for $t\ge 0$, let \begin{equation} \label{eq:etandefn} \eta^n_t(x,i)=\mathds{1}_{(\zeta^{n,t}_t(x,i),\theta^{n,t}_t(x,i))\in \mathcal I_0} \quad\text{for }x\in \tfrac 1n \mathbb{Z}, \, i\in [N], \end{equation} i.e.~$\eta^n_t(x,i)=1$ if and only if the $i^{\text{th}}$ individual at $x$ at time $t$ is descended from an individual in $\mathcal I_0$ at time 0. For $t\ge 0$ and $x\in \frac 1n \mathbb{Z}$, let \begin{equation} \label{eq:qndef} q^n_t(x)=\frac 1N \sum_{i=1}^N \eta^n_t(x,i) , \end{equation} i.e.~the proportion of individuals at $x$ at time $t$ which are descended from individuals in $\mathcal I_0$ at time 0. Let $(v^n_t)_{t\geq 0}$ denote the solution of \begin{equation} \label{eq:vndef} \begin{cases} \partial_t v^n_t &= \tfrac{1}{2}m \Delta_n v^n_t +s_0 v^n_t (1-u^n_t)(2u^n_t-1+\alpha ) \quad \text{for }t>0,\\ v^n_0 &= q^n_0. \end{cases} \end{equation} We will prove in Proposition~\ref{prop:pnun} below that if $t$ is not too large, $q^n_t$ and $v^n_t$ are close with high probability. Note that by the comparison principle, $0\le v^n_t \le u^n_t$. Moreover, for $a\in \R$, $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, by the same argument as for~\eqref{eq:ungreena}, \begin{equation} \label{eq:vngreena} v^n_t(z) = e^{-at}\langle q^n_0, \phi^{t,z}_0 \rangle_n +\int_0^t e^{-a(t-s)}\langle v^n_s (s_0 (1-u^n_s)(2u^n_s-1+\alpha )+a), \phi^{t,z}_s \rangle_n ds. \end{equation} For $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, by~\eqref{eq:vngreena} with $a=-(1+\alpha)s_0$ and since $(1-u)(2u-1+\alpha )\le 1+\alpha$ for $u\in [0,1]$, \begin{align} \label{eq:veasybound} v^n_t(z) &\le e^{(1+\alpha)s_0 t}\langle q^n_0, \phi^{t,z}_0\rangle _n. \end{align} The following result says that if $t$ is not too large, $|p^n_t -u^n_t|$ and $|q^n_t -v^n_t|$ are small with high probability; the proof is postponed to Section~\ref{subsec:pnunproof}. 
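Before stating it, we record an illustrative aside. Substituting $u(t,x)=g(x-\nu t)$, with $g(x)=(1+e^{\kappa x})^{-1}$ as in~\eqref{eq:gdefn}, into the travelling wave equation $-\nu g'=\tfrac 12 m g''+s_0 f(g)$, and using $g'=-\kappa g(1-g)$ and $g''=\kappa^2 g(1-g)(1-2g)$, gives after dividing by $s_0 g(1-g)$
$$
\frac{\nu\kappa}{s_0}=\frac{m\kappa^2}{2s_0}(1-2g)+2g-1+\alpha,
$$
which holds for all $x$ if and only if $m\kappa^2=2s_0$ and $\nu\kappa=\alpha s_0$, consistently with~\eqref{eq:kappanu}. The following Python sketch (a minimal illustration with small, hypothetical parameter values, not the asymptotic regime of the paper, and not used anywhere in the proofs) integrates~\eqref{eq:undef} by forward Euler and checks that the resulting front moves at a speed close to $\nu$.
\begin{verbatim}
import numpy as np

# Forward-Euler integration of (eq:undef) on a finite window of (1/n)Z.
m, s0, alpha, n = 1.0, 1.0, 0.5, 20    # hypothetical parameter values
kappa = np.sqrt(2 * s0 / m)            # m*kappa^2 = 2*s0
nu = alpha * s0 / kappa                # kappa*nu = alpha*s0

x = np.arange(-20.0, 60.0, 1.0 / n)    # lattice points
u = 1.0 / (1.0 + np.exp(kappa * x))    # start from the wave profile g

T = 20.0
dt = 0.2 / (m * n ** 2)                # stable step for the discrete Laplacian
for _ in range(int(T / dt)):
    lap = n ** 2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1))
    u += dt * (0.5 * m * lap + s0 * u * (1.0 - u) * (2.0 * u - 1.0 + alpha))
    u[0], u[-1] = 1.0, 0.0             # pin the window edges at 1 and 0

front = x[np.argmin(np.abs(u - 0.5))]  # position where u crosses 1/2
print(front / T, nu)                   # measured front speed vs predicted nu
\end{verbatim}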
\begin{prop} \label{prop:pnun} Suppose $c_3>0$ and $\ell \in \mathbb{N}$. Then there exists $c_4=c_4(c_3,\ell)\in (0,1/2)$ such that for $n$ sufficiently large, for $T\leq 2(\log N)^{c_4}$, \begin{align*} \p{\sup_{x\in \frac1n \mathbb{Z}, |x|\leq N^5}\sup_{t\in [0,T]}|p^n_{t}(x)-u^n_t(x)| \geq \left(\frac{n}{N}\right)^{1/2-c_3}} &\leq \left( \frac{n}{N}\right)^\ell \end{align*} and for $t\le 2(\log N)^{c_4}$, $$ \p{\sup_{x\in \frac 1n \mathbb{Z}, |x|\le N^5} |q^n_t(x)-v^n_t(x)|\ge \left(\frac n N \right)^{1/2-c_3}}\le \left(\frac n N \right)^{\ell}. $$ For $k\in \mathbb{N}$ with $k\ge 2$, there exists a constant $C_1=C_1(k)<\infty$ such that for $t\geq 0$, \begin{align} \label{eq:gronwall1stat} \sup_{x\in \frac1n \mathbb{Z}} \E{ |p^n_{t}(x)-u^n_t(x)|^k} &\leq C_1 \left(\frac{n^{k/2}t^{k/4}}{N^{k/2}}+N^{-k}\right) e^{C_1 t^k}. \end{align} \end{prop} We also need to control $p^n_t(x)$ when $x$ is not in the interval $ [-N^5,N^5]$ covered by Proposition~\ref{prop:pnun}. \begin{lemma} \label{lem:p01} For $n$ sufficiently large, if $p^n_0(x)=0$ $\forall x\geq N$ and $p^n_0(x)=1$ $\forall x\leq -N$ then \begin{align*} \mathbb{P}\left(\exists t\in [0,2N^2],\, x\in \tfrac1n \mathbb{Z} \cap[ N^5, \infty): p^n_t(x)>0 \right) &\leq e^{-N^5}\\ \text{and} \qquad \mathbb{P}\left(\exists t\in [0,2N^2],\, x\in \tfrac1n \mathbb{Z}\cap (-\infty, -N^5]: p^n_t(x)<1 \right) &\leq e^{-N^5}. \end{align*} \end{lemma} \begin{proof} For $x\in \frac1n \mathbb{Z}$, let $$ \tau_x :=\inf \{t\geq 0: p^n_t(x)>0\}. $$ Let $(E_\ell)_{\ell =1}^\infty$ be a sequence of i.i.d.~random variables with $E_1\sim \text{Exp}(mr_n N^2)$. For $x>N$, $\tau_x$ occurs at a jump time after time $\tau_{x-n^{-1}}$ in $\mathcal R^{x,i,x-n^{-1},j}$ for some $i,j\in [N]$. Therefore we can couple the process $(\xi^n_t(x,i))_{x\in \frac1n \mathbb{Z},\, i\in [N], t\geq 0}$ with $(E_\ell)_{\ell = 1}^\infty $ in such a way that for each $\ell \in \mathbb{N}$, $$ \tau_{N+\ell n^{-1}}- \tau_{N+(\ell -1) n^{-1}}\geq E_\ell . $$ It follows that $$ \tau_{N^5}\geq \sum_{\ell =1}^{n(N^5-N)}E_\ell . $$ Therefore, letting $Y_n$ denote a Poisson random variable with mean $2mr_n N^4$, we have that \begin{align*} \mathbb{P}\left(\tau_{N^5}\leq 2N^2\right) &\leq \mathbb{P}\left(\sum_{\ell =1}^{n(N^5-N)}E_\ell\leq 2 N^2\right)\\ &= \mathbb{P}\left(Y_n \geq n(N^5-N) \right). \end{align*} By Markov's inequality, and then since $r_n =\frac12 n^2 N^{-1}$, \begin{align*} \mathbb{P}\left(Y_n \geq n(N^5-N) \right) &\leq e^{-n(N^5-N)}\E{e^{Y_n}} \leq e^{-n(N^5-N)}e^{ mn^2 N^3 (e-1)} \leq e^{-N^5} \end{align*} for $n$ sufficiently large, since $N\geq n$. Therefore for $n$ sufficiently large, $$\mathbb{P}\left(\tau_{N^5}\leq 2 N^2\right) \leq e^{-N^5}.$$ Letting $ \sigma_x :=\inf \{t\geq 0: p^n_t(x)<1\}$ for $x\in \frac1n \mathbb{Z}$, by the same argument we have that $$\mathbb{P}\left(\sigma_{-N^5}\leq 2 N^2\right) \leq e^{-N^5}$$ for $n$ sufficiently large, which completes the proof. \end{proof} Recall from~\eqref{eq:gdefn} and~\eqref{eq:kappanu} that $g(x)=(1+e^{\kappa x})^{-1}$, and recall the definition of $f$ in~\eqref{eq:fdef}. Note that $u(t,x):=g(x-\nu t)$ is a travelling wave solution of the partial differential equation $$ \partial_t u = \tfrac 12 m \Delta u +s_0 f(u). $$ Since $\alpha\in (0,1)$, we have that $f(0)=f(1)=0$, $f(u)<0$ for $u\in (0,\frac12 (1-\alpha))$, $f(u)>0$ for $u\in (\frac12 (1-\alpha),1)$, $f'(0)<0$ and $f'(1)<0$. This allows us to apply results from~\cite{fife/mcleod:1977} as follows. 
For an initial condition $u_0 :\R\rightarrow [0,1]$, let $u(t,x)$ denote the solution of \begin{equation} \label{eq:upde} \begin{cases} \partial_t u &=\tfrac 12 m \Delta u +s_0 f(u) \quad \text{for }t>0,\\ u(0,\cdot)&=u_0. \end{cases} \end{equation} \begin{lemma} \label{lem:x0approx} There exist constants $C_2<\infty$ and $c_5>0$ such that for $\epsilon\leq c_5$, if $u_0$ is piecewise continuous with $0\leq u_0\leq 1$ and, for some $z_0\in \R$, $|u_0(z)-g(z-z_0)|\leq \epsilon$ $\forall z\in \R$, then $$|u(t,x)-g(x-\nu t-z_0)|\leq C_2 \epsilon \quad \forall x\in \R,\; t>0.$$ \end{lemma} \begin{proof} The result follows directly from Lemma~4.2 in~\cite{fife/mcleod:1977} and its proof. \end{proof} \begin{prop} \label{prop:expconvtog} There exist constants $c_6>0$ and $C_3<\infty$ such that if $u_0$ is piecewise continuous with $0\leq u_0\leq 1$ and $|u_0(z)-g(z)|\leq c_6$ $\forall z \in \R$, then for some $z_0\in \R$ with $|z_0|\leq 1$, $$ |u(t,x)-g(x-\nu t-z_0)|\leq C_3 e^{-c_6 t} \quad \forall x\in \R, \; t>0. $$ \end{prop} This is a slight modification of Theorem~3.1 in~\cite{fife/mcleod:1977} (to ensure that $C_3$ and $c_6$ do not depend on the initial condition $u_0$, as long as $\|u_0-g\|_\infty $ is sufficiently small); we postpone the proof to Appendix~\ref{sec:append}. The next lemma says that if the initial condition $p^n_0$ is not too rough, then $u^n_t$ is close to a solution of~\eqref{eq:upde}. \begin{lemma} \label{lem:unu} Let $(u_t)_{t\geq 0}$ denote the solution of \begin{equation} \label{eq:ueq} \begin{cases} \partial_t u_t &= \tfrac 12 m \Delta u_t +s_0 f(u_t) \quad \text{for }t>0, \\ u_0&=\bar{p}^n_0, \end{cases} \end{equation} for some $\bar{p}^n_0:\R\rightarrow [0,1]$ with $\bar{p}^n_0(y)=p^n_0(y)$ $\forall y\in \frac 1n \mathbb{Z}$. There exists a constant $C_4<\infty$ such that for $T\geq 1$, $$ \sup_{t\in [0,T],\, x\in \frac1n \mathbb{Z}} |u_t^n(x)-u_t(x)|\leq \left(C_4 n^{-1/3}+\sup_{z_1,z_2\in\R,|z_1-z_2|\leq n^{-1/3}}|\bar{p}^n_0(z_1)-\bar{p}^n_0(z_2)| \right)T^2 e^{(1+\alpha)s_0 T}. $$ \end{lemma} \begin{proof} For $t\geq 0$ and $z\in \frac1n \mathbb{Z}$, by~\eqref{eq:ungreena} and since $p^n_0(y)=\bar{p}^n_0(y)$ $\forall y\in \frac 1n \mathbb{Z}$, \begin{equation*} u^n_t(z)=\langle \bar{p}_0^n,\phi^{t,z}_0\rangle _n+ s_0 \int_0^t \langle f(u^n_s), \phi^{t,z}_s \rangle _n ds. \end{equation*} Let $G_t(x)=\frac{1}{\sqrt{2\pi t}}e^{-x^2/(2t)}$; then for $z\in \R$ and $t> 0$, \begin{equation} \label{eq:lemunu*} u_t(z)=G_{mt}\ast \bar{p}^n_0(z)+s_0 \int_0^t G_{m(t-s)}\ast f(u_s)(z)ds. \end{equation} Letting $(B_t)_{t\geq 0}$ denote a Brownian motion, and by the definition of $\phi^{t,z}_s$ in~\eqref{eq:phitzdefq}, it follows that for $z\in \frac1n \mathbb{Z}$ and $t> 0$, \begin{align} \label{eq:unminusu0} &|u_t^n(z)-u_t(z)| \notag\\ & \leq \left|\Esubb{z}{\bar{p}^n_0(X^n_{mt})}-\Esub{z}{\bar{p}^n_0(B_{mt})} \right| +s_0 \int_0^t \left|\Esubb{z}{f(u^n_s(X^n_{m(t-s)}))}-\Esub{z}{f(u_s(B_{m(t-s)}))} \right| ds. \end{align} By a Skorokhod embedding argument (see e.g.~Theorem~3.3.3 in~\cite{lawler/limic:2010}), for $n$ sufficiently large, $(X^n_t)_{t\ge 0}$ and $(B_t)_{t\ge 0}$ can be coupled in such a way that $X^n_0=B_0$ and for $t\ge 0$, \begin{align} \label{eq:couplingBM0} \p{|X^n_{mt}-B_{mt}|\geq n^{-1/3}} \leq (t+1)n^{-1/2}. 
\end{align} Since $\bar{p}_0^n \in [0,1]$, it follows that \begin{align} \label{eq:(*)unuproof0} \left|\Esubb{z}{\bar{p}^n_0(X^n_{mt})}-\Esub{z}{\bar{p}^n_0(B_{mt})} \right| &\leq (t+1)n^{-1/2}+\sup_{z_1,z_2\in\R,|z_1-z_2|\leq n^{-1/3}}|\bar{p}^n_0(z_1)-\bar{p}^n_0(z_2)| . \end{align} For the second term on the right hand side of~\eqref{eq:unminusu0}, note that $\sup_{v\in [0,1]}|f(v)|<1$ and, since $f'(u)=6u(1-u)-1+\alpha (1-2u)$, we have $\sup_{v\in [0,1]}|f'(v)|=1+\alpha$. Therefore, using the triangle inequality and then by the same coupling argument as for~\eqref{eq:(*)unuproof0}, for $s\in [0,t]$, \begin{align} \label{eq:(A)unapprox} &\left|\Esubb{z}{f(u^n_s(X^n_{m(t-s)}))}-\Esub{z}{f(u_s(B_{m(t-s)}))} \right| \notag \\ &\leq \left|\Esubb{z}{f(u^n_s(X^n_{m(t-s)}))}-\Esubb{z}{f(u_s(X^n_{m(t-s)}))} \right| +\left|\Esubb{z}{f(u_s(X^n_{m(t-s)}))}-\Esub{z}{f(u_s(B_{m(t-s)}))} \right| \notag \\ & \leq (1+\alpha) \sup_{x\in \frac1n \mathbb{Z}}|u^n_s(x)- u_s(x)| +2(t+1)n^{-1/2}+(1+\alpha) \|\nabla u_s\|_\infty n^{-1/3}. \end{align} We now bound $\|\nabla u_s\|_\infty$. For $t> 0$ and $x\in \R$, by differentiating both sides of~\eqref{eq:lemunu*}, \begin{align} \label{eq:dut0} \nabla u_t(x) &=G'_{mt}\ast \bar{p}^n_0(x)+s_0 \int_0^t G'_{m(t-s)}\ast f(u_s)(x)ds . \end{align} For the first term on the right hand side, since $\bar{p}^n_0 \in [0,1]$, \begin{align*} |G'_{mt}\ast \bar{p}^n_0(x)| &\le \int_{-\infty}^\infty |G'_{mt}(z)|dz = 2G_{mt}(0) =2(2\pi m t)^{-1/2}. \end{align*} For the second term on the right hand side of~\eqref{eq:dut0}, since $\sup_{v\in [0,1]}|f(v)|<1$, \begin{align*} \left|\int_0^t G'_{m(t-s)}\ast f(u_s)(x)ds\right| &\leq \int_0^t \int_{-\infty}^\infty |G'_{m(t-s)}(z)| dz ds =4 (2\pi m)^{-1/2} t^{1/2}. \end{align*} Hence by~\eqref{eq:dut0}, for $t> 0$, $$ \|\nabla u_t \|_\infty \le (2\pi m )^{-1/2}(2t^{-1/2}+4s_0 t^{1/2}). $$ Substituting into~\eqref{eq:(A)unapprox} and then into~\eqref{eq:unminusu0}, and using~\eqref{eq:(*)unuproof0}, we now have that for $t>0$ and $z\in \frac 1n \mathbb{Z}$, \begin{align*} &|u_t^n(z)-u_t(z)| \\ & \leq \left(t+1\right) n^{-1/2}+\sup_{z_1,z_2\in \R,|z_1-z_2|\leq n^{-1/3}}|\bar{p}^n_0(z_1)-\bar{p}^n_0(z_2)|\\ & \, +s_0 \int_0^t \bigg((1+\alpha)\sup_{x\in \frac1n \mathbb{Z}}|u^n_s(x)-u_s(x)| +2(t+1)n^{-1/2} + 2(2\pi m )^{-1/2} (2s^{-1/2}+4s_0 s^{1/2}) n^{-1/3}\bigg) ds. \end{align*} Hence there exists a constant $C_4<\infty$ such that for $T\geq 1$, for $t\in [0,T]$, \begin{align*} &\sup_{x\in \frac1n \mathbb{Z}}|u_t^n(x)-u_t(x)|\\ &\leq \left(C_4 n^{-1/3}+\sup_{z_1,z_2\in\R ,|z_1-z_2|\leq n^{-1/3}}|\bar{p}^n_0(z_1)-\bar{p}^n_0(z_2)| \right)T^2 +(1+\alpha)s_0 \int_0^t \sup_{x\in \frac1n \mathbb{Z}}|u^n_s(x)-u_s(x)| ds. \end{align*} The result follows by Gronwall's inequality. \end{proof} The following lemma will be used in the proof of Proposition~\ref{prop:eventE1} to show that with high probability, $\sup_{|z_1-z_2|\le n^{-1/3}}|p^n_t(z_1)-p^n_t(z_2)|$ is small at large times $t$, which will allow us to use Lemma~\ref{lem:unu}. \begin{lemma} \label{lem:nablaunbound} There exists a constant $C_5<\infty$ such that \begin{equation} \label{eq:phidiffbound} n\langle 1,|\phi^{t,z+n^{-1}}_0-\phi^{t,z}_0|\rangle_n \le C_5 t^{-1/2} \quad \forall \, t>0, z\in \tfrac 1n \mathbb{Z}, \end{equation} and $\sup_{t\ge 1, x \in \frac 1n \mathbb{Z}}|\nabla_n u^n_t (x)|\le C_5$. 
\end{lemma} \begin{proof} For $t> 0$, $z\in \frac 1n \mathbb{Z}$ and $t_0\in (0,t]$, by~\eqref{eq:ungreena}, \begin{equation} \label{eq:**pnun} \nabla_n u^n_t(z)=n\langle u^n_{t-t_0},\phi^{t_0,z+n^{-1}}_0-\phi^{t_0,z}_0\rangle _n+ ns_0 \int_0^{t_0} \langle f(u^n_{t-t_0+s}), \phi^{t_0,z+n^{-1}}_s-\phi^{t_0,z}_s \rangle _n ds. \end{equation} Since $u^n_{t-t_0}\in [0,1]$, we have that \begin{align} \label{eq:(*)pnun} |n\langle u^n_{t-t_0},\phi^{t_0,z+n^{-1}}_0-\phi^{t_0,z}_0\rangle _n| &\leq n \langle 1, |\phi^{t_0,z+n^{-1}}_0-\phi^{t_0,z}_0|\rangle_n . \end{align} Let $(S_j)_{j=0}^\infty $ be a discrete-time simple symmetric random walk on $\mathbb{Z}$ with $S_0=0$. By Proposition~2.4.1 in~\cite{lawler/limic:2010} (which follows from the local central limit theorem), there exists a constant $K_1<\infty$ such that for $j\in \mathbb{N}$, $$ \sum_{y\in \mathbb{Z}} \left|\p{S_j =y-1}-\p{S_j=y} \right|\leq K_1 j^{-1/2}. $$ Let $(R_s)_{s\geq 0}$ denote a Poisson process with rate $1$. Then by the definition of $\phi^{t,z}_s$ in~\eqref{eq:phitzdefq}, and since $(X^n_s)_{s\geq 0}$ jumps at rate $n^2$, \begin{align} \label{eq:lemnabla*} n\langle 1, |\phi_0^{t_0, z+n^{-1}}-\phi_0^{t_0,z} |\rangle_n &= n \sum_{y\in \frac1n \mathbb{Z}}\left|\psubb{0}{X^n_{mt_0}=y-n^{-1}}-\psubb{0}{X^n_{mt_0}=y} \right| \notag \\ &\leq n \sum_{y\in \frac1n \mathbb{Z}}\sum_{j=0}^\infty \p{R_{mn^2 t_0}=j}\left|\p{S_j=ny-1}-\p{S_j=ny} \right| \notag \\ &\leq n \sum_{j=1}^\infty \p{R_{mn^2 t_0}=j}K_1 j^{-1/2}+2n \p{R_{mn^2t_0}=0}. \end{align} By Markov's inequality, and since $R_{mn^2 t_0}\sim\text{Poisson}(mn^2 t_0)$, \begin{align*} \p{R_{m n^2t_0}\leq \tfrac 12 mn^2 t_0} =\p{e^{-R_{mn^2 t_0}\log 2} \ge e^{-\frac 12 mn^2 t_0 \log 2}} &\le e^{\frac 12 mn^2 t_0 \log 2}e^{mn^2 t_0 (e^{-\log 2}-1)}\\ &=e^{-\frac 12 mn^2 t_0(1-\log 2)}. \end{align*} Therefore, by substituting into~\eqref{eq:lemnabla*}, \begin{align} \label{eq:probdiff} n\langle 1, |\phi_0^{t_0, z+n^{-1}}-\phi_0^{t_0,z} |\rangle_n &\leq n\left((K_1+2)\p{R_{mn^2t_0}\leq \tfrac 12 m n^2 t_0}+K_1(\tfrac 12 m n^2 t_0)^{-1/2} \right)\notag \\ &\leq t_0^{-1/2}\left((K_1+2)(n^2 t_0)^{1/2} e^{-\frac 12mn^2 t_0(1-\log 2)}+\sqrt{2} m^{-1/2}K_1\right) \notag \\ &\leq K_2 t_0^{-1/2}, \end{align} where $K_2=(K_1+2)\sup_{s\geq 0}(s^{1/2} e^{-\frac 12 m(1-\log 2) s})+\sqrt{2}m^{-1/2}K_1<\infty $. This completes the proof of~\eqref{eq:phidiffbound}. Since $|f(u^n_{t-t_0+s})|\leq 1$ for $s\in [0,t_0]$, and then by~\eqref{eq:probdiff}, \begin{align*} \left| n s_0 \int_0^{t_0} \langle f(u^n_{t-t_0+s}), \phi^{t_0,z+n^{-1}}_s-\phi^{t_0,z}_s \rangle _n ds\right| &\leq s_0 \int_0^{t_0} n \langle 1, |\phi_0^{t_0-s, z+n^{-1}}-\phi_0^{t_0-s,z} |\rangle_n ds\\ &\le 2s_0 K_2 t_0^{1/2}. \end{align*} Therefore, by~\eqref{eq:**pnun},~\eqref{eq:(*)pnun} and~\eqref{eq:probdiff}, for $t\ge 1$ and $t_0 \in (0,t]$ we have $\sup_{x\in \frac 1n \mathbb{Z}}|\nabla_n u^n_t(x)|\leq K_2(t_0^{-1/2}+2s_0 t_0^{1/2})$, and the result follows by taking $t_0=1$. \end{proof} We will use the following easy lemma repeatedly in the rest of this section, and in Section~\ref{sec:eventE2}. \begin{lemma} \label{lem:Xnmgf} For $a\in \R$ with $|a|\le n$ and $t\geq 0$, \begin{align*} \Esubb{0}{e^{aX^n_{mt}}} &=e^{\frac 12 m a^2 t +\mathcal O (ta^3 n^{-1})}. \end{align*} \end{lemma} \begin{proof} Let $(R^+_s)_{s\geq 0}$ and $(R^-_s)_{s\geq 0}$ be independent Poisson processes with rate $1$. 
Then for $a\in \R$, since $(X^n_t)_{t\geq 0}$ is a continuous-time simple symmetric random walk on $\frac 1n \mathbb{Z}$ with jump rate $n^2$, \begin{align*} \Esubb{0}{e^{aX^n_{mt}}} &=\E{e^{an^{-1} (R^+_{m n^2 t/2}-R^-_{m n^2 t/2})}}\\ &=\exp(\tfrac 12 m n^2 t(e^{an^{-1}}-1))\exp(\tfrac 12 m n^2 t(e^{-an^{-1}}-1))\\ &=\exp\left(\tfrac 12 m n^2 t\left(an^{-1}+\tfrac{1}{2}a^2 n^{-2}+\mathcal O \left(a^3 n^{-3} \right)- an^{-1}+\tfrac{1}{2}a^2 n^{-2}+\mathcal O \left(a^3 n^{-3} \right)\right)\right)\\ &=e^{\frac 12 m a^2 t +\mathcal O (ta^3 n^{-1})}, \end{align*} where the second line follows since $R^+_{m n^2t/2}$ and $R^-_{m n^2t/2}$ are both Poisson distributed with mean $\frac 12 m n^2 t$. \end{proof} The following two lemmas will allow us to control $p^n_t(x)$ for large $x$. The first lemma gives us an upper bound. \begin{lemma} \label{lem:untailinit} There exists a constant $c_7\in (0,1)$ such that for $n$ sufficiently large, the following holds. Suppose that $p^n_0(x)=0$ $\forall x \geq N^6$. Take $c\in (0,1/2)$. Suppose for some $R >0$ with $R \left(\frac{n}{N}\right)^{1/2-c}\leq c_7$ that \begin{equation} \label{eq:pexpupper} p_0^n(x)\leq 3 e^{-\kappa(1-(\log N)^{-2}) x }+R\left(\frac{n}{N}\right)^{1/2-c} \quad \forall x\in \tfrac1n \mathbb{Z}, \end{equation} and that for some $T\in (1,\log N]$, $\sup_{y\in \frac 1n \mathbb{Z}, |y|\leq N ,\, t\in [0,T]} |u^n_t(y)-g(y-\nu t)| \leq c_7 (\log N)^{-2}$. Then for $t\in [0,T]$, \begin{equation*} \label{eq:untailinit} u^n_t(x)\leq \tfrac 43 \left(3 e^{-\kappa(1-(\log N)^{-2}) (x-\nu t) }+R\left(\frac{n}{N}\right)^{1/2-c}\right)\quad \forall x\in \tfrac1n \mathbb{Z}, \end{equation*} and for $t\in[ 1,T]$, $$ u^n_t(x)\leq (1-c_7 (\log N)^{-2}) 3 e^{-\kappa(1-(\log N)^{-2}) (x-\nu t) }+(1-c_7)R \left(\frac{n}{N}\right)^{1/2-c}\quad \forall x\in \tfrac1n \mathbb{Z}. $$ \end{lemma} \begin{proof} Take $d\in (0,1/3)$ such that \begin{equation} \label{eq:ddef} d< \min\left(\tfrac 1 {10} (2-\alpha)s_0,\tfrac 1 4 e^{-(1-\alpha)s_0}(1-\alpha) s_0\right). \end{equation} Suppose that \begin{equation} \label{eq:A1nNsmall} R \left(\frac{n}{N}\right)^{1/2-c}<\tfrac 1{12}(1+d)^{-1} e^{-(1-\alpha)s_0}(1-\alpha)s_0 , \end{equation} and that $T\in (1,\log N]$ with \begin{align} \label{eq:nearg} \sup_{y\in \frac 1n \mathbb{Z} ,|y|\leq N,\, t\in [0,T]} |u^n_t(y)-g(y-\nu t)| & <\tfrac 1{73} e^{-5s_0 }(2-\alpha) (\log N)^{-2}. \end{align} Let $\theta_N=(1-(\log N)^{-2})\kappa$, and let $$\tau =T \wedge \inf\left \{t\geq 0:\exists \, x \in \tfrac1n \mathbb{Z} \text{ s.t. }u^n_t(x)\geq (1+d(\log N)^{-2})3 e^{-\theta_N (x-\nu t) }+(1+d)R\left(\frac{n}{N}\right)^{1/2-c}\right\}. $$ By~\eqref{eq:uneasybound}, and then since $p_0^n(x)=0$ $\forall x\geq N^6$, for $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, \begin{align} \label{eq:uncrudebound} u^n_t(z)\leq e^{(1+\alpha)s_0 t}\langle p_0^n, \phi_0^{t,z}\rangle _n &\leq e^{(1+\alpha)s_0 t}\psubb{z}{X^n_{mt}\leq N^6} \notag\\ &=e^{(1+\alpha)s_0 t}\psubb{0}{X^n_{mt}\geq z-N^6} \notag \\ &\leq e^{(1+\alpha)s_0 t}\Esubb{0}{e^{2\theta_N X^n_{mt}}}e^{-2\theta_N z+2\theta_N N^6} \notag\\ &\leq e^{(2s_0 +3m\theta_N^2 )t}e^{-2\theta_N z+2\theta_N N^6} \end{align} for $n$ sufficiently large, by Markov's inequality and Lemma~\ref{lem:Xnmgf}. Therefore, since $u^n_t(x)\in [0,1]$, there exists $N'<\infty$ such that $$ \tau =T \wedge \min_{x\in \frac1n \mathbb{Z} \cap [0,N']} \inf \left \{t\geq 0:u^n_t(x) \geq (1+d(\log N)^{-2})3 e^{-\theta_N (x-\nu t) }+(1+d)R \left(\frac{n}{N}\right)^{1/2-c}\right\}. 
$$ Hence (by continuity of $u^n_t(x)$ for each $x\in \frac 1n \mathbb{Z}$ and by our assumption on the initial condition in~\eqref{eq:pexpupper}) we have that $\tau>0$. Moreover, if $\tau<T$ then there exists $x\in \frac1n \mathbb{Z}\cap [0, N']$ such that \begin{equation} \label{eq:untau*} u^n_\tau(x) \geq (1+d(\log N)^{-2})3 e^{-\theta_N (x-\nu \tau) }+(1+d)R \left(\frac{n}{N}\right)^{1/2-c}. \end{equation} Note that for $u\in [0,1]$, \begin{equation} \label{eq:f+bound} f(u)+(1-\alpha)u=-2u^3+(3-\alpha)u^2\leq (3-\alpha)u^2. \end{equation} Now by~\eqref{eq:ungreena}, for $0<t\leq \tau$ and $x\in \frac 1n \mathbb{Z}$, for $0<t_0\leq t\wedge 1$, \begin{align} \label{eq:pexpupperA} u^n_t(x) &= e^{-(1-\alpha)s_0 t_0}\langle u^n_{t-t_0},\phi_0^{t_0,x} \rangle _n +s_0 \int_0^{t_0} e^{-(1-\alpha)s_0 (t_0-s)}\langle f(u^n_{t-t_0+s})+(1-\alpha)u^n_{t-t_0+s},\phi_s^{t_0,x}\rangle _n ds \notag \\ &\leq e^{-(1-\alpha)s_0 t_0}\langle u^n_{t-t_0},\phi_0^{t_0,x} \rangle _n +3s_0\int_0^{t_0} e^{-(1-\alpha)s_0(t_0-s)}\langle (u^n_{t-t_0+s})^2,\phi_s^{t_0,x}\rangle _n ds, \end{align} where the second line follows by~\eqref{eq:f+bound}. Since $t\le \tau$, we have \begin{align*} \langle u^n_{t-t_0},\phi_0^{t_0,x} \rangle _n &\le (1+d(\log N)^{-2}) \Esubb{x}{3 e^{-\theta_N (X^n_{mt_0}-\nu (t-t_0))}}+(1+d)R \left( \frac n N \right)^{1/2-c}\\ &\leq (1+d(\log N)^{-2}) 3 e^{-\theta_N (x-\nu (t-t_0))} e^{\frac 12 m \theta_N^2 t_0 +\mathcal O(t_0 n^{-1})}+(1+d)R \left( \frac n N \right)^{1/2-c}, \end{align*} by Lemma~\ref{lem:Xnmgf}. For the second term on the right hand side of~\eqref{eq:pexpupperA}, we have that for $s\in [0,t_0)$, \begin{align} &\langle (u^n_{t-t_0+s})^2,\phi_s^{t_0,x}\rangle _n \notag \\ &\leq 2 \left((1+d(\log N)^{-2})^2\Esubb{x}{9 e^{-2\theta_N (X^n_{m(t_0-s)}-\nu(t-t_0+s))}} + (1+d)^2 R^2 \left(\frac{n}{N}\right)^{1-2c}\right) \notag \\ &\leq 2 \left((1+d(\log N)^{-2})^2 9 e^{-2\theta_N (x-\nu(t-t_0+s))} e^{2m\theta_N^2 (t_0-s) +\mathcal O (t_0 n^{-1})} + (1+d)^2 R^2 \left(\frac{n}{N}\right)^{1-2c}\right) \notag \end{align} by Lemma~\ref{lem:Xnmgf}. Note that by~\eqref{eq:kappanu}, $(1-\alpha)s_0 +\theta_N \nu -\frac 12 m \theta_N^2 =(2-\alpha -(\log N)^{-2})s_0 (\log N)^{-2}$. Hence for $n$ sufficiently large, substituting into~\eqref{eq:pexpupperA}, \begin{align*} & u^n_t(x)\\ &\leq e^{- ((1-\alpha)s_0+\theta_N \nu -\frac 12 m \theta_N^2)t_0+\mathcal O(t_0 n^{-1})} (1+d(\log N)^{-2}) 3 e^{-\theta_N (x-\nu t)} +e^{-(1-\alpha)s_0 t_0}(1+d) R\left(\frac{n}{N}\right)^{1/2-c}\\ &\qquad +6s_0 (1+d(\log N)^{-2})^2 9 e^{-2\theta_N (x-\nu t)} e^{5s_0 t_0}t_0 +6(1+d)^2 R^2 \left(\frac{n}{N}\right)^{1-2c} t_0\\ &\leq (1+d(\log N)^{-2}) 3 e^{-\theta_N (x-\nu t)} +(1+d) R \left(\frac{n}{N}\right)^{1/2-c} \\ &\qquad +t_0 (1+d(\log N)^{-2}) 3 e^{-\theta_N (x-\nu t)}\Big( 18s_0 (1+d(\log N)^{-2}) e^{-\theta_N (x-\nu t)}e^{5s_0 t_0}\\ &\hspace{7cm}- e^{-\frac 12 (2-\alpha) s_0 (\log N)^{-2}t_0} \tfrac 12 s_0 (2-\alpha)(\log N)^{-2} \Big)\\ &\qquad +t_0 (1+d) R \left(\frac{n}{N}\right)^{1/2-c}\left(6(1+d) R \left(\frac{n}{N}\right)^{1/2-c}-e^{-(1-\alpha)s_0 t_0}(1-\alpha)s_0 \right), \end{align*} where the second inequality holds since for $y \geq 0$, $e^{-y}=1-(1-e^{-y})\leq 1-ye^{-y}$. Suppose $x$ is such that $$ 18(1+d(\log N)^{-2}) e^{-\theta_N (x-\nu t)}e^{5s_0 t_0} - \tfrac 14 e^{-\frac 12 (2-\alpha )s_0 (\log N)^{-2} t_0} (2-\alpha)(\log N)^{-2} \le 0. 
$$ Then since $t_0 \in (0,1]$, and by~\eqref{eq:A1nNsmall} and the definition of $d$ in~\eqref{eq:ddef}, if $n$ is sufficiently large we have that \begin{equation} \label{eq:unboundholds} u_t^n(x)< (1+(d-2t_0 d) (\log N)^{-2})3 e^{-\theta_N (x-\nu t)}+(1+d-2t_0 d ) R \left(\frac{n}{N}\right)^{1/2-c}. \end{equation} If instead $x\geq \nu t$ and \begin{equation} \label{eq:pntailcase2} 18(1+d(\log N)^{-2}) e^{-\theta_N (x-\nu t)}e^{5s_0 t_0} >\tfrac 14 e^{-\frac 12 (2-\alpha )s_0 (\log N)^{-2} t_0} (2-\alpha)(\log N)^{-2}, \end{equation} then since $T\leq \log N$, for $n$ sufficiently large we have $|x|\leq N$. Since $d<1/3$ and $t_0\leq 1$, we have that for $n$ sufficiently large, \begin{align*} (1+(d-2t_0 d)(\log N)^{-2} )3 e^{-\theta_N (x-\nu t)} &\geq e^{-\kappa (x-\nu t)}+ e^{-\theta_N (x-\nu t)}\\ &> g(x-\nu t)+\sup_{y\in \frac 1n \mathbb{Z} , |y|\leq N, s\in [0,T]} |u^n_s(y)-g(y-\nu s)| \end{align*} by~\eqref{eq:pntailcase2} and our assumption in~\eqref{eq:nearg}. Therefore for $n$ sufficiently large, in this case we also have that~\eqref{eq:unboundholds} holds. Finally, for $n$ sufficiently large, if $x< \nu t$ then since $d<1/3$, $t_0\leq 1$ and $u^n_t(x)\le 1$ we have that~\eqref{eq:unboundholds} holds. Suppose that $\tau<T$; then~\eqref{eq:untau*} holds, and by setting $t=\tau$ and $t_0=1\wedge \tau$, we have a contradiction by~\eqref{eq:unboundholds}. It follows that $\tau=T$, and so the first statement of the lemma holds. The second statement follows by setting $t_0=1$ in~\eqref{eq:unboundholds}. \end{proof} The next lemma will give us a corresponding lower bound on $p^n_t(x)$ for large $x$. \begin{lemma} \label{lem:untailinitminus} There exists a constant $c_8\in (0,1)$ such that the following holds for $n$ sufficiently large. Take $c\in (0,1/2)$. Suppose for some $R>0$ that \begin{equation} \label{eq:untailminusicd} p_0^n(x)\geq \tfrac 13 e^{-\kappa(1+(\log N)^{-2}) x }\mathds{1}_{x\ge 0}-R \left(\frac{n}{N}\right)^{1/2-c} \quad \forall x\in \tfrac1n \mathbb{Z}, \end{equation} and that for some $T\in (1,\log N]$, $\sup_{y\in \frac 1n \mathbb{Z}, |y|\le N, t\in [0,T]} |u^n_t(y)-g(y-\nu t)|\le c_8 (\log N)^{-2}$. Then for $t\in [0,T]$, \begin{equation*} \label{eq:untailinitminus} u^n_t(x)\geq \tfrac 14 e^{-\kappa(1+(\log N)^{-2}) (x-\nu t) }\mathds{1}_{x\ge \nu t}- R\left(\frac{n}{N}\right)^{1/2-c} \quad \forall x\in \tfrac1n \mathbb{Z}, \end{equation*} and for $t\in [1,T]$, $\forall x\in \tfrac1n \mathbb{Z}$, $$ u^n_t(x)\geq (1+ c_8 (\log N)^{-2}) \tfrac 13 e^{-\kappa(1+(\log N)^{-2}) (x-\nu t) }\mathds{1}_{x\ge \nu t -c_8}-(1-c_8)R \left(\frac{n}{N}\right)^{1/2-c} . $$ \end{lemma} \begin{proof} Note that for $u\in [0,1]$, \begin{equation} \label{eq:f+pos} f(u)+(1-\alpha)u=-2u^3+(3-\alpha)u^2 \ge 0. \end{equation} Take $d\in \big(0, \min\big(\frac 1{100} e^{-4(\kappa+2s_0)} (1-e^{-\frac 12 \kappa}) (2-\alpha)s_0 , \log (10/9) \kappa^{-1} \big)\big)$, and suppose \begin{equation} \label{eq:unguntailminusA} \sup_{y\in \frac 1n \mathbb{Z}, |y|\le N, t\in [0,T]} |u^n_t(y)-g(y-\nu t)| \le d(\log N)^{-2}. \end{equation} Let $\theta '_N=(1+(\log N)^{-2})\kappa$. For some $t_1 \in [0, T]$, suppose \begin{equation} \label{eq:unindhyp} u^n_{t_1}(x)\ge \tfrac 13 e^{-\theta '_N (x-\nu t_1)}\mathds{1}_{x\ge \nu t_1}-R \left( \frac n N \right)^{1/2-c} \quad \forall x\in \tfrac 1n \mathbb{Z}. \end{equation} Take $t\in (t_1, t_1+1]$ and let $t_0 =t-t_1$. 
Then for $x\in \frac 1n \mathbb{Z}$, by~\eqref{eq:ungreena}, \begin{align*} u^n_t(x) &= e^{-(1-\alpha)s_0 t_0} \langle u^n_{t_1}, \phi_0^{t_0,x}\rangle _n +s_0 \int_0^{t_0} e^{-(1-\alpha)s_0 (t_0-s)} \langle f(u^n_{t_1+s})+(1-\alpha) u^n_{t_1+s}, \phi^{t_0,x}_s \rangle_n ds \\ & \ge e^{-(1-\alpha)s_0 t_0} \langle u^n_{t_1}, \phi_0^{t_0,x}\rangle _n \end{align*} by~\eqref{eq:f+pos}. Hence by~\eqref{eq:unindhyp}, \begin{equation} \label{eq:untailminusge*} u^n_t(x) \ge e^{-(1-\alpha)s_0 t_0} \left( \Esubb{x}{\tfrac 13 e^{-\theta_N ' (X^n_{mt_0}-\nu t_1)} \mathds{1}_{X^n_{mt_0}\ge \nu t_1}} -R \left( \frac n N \right)^{1/2-c} \right). \end{equation} Note that \begin{align} \label{eq:untailminusdiff*} \Esubb{x}{e^{-\theta '_N (X^n_{mt_0}-\nu t_1)} \mathds{1}_{X^n_{mt_0}\ge \nu t_1}} &=\Esubb{x}{e^{-\theta '_N (X^n_{mt_0}-\nu t_1)}}-\Esubb{x}{e^{-\theta '_N (X^n_{mt_0}-\nu t_1)}\mathds{1}_{X^n_{mt_0}< \nu t_1}} \notag \\ &= e^{-\theta '_N (x-\nu t_1)} e^{\frac 12 m(\theta '_N)^2 t_0 +\mathcal O(n^{-1} t_0)} -e^{\theta '_N \nu t_1}\Esubb{x}{e^{-\theta '_N X^n_{m t_0}} \mathds{1}_{X^n_{m t_0}< \nu t_1}} \end{align} by Lemma~\ref{lem:Xnmgf}. For the second term on the right hand side, \begin{align*} \Esubb{x}{e^{-\theta '_N X^n_{mt_0}} \mathds{1}_{X^n_{mt_0}< \nu t_1}} &\le \sum_{k= \lfloor x-\nu t_1 \rfloor} ^\infty e^{-\theta '_N (x-k-1)} \psubb{x}{X^n_{mt_0}\le x-k}\\ &\le e^{-\theta '_N x} \sum_{k= \lfloor x-\nu t_1 \rfloor}^\infty e^{\theta '_N(k+1)}e^{-2\theta_N ' k} e^{2m(\theta '_N)^2 t_0+\mathcal O(t_0 n^{-1})}\\ &\le e^{-\theta '_N x} e^{\theta '_N+2m(\theta '_N)^2 t_0+\mathcal O(t_0 n^{-1})}e^{-\theta '_N \lfloor x-\nu t_1 \rfloor}(1-e^{-\theta '_N})^{-1}, \end{align*} where the second inequality follows by Markov's inequality and Lemma~\ref{lem:Xnmgf}. Suppose $x\ge \nu t_1$ with \begin{equation} \label{eq:untailminusA} e^{-\theta '_N(x-\nu t_1)} \le e^{-3(\theta '_N+m(\theta '_N)^2)} (1-e^{-\theta '_N}) \tfrac 15 (2-\alpha)s_0 (\log N)^{-2}. \end{equation} Then by~\eqref{eq:untailminusdiff*} and since $t_0\le 1$, for $n$ sufficiently large, \begin{align*} & e^{-(1-\alpha)s_0 t_0} \Esubb{x}{\tfrac 13 e^{-\theta '_N (X^n_{m t_0}-\nu t_1)} \mathds{1}_{X^n_{m t_0}\ge \nu t_1}} \\ &\ge e^{-(1-\alpha)s_0 t_0} \tfrac 13 e^{-\theta '_N (x-\nu t_1)} (e^{\frac 12 m(\theta '_N)^2 t_0 +\mathcal O(t_0 n^{-1})} -e^{3(\theta '_N+m(\theta '_N)^2)} e^{-\theta '_N(x-\nu t_1)}(1-e^{-\theta_N '})^{-1}) \\ &\ge \tfrac 13 e^{-\theta '_N (x-\nu t)} e^{((-1+\alpha)s_0 -\theta '_N \nu +\frac 12 m(\theta '_N)^2 +\mathcal O(n^{-1})) t_0 } (1 -e^{3(\theta '_N+m(\theta '_N)^2)} e^{-\theta '_N(x-\nu t_1)}(1-e^{-\theta '_N})^{-1})\\ &\ge \tfrac 13 e^{-\theta '_N (x-\nu t)} e^{\frac 12 (2-\alpha)s_0 (\log N)^{-2} t_0 } (1-\tfrac 15 (2-\alpha)s_0 (\log N)^{-2}) \end{align*} for $n$ sufficiently large, where the second inequality holds since $t_1=t-t_0$ and the last inequality follows since $(-1+\alpha)s_0 -\theta '_N \nu +\frac 12 m(\theta '_N)^2 \ge (2-\alpha) s_0 (\log N)^{-2}$ and by our assumption~\eqref{eq:untailminusA} on $x$. By~\eqref{eq:untailminusge*}, it follows that for $n$ sufficiently large, if $x\ge \nu t_1$ and~\eqref{eq:untailminusA} holds, then for $t\in (t_1,t_1+1]$, $$ u^n_t(x) \ge \tfrac 13 e^{-\theta '_N (x-\nu t)} e^{\frac 12 (2-\alpha)s_0 (\log N)^{-2}(t-t_1)} (1-\tfrac 15 (2-\alpha)s_0 (\log N)^{-2}) -e^{-(1-\alpha)s_0(t-t_1)}R \left( \frac n N \right)^{1/2-c}. 
$$ If instead $t\in (t_1,(t_1+1)\wedge T]$ and $x\ge \nu t$ with $e^{-\theta '_N(x-\nu t_1)}> e^{-3(\theta '_N+m(\theta '_N)^2)}(1-e^{-\theta '_N}) \frac 15 (2-\alpha)s_0 (\log N)^{-2}$, then if $n$ is sufficiently large, we have $|x|\le N$ and so by~\eqref{eq:unguntailminusA}, $$ u^n_t(x)\ge g(x-\nu t)-d(\log N)^{-2} \ge \tfrac 12 e^{-\kappa(x-\nu t)} - \tfrac 1{20} e^{-\theta '_N (x-\nu t_1)}\ge\tfrac 9{20} e^{-\theta '_N(x-\nu t)}, $$ where the second inequality follows since $g(y)\ge \tfrac 12 e^{-\kappa y}$ $\forall y\ge 0$ and by the definition of $d$ and our assumption on $x$. For $x\in [\nu t-d,\nu t]$, by~\eqref{eq:unguntailminusA}, $$ u^n_t(x)\ge \tfrac 12 -d(\log N)^{-2}\ge \tfrac 25 e^{\theta '_N d}\ge \tfrac 25 e^{-\theta '_N (x-\nu t)} $$ for $n$ sufficiently large, since $e^{\kappa d} \le 10/9$ by the definition of $d$. Since~\eqref{eq:unindhyp} holds for $t_1=0$ by our assumption in~\eqref{eq:untailminusicd}, for $n$ sufficiently large that $e^{\frac 9 {40} (2-\alpha)s_0 (\log N)^{-2}}(1-\frac 15 (2-\alpha)s_0(\log N)^{-2})\ge 1$,~\eqref{eq:unindhyp} holds for each $t_1 \in \frac 12 \mathbb{N}_0\cap [0,T]$ by induction. Then for $t\in [1,T]$, there exists $t_1\in [0,T]$ such that~\eqref{eq:unindhyp} holds and with $t-t_1\in [1/2,1]$, and the result follows. \end{proof} The following result will allow us to show that $|u^n_{t,t+s}(x)-g(x-\mu^n_t -\nu s)|$ is small in the proof of Proposition~\ref{prop:eventE1}. \begin{lemma} \label{lem:gronwallun} Suppose $(u^{n,1}_t)_{t\ge 0}$ and $(u^{n,2}_t)_{t\ge 0}$ solve~\eqref{eq:undef} with initial conditions $p_0^{n,1}$ and $p_0^{n,2}$ respectively. Then for $t\ge 0$, $$ \sup_{x\in \frac 1n \mathbb{Z}}| u^{n,1}_t(x) -u^{n,2}_t(x)|\le e^{(1+\alpha)s_0 t} \sup_{y\in \frac 1n \mathbb{Z}}| p^{n,1}_0(y) -p^{n,2}_0(y) |. $$ \end{lemma} \begin{proof} By~\eqref{eq:ungreena}, for $x\in \frac 1n \mathbb{Z}$ and $t\ge 0$, \begin{align*} | u^{n,1}_t(x) -u^{n,2}_t(x)| &\le \langle |p^{n,1}_0 -p^{n,2}_0|, \phi^{t,x}_0\rangle_n +s_0 \int_0^t \langle |f(u^{n,1}_s)-f(u^{n,2}_s)|, \phi^{t,x}_s \rangle_n ds\\ &\le \sup_{y\in \frac 1n \mathbb{Z}}| p^{n,1}_0(y) -p^{n,2}_0(y) |+(1+\alpha) s_0 \int_0 ^t \sup_{y\in \frac 1n \mathbb{Z}}| u^{n,1}_s(y) -u^{n,2}_s(y)|ds \end{align*} since $\sup_{u\in [0,1]}|f'(u)|=1+\alpha$. The result follows by Gronwall's inequality. \end{proof} We are now ready to prove Proposition~\ref{prop:eventE1}. \begin{proof}[Proof of Proposition~\ref{prop:eventE1}] Without loss of generality, assume $b_2\in (0,1/3)$ is sufficiently small that $\left( \frac n N \right)^{1/3} \le n^{-b_2}$ for $n$ sufficiently large. Take $c_5, c_6>0$ as defined in Lemma~\ref{lem:x0approx} and Proposition~\ref{prop:expconvtog}. Let $b_1 =\frac 12 (c_5\wedge c_6)$, and suppose condition~\eqref{eq:conditionA} holds. Define the event $$ A=\left\{ p^n_t(x)=0 \;\forall t \in [0,2N^2], x\ge N^5 \right\}\cap \left\{ p^n_t(x)=1\; \forall t \in [0,2N^2], x\le -N^5 \right\}. $$ Recall from~\eqref{eq:Dn+-defn} that $D_n^+= (1/2-c_0)\kappa^{-1} \log (N/n)$. Take $c_3\in (0,c_0\wedge 1/6)$, and take $\ell ' \in \mathbb{N}$ sufficiently large that $N^2 \left( \frac n N \right)^{\ell '}\le \left( \frac n N \right)^{\ell+1}$ for $n$ sufficiently large. Take $c_4=c_4(c_3,\ell ')\in (0,1/2)$ as defined in Proposition~\ref{prop:pnun}, and let $T_0=(\log N)^{c_4}$. By making $c_4$ smaller if necessary, we can assume $c_4<a_0$ (recall that $(\log N)^{a_0}\le \log n$ for $n$ sufficiently large). 
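Before starting the induction, we record a purely numerical illustration of the behaviour that the block argument below propagates from one time window to the next (it plays no role in the proof). The sketch integrates the lattice equation $\partial_t u=\tfrac 12 m\Delta_n u+s_0 f(u)$ by forward Euler, with the cubic nonlinearity $f(u)=u(1-u)(2u-1+\alpha)$ determined by~\eqref{eq:f+bound}, and tracks the front position $\sup\{x:u_t(x)\ge 1/2\}$. The closed forms $\kappa=\sqrt{2s_0/m}$ and $\nu=\alpha s_0\kappa^{-1}$ used in the sketch are assumptions consistent with~\eqref{eq:kappanu}, and all parameter values are illustrative.
\begin{verbatim}
import numpy as np

# Illustrative sketch only: forward-Euler integration of the lattice PDE
# du/dt = (m/2) Delta_n u + s0 f(u),  f(u) = u(1-u)(2u-1+alpha),
# tracking the front mu_t = sup{x : u_t(x) >= 1/2}.
m, s0, alpha = 1.0, 1.0, 0.3         # illustrative parameter values
kappa = np.sqrt(2.0 * s0 / m)        # assumed: (1/2) m kappa^2 = s0
nu = alpha * s0 / kappa              # assumed: kappa nu = alpha s0

n = 20                               # lattice spacing 1/n
x = np.arange(-30.0, 30.0, 1.0 / n)
u = 1.0 / (1.0 + np.exp(kappa * x))  # logistic initial profile

dt = 0.2 / (m * n ** 2)              # stable explicit step for Delta_n
T = 5.0

def f(v):
    return v * (1.0 - v) * (2.0 * v - 1.0 + alpha)

for _ in range(int(T / dt)):
    lap = (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) * n ** 2
    u += dt * (0.5 * m * lap + s0 * f(u))
    u[0], u[-1] = 1.0, 0.0           # pin the far-field values

front = x[np.where(u >= 0.5)[0].max()]
print(f"empirical front speed {front / T:.3f}, predicted nu = {nu:.3f}")
\end{verbatim}
The empirical speed printed above should be close to $\nu$; this deterministic front propagation is exactly the behaviour that the events $A_k$ allow us to transfer to $p^n$, one window at a time.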
For $k\in \mathbb{Z}$, let $t_k=(k+1)T_0$, and for $k\in \mathbb{N}_0$, let $(u^{n,k}_t)_{t\geq 0}$ denote the solution of \begin{equation*} \begin{cases} \partial_t u^{n,k}_t &= \tfrac 12 m \Delta_n u^{n,k}_t +s_0 f(u^{n,k}_t) \quad \text{for } t>0,\\ u^{n,k}_0 &= p^n_{t_{k-1}}. \end{cases} \end{equation*} For $k\in \mathbb{N}_0$, define the event $$ A_{k}= \left\{ \sup_{x\in \frac 1n \mathbb{Z}, |x|\leq N^5}\sup_{t\in [0,2T_0]}|p^n_{t+ t_{k-1}}(x)-u^{n,k}_t(x)|\leq \left(\frac{n}{N} \right)^{1/2-c_3} \right\}. $$ Let $j_0=\lfloor N^2 T_0^{-1}\rfloor$. Note that by a union bound, and then by Proposition~\ref{prop:pnun} and Lemma~\ref{lem:p01}, for $n$ sufficiently large, \begin{equation} \label{eq:eventE1prob} \p{A^c \cup \bigcup_{j=0}^{j_0+1} A^c_j } \le 2e^{-N^5}+(j_0+2) \left( \frac n N \right)^{\ell '} \le \left( \frac n N \right)^\ell \end{equation} by our choice of $\ell '$. From now on, suppose that the event $A \cap \bigcap_{j=0}^{j_0 +1}A_{j} $ occurs. For $k\in \mathbb{N}_0$, let $(u^k_t)_{t\geq 0}$ denote the solution of \begin{equation*} \begin{cases} \partial_t u^k_t &= \tfrac 12 m \Delta u^k_t +s_0 f(u^k_t) \quad \text{for }t>0,\\ u_0^k&=\bar{p}^n_{t_{k-1}}, \end{cases} \end{equation*} where $\bar{p}^n_{t_{k-1}}:\R\rightarrow [0,1]$ is the linear interpolation of $p_{t_{k-1}}^n:\frac1n \mathbb{Z} \rightarrow [0,1]$. Now for an induction argument, for $k\in \mathbb{N}_0$ with $k\leq j_0+1$, suppose there exists $z_{k-1}\in \R$ with $|z_{k-1}|\leq k$ such that \begin{align} D_k := \sup_{x\in \frac 1n \mathbb{Z}}|p^n_{ t_{k-1}}(x)-g(x-\nu t_{k-1}-z_{k-1})| &\le \tfrac 12 (c_5 \wedge c_6)=b_1 \label{eq:pgclose} \\ \text{and} \quad \sup_{x_1,x_2\in \frac 1n \mathbb{Z}, |x_1-x_2|\le n^{-1/3}} |p^n_{ t_{k-1}}(x_1)-p^n_{t_{k-1}}(x_2)| &\le n^{-b_2}. \label{eq:prough} \end{align} (Note that~\eqref{eq:pgclose} and~\eqref{eq:prough} hold for $k=0$, by condition~\eqref{eq:conditionA}.) Then by the triangle inequality, \begin{align} \label{eq:(J)thmpn} \|\bar{p}^n_{t_{k-1}}-g(\cdot-\nu t_{k-1}-z_{k-1})\|_\infty &\leq D_k +n^{-1} \|\nabla g\|_\infty +n^{-b_2} \notag \\ &\leq c_5 \wedge c_6 \end{align} for $n$ sufficiently large. Hence by Proposition~\ref{prop:expconvtog}, there exists $z_{k}\in \R$ with $|z_k|\le k+1$ such that \begin{equation} \label{eq:(*)pn} |u^k_t(x)-g(x-\nu ( t_{k-1}+t)-z_{k})| \leq C_3 e^{- c_6 t} \quad \forall x\in \R, \; t>0. \end{equation} Therefore by Lemma~\ref{lem:unu}, for $t\in [0,2T_0]$, \begin{align} \label{eq:(**)pn} \sup_{x\in \frac 1n \mathbb{Z}}|u^{n,k}_t(x)-g(x-\nu ( t_{k-1}+t)-z_k)|\leq (C_4 n^{-1/3}+2n^{-b_2} )4T_0^2 e^{2(1+\alpha)s_0 T_0}+C_3 e^{-c_6 t}. \end{align} Then by the definition of the event $A_{k}$, for $t\in [ T_0,2T_0]$, \begin{align*} &\sup_{x\in \frac 1n \mathbb{Z}, |x|\leq N^5}|p^n_{ t_{k-1}+t}(x)-g(x-\nu ( t_{k-1}+t)-z_k)|\\ &\leq \left( \frac{n}{N}\right)^{1/2-c_3}+(C_4 n^{-1/3}+2n^{-b_2}) 4T_0^2 e^{2(1+\alpha)s_0 T_0}+C_3 e^{-c_6 T_0}\\ &\leq e^{-\frac 12 c_6 T_0} \end{align*} for $n$ sufficiently large. Therefore, for $n$ sufficiently large, since $k\leq j_0+1$ and $|z_k|\leq k+1$, and by the definition of the event $A$, we have that for $t\in [ T_0,2T_0]$, \begin{align} \label{eq:pgclose**} &\sup_{x\in \frac 1n \mathbb{Z}}|p^n_{ t_{k-1}+t}(x)-g(x-\nu (t_{k-1}+t)-z_k)| \notag \\ &\qquad \leq \max\left(e^{-\frac 12 c_6 T_0},\sup_{y\geq N^5-N^3}g(y),\sup_{y\leq -N^5 +N^2}(1-g(y)) \right) = e^{-\frac 12 c_6 T_0}. 
\end{align} By the definitions of the events $A_{k}$ and $A$, and then by Lemma~\ref{lem:nablaunbound} and our choice of $b_2$ and $c_3$, we have that \begin{align*} \sup_{x_1,x_2\in \frac 1n \mathbb{Z}, |x_1-x_2|\le n^{-1/3}} |p^n_{ t_k}(x_1)-p^n_{t_k}(x_2)| &\le n^{-1} \lfloor n^{2/3} \rfloor \sup_{x\in \frac 1n \mathbb{Z}} |\nabla_n u^{n,k}_{T_0}(x)|+2\left( \frac n N \right)^{1/2-c_3}\\ &\le n^{-b_2} \end{align*} for $n$ sufficiently large. By induction, we now have that for $n$ sufficiently large, for $k\in \mathbb{N}$ with $k\leq j_0+1$, there exists $z_{k-1}\in \R$ with $|z_{k-1}|\le k$ such that~\eqref{eq:pgclose} and~\eqref{eq:prough} hold with $D_k \le e^{-\frac 12 c_6 T_0}$. By Lemma~\ref{lem:x0approx} and~\eqref{eq:(J)thmpn}, if $n$ is sufficiently large then for $t\geq 0$ and $x\in \R$, $$ |u^k_t(x)-g(x-\nu ( t_{k-1}+t)-z_{k-1})|\leq C_2(D_k +2n^{-b_2}) $$ and so by~\eqref{eq:(*)pn}, $\|g(\cdot-z_k)-g(\cdot-z_{k-1})\|_\infty \leq C_2(D_k +2n^{-b_2})$. For $n$ sufficiently large, since $\nabla g(0)=-\kappa /4$, it follows that \begin{equation*} |z_{k-1}-z_k|\leq 5\kappa^{-1} C_2(D_k +2n^{-b_2}) \leq e^{-\frac 13 c_6 T_0}. \end{equation*} Therefore, by~\eqref{eq:pgclose**}, for $n$ sufficiently large, for $k\in \mathbb{N}_0$ with $k\le j_0$, \begin{align} \label{eq:(A)pn} |z_{k+1}-z_{k}| \leq e^{-\frac 13 c_6 T_0}\quad \text{and} \quad \sup_{t\in [ t_k, t_{k+1}],\, x\in \frac 1n \mathbb{Z}} |p^n_t(x)-g(x-\nu t-z_k)| &\leq e^{-\frac 12 c_6 T_0} . \end{align} Note that for $k\in \mathbb{N}_0$ with $k \le j_0$, by~\eqref{eq:(A)pn}, \begin{align} \label{eq:ungbound} &\sup_{x\in \frac 1n \mathbb{Z}, |x-(z_k+\nu t_k)|\le N,\, t\in [0, T_0]} |u^{n,k+1}_t (x)-g(x-\nu (t+ t_k)-z_k)| \notag \\ &\le e^{-\frac 12 c_6 T_0} +\sup_{|x|\le N^5,\, t\in [0, T_0]}|u^{n,k+1}_t(x)-p^n_{t+ t_k}(x)| \notag \\ &\le e^{-\frac 12 c_6 T_0} +\left( \frac n N \right)^{1/2-c_3} \end{align} by the definition of the event $A_{k+1}$. We now use Lemma~\ref{lem:untailinit} to prove an upper bound on $p^n_t(x)$ for large $x$. Let $c_9=c_7\wedge c_8 \in (0,1)$ and $R_{0}= e^{-\frac 12 c_6 T_0}\left( \frac n N \right)^{-(1/2-c_3)}$. Define $(R_k)_{k=1}^\infty$ inductively by letting $R_{k}=(1-c_9)R_{k-1}+1$ for $k\ge 1$. Let $$ k^* =\frac{\log (2c_9^{-1})-\log R_0}{\log (1-c_9/2)}. $$ Then since $R_k \le (1-c_9/2)R_{k-1}$ if $R_{k-1}\ge 2c_9^{-1}$ and $R_k \le 2c_9^{-1}-1$ if $R_{k-1}\le 2c_9^{-1}$, we have $R_k \le 2c_9^{-1}$ for $k\ge k^*$. Suppose $n$ is sufficiently large that $e^{-\frac 12 c_6 T_0}\le c_9$ and $e^{-\frac 12 c_6 T_0}+\left( \frac n N \right)^{1/2-c_3}\le c_9 (\log N)^{-2}$. Then by Lemma~\ref{lem:untailinit},~\eqref{eq:ungbound} and the definition of the event $A$, for $k\in \mathbb{N}_0$ with $k\le j_0$, if \begin{equation} \label{eq:pnsk} p^n_{t_k}(x)\le 3 e^{-\kappa(1-(\log N)^{-2})(x-\nu t_k -z_k)} +R_{k}\left( \frac n N \right)^{1/2-c_3} \quad \forall x\in \tfrac 1n \mathbb{Z}, \end{equation} then for $t\in [0,T_0],$ $$ u^{n,k+1}_{t}(x)\le \tfrac 43 \left( 3 e^{-\kappa(1-(\log N)^{-2})(x-\nu (t+ t_k) -z_k)} +R_{k}\left( \frac n N \right)^{1/2-c_3}\right) \quad \forall x\in \tfrac 1n \mathbb{Z} . $$ Therefore, by the definition of the events $A_{k+1}$ and $A$, for $t\in [t_k,t_{k+1}]$ and $x\in \frac 1n \mathbb{Z}$, \begin{equation} \label{eq:pntailE1*} p^n_t(x) \le 4 e^{-\kappa(1-(\log N)^{-2})(x-\nu t-z_k)} +(1+\tfrac 43 R_k) \left( \frac n N \right)^{1/2-c_3}. 
\end{equation} Moreover, by Lemma~\ref{lem:untailinit} and~\eqref{eq:ungbound}, for $t\in [1,T_0]$ and $x\in \frac 1n \mathbb{Z}$, $$ u_t^{n,k+1}(x) \le (1-c_7 (\log N)^{-2}) 3e^{-\kappa(1-(\log N)^{-2})(x-\nu (t+t_k)-z_k)}+(1-c_7)R_k \left( \frac n N \right)^{1/2-c_3}, $$ and so by the definition of the events $A_{k+1}$ and $A$, for $x\in \frac 1n \mathbb{Z}$, \begin{align*} p^n_{t_{k+1}}(x) &\le(1-c_7 (\log N)^{-2}) 3 e^{-\kappa(1-(\log N)^{-2})(x-\nu t_{k+1}-z_k)} +(1+(1-c_7)R_{k})\left( \frac n N \right)^{1/2-c_3}\\ &\le 3 e^{-\kappa(1-(\log N)^{-2})(x-\nu t_{k+1}-z_{k+1})} +R_{k+1}\left( \frac n N \right)^{1/2-c_3} \end{align*} for $n$ sufficiently large, by the definition of $R_{k+1}$ and since $|z_k-z_{k+1}|\le e^{-\frac 13 c_6 T_0}$ by~\eqref{eq:(A)pn}. Note that~\eqref{eq:pnsk} holds for $k=0$ by~\eqref{eq:(A)pn} and the definition of $R_0$, and since $g(y)\le e^{-\kappa y}\wedge 1$ $\forall y\in \R$. Hence by induction,~\eqref{eq:pnsk} holds for each $0\le k \le j_0$. Therefore, by~\eqref{eq:pntailE1*}, for $k\ge k^*$, for $t\in [t_k,t_{k+1}]$ and $x\in \frac 1n \mathbb{Z}$, \begin{equation} \label{eq:pntailE1A} p^n_t(x)\le 4e^{-\kappa(1-(\log N)^{-2})(x-\nu t-z_k)}+(1+\tfrac 83 c_9^{-1}) \left( \frac n N \right)^{1/2-c_3}. \end{equation} We now use Lemma~\ref{lem:untailinitminus} to establish a corresponding lower bound. By Lemma~\ref{lem:untailinitminus} and~\eqref{eq:ungbound}, if for some $k\in \mathbb{N}_0$ with $k \le j_0$ \begin{equation} \label{eq:plowerboundhyp} p^n_{t_k}(x)\ge \tfrac 13 e^{-\kappa(1+(\log N)^{-2})(x-\nu t_k -z_k)}\mathds{1}_{x\ge \nu t_k +z_k} -R_{k}\left( \frac n N \right)^{1/2-c_3} \quad \forall x\in \tfrac 1n \mathbb{Z}, \end{equation} then for $t\in [0,T_0]$, $$ u^{n,k+1}_{t}(x)\ge \tfrac 14 e^{-\kappa(1+(\log N)^{-2})(x-\nu (t+t_k) -z_k)}\mathds{1}_{x\ge \nu (t_k +t)+z_k} -R_{k}\left( \frac n N \right)^{1/2-c_3} \quad \forall x\in \tfrac 1n \mathbb{Z} . $$ Hence by the definition of the event $A_{k+1}$ and since $p^n_t\ge 0$, for $t\in [t_k,t_{k+1}]$ and $x\in \frac 1n \mathbb{Z}$, \begin{equation} \label{eq:pntaileventE1dagger} p^n_t(x) \ge \tfrac 14 e^{-\kappa(1+(\log N)^{-2})(x-\nu t-z_k)}\mathds{1}_{x\ge \nu t+z_k} -(1+R_k)\left( \frac n N \right)^{1/2-c_3}. \end{equation} Moreover, by Lemma~\ref{lem:untailinitminus} and~\eqref{eq:ungbound}, for $t\in [1,T_0]$ and $x\in \frac 1n \mathbb{Z}$, \begin{align*} u^{n,k+1}_t (x) \ge (1+c_8 (\log N)^{-2}) \tfrac 13 e^{-\kappa(1+(\log N)^{-2})(x-\nu (t+t_k)-z_k)}&\mathds{1}_{x\ge \nu (t_k+t)+z_k-c_8}\\ &- (1-c_8) R_k\left( \frac n N \right)^{1/2-c_3}, \end{align*} and so by the definition of the event $A_{k+1}$ and since $p^n_t\ge 0$, for $x\in \frac 1n \mathbb{Z}$, \begin{align*} p^n_{t_{k+1}}(x) &\ge (1+c_8 (\log N)^{-2}) \tfrac 13 e^{-\kappa(1+(\log N)^{-2})(x-\nu t_{k+1}-z_k)}\mathds{1}_{x\ge \nu t_{k+1} +z_k-c_8} \\ &\hspace{5cm} -((1-c_8)R_{k}+1)\left( \frac n N \right)^{1/2-c_3}\\ &\ge \tfrac 13 e^{-\kappa(1+(\log N)^{-2})(x-\nu t_{k+1}-z_{k+1})} \mathds{1}_{x\ge \nu t_{k+1} +z_{k+1}} -R_{k+1}\left( \frac n N \right)^{1/2-c_3} \end{align*} for $n$ sufficiently large, by the definition of $R_{k+1}$ and since $|z_k-z_{k+1}|\le e^{-\frac 13 c_6 T_0}$. By~\eqref{eq:(A)pn} and the definition of $R_0$, and since $g(z)\ge \frac 12 e^{-\kappa z}$ for $z\ge 0$,~\eqref{eq:plowerboundhyp} holds for $k=0$. Hence by induction,~\eqref{eq:plowerboundhyp} holds for each $0\le k\le j_0$. 
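(As a quick numerical sanity check, not needed for the argument, one can confirm the behaviour of the constants $R_k$ used above: the recursion $R_k=(1-c_9)R_{k-1}+1$ contracts geometrically towards its fixed point $c_9^{-1}$, and the first index at which $R_k\le 2c_9^{-1}$ is at most $k^*$. The values of $c_9$ and $R_0$ in the sketch below are illustrative.)
\begin{verbatim}
import math

# Illustrative check of R_k = (1 - c9) R_{k-1} + 1: the sequence
# contracts towards the fixed point 1/c9, and R_k <= 2/c9 once
# k >= k* = (log(2/c9) - log(R_0)) / log(1 - c9/2).
c9, R0 = 0.1, 1.0e6                  # illustrative values only
k_star = (math.log(2.0 / c9) - math.log(R0)) / math.log(1.0 - c9 / 2.0)

R, k = R0, 0
while R > 2.0 / c9:
    R = (1.0 - c9) * R + 1.0
    k += 1

print(k, math.ceil(k_star))          # k should be at most ceil(k*)
\end{verbatim}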
Then by~\eqref{eq:pntaileventE1dagger}, for $k\ge k^*$, for $t\in [t_k,t_{k+1}]$ and $x\in \frac 1n \mathbb{Z}$, \begin{equation} \label{eq:pntailE1B} p^n_t(x)\ge \tfrac 14 e^{-\kappa(1+(\log N)^{-2})(x-\nu t-z_k)}\mathds{1}_{x\ge \nu t +z_k}- (1+2c_9^{-1}) \left( \frac n N \right)^{1/2-c_3}. \end{equation} We are now ready to complete the proof. Take $c_2\in (0,c_4)$. Recall that for $t\ge 0$, $\mu^n_t =\sup\{x\in \frac 1n \mathbb{Z}:p^n_t(x)\ge 1/2\}$. By~\eqref{eq:(A)pn} and since $\nabla g(0)=-\kappa /4$, for $n$ sufficiently large, for $k\in \mathbb{N}_0$ with $k\le j_0$, for $t\in [t_k,t_{k+1}]$, \begin{equation} \label{eq:pnunend*} |(\nu t+z_k)-\mu^n_t| \le 5 \kappa^{-1} e^{-\frac 12 c_6 T_0}. \end{equation} Therefore, for $n$ sufficiently large, by~\eqref{eq:(A)pn}, \begin{equation} \label{eq:pntailE1C} \sup_{x\in \frac 1n \mathbb{Z}, t\in [T_0,N^2]} |p^n_t(x)-g(x-\mu^n_t)| \le e^{-\frac 12 c_6 T_0} +5 \kappa^{-1} e^{-\frac 12 c_6 T_0} \|\nabla g\|_\infty \le e^{-2(\log N)^{c_2}} \end{equation} since $c_2<c_4$. By~\eqref{eq:pnunend*} and since $|z_0|\le 1$ and $|z_k-z_{k-1}|\le e^{-\frac 13 c_6 T_0}$ $\forall k\in \mathbb{N}$ with $k\le j_0$, if $n$ is sufficiently large we have $|\mu^n_{\log N}|\le 2\nu \log N$ and for $t\in [\log N,N^2]$ and $s\in [0,1]$ with $t+s\le N^2$, $$|\mu^n_{t+s}-\mu^n_t -\nu s |\le 10 \kappa^{-1} e^{-\frac 12 c_6 T_0}+e^{-\frac 13 c_6 T_0} \le e^{-(\log N)^{c_2}}. $$ Now for $t\in [\frac 12 (\log N)^2,N^2]$, take $x\in \frac 1n \mathbb{Z}$ such that $g(x-\mu^n_t)\le 2e^{-(\log N)^{c_2}}$. Then for $n$ sufficiently large that $k^*\le \frac 12 (\log N)^{3/2}$, by~\eqref{eq:pntailE1A} and~\eqref{eq:pnunend*}, $$ p^n_t(x)\le 4e^{-\kappa(1-(\log N)^{-2})(x-\mu^n_t -5 \kappa^{-1} e^{-\frac 12 c_6 T_0})}+(1+\tfrac 83 c_9^{-1}) \left( \frac n N \right)^{1/2-c_3} \le 5 g((x-\mu^n_t)\wedge D_n^+) $$ for $n$ sufficiently large, since $\kappa D^+_n (\log N)^{-1}\le 1/2$, $c_3<c_0$ and $g(y) \sim e^{-\kappa y}$ as $y\rightarrow \infty$. Similarly, for $n$ sufficiently large, by~\eqref{eq:pntailE1B} and~\eqref{eq:pnunend*}, if $x-\mu^n_t\le D^+_n+2$ then \begin{align*} p^n_t(x) &\ge \tfrac 14 e^{-\kappa(1+(\log N)^{-2})(x-\mu^n_t+5\kappa^{-1} e^{-\frac 12 c_6 T_0})}-(1+2c_9^{-1})\left( \frac n N \right)^{1/2-c_3} \ge \tfrac 15 g(x-\mu^n_t). \end{align*} If instead $g(x-\mu^n_t)\ge 2e^{-(\log N)^{c_2}}$, then $p^n_t(x) \in [\frac 12 g(x-\mu^n_t),\frac 32 g(x-\mu^n_t)]$ by~\eqref{eq:pntailE1C}. Finally, for $t\in [\log N,N^2]$, let $(\tilde u^n_{t,t+s})_{s\ge 0}$ solve~\eqref{eq:unttsdef} with $\tilde u^n_{t,t}(x)=g(x-\mu^n_t)$ for $x\in \frac 1n \mathbb{Z}$. Then for $s\in [0,\gamma_n]$, by Lemma~\ref{lem:gronwallun} and~\eqref{eq:pntailE1C}, \begin{align*} &\sup_{x\in \frac 1n \mathbb{Z}}|u^n_{t,t+s}(x)-g(x-\mu^n_t-\nu s)|\\ &\quad \le e^{(1+\alpha)s_0 \gamma_n} e^{-2(\log N)^{c_2}}+\sup_{x\in \frac 1n \mathbb{Z}}|\tilde u^n_{t,t+s}(x)-g(x-\mu^n_t-\nu s)|\\ &\quad \le e^{(1+\alpha)s_0\gamma_n} e^{-2(\log N)^{c_2}}+(C_4 +\|\nabla g\|_\infty) n^{-1/3}\gamma_n^2 e^{(1+\alpha)s_0 \gamma_n}\\ &\quad \le e^{-(\log N)^{c_2}} \end{align*} for $n$ sufficiently large, where the second inequality follows by Lemma~\ref{lem:unu} and since $(g(\cdot -\mu^n_t -\nu s))_{s\ge 0}$ solves~\eqref{eq:ueq}. The result follows by~\eqref{eq:eventE1prob}. \end{proof} \subsection{Proof of Proposition~\ref{prop:pnun}} \label{subsec:pnunproof} The proof of Proposition~\ref{prop:pnun} uses similar arguments to those in \cite{durrett/fan:2016}. The following lemma is the main step in the proof.
\begin{lemma} \label{lem:qnphi} Suppose $\phi:[0,\infty) \times \frac 1n \mathbb{Z} \rightarrow \R$ is continuously differentiable in $t$, and write $\phi_t(x):= \phi(t,x)$. Suppose that for any $t>0$, $\sup_{s\in [0,t]}\langle |\phi_s |,1\rangle_n<\infty$ and $\sup_{s\in [0,t]}\langle |\partial_s \phi_s |,1\rangle_n<\infty$. Then for $t\ge 0$, \begin{align} \label{eq:lemqnphi} &\langle q^n_t, \phi_t \rangle_n -\langle q^n_0 , \phi_0 \rangle_n -\int_0^t \langle q^n_s, \partial_s \phi_s \rangle_n ds \notag \\ &\quad = s_0 \int_0^t \langle q^n_s (1-p^n_s)(2p^n_s-1+\alpha ), \phi_s \rangle_n ds +\tfrac 12 m \int_0^t \langle q^n_s, \Delta_n \phi_s \rangle_n ds +M^n_t(\phi), \end{align} where $(M^n_t(\phi))_{t\ge 0}$ is a martingale with $M^n_0(\phi)=0$ and $$ \langle M^n(\phi)\rangle_t \le \frac n N \int_0^t \langle (1+m) q^n_s(\cdot) +\tfrac 12 m (q^n_s(\cdot -n^{-1}) + q^n_s(\cdot +n^{-1})), \phi_s^2 \rangle_n ds. $$ \end{lemma} Before proving Lemma~\ref{lem:qnphi}, we prove the following useful consequence. \begin{cor} \label{cor:qnMa} For $a\in \R$, $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, \begin{align} \label{eq:qnC} q^n_t(z) &= e^{-at}\langle q^n_0, \phi^{t,z}_0 \rangle _n + \int_0^t e^{-a(t-s)}\langle q^n_{s}(s_0(1-p^n_{s})(2p^n_{s}-1+\alpha )+a),\phi^{t,z}_s \rangle _n ds +M^n_t(\phi^{t,z,a}). \end{align} \end{cor} \begin{proof} Recall the definitions of $\phi^{t,z}$ and $\phi^{t,z,a}$ in \eqref{eq:phitzdefq} and~\eqref{eq:phiadef}. Note that $\partial_s \phi^{t,z}_s +\frac12 m \Delta _n \phi^{t,z}_s=0$ for $s\in (0,t)$. Hence $$ \partial_s \phi_s^{t,z,a}+\tfrac 12 m \Delta_n \phi^{t,z,a}_s=a\phi_s^{t,z,a}. $$ Therefore, by substituting $\phi_s(x):=\phi_s^{t,z,a}(x)$ into~\eqref{eq:lemqnphi} in Lemma~\ref{lem:qnphi} we have \begin{align*} \langle q^n_t, \phi^{t,z,a}_t \rangle _n &= \langle q^n_0, \phi^{t,z,a}_0 \rangle _n + \int_0^t \langle q^n_{s}(s_0(1-p^n_{s})(2p^n_{s}-1+\alpha )+a),\phi^{t,z,a}_s \rangle _n ds +M^n_t(\phi^{t,z,a}). \end{align*} Since $\phi_t^{t,z,a}(w)=n\mathds{1}_{w=z}$, the result follows. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:qnphi}] For $t\ge 0$, $x\in \frac 1n \mathbb{Z}$ and $i\in [N]$, by the definition of $\eta^n$ in~\eqref{eq:etandefn} we have that \begin{align*} \eta^n_t(x,i) &= \eta_0^n(x,i) +\sum_{j\in [N] \setminus \{i\}} \int_0^t (\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))d \mathcal P_s^{x,i,j}\\ &\hspace{1.8cm}+\sum_{j\in [N]\setminus \{i\}} \int_0^t \xi_{s-}^n(x,j)(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))d \mathcal S_s^{x,i,j}\\ &\hspace{1.8cm}+\sum_{j\neq k\in [N]\setminus \{i\}} \int_0^t \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)}(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))d \mathcal Q_s^{x,i,j,k}\\ &\hspace{1.8cm}+\sum_{j\in [N],y\in \{x-n^{-1},\, x+n^{-1}\}} \int_0^t (\eta_{s-}^n(y,j)-\eta_{s-}^n(x,i))d \mathcal R_s^{x,i,y,j}. \end{align*} Recall from~\eqref{eq:qndef} that $q^n_s(y)=N^{-1} \sum_{j\in [N]}\eta^n_s(y,j)$ for $y\in \frac 1n \mathbb{Z}$ and $s\ge 0$. 
By integration by parts applied to $\eta^n_t(x,i) \phi_t(x)$, and then summing over $i$ and $x$, using our assumptions on $\phi$, \begin{align} \label{eq:qnA} &\langle q^n_t, \phi_t \rangle _n - \langle q^n_0, \phi_0 \rangle _n -\int_0^t \langle q^n_s, \partial_s\phi_s \rangle _n ds \notag \\ &\qquad = \frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N \sum_{j\in [N] \setminus \{i\}} \int_0^t (\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x)d \mathcal P_s^{x,i,j} \notag\\ &\hspace{1.3cm}+\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in[N]\setminus \{i\}} \int_0^t \xi_{s-}^n(x,j)(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) d \mathcal S_s^{x,i,j} \notag\\ &\hspace{1.3cm}+\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\neq k\in [N]\setminus \{i\}} \int_0^t \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)}(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) d \mathcal Q_s^{x,i,j,k}\notag \\ &\hspace{1.3cm}+\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N] ,y\in \{x-n^{-1}, \, x+n^{-1}\}} \int_0^t (\eta_{s-}^n(y,j)-\eta_{s-}^n(x,i))\phi_s(x) d \mathcal R_s^{x,i,y,j}. \end{align} We shall consider each line on the right hand side of~\eqref{eq:qnA} separately. For the first line, \begin{align*} A^1_t &:= \frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N \sum_{j\in[N]\setminus \{i\}} \int_0^t (\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x)d \mathcal P_s^{x,i,j}\\ &=\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N \sum_{j\in[N]\setminus \{i\}} \int_0^t (\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x)(d \mathcal P_s^{x,i,j}-r_n (1-(\alpha+1)s_n)ds)\\ &\quad + \frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N \sum_{j\in [N] \setminus \{i\}} \int_0^t (\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x)r_n (1-(\alpha+1)s_n)ds. \end{align*} Now for $x\in \frac1n \mathbb{Z}$ and $s\in [0,t]$, $$ \sum_{i=1}^N \sum_{j\in [N]\setminus \{i\}} (\eta^n_{s-}(x,j)-\eta^n_{s-}(x,i))=0. $$ Hence \begin{align} \label{eq:A1q} A^1_t &=M^{n,1}_t(\phi) \notag\\ &:=\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N \sum_{j\in [N]\setminus \{i\}} \int_0^t (\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x)(d \mathcal P_s^{x,i,j}-r_n (1-(\alpha+1)s_n)ds), \end{align} which is a martingale (since we assumed $\sup_{s\in [0,t']}\langle |\phi_s | ,1\rangle_n <\infty$ for any $t'>0$). For the second line on the right hand side of~\eqref{eq:qnA}, \begin{align*} A^2_t &:= \frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N]\setminus \{i\}} \int_0^t \xi_{s-}^n(x,j)(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) d \mathcal S_s^{x,i,j}\\ &=\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N]\setminus \{i\}} \int_0^t \xi_{s-}^n(x,j)(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) (d \mathcal S_s^{x,i,j}-r_n\alpha s_n ds)\\ &\quad + \frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N]\setminus \{i\}} \int_0^t \xi_{s-}^n(x,j)(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) r_n\alpha s_n ds . \end{align*} For the expression on the last line, for $x\in \frac 1n \mathbb{Z}$ and $s\in [0,t]$, since $\xi_{s-}^n(x,j)=1$ if $\eta_{s-}^n(x,j)=1$, \begin{align*} &\sum_{i=1}^N\sum_{j\in [N]\setminus \{i\}} \xi_{s-}^n(x,j)(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\\ &\quad =\sum_{i=1}^N\sum_{j\in [N] \setminus \{i\}} \eta_{s-}^n(x,j)-\sum_{i=1}^N \eta^n_{s-}(x,i)\left(\sum_{j=1}^N \xi^n_{s-}(x,j)-1\right)\\ &\quad = (N-1)N q^n_{s-}(x)-N q^n_{s-}(x)(Np^n_{s-}(x)-1)\\ &\quad = N^2 q^n_{s-}(x)(1-p^n_{s-}(x)). 
\end{align*} Therefore we can write \begin{align*} &\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N] \setminus \{i\}} \int_0^t \xi_{s-}^n(x,j)(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) r_n\alpha s_n ds \\ &=\alpha Nr_n s_n \int_0^t \langle q_{s-}^n(1-p_{s-}^n),\phi_s \rangle _n ds. \end{align*} Hence, since $Nr_n s_n=s_0$, \begin{equation} \label{eq:A2q} A^2_t=\alpha s_0 \int_0^t \langle q_{s}^n(1-p_{s}^n),\phi_s \rangle _n ds +M_t^{n,2}(\phi), \end{equation} where \begin{equation} \label{eq:M2defq} M_t^{n,2}(\phi):=\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N]\setminus \{i\}} \int_0^t \xi_{s-}^n(x,j)(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) (d \mathcal S_s^{x,i,j}-r_n\alpha s_n ds) \end{equation} is a martingale. For the third line on the right hand side of~\eqref{eq:qnA}, \begin{align*} A^3_t&:= \frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\neq k\in [N] \setminus \{i\}} \int_0^t \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)}(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) d \mathcal Q_s^{x,i,j,k}\\ &=\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\neq k\in [N]\setminus \{i\}} \int_0^t \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)}(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) (d \mathcal Q_s^{x,i,j,k}-\tfrac1N r_n s_n ds)\\ &\quad +\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\neq k\in [N] \setminus \{i\}} \int_0^t \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)}(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) \tfrac1N r_n s_n ds. \end{align*} For $x\in \frac 1n \mathbb{Z}$ and $s\in [0,t]$, since $\eta^n_{s-}(x,j)=0$ if $\xi^n_{s-}(x,j)=0$, \begin{align*} &\sum_{i=1}^N\sum_{j\neq k\in [N] \setminus \{i\}} \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)}(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\\ &\quad =\sum_{i,j,k \in [N] \text{ distinct}} \Big( \mathds{1}_{\eta_{s-}^n(x,j)=\xi_{s-}^n(x,k)=1} - \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)=\eta_{s-}^n(x,i)=1}\\ &\hspace{8cm}- \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)=0,\, \eta_{s-}^n(x,i)=1} \Big) \\ &\quad = (N-2)N q^n_{s-}(x)(Np^n_{s-}(x)-1)-N q^n_{s-}(x)(Np^n_{s-}(x)-1)(Np^n_{s-}(x)-2)\\ &\hspace{1cm} -Nq^n_{s-}(x)(N-Np^n_{s-}(x))(N-N p^n_{s-}(x)-1)\\ &\quad = N^3 q^n_{s-}(x) (1-p^n_{s-}(x))(2p^n_{s-}(x)-1). \end{align*} Therefore, since $N r_n s_n=s_0$, \begin{equation} \label{eq:A3q} A^3_t= s_0 \int_0^t \langle q^n_{s}(1-p^n_{s})(2p^n_{s}-1),\phi_s \rangle _n ds +M^{n,3}_t(\phi), \end{equation} where \begin{align} \label{eq:M3defq} &M^{n,3}_t(\phi) \notag \\ &:=\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\neq k\in [N]\setminus \{i\}} \int_0^t \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)}(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))\phi_s(x) (d \mathcal Q_s^{x,i,j,k}-\tfrac1N r_n s_n ds) \end{align} is a martingale. Finally, for the fourth line on the right hand side of~\eqref{eq:qnA}, \begin{align*} A^4_t&:=\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N] ,y\in \{x-n^{-1},\, x+n^{-1}\}} \int_0^t (\eta_{s-}^n(y,j)-\eta_{s-}^n(x,i))\phi_s(x) d \mathcal R_s^{x,i,y,j}\\ &=\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N],y\in \{x-n^{-1},\, x+n^{-1}\}} \int_0^t (\eta_{s-}^n(y,j)-\eta_{s-}^n(x,i))\phi_s(x) (d \mathcal R_s^{x,i,y,j}-mr_n ds)\\ &\quad +\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N],y\in \{x-n^{-1},\, x+n^{-1}\}} \int_0^t (\eta_{s-}^n(y,j)-\eta_{s-}^n(x,i))\phi_s(x) mr_n ds. 
\end{align*} For $x\in \frac 1n \mathbb{Z}$ and $s\in [0,t]$, \begin{align*} \sum_{i,j\in [N],y\in \{x-n^{-1},\, x+n^{-1}\}} (\eta_{s-}^n(y,j)-\eta_{s-}^n(x,i)) = N^2(q^n_{s-}(x-n^{-1})+ q^n_{s-}(x+n^{-1}))-2N^2 q^n_{s-}(x). \end{align*} Therefore we can write \begin{align*} &\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N],y\in \{x-n^{-1},\, x+n^{-1}\}} \int_0^t (\eta_{s-}^n(y,j)-\eta_{s-}^n(x,i))\phi_s(x) mr_n ds \\ &=\frac{mr_n}{Nn}\sum_{x\in \frac1n \mathbb{Z}} \int_0^t (N^2(q_{s-}^n(x-n^{-1})+q_{s-}^n(x+n^{-1}))-2N^2 q_{s-}^n(x))\phi_s(x) ds \\ &=\frac{N mr_n}{n}\sum_{x\in \frac1n \mathbb{Z}} \int_0^t q_{s-}^n(x)(\phi_s(x+n^{-1})+\phi_s(x-n^{-1})-2 \phi_s(x)) ds \\ &=\frac{N mr_n}{n^2}\int_0^t \langle q_{s}^n, \Delta_n \phi_s \rangle _n ds , \end{align*} where the second equality follows by summation by parts. Hence, since $Nr_n n^{-2}=\frac12$, \begin{equation} \label{eq:A4q} A^4_t=\tfrac12 m\int_0^t \langle q_{s}^n, \Delta_n \phi_s \rangle _n ds +M^{n,4}_t (\phi), \end{equation} where \begin{equation} \label{eq:M4defq} M^{n,4}_t (\phi):=\frac{1}{Nn}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\in [N],y\in \{x-n^{-1},\, x+n^{-1}\}} \int_0^t (\eta_{s-}^n(y,j)-\eta_{s-}^n(x,i))\phi_s(x) (d \mathcal R_s^{x,i,y,j}-mr_n ds) \end{equation} is a martingale. Combining~\eqref{eq:A1q},~\eqref{eq:A2q},~\eqref{eq:A3q} and~\eqref{eq:A4q} with~\eqref{eq:qnA}, we have that \begin{align*} &\langle q^n_t, \phi_t \rangle _n - \langle q^n_0, \phi_0 \rangle _n -\int_0^t \langle q^n_s, \partial_s\phi_s \rangle _n ds \notag \\ &= s_0 \int_0^t \langle q^n_{s}(1-p^n_{s})(2p^n_{s}-1+\alpha ),\phi_s \rangle _n ds +\tfrac12 m\int_0^t \langle q_{s}^n, \Delta_n \phi_s \rangle _n ds +M^n_t(\phi), \end{align*} where $M^{n}_t (\phi):=\sum_{i=1}^4 M^{n,i}_t (\phi)$ is a martingale with $M^n_0(\phi)=0$. It remains to bound $\langle M^n(\phi)\rangle_t$. Since $(\mathcal P^{x,i,j})$, $(\mathcal S^{x,i,j})$, $(\mathcal Q^{x,i,j,k})$ and $(\mathcal R^{x,i,y,j})$ are independent families of Poisson processes, \begin{equation} \label{eq:Mqvsumq} \langle M^{n}(\phi)\rangle _t =\sum_{i=1}^4 \langle M^{n,i}(\phi)\rangle _t . \end{equation} By the definition of $M^{n,1}(\phi)$ in~\eqref{eq:A1q}, we have \begin{align} \label{eq:Mn1ineqq} \langle M^{n,1}(\phi)\rangle _t &=\frac{1}{N^2 n^2}r_n(1-(\alpha +1)s_n) \sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N \sum_{j\in [N] \setminus \{i\}} \int_0^t (\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))^2\phi_s(x)^2 ds \notag\\ &=\frac{r_n}{n^2}(1-(\alpha +1)s_n) \int_0^t \sum_{x\in \frac1n \mathbb{Z}} 2q_{s-}^n(x)(1-q_{s-}^n(x))\phi_s(x)^2 ds \notag\\ &\leq \frac{r_n}{n}(1-(\alpha +1)s_n) \int_0^t \langle 2q^n_s, \phi^2_s \rangle_n ds. \end{align} By the same argument, by the definition of $M^{n,2}(\phi)$ in~\eqref{eq:M2defq}, \begin{align*} \langle M^{n,2}(\phi)\rangle _t &\leq \frac{r_n}{n}\alpha s_n \int_0^t \langle 2q^n_s, \phi^2_s \rangle_n ds . \end{align*} Then by the definition of $M^{n,3}(\phi)$ in~\eqref{eq:M3defq}, \begin{align*} &\langle M^{n,3}(\phi)\rangle _t \\ &\quad = \frac{1}{N^2 n^2}\frac{r_n s_n}{N}\sum_{x\in \frac1n \mathbb{Z}}\sum_{i=1}^N\sum_{j\neq k\in [N] \setminus \{i\}} \int_0^t \mathds{1}_{\xi_{s-}^n(x,j)=\xi_{s-}^n(x,k)}(\eta_{s-}^n(x,j)-\eta_{s-}^n(x,i))^2\phi_s(x)^2 ds \\ &\quad \leq \frac{1}{N^2 n^2}\frac{r_n s_n}{N}\sum_{x\in \frac1n \mathbb{Z}}N^3 \int_0^t 2q^n_{s-}(x)(1-q^n_{s-}(x)) \phi_s(x)^2 ds \\ &\quad \le \frac{r_n}{n} s_n \int_0^t \langle 2q^n_s, \phi^2_s \rangle_n ds . 
\end{align*} Finally, by the definition of $M^{n,4}(\phi)$ in~\eqref{eq:M4defq}, \begin{align*} \langle M^{n,4}(\phi)\rangle _t &\leq \frac{1}{N^2 n^2}m r_n\sum_{x\in \frac1n \mathbb{Z}}N^2 \int_0^t (q^n_{s-}(x-n^{-1})+2q^n_{s-}(x)+q^n_{s-}(x+n^{-1})) \phi_s(x)^2 ds \\ &= \frac{m r_n}{n} \int_0^t \langle q^n_s(\cdot-n^{-1})+2q^n_s(\cdot)+q^n_s(\cdot+n^{-1}), \phi^2_s \rangle_n ds . \end{align*} By~\eqref{eq:Mqvsumq}, and since $r_n n^{-1} =\frac 12 n N^{-1}$, the result follows. \end{proof} The following result, which is a version of the local central limit theorem in~\cite{lawler/limic:2010}, will be used several times in the rest of the article. Recall that we let $(X^n_t)_{t\ge 0}$ denote a simple symmetric random walk on $\frac 1n \mathbb{Z}$ with jump rate $n^2$. \begin{lemma}[Theorem~2.5.6 in~\cite{lawler/limic:2010}] \label{lem:lclt} For $x\in \frac 1n \mathbb{Z}$ and $t>0$ with $|x|\le \frac 12 nt$, $$ \psubb{0}{X^n_t=x}=\frac 1n \frac{1}{\sqrt{2\pi t}}e^{-\frac{x^2}{2t}}e^{\mathcal O(n^{-1}t^{-1/2}+n^{-1} |x|^3 t^{-2})}. $$ \end{lemma} The next lemma gives us useful bounds on $\langle M^n(\phi^{t,z})\rangle_t$. \begin{lemma} \label{lem:Mqvarbound} There exists a constant $C_6<\infty$ such that for $t\ge 0$, $s\in [0,t]$ and $z\in \frac 1n \mathbb{Z}$, \begin{align} \langle 1, (\phi^{t,z}_s)^2 \rangle_n =n \psubb{0}{X^n_{2m(t-s)}=0}, \qquad &\int_0^t \langle 1, (\phi^{t,z}_s)^2\rangle_n ds \leq C_6 t^{1/2} \label{eq:intphitzq}\\ \text{and } \qquad \quad \langle M^{n}(\phi^{t,z})\rangle _t &\leq C_6 t^{1/2} \frac{n }{N}. \label{eq:lemMqvar2} \end{align} \end{lemma} \begin{proof} For $s\in [0,t]$, by the definition of $\phi^{t,z}_s$ in~\eqref{eq:phitzdefq} and by translational invariance, \begin{align} \label{eq:phitz2sumq} \sum_{x\in \frac1n \mathbb{Z}} \phi^{t,z}_s(x)^2 &=n^2 \sum_{x\in \frac1n \mathbb{Z}} \mathbf{P}_0 \left( X^n_{m(t-s)}=x \right)^2 \notag\\ &=n^2 \sum_{x\in \frac1n \mathbb{Z}} \mathbf{P}_0 \left( X^n_{m(t-s)}=-x \right) \mathbf{P}_0 \left( X^n_{m(t-s)}=x \right) \notag\\ &=n^2 \mathbf{P}_0 \left( X^n_{2m(t-s)}=0 \right), \end{align} where the second line follows by symmetry. (This argument is used in~(54) of \cite{durrett/fan:2016}.) By Lemma~\ref{lem:lclt}, for $t_0>0$, \begin{align*} \int_0^{t_0} n \psubb{0}{X^n_s=0}ds \le \min(n t_0,n^{-1})+\int_{t_0\wedge n^{-2}}^{t_0} (2\pi s)^{-1/2} e^{\mathcal O(1)}ds \le K_3 t_0^{1/2}, \end{align*} for some constant $K_3$. By~\eqref{eq:phitz2sumq}, the first statement~\eqref{eq:intphitzq} follows, and the second statement~\eqref{eq:lemMqvar2} follows by Lemma~\ref{lem:qnphi} and since $q^n_s \in [0,1]$. \end{proof} We will use the following lemma in the proof of Proposition~\ref{prop:pnun}, and also later on in Section~\ref{sec:eventE2}. \begin{lemma} \label{lem:qnvndet} For $k\in \mathbb{N}$, $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, \begin{align*} &|q^n_t(z)-v^n_t(z)|^k\\ &\le 3^{2k-1} s_0^k t^{k-1}\left( \int_0^t \langle |q^n_s - v^n_s|^k , \phi^{t,z}_s \rangle_n ds + \int_0^t \sup_{x\in \frac 1n \mathbb{Z}} v^n_s(x)^k \langle |p^n_s - u^n_s|^k, \phi^{t,z}_s \rangle_n ds\right) +3^{k-1}|M^n_t(\phi^{t,z})|^k. 
\end{align*} \end{lemma} \begin{proof} By Corollary~\ref{cor:qnMa} and~\eqref{eq:vngreena} with $a=0$, for $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, \begin{align*} |q^n_t(z)-v^n_t(z)| &\leq s_0 \int_0^t | \langle (q^n_s-v^n_s)(1-p^n_s)(2p^n_s-1+\alpha), \phi^{t,z}_s \rangle _n | ds\\ &\, +s_0 \int_0^t | \langle v^n_s((1-p^n_s)(2p^n_s-1+\alpha)-(1-u^n_s)(2u^n_s-1+\alpha)), \phi^{t,z}_s \rangle _n | ds +|M^n_t (\phi^{t,z})|. \end{align*} Therefore, since $|(1-u)(2u-1+\alpha )|\le 1+\alpha$ for $u\in [0,1]$, and since $|(1-x)(2x-1+\alpha )-(1-y)(2y-1+\alpha )|\le 3 |x-y|$ for $x,y \in [0,1]$, for $k\in \mathbb{N}$, \begin{align} \label{eq:lemqnvnA} |q^n_t(z)-v^n_t(z)|^k &\leq 3^{k-1}s_0^k \left( \int_0^t \langle (1+\alpha) |q^n_s-v^n_s|, \phi^{t,z}_s \rangle _n ds\right)^k \notag \\ &\quad +3^{k-1} s_0^k \left( \int_0^t \langle v^n_s\cdot 3 |p^n_s-u^n_s|, \phi^{t,z}_s \rangle _n ds \right)^k +3^{k-1} |M^n_t (\phi^{t,z})|^k. \end{align} Note that by the definition of $\phi^{t,z}$ in~\eqref{eq:phitzdefq}, for $s\in [0,t]$, $\langle 1,\phi^{t,z}_s\rangle_n =1$. Hence by two applications of Jensen's inequality, \begin{align*} \left(\int_0^t \langle (1+\alpha) |q^n_{s}-v^n_s|, \phi^{t,z}_s \rangle _n ds\right)^k &\leq t^{k-1}(1+\alpha) ^k \int_0^t \langle |q^n_{s}-v^n_s|, \phi^{t,z}_s \rangle _n ^k ds \\ &\leq t^{k-1}(1+\alpha) ^k \int_0^t \langle |q^n_{s}-v^n_s|^k, \phi^{t,z}_s \rangle _n ds. \end{align*} Similarly, \begin{align*} \left( \int_0^t \langle 3 v^n_s |p^n_s-u^n_s|, \phi^{t,z}_s \rangle _n ds \right)^k &\le t^{k-1}3^k \int_0^t \sup_{x\in \frac 1n \mathbb{Z}}v^n_s(x)^k \langle |p^n_{s}-u^n_s|^k, \phi^{t,z}_s \rangle _n ds. \end{align*} The result follows by~\eqref{eq:lemqnvnA}. \end{proof} We will use the following form of the Burkholder-Davis-Gundy inequality (see the proof of Lemma~4 in~\cite{mueller/tribe:1995}) in the proof of Proposition~\ref{prop:pnun} and also later in Section~\ref{sec:eventE2}. \begin{lemma}[Burkholder-Davis-Gundy inequality] \label{lem:BDG} For $k\in \mathbb{N}$ with $k\geq 2$ there exists $C(k)<\infty$ such that for $(M_t)_{t\geq 0}$ a c\`adl\`ag martingale with $M_0=0$, for $t\geq 0$, \begin{equation*} \E {\sup_{s\in [0,t]}|M_s|^k } \leq C(k) \E {\langle M \rangle_t^{k/2} +\sup_{s\in [0,t]}|M_s-M_{s-}|^k }. \end{equation*} \end{lemma} We are now ready to finish this section by proving Proposition~\ref{prop:pnun}. \begin{proof}[Proof of Proposition~\ref{prop:pnun}] For $t>0$ and $z\in \frac 1n \mathbb{Z}$, by Lemma~\ref{lem:qnphi} we have that almost surely \begin{equation*} \sup_{s\in [0,t]}|M_s^n(\phi^{t,z})-M_{s-}^n(\phi^{t,z})| = \sup_{s\in [0,t]} |\langle q^n_s, \phi^{t,z}_s \rangle_n - \langle q^n_{s-}, \phi^{t,z}_s \rangle_n | \leq N^{-1}. \end{equation*} It follows by Lemma~\ref{lem:Mqvarbound} and Lemma~\ref{lem:BDG} that for $k\geq 2$, \begin{equation*} \E {\sup_{s\in [0,t]}|M^n_s(\phi^{t,z})|^k } \leq C(k) \left(\left(C_6 t^{1/2}\frac{n }{N}\right)^{k/2}+N^{-k} \right). \end{equation*} By Lemma~\ref{lem:qnvndet}, and since $\langle 1, \phi^{t,z}_s\rangle_n =1$ and $v^n_s\in [0,1]$ for $s\in [0,t]$, \begin{align} \label{qnvnEbound} &\E{|q^n_t(z)-v^n_t(z)|^k} \notag \\ &\leq 3^{2k-1}s_0^k t^{k-1} \left(\int_0^t \sup_{x\in \frac1n \mathbb{Z}} \E{ |q^n_{s}(x)- v^n_s(x)|^k} ds +\int_0^t \sup_{x\in \frac1n \mathbb{Z}} \E{ |p^n_{s}(x)-u^n_s(x)|^k} ds \right) \notag \\ &\qquad +3^{k-1}C(k) \left(\left(C_6 t^{1/2} \frac{n }{N}\right)^{k/2}+N^{-k} \right). 
\end{align} Temporarily setting $\eta^n_0=\xi^n_0$ and so $q^n_0=p^n_0$, we have $p^n_s=q^n_s$ and $v^n_s=u^n_s$ $\forall s\ge 0$, and by Gronwall's inequality, for $t\ge 0$, \begin{align*} \sup_{x\in \frac1n \mathbb{Z}} \E{ |p^n_{t}(x)-u^n_t(x)|^k} &\leq 3^{k-1} C(k)\left( \left(C_6 t^{1/2} \frac{n}{N} \right)^{k/2}+N^{-k}\right) e^{3^{2k-1}2s_0^k t^k}. \end{align*} It follows that there exists a constant $C_1=C_1(k)<\infty$ such that for $t\geq 0$, \begin{align} \label{eq:gronwall1} \sup_{x\in \frac1n \mathbb{Z}} \E{ |p^n_{t}(x)-u^n_t(x)|^k} &\leq C_1 \left(\frac{n^{k/2}t^{k/4}}{N^{k/2}}+N^{-k}\right) e^{C_1 t^k}, \end{align} which establishes~\eqref{eq:gronwall1stat}. Then substituting into~\eqref{qnvnEbound}, \begin{align*} \E{|q^n_t(z)-v^n_t(z)|^k} &\leq 3^{2k-1}s_0^k t^{k-1} \int_0^t \sup_{x\in \frac1n \mathbb{Z}} \E{ |q^n_{s}(x)- v^n_s(x)|^k} ds\\ &\quad + 3^{2k-1} s_0^k t^{k-1} \int_0^t C_1\left(\frac{n^{k/2}s^{k/4}}{N^{k/2}}+N^{-k} \right) e^{C_1 s^k}ds\\ &\qquad +3^{k-1}C(k) \left(\left(C_6 t^{1/2}\frac{n }{N}\right)^{k/2}+N^{-k} \right). \end{align*} Hence by Gronwall's inequality, there exists a constant $K_4=K_4(k)<\infty$ such that for $t\ge 0$, \begin{align} \label{eq:gronwall2} \sup_{x\in \frac1n \mathbb{Z}} \E{ |q^n_{t}(x)-v^n_t(x)|^k} &\leq K_4 (t^{5k/4}+1)e^{C_1 t^k}\left(\frac n N \right)^{k/2} e^{3^{2k-1}s_0^k t^k}. \end{align} Note that for $x\in \frac1n \mathbb{Z}$, the rate at which $(p_t^n(x))_{t\geq 0}$ jumps is bounded above by $$ N^2 r_n(1-(\alpha+1)s_n)+N^2 r_n\alpha s_n +N^3 \cdot \tfrac{1}N r_n s_n+2N^2 m r_n =N^2 r_n (1+2m) =\tfrac12 N n^2 (1+2m). $$ Therefore, for $t\geq 0$ and $x\in \frac 1n \mathbb{Z}$, letting $Z\sim \text{Poisson}(\frac12 (1+2m))$ and then using Markov's inequality, \begin{align*} \p{\sup_{s\in [0,n^{-2} N^{-1}]}|p^n_{t+s}(x)-p^n_t(x)| \geq N^{-1/2}} &\leq \p{Z \geq N^{1/2}} \leq e^{-2N^{1/2}} \E{e^{2Z}} \leq e^{-N^{1/2}} \end{align*} for $n$ sufficiently large. Suppose $T\le N$. Then by a union bound, \begin{align} \label{eq:pntchange} &\p{\exists t\in n^{-2}N^{-1}\mathbb{N}_0\cap [0,T] , x\in \tfrac1n \mathbb{Z}\cap [- N^5,N^5]: \sup_{s\in [0,n^{-2}N^{-1}]}|p^n_{t+s}(x)-p^n_t(x)| \geq N^{-1/2}} \notag\\ &\leq \sum_{t\in n^{-2}N^{-1}\mathbb{N}_0\cap [0,T]} \sum_{ x\in \frac1n \mathbb{Z}\cap [- N^5,N^5]}\p{\sup_{s\in [0,n^{-2}N^{-1}]}|p^n_{t+s}(x)-p^n_t(x)| \geq N^{-1/2}}\notag\\ &\leq (n^2 N T +1)(2N^5 n+1)e^{-N^{1/2}}\notag\\ &\leq e^{-N^{1/2}/2} \end{align} for $n$ sufficiently large. For $t_1,t_2 \geq 0$ and $x\in \frac1n \mathbb{Z}$, since $\sup_{u\in [0,1]}|f(u)|< 1$, \begin{align*} |u^n_{t_1}(x)-u^n_{t_2}(x)|&\leq \tfrac12 m \sup_{s\geq 0, y\in \frac 1n \mathbb{Z}}|\Delta_n u^n_s(y)| |t_1-t_2| +s_0 |t_1-t_2|\\ &\leq (mn^2+s_0)|t_1-t_2|. \notag \end{align*} Therefore for $n$ sufficiently large, for $t\geq 0$ and $x\in \frac1n \mathbb{Z}$, \begin{equation} \label{eq:unchange} \sup_{s\in [0,n^{-2}N^{-1}]}|u^n_{t+s}(x)-u^n_t(x)| \leq 2m N^{-1}. \end{equation} Then by~\eqref{eq:pntchange},~\eqref{eq:unchange} and a union bound, for $c_3\in (0,1/2)$, for $n$ sufficiently large that $2mN^{-1}+N^{-1/2}\le \frac 12 \left( \frac n N \right)^{1/2-c_3}$, \begin{align*} &\p{\sup_{x\in \frac1n \mathbb{Z},\, |x|\leq N^5}\sup_{t\in [0,T]}|p^n_{t}(x)-u^n_t(x)| \geq \left(\frac{n}{N}\right)^{1/2-c_3}} \\ &\leq \sum_{t\in n^{-2}N^{-1}\mathbb{N}_0\cap [0,T]} \sum_{ x\in \frac1n \mathbb{Z},\, |x|\leq N^5}\p{|p^n_{t}(x)-u^n_t(x)| \geq \tfrac12\left(\frac{n}{N}\right)^{1/2-c_3}}+e^{-N^{1/2}/2}. 
\end{align*} Hence for $k\in \mathbb{N}$ with $k\geq 2$, by Markov's inequality, \begin{align*} &\p{\sup_{x\in \frac1n \mathbb{Z},\, |x|\leq N^5}\sup_{t\in [0,T]}|p^n_{t}(x)-u^n_t(x)| \geq \left(\frac{n}{N}\right)^{1/2-c_3}} \\ &\leq \sum_{t\in n^{-2}N^{-1}\mathbb{N}_0\cap [0,T]} \sum_{ x\in \frac1n \mathbb{Z},\, |x|\leq N^5}\E{|p^n_{t}(x)-u^n_t(x)|^k }2^k\left(\frac{n}{N}\right)^{-k(1/2-c_3)}+e^{-N^{1/2}/2}\\ &\leq \sum_{t\in n^{-2}N^{-1}\mathbb{N}_0\cap [0,T]} \sum_{ x\in \frac1n \mathbb{Z},\, |x|\leq N^5}C_1 \left(\frac{n^{k/2}t^{k/4}}{N^{k/2}}+N^{-k}\right) e^{C_1 t^k}2^k\left(\frac{n}{N}\right)^{-k(1/2-c_3)}+e^{-N^{1/2}/2}\\ &\leq (n^2 N T+1) (2n N^5+1)C_1 \left(\frac{n^{k/2}T^{k/4}}{N^{k/2}}+N^{-k}\right) e^{C_1 T^k}2^k\left(\frac{n}{N}\right)^{-k(1/2-c_3)}+e^{-N^{1/2}/2}, \end{align*} where the second inequality follows by~\eqref{eq:gronwall1}. Take $\ell'\in \mathbb{N}$ sufficiently large that $n^4 N^7 e^{2^k(C_1+3^{2k-1}s_0^k)(\log N)^{1/2}}\left( \frac n N \right)^{\ell'} \le 1$ for $n$ sufficiently large. For $\ell \in \mathbb{N}$, take $c_4\in (0,\frac 12 c_3(\ell+\ell'+1)^{-1})$. Since $1/(2c_4)>(\ell+\ell'+1)/c_3$ and $c_3<1/2$, we can take $k\in \mathbb{N} \cap ((\ell+\ell')/c_3,1/(2c_4))$ with $k\ge 2$. Therefore for $T\leq 2(\log N)^{c_4}$, for $n$ sufficiently large, \begin{align*} &\p{\sup_{x\in \frac1n \mathbb{Z},\, |x|\leq N^5}\sup_{t\in [0,T]}|p^n_{t}(x)-u^n_t(x)| \geq \left(\frac{n}{N}\right)^{1/2-c_3}}\\ &\quad \leq n^4 N^7\left( \frac{n}{N}\right)^{k/2} e^{C_1 2^k(\log N)^{c_4 k}}\left(\frac n N \right)^{-k(1/2-c_3)}+e^{-N^{1/2}/2}\\ &\quad \leq \left( \frac{n}{N}\right)^\ell \end{align*} for $n$ sufficiently large, since $kc_3>\ell+\ell'$ and $c_4 k<1/2$. Similarly, by a union bound and Markov's inequality, and then by~\eqref{eq:gronwall2}, for $t \le 2 (\log N)^{c_4}$, \begin{align*} \p{\sup_{x\in \frac1n \mathbb{Z},\, |x|\leq N^5}|q^n_{t}(x)-v^n_t(x)| \geq \left(\frac{n}{N}\right)^{1/2-c_3}} &\le \sum_{x\in \frac 1n \mathbb{Z}, |x|\le N^5} \E{|q^n_t(x)-v^n_t(x)|^k} \left( \frac n N \right)^{-k(1/2-c_3)} \\ &\leq (2 n N^5 +1) K_4 (t^{5k/4}+1)e^{C_1 t^k} e^{3^{2k-1}s_0^k t^k} \left(\frac n N \right)^{kc_3}\\ &\leq \left( \frac{n}{N}\right)^\ell \end{align*} for $n$ sufficiently large, which completes the proof. \end{proof} \section{Event $E_2$ occurs with high probability} \label{sec:eventE2} Recall the definitions of the events $E_2$ and $E'_2$ in~\eqref{eq:eventE2} and~\eqref{eq:eventE'2}. In this section, we will prove the following result. \begin{prop} \label{prop:eventE2} For $c_1,c_2>0$, for $t^*\in \mathbb{N}$ sufficiently large and $K\in \mathbb{N}$ sufficiently large (depending on $t^*$), the following holds. If $a_1>1$ and $N\ge n^{a_1}$ for $n$ sufficiently large, then for $n$ sufficiently large, $$ \p{(E'_2)^c\cap E'_1}\le \left( \frac n N \right)^2. $$ Moreover, if $a_2>3$ and $N\ge n^{a_2}$ for $n$ sufficiently large, then for $n$ sufficiently large, $$ \p{(E_2)^c\cap E'_1}\le \left( \frac n N \right)^2. $$ \end{prop} Suppose from now on in this section that for some $a_1>1$, $N\ge n^{a_1}$ for $n$ sufficiently large, and fix $c_1, c_2>0$. We begin by proving that for $t$, $x_1$ and $x_2$ such that $x_1$ and $x_2$ are not too far from the front, the event $A^{(1)}_{t}(x_1,x_2)$ occurs with high probability. Recall the definition of $(v^n_t)_{t\ge 0}$ in~\eqref{eq:vndef}. Our first step is to show that the solution of a PDE closely related to~\eqref{eq:vndef} can be written in terms of a diffusion $(Z_t)_{t\ge 0}$.
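\begin{remark}
The diffusion appearing in the next lemma can be explored numerically. The sketch below is illustrative only and is not used anywhere in the arguments: it runs an Euler--Maruyama discretisation of the SDE~\eqref{eq:SDE} below, assuming the logistic profile $g(y)=(1+e^{\kappa y})^{-1}$, which is consistent with the identities $\nabla g=-\kappa g(1-g)$ and $\Delta g=-\kappa^2 g(1-g)(2g-1)$ used in the proof of Lemma~\ref{lem:vtSDE}; the closed forms $\kappa=\sqrt{2s_0/m}$ and $\nu=\alpha s_0\kappa^{-1}$ and all parameter values are assumptions made for illustration. The long-run empirical mean of $Z$ is compared with the mean of the stationary density, which is proportional to the speed measure density $M(x)=\frac 4m e^{\alpha\kappa x}g(x)^2$ of~\eqref{eq:smdefn} below.
\begin{verbatim}
import numpy as np

# Illustrative Euler-Maruyama simulation (not part of the proof) of
#   dZ = nu dt + m (grad g / g)(Z) dt + sqrt(m) dB,
# with the assumed logistic profile g(y) = 1/(1 + exp(kappa*y)), so that
# (grad g / g)(z) = -kappa (1 - g(z)) = -kappa / (1 + exp(-kappa*z)).
rng = np.random.default_rng(0)
m, s0, alpha = 1.0, 1.0, 0.3         # illustrative parameter values
kappa = np.sqrt(2.0 * s0 / m)
nu = alpha * s0 / kappa

def drift(z):
    return nu - m * kappa / (1.0 + np.exp(-kappa * z))

dt, steps = 1.0e-3, 500_000
z, total, count = 0.0, 0.0, 0
for i in range(steps):
    z += drift(z) * dt + np.sqrt(m * dt) * rng.standard_normal()
    if i >= steps // 2:              # discard a burn-in period
        total += z
        count += 1

# mean of the stationary density, proportional to exp(alpha*kappa*x) g(x)^2
x = np.linspace(-30.0, 30.0, 20001)
w = np.exp(alpha * kappa * x) / (1.0 + np.exp(kappa * x)) ** 2
print(total / count, np.sum(x * w) / np.sum(w))  # the two means should agree
\end{verbatim}
The agreement of the two printed means reflects the fact that $Z$ forgets its initial condition quickly; the coupling bounds in Lemma~\ref{lem:Tbound} below quantify this.
\end{remark}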
\begin{lemma} \label{lem:vtSDE} Suppose $h:\R \rightarrow [0,1]$ is measurable, and take $t_0>0$. For $x\in \R$ and $t\ge t_0$, let $$ v_t(x)=g(x-\nu t)\Esub{x-\nu t}{\frac{h(Z_{t-t_0}+\nu t_0)}{g(Z_{t-t_0})}}, $$ where under $\mathbb P_{x_0}$, $(Z_t)_{t\ge 0}$ solves the SDE \begin{equation} \label{eq:SDE} dZ_t =\nu \,dt+\frac{m\nabla g(Z_t)}{g(Z_t)}\,dt+\sqrt m \,dB_t, \quad Z_0 =x_0, \end{equation} and $(B_t)_{t\ge 0}$ is a Brownian motion. Then $v_{t_0}=h$ and $$ \partial_t v_t(x)=\tfrac 12 m \Delta v_t(x)+s_0 v_t(x)(1-g(x-\nu t))(2g(x-\nu t)-1+\alpha ) \quad \text{for }t>t_0, \, x\in \R. $$ \end{lemma} \begin{proof} For $t\ge t_0$ and $x\in \R$, let $$ v^{(1)}_t(x)=\Esub{x-\nu t}{\frac{h(Z_{t-t_0}+\nu t_0)}{g(Z_{t-t_0})}} =v_t(x)g(x-\nu t)^{-1}. $$ Since $\mathcal Af(x):= \frac 12 m\Delta f(x)+\left(\nu +\frac{m \nabla g(x)}{g(x)}\right) \nabla f(x)$ is the generator of the diffusion $(Z_t)_{t\ge 0}$ as defined in~\eqref{eq:SDE}, for $t>t_0$ and $x\in \R$, \begin{align*} \partial_t v^{(1)}_t(x)&= \tfrac 12 m \Delta v^{(1)}_t (x) +\left( \nu+\frac{m\nabla g(x-\nu t)}{g(x-\nu t)}\right)\nabla v^{(1)}_t (x)-\nu \nabla v^{(1)}_t (x) \end{align*} (see for example Theorem~7.1.5 in~\cite{durrett:1996}). Therefore \begin{align*} \partial_t v_t(x)&=-\nu \nabla g(x-\nu t)v^{(1)}_t (x)+ \tfrac 12 m g(x-\nu t)\Delta v^{(1)}_t (x)+m \nabla g(x-\nu t)\nabla v^{(1)}_t (x)\\ &=\tfrac 12 m \Delta v_t(x)-\tfrac 12 m\frac{\Delta g(x-\nu t)}{g(x-\nu t)}v_t(x)-\nu \frac{\nabla g(x-\nu t)}{g(x-\nu t)}v_t(x). \end{align*} Since $\Delta g=-\kappa^2 g(1-g)(2g-1)$ and $\nabla g=-\kappa g(1-g)$, the result follows by~\eqref{eq:kappanu}. \end{proof} We now show that for $(u^n_t)_{t\ge 0}$ and $(v^n_t)_{t\ge 0}$ defined as in~\eqref{eq:undef} and~\eqref{eq:vndef}, if $\sup_{s\in [0,t],\,x\in \frac 1n \mathbb{Z}}|u^n_s(x)-g(x-\nu s)|$ is small then $v^n_t$ is approximately given by an expectation of a function of $Z_t$. The proof is similar to the proof of Lemma~\ref{lem:unu}. \begin{lemma} \label{lem:vnvbound} Take $\delta,\epsilon \in (0,1)$. For $t\ge 0$ and $x\in \R$, let $$ v_t(x)=g(x-\nu t) \Esub{x-\nu t}{\bar{q}^n_0 (Z_t) g(Z_t)^{-1}}, $$ where $\bar{q}^n_0:\R\rightarrow [0,1]$ is the linear interpolation of $q^n_0: \frac 1n \mathbb{Z} \rightarrow [0,1]$, and $(Z_t)_{t\ge 0}$ is defined in~\eqref{eq:SDE}. Suppose $T\ge 1$, $\sup_{x\in \frac 1n \mathbb{Z}, s\in [0,T]}|u^n_s(x)-g(x-\nu s)|\le \delta$ and $\sup_{x_1,x_2\in\frac 1n \mathbb{Z},|x_1-x_2|\leq n^{-1/3}}|q^n_0(x_1)-q^n_0(x_2)|\le \epsilon$. There exists a constant $C_7<\infty$ such that for $n$ sufficiently large, for $t\in [0,T]$, \begin{align*} \sup_{x\in \frac1n \mathbb{Z}}|v_t^n(x)-v_t(x)| &\leq \left(C_7 (n^{-1/3}+\delta)\sup_{x\in \frac 1n \mathbb{Z}}q^n_0(x) +2\epsilon \right) e^{5s_0 T} T^2. \end{align*} \end{lemma} \begin{proof} For $t>0$ and $x\in \R$, let $G_t(x)=\frac{1}{\sqrt{2\pi t}}e^{-x^2/(2t)}.$ For $s\ge 0$ and $x\in \R$, let $f_s(x)=v_s(x)(1-g(x-\nu s))(2g(x-\nu s)-1+\alpha )$. By Lemma~\ref{lem:vtSDE}, for $a\in \R$, $z\in \R$ and $t> 0$, \begin{equation} \label{eq:vtgreen*} v_t(z)=e^{-at} G_{mt}\ast v_0(z)+\int_0^t e^{-a(t-s)} G_{m(t-s)}\ast (s_0 f_s+av_s)(z)ds. \end{equation} Therefore, by~\eqref{eq:vtgreen*} with $a=-(1+\alpha)s_0 $, and since $(1-u)(2u-1+\alpha )\le 1+\alpha$ for $u\in [0,1]$, \begin{align} \label{eq:lemvnv*} v_t(z) &\le e^{(1+\alpha)s_0 t} G_{mt}\ast v_0(z). 
\end{align} Letting $(B_t)_{t\geq 0}$ denote a Brownian motion, it follows from~\eqref{eq:vngreena} and~\eqref{eq:vtgreen*} with $a=0$ that for $z\in \frac1n \mathbb{Z}$ and $t\ge 0$, \begin{align} \label{eq:unminusu} |v_t^n(z)-v_t(z)| & \leq \left|\Esubb{z}{q^n_0(X^n_{mt})}-\Esub{z}{v_0(B_{mt})} \right| \notag\\ &\quad +s_0 \int_0^t \Big|\Esubb{z}{v^n_s(1-u^n_s)(2u^n_s-1+\alpha )(X^n_{m(t-s)})} -\Esub{z}{f_s(B_{m(t-s)})} \Big| ds. \end{align} Recall from~\eqref{eq:couplingBM0} in the proof of Lemma~\ref{lem:unu} that for $n$ sufficiently large, $(X^n_t)_{t\ge 0}$ and $(B_t)_{t\ge 0}$ can be coupled in such a way that $X^n_0=B_0$ and for $t\ge 0$, \begin{align} \label{eq:couplingBM} \p{|X^n_{mt}-B_{mt}|\ge n^{-1/3}}\leq (t+1)n^{-1/2}. \end{align} Since $v_0=\bar{q}^n_0$, which is the linear interpolation of $q^n_0$, it follows that for $z\in \frac 1n \mathbb{Z}$ and $t\ge 0$, \begin{align} \label{eq:(*)unuproof} \left|\Esubb{z}{q^n_0(X^n_{mt})}-\Esub{z}{v_0(B_{mt})} \right| &\leq (t+1)n^{-1/2} \sup_{x\in \frac 1n \mathbb{Z}}q^n_0(x) +\sup_{x_1,x_2\in\R,|x_1-x_2|\leq n^{-1/3}}|\bar{q}^n_0(x_1)-\bar{q}^n_0(x_2)| \notag \\ &\leq (t+1)n^{-1/2} \sup_{x\in \frac 1n \mathbb{Z}}q^n_0(x) +2\epsilon \end{align} for $n$ sufficiently large. For the second term on the right hand side of~\eqref{eq:unminusu}, note that if $t\le T$ then for $s\in [0,t]$ and $y\in \frac 1n \mathbb{Z}$, $$ |(1-u^n_s(y))(2u^n_s(y)-1+\alpha )-(1-g(y-\nu s))(2g(y-\nu s)-1+\alpha )|\le 3 \delta. $$ Hence by the triangle inequality and then by~\eqref{eq:couplingBM}, for $s\in [0,t]$, \begin{align} \label{eq:intvdiffbound} &\left|\Esubb{z}{v^n_s(1-u^n_s)(2u^n_s-1+\alpha )(X^n_{m(t-s)})}-\Esub{z}{f_s(B_{m(t-s)})} \right| \notag \\ &\quad \le \Esubb{z}{(|(v^n_s-v_s)(1-u^n_s)(2u^n_s-1+\alpha )| +3\delta v_s )(X^n_{m(t-s)})} \notag \\ &\qquad +\left|\Esubb{z}{f_s(X^n_{m(t-s)})}-\Esub{z}{f_s(B_{m(t-s)})} \right| \notag \\ &\quad \le 3\left( \sup_{x\in \frac 1n \mathbb{Z}}|v^n_s(x)-v_s(x)|+\delta \sup_{x\in \R} v_s(x) \right) +2 (t+1)n^{-1/2}\sup_{x\in \R}|f_s(x)| + n^{-1/3} \sup_{x\in \R} |\nabla f_s(x)| \notag \\ &\quad \leq 3 \Big( \sup_{x\in \frac 1n \mathbb{Z}}|v^n_s(x)-v_s(x)|+(\delta + 2(t+1)n^{-1/2}) e^{(1+\alpha)s_0 s}\|v_0\|_\infty \notag \\ &\hspace{4cm} + n^{-1/3}(\|\nabla v_s\|_{\infty}+e^{(1+\alpha)s_0 s}\|v_0\|_\infty \|\nabla g\|_{\infty}) \Big) \end{align} by~\eqref{eq:lemvnv*}. It remains to bound $\|\nabla v_s\|_\infty$. For $t> 0$ and $x\in \R$, by differentiating both sides of~\eqref{eq:vtgreen*}, \begin{align} \label{eq:dut} \nabla v_t(x) &=G'_{mt}\ast v_0(x)+s_0 \int_0^t G'_{m(t-s)}\ast f_s(x)ds . \end{align} For the first term on the right hand side, \begin{align*} |G'_{mt}\ast v_0(x)| &\le \|v_0\|_\infty \int_{-\infty}^\infty |G'_{mt}(z)|dz = 2\|v_0\|_\infty G_{mt}(0) =2 \|v_0\|_\infty(2\pi m t)^{-1/2}. \end{align*} For the second term on the right hand side of~\eqref{eq:dut}, since $|f_s(x)|\le (1+\alpha)e^{(1+\alpha)s_0 s}\|v_0\|_\infty$ by~\eqref{eq:lemvnv*}, \begin{align*} \left|\int_0^t G'_{m(t-s)}\ast f_s(x)ds\right| &\leq (1+\alpha) e^{(1+\alpha)s_0 t}\|v_0\|_\infty \int_0^t 2G_{m(t-s)}(0) ds, \end{align*} and so by~\eqref{eq:dut}, for $t> 0$, $$ \|\nabla v_t \|_\infty \leq (2t^{-1/2}+4s_0 (1+\alpha) e^{(1+\alpha)s_0 t}t^{1/2})(2\pi m)^{-1/2}\|v_0\|_\infty. 
$$ Substituting into~\eqref{eq:intvdiffbound} and then into~\eqref{eq:unminusu}, using~\eqref{eq:(*)unuproof}, we now have that for $t\in [0,T]$ and $z\in \R$, \begin{align*} |v_t^n(z)-v_t(z)| & \leq (t+1)n^{-1/2}\sup_{x\in \frac 1n \mathbb{Z}}q^n_0(x) +2\epsilon \\ & +3s_0\int_0^t \bigg( \sup_{x\in \frac1n \mathbb{Z}}|v^n_s(x)-v_s(x)| +e^{(1+\alpha)s_0 t} \|v_0\|_\infty (\delta +2(t+1)n^{-1/2}+ n^{-1/3}\|\nabla g\|_\infty)\\ &\hspace{2cm}+(t^{-1/2} +2s_0 (1+\alpha) e^{(1+\alpha)s_0 t} t^{1/2} )m^{-1/2} \|v_0\|_\infty n^{-1/3}\bigg) ds. \end{align*} The result follows by Gronwall's inequality. \end{proof} By the theory of speed and scale (see for example~\cite{karlin/taylor:1981}), $(Z_t)_{t\ge 0}$ as defined in~\eqref{eq:SDE} has scale function $S$ and speed measure density $M$ given by \begin{equation} \label{eq:smdefn} S(x)=\int_0^x \tfrac 14 e^{-\alpha \kappa y}g(y)^{-2}dy \quad \text{and}\quad M(x)=\frac 4m e^{\alpha \kappa x}g(x)^2. \end{equation} Therefore $(Z_t)_{t\ge 0}$ has a stationary distribution with density $\pi$ as defined in~\eqref{eq:pidefn}. We now establish some useful upper bounds on the total variation distance between $\pi$ and the law of $Z_t$ at a large time $t$. Recall the definitions of $\gamma_n$ and $d_n$ in~\eqref{eq:paramdefns}. \begin{lemma} \label{lem:Tbound} Take $z_0\in \R$ and suppose $(Z^{(1)}_t)_{t\ge 0}$ and $(Z^{(2)}_t)_{t\ge 0}$ solve the SDEs \begin{align*} dZ^{(1)}_t &= \nu dt +\frac{m\nabla g(Z^{(1)}_t)}{g(Z^{(1)}_t)}dt +\sqrt m dB^{(1)}_t, \quad Z^{(1)}_0=z_0\\ \text{and }\quad dZ^{(2)}_t &= \nu dt +\frac{m\nabla g(Z^{(2)}_t)}{g(Z^{(2)}_t)}dt +\sqrt m dB^{(2)}_t, \quad Z^{(2)}_0=Z, \end{align*} where $(B^{(1)}_t)_{t\ge 0}$ and $(B^{(2)}_t)_{t\ge 0}$ are independent Brownian motions and $Z$ is an independent random variable with density $\pi$. Let $$ T^Z=\inf\{t\ge 0: Z^{(1)}_t=Z^{(2)}_t\}. $$ Then for $n$ sufficiently large, if $|z_0|\le d_n+1$, \begin{equation} \label{eq:TZbound1} \p{T^Z \ge \tfrac 12 \gamma_n }\le (\log N)^{-12C}. \end{equation} For $A<\infty$, for $t\ge 0$ sufficiently large, if $|z_0|\le A$, \begin{equation} \label{eq:TZbound2} \p{T^Z \ge t }\le 2m^{-1/2} t^{-1/4}. \end{equation} \end{lemma} \begin{remark} The first bound~\eqref{eq:TZbound1} will be used in the proof of Proposition~\ref{prop:eventE2}, and the weaker bound in~\eqref{eq:TZbound2} will be used in Section~\ref{sec:thmstatdist} in the proof of Theorem~\ref{thm:statdist}. \end{remark} \begin{proof} Suppose first that $|z_0|\le d_n+1$. Since $g(x)\le \min(e^{-\kappa x},1)$ $\forall x\in \R$, for $y_0>0$ we have \begin{equation} \label{eq:pitail} \begin{aligned} \int_{y_0}^\infty g(y)^2 e^{\alpha \kappa y}dy &\le (2-\alpha)^{-1} \kappa^{-1} e^{-(2-\alpha)\kappa y_0}\\ \text{ and } \quad \int_{-\infty}^{-y_0}g(y)^2 e^{\alpha \kappa y} dy &\le \alpha^{-1} \kappa^{-1} e^{-\alpha \kappa y_0}. \end{aligned} \end{equation} It follows that \begin{equation} \label{eq:lemTbound1} \p{|Z^{(2)}_0|\ge 13\alpha^{-1} d_n}\le 2\alpha^{-1}\kappa^{-1} \left(\int_{-\infty}^\infty g(y)^2 e^{\alpha \kappa y}dy\right)^{-1} (\log N)^{-13C}. \end{equation} Take $(Z_t)_{t\ge 0}$ as defined in~\eqref{eq:SDE}, and for $a\in \R$, let $$ \tau^a = \inf\{t\ge 0 : Z_t=a\}. 
$$ By~\eqref{eq:smdefn} and the theory of speed and scale (see for example~\cite{karlin/taylor:1981}), and then since $g(y)\in [\frac 12 e^{-\kappa y},e^{-\kappa y}]$ $\forall y\ge 0$, for $x>0$, \begin{align*} \psub{x/2}{\tau^x <\tau^0} =\frac{S(0)-S(x/2)}{S(0)-S(x)} \le \frac{\int_0^{x/2}4e^{-\alpha\kappa y}e^{2\kappa y} dy}{\int_0^{x}e^{-\alpha \kappa y}e^{2\kappa y} dy} &=4\frac{e^{(2-\alpha)\kappa x/2}-1}{e^{(2-\alpha)\kappa x}-1}\\ &\le 8 e^{-(2-\alpha)\kappa x/2} \end{align*} for $x\ge \kappa^{-1} \log 2$. Similarly, since $g(y)\in [1/2,1]$ $\forall y\le 0$, \begin{align*} \psub{-x/2}{\tau^{-x} <\tau^0} =\frac{S(0)-S(-x/2)}{S(0)-S(-x)} \le \frac{\int_{-x/2}^0 4e^{-\alpha \kappa y} dy}{\int_{-x}^0 e^{-\alpha \kappa y} dy} =4\frac{e^{\alpha \kappa x/2}-1}{e^{\alpha \kappa x}-1} \le 8 e^{-\alpha \kappa x/2} \end{align*} for $x\ge \alpha^{-1} \kappa^{-1} \log 2$. Hence for $n$ sufficiently large, \begin{equation} \label{eq:lemTbound2} \max\left( \psub{13\alpha^{-1} d_n}{\tau^{26\alpha^{-1} d_n} <\tau^0},\psub{-13\alpha^{-1} d_n}{\tau^{-26\alpha^{-1} d_n} <\tau^0} \right) \le 8(\log N)^{-13C}. \end{equation} Let $(B_t)_{t\ge 0}$ denote a Brownian motion. Note that $\frac{\nabla g(y)}{g(y)}\in [-\kappa ,0]$ $\forall y\in \R$, and so $|\nu +\frac{m \nabla g(y)}{g(y)}|<\sqrt{2s_0 m}$. Hence for $x\in \R$ with $|x|\ge 13\alpha^{-1} d_n$, \begin{equation} \label{eq:lemTbound3} \psub{x}{\tau^0<1}\le \p{\sup_{t\in [0,1]}\sqrt m B_t \ge 13\alpha^{-1} d_n-\sqrt{2ms_0}} \le 2e^{-\frac 1{2m}(13\alpha^{-1} d_n-\sqrt{2m s_0})^2} \end{equation} by the reflection principle and a Gaussian tail bound. Therefore by a union bound, \begin{align} \label{eq:X12notfar} &\p{\exists j\in \{1,2\},t\in [0,\gamma_n]: |Z^{(j)}_t|\ge 26\alpha^{-1} d_n} \notag \\ &\le \p{|Z_0^{(2)}| \ge 13 \alpha^{-1} d_n} +2\lceil \gamma_n \rceil \max\left( \psub{13 \alpha^{-1}d_n}{\tau^{26 \alpha^{-1}d_n} <\tau^0},\psub{-13\alpha^{-1}d_n}{\tau^{-26\alpha^{-1}d_n} <\tau^0} \right) \notag \\ &\quad +2\lceil \gamma_n \rceil \max\left( \psub{13 \alpha^{-1}d_n }{\tau^0<1},\psub{-13 \alpha^{-1}d_n}{\tau^0<1} \right) \notag \\ &\le \tfrac 12 (\log N)^{-12C} \end{align} for $n$ sufficiently large, by~\eqref{eq:lemTbound1},~\eqref{eq:lemTbound2} and~\eqref{eq:lemTbound3}. For $t\ge 0$, define the sigma-algebra $\mathcal F^Z_t=\sigma((Z^{(1)}_s)_{s\le t},(Z^{(2)}_s)_{s\le t})$. Note that if $Z^{(1)}_t \le Z^{(2)}_t$ then for $s\in [t,T^Z \vee t]$, \begin{align} \label{eq:X2sX1s} &Z^{(2)}_s -Z^{(1)}_s \notag \\ &= (Z^{(2)}_t-Z^{(1)}_t)+m \int_t^s \left( \frac{\nabla g(Z^{(2)}_u)}{g(Z^{(2)}_u)}-\frac{\nabla g(Z^{(1)}_u)}{g(Z^{(1)}_u)}\right) du +\sqrt m ((B^{(2)}_s -B^{(2)}_t)-(B^{(1)}_s-B^{(1)}_t)) \notag \\ &\le (Z^{(2)}_t-Z^{(1)}_t)+\sqrt m ((B^{(2)}_s -B^{(2)}_t)-(B^{(1)}_s-B^{(1)}_t)), \end{align} since $y\mapsto \frac{\nabla g(y)}{g(y)}$ is decreasing. Therefore, for $n$ sufficiently large, for $t\ge 0$, if $|Z^{(1)}_t|\vee |Z^{(2)}_t|\le 26\alpha^{-1} d_n$ then \begin{align} \label{eq:tau21B} \p{T^Z> t + \gamma_n^{1/2} \Big| \mathcal F^Z_t} &\le \psub{52\alpha^{-1} d_n}{\sqrt{2m} B_s \ge 0 \; \forall s\in [0,\gamma_n^{1/2}]} \notag \\ &\le \psub{52\alpha^{-1}\kappa^{-1} C+1}{\sqrt{2m} B_s \ge 0 \; \forall s\in [0,1]}=:p<1 \end{align} by Brownian scaling and since $d_n =\kappa^{-1} C \log \log N$ and $\gamma_n=\lfloor (\log \log N)^4 \rfloor$.
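As an illustrative aside (not used in the argument), the coupling bound~\eqref{eq:tau21B} is easy to explore numerically. The following minimal sketch simulates the two diffusions of Lemma~\ref{lem:Tbound} by an Euler--Maruyama scheme with independent driving noise and records the first time their paths cross; the parameter values are hypothetical, and for concreteness we take the logistic profile $g(x)=(1+e^{\kappa x})^{-1}$, which is consistent with the identity $\nabla g=-\kappa g(1-g)$ used above, with $\kappa$ and $\nu$ recovered from $\tfrac 12 m\kappa^2 =s_0$ and $\kappa \nu =\alpha s_0$. The initial value of $Z^{(2)}$ is drawn approximately from $\pi$ by a burn-in run of the same SDE.
\begin{verbatim}
import numpy as np

# Hypothetical parameter values for illustration only.
m, s0, alpha = 1.0, 1.0, 0.5
kappa = np.sqrt(2.0 * s0 / m)   # from (1/2) m kappa^2 = s0
nu = alpha * s0 / kappa         # from kappa nu = alpha s0
rng = np.random.default_rng(1)
dt = 1e-3
sdt = np.sqrt(m * dt)

def drift(z):
    # nu + m grad g(z)/g(z), with g(x) = 1/(1 + exp(kappa x)),
    # so that grad g / g = -kappa (1 - g).
    g = 1.0 / (1.0 + np.exp(kappa * z))
    return nu - m * kappa * (1.0 - g)

def crossing_time(z0, burn_in=20.0, t_max=50.0):
    # Start Z2 approximately from pi by running the SDE to near-stationarity.
    z2 = 0.0
    for _ in range(int(burn_in / dt)):
        z2 += drift(z2) * dt + sdt * rng.standard_normal()
    z1, t = z0, 0.0
    while t < t_max:
        d = z1 - z2
        z1 += drift(z1) * dt + sdt * rng.standard_normal()
        z2 += drift(z2) * dt + sdt * rng.standard_normal()
        if (z1 - z2) * d <= 0.0:  # sign change: the two paths have met
            return t
        t += dt
    return np.inf

samples = np.array([crossing_time(3.0) for _ in range(100)])
print("empirical P(T^Z >= 10):", np.mean(samples >= 10.0))
\end{verbatim}
Since $Z^{(1)}-Z^{(2)}$ has continuous paths, the recorded sign change approximates $T^Z$ up to the discretisation error of the scheme.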
Therefore by~\eqref{eq:X12notfar} and a union bound, for $n$ sufficiently large, \begin{align*} &\p{T^Z \ge \tfrac 12 \gamma_n }\\ &\le \tfrac 12 (\log N)^{-12C} +\p{T^Z \ge \tfrac 12 \gamma_n , |Z^{(1)}_{k\gamma_n^{1/2}}|\vee |Z^{(2)}_{k\gamma_n^{1/2}}|\le 26\alpha^{-1}d_n \; \forall k\in \mathbb{N}_0 \cap [0,\tfrac 12 \gamma_n^{1/2}]}\\ &\le \tfrac 12 (\log N)^{-12C}+p^{\lfloor \gamma_n^{1/2}/2\rfloor } \end{align*} by~\eqref{eq:tau21B}, which completes the proof of~\eqref{eq:TZbound1}. Now take $A<\infty$ and suppose $|z_0|\le A$. Then for $t\ge A^4$, by a union bound and~\eqref{eq:X2sX1s}, \begin{align*} \p{T^Z \ge t} &\le \p{|Z^{(2)}_0|\ge t^{1/4}}+\psub{2t^{1/4}}{\sqrt{2m} B_s \ge 0 \; \forall s\in [0,t]}\\ &\le 2\alpha^{-1}\kappa^{-1} \left( \int_{-\infty}^\infty g(y)^2 e^{\alpha \kappa y} dy\right)^{-1} e^{-\alpha \kappa t^{1/4}}+\psub{0}{|B_{2mt}|\le 2t^{1/4}} \end{align*} by~\eqref{eq:pitail} and the reflection principle. Since $\psub{0}{|B_{2m t}|\le 2t^{1/4}}\le \frac{4t^{1/4}}{(4\pi m t)^{1/2}}$, the result follows by taking $t$ sufficiently large. \end{proof} Fix $x_0 \in \frac 1n \mathbb{Z}$, and take $(v^n_t)_{t\geq 0}$ as in~\eqref{eq:vndef} with $v_0^n(x)=p^n_0(x_0)\mathds{1}_{x=x_0}$, and where $(u^n_t)_{t\ge 0}$ is defined in~\eqref{eq:undef}. The following result will be combined with a bound on $|q^n_{\gamma_n}-v^n_{\gamma_n }|$ to show that the event $A^{(1)}_t(x_1,x_2)$ occurs with high probability for suitable $t$, $x_1$ and $x_2$. Recall that we fixed $c_2>0$. \begin{lemma} \label{lem:vtstat} Suppose $\sup_{x\in \frac 1n \mathbb{Z}, s\in [0,\gamma_n]}|u^n_s(x)-g(x-\nu s)|\le e^{-(\log N)^{c_2}}$. For $n$ sufficiently large, if $|x_0|\le d_n$ and $|x-\nu \gamma_n|\le d_n+1$, $$ \frac{v^n_{\gamma_n}(x)}{g(x-\nu \gamma_n )}=\frac{\pi(x_0)}{g(x_0)} p^n_0(x_0)n^{-1}(1+\mathcal O((\log N)^{-4C})). $$ \end{lemma} \begin{proof} Let $t_0=(\log N)^{-12C}$. For $x\in \frac 1n \mathbb{Z}$, let $P^n_{t_0,x_0}(x)=\psubb{x}{X^n_{mt_0}=x_0}$, and let $\bar{P}^n_{t_0,x_0}:\R\rightarrow [0,1]$ denote the linear interpolation of $P^n_{t_0,x_0}$. Let $\bar{v}^n_{t_0}$ denote the linear interpolation of $v^n_{t_0}$. For $t\ge t_0$ and $x\in \R$, let \begin{equation} \label{eq:vteq} v_t(x)=g(x-\nu t)\Esub{x-\nu t}{\frac{\bar{v}^n_{t_0}(Z_{t-t_0}+\nu t_0)}{g(Z_{t-t_0})}}, \end{equation} where $(Z_t)_{t\ge 0}$ is defined in~\eqref{eq:SDE}. By~\eqref{eq:veasybound}, for $t\ge 0$ and $y\in \frac 1n \mathbb{Z}$, \begin{equation} \label{eq:lemvtstatvnb} v^n_t(y)\le e^{(1+\alpha)s_0 t} p^n_0(x_0) \psubb{y}{X^n_{mt}=x_0}, \end{equation} and so for $t\ge t_0$ and $x\in \R$, \begin{align} v_t(x) &\le g(x-\nu t)p^n_0(x_0)e^{(1+\alpha)s_0 t_0} \Big(\Esub{x-\nu t}{g(Z_{t-t_0})^{-1}\bar{P}^n_{t_0,x_0}(Z_{t-t_0}+\nu t_0)\mathds{1}_{|Z_{t-t_0}+\nu t_0-x_0|<n^{1/4}}} \notag \\ &\qquad \qquad \qquad \qquad +\Esub{x-\nu t}{g(Z_{t-t_0})^{-1}\bar{P}^n_{t_0,x_0}(Z_{t-t_0}+\nu t_0)\mathds{1}_{|Z_{t-t_0}+\nu t_0-x_0|\ge n^{1/4}}} \Big). \label{eq:vtupper} \end{align} For the first term on the right hand side, we have that if $n$ is sufficiently large that $n^{1/4}\le \frac 12 m nt_0$, then by Lemma~\ref{lem:lclt}, \begin{align*} &\Esub{x-\nu t}{g(Z_{t-t_0})^{-1}\bar{P}^n_{t_0,x_0}(Z_{t-t_0}+\nu t_0)\mathds{1}_{|Z_{t-t_0}+\nu t_0-x_0|< n^{1/4}}}\\ &\le n^{-1}(2\pi m t_0)^{-1/2} e^{\mathcal O(n^{-1/5})} \Esub{x-\nu t}{g(Z_{t-t_0})^{-1} e^{-(Z_{t-t_0}+\nu t_0-x_0)^2/(2m t_0)}}. 
\end{align*} For the second term on the right hand side of~\eqref{eq:vtupper}, by the definition of $\bar{P}^n_{t_0,x_0}$ and then by Markov's inequality, for $n$ sufficiently large, \begin{align*} &\Esub{x-\nu t}{g(Z_{t-t_0})^{-1}\bar{P}^n_{t_0,x_0}(Z_{t-t_0}+\nu t_0)\mathds{1}_{|Z_{t-t_0}+\nu t_0-x_0|\ge n^{1/4}}}\\ &\le \Esub{x-\nu t}{(1+e^{\kappa Z_{t-t_0}}) \psubb{0}{X^n_{m t_0}\ge |Z_{t-t_0}+\nu t_0-x_0|-n^{-1}} \mathds{1}_{|Z_{t-t_0}+\nu t_0-x_0|\ge n^{1/4}}}\\ &\le \Esub{x-\nu t}{(1+e^{\kappa Z_{t-t_0}}) e^{-3\kappa |Z_{t-t_0}+\nu t_0-x_0|}e^{3\kappa n^{-1}}\Esubb{0}{e^{3\kappa X^n_{mt_0}}} \mathds{1}_{|Z_{t-t_0}+\nu t_0-x_0|\ge n^{1/4}}}\\ &\le e^{10s_0 t_0}(e^{-3\kappa n^{1/4}}+e^{\kappa |x_0|}e^{-2\kappa n^{1/4}}) \end{align*} by Lemma~\ref{lem:Xnmgf} and since $\frac 12 m \kappa^2 =s_0$ and $e^{\kappa Z_{t-t_0}} e^{-3 \kappa |Z_{t-t_0}+\nu t_0-x_0|}\le e^{(-\nu t_0 +x_0)\kappa} e^{-2\kappa |Z_{t-t_0}+\nu t_0-x_0|}$. Substituting into~\eqref{eq:vtupper}, it follows that \begin{align} \label{eq:vtupper2} v_t(x) &\le g(x-\nu t)p^n_0(x_0)e^{(1+\alpha)s_0 t_0} n^{-1}(2\pi m t_0)^{-1/2} \notag \\ &\qquad \Big(\mathcal O(nt_0^{1/2}e^{\kappa |x_0|} e^{-2\kappa n^{1/4}}) +e^{\mathcal O(n^{-1/5})}\Esub{x-\nu t}{g(Z_{t-t_0})^{-1}e^{-(Z_{t-t_0}+\nu t_0-x_0)^2/(2mt_0)} }\Big). \end{align} Note that for $y\in \R$, \begin{align*} g(y)^{-1} e^{-(y+\nu t_0-x_0)^2/(2m t_0)} &\le 1 + e^{\kappa(x_0 -\nu t_0)} e^{(\kappa-(2mt_0)^{-1}(y+\nu t_0-x_0))(y+\nu t_0-x_0)}\\ &\le 1+e^{\kappa|x_0|+s_0 t_0} \end{align*} since $\frac 12 m \kappa^2 =s_0$. Hence by Lemma~\ref{lem:Tbound}, for $n$ sufficiently large, if $t-t_0\ge \gamma_n/2$ and $|x-\nu t|\le d_n+1$, then \begin{align} \label{eq:fromxstat*} &\Esub{x-\nu t}{g(Z_{t-t_0})^{-1} e^{-(Z_{t-t_0}+\nu t_0-x_0)^2/(2m t_0)}} \notag \\ &\quad \le \int_{-\infty}^\infty \pi(y) g(y)^{-1} e^{-(y+\nu t_0-x_0)^2/(2m t_0)} dy +3e^{\kappa|x_0|}(\log N)^{-12C}. \end{align} Note that $g(y) e^{\alpha \kappa y}\le \min(e^{\alpha \kappa y},e^{-(1-\alpha)\kappa y})\le 1$ $\forall y\in \R$. Therefore, since $y\mapsto g(y)$ is decreasing, and letting $(B_s)_{s\ge 0}$ denote a Brownian motion, \begin{align*} & \int_{-\infty}^\infty g(y) e^{\alpha \kappa y} e^{-(y+\nu t_0-x_0)^2/(2mt_0)} dy\\ &\le g(x_0-\nu t_0-t_0^{1/3}) \int_{-\infty}^\infty e^{\alpha \kappa y} e^{-(y+\nu t_0-x_0)^2/(2mt_0)} dy +\int_{-\infty}^\infty e^{-(y+\nu t_0-x_0)^2/(2mt_0)} \mathds{1}_{|y+\nu t_0-x_0|>t_0^{1/3}} dy \\ &\le (2\pi m t_0)^{1/2} \left( g(x_0-\nu t_0-t_0^{1/3}) \Esub{x_0-\nu t_0}{e^{\alpha \kappa B_{mt_0}}} +\psub{0}{|B_{m t_0}|>t_0^{1/3}}\right)\\ &\le (2\pi m t_0)^{1/2} \left( g(x_0-\nu t_0-t_0^{1/3}) e^{\alpha \kappa (x_0-\nu t_0)}e^{\frac 12 m\alpha ^2\kappa^2 t_0} +2e^{-t_0^{-1/3}/(2m)}\right) \end{align*} by a Gaussian tail bound. Therefore if $|x_0|\le d_n$, by~\eqref{eq:fromxstat*} and since $|\frac{\nabla g(y)}{g(y)}|\le \kappa $ $\forall y\in \R$ and $g(y)^{-1} e^{-\alpha \kappa y}\le 2e^{\kappa |y|}$ $\forall y\in \R$, \begin{align*} & \Esub{x-\nu t}{g(Z_{t-t_0})^{-1} e^{-(Z_{t-t_0}+\nu t_0-x_0)^2/(2m t_0)}}\\ &\le (2\pi m t_0)^{1/2} \pi(x_0) g(x_0)^{-1} (1+\mathcal O(t_0^{1/3}) +\mathcal O(t_0^{-1/2}e^{2\kappa d_n}(\log N)^{-12C})). \end{align*} Substituting into~\eqref{eq:vtupper2}, we have that if $t-t_0 \ge \gamma_n /2$, $|x-\nu t|\le d_n+1$ and $|x_0|\le d_n$, \begin{align} \label{eq:lemvstatA} \frac{v_t(x)}{g(x-\nu t)} &\le n^{-1} p^n_0(x_0)\pi(x_0)g(x_0)^{-1}(1+\mathcal O((\log N)^{-4C})). 
\end{align} For a lower bound, note that by~\eqref{eq:vngreena} with $a=(1-\alpha)s_0 $ and since $(1-u)(2u-1+\alpha )\ge \alpha -1$ $\forall u\in [0,1]$, for $y\in \frac 1n \mathbb{Z}$, $$ v^n_{t_0}(y)\ge e^{-(1-\alpha)s_0 t_0} p^n_0(x_0) P^n_{t_0,x_0}(y). $$ Suppose $n$ is sufficiently large that $t_0^{1/3} \le \frac 12 m nt_0$, and then by~\eqref{eq:vteq}, \begin{align} \label{eq:lemvstatB} v_t(x) &\ge g(x-\nu t)\Esub{x-\nu t}{g(Z_{t-t_0})^{-1} e^{-(1-\alpha)s_0 t_0} p^n_0(x_0)\bar{P}^n_{t_0,x_0}(Z_{t-t_0}+\nu t_0)\mathds{1}_{|Z_{t-t_0}+\nu t_0-x_0|<t_0^{1/3}}} \notag \\ &\ge g(x-\nu t)p^n_0(x_0)e^{-(1-\alpha)s_0 t_0} g(x_0-\nu t_0 -t_0^{1/3})^{-1} \notag \\ &\qquad \Esub{x-\nu t}{n^{-1}(2\pi m t_0)^{-1/2}e^{-(Z_{t-t_0}+\nu t_0-x_0)^2/(2m t_0)}e^{\mathcal O(n^{-1} t_0^{-2})}\mathds{1}_{|Z_{t-t_0}+\nu t_0-x_0|<t_0^{1/3}}} \end{align} by Lemma~\ref{lem:lclt}. By Lemma~\ref{lem:Tbound}, for $n$ sufficiently large, if $t-t_0 \ge\gamma_n /2$ and $|x-\nu t|\le d_n+1$, \begin{align} \label{eq:lemvstatC} &\Esub{x-\nu t}{e^{-(Z_{t-t_0}+\nu t_0-x_0)^2/(2m t_0)}\mathds{1}_{|Z_{t-t_0}+\nu t_0-x_0|<t_0^{1/3}}} \notag \\ &\ge \int_{-\infty}^\infty \pi(y) e^{-(y+\nu t_0-x_0)^2/(2m t_0)}\mathds{1}_{|y+\nu t_0-x_0|<t_0^{1/3}} dy -(\log N)^{-12C}. \end{align} Since $y\mapsto g(y)$ is decreasing, \begin{align*} &\int_{-\infty}^\infty g(y)^2 e^{\alpha \kappa y} e^{-(y+\nu t_0-x_0)^2/(2m t_0)}\mathds{1}_{|y+\nu t_0-x_0|<t_0^{1/3}} dy \\ &\ge g(x_0-\nu t_0+t_0^{1/3})^2 e^{\alpha \kappa (x_0-\nu t_0-t_0^{1/3})}(2\pi m t_0)^{1/2} \left(1-\psub{0}{|B_{m t_0}|>t_0^{1/3}}\right) \\ &\ge g(x_0)^2 e^{\alpha \kappa x_0} (2\pi m t_0)^{1/2}(1+\mathcal O(e^{-t_0^{-1/3}/(2m)})+\mathcal O(t_0^{1/3})) \end{align*} by a Gaussian tail bound and since $|\frac{\nabla g(y)}{g(y)}|\le \kappa $ $\forall y\in \R$. Therefore if $t-t_0\ge \gamma_n/2$, $|x-\nu t|\le d_n+1$ and $|x_0|\le d_n$, by~\eqref{eq:lemvstatC} and~\eqref{eq:lemvstatB}, and since $(\log N)^{-12C} t_0^{-1/2}\pi(x_0)^{-1}=\mathcal O((\log N)^{-4C})$, \begin{align} \label{eq:lemvstatD} \frac{v_t(x)}{g(x-\nu t)} &\ge p^n_0(x_0)n^{-1}\pi(x_0)g(x_0)^{-1}(1-\mathcal O((\log N)^{-4C})). \end{align} It remains to bound $|v^n_{\gamma_n}(x)-v_{\gamma_n}(x)|$. By~\eqref{eq:lemvtstatvnb} and Lemma~\ref{lem:lclt}, for $z\in \frac 1n \mathbb{Z}$ and $t>0$, \begin{equation} \label{eq:vnvcor*} v^n_{t}(z) \le e^{2s_0 t}p^n_0(x_0) n^{-1} (2\pi m t)^{-1/2} e^{\mathcal O(n^{-1} t^{-1/2})}. \end{equation} Therefore, by Lemma~\ref{lem:vnvbound}, for $n$ sufficiently large, \begin{align} \label{eq:lemvnvcor1} &\sup_{x\in \frac 1n \mathbb{Z}}|v^n_{\gamma_n}(x)-v_{\gamma_n}(x)| \notag \\ &\le \Big(C_7 (n^{-1/3}+e^{-(\log N)^{c_2}}) e^{2s_0 t_0}p^n_0(x_0) (m t_0)^{-1/2} n^{-1} +2n^{-1/3}\sup_{z\in \frac 1n \mathbb{Z}}|\nabla_n v^n_{t_0}(z)|\Big) e^{5s_0{\gamma_n}}\gamma_n^2. 
\end{align} Let $t_1=t_0/2$; then for $z\in \frac 1n \mathbb{Z}$, by~\eqref{eq:vngreena}, \begin{align*} &|\nabla_n v^n_{t_0}(z)|\\ &=\Big| n \langle v^n_{t_1}, \phi^{t_1 ,z+n^{-1}}_0 -\phi^{t_1,z}_0 \rangle_n +ns_0 \int_0^{t_1} \langle v^n_{t_1+s}(1-u^n_{t_1+s})(2u^n_{t_1+s}-1+\alpha ), \phi^{t_1,z+n^{-1}}_s-\phi^{t_1,z}_s \rangle_n ds \Big|\\ &\le \sup_{x\in \frac 1n \mathbb{Z}, s\in [0,t_1]} v^n_{t_1+s}(x)\left(n\langle 1, |\phi^{t_1,z+n^{-1}}_0 -\phi^{t_1,z}_0 | \rangle_n +ns_0 \int_0^{t_1} \langle 1+\alpha , |\phi^{t_1,z+n^{-1}}_s-\phi^{t_1,z}_s | \rangle_n ds \right)\\ &\le e^{2s_0 t_0}p^n_0(x_0)n^{-1}(m t_1)^{-1/2} \left(C_5 t_1^{-1/2}+\int_0^{t_1} 2s_0 C_5 (t_1-s)^{-1/2}ds \right) \end{align*} for $n$ sufficiently large, by~\eqref{eq:vnvcor*} and Lemma~\ref{lem:nablaunbound}. Hence \begin{align*} \sup_{z\in \frac 1n \mathbb{Z}} |\nabla_n v^n_{t_0}(z)| &\le e^{2s_0 t_0}p^n_0(x_0) n^{-1}m^{-1/2} C_5 (2t_0^{-1}+4s_0). \end{align*} By~\eqref{eq:lemvnvcor1} it follows that for $n$ sufficiently large, $\sup_{x\in \frac 1n \mathbb{Z}}|v^n_{\gamma_n}(x)-v_{\gamma_n}(x)|\le p^n_0(x_0) n^{-1} (e^{-\frac 12 (\log N)^{c_2}} \vee n^{-1/6})$. By~\eqref{eq:lemvstatA} and~\eqref{eq:lemvstatD}, this completes the proof. \end{proof} We now show that $|q^n_{\gamma_n}-v^n_{\gamma_n}|$ is small with high probability, which, combined with the previous lemma, will imply that $A^{(1)}_t(x_1,x_2)$ occurs with high probability for suitable $x_1$, $x_2$ and $t$. This result is stronger than Proposition~\ref{prop:pnun} (but only applies when $q^n_0(x)=p^n_0(x_0)\mathds{1}_{x=x_0}$ for some $x_0$), and will also be used to show that $A^{(4)}_t(x)$ occurs with high probability for suitable $x$ and $t$. \begin{lemma} \label{lem:qnvnonepoint} For $c, c'\in (0,1/2)$ and $\ell \in \mathbb{N}$, the following holds for $n$ sufficiently large. Suppose $N\ge n^{3}$, and for some $x_0\in \frac 1n \mathbb{Z}$, $q^n_0(x)=p^n_0(x_0)\mathds{1}_{x=x_0}$ and $p^n_0(x_0)\ge \big( \frac{n^2}N \big)^{1-c}$. For $t\le \gamma_n $ and $z\in \frac 1n \mathbb{Z}$, $$ \p{ |q^n_t(z)-v^n_t(z)|\ge \left(\frac n N \right)^{1/2-c'}p^n_0(x_0)^{1/2}n^{-1/2}}\le \left(\frac n N \right)^{\ell}, $$ where $(q^n_t)_{t\ge 0}$ and $(v^n_t)_{t\ge 0}$ are defined in~\eqref{eq:qndef} and~\eqref{eq:vndef} respectively. \end{lemma} \begin{proof} By Lemma~\ref{lem:lclt}, there exists a constant $K_5>1$ such that \begin{equation} \label{eq:qnvnonepoint*} \psubb{0}{X^n_{mt}=0}\le K_5 n^{-1}t^{-1/2} \quad \forall \, n\in \mathbb{N},\, t>0. \end{equation} By Corollary~\ref{cor:qnMa} with $a=-(1+\alpha)s_0$, for $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, \begin{align} \label{eq:qntbound2} q^n_t(z) &\le e^{(1+\alpha )s_0 t } \langle q^n_0, \phi^{t,z}_0\rangle_n +M^n_t(\phi^{t,z,-(1+\alpha)s_0}) \notag \\ &\le e^{(1+\alpha )s_0 t } p^n_0(x_0)\min (K_5 n^{-1}t^{-1/2}, 1) +M^n_t(\phi^{t,z,-(1+\alpha)s_0}) \end{align} by~\eqref{eq:qnvnonepoint*}. Let $$ \tau=\inf\left\{t> 0 : \sup_{x\in \frac 1n \mathbb{Z}}q^n_t(x)\ge K_5 e^{2s_0 \gamma_n} p^n_0(x_0) n^{-1}t^{-1/2} \right\}. $$ We will show that $\tau>\gamma_n$ with high probability. By Lemma~\ref{lem:qnphi}, for $t>0$, \begin{align*} \sup_{s\in [0,t]}|M^n_s(\phi^{t,z,-(1+\alpha)s_0})-M^n_{s-}(\phi^{t,z,-(1+\alpha)s_0})| =\sup_{s\in [0,t]}|\langle q^n_s-q^n_{s-},\phi_s^{t,z,-(1+\alpha)s_0}\rangle _n|\le e^{(1+\alpha)s_0t}N^{-1}. 
\end{align*} Therefore, by the Burkholder-Davis-Gundy inequality as stated in Lemma~\ref{lem:BDG}, for $t\ge 0$, $z\in \frac 1n \mathbb{Z}$ and $k\in \mathbb{N}$ with $k\ge 2$, \begin{align} \label{eq:BDGqnvnonept} \E {\sup_{s\in [0,t]}|M^n_{s\wedge \tau}(\phi^{t,z,-(1+\alpha)s_0})|^k } \leq C(k) \E {\langle M^n(\phi^{t,z,-(1+\alpha)s_0}) \rangle_{t\wedge \tau}^{k/2} +e^{(1+\alpha)s_0 tk}N^{-k} }. \end{align} Then for $t\le \gamma_n$, by the definition of $\tau$ and by Lemma~\ref{lem:qnphi}, \begin{align} \label{eq:qnvnonepointd} \langle M^{n}(\phi^{t,z,-(1+\alpha)s_0}) \rangle_{t\wedge \tau} &\le \frac n N \int_0^t \langle (1+2m) K_5 e^{2s_0 \gamma_n} p^n_0(x_0) n^{-1}s^{-1/2} , (\phi_s^{t,z})^2 e^{2(1+\alpha)s_0 (t-s)}\rangle_n ds \notag\\ &\le \frac n N (1+2m) K_5 e^{6s_0 \gamma_n} p^n_0(x_0) \int_0^t s^{-1/2} \psubb{0}{X^n_{2m(t-s)}=0}ds, \end{align} by Lemma~\ref{lem:Mqvarbound}. Then by~\eqref{eq:qnvnonepoint*}, \begin{align*} \int_0^t s^{-1/2} \psubb{0}{X^n_{2m(t-s)}=0}ds &\le \int_{0}^{t} s^{-1/2}K_5 n^{-1}(2(t-s))^{-1/2}ds\\ &= K_5 n^{-1}2^{-1/2}\cdot 2 \int_0^{t/2} s^{-1/2}(t-s)^{-1/2} ds\\ &\le 2^{3/2}K_5 n^{-1}. \end{align*} Hence, by~\eqref{eq:qnvnonepointd}, for $t\le \gamma_n$, \begin{align} \label{eq:qnvnonepointA} \langle M^{n}(\phi^{t,z,-(1+\alpha)s_0 }) \rangle_{t\wedge \tau} &\leq \frac{1}{N}(1+2m) 2^{3/2}K_5^2 e^{6s_0 \gamma_n} p^n_0(x_0). \end{align} For $b \in (0,1/2)$ and $\ell_1 \in \mathbb{N}$, take $k\in \mathbb{N}$ with $k>\ell_1 /b$. Then for $n$ sufficiently large, for $t\le \gamma_n$ and $z\in \frac 1n \mathbb{Z}$, by Markov's inequality and~\eqref{eq:BDGqnvnonept}, and since $p^n_0(x_0)^{1/2}N^{-1/2}\ge (\frac{n^2}N)^{1/2}N^{-1/2}=n N^{-1}$, \begin{align} \label{eq:Mnttaubound} &\p{|M^n_{t\wedge \tau}(\phi^{t,z,-(1+\alpha)s_0})|\ge \left( \frac n N \right)^{1/2-b} p^n_0(x_0)^{1/2} n^{-1/2}} \notag\\ &\le \left( \frac n N \right)^{-k(1/2-b)} p^n_0(x_0)^{-k/2}n^{k/2}C(k)\cdot 2 \left( \frac{1}{N}(1+2m) 2^{3/2} K_5^2 e^{6s_0 \gamma_n} p^n_0(x_0) \right)^{k/2} \notag \\ &\le \left( \frac n N \right)^{\ell_1} \end{align} for $n$ sufficiently large, since $bk>\ell_1$ and $\gamma_n =\lfloor (\log \log N)^4 \rfloor$. Now let $b=c/4$. Then for $n$ sufficiently large, since $N\ge n^3$ and then since $p^n_0(x_0)\ge (\frac{n^2}N)^{1-c}$, \begin{equation} \label{eq:qnvnonea} \left( \frac n N \right)^{1/2-b}n^{-1/2}\le \left( \frac{n^2}N \right)^{(1-c)/2}n^{-1} \le \tfrac 13 K_5 e^{2s_0 \gamma_n} (\gamma_n+N^{-1})^{-1/2}p^n_0(x_0)^{1/2}n^{-1}. \end{equation} Since $p^n_0(x_0)\ge n^2 N^{-1}$, we can take $n$ sufficiently large that \begin{equation} \label{eq:qnvnoneb} N^{-1}\le \tfrac 13 K_5 e^{2s_0 \gamma_n} (\gamma_n+N^{-1})^{-1/2}p^n_0(x_0)n^{-1} \end{equation} and also, since $\alpha<1$ and $N\ge n^3$, \begin{equation} \label{eq:qnvnonec} e^{(1+\alpha)s_0 t}t^{-1/2}\le \tfrac 13 e^{2s_0 \gamma_n} (t+N^{-1})^{-1/2} \; \forall t\in [N^{-1},\gamma_n] \quad \text{ and }\quad \tfrac 13 n^{-1} (2N^{-1})^{-1/2}\ge 1. 
\end{equation} If $|M^n_{t\wedge \tau}(\phi^{t,z,-(1+\alpha)s_0})|\le \left( \frac n N \right)^{1/2-b} p^n_0(x_0)^{1/2} n^{-1/2}$ and $t\in [0, \tau \wedge \gamma_n]$ then by~\eqref{eq:qntbound2}, and since $K_5>1$, \begin{align} \label{eq:lemqnvnonequ} q^n_t(z) &\le K_5 e^{(1+\alpha)s_0 t} p^n_0(x_0) \min(n^{-1}t^{-1/2},1)+ \left( \frac n N \right)^{1/2-b} p^n_0(x_0)^{1/2}n^{-1/2} \notag \\ &\le K_5 e^{2s_0 \gamma_n} (t+N^{-1})^{-1/2}p^n_0(x_0) n^{-1}-N^{-1}, \end{align} by~\eqref{eq:qnvnonea},~\eqref{eq:qnvnoneb} and~\eqref{eq:qnvnonec} (using the second equation in~\eqref{eq:qnvnonec} for the case $t\le N^{-1}$). Take $\ell_2 \in \mathbb{N}$ and let $Y_n \sim \text{Poisson}((2m+1)N^{2-\ell_2}r_n)$. Then for $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, since $(q^n_s(z))_{s\ge 0}$ jumps at rate at most $(2m+1)r_n N^2$, \begin{equation} \label{eq:qndoublejump} \p{\sup_{s\in [0,N^{-\ell_2}]}|q^n_{t+s}(z)-q^n_t(z)|>N^{-1}} \le \p{Y_n \ge 2}\le (\tfrac 12 (2m+1)N^{1-\ell_2}n^2 )^2 \end{equation} since $r_n = \frac 12 n^2 N^{-1}$. Therefore, for $\ell_1,\ell_2 \in \mathbb{N}$, letting $\mathcal A=N^{-\ell_2}\mathbb{N}_0 \cap [0,\gamma_n ]$, by a union bound and~\eqref{eq:lemqnvnonequ}, \begin{align*} &\p{\tau \le \gamma_n } \notag \\ &\le \p{\exists t \in \mathcal A, z\in \tfrac 1n \mathbb{Z}: |z-x_0|\le N^5, |M^n_{t\wedge \tau}(\phi^{t,z,-(1+\alpha)s_0})|\ge \left(\frac n N \right)^{1/2-b}p^n_0(x_0)^{1/2}n^{-1/2}} \notag \\ &\quad + \p{\exists t \in \mathcal A, z\in \tfrac 1n \mathbb{Z}: |z-x_0|\le N^5, \sup_{s\in [0,N^{-\ell_2}]} |q^n_{t+s}(z)-q^n_t(z)|>N^{-1}} \notag \\ &\quad + \p{\exists z\in \tfrac 1n \mathbb{Z}, t\in [0,\gamma_n]: |z-x_0|> N^5, q^n_t(z)>0} \notag \\ &\le \sum_{t \in \mathcal A}(2nN^5+1) \left( \frac n N \right)^{\ell_1} +\sum_{t \in \mathcal A}(2nN^5+1) (\tfrac 12 (2m+1)N^{1-\ell_2}n^2)^2 +2e^{-N^5} , \end{align*} for $n$ sufficiently large, by~\eqref{eq:Mnttaubound} and~\eqref{eq:qndoublejump}, and by the same argument as Lemma~\ref{lem:p01} for the last term. For $\ell '\in \mathbb{N}$, take $\ell_2$ sufficiently large that $\gamma_n N^{\ell_2+5}n(N^{1-\ell_2}n^2)^2 =\gamma_n N^{7-\ell_2}n^5 \le \left( \frac n N \right)^{\ell '+1}$ for $n$ sufficiently large, and then take $\ell_1$ sufficiently large that $\gamma_n N^{\ell_2+5}n \left( \frac n N \right)^{\ell_1} \le \left( \frac n N \right)^{\ell '+1}$ for $n$ sufficiently large. It follows that for $n$ sufficiently large, \begin{equation} \label{eq:Ataut} \p{\tau \le \gamma_n } \le \left( \frac n N \right)^{\ell '}. \end{equation} Note that by~\eqref{eq:veasybound} and~\eqref{eq:qnvnonepoint*}, for $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, \begin{equation} \label{eq:qnvnonepointB} v^n_t(z)\le e^{(1+\alpha)s_0 t} \langle q^n_0, \phi^{t,z}_0\rangle_n \le e^{(1+\alpha)s_0 t} p^n_0(x_0)\min(K_5 n^{-1}t^{-1/2}, 1). \end{equation} Take $k\in \mathbb{N}$ with $k\ge 2$. By Lemma~\ref{lem:qnvndet} and since $q^n_t,v^n_t\in [0,1]$, we have that for $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, \begin{align*} |q^n_t(z)-v^n_t(z)|^k &\leq 3^{2k-1}s_0^k t^{k-1} \left(\int_0^t \langle |q^n_s - v^n_s|^k , \phi^{t,z}_s \rangle_n ds + \int_0^t \sup_{x\in \frac 1n \mathbb{Z}} v^n_s(x)^k \langle |p^n_s- u^n_s|^k , \phi^{t,z}_s \rangle_n ds \right)\\ &\qquad +\mathds{1}_{\tau<t}+3^{k-1}|M^n_{t\wedge \tau}(\phi^{t,z})|^k. 
\end{align*} Therefore, by~\eqref{eq:gronwall1stat} in Proposition~\ref{prop:pnun} and by~\eqref{eq:qnvnonepointB} and~\eqref{eq:Ataut}, for $\ell ' \in \mathbb{N}$, for $n$ sufficiently large, for $t\le \gamma_n$ and $z\in \frac 1n \mathbb{Z}$, \begin{align} \label{eq:qnvnkexp} &\E{|q^n_t(z)-v^n_t(z)|^k} \notag \\ &\leq 3^{2k-1}s_0^k t^{k-1} \int_0^t \sup_{x\in \frac 1n \mathbb{Z}}\E{ |q^n_s(x)-v^n_s(x)|^k } ds \notag \\ &\quad + 3^{2k-1}s_0^k t^{k-1}e^{(1+\alpha)s_0 tk}p_0^n(x_0)^k \int_0^t (K_5 n^{-1}s^{-1/2}\wedge 1)^k C_1\left( \frac{n^{k/2}s^{k/4}}{N^{k/2}}+N^{-k}\right) e^{C_1 s^k} ds \notag \\ &\qquad +\left(\frac n N \right)^{\ell '} +3^{k-1} \E{|M^n_{t\wedge \tau}(\phi^{t,z})|^k}. \end{align} Take $\ell '$ sufficiently large that for $n$ sufficiently large, $\left( \frac n N \right)^{\ell '}\le N^{-k/2} \left(\frac{n^2}N \right)^{k/2}\le N^{-k/2} p^n_0(x_0)^{k/2}$. Note that for the second term on the right hand side of~\eqref{eq:qnvnkexp}, \begin{align*} & \int_0^t (K_5 n^{-1}s^{-1/2}\wedge 1)^k C_1 \left( \frac{n^{k/2}s^{k/4}}{N^{k/2}}+N^{-k}\right) e^{C_1 s^k}ds\\ &\le C_1 \int_0^t (K_5^{k/2}N^{-k/2}+N^{-k}) e^{C_1 s^k}ds\\ &\le C_1 (K_5^{k/2}N^{-k/2}+N^{-k})te^{C_1 t^k}. \end{align*} By the same argument as in~\eqref{eq:BDGqnvnonept} and~\eqref{eq:qnvnonepointA}, since $t\le \gamma_n$, $$ \E{|M^n_{t\wedge \tau}(\phi^{t,z})|^k} \le C(k) \left(\left( \frac{1}{N}(1+2m) 2^{3/2}K_5^2 e^{2s_0 \gamma_n} p^n_0(x_0) \right)^{k/2} +N^{-k} \right). $$ Note that $N^{-1/2}p^n_0(x_0)^{1/2}\ge n N^{-1}$. Hence substituting into~\eqref{eq:qnvnkexp} and then by Gronwall's inequality, there exists a constant $K_6=K_6(k)$ such that for $n$ sufficiently large, for $t\in [0,\gamma_n]$, \begin{equation} \label{eq:vnqnonedagger} \sup_{x\in \frac 1n \mathbb{Z}}\E{|q^n_t(x)-v^n_t(x)|^k} \le K_6(\gamma_n^ke^{(1+\alpha)s_0 \gamma_n k} e^{C_1 \gamma_n^k}+1+e^{s_0\gamma_n k})N^{-k/2}p^n_0 (x_0)^{k/2}e^{3^{2k-1}s_0^k \gamma_n^{k-1}t}. \end{equation} The result now follows by Markov's inequality, taking $k\in \mathbb{N}$ sufficiently large that $kc'>\ell$, and then taking $n$ sufficiently large that~\eqref{eq:vnqnonedagger} holds with this choice of $k$. \end{proof} We are now ready to prove that $A^{(1)}_t(x_1,x_2)$ occurs with high probability for suitable $t$, $x_1$ and $x_2$. For $t\ge 0$ and $x_1 \in \frac 1n \mathbb{Z}$, let $(v^n_{t,t+s}(x_1,\cdot))_{s\ge 0}$ denote the solution of \begin{equation} \label{eq:vnx1defn} \begin{cases} \partial_s v^n_{t,t+s}(x_1,\cdot)&=\tfrac 12 m\Delta_n v^n_{t,t+s}(x_1,\cdot)+s_0 v^n_{t,t+s}(x_1,\cdot)(1-u^n_{t,t+s})(2u^n_{t,t+s}-1+\alpha ) \quad \text{for }s>0,\\ v^n_{t,t}(x_1,x)&= p^n_t(x_1) \mathds{1}_{x=x_1}, \end{cases} \end{equation} where $(u^n_{t,t+s})_{s\ge 0}$ is defined in~\eqref{eq:unttsdef}. Recall the definition of $q^n_{t_1,t_2}(x_1,x_2)$ in~\eqref{eq:qt1t2defn}. \begin{prop} \label{prop:eventA1} Suppose $N\ge n^{3}$ for $n$ sufficiently large. For $\ell \in \mathbb{N}$, the following holds for $n$ sufficiently large. For $t\in [(\log N)^2-\gamma_n,N^2]$ and $x_1,x_2 \in \frac 1n \mathbb{Z}$, $$ \p{A^{(1)}_{t}(x_1,x_2)^c \cap \{|x_1-\mu^n_t|\vee |x_2 -\mu^n_{t+\gamma_n}|\le d_n \} \cap E'_1} \le \left( \frac n N \right)^\ell. $$ \end{prop} \begin{proof} Fix $c'\in (0,1/4)$. 
By Lemma~\ref{lem:qnvnonepoint}, for $n$ sufficiently large, \begin{equation} \label{eq:propA11} \p{\left\{ |q^n_{t,t+\gamma_n}(x_1,x_2)-v^n_{t,t+\gamma_n}(x_1,x_2)|\ge \left( \frac n N \right)^{1/2-c'}n^{-1/2}\right\} \cap \left\{ p^n_{t}(x_1)\ge \left( \frac{n^2}N \right)^{3/4} \right\} } \le \left( \frac n N \right)^\ell. \end{equation} Suppose $n$ is sufficiently large that $(\log N)^2-\gamma_n \ge \frac 12 (\log N)^2 \vee \log N$. Recall the definition of $E'_1$ in~\eqref{eq:E1'defn}. By Lemma~\ref{lem:vtstat}, if $E_1'$ occurs and $|x_1-\mu^n_t|\le d_n$, $|x_2 -\nu \gamma_n -\mu^n_t|\le d_n+1$ then \begin{align*} \frac{v^n_{t,t+\gamma_n}(x_1,x_2)}{g(x_2-\nu \gamma_n-\mu^n_t)}&= \frac{\pi(x_1-\mu^n_t)}{g(x_1-\mu^n_t)} p^n_{t}(x_1)n^{-1} (1+\mathcal O((\log N)^{-4C})). \end{align*} Suppose $|x_1-\mu^n_t|\vee |x_2-\mu^n_{t+\gamma_n}|\le d_n$ and $E'_1$ occurs. Then if $n$ is sufficiently large, by the definition of $E_1$ in~\eqref{eq:eventE1} we have $p^n_t(x_1)\ge \frac 1 {10} (\log N)^{-C}$, $|x_2-\nu \gamma_n -\mu^n_t|\le d_n+1$, $|p^n_t(x_1)-g(x_1-\mu^n_t)|\le e^{-(\log N)^{c_2}}$, $|p^n_{t+\gamma_n}(x_2)-g(x_2-\mu^n_{t+\gamma_n})|\le e^{-(\log N)^{c_2}}$ and $|\mu^n_{t+\gamma_n}-(\mu^n_t +\nu \gamma_n)|\le \gamma_n e^{-(\log N)^{c_2}}$. Hence for $n$ sufficiently large, if $|q^n_{t,t+\gamma_n}(x_1,x_2)-v^n_{t,t+\gamma_n}(x_1,x_2)|\le \left( \frac n N \right)^{1/2-c'}n^{-1/2}\le n^{-3/2+2c'}$, then $A^{(1)}_{t}(x_1,x_2)$ occurs. By~\eqref{eq:propA11}, this completes the proof. \end{proof} The next two lemmas will be used to show $A^{(2)}_t(x_1,x_2)$ and $A^{(3)}_t(x_1,x_2)$ occur with high probability for suitable $t$, $x_1$ and $x_2$. Recall that we fixed $c_1>0$, and recall the definition of $D_n^+$ in~\eqref{eq:Dn+-defn}. \begin{lemma} \label{lem:vfromtipbound} For $\epsilon>0$ sufficiently small, $t^* \in \mathbb{N}$ sufficiently large and $K \in \mathbb{N}$ sufficiently large (depending on $t^*$), the following holds for $n$ sufficiently large. Suppose $\sup_{s\in [0,t^*],x\in \frac 1n \mathbb{Z}}|u^n_s(x)-g(x-\nu s)|<\epsilon$, and also $p^n_t(x)\in [\frac 16 g(x-\nu t), 6g(x-\nu t)]$ $\forall t\in [0,t^*]$, $x\le \nu t+D^+_n+1$ and $p^n_t(x)\le 6g(D^+_n)$ $\forall t\in [0,t^*], x\ge \nu t+D^+_n$. Suppose $q^n_0(z)=p^n_0(z)\mathds{1}_{z\ge \ell}$ for some $\ell \in \tfrac 1n \mathbb{Z}\cap [K, D^+_n ]$. Then for $z\le \nu t^*+ D^+_n +1$, $$ \frac{v^n_{t^*}(z)}{p^n_{t^*}(z)}\le \tfrac 12 c_1 e^{-(1+\frac 12 (1-\alpha))\kappa (\ell -(z-\nu t^*)\vee K +2)}, $$ where $(v^n_t)_{t\ge 0}$ is defined in~\eqref{eq:vndef}. \end{lemma} \begin{proof} Let $\lambda = \frac 12 (1-\alpha)$. Note that since $(\alpha-2)^2>1$, we have $\frac 14 (1-\alpha ^2)<1-\alpha$. Take $a\in (\frac 14 (1-\alpha ^2),1-\alpha)$ so that $$ \lambda^2 +\lambda \alpha -a = \tfrac 12 (1-\alpha)(\tfrac 12 (1-\alpha)+\alpha)-a=\tfrac 14 (1-\alpha^2)-a<0. $$ Take $t^* \in \mathbb{N}$ sufficiently large that $144 e^{(\lambda^2 +\lambda \alpha -a)s_0 t^* }\le \tfrac 13 c_1 e^{-2\kappa (1+\lambda)}.$ Take $\epsilon \in (0, \frac 12 (1-\alpha))$ sufficiently small that $(1-\epsilon)(2\epsilon-1+\alpha)<-a$. Then take $K\in \mathbb{N}$ sufficiently large that $\nu t^*\le K/6$, $2s_0 t^* e^{4s_0 t^*} e^{-\lambda \kappa K/6}\le 1$, $72 e^{5s_0 t^*}e^{-(1-\lambda) \kappa K/2}\le \frac 12 c_1 e^{-2\kappa (1+\lambda)}$, $2g(K/3)+2\epsilon <1-\alpha$ and $$ (1-g(x)-\epsilon)(2(g(x)+\epsilon)-1+\alpha)\le -a \qquad \text{for }x\ge K/3.
$$ Then for $s\ge 0$ and $x\in \frac 1n \mathbb{Z}$, if $x-\nu s \ge K/3$ and $|u^n_s(x)-g(x-\nu s)|<\epsilon$ we have $$ (1-u^n_s(x))(2u^n_s(x)-1+\alpha)+a\le 0. $$ If instead $x-\nu s\le K/3$, then by~\eqref{eq:veasybound}, \begin{align*} v^n_s(x)\le e^{(1+\alpha)s_0 s}\Esubb{x}{p^n_0(X^n_{ms}) \mathds{1}_{X^n_{ms}\ge \ell}} \le e^{(1+\alpha)s_0 s} \max_{y\ge \ell} p^n_0(y) \psubb{0}{X^n_{ms}\ge \ell -\tfrac 13 K-\nu s}. \end{align*} Moreover, for $u\in [0,1]$, we have $(1-u)(2u-1+\alpha)+a\le 2$. Suppose $\ell \in [K,D^+_n]$ and $\sup_{s\in [0,t^*],x\in \frac 1n \mathbb{Z}}|u^n_s(x)-g(x-\nu s)|<\epsilon$. For $z\in \frac 1n \mathbb{Z}$ and $t\in [0,t^*]$ we have by~\eqref{eq:vngreena} that \begin{align} \label{eq:vnupper*} v^n_t(z) &\le e^{-as_0 t}\langle q^n_0, \phi_0^{t,z}\rangle_n +\int_0^t 2s_0 e^{-as_0 (t-s)}\sup_{x-\nu s\le K/3}v^n_s(x) ds \notag \\ &\le \max_{x\ge \ell}p^n_0(x)\left( e^{-as_0 t} \psubb{z}{X^n_{mt}\ge \ell} +2s_0 e^{(1+\alpha )s_0 t}\int_0^t \psubb{0}{X^n_{ms}\ge \ell -\tfrac 13 K-\nu s}ds \right). \end{align} By Markov's inequality and Lemma~\ref{lem:Xnmgf}, and since $\frac 12 m \kappa^2 =s_0$, \begin{align*} \psubb{z}{X^n_{mt}\ge \ell} = \psubb{0}{X^n_{mt}\ge \ell -z} &\le e^{-\lambda \kappa (\ell-z)}\Esubb{0}{e^{\lambda \kappa X^n_{mt}}}\\ &= e^{-\lambda \kappa (\ell-z)}e^{(\lambda^2 +\mathcal O(n^{-1}))s_0 t}. \end{align*} Therefore, applying the same argument to the second term on the right hand side of~\eqref{eq:vnupper*}, \begin{align*} v^n_t(z) &\le \max_{x\ge \ell} p^n_0(x) (e^{-\lambda \kappa (\ell-z)} e^{(\lambda^2 -a +\mathcal O(n^{-1}))s_0 t} +2s_0 t e^{(1+\alpha)s_0 t} e^{-\lambda \kappa (\ell -\frac 13 K-\nu t )} e^{(\lambda^2 +\mathcal O(n^{-1}))s_0t})\\ &\le \max_{x\ge \ell} p^n_0(x) e^{-\lambda \kappa (\ell-z)}e^{(\lambda^2 -a +\mathcal O(n^{-1}))s_0 t} (1+2s_0 t e^{(1+\alpha +a +\lambda \alpha)s_0 t} e^{-\lambda \kappa (z-\frac 13 K)}), \end{align*} since $\kappa \nu =\alpha s_0$. Hence for $z \in [\frac 12 K+\nu t^*, D^+_n +1+ \nu t^*]$, \begin{align} \label{eq:vnpnt*} \frac{v^n_{t^*}(z)}{p^n_{t^*}(z)} &\le \frac{6g(\ell)}{\frac 16 g(z-\nu t^*)}e^{-\lambda \kappa (\ell -z)} e^{(\lambda^2 -a+\mathcal O(n^{-1}))s_0 t^*} (1+2s_0 t^* e^{4s_0 t^*} e^{-\lambda \kappa K/6}) \notag \\ &\le 36 e^{-\kappa \ell} \cdot 2 e^{\kappa (z-\nu t^*)}e^{-\lambda \kappa (\ell -z)} e^{(\lambda^2 -a+\mathcal O(n^{-1}))s_0 t^*}\cdot 2 \notag \\ &= 144 e^{-(1+\lambda)\kappa (\ell -(z-\nu t^*))} e^{(\lambda^2+\alpha \lambda -a+\mathcal O(n^{-1}))s_0 t^*} \notag \\ &\le \tfrac 12 c_1 e^{-(1+\lambda)\kappa (\ell -(z-\nu t^*)+2)} \end{align} for $n$ sufficiently large, where the second inequality follows by our choice of $K$, and the last inequality by our choice of $t^*$. Also, for any $z\in \frac 1n \mathbb{Z}$ and $t\ge 0$, by~\eqref{eq:veasybound} and then by Markov's inequality and Lemma~\ref{lem:Xnmgf}, \begin{align*} v^n_t(z) \le e^{(1+\alpha )s_0 t}\max_{x\ge \ell}p^n_0(x) \psubb{z}{X^n_{mt}\ge \ell} &\le e^{(1+\alpha)s_0 t} \max_{x\ge \ell}p^n_0(x) e^{-\kappa (\ell -z)} \Esubb{0}{e^{\kappa X^n_{mt}}}\\ &\le e^{(1+\alpha )s_0 t}\max_{x\ge \ell}p^n_0(x)e^{2s_0 t}e^{-\kappa (\ell -z)} \end{align*} for $n$ sufficiently large. 
Therefore, for $z\le \frac 12 K+\nu t^*\le \frac 23 K$, and then since $\kappa \nu =\alpha s_0$, \begin{align*} \frac{v^n_{t^*}(z)}{p^n_{t^*}(z)}&\le e^{(1+\alpha)s_0 t^*} \frac{6g(\ell)}{\frac 16 g(K/2)} e^{2s_0 t^*} e^{-\kappa (\ell -\frac 12 K-\nu t^*)}\\ &\le 72 e^{5s_0 t^*} e^{-(1+\lambda)\kappa (\ell -\frac 12 K)}e^{-(1-\lambda) \kappa \cdot \frac 12 K}\\ &\le \tfrac 12 c_1 e^{-(1+\lambda)\kappa (\ell -\frac 12 K+2)}, \end{align*} where the second inequality follows since $g(\ell)\le e^{-\kappa \ell}$, $g(K/2)^{-1}\le 2e^{\kappa K/2}$ and $\ell-\frac 12 K \ge \frac 12 K$ and the third inequality follows by our choice of $K$. By~\eqref{eq:vnpnt*}, this completes the proof. \end{proof} \begin{lemma} \label{lem:bulktail1} For $\epsilon>0$ sufficiently small and $t^* \in \mathbb{N}$ sufficiently large, for $K \in \mathbb{N}$ sufficiently large (depending on $t^*$), the following holds for $n$ sufficiently large. Suppose $\sup_{s\in [0,t^*],\, x\in \frac 1n \mathbb{Z}}|u^n_s(x)-g(x-\nu s)|< \epsilon$, and $p^n_t(x)\ge \frac 16 g(x-\nu t)$ $\forall t\in [0,t^*]$, $x\le \nu t+D^+_n$. Suppose $q^n_0(z)=p^n_0(z)\mathds{1}_{z\le \ell}$ for some $\ell \in \frac 1n \mathbb{Z}$ with $\ell \le -K$. Then for $z\le \nu t^*+D^+_n$, \begin{equation} \label{eq:lembulktail} \frac{v^n_{t^*}(z)}{p^n_{t^*}(z)}\le \tfrac 12 c_1 e^{-\frac 12 \alpha\kappa ((z-\nu t^*)-\ell+1)}, \end{equation} where $(v^n_t)_{t\ge 0}$ is defined in~\eqref{eq:vndef}. \end{lemma} \begin{proof} Take $c\in (0,\alpha^2 /4)$. Take $t^* \in \mathbb{N}$ sufficiently large that $e^{(c-\alpha^2/4)s_0 t^*}<\frac 1 {10} c_1 e^{-\kappa }$. Suppose $\sup_{s\in [0,t^*], x\in \frac 1n \mathbb{Z}} |u^n_s(x)-g(x-\nu s)|<c/4$. Take $K\in \mathbb{N}$ sufficiently large that $g(-K/2)\ge 1-c/4$, $2s_0 t^* e^{13s_0 t^*} e^{-\kappa K/2}<\frac 1 {10}c_1e^{-\kappa }$ and $e^{7 s_0 t^*}e^{-\kappa K}< \frac 1 {24} c_1 e^{-\kappa }$. Then for $s\in [0,t^*]$ and $x\in \frac 1n \mathbb{Z}$ with $x\le -\frac 12 K+\nu s$, we have $(1-u^n_s(x))(2u^n_s(x)-1+\alpha)\le c$. Take $\ell \in \frac 1n \mathbb{Z}$ with $\ell \le -K$. By~\eqref{eq:vngreena} with $a=-cs_0 $, and since $(1-u)(2u-1+\alpha)-c\le 2$ for $u\in [0,1]$, for $t\in [0,t^*]$ and $z\in \frac 1n \mathbb{Z}$, \begin{align} \label{eq:bulkvnt*} v^n_t(z) &\le e^{cs_0 t} \langle q^n_0, \phi^{t,z}_0 \rangle_n +s_0 \int_0^t e^{cs_0 (t-s)} \langle 2 v^n_s(\cdot) \mathds{1}_{\cdot \ge -\frac 12 K +\nu s}, \phi^{t,z}_s \rangle_n ds \notag \\ &\le e^{cs_0 t} \psubb{z}{X^n_{mt}\le \ell} +2 s_0 e^{cs_0 t} \int_0^t \sup_{x\ge -\frac 12 K+\nu s}v^n_s(x) ds. \end{align} For $s\in [0,t]$ and $x\ge -\frac 12 K+\nu s$, by~\eqref{eq:veasybound}, \begin{align*} v^n_s(x)\le e^{(1+\alpha)s_0 s}\psubb{x}{X^n_{ms}\le \ell} &\le e^{(1+\alpha)s_0 s}\psubb{0}{X^n_{ms}\ge -\ell -\tfrac 12 K+\nu s}\\ &\le e^{(1+\alpha)s_0 s}e^{3\kappa (\ell+\frac 12 K-\nu s)}e^{10s_0 s}, \end{align*} for $n$ sufficiently large, by Markov's inequality and Lemma~\ref{lem:Xnmgf}, and since $\frac 12 m\kappa^2 =s_0$. 
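To spell out the exponential Chernoff step used here (and in several similar bounds in this section): for $\lambda>0$ and $b\in \R$, Markov's inequality followed by Lemma~\ref{lem:Xnmgf} gives
$$
\psubb{0}{X^n_{ms}\ge b}\le e^{-\lambda \kappa b}\,\Esubb{0}{e^{\lambda \kappa X^n_{ms}}}=e^{-\lambda \kappa b}e^{(\lambda^2 +\mathcal O(n^{-1}))s_0 s},
$$
where the second equality uses $\tfrac 12 m\kappa^2 =s_0$. The display above is the case $\lambda=3$ and $b=-\ell-\tfrac 12 K+\nu s$, together with $e^{(9+\mathcal O(n^{-1}))s_0 s}\le e^{10s_0 s}$ for $n$ sufficiently large.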
Hence by~\eqref{eq:bulkvnt*} and then by Lemma~\ref{lem:Xnmgf} and since $\kappa \nu =\alpha s_0$ and $\ell \le -K$, for $z\le \nu t^*$, \begin{align*} v^n_{t^*}(z)&\le e^{cs_0 t^*} e^{-\frac 12 \alpha \kappa (z-\ell)} \Esubb{0}{e^{\frac 12 \alpha \kappa X^n_{mt^*}}}+2s_0 t^* e^{13s_0 t^*} e^{3\kappa (\ell +\frac 12 K)}\\ &\le e^{-\frac 12 \alpha \kappa ((z-\nu t^*)-\ell)} e^{(c-\frac 14 \alpha^2+\mathcal O(n^{-1}))s_0 t^*}+2s_0 t^* e^{13s_0 t^*} e^{\kappa \ell} e^{-\kappa K/2}\\ &\le \tfrac 15 c_1 e^{-\frac 12 \alpha \kappa ((z-\nu t^*)-\ell +1)}, \end{align*} where the last line follows by our choice of $t^*$ and $K$ and since $z\le \nu t^*$. Hence for $z\le \nu t^*$, since $p^n_{t^*}(z)\ge 1/2-c/4>2/5$, we have that~\eqref{eq:lembulktail} holds. For $z\in [\nu t^*,\nu t^*+ D^+_n]$, by~\eqref{eq:veasybound} and then by Markov's inequality and Lemma~\ref{lem:Xnmgf}, for $n$ sufficiently large, \begin{align*} v^n_{t^*}(z)\le e^{(1+\alpha)s_0 t^*} \psubb{z}{X^n_{mt^*}\le \ell}\le e^{(1+\alpha)s_0 t^*}e^{-2\kappa (z-\ell)}e^{5 s_0 t^*}&\le e^{7 s_0 t^*}e^{-\kappa K}e^{-\kappa z}e^{-\kappa (z-\ell)}\\ &\le \tfrac 1 {24}c_1 e^{-\kappa z} e^{-\frac 12 \alpha \kappa ((z-\nu t^*)-\ell +1)} \end{align*} by our choice of $K$. The result follows since $p^n_{t^*}(z)\ge \frac 1 {12}e^{-\kappa (z-\nu t^*)}\ge \frac 1 {12}e^{-\kappa z}$. \end{proof} For $t\ge 0$ and $x_1\in \frac 1n \mathbb{Z}$, let $(v^{n,+}_{t,t+s}(x_1,\cdot))_{s\ge 0}$ denote the solution of \begin{equation*} \begin{cases} \partial_s v^{n,+}_{t,t+s}(x_1,\cdot)&=\tfrac 12 m\Delta_n v^{n,+}_{t,t+s}(x_1,\cdot)+s_0 v^{n,+}_{t,t+s}(x_1,\cdot)(1-u^n_{t,t+s})(2u^n_{t,t+s}-1+\alpha ) \quad \text{for }s>0,\\ v^{n,+}_{t,t}(x_1,x)&= p^n_t(x) \mathds{1}_{x \ge x_1}, \end{cases} \end{equation*} where $(u^n_{t,t+s})_{s\ge 0}$ is defined in~\eqref{eq:unttsdef}. Similarly, let $(v^{n,-}_{t,t+s}(x_1,\cdot))_{s\ge 0}$ denote the solution of \begin{equation*} \begin{cases} \partial_s v^{n,-}_{t,t+s}(x_1,\cdot)&=\tfrac 12 m\Delta_n v^{n,-}_{t,t+s}(x_1,\cdot)+s_0 v^{n,-}_{t,t+s}(x_1,\cdot)(1-u^n_{t,t+s})(2u^n_{t,t+s}-1+\alpha ) \quad \text{for }s>0,\\ v^{n,-}_{t,t}(x_1,x)&= p^n_t(x) \mathds{1}_{x \le x_1}. \end{cases} \end{equation*} We now use Lemmas~\ref{lem:vfromtipbound} and~\ref{lem:bulktail1} to prove the following result. \begin{lemma} \label{lem:A2A3} For $t^*\in \mathbb{N}$ sufficiently large, and $K\in \mathbb{N}$ sufficiently large (depending on $t^*$), for $\ell \in \mathbb{N}$, the following holds for $n$ sufficiently large. For $t\in [(\log N)^2-t^*,N^2]$ and $x_1,x_2 \in \frac 1n \mathbb{Z}$ with $x_1-x_2 \le (\log N)^{2/3}$, \begin{equation} \label{eq:lemA2A31} \p{A^{(2)}_{t}(x_1,x_2)^c \cap \{ x_1-\mu^n_t \in [K,D^+_n], x_2-\mu^n_{t+t^*}\le D^+_n \} \cap E'_1} \le \left( \frac n N \right)^\ell. \end{equation} For $t\in [(\log N)^2-t^*,N^2]$ and $x_1,x_2 \in \frac 1n \mathbb{Z}$ with $x_2-x_1 \le (\log N)^{2/3}$, \begin{equation} \label{eq:lemA2A32} \p{A^{(3)}_{t}(x_1,x_2)^c \cap \{ x_1-\mu^n_t \le -K \} \cap E'_1} \le \left( \frac n N \right)^\ell. \end{equation} \end{lemma} \begin{proof} Take $t^*,K\in \mathbb{N}$ sufficiently large that Lemmas~\ref{lem:vfromtipbound} and~\ref{lem:bulktail1} hold. Recall the definition of $E'_1$ in~\eqref{eq:E1'defn}. Suppose $n$ is sufficiently large that $(\log N)^2-t^*\ge \frac 12 (\log N)^2 \vee \log N$, and $E'_1$ occurs. Take $t\in [(\log N)^2 -t^*,N^2]$ and $x_1,x_2\in \frac 1n \mathbb{Z}$ with $x_1-x_2\le (\log N)^{2/3}$. 
Recall from~\eqref{eq:Dn+-defn} that $D^+_n =(1/2-c_0)\kappa^{-1} \log (N/n)$. Take $c_3\in (0,c_0)$ and suppose $|q^{n,+}_{t,t+t^*}(x_1,x_2)-v^{n,+}_{t,t+t^*}(x_1,x_2)|\le \left( \frac n N \right)^{1/2-c_3}$. Then for $n$ sufficiently large, by Lemma~\ref{lem:vfromtipbound} and by the definition of the event $E_1$ in~\eqref{eq:eventE1}, if $x_1-\mu^n_t \in [K, D^+_n]$ and $x_2 -\mu^n_{t+t^*} \le D^+_n$, \begin{align*} \frac{q^{n,+}_{t,t+t^*}(x_1,x_2)}{p^n_{t+t^*}(x_2)} &\le \tfrac 12 c_1 e^{-(1+\frac 12 (1-\alpha))\kappa (x_1 -(x_2-\nu t^*)\vee (\mu^n_t+K) +2)}+5g(D^+_n)^{-1} \left( \frac n N \right)^{1/2-c_3}\\ &\le c_1 e^{-(1+\frac 12 (1-\alpha))\kappa (x_1 -(x_2-\nu t^*)\vee (\mu^n_t+K) +2)} \end{align*} for $n$ sufficiently large, since $x_1-x_2 \le (\log N)^{2/3}$ and $g(D^+_n)^{-1}\le 2 \left( \frac N n \right)^{1/2-c_0}$ with $c_0>c_3$. By Proposition~\ref{prop:pnun}, the first statement~\eqref{eq:lemA2A31} follows. Now take $t\in [(\log N)^2 -t^*,N^2]$ and $x_1,x_2 \in \frac 1n \mathbb{Z}$ with $x_2-x_1 \le (\log N)^{2/3}$. Suppose $E'_1$ occurs and suppose $|q^{n,-}_{t,t+t^*}(x_1,x_2)-v^{n,-}_{t,t+t^*}(x_1,x_2)|\le \left( \frac n N \right)^{1/4}$. If $x_1-\mu^n_t\le -K$, then $x_2 -\mu^n_{t+t^*}\le (\log N)^{2/3}$ and so $p^n_{t+t^*}(x_2)^{-1}\le 10 e^{\kappa (\log N)^{2/3}}$. Hence by Lemma~\ref{lem:bulktail1}, \begin{align*} \frac{q^{n,-}_{t,t+t^*}(x_1,x_2)}{p^n_{t+t^*}(x_2)} &\le \tfrac 12 c_1 e^{-\frac 12 \alpha \kappa ( (x_2-\nu t^*)-x_1+1)}+10 e^{\kappa (\log N)^{2/3}}\left( \frac n N \right)^{1/4}\\ &\le c_1 e^{-\frac 12 \alpha \kappa ( (x_2-\nu t^*)-x_1+1)} \end{align*} for $n$ sufficiently large. By Proposition~\ref{prop:pnun}, the second statement~\eqref{eq:lemA2A32} follows, which completes the proof. \end{proof} We now show that $A^{(4)}_t(x)$ and $A^{(5)}_t(x)$ occur with high probability for suitable $x$ and $t$. \begin{lemma} \label{lem:A4A5} For $\ell \in \mathbb{N}$, the following holds for $n$ sufficiently large. For $x\in \frac 1n \mathbb{Z}$ and $t\ge 0$, \begin{equation} \label{eq:lemA4A52} \p{A^{(5)}_t(x)^c}\le \left( \frac n N \right)^\ell. \end{equation} If there exists $a_2>3$ such that $N\ge n^{a_2}$ for $n$ sufficiently large, then for $t\in [(\log N)^2-\epsilon_n ,N^2]$ and $x \in \frac 1n \mathbb{Z}$, \begin{equation} \label{eq:lemA4A51} \p{A^{(4)}_t(x)^c \cap \{ x-\mu^n_t \le D^+_n \} \cap E'_1} \le \left( \frac n N \right)^\ell. \end{equation} \end{lemma} \begin{proof} For $t\ge 0$ and $x_1,x_2\in \frac 1n \mathbb{Z}$, by Corollary~\ref{cor:qnMa} with $a=-(1+\alpha)s_0$, \begin{align*} \E{q^n_{t,t+\epsilon_n}(x_1,x_2)}\le e^{(1+\alpha)s_0 \epsilon_n} \psubb{x_2}{X^n_{m\epsilon_n}=x_1} \le e^{(1+\alpha)s_0 \epsilon_n} e^{-(\log N)^{3/2}|x_1-x_2|} e^{m(\log N)^3 \epsilon_n} \end{align*} for $n$ sufficiently large, by Markov's inequality and Lemma~\ref{lem:Xnmgf}. Recall from~\eqref{eq:paramdefns} that $\epsilon_n\le (\log N)^{-2}$. Therefore, for $n$ sufficiently large, for $x\in \frac 1n \mathbb{Z}$, by a union bound and then by Markov's inequality, \begin{align*} \p{A^{(5)}_t(x)^c} &\le \sum_{x'\in \frac 1n \mathbb{Z},|x-x'|\ge 1} \p{q^n_{t,t+\epsilon_n}(x',x)\ge N^{-1} }\\ &\le N e^{(1+\alpha)s_0 \epsilon_n}N^{m} \sum_{x' \in \frac 1n \mathbb{Z}, |x-x'|\ge 1} e^{-(\log N)^{3/2}|x-x'|}, \end{align*} which completes the proof of~\eqref{eq:lemA4A52}. From now on, assume there exists $a_2>3$ such that $N\ge n^{a_2}$ for $n$ sufficiently large. 
Suppose $n$ is sufficiently large that $(\log N)^2-\epsilon_n \ge \frac 12 (\log N)^2 \vee \log N$, and take $t\in [(\log N)^2-\epsilon_n, N^2]$ and $x_1,x_2\in \frac 1n \mathbb{Z}$ with $|x_1-x_2|\le 1$. Recall the definition of $(v^n_{t,t+s}(x_1,\cdot))_{s\ge 0}$ in~\eqref{eq:vnx1defn}. By~\eqref{eq:veasybound}, and then by Lemma~\ref{lem:lclt}, there exists a constant $K_7<\infty$ such that for $n$ sufficiently large, $$ v^n_{t,t+\epsilon_n}(x_1,x_2)\le e^{(1+\alpha)s_0 \epsilon_n} p^n_t(x_1) \psubb{x_2}{X^n_{m\epsilon_n}=x_1} \le K_7 n^{-1} \epsilon_n^{-1/2} p^n_t(x_1). $$ Suppose $E'_1$ occurs and $x_1 \le \mu^n_t+D^+_n$. Then for $n$ sufficiently large, by the definition of the event $E_1$ in~\eqref{eq:eventE1} and since $|x_1-x_2|\le 1$, there exists a constant $K_8<\infty$ such that $\frac{p^n_t(x_1)}{p^n_{t+\epsilon_n}(x_2)}\le K_8$, and so \begin{equation} \label{eq:useK8} \frac{v^n_{t,t+\epsilon_n}(x_1,x_2)}{p^n_{t+\epsilon_n}(x_2)}\le K_7 K_8 n^{-1} \epsilon_n^{-1/2}. \end{equation} Recall from~\eqref{eq:Dn+-defn} that $D_n^+=(1/2-c_0)\kappa^{-1} \log (N/n)$. Take $c' \in (0,c_0/2)$ and suppose $|q^n_{t,t+\epsilon_n}(x_1,x_2)-v^n_{t,t+\epsilon_n}(x_1,x_2)|\le \left( \frac n N \right)^{1/2-c'}p^n_t(x_1)^{1/2}n^{-1/2}$. By~\eqref{eq:useK8} and then since $x_2\le \mu^n_{t}+D^+_n+1$ and by the definition of $K_8$, \begin{align} \label{eq:lemA4A5A} \frac{q^n_{t,t+\epsilon_n}(x_1,x_2)}{p^n_{t+\epsilon_n}(x_2)} &\le K_7 K_8 n^{-1} \epsilon_n^{-1/2}+ p^n_{t+\epsilon_n}(x_2)^{-1/2} \left( \frac n N \right)^{1/2-c'} \left( \frac{p^n_{t}(x_1)}{p^n_{t+\epsilon_n}(x_2)}\right)^{1/2} n^{-1/2} \notag \\ &\le K_7 K_8 n^{-1} \epsilon_n^{-1/2}+10^{1/2} e^{\frac 12 \kappa (D^+_n +2)} \left( \frac n N \right)^{1/2-c'} K_8^{1/2} n^{-1/2} \notag \\ &\le (K_7 K_8 +1) n^{-1} \epsilon_n^{-1/2} \end{align} for $n$ sufficiently large, since $N\ge n^3$ and so $e^{\frac 12 \kappa D_n^+} \left( \frac n N \right)^{1/2-c'} =\left( \frac n N \right)^{1/4+c_0/2-c'}\le n^{-1/2}$. For $c\in (0,\frac 12 (a_2-2)^{-1} (a_2-3))$, we have $3/2-2c< a_2 (1/2-c)$ and so since $N\ge n^{a_2}$ we have $p^n_t(x_1)\ge \frac 1 {10} e^{-\kappa D^+_n}\ge \frac 1 {10} \left( \frac n N \right)^{1/2}\ge \left( \frac {n^2} N \right)^{1-c}$ for $n$ sufficiently large. Hence by Lemma~\ref{lem:qnvnonepoint}, for $n$ sufficiently large, \begin{align*} &\p{\{|q^n_{t,t+\epsilon_n}(x_1,x_2)-v^n_{t,t+\epsilon_n}(x_1,x_2)|\ge \left( \frac n N \right)^{1/2-c'}p^n_t(x_1)^{1/2}n^{-1/2} \} \cap \{x_1 \le \mu^n_t +D_n^+\} \cap E'_1}\\ &\qquad \le \left( \frac n N \right)^{\ell+1} , \end{align*} and by~\eqref{eq:lemA4A5A}, it follows that for $n$ sufficiently large, $$ \p{\{q^n_{t,t+\epsilon_n}(x_1,x_2)> n^{-1} \epsilon_n^{-1} p^n_{t+\epsilon_n}(x_2)\}\cap \{x_1-\mu^n_t \le D_n^+\} \cap E'_1}\le \left( \frac n N \right)^{\ell+1}. $$ By the same argument as for the proof of~\eqref{eq:lemA4A52}, the second statement~\eqref{eq:lemA4A51} now follows. \end{proof} Finally we show that $A^{(6)}_t(x)$ occurs with high probability; the proof is similar to the first half of the proof of Lemma~\ref{lem:A4A5}. \begin{lemma} \label{lem:eventA6} For $\ell \in \mathbb{N}$ and $t^*\in \mathbb{N}$, the following holds for $n$ sufficiently large. For $t\ge 0$ and $x \in \frac 1n \mathbb{Z}$, $$ \p{ A^{(6)}_t(x)^c} \le \left( \frac n N \right)^\ell. 
$$ \end{lemma} \begin{proof} By Corollary~\ref{cor:qnMa} with $a=-(1+\alpha)s_0 $, for $k\in [ t^* \delta_n^{-1}]$ and $x'\in \frac 1n \mathbb{Z}$, \begin{align*} \E{q^n_{t,t+k\delta_n}(x',x)} &\le e^{(1+\alpha)s_0 t^*} \psubb{x}{X^n_{mk\delta_n}=x'}\\ &\le e^{(1+\alpha)s_0 t^*} e^{-(\log N)^{1/2}|x-x'|} \Esubb{0}{e^{X^n_{mk\delta_n}(\log N)^{1/2}}}\\ &\le e^{(1+\alpha)s_0 t^*} e^{-(\log N)^{1/2} |x-x'|} e^{mt^* \log N} \end{align*} for $n$ sufficiently large, where the second inequality follows by Markov's inequality, and the third by Lemma~\ref{lem:Xnmgf}. Therefore, by a union bound and Markov's inequality, \begin{align*} &\p{\exists x' \in \tfrac 1n \mathbb{Z}, k\in[t^* \delta_n^{-1}]: |x-x'|\ge (\log N)^{2/3}, q^n_{t,t+k\delta_n}(x',x)\ge N^{-1}}\\ &\le t^* \delta_n^{-1} \cdot N e^{(1+\alpha)s_0 t^*} N^{mt^*} \sum_{x' \in \frac 1n \mathbb{Z}, |x-x'|\ge (\log N)^{2/3}}e^{-(\log N)^{1/2}|x-x'|}\\ &\le \left( \frac n N \right)^\ell \end{align*} for $n$ sufficiently large. \end{proof} We can now end this section by proving Proposition~\ref{prop:eventE2}. \begin{proof}[Proof of Proposition~\ref{prop:eventE2}] Note that if $x_1-x_2>(\log N)^{2/3}$ and $A_t^{(6)}(x_2)$ occurs, then $A^{(2)}_t(x_1,x_2)$ occurs. Similarly, if $x_2-x_1>(\log N)^{2/3}$ and $A_t^{(6)}(x_2)$ occurs, then $A^{(3)}_t(x_1,x_2)$ occurs. The result now follows directly from Proposition~\ref{prop:eventA1} and Lemmas~\ref{lem:A2A3},~\ref{lem:A4A5} and~\ref{lem:eventA6}. \end{proof} \section{Event $E_3$ occurs with high probability} \label{sec:eventE3} In this section, we will prove the following result. \begin{prop} \label{prop:eventE3} For $K\in \mathbb{N}$ sufficiently large, for $c_2>0$, if $N\ge n^{3}$ for $n$ sufficiently large, then for $n$ sufficiently large, if $p^n_0(x)=0$ $\forall x\ge N$, $$ \p{(E_3)^c \cap E_1}\le \left( \frac n N \right)^2. $$ \end{prop} By the definition of the events $E_1$ and $E_3$ in~\eqref{eq:eventE1} and~\eqref{eq:eventE3}, Proposition~\ref{prop:eventE3} follows directly from the following result. \begin{lemma} \label{lem:coalCB} For $\ell \in \mathbb{N}$, for $K\in \mathbb{N}$ sufficiently large, for $c_2>0$, if $N\ge n^{3}$ for $n$ sufficiently large then the following holds for $n$ sufficiently large. If $p^n_0(y)=0$ $\forall y\ge N$ then for $t\in [(\log N)^2-\delta_n,N^2]$, $x\in \frac 1n \mathbb{Z}$ with $x\ge -N^5$ and $j\in \{1,2,3,4\}$, \begin{equation} \label{eq:Bjbound} \p{B^{(j)}_t(x) ^c \cap E_1 \cap \{x\le \mu^n_t+D^+_n+1\}}\le \left( \frac n N \right)^\ell . \end{equation} \end{lemma} \begin{proof} We begin by proving~\eqref{eq:Bjbound} with $j=1$. For $x\in \frac 1n \mathbb{Z}$, $i \in [N]$ and $0\le t_1<t_2$, let $\mathcal A^{x,i}[t_1,t_2)$ denote the total number of points in the time interval $[t_1,t_2)$ in the Poisson processes $(\mathcal P^{x,i,i'})_{i'\in [N]\setminus\{ i\}}$, $(\mathcal S^{x,i,i'})_{i'\in [N]\setminus\{ i\}}$, $(\mathcal Q^{x,i,i',i''})_{i', i''\in [N]\setminus \{i\},i'\neq i''}$ and $(\mathcal R^{x,i,y,i'})_{i'\in [N], y\in \{x\pm n^{-1}\}}$. (These points correspond to the times at which the individual $(x,i)$ may be replaced by offspring of another individual.) For $t\ge 0$ and $x\in \frac 1n \mathbb{Z}$, let \begin{align*} \mathcal C^{n,1}_t(x) &=\{(i,j):i\neq j \in [N], \mathcal P^{x,i,j}[t,t+\delta_n)=1=\mathcal A^{x,i}[t,t+\delta_n), \, \mathcal A^{x,j}[t,t+\delta_n)=0,\\ &\qquad \qquad \hspace{10cm} \xi^n_{t}(x,j)=1\}. \end{align*} Recall the definition of $\mathcal C^n_t(x,x)$ in~\eqref{eq:Cntdefn}. 
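For orientation, here is a minimal concrete instance of these definitions (with hypothetical labels): suppose that within $[t,t+\delta_n)$ the only point among all of the clocks listed above for site $x$ is a single point of $\mathcal P^{x,1,2}$, and that $\xi^n_{t}(x,2)=1$. Then $\mathcal A^{x,1}[t,t+\delta_n)=1$ and $\mathcal A^{x,2}[t,t+\delta_n)=0$, so $(1,2)\in \mathcal C^{n,1}_t(x)$: the individual $(x,1)$ is replaced exactly once during the interval, by offspring of $(x,2)$, and no other replacement affects either label.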
If $(i,j)\in \mathcal C^{n,1}_t(x)$, we have $(\zeta^{n,t+\delta_n}_{\delta_n}(x,i),\theta^{n,t+\delta_n}_{\delta_n}(x,i))=(x,j)=(\zeta^{n,t+\delta_n}_{\delta_n}(x,j),\theta^{n,t+\delta_n}_{\delta_n}(x,j))$, and so $(i,j),(j,i)\in \mathcal C^n_t(x,x)$. Therefore, since if $(i,j) \in \mathcal C^{n,1}_t(x)$ then $(j,i) \notin \mathcal C^{n,1}_t(x)$, \begin{equation} \label{eq:lemcoalCB*} |\mathcal C^{n}_t(x,x)|\ge 2 |\mathcal C^{n,1}_t(x)|. \end{equation} For $t\ge 0$, $x\in \frac 1n \mathbb{Z}$ and $i\in [N]$, let \begin{equation} \label{eq:Dntxidef} \mathcal D^n_t(x,i) =\{(y,j)\in \tfrac 1n \mathbb{Z} \times [N] :(\zeta^{n,t+s}_s(y,j),\theta^{n,t+s}_s(y,j))=(x,i)\text{ for some }s\in [0,\delta_n]\}, \end{equation} the set of labels of individuals whose time-$t$ ancestor at some time in $[t,t+\delta_n]$ is $(x,i)$. Define \begin{equation} \label{eq:Mntdefn} \mathcal M^n_t=\max_{x\in \frac 1n \mathbb{Z} \cap [-2N^5, N^5],\, i\in [N]}|\mathcal D^n_t(x,i)|. \end{equation} For $t\ge 0$ and $x\in \frac 1n \mathbb{Z}$, let \begin{align} \label{eq:Cn2tdefn} \mathcal C^{n,2}_t(x) &=\Big\{(i,j) \in [N]^2 : \Big( \mathcal P^{x,i,j}+\mathcal S^{x,i,j}+\sum_{k\in [N]\setminus \{ i,j\} } \mathcal Q^{x,i,j,k}\Big) [t,t+\delta_n)>0, \, \xi^n_{t}(x,j)=1\Big\}. \end{align} Suppose $(i,j)\in \mathcal C^n_t(x,x)$, and $(i,j),(j,i)\notin \mathcal C^{n,2}_t(x)$. Then there exist $s\in [0,\delta_n]$, $(y,k)\notin\{ (x,i),(x,j)\}$ and $i'\in \{i,j\}$ such that $(\zeta^{n,t+\delta_n}_s(x,i'),\theta^{n,t+\delta_n}_s(x,i'))=(y,k)$. Then letting $(x_0,i_0)=(\zeta^{n,t+\delta_n}_{\delta_n}(x,i),\theta^{n,t+\delta_n}_{\delta_n}(x,i))$, we have $(x,i)$, $(x,j)$, $(y,k)\in \mathcal D^n_t(x_0,i_0)$. Since $\zeta^{n,t+\delta_n}(x,i)$ only jumps in increments of $\pm n^{-1}$, and $(\zeta^{n,t+\delta_n}_s(x,i), \theta^{n,t+\delta_n}_s(x,i))\in \mathcal D^n_t(x_0,i_0)$ $\forall s\in [0,\delta_n]$, we have $|x-x_0|<|\mathcal D^n_t(x_0,i_0)|n^{-1}. $ Hence if $x_0 \in [-2N^5,N^5]$ then $|x-x_0|< \mathcal M^n_t n^{-1}$. Therefore, by the definition of $q^{n,-}$ in~\eqref{eq:qn+-defn}, if $q^{n,-}_{t,t+\delta_n}(-2N^5,x)=0$ and $p^n_t(y)=0$ $\forall y\ge N^5$, then \begin{equation} \label{eq:coalA} |\mathcal C^n_t(x,x)| \le 2 |\mathcal C^{n,2}_t(x)|+2{\mathcal M^n_t \choose 2} |\{(x_0,i_0)\in \tfrac 1n \mathbb{Z} \times [N] :|x-x_0|< \mathcal M^n_t n^{-1}, |\mathcal D^n_t (x_0,i_0)|\ge 3\}| . \end{equation} We now use the inequalities~\eqref{eq:lemcoalCB*} and~\eqref{eq:coalA} to give lower and upper bounds on $|\mathcal C^n_t(x,x)|$. We begin with a lower bound. For $x\in \frac 1n \mathbb{Z}$, $i\in [N]$ and $0\le t_1<t_2$, let $\mathcal A^{1,x,i}[t_1,t_2)$ denote the total number of points in the time interval $[t_1,t_2)$ in the Poisson processes $(\mathcal P^{x,i,j})_{j\in [N]\setminus\{i\}, \xi^n_{t_1}(x,j)=1}$. Let $\mathcal A^{2,x,i}[t_1,t_2)$ denote the total number of points in the time interval $[t_1,t_2)$ in the Poisson processes $(\mathcal P^{x,i,j})_{j\in [N]\setminus\{i\}, \mathcal A^{x,j}[t_1,t_2)>0}$. Now fix $t\ge 0$ and $x\in \frac 1n \mathbb{Z}$ and let \begin{align*} A^{(1)}&= |\{i\in [N]:\xi^n_t(x,i)=1,\mathcal A^{x,i}[t,t+\delta_n)=1=\mathcal A^{1,x,i}[t,t+\delta_n)\}|,\\ A^{(2)}&= |\{i\in [N]:\xi^n_t(x,i)=0,\mathcal A^{x,i}[t,t+\delta_n)=1=\mathcal A^{1,x,i}[t,t+\delta_n)\}|,\\ \text{ and } \qquad B &=|\{i\in [N]:\mathcal A^{x,i}[t,t+\delta_n)=1= \mathcal A^{2,x,i}[t,t+\delta_n)\}|. 
\end{align*} Then by~\eqref{eq:lemcoalCB*} and the definition of $\mathcal C^{n,1}_t(x)$, \begin{equation} \label{eq:coalA12B1*} |\mathcal C^{n}_t(x,x)| \ge 2 |\mathcal C^{n,1}_t(x)|\ge 2( A^{(1)}+A^{(2)}-B). \end{equation} Let $(X^n_j)_{j=1}^\infty$ be i.i.d.~with $X^n_1\sim \text{Poisson }(r_n \delta_n(1-(\alpha +1)s_n))$, let $(Y^n_j)_{j=1}^\infty$ be i.i.d.~with $Y^n_1\sim \text{Poisson }(r_n \delta_n (\alpha s_n+N^{-1}s_n (N-2)))$, and let $(Z^n_j)_{j=1}^\infty$ be i.i.d.~with $Z^n_1\sim \text{Poisson }(m r_n\delta_n)$. Recall from~\eqref{eq:snrndefn} that $r_n = \frac 12 n^2 N^{-1}$ and $s_n=2s_0 n^{-2}$. Then conditional on $p^n_t(x)$, $A^{(1)}\sim \text{Bin}(Np^n_t(x),p_1)$ and $A^{(2)}\sim \text{Bin}(N(1-p^n_t(x)),p_2)$, where \begin{align*} p_1&= \p{\sum_{j=1}^{Np^n_t(x)-1} X^n_j =1, \sum_{j=Np^n_t(x)}^{N-1} X^n_j +\sum_{j=1}^{N-1}Y^n_j +\sum_{j=1}^{2N} Z^n_j=0}\\ &=\Big(\tfrac 12 n^2 \delta_n(p^n_t(x)-N^{-1})(1+\mathcal O(n^{-2}))+\mathcal O((n^2 \delta_n(p^n_t(x)-N^{-1}))^2)\Big) \big(1-\mathcal O(n^2 \delta_n)\big)\\ &=\tfrac 12 n^2 \delta_n (p^n_t(x)-N^{-1})(1+\mathcal O(n^{-2}+n^2 \delta_n)) \end{align*} and \begin{align*} p_2 &= \p{\sum_{j=1}^{Np^n_t(x)} X^n_j=1, \sum_{j=Np^n_t(x)+1}^{N-1} X^n_j +\sum_{j=1}^{N-1}Y^n_j +\sum_{j=1}^{2N} Z^n_j=0}\\ &=\tfrac 12 n^2 \delta_n p^n_t(x)(1+\mathcal O(n^{-2}+n^2 \delta_n)). \end{align*} Hence $$ \E{A^{(1)}+A^{(2)} \Big| p^n_t(x)} =\tfrac 12 Nn^2 \delta_n p^n_t(x)(1+\mathcal O(n^{-2}+n^2 \delta_n +N^{-1} p^n_t(x)^{-1})). $$ Recall from~\eqref{eq:paramdefns} that $\delta_n = \lfloor N^{1/2} n^2 \rfloor ^{-1}$. Suppose $n$ is sufficiently large that $(\log N)^2 -\delta_n \ge \frac 12 (\log N)^2$. Then on the event $E_1$, for $t\in [(\log N)^2 -\delta_n ,N^2]$ and $x\le \mu^n_t + D^+_n+1$, by~\eqref{eq:eventE1} and~\eqref{eq:Dn+-defn} we have $N^{-1} p^n_t(x)^{-1}\le 10N^{-1} e^{\kappa(D^+_n+1)}\le 10 e^\kappa N^{-1/2}n^{-1/2}$ and \begin{equation} \label{eq:n12lower} Nn^2 \delta_n p^n_t(x)\ge \tfrac 15 N^{1/2} g(x-\mu^n_t) \ge \tfrac 1 {10} N^{1/2}e^{-\kappa (D^+_n+1)}\ge 2n^{1/2} \end{equation} for $n$ sufficiently large. Hence for $n$ sufficiently large, for $t\in [(\log N)^2-\delta_n,N^2]$ and $x\in \frac 1n \mathbb{Z}$, by conditioning on $p^n_t(x)$ and then applying Theorem~2.3(c) in~\cite{mcdiarmid:1998}, \begin{align} \label{eq:coallemma1} \p{\left\{A^{(1)}+A^{(2)} \le \tfrac 12 N n^2 \delta_n p^n_t(x)(1-n^{-1/5})\right\} \cap \{x\le \mu^n_t+D^+_n+1\}\cap E_1} &\le e^{-\frac 13 n^{-2/5}n^{1/2}} \notag \\ &=e^{-\frac 13 n^{1/10}}. \end{align} For an upper bound on $B$, first let $$ A'=|\{i\in [N]:\mathcal A^{x,i}[t,t+\delta_n)>0\}|. $$ Then $A'\sim \text{Bin}(N,p)$ where \begin{align*} p&= \p{\sum_{j=1}^{N-1}(X^n_j +Y^n_j)+\sum_{j=1}^{2N} Z^n_j >0} =\tfrac 12 n^2 \delta_n (1+2m)(1+\mathcal O(n^2 \delta_n+n^{-2})). \end{align*} Conditional on $A'$, we have $B\le \text{Bin}(A',\frac{A'-1}{(1+2m)N-1})$. By Theorem~2.3(b) in~\cite{mcdiarmid:1998}, for $n$ sufficiently large, \begin{equation} \label{eq:coal1} \p{A'\ge N n^2 \delta_n (1+2m)}\le e^{-\frac 18 N n^2 \delta_n(1+2m)}. \end{equation} Moreover, since $\delta_n =\lfloor N^{1/2}n^2 \rfloor ^{-1}$, letting $B' \sim \text{Bin}(\lfloor 2N^{1/2}(1+2m) \rfloor, 2N^{-1/2})$, for $n$ sufficiently large, \begin{align} \label{eq:coal2} \p{B\ge n^{1/4} , A'\le N n^2 \delta_n (1+2m) }\le \p{B' \ge n^{1/4}} &\le e^{-n^{1/4}}(1+(e-1)2N^{-1/2})^{\lfloor 2N^{1/2}(1+2m)\rfloor } \notag \\ &\le e^{-\frac 12 n^{1/4}}, \end{align} where the second inequality follows by Markov's inequality. 
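As a quick numerical sanity check of these Poisson computations (again not needed for the proof), the sketch below compares the exact value of $p_1$ with the approximation $\tfrac 12 n^2\delta_n(p^n_t(x)-N^{-1})$. Writing $\lambda_1$ for the total rate of the $Np^n_t(x)-1$ clocks whose single point defines the event and $\lambda_2$ for the total rate of the remaining clocks (names used only in this sketch), we have $p_1=\lambda_1 e^{-\lambda_1-\lambda_2}$; the values of $n$, $N$, $m$, $s_0$, $\alpha$ and $p^n_t(x)$ below are illustrative, with $r_n$, $s_n$ and $\delta_n$ as in~\eqref{eq:snrndefn} and~\eqref{eq:paramdefns}.
\begin{verbatim}
import numpy as np

# Illustrative values only.
n, N = 50, 50 ** 4          # so that N >= n^3
m, s0, alpha, p = 1.0, 1.0, 0.5, 0.3
s_n = 2.0 * s0 / n ** 2
r_n = 0.5 * n ** 2 / N
delta_n = 1.0 / np.floor(np.sqrt(N) * n ** 2)

# lam1: total rate of the N p - 1 clocks whose single point defines the event;
# lam2: total rate of all remaining clocks, none of which may ring.
lam1 = (N * p - 1) * r_n * delta_n * (1.0 - (alpha + 1.0) * s_n)
lam2 = (N * (1.0 - p) * r_n * delta_n * (1.0 - (alpha + 1.0) * s_n)
       + (N - 1) * r_n * delta_n * (alpha * s_n + s_n * (N - 2) / N)
       + 2 * N * m * r_n * delta_n)

p1_exact = lam1 * np.exp(-lam1 - lam2)   # P(Poisson(lam1) = 1, Poisson(lam2) = 0)
p1_approx = 0.5 * n ** 2 * delta_n * (p - 1.0 / N)
print(p1_exact, p1_approx, abs(p1_exact / p1_approx - 1.0))
\end{verbatim}
For the values above the printed relative error is of order $10^{-3}$, consistent with the $\mathcal O(n^{-2}+n^2\delta_n)$ correction in the display above.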
Therefore, by~\eqref{eq:coalA12B1*},~\eqref{eq:n12lower},~\eqref{eq:coallemma1},~\eqref{eq:coal1} and~\eqref{eq:coal2}, for $n$ sufficiently large, for $t\in [(\log N)^2-\delta_n ,N^2]$ and $x\in \frac 1n \mathbb{Z}$, \begin{align} \label{eq:coallemma2} &\p{ \left\{|\mathcal C^n_t(x,x)|\le N n^2 \delta_n p^n_t(x)(1-2n^{-1/5})\right\} \cap \{x\le \mu^n_t+D^+_n+1\}\cap E_1} \notag \\ &\quad \le e^{-\frac 13 n^{1/10}}+e^{-\frac 18 N^{1/2}}+ e^{-\frac 12 n^{1/4}}. \end{align} For an upper bound on $|\mathcal C^n_t(x,x)|$, note that by the definition of $\mathcal C^{n,2}_t(x)$ in~\eqref{eq:Cn2tdefn}, conditional on $p^n_t(x)$, $$ |\mathcal C^{n,2}_t(x)|\sim \text{Bin}(Np^n_t(x)(N-1),p'), $$ where \begin{align*} p' &=\mathbb{P}\bigg(\Big(\mathcal P^{x,1,2}+\mathcal S^{x,1,2}+\sum_{k\in [N]\setminus \{1,2\}} \mathcal Q^{x,1,2,k}\Big)[0,\delta_n)>0\bigg)\\ &=r_n \delta_n(1+\mathcal O(r_n \delta_n + n^{-2}N^{-1})). \end{align*} Then $Np^n_t(x)(N-1)p'=\frac 12 N n^2 \delta_n p^n_t(x)(1+\mathcal O(n^2 N^{-1}\delta_n+N^{-1}))$. Hence for $n$ sufficiently large, for $t\in [(\log N)^2-\delta_n ,N^2]$ and $x\in \frac 1n \mathbb{Z}$, by Theorem~2.3(b) in~\cite{mcdiarmid:1998} and~\eqref{eq:n12lower}, \begin{align} \label{eq:coalupper1} &\p{\left\{|\mathcal C^{n,2}_t(x)|\ge \tfrac 12 N n^2 \delta_n p^n_t(x) (1+n^{-1/5}) \right\} \cap \{x\le \mu^n_t+D^+_n+1\} \cap E_1} \notag\\ &\quad \le e^{-\frac 13 n^{-2/5}\cdot n^{1/2}} = e^{-\frac 13 n^{1/10}}. \end{align} We now bound the second term on the right hand side of~\eqref{eq:coalA}. For $x\in \frac 1n \mathbb{Z}$, $i\in [N]$ and $0\le t_1 < t_2$, let $\mathcal B^{x,i}[t_1,t_2)$ denote the total number of points in the time interval $[t_1,t_2)$ in the Poisson processes $(\mathcal P^{x,i',i})_{i'\in [N]\setminus \{i\}}$, $(\mathcal S^{x,i',i})_{i'\in [N]\setminus \{i\}}$, $(\mathcal Q^{x,i',i,i''})_{i', i''\in [N]\setminus \{i\},i'\neq i''}$ and $(\mathcal R^{y,i',x,i})_{i'\in [N], y\in \{x\pm n^{-1}\}}$. (These points correspond to the times at which offspring of the individual $(x,i)$ may replace another individual.) Let $\mathcal B^{1,x,i}[t_1,t_2)$ denote the total number of points in the time interval $[t_1,t_2)$ in $(\mathcal P^{x,i',i})_{i'\in [N]\setminus \{i\}, \mathcal B^{x,i'}[t_1,t_2)>0}$, $(\mathcal S^{x,i',i})_{i'\in [N]\setminus \{i\}, \mathcal B^{x,i'}[t_1,t_2)>0}$, $(\mathcal Q^{x,i',i,i''})_{i',i''\in [N]\setminus \{i\}, i''\neq i', \mathcal B^{x,i'}[t_1,t_2)>0}$ and $(\mathcal R^{y,i',x,i})_{i'\in [N], y\in \{x\pm n^{-1}\}, \mathcal B^{y,i'}[t_1,t_2)>0}$. Then fix $x\in \frac 1n \mathbb{Z}$ and $t\ge 0$, and let \begin{align*} C^{(1)}&= |\{i\in [N]:\mathcal B^{x,i}[t,t+\delta_n)\ge 2\}|\\ \text{and }\quad C^{(2)}&= |\{i\in [N]:\mathcal B^{x,i}[t,t+\delta_n)=1=\mathcal B^{1,x,i}[t,t+\delta_n)\}|. \end{align*} By the definition of $\mathcal D^n_t(x,i)$ in~\eqref{eq:Dntxidef}, we have that \begin{equation} \label{eq:DC1C2bound} |\{i\in [N]: |\mathcal D^n_t(x,i)|\ge 3\}| \le C^{(1)}+C^{(2)}. \end{equation} Then $C^{(1)} \sim \text{Bin}(N, p'')$, where \begin{align*} p''=\p{\mathcal B^{x,1}[t,t+\delta_n)\ge 2} &\le (r_n \delta_n N (1+2m))^2 =\tfrac 14 n^4 \delta_n^2 (1+2m)^2. \end{align*} Therefore, by Markov's inequality and since $n^4 \delta_n^2 \le 2N^{-1}$ for $n$ sufficiently large, $$ \p{C^{(1)} \ge n^{1/4}} \le e^{-n^{1/4}}(1+(e-1)\tfrac 14 n^4 \delta_n^2 (1+2m)^2)^N \le e^{-\frac 12 n^{1/4}} $$ for $n$ sufficiently large. For $y \in \frac 1n \mathbb{Z}$, let $D_y =|\{i\in [N]:\mathcal B^{y,i}[t,t+\delta_n)>0\}|$. 
Then conditional on $D_x$, $D_{x-n^{-1}}$ and $D_{x+n^{-1}}$ we have $C^{(2)} \le \text{Bin}(D_x,\frac{(D_x-1)(1-2N^{-1}s_n)+m(D_{x-n^{-1}}+D_{x+n^{-1}})}{ (1-2N^{-1}s_n)(N-1)+2mN})$. By the same argument as in~\eqref{eq:coal1} and~\eqref{eq:coal2}, it follows that for $n$ sufficiently large, $$ \p{C^{(2)} \ge n^{1/4}}\le 3e^{-\frac 18 N n^2 \delta_n(1+2m)}+e^{-\frac 12 n^{1/4}}. $$ Therefore, by~\eqref{eq:DC1C2bound}, for $n$ sufficiently large, for $x\in \frac 1n \mathbb{Z}$ and $t\ge 0$, \begin{align} \label{eq:Dmorethan3bound} \p{|\{i \in [N]: |\mathcal D^n_t(x,i)|\ge 3\}|\ge 2n^{1/4} } \le 3e^{-\frac 18 N n^2 \delta_n(1+2m)}+2e^{-\frac 12 n^{1/4}}. \end{align} For $K\in \mathbb{N}$, let $S^K_n \sim \text{Poisson}((2m+1)N r_n (K-1) \delta_n)$. Then since a set of $k$ individuals produces offspring individuals at total rate at most $(2m+1)N r_n k$, for $i\in [N]$, \begin{align*} \p{|\mathcal D^n_t(x,i)|\ge K} \le \p{S^K_n\ge K-1} &\le ((2m+1)N r_n (K-1)\delta_n )^{K-1}\\ &\le ((2m+1) (K-1))^{K-1}N^{- (K-1)/2} \end{align*} for $n$ sufficiently large. Therefore, by the definition of $\mathcal M^n_t$ in~\eqref{eq:Mntdefn}, for $\ell \in \mathbb{N}$, for $K\in \mathbb{N}$ sufficiently large that $7-\frac 12 (K-1)<-\ell$, for $t\ge 0$, \begin{align} \label{eq:Dmaxbound} \p{\mathcal M^n_t\ge K} \le \sum_{x\in \frac 1n \mathbb{Z} \cap [-2N^5,N^5], i\in [N]} \p{|\mathcal D^n_t(x,i)|\ge K} \le \tfrac 13 \left( \frac n N \right) ^\ell \end{align} for $n$ sufficiently large. For $x\ge -N^5$ and $t\ge 0$, by Corollary~\ref{cor:qnMa} with $a=-(1+\alpha)s_0$, and then by Markov's inequality, \begin{align} \label{eq:coallemmanew2} \E{q^{n,-}_{t,t+\delta_n}(-2N^5,x)} \le e^{(1+\alpha)s_0 \delta_n}\langle \mathds{1}_{\cdot \le -2N^5},\phi^{\delta_n,x}_0 \rangle_n &\le e^{(1+\alpha)s_0 \delta_n}\Esubb{0}{e^{X^n_{m\delta_n}}}e^{-N^5} \notag \\ &\le e^{1-N^5} \end{align} for $n$ sufficiently large, by Lemma~\ref{lem:Xnmgf}. By Lemma~\ref{lem:p01}, for $t\le N^2$, $\p{p^n_t(y)=0 \, \forall y\ge N^5}\ge 1-e^{-N^5}$. By~\eqref{eq:coalA},~\eqref{eq:n12lower},~\eqref{eq:coalupper1},~\eqref{eq:Dmorethan3bound} and~\eqref{eq:Dmaxbound}, it now follows that for $\ell \in \mathbb{N}$, for $n$ sufficiently large, for $x\in \frac 1n \mathbb{Z}$ with $x\ge -N^5$ and $t\in [(\log N)^2 -\delta_n ,N^2]$, \begin{equation} \label{eq:Cntxxupper} \p{ \left\{|\mathcal C^n_t(x,x)|\ge N n^2 \delta_n p^n_t(x)(1+2n^{-1/5})\right\} \cap \{x\le \mu^n_t+D^+_n+1\}\cap E_1} \le \tfrac 12 \left( \frac n N \right)^\ell. \end{equation} By~\eqref{eq:coallemma2}, we now have that~\eqref{eq:Bjbound} holds with $j=1$. For $t\ge 0$ and $x,y\in \frac 1n \mathbb{Z}$ with $|x-y|=n^{-1}$, let \begin{align*} \mathcal C^{n,1}_t(x,y) &= \{(i,j) \in [N]^2 : \mathcal R^{x,i,y,j}[t,t+\delta_n)=1=\mathcal A^{x,i}[t,t+\delta_n), \mathcal A^{y,j}[t,t+\delta_n)=0, \xi^n_t(y,j)=1\},\\ \mathcal C^{n,2}_t (x,y) &= \{(i,j) \in [N]^2 :\mathcal R^{x,i,y,j}[t,t+\delta_n)>0, \xi^n_t(y,j)=1\}. \end{align*} Then $|\mathcal C^{n}_t(x,x+n^{-1})|\ge |\mathcal C^{n,1}_t(x,x+n^{-1})|+ |\mathcal C^{n,1}_t(x+n^{-1},x)|$. If $q^{n,-}_{t,t+\delta_n}(-2N^5,x)=0$ and $p^n_t(y)=0$ $\forall y\ge N^5$, then by the same argument as for~\eqref{eq:coalA}, \begin{align*} |\mathcal C^{n}_t(x,x+n^{-1})| &\le |\mathcal C^{n,2}_t(x,x+n^{-1})|+|\mathcal C^{n,2}_t(x+n^{-1},x)|\\ &\quad +{\mathcal M^n_t \choose 2} |\{(x_0,i_0)\in \tfrac 1n \mathbb{Z} \times [N]:|x-x_0|< \mathcal M^n_t n^{-1}, |\mathcal D^n_t (x_0,i_0)|\ge 3\}|. 
\end{align*} By the same argument as for~\eqref{eq:coallemma2} and~\eqref{eq:Cntxxupper}, it follows that for $n$ sufficiently large, for $x\in \frac 1n \mathbb{Z}$ with $x\ge -N^5$ and $t\in [(\log N)^2-\delta_n,N^2]$,~\eqref{eq:Bjbound} holds with $j=2$. Suppose for some $k>1$ that $x,y\in \frac 1n \mathbb{Z}$ with $x\ge -N^5$ and $|x-y|=kn^{-1}$. Take $(i,j)\in \mathcal C^n_t(x,y)$, and let $(x_0,i_0)=(\zeta^{n,t+\delta_n}_{\delta_n} (x,i),\theta^{n,t+\delta_n}_{\delta_n} (x,i))$. Since $(\zeta^{n,t+\delta_n}_s(x,i),\theta^{n,t+\delta_n}_s(x,i))\in \mathcal D^n_t(x_0,i_0)$ and $(\zeta^{n,t+\delta_n}_s(y,j),\theta^{n,t+\delta_n}_s(y,j))\in \mathcal D^n_t(x_0,i_0)$ $\forall s\in [0,\delta_n]$, we have $(x,i),(y,j)\in \mathcal D^n_t(x_0,i_0)$ and $|\mathcal D^n_t(x_0,i_0)|\ge \max (k,n|x_0-x|)+1\ge 3.$ If $p^n_t(y)=0$ $\forall y\ge N^5$ and $q^{n,-}_{t,t+\delta_n}(-2N^5,x)=0$, then by~\eqref{eq:Mntdefn} it follows that $k<\mathcal M^n_t$ and $|x_0-x|<\mathcal M^n_t n^{-1}$. Therefore $$ |\mathcal C^n_t(x,y)| \le \mathds{1}_{|x-y|<\mathcal M^n_t n^{-1}}{\mathcal M^n_t \choose 2} |\{(x_0,i_0)\in \tfrac 1n \mathbb{Z}\times [N] : |x_0-x|< \mathcal M^n_t n^{-1}, |\mathcal D^n_t(x_0,i_0)|\ge 3\}|. $$ By Lemma~\ref{lem:p01},~\eqref{eq:coallemmanew2},~\eqref{eq:n12lower},~\eqref{eq:Dmorethan3bound} and~\eqref{eq:Dmaxbound}, it follows that for $K\in \mathbb{N}$ sufficiently large, for $n$ sufficiently large, for $x\ge -N^5$ and $t\in [(\log N)^2-\delta_n,N^2]$,~\eqref{eq:Bjbound} holds with $j=3$. Finally, suppose $x,y,y' \in \frac 1n \mathbb{Z}$ with $x \ge -N^5$. Take $(i,j,j')\in \mathcal C^n_t(x,y,y')$, and let $(x_0,i_0)=(\zeta^{n,t+\delta_n}_{\delta_n}(x,i),\theta^{n,t+\delta_n}_{\delta_n}(x,i))$. Suppose that $p^n_t(y)=0$ $\forall y\ge N^5$ and $q^{n,-}_{t,t+\delta_n}(-2N^5,x)=0$. Then $(x,i),(y,j),(y',j')\in \mathcal D^n_t(x_0,i_0)$, and moreover $|x-x_0|<\mathcal M^n_t n^{-1}$ and $|x-y|\vee |x-y'|<\mathcal M^n_t n^{-1}$. Therefore \begin{align*} & |\mathcal C^n_t(x,y,y')|\\ &\quad \le \mathds{1}_{|x-y|\vee |x-y'|<\mathcal M^n_t n^{-1}} (\mathcal M^n_t)^3 |\{(x_0,i_0)\in \tfrac 1n \mathbb{Z}\times [N]: |x_0-x|< \mathcal M^n_t n^{-1}, |\mathcal D^n_t(x_0,i_0)|\ge 3\}|. \end{align*} By Lemma~\ref{lem:p01},~\eqref{eq:coallemmanew2},~\eqref{eq:n12lower},~\eqref{eq:Dmorethan3bound} and~\eqref{eq:Dmaxbound}, it follows that for $K\in \mathbb{N}$ sufficiently large, for $n$ sufficiently large, for $x\ge -N^5$ and $t\in [(\log N)^2 -\delta_n,N^2]$,~\eqref{eq:Bjbound} holds with $j=4$. This completes the proof. \end{proof} \section{Event $E_4$ occurs with high probability} \label{sec:eventE4} In this section, we complete the proof of Proposition~\ref{prop:eventE} by proving the following result. \begin{prop} \label{prop:eventE4} Suppose for some $a_1>1$, $N\ge n^{a_1}$ for $n$ sufficiently large. For $b_1>0$ sufficiently small, $b_2>0$ and $t^*\in \mathbb{N}$, and for $K\in \mathbb{N}$ sufficiently large, the following holds for $n$ sufficiently large: if condition~\eqref{eq:conditionA} holds, then $$ \p{(E_4)^c } \le \left( \frac n N \right)^2. $$ \end{prop} Proposition~\ref{prop:eventE} now follows directly from Propositions~\ref{prop:eventE1},~\ref{prop:eventE2},~\ref{prop:eventE3} and~\ref{prop:eventE4}. From now on in this section, we assume that there exists $a_1>1$ such that $N\ge n^{a_1}$ for $n$ sufficiently large. We begin by proving the following lemma, which we will then use iteratively to show that with high probability no lineages consistently stay far ahead of the front. Fix $t^*\in \mathbb{N}$.
\begin{lemma} \label{lem:decayintip} There exist $c\in (0,1)$ and $\epsilon \in (0,1)$ such that for $K\in \mathbb{N}$ sufficiently large, the following holds. Suppose $q^n_0$ is random, and define the event $$ A=\left\{ \sup_{t\in [0,t^*],\, x\in \frac 1n \mathbb{Z}}|p^n_t(x)-g(x-\mu^n_t)|\le \epsilon \right\}\cap \left\{\sup_{t\in [0,t^*]}\mu^n_t \le 2\nu t^*\right\}. $$ Then \begin{equation} \label{eq:decayintip} \sup_{z\ge K}\E{q^n_{t^*}(z)}\le c \sup_{x\in \frac 1n \mathbb{Z}} \E{q^n_0(x)}+4s_0 t^* \p{A^c}. \end{equation} \end{lemma} \begin{proof} Let $\delta = \p{A^c}$. For $a\in \R$, $t\ge 0$ and $z\in \frac 1n \mathbb{Z}$, by Lemma~\ref{lem:qnphi}, $(M^n_s(\phi^{t,z,a s_0}))_{s\ge 0}$ is a martingale with $M^n_0(\phi^{t,z,as_0 })=0$. Hence by Corollary~\ref{cor:qnMa}, \begin{align} \label{eq:Eqn} \E{q^n_t(z)} &=e^{-as_0 t}\langle \E{q^n_0},\phi^{t,z}_0\rangle_n +s_0 \int_0^t e^{-as_0 (t-s)}\langle \E{q^n_s((1-p^n_s)(2p^n_s-1+\alpha )+a)}, \phi^{t,z}_s\rangle_n ds. \end{align} Take $a\in (0,1-\alpha)$ and then take $\epsilon \in (0,\frac 12 (1-\alpha))$ sufficiently small that $(1-\epsilon)(2\epsilon-1+\alpha )<-a$. Take $K\in \mathbb{N}$ sufficiently large that $1-g(K/2-2t^* \nu )-\epsilon>0$, $e^{-as_0 t^*}+2s_0 t^*e^{(2s_0 +m) t^*-K/2}<1$ and $$ (1-g(x-2\nu t^*)-\epsilon)(2(g(x-2\nu t^*)+\epsilon)-1+\alpha)\le -a \qquad \text{for }x\ge K/2. $$ Then on the event $A$, $$ (1-p^n_s(x))(2p^n_s(x)-1+\alpha )+a\le 0 \qquad \forall \, x\ge K/2, \; s\in [0,t^*]. $$ It follows that for $x\ge K/2$ and $s\in [0,t^*]$, since $p^n_s(x)\in [0,1]$, \begin{align*} \E{q^n_s(x)((1-p^n_s(x))(2p^n_s(x)-1+\alpha )+a)} \le \E{q^n_s(x) (1+\alpha +a) \mathds{1}_{A^c}} &\le 2\delta , \end{align*} and for $x\le K/2$ and $s\in [0,t^*]$, \begin{align*} \E{q^n_s(x)((1-p^n_s(x))(2p^n_s(x)-1+\alpha )+a)} \le \E{q^n_s(x) (1+\alpha +a) } &\le 2\E{q^n_s(x)}. \end{align*} Hence for $t\in [0,t^*]$ and $z\in \frac 1n \mathbb{Z}$, substituting into~\eqref{eq:Eqn}, \begin{align} \label{eq:Eqntbound} \E{q^n_t(z)} &\le e^{-as_0 t}\langle \E{q^n_0},\phi^{t,z}_0\rangle_n +s_0 \int_0^t e^{-as_0 (t-s)}\langle 2\delta +2\sup_{y\in \frac 1n \mathbb{Z}}\E{q^n_s(y)}\mathds{1}_{\cdot \leq K/2}, \phi^{t,z}_s\rangle_n ds \notag \\ &\le e^{-as_0 t}\sup_{x\in \frac 1n \mathbb{Z}} \E{q^n_0(x)}+2s_0 t^* \delta +2s_0 \int_0^t \sup_{y\in \frac 1n \mathbb{Z}}\E{q^n_s(y)}\psubb{z}{X^n_{m(t-s)}\le K/2} ds. \end{align} In particular, for $t\in [0,t^*]$, since $a>0$, \begin{align*} \sup_{z\in \frac 1n \mathbb{Z}}\E{q^n_t(z)} \le \sup_{x\in \frac 1n \mathbb{Z}}\E{q^n_0(x)}+2s_0 t^* \delta +2s_0 \int_0^t \sup_{y \in \frac 1n \mathbb{Z}}\E{q^n_s(y)}ds. \end{align*} By Gronwall's inequality, it follows that for $t\in [0,t^*]$, \begin{equation} \label{eq:Eqngronwall} \sup_{z\in \frac 1n \mathbb{Z}}\E{q^n_t(z)} \le \left( \sup_{x\in \frac 1n \mathbb{Z}}\E{q^n_0(x)}+2s_0 t^* \delta \right) e^{2s_0 t}. \end{equation} Therefore, substituting the bound in~\eqref{eq:Eqngronwall} into~\eqref{eq:Eqntbound}, for $t\in [0,t^*]$ and $z\in \frac 1n \mathbb{Z}$ with $z\ge K$, \begin{align*} \E{q^n_t(z)} &\le e^{-as_0 t}\sup_{x\in \frac 1n \mathbb{Z}} \E{q^n_0(x)}+2s_0 t^* \delta \\ &\quad +2s_0 \int_0^t e^{2s_0 s} \left( \sup_{x\in \frac 1n \mathbb{Z}}\E{q^n_0(x)}+2s_0 t^* \delta \right)\psubb{K}{X^n_{m(t-s)}\le K/2} ds. \end{align*} For $0\le s \le t \le t^*$, by Markov's inequality and Lemma~\ref{lem:Xnmgf}, $$ \psubb{K}{X^n_{m(t-s)}\le K/2}=\psubb{0}{X^n_{m(t-s)}\ge K/2} \le e^{-K/2}\E{e^{X^n_{m(t-s)}}}\le e^{mt^*-K/2} $$ for $n$ sufficiently large. 
Hence for $z\in \frac 1n \mathbb{Z}$ with $z\ge K$, $$ \E{q^n_{t^*}(z)}\le (e^{-as_0 t^*}+2s_0 t^* e^{(2s_0+m) t^* -K/2})\sup_{x\in \frac 1n \mathbb{Z}} \E{q^n_0(x)} +2s_0 t^* \delta (1+2s_0 t^* e^{(2s_0+m) t^* -K/2}), $$ which completes the proof, since we chose $K$ sufficiently large that $e^{-as_0 t^*}+2s_0 t^* e^{(2s_0+m)t^*-K/2}<1$. \end{proof} Take $c\in (0,1)$ and $\epsilon \in (0,1)$ as in Lemma~\ref{lem:decayintip}. For $t\ge 0$, define the sigma-algebra $\mathcal F'_t=\sigma((p^n_s(x))_{s\in [0,t], x\in \frac 1n \mathbb{Z}})$. The following result will easily imply Proposition~\ref{prop:eventE4}. \begin{prop} \label{prop:eventE4int} For $\ell \in \mathbb{N}$, there exists $\ell '\in \mathbb{N}$ such that for $K\in \mathbb{N}$ sufficiently large and $c_2>0$, the following holds for $n$ sufficiently large. Take $t\in \delta_n \mathbb{N}_0\cap [0,T^-_n]$ and let $t'=T_n-t-t^* \lfloor (t^*)^{-1} K\log N \rfloor$. Suppose $p^n_{t'}(x)=0$ $\forall x\ge N^5$ and $\p{(E_1)^c | \mathcal F'_{t'}}\le \left( \frac n N \right)^{\ell '}$. Then $$ \p{r^{n,K,t^*}_{ K \log N , T_n-t }(x)=0 \; \forall x\in \tfrac 1n \mathbb{Z} \Big|\mathcal F'_{t'}}\ge 1-\left( \frac n N \right)^{\ell}. $$ \end{prop} \begin{proof} Take $\ell '$ sufficiently large that $nN^6 \left( \frac n N \right)^{\ell '} \le \left( \frac n N \right)^{\ell+1}$ for $n$ sufficiently large. Then take $c'\in (c,1)$ and take $K>t^* (\ell '+1)(-\log c')^{-1}$ sufficiently large that Lemma~\ref{lem:decayintip} holds. Suppose \begin{equation} \label{eq:pE1cbound} \p{(E_1)^c | \mathcal F'_{t'}}\le \left( \frac n N \right)^{\ell '}. \end{equation} For $k\in \mathbb{N}$ and $x\in \frac 1n \mathbb{Z}$, let $r^n_k(x)=r^{n,K,t^*}_{kt^*,t'+kt^*}(x)$. Take $k\in \mathbb{N}$ with $kt^*\le K \log N$. Then by the definition of $r^{n,y,\ell}_{s,t}$ in~\eqref{eq:rnystdefn}, \begin{align*} \sup_{z\in \frac 1n \mathbb{Z}}\E{r^n_k(z) \Big|\mathcal F'_{t'}} &=\sup_{z\in \frac 1n \mathbb{Z}}\E{r^n_k(z)\mathds{1}_{z\ge \mu^n_{{t'}+kt^*}+K} \Big|\mathcal F'_{t'}}\\ &\le \sup_{z\in \frac 1n \mathbb{Z},\, z\ge \mu^n_{t'}+\nu kt^* +K-\nu t^* }\E{r^n_k(z) \big|\mathcal F'_{t'}}+\p{(E_1)^c | \mathcal F'_{t'}} \end{align*} for $n$ sufficiently large, by the definition of the event $E_1$ in~\eqref{eq:eventE1}. Therefore, by~\eqref{eq:pE1cbound} and then by Lemma~\ref{lem:decayintip} with $q^n_0=r^n_{k-1}(\cdot +\mu^n_{t'}+\lfloor \nu (k-1)t^* n\rfloor n^{-1})$, \begin{align} \label{eq:tipdecay*} \sup_{z\in \frac 1n \mathbb{Z}} \E{r^n_k(z)\big|\mathcal F'_{t'}} &\le \sup_{z\in \frac 1n \mathbb{Z},\, z\ge \mu^n_{t'}+\lfloor \nu (k-1)t^* n\rfloor n^{-1}+K} \E{r^n_k(z)\big|\mathcal F'_{t'}}+\left( \frac n N \right)^{\ell '} \notag \\ &\le c \sup_{x\in \frac 1n \mathbb{Z}} \E{r^n_{k-1}(x)\big|\mathcal F'_{t'}} +(1+4s_0t^*) \left( \frac n N \right)^{\ell '} \end{align} for $n$ sufficiently large. Recall that we chose $c'\in (c,1)$, and let $$ k^*=\min \left\{k\in \mathbb{N}_0 : \sup_{x\in \frac 1n \mathbb{Z}}\E{r^n_k(x)\big|\mathcal F'_{t'}}\le \frac {1+4s_0t^*} {c'-c} \left( \frac n N \right)^{\ell '} \right\}. 
$$ Then for $k\in \mathbb{N}$ with $k\le \min(k^*, (t^*)^{-1} K\log N)$, we have $ (c'-c)\sup_{x\in \frac 1n \mathbb{Z}}\E{r^n_{k-1}(x) \big|\mathcal F'_{t'}}\ge (1+4s_0t^*) \left( \frac n N \right)^{\ell '}$ by the definition of $k^*$, and so by~\eqref{eq:tipdecay*}, $$ \sup_{x\in \frac 1n \mathbb{Z}}\E{r^n_{k}(x)|\mathcal F'_{t'}}\le c' \sup_{x\in \frac 1n \mathbb{Z}}\E{r^n_{k-1}(x)|\mathcal F'_{t'}} \le \ldots \le (c')^{k} \sup_{x\in \frac 1n \mathbb{Z}}\E{r^n_{0}(x)|\mathcal F'_{t'}} \le (c')^{k}. $$ Hence for $n$ sufficiently large, since $\lfloor (t^*)^{-1}K \log N \rfloor > (\ell '+1)(-\log c')^{-1} \log (N/n)$ by our choice of $K$, we have $k^*< (t^*)^{-1}K \log N$. For $k \in \mathbb{N} \cap [k^*+1, (t^*)^{-1}K \log N]$, if $\sup_{x\in \frac 1n \mathbb{Z}}\E{r^n_{k-1}(x)|\mathcal F'_{t'}}\le \frac {1+4s_0t^*} {c'-c} \left( \frac n N \right)^{\ell '}$ then by~\eqref{eq:tipdecay*}, \begin{equation} \label{eq:tipdecaydagger} \sup_{x\in \frac 1n \mathbb{Z}}\E{r^n_k(x)\big| \mathcal F'_{t'}}\le \left( \frac {c} {c'-c}+1\right)(1+4s_0t^*) \left( \frac n N \right)^{\ell '} \le \frac {1+4s_0t^*} {c'-c}\left( \frac n N \right)^{\ell '} \end{equation} since $c'<1$. Therefore, by induction,~\eqref{eq:tipdecaydagger} holds for all $k \in \mathbb{N} \cap [k^*, (t^*)^{-1}K\log N]$. By a union bound, and then by Lemma~\ref{lem:p01} and since $p^n_{t'}(x)=0$ $\forall x\ge N^5$, and by~\eqref{eq:tipdecay*}, \begin{align*} &\p{\sup_{x\in \frac 1n \mathbb{Z}}r^n_{\lfloor (t^*)^{-1}K \log N \rfloor }(x)>0 \bigg| \mathcal F'_{t'}}\\ &\le \p{\exists x\ge 2N^5 : p^n_{T_n-t }(x)>0 \Big|\mathcal F'_{t'}} +\p{\mu^n_{T_n-t}\le 0 \Big| \mathcal F'_{t'}}\\ &\qquad +\sum_{x\in \frac 1n \mathbb{Z}\cap [K, 2N^5]}N \E{r^n_{\lfloor (t^*)^{-1}K \log N \rfloor }(x)\Big|\mathcal F'_{t'}}\\ &\le e^{-N^5}+\p{(E_1)^c | \mathcal F'_{t'}}+ 2nN^5\cdot N \frac {1+4s_0t^*} {c'-c} \left( \frac n N \right)^{\ell '}\\ &\le \left( \frac n N \right)^{\ell} \end{align*} for $n$ sufficiently large, by~\eqref{eq:pE1cbound} and our choice of $\ell '$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:eventE4}] Take $\ell \in \mathbb{N}$ sufficiently large that $\left( \frac n N \right)^{\ell-2}N^2 \delta_n^{-1}\le \left( \frac n N \right)^3$ for $n$ sufficiently large. Take $\ell ' \in \mathbb{N}$ and $K\in \mathbb{N}$ sufficiently large that Proposition~\ref{prop:eventE4int} holds. By Proposition~\ref{prop:eventE1}, by taking $b_1,c_2>0$ sufficiently small, $\p{(E_1)^c}\le\left( \frac n N \right)^{\ell+\ell'}$ for $n$ sufficiently large. For $t\in \delta_n \mathbb{N}_0 \cap [0,T^-_n]$, let $$ D_t =\Big\{r^{n,K,t^*}_{ K \log N , T_n-t}(x)=0 \; \forall x\in \tfrac 1n \mathbb{Z} \Big\}. $$ Then by Proposition~\ref{prop:eventE4int}, letting $t'=T_n-t-t^* \lfloor (t^*)^{-1} K \log N \rfloor$, $$ \p{D^c_t \big| \mathcal F'_{t' }}\le \left( \frac n N \right)^{\ell }+\mathds{1}_{\{\p{(E_1)^c |\mathcal F'_{t'}}>\left( \frac n N \right)^{\ell'}\}} +\mathds{1}_{\{\exists x \ge N^5:p^n_{t'}(x)>0\}}. $$ Hence by Markov's inequality and Lemma~\ref{lem:p01}, $$ \p{D^c_t }\le \left( \frac n N \right)^{\ell }+\left( \frac N n \right)^{\ell '}\p{E^c_1} +e^{-N^5} \le 3\left( \frac n N \right)^{\ell } $$ for $n$ sufficiently large. 
Therefore, by a union bound and then by Markov's inequality, $$ \p{(E_4)^c}\le \sum_{t\in \delta_n \mathbb{N}_0 \cap [0,T^-_n]} \p{\p{D^c_t | \mathcal F}\ge \left( \frac n N \right)^2} \le \sum_{t\in \delta_n \mathbb{N}_0 \cap [0,T^-_n]}\left( \frac N n \right)^2 \p{D^c_t}\le \left( \frac n N \right)^2 $$ for $n$ sufficiently large, by our choice of $\ell$, which completes the proof. \end{proof} \section{Proof of Theorem~\ref{thm:statdist}} \label{sec:thmstatdist} The proof of Theorem~\ref{thm:statdist} uses results from Sections~\ref{sec:mainproof},~\ref{sec:eventE1},~\ref{sec:eventE2} and~\ref{sec:eventE4}. \begin{proof}[Proof of Theorem~\ref{thm:statdist}] Recall from~\eqref{eq:paramdefns} that $\delta_n=\lfloor N^{1/2}n^2 \rfloor^{-1}$, and let $S_n=T_n-\delta_n \lfloor \delta_n^{-1} T'_n\rfloor $. Take $b_1,c_2>0$ sufficiently small and $t^*,K\in \mathbb{N}$ sufficiently large that Proposition~\ref{prop:eventE1} holds with $\ell=1$ and Propositions~\ref{prop:eventE2} and~\ref{prop:eventE4} hold. Assume $c_2<a_0$ (recall that $(\log N)^{a_0}\le \log n$ for $n$ sufficiently large). Condition on $\mathcal F_0$, and suppose the event $E'_1 \cap E'_2 \cap E_4$ occurs, so in particular by~\eqref{eq:eventE1} and~\eqref{eq:E1'defn}, \begin{equation} \label{eq:pnSn} |p^n_{S_n}(x)-g(x-\mu^n_{S_n})|\le e^{-(\log N)^{c_2}}\; \forall x\in \tfrac 1n \mathbb{Z}. \end{equation} Fix $x_0\in \R$ and take $\epsilon >0$. Define $v_0 :\frac 1n \mathbb{Z} \rightarrow [0,1]$ by letting \begin{equation} \label{eq:v0defn} v_0(y)= \begin{cases} p^n_{S_n}(y) \quad &\text{for }y< \mu^n_{S_n}+x_0,\\ \min(p^n_{S_n}(y), N^{-1} \lfloor N h(y)\rfloor ) \quad &\text{for }y\in [\mu^n_{S_n}+x_0,\mu^n_{S_n}+x_0+\epsilon],\\ 0 &\text{for }y> \mu^n_{S_n}+x_0+\epsilon, \end{cases} \end{equation} where $h : [\mu^n_{S_n}+x_0,\mu^n_{S_n}+x_0+\epsilon]\rightarrow [0,1]$ is linear with $h (\mu^n_{S_n}+x_0)=p^n_{S_n}(\mu^n_{S_n}+x_0)$ and $h (\mu^n_{S_n}+x_0+\epsilon)=0$. For each $y\in \frac 1n \mathbb{Z}$, take $I_y \subseteq \{(y,i):\xi^n_{S_n}(y,i)=1\}$ such that $|I_y|=N v_0(y)$. Then let $I = \cup_{y\in \frac 1n \mathbb{Z}}I_y$. For $t\ge S_n$ and $x\in \frac 1n \mathbb{Z}$, let $$ \tilde q^{n}_t(x)=N^{-1} |\{i\in [N]:(\zeta^{n,t}_{t-S_n}(x,i), \theta^{n,t}_{t-S_n}(x,i))\in I\}|, $$ the proportion of individuals at $x$ at time $t$ which are descended from the set $I$ at time $S_n$. Recall the definition of $q^{n,-}$ in~\eqref{eq:qn+-defn} and note that for $t\ge S_n$ and $x\in \frac 1n \mathbb{Z}$, \begin{equation} \label{eq:tildeqineqs} q^{n,-}_{S_n,t}(\mu^n_{S_n}+x_0,x)\le \tilde q^{n}_t(x) \le q^{n,-}_{S_n,t}(\mu^n_{S_n}+x_0+\epsilon ,x). \end{equation} Let $(\tilde v^{n}_t)_{t\ge S_n}$ solve $$ \begin{cases} \partial_t \tilde v^{n}_t =\tfrac 12 m \Delta_n \tilde v^{n}_t+s_0 \tilde v^{n}_t(1- u_{S_n,t}^n)(2u_{S_n,t}^n-1+\alpha ) \quad & \text{for }t>S_n,\\ \tilde v^{n}_{S_n}= v_0, \end{cases} $$ where $(u^n_{S_n,t})_{t\ge S_n}$ is defined as in~\eqref{eq:unttsdef}. Recall the definition of $\gamma_n$ in~\eqref{eq:paramdefns}. Note that by Proposition~\ref{prop:pnun}, for $n$ sufficiently large, for $t\le S_n +\gamma_n$, \begin{equation} \label{eq:thmstatqnvn} \p{\sup_{x\in \frac 1n \mathbb{Z}\cap [-N^5,N^5]}|\tilde q^{n}_t(x)-\tilde v^{n}_t(x)|\ge \left( \frac n N \right)^{1/4}} \le \frac n N. 
\end{equation} For $t\ge 0$ and $x\in \R$, let $$ \tilde v _t(x)=g(x-\mu^n_{S_n}-\nu t)\Esub{x-\mu^n_{S_n}-\nu t}{\bar{v}_0(Z_t +\mu^n_{S_n})g(Z_t)^{-1}}, $$ where $\bar{v}_0$ is the linear interpolation of $v_0$, and $(Z_t)_{t\ge 0}$ is defined in~\eqref{eq:SDE}. By Lemma~\ref{lem:vnvbound} and the definition of the event $E_1'$ in~\eqref{eq:E1'defn}, for $n$ sufficiently large, \begin{align*} &\sup_{x\in \frac 1n \mathbb{Z},\, t\in [0,\gamma_n]} |\tilde v^{n}_{S_n+t}(x)-\tilde v_t(x)|\\ &\le (C_7 (n^{-1/3}+e^{-(\log N)^{c_2}})+2\sup_{x_1,x_2\in\frac 1n \mathbb{Z},|x_1-x_2|\leq n^{-1/3}}|v_0(x_1)-v_0(x_2)|)e^{5s_0 \gamma_n}\gamma_n^2. \end{align*} By the definition of $v_0$ in~\eqref{eq:v0defn} and by~\eqref{eq:pnSn}, $$ \sup_{x_1,x_2\in\frac 1n \mathbb{Z},|x_1-x_2|\leq n^{-1/3}}|v_0(x_1)-v_0(x_2)| \le 2(2e^{-(\log N)^{c_2}}+n^{-1/3}\|\nabla g\|_\infty )+\epsilon^{-1} n^{-1/3}+N^{-1}. $$ Therefore, for $n$ sufficiently large, for $t\in [0, \gamma_n]$ and $x\in \frac 1n \mathbb{Z}$ with $|x-\mu^n_{S_n+t}|\le d_n$, \begin{equation} \label{eq:tildevvclose} \Big| \frac{\tilde v^{n}_{S_n+t}(x)}{g(x-\mu^n_{S_n}-\nu t)} - \Esub{x-\mu^n_{S_n}-\nu t}{\bar{v}_0(Z_t +\mu^n_{S_n})g(Z_t)^{-1}}\Big|\le e^{-\frac 12 (\log N)^{c_2}}. \end{equation} From now on, we consider two different cases; suppose first that $T'_n\le \gamma_n$. Recalling~\eqref{eq:tildeqineqs} and~\eqref{eq:thmstatqnvn}, suppose for all $x\in \frac 1n \mathbb{Z} \cap [-N^5,N^5]$ that $$ q^{n,-}_{S_n,T_n}(\mu^n_{S_n}+x_0,x)\le \tilde v^{n}_{T_n}(x)+\left( \frac n N \right)^{1/4} \quad \text{and}\quad q^{n,-}_{S_n,T_n}(\mu^n_{S_n}+x_0+\epsilon,x)\ge \tilde v^{n}_{T_n}(x)-\left( \frac n N \right)^{1/4}. $$ By the definition of the event $E_1$ in~\eqref{eq:eventE1}, for $n$ sufficiently large, if $x\in \frac 1n \mathbb{Z}$ with $|x-\mu^n_{T_n}|\le K_0$ then since we are assuming $T'_n \le \gamma_n$ we have $|x-\mu^n_{S_n}-\nu (T_n-S_n) |\le 2K_0$, and so by~\eqref{eq:tildevvclose} and by~\eqref{eq:TZbound2} in Lemma~\ref{lem:Tbound}, \begin{align} \label{eq:thmstatpf1} &\frac{q^{n,-}_{S_n,T_n}(\mu^n_{S_n}+x_0,x)}{g(x-\mu^n_{S_n}-\nu (T_n-S_n) )} \notag \\ &\le \int_{-\infty}^\infty \pi(y) \bar{v}_0 (y+\mu^n_{S_n})g(y)^{-1}dy +2m^{-1/2} (T_n-S_n)^{-1/4} \sup_{z\in \R}|\bar{v}_0(z+\mu^n_{S_n})g(z)^{-1}| \notag \\ &\qquad +e^{-\frac 12 (\log N)^{c_2}} +\left( \frac n N \right)^{1/4} g(2K_0)^{-1} \notag \\ &\le \int_{-\infty}^{x_0+\epsilon} \pi(y) dy +\epsilon \end{align} for $n$ sufficiently large, since by~\eqref{eq:pnSn} and by the definition of $v_0$ in~\eqref{eq:v0defn}, $v_0(y+\mu^n_{S_n})\le (g(y)+e^{-(\log N)^{c_2}})\mathds{1}_{y\le x_0 +\epsilon}$ $\forall y\in \frac 1n \mathbb{Z}$, and since we are assuming that $T'_n \rightarrow \infty$ as $n\rightarrow \infty$. Similarly, since $v_0(y+\mu^n_{S_n})\ge (g(y)-e^{-(\log N)^{c_2}})\mathds{1}_{y\le x_0}$ $\forall y\in \frac 1n \mathbb{Z}$, for $n$ sufficiently large we have \begin{align} \label{eq:thmstatpf2} \frac{q^{n,-}_{S_n,T_n}(\mu^n_{S_n}+x_0+\epsilon,x)}{g(x-\mu^n_{S_n}-\nu (T_n-S_n) )} &\ge \int_{-\infty}^{x_0} \pi(y) dy -\epsilon. \end{align} For $n$ sufficiently large, since $|T_n-T'_n-S_n|\le \delta_n$ we have that $|\mu^n_{T_n-T'_n}-\mu^n_{S_n}|\le \epsilon$. Recall the definition of $G_{K_0,T_n}$ in~\eqref{eq:Gdefn}. 
Then for $(X_0,J_0)\in G_{K_0,T_n}$ we have $|X_0-\mu^n_{T_n}|\le K_0$, and so for $n$ sufficiently large, by the definition of the event $E_1$ in~\eqref{eq:eventE1} and by~\eqref{eq:thmstatpf2}, $$ \p{\zeta^{n,T_n}_{T_n-S_n}(X_0,J_0)\le \mu^n_{T_n-T'_n} + x_0+2\epsilon \Big| \mathcal F_0}\ge \frac{q^{n,-}_{S_n,T_n}(\mu^n_{S_n}+x_0+\epsilon, X_0)}{p^n_{T_n}(X_0)} \ge \int_{-\infty}^{x_0} \pi(y) dy -2 \epsilon $$ and by~\eqref{eq:thmstatpf1}, $$ \p{\zeta^{n,T_n}_{T_n-S_n}(X_0,J_0)\le \mu^n_{T_n-T'_n}+ x_0-\epsilon \Big| \mathcal F_0}\le \frac{q^{n,-}_{S_n,T_n}(\mu^n_{S_n}+x_0,X_0)}{p^n_{T_n}(X_0)} \le \int_{-\infty}^{x_0+\epsilon} \pi(y) dy +2\epsilon. $$ Hence letting $y_0=x_0 +2\epsilon$, by~\eqref{eq:tildeqineqs} and~\eqref{eq:thmstatqnvn}, for $n$ sufficiently large, \begin{align} \label{eq:thmstatpf3} \p{\zeta^{n,T_n}_{T_n-S_n}(X_0,J_0)-\mu^n_{T_n-T'_n} \le y_0 }&\ge \left(\int_{-\infty}^{y_0-2\epsilon} \pi(y) dy -2 \epsilon\right) \left(1-\frac n N -\p{(E'_1\cap E'_2 \cap E_4)^c}\right) \notag \\ &\ge \int_{-\infty}^{y_0-2\epsilon} \pi(y) dy -3 \epsilon \end{align} for $n$ sufficiently large, by Propositions~\ref{prop:eventE1},~\ref{prop:eventE2} and~\ref{prop:eventE4}. Similarly, for $n$ sufficiently large, \begin{align} \label{eq:thmstatpf4} \p{\zeta^{n,T_n}_{T_n-S_n}(X_0,J_0)-\mu^n_{T_n-T'_n} \le y_0 }&\le \int_{-\infty}^{y_0+2\epsilon} \pi(y) dy +3 \epsilon. \end{align} Note that the rate at which $(\zeta^{n,T_n}_t(X_0,J_0))_{t\in [0,T_n]}$ jumps is bounded above by $2m r_n N=mn^2$, and so letting $Y_n \sim \text{Poisson}(mn^2 \delta_n)$, \begin{equation} \label{eq:thmstatpf6} \p{\zeta^{n,T_n}_{T'_n}(X_0,J_0)\neq \zeta^{n,T_n}_{T_n-S_n}(X_0,J_0)} \le \p{Y_n\ge 1} \le mn^2 \delta_n. \end{equation} Since $\epsilon>0$ can be taken arbitrarily small, this, together with~\eqref{eq:thmstatpf3} and~\eqref{eq:thmstatpf4}, completes the proof in the case $T'_n\le \gamma_n$. Now suppose instead that $T_n'\ge \gamma_n$, and take $s\in t^* \mathbb{N}_0$ such that $T_n-s \in [S_n+\gamma_n-t^*, S_n +\gamma_n]$. Recall from~\eqref{eq:paramdefns} that $d_n=\kappa^{-1} C \log \log N$. By Propositions~\ref{prop:intip} and~\ref{prop:RlogN}, if $(X_0,J_0)\in G_{K_0,T_n}$, \begin{equation} \label{eq:thmstatpf5} \p{|\zeta^{n,T_n}_{s}(X_0,J_0)-\mu^n_{T_n-s} |\ge d_n \Big| \mathcal F_0}=\mathcal O((\log N)^{3-\frac 18 \alpha C})=\mathcal O((\log N)^{-1}) \end{equation} since we chose $C>2^{13}\alpha^{-2}$. Suppose for all $y\in \frac 1n \mathbb{Z}\cap [-N^5,N^5]$ that $$ q^{n,-}_{S_n,T_n-s}(\mu^n_{S_n}+x_0,y)\le \tilde v^{n}_{T_n-s}(y)+\left( \frac n N \right)^{1/4} \quad \text{and} \quad q^{n,-}_{S_n,T_n-s}(\mu^n_{S_n}+x_0+\epsilon,y)\ge \tilde v^{n}_{T_n-s}(y)-\left( \frac n N \right)^{1/4}. $$ Take $x\in \frac 1n \mathbb{Z}$ with $|x-\mu^n_{T_n-s}|\le d_n$. Then for $n$ sufficiently large, by the definition of the event $E_1$ in~\eqref{eq:eventE1}, and by~\eqref{eq:tildevvclose} and by~\eqref{eq:TZbound1} in Lemma~\ref{lem:Tbound}, \begin{align*} &\frac{q^{n,-}_{S_n,T_n-s}(\mu^n_{S_n}+x_0,x)}{g(x-\mu^n_{S_n}-\nu (T_n-s-S_n) )} \notag \\ &\le \int_{-\infty}^\infty \pi(y) \bar{v}_0 (y+\mu^n_{S_n})g(y)^{-1}dy + (\log N)^{-12C} \sup_{z\in \R}|\bar{v}_0(z+\mu^n_{S_n})g(z)^{-1}| \notag \\ &\qquad +e^{-\frac 12 (\log N)^{c_2}} +\left( \frac n N \right)^{1/4} g(d_n+1)^{-1} \notag \\ &\le \int_{-\infty}^{x_0+\epsilon} \pi(y) dy +\epsilon \end{align*} for $n$ sufficiently large, as in~\eqref{eq:thmstatpf1}. 
Hence for $n$ sufficiently large that $|\mu^n_{T_n-T'_n}-\mu^n_{S_n}|\le \epsilon$, if $|\zeta^{n,T_n}_s (X_0,J_0)-\mu^n_{T_n-s}|\le d_n$ then \begin{align*} \p{\zeta^{n,T_n}_{T_n-S_n}(X_0,J_0)\le \mu^n_{T_n-T'_n} + x_0-\epsilon \Big| \mathcal F_s} &\le \frac{q^{n,-}_{S_n,T_n-s}(\mu^n_{S_n}+x_0, \zeta^{n,T_n}_s(X_0,J_0))}{p^n_{T_n-s}(\zeta^{n,T_n}_s(X_0,J_0))}\\ &\le \int_{-\infty}^{x_0+\epsilon} \pi(y) dy +2 \epsilon \end{align*} for $n$ sufficiently large, and similarly $$ \p{\zeta^{n,T_n}_{T_n-S_n}(X_0,J_0)\le \mu^n_{T_n-T'_n}+ x_0+2\epsilon \Big| \mathcal F_s} \ge \int_{-\infty}^{x_0} \pi(y) dy -2\epsilon. $$ As in~\eqref{eq:thmstatpf3} and~\eqref{eq:thmstatpf4}, it follows by~\eqref{eq:thmstatpf5},~\eqref{eq:tildeqineqs},~\eqref{eq:thmstatqnvn} and Propositions~\ref{prop:eventE1},~\ref{prop:eventE2} and~\ref{prop:eventE4} that for $n$ sufficiently large, $$ \int_{-\infty}^{y_0-2\epsilon} \pi(y) dy -3\epsilon \le \p{\zeta^{n,T_n}_{T_n-S_n}(X_0,J_0)- \mu^n_{T_n-T'_n}\le y_0} \le \int_{-\infty}^{y_0+2\epsilon} \pi(y) dy +3\epsilon. $$ By~\eqref{eq:thmstatpf6} and since $\epsilon>0$ can be taken arbitrarily small, this completes the proof. \end{proof}
\section{Decision Making Process} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/figure1} \caption{Simplified architecture of an ADS illustrating how $PEM$s can help to study the impact of $S\&P$ errors on decision making: (top) the actual ADS as it operates in the real world; (below) the ADS with a $PEM$ in the virtual world.} \label{fig:simple_AV_pipeline} \end{figure} Most autonomous systems can be regarded as discrete-time decision making systems that operate in a continuous-time physical world. In the context of AVs, we can term this decision making process the Driving Policy $DP$ \cite{Shalev2017formal}, which leads to a physical response, i.e. the AV behavior. Although $DP$ can be hand-crafted (based on a rule book), doing so is tedious and yields a less robust policy, given the complex environment, full of ``surprises'', that the AV is expected to operate in. Therefore, many systems learn the art of decision-making from data using reinforcement learning \cite{Shalev2017formal}, introducing new challenges \cite{amodei2016concrete}. While generating a robust driving policy for a robot operating in a controlled environment is generally a tractable problem, this may not be the case for autonomous vehicles operating on public roads that are shared with other traffic participants including human-driven vehicles and vulnerable road users (such as pedestrians or cyclists). Therefore, it is very important to ensure that, despite the complexity of the ODD, the driving policy is robust enough to generate an appropriate AV behavior which is safe as well as comfortable for the passengers in real time. This is arguably a complex dynamic spatio-temporal optimization problem, wherein the constraints possess high \textit{aleatoric} as well as \textit{epistemic} uncertainty \cite{McAllister2017}. Therefore, the AV research community (including industry, academia and regulators) tries to select scenarios to generate appropriate test cases, and relevant safety metrics that are measurable, objective and robust. Typical safety metrics include clearance distances (between the AV and other traffic participants) and maximum and minimum limits on AV speed, acceleration and deceleration, among others. Nevertheless, a simple and practical metric is the \mbox{clearance} distance, both in the temporal and spatial domains. \section{Introduction} Autonomous Vehicles (AVs) are, arguably, going to be the first mass deployment of robots that has a direct safety impact on public spaces such as roads. It is well known that before an autonomous system can be deployed, each component must pass a set of tests to prove that it is capable of safely achieving its intended purpose. However, the Operational Design Domain (ODD) \cite{standard2018j3016} of the system may also include situations in which specific components (or subsystems) tend to exhibit diminished performance, which may impact safety. In the case of AVs, this is a major safety concern when planning for deployment on public roads. For example, an AV may perform acceptably during daylight hours, but not very well when it gets dark. In such situations, we could intuitively infer that the more fallible component is not the decision-making process, but rather the perception subsystem, which may not be able to correctly perceive the surroundings without sufficient lighting, thus leading to undesirable AV behavior. On the other hand, we could also conclude that the decision making process is not robust enough to handle such specific situations \cite{Benenson2008}.
A recent (March 2018) fatal accident involving an experimental AV and a jaywalking pedestrian under adverse lighting conditions is a case in point, as the investigation revealed that the decision making was not robust against realistic perception errors \cite{NTSB2019}. Evidently, a \textit{mereological} (part-whole) consideration is required, since neither of the subsystems is adequate or inadequate by itself; rather, their combination as a whole is necessary to obtain adequate performance. Therefore, limiting the performance evaluation to the separate components does not address the issue of estimating whether the system will be able to operate safely under specific conditions and edge cases. Furthermore, the current metrics designed to evaluate perception are inadequate to answer a critical question: \textit{is the performance of a perception subsystem sufficient for the decision making subsystem to make robust, safe decisions?} Virtual testing of AVs using simulations offers a safe and convenient way to validate safety \cite{Young2014}. However, high-fidelity models are necessary to achieve meaningful simulation results that represent the real world. In particular, physics-based sensor simulations can generate synthetic sensor signals to directly challenge the perception; however, they are highly compute-intensive, which hinders the real-time execution of virtual tests under full Automated Driving System (ADS)-in-the-loop or Hardware-in-the-loop configurations. Therefore, to facilitate virtual testing, it is imperative to develop a feasible alternative that models the intended functionality together with the errors and uncertainty posed by the $S\&P$ subsystem of the ADS. In this paper, we provide some insights towards answering the aforesaid question and make the following contributions: \begin{itemize} \item We review the state-of-the-art metrics used to measure the performance of AI-based perception algorithms, and identify their limitations in the context of decision making for an autonomous navigation task. \item We recommend some novel directions towards building a representative Perception Error Model ($\PEM$) that can meaningfully describe the performance of the actual sensing and perception of an autonomous system. \item We describe an experimental setup designed to exploit the potential of $\PEM$s in a virtual (simulated) environment, which offers perfect ground truth, by employing $\PEM$s to replace the actual $S\&P$ of the autonomous system architecture. By including $PEM$s, we gain the flexibility to introduce meaningful and representative perception errors while eliminating the need to generate any synthetic sensor signals. \item We demonstrate the usefulness of this approach as a tool to analyze how the perception capabilities of the system can impact the AV behavior, by investigating several representative urban driving scenarios based on real-life situations. The $\PEM$s considered in the experiments will also highlight the limitations of the standard evaluation metrics for perception. \end{itemize} \section{Error Model} The surrounding environment can be summarized by 3 elements: the map, the ego-vehicle localization, and the other road users or obstacles. In this paper, we focus on the detection of obstacles and other road users. These are described in the Object Map $\OM = \{o_1, \ldots ,o_m\}$, the set of the $m$ perceived objects that the $S\&P$ system provides for each frame by observing the surrounding world $\W = \{w_1, \ldots ,w_n\}$.
The world $\W$ represents the set of $n$ detectable objects (a.k.a. the Ground Truth). The $\OM$ is then analyzed and, through the Driving Policy $DP$, a response $R$ is generated. To describe each object $w \in \W$, we adopt the same notation as for objects $o \in \OM$, where $o = (\mathbf{X}, \mathbf{C})$: \begin{itemize} \item \textbf{Pose}: the (6-9)DoF pose $\mathbf{X}$ of an object $o$, represented as a vector of 9 parameters (a 3D bounding box), \begin{equation} \mathbf{X}= \big(\text{\small{\textsf{position}, \textsf{rotation}, \textsf{dimensions}}}\big). \end{equation} This includes 3 parameters each for {\small\textsf{position}} (x, y, z), {\small\textsf{rotation}} (yaw, pitch, roll), and {\small\textsf{dimensions}} (length, width, height). Some parameters such as pitch, roll, or height may be omitted in specific road traffic environments. \item \textbf{Class}: the class $\mathbf{C}$ of the object, \begin{equation} \mathbf{C} \in \{Vehicle, Pedestrian\}. \end{equation} Depending on the system, it can vary from a simple distinction between vehicles and pedestrians, to a finer classification discriminating between cars, bikes, trucks, etc. \item \textbf{Additional Parameters}: vector $\mathbf{X}$ can be extended to include any other relevant object parameters such as \mbox{velocity}, turning indicator status, or age (for pedestrians), based on the system under consideration and $\mathbf{C}$. \end{itemize} In \autoref{fig:simple_AV_pipeline}, we note $RawData = S(\W)$ and $\OM = P(RawData)$. We can observe that: \begin{equation}\label{eq:W+error} \OM = P(S(\W)) = S\&P(\W) = \W + \E, \end{equation} where $\E$ is the error between $\OM$ and $\W$. The response $R$ is what determines the \textit{behavior} of the vehicle and therefore, the overall safety and performance of the autonomous system: \begin{equation}\label{eq:Dp+error} R = DP(\OM) = DP(\W + \E). \end{equation} The task of assembling the $\OM$ requires addressing both classification and regression problems, and has its roots in the object detection task in the Computer Vision (CV) field. \subsection{Evaluation Metrics - State of the Art} The error $\E$ predominantly includes 4 kinds of error. Let $o_i$ denote the perceived object corresponding to $w_j$, when such a correspondence exists: \begin{itemize} \item \textbf{False negative}: no $o_i \in \OM$ corresponds to $w_j$; \item \textbf{False positive}: no $w_j \in \W$ corresponds to $o_i$; \item \textbf{Misclassification}: $\mathbf{C}_{o_i}\neq \mathbf{C}_{w_j}$; \item \textbf{Parameter errors}: $\mathbf{X}_{o_i} - \mathbf{X}_{w_j} \neq \mathbf{0}$. \end{itemize} All these kinds of errors can be individually observed, statistically measured, and studied by comparing $\W$ and the $\OM$ produced by the $S\&P$. Given \autoref{eq:W+error}, the task of analyzing and describing the error is an extension of the task of \textit{measuring} the error of a perception subsystem. In fact, many metrics have been developed for the object detection task in the CV field. In the field of AVs, many benchmarks on public datasets \cite{Geiger2012,Huang2018a,nuscenes2019} explore variations of these metrics. Intersection over Union (IoU) and Mean Average Precision (mAP) are popular metrics for assessing CV algorithms for generic object detection tasks \cite{Everingham2010,Cordts2016Cityscapes}. Similarly, Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP) are common metrics for tracking evaluation \cite{Bernardin2008}.
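To make the comparison between $\W$ and $\OM$ concrete, the following minimal Python sketch counts the 4 kinds of error for a single frame. It is an illustration only: axis-aligned 2D boxes, greedy one-to-one matching and a single IoU threshold are simplifying assumptions of ours, not the procedure of any specific benchmark.

\begin{verbatim}
# Minimal sketch: counting the 4 kinds of error for one frame.
# Assumptions (ours): axis-aligned 2D boxes (x_min, y_min, x_max,
# y_max), greedy one-to-one matching, a single IoU threshold.

def iou(a, b):
    """Intersection over Union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0.0 else 0.0

def frame_errors(W, OM, iou_thr=0.5):
    """W, OM: lists of dicts with keys 'box' and 'class'."""
    unmatched = list(range(len(OM)))
    counts = {'TP': 0, 'FN': 0, 'misclassified': 0}
    for w in W:
        best, best_iou = None, 0.0
        for j in unmatched:            # greedy best-IoU match
            v = iou(w['box'], OM[j]['box'])
            if v > best_iou:
                best, best_iou = j, v
        if best is not None and best_iou >= iou_thr:
            unmatched.remove(best)
            if OM[best]['class'] == w['class']:
                counts['TP'] += 1      # parameter errors logged here
            else:
                counts['misclassified'] += 1
        else:
            counts['FN'] += 1          # false negative
    counts['FP'] = len(unmatched)      # false positives
    return counts
\end{verbatim}

Note how the verdict for each object hinges on the choice of \texttt{iou\_thr}; this is precisely where the threshold issue discussed next arises.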
All of these metrics require an \textit{a priori} definition of a threshold in order to discriminate between a true positive and a false positive. For example, in \cite{Geiger2012} the authors consider IoU $\geq$ 0.7 for the correct detection of a car, or IoU $\geq$ 0.5 for a pedestrian. \subsubsection{Evaluation Metrics - Critical Issues} \label{ssec:criticalities} In the deployment of AI-based systems, accuracy is often not the best metric to measure their capabilities \cite{PadovaniFL19bardo}. While it is not debatable that having a perfect score on accuracy-based metrics is the final goal (i.e., the perception perfectly overlaps with the ground truth, implying $\OM = \W$), these metrics were not designed to consider $DP$. Hence, they do not provide a model that is adequate to study \autoref{eq:Dp+error}. This is because the use of a single metric would hide the specifics of the type of error causing perturbations in the measurement. In particular, we identify 3 critical areas for analyzing the response $R$ (\autoref{fig:issues}) that are out of the scope of CV metrics: \begin{itemize} \item \textbf{I1: Temporal relevance}: if the system is deployed in a highly dynamic environment, the worst-case error (e.g., losing track of an object for longer intervals) may be more relevant than the average error over the same duration. \item \textbf{I2: Overlap sensitivity}: The spatial error associated with each object is definitely important. However, considering the bounding box overlap alone may not be sufficient to gauge the quality of the response provided by $DP$. \item \textbf{I3: Relevance of the objects}: Generic CV tasks do not usually associate a weight to each object, as the context may not be considered. However, for an AV in a well-defined ODD, the metrics should judge the relevance of objects considering the context and dynamics (refer I1). \end{itemize} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/figure2.pdf} \caption{Illustration of the critical issues I1, I2, I3.\\ I1: Temporal considerations: short vs. long non detection intervals.\\ I2: Overlap Sensitivity: how sensitive is $DP$ to spatial error?\\ I3: Relevance of the objects: which ones are active constraints?} \label{fig:issues} \end{figure} For a more abstract understanding, we should ask: \textit{If the system response $R$ provides the desired outcome, such as avoiding a collision, does it really matter if the $\OM$ had significant errors?} E.g., if the AV brakes to avoid a perceived pedestrian, how much does it matter if the object was actually a cyclist? In such a case, how should we quantify the relevance of a specific error? There is no straightforward answer to this, since a major classification error could also cause the AV to respond in an unacceptable and/or unsafe manner. \subsection{Error Modeling Considerations} \label{ssec:err_model_considerations} To better understand how the error manifests itself and to subsequently analyze the performance of the $S\&P$, we must first understand the causes of the error. \paragraph{Positional aspects:} Our first observation is that the quality of $S\&P$ is influenced by the \textbf{relative position} of $w$ w.r.t. the ego-vehicle, for 2 reasons \cite{rosique2019systematic}: \begin{itemize} \item \textbf{Distance}: Performance of all sensors degrades at longer distances. E.g., a more distant object will be captured by fewer pixels by a camera and by fewer LiDAR points.
\item \textbf{Field of View (FoV)}: Sensors cover different areas around the ego-vehicle. An object that is positioned in an area covered by multiple sensors could be detected with greater accuracy than an object located in an area covered by few, or weaker, sensors. \end{itemize} \paragraph{Parameter inter-dependencies:} The second aspect is that the values of any of the object parameters $\mathbf{X, C}$ can, by themselves, affect the error associated with other parameters of $\mathbf{X, C}$ \cite{Hoiem2012}. For example, a larger \textit{size} of an object makes it more likely to be seen at greater distances, whereas the object class $\mathbf{C}$ may limit the error on the size estimation. Some parameters are also not described in the $\OM$ since they are not directly relevant for $DP$, such as the color or the material of the object \cite{rosique2019systematic}. For example, dark/non-reflective surfaces or metallic artifacts may degrade the quality of $\OM$ when $S$ is primarily based on \mbox{LiDAR} or Radar, respectively. \paragraph{Temporal aspects:} As a third observation, we can consider the \textit{temporal} aspects of the system, since the $DP$ deals with a sequence of detections in a dynamical system and environment. The overall error of the system can change over time, due to shifting light conditions (e.g., sun blinding, shadows), algorithm uncertainties or even interference at the level of individual sensors, and should be modeled appropriately. Errors evolve over time and hence should be viewed as time series and be modeled by dynamical models \cite{Mitra2018}. \subsection{Perception Error Model} In this paper, we propose a Perception Error Model ($\PEM$) that comprises both the sensing subsystem $S$ and the perception subsystem $P$, approximating their function: \begin{equation} PEM(\W) \approx S\&P(\W) = \OM = \W + \E. \end{equation} We propose the following abstraction: \begin{equation}\label{eq:pem} PEM = \{ \T, \Z, \C \}, \end{equation} where each component is defined as follows: \begin{itemize} \item $\T$: a temporal and statistical description of the perception error as a function of $\mathbf{X}$; \item $\Z$: a zone-based spatial description of the $S\&P$ error distribution around the AV considering the coverage by sensors (as illustrated in \autoref{sensors_illustration}), addressing the positional aspects, viz., the FoV and Distance problems; \item $\C$: a description of environmental conditions affecting the error, e.g., a system deployed on the road can be conditioned by the light intensity or fog density, which can be modeled as continuous variables in $\T$. Alternatively, one could choose to discretize them and provide a distinct $\T$ for each value (e.g., $\T_\text{daylight}$, $\T_\text{night}$). \end{itemize} \subsubsection{Zone-based approach for $\Z$} As illustrated in \autoref{sensors_illustration}, we propose to address the positional aspects of error by representing the perception error in different \textit{zones}. The FoV problem is easily solved by dedicating a zone to the overlap of a specific set of sensors. The distance problem, instead, is already considered in some CV benchmarks \cite{Cordts2016Cityscapes,waymo_open_dataset}. The common solution is to discretize the distance into different ranges, breaking down the metrics into different regions. Our \textit{zones} approach is, in fact, an extension of that approach; while the zones can still be determined by distance thresholds, we also make the entire approach \textit{sensor-agnostic}.
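As a minimal illustration of this abstraction, the following Python sketch organizes a $\PEM$ as in \autoref{eq:pem}: $\Z$ maps the relative polar position of an object to a zone, and each zone carries its own error parameters for $\T$. The zone geometry and all numbers below are placeholder assumptions, not a description of a specific sensor suite.

\begin{verbatim}
import random

# Sketch of PEM = {T, Z, C}: zones are annular sectors around the
# ego-vehicle, each with its own error parameters. All numbers are
# illustrative placeholders.
ZONES = [
    # (range_m, half_fov_deg, p_detect, sigma_d_pct, sigma_az_deg)
    (30.0,  60.0, 0.99, 0.02, 0.2),   # near frontal: camera + LiDAR
    (80.0,  60.0, 0.90, 0.06, 0.5),   # far frontal: camera only
    (30.0, 180.0, 0.95, 0.04, 0.4),   # surround: LiDAR only
]

def zone_of(d, az_deg):
    """Z: map a relative polar position to the first matching zone."""
    for z in ZONES:
        if d <= z[0] and abs(az_deg) <= z[1]:
            return z
    return None                       # blind spot: no sensor coverage

def perceive(d, az_deg):
    """T: sample a perceived position, or None (non-detection)."""
    z = zone_of(d, az_deg)
    if z is None or random.random() > z[2]:
        return None
    return (d * (1.0 + random.gauss(0.0, z[3])),   # range noise
            az_deg + random.gauss(0.0, z[4]))      # azimuth noise
\end{verbatim}

Environmental conditions ($\C$) would enter by switching between per-zone parameter tables (e.g., one for daylight and one for night).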
Furthermore, we can apply the \textit{zones} approach to study the contextual \textit{relevance} of the objects for a given driving scenario and a planned manoeuvre. Dedicated models for each zone allow us to better understand which areas of the surroundings are critical. \begin{figure}[bt] \centering \subfloat[a][Camera]{\includegraphics[width=.3\columnwidth]{figures/figure3a}\label{fig:sensor1}} \subfloat[b][LiDAR]{\includegraphics[width=.3\columnwidth]{figures/figure3b.png}\label{fig:sensor2}} \subfloat[c][Overlap]{\includegraphics[width=.3\columnwidth]{figures/figure3c.png}\label{fig:sensor3}} \caption{Example of a zone-based partitioning of the $S\&P$ errors. (a) Simple camera FoV, divided into 2 zones based on its range; (b) LiDAR, with 1 zone; (c) Multiple sensors, leading to an overlap region (or regions) where objects can be detected by both sensors. The perception error will depend on how the signals are fused.} \label{sensors_illustration} \end{figure} \subsubsection{Key considerations for $PEM$} If $\T$ is designed to simply return each object $w$ without any alteration in its parameters, the model is replicating a perfect $S\&P$ system that is able to detect the ground truth. More interestingly, the model can also be designed to not return objects in specific zones $\in \Z$, thus replicating cases of non-visibility such as blind spots (i.e. the object is not within the range of any sensor, or is occluded \cite{suchan2019}). Considering the above, we propose that designing a $\PEM$ is, without loss of generality, a \textit{regression} task, where the goal is to learn the rules and parameters which describe the difference ($\E$) between $\W$ and the $\OM$ generated by the $S\&P$ subsystem (\autoref{eq:W+error}), formalized as $\T$, $\Z$, and $\C$. Despite having similarities with a typical procedure for the computation of evaluation metrics (i.e. comparing perception output to the ground truth), this task is more complex. As described in Section \ref{ssec:err_model_considerations}, it involves analyzing the influence of spatio-temporal dependencies and object-specific parameter inter-dependencies (covariances), which are relatively under-explored fields. Such aspects are mainly conditioned by the choice and configuration of sensors and perception algorithms. Thus, academic studies focus mostly on a generalized performance evaluation. To motivate research in this direction, in the next section we focus on showing how a richer and more descriptive $\PEM$ can serve to address some of the issues in the evaluation of both the $S\&P$ and $DP$. Such an evaluation process is not only crucial from a regulatory perspective, but can also facilitate the system development life-cycle; it can guide developers in choosing the sensors, training the perception models, as well as identifying weaknesses in the $DP$. \section{Experimental Setup} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/figure4.jpg} \caption{Screenshot of the co-simulation in a generic urban driving situation. Bounding boxes (yellow/purple) of $PEM$-based objects $\OM$ rendered by LGSVL (right) are consistent with what Apollo sees (left), even undetected objects.}\label{fig:cosim} \end{figure} \begin{figure*}[t!]
\centering \subfloat[Illustration of 2 scenarios in our experiments (TC1-3, TC4-5).]{\includegraphics[width=0.95\columnwidth]{figures/figure5a.pdf}} \hspace{4mm} \subfloat[Following another vehicle.]{\includegraphics[width=0.48\columnwidth]{figures/figure5b.jpg}} \subfloat[Pedestrian on an urban road.]{ \includegraphics[width=0.48\columnwidth]{figures/figure5c.jpg}} \vspace{-1mm} \caption{Scenarios used in experiments: (a) a representative illustration, (b,c) instances from the nuScenes dataset \protect\cite{nuscenes2019}.} \vspace{-2mm} \label{fig:scenarios} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/figure6.pdf} \caption{Functional relationship between scenarios, parameter variants, test cases and error models in our experiments.}\label{fig:exp} \end{figure} In this section, we describe the software tools and the experiments conducted to highlight how different kinds of error do (or do not) affect the response $R$, thus allowing us to observe how a specific $PEM$ can influence $R$. Furthermore, we describe different $PEM$ variants that serve to demonstrate some of the critical issues discussed in the earlier sections. In this paper, we focus on $\T$, i.e., the temporal and statistical description of the $S\&P$ error, and on specific statistics related to standard evaluation metrics. The experiments we designed require a driving simulator and an ADS. To this end, we chose open source tools, namely the LGSVL simulator \cite{LGSVL} and Apollo 3.5 \cite{Fan2018}. The LGSVL simulator is based on the Unity Engine and maintains a reliable bridge between the Unity framework and the CyberRT middleware that Apollo 3.5 relies on, thus enabling co-simulation (see \autoref{fig:cosim}). We developed Python scripts to implement the different scenarios, automate the tests, configure the simulation environment and the actors in a deterministic manner, and log the results. To facilitate our experiments, we adapted these tools so that we could include the $\PEM$ in the loop. To this end, we bypassed the built-in $S\&P$ subsystem in Apollo. Firstly, instead of processing (synthetic) raw sensor data, we adapted Apollo to directly read the $\OM$ from a new special-purpose CyberRT topic. Secondly, we defined a new sensor in the LGSVL simulator that, upon observing $\W$, generates $\OM$ by applying the specific $\PEM$ (see \autoref{eq:pem}) configured for the experiment, and then publishes $\OM$ on the new CyberRT topic that the decision making part of Apollo could read from. \subsection{List of Scenario-based Experiments} In order to study the influence of error models on the AV behavior, we generated a set of experiments following the scheme depicted in \autoref{fig:exp}. In particular, we defined a set of relevant driving scenarios (see \autoref{fig:scenarios}), configured their parameter variations to get concrete test cases, and tested them with different $PEM$s to form actual experiments that are executed multiple times (at least 30 runs each, to account for the randomness involved in our $\PEM$s). \paragraph{Scenario 1 (Test cases TC1-3):} involves an AV driving on a straight road, approaching a traffic vehicle and then following it until they reach a red traffic light. In each test case, the traffic vehicle was set to drive at one of 3 different average speeds, viz. 7, 10, and 15 m/s. To challenge the $DP$, we applied $PEM$s that can correctly detect an object ($o_i = w_j$) but randomly fail to include it in $\OM$ for some frames (similar to tracking loss or sporadic non-detections).
This allows us to study the critical issue of temporal relevance (see I1).\\ \textbf{Implementation TC1-3:} We model the false negative errors by means of two-state Markov chains (a minimal code sketch is given below). We tested different values of the parameters \textit{steady state probability} $\in [0.0,1.0]$ and \textit{mean sojourn time} $\in (0.0\,s,10\,s]$ (the average time spent in a state before changing) so as to generate non-detection intervals of varying duration. \paragraph{Scenario 2 (Test cases TC4-5):} this scenario is defined by the presence of a pedestrian in 2 different situations: standing in the middle of the road, or jaywalking. For these TCs, we applied $\PEM$s that generate different \textbf{positional errors}, with the intention of studying the impact of critical issue I2.\\ \textbf{Implementation TC4, TC5a:} Gaussian white noise with varying standard deviation $\sigma$, applied to the relative position of $w$ w.r.t. the AV, in polar coordinates: \begin{itemize} \item multiplicative noise on the radius $d$ as $\sigma_d \in [0\%,12\%]$; \item additive noise on the azimuth $\theta$ as $\sigma_\theta \in [0^{\circ},1.5^{\circ}]$. \end{itemize} We then apply additional \PEMs\ to TC5, so as to replicate the failures that led to a recent AV accident \cite{NTSB2019}.\\ \textbf{Implementation TC5b}: Perfect detection at each frame, but with a tracking loss probability $p_{tl} \in [0,1]$ for the previously detected obstacles. This can result in considering the current detection as a new obstacle, which can hinder the computation of the obstacle velocity and lead to unsafe behavior. \section{Conclusion} In this paper, we have described an approach to test and study perception errors in a virtual environment, by linking the respective performance of $S\&P$ and $DP$, and thereby enabling the identification of weaknesses in these subsystems. Furthermore, we have implemented an experimental setup to test handcrafted $\PEMs$ with the aim of highlighting some limitations of the currently used evaluation metrics for perception algorithms, while discussing how to analyze the resulting system behavior. Although our focus is on AVs, we believe that our approach is general enough to be applied to other domains involving navigational tasks and produce similar insights. The main limitation of the current work lies in our focus on the detection of other road users. Nevertheless, this is a key challenging problem hindering AV deployments, especially under adverse environmental conditions that degrade the $S\&P$ capabilities. In the near future, we aim to further explore the study of \PEMs, develop more realistic simulations that incorporate perception errors, investigate the robustness of $S\&P$ under different environmental conditions, and finally, develop better approaches to test the weaknesses of $DP$. \section{Experimental Results} In this section, we show the scope of analysis afforded by our experimental setup. Our analysis focuses on the \textit{behavior} of AVs in particular, although the methodology can also be applied elsewhere. For ease of understanding, we show several representative examples from our experimentation. These examples serve to demonstrate the effectiveness of our approach in analyzing how the $PEM$s can impact the behavior, while taking simple safety metrics into consideration.
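As a concrete reference for the analysis that follows, here is a minimal Python sketch of the two error-injection mechanisms described in the experimental setup. The parameterization is a simplified stand-in for our test scripts; in particular, we assume the given sojourn time applies to the detected state and derive the other state's sojourn time so that the chain has the requested steady state.

\begin{verbatim}
import random

class TwoStateMarkovFN:
    """TC1-3 sketch: detected/undetected two-state chain.
    Assumes 0 < p_detect < 1; sojourn_s is the mean sojourn time of
    the detected state, and the undetected state's sojourn time is
    derived so that the stationary detection probability is
    p_detect."""
    def __init__(self, p_detect, sojourn_s, frame_dt_s):
        self.detected = True
        tau_u = sojourn_s * (1.0 - p_detect) / p_detect
        self.q_leave = {True: min(1.0, frame_dt_s / sojourn_s),
                        False: min(1.0, frame_dt_s / tau_u)}

    def step(self):
        """Advance one frame; return True iff the object is detected."""
        if random.random() < self.q_leave[self.detected]:
            self.detected = not self.detected
        return self.detected

def polar_noise(d, theta_deg, sigma_d_pct, sigma_theta_deg):
    """TC4/TC5a sketch: Gaussian noise on the relative position,
    multiplicative on the radius d, additive on the azimuth theta."""
    return (d * (1.0 + random.gauss(0.0, sigma_d_pct)),
            theta_deg + random.gauss(0.0, sigma_theta_deg))
\end{verbatim}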
\newcommand\sizescatter{.17\textwidth} \newcommand{\vsep}{\unskip\ \vrule height 26mm width .01mm} \begin{figure*}[] \centering \subfloat[][TC(1-3)]{\includegraphics[width=\sizescatter]{figures/figure7a.pdf}\label{fig:sp_a}} \hspace{1mm} \subfloat[][TC(1-3)]{\includegraphics[width=\sizescatter]{figures/figure7b.pdf}\label{fig:sp_b}} \hspace{1mm} \subfloat[][TC(1-3)]{\includegraphics[width=\sizescatter]{figures/figure7c.pdf}\label{fig:sp_c}} \hspace{0.2cm} \vsep \hspace{0.1cm} \subfloat[][TC4]{\includegraphics[width=\sizescatter]{figures/figure7d.pdf}\label{fig:sp_d}} \hspace{0.2cm} \vsep \hspace{0.1cm} \subfloat[][TC5b]{\includegraphics[width=\sizescatter]{figures/figure7e.pdf}\label{fig:sp_e}} \\ \vspace{-4mm} \subfloat[][TC(1-3)]{\includegraphics[width=\sizescatter]{figures/figure7f.pdf}\label{fig:sp_f}} \hspace{1mm} \subfloat[][TC(1-3)]{\includegraphics[width=\sizescatter]{figures/figure7g.pdf}\label{fig:sp_g}} \hspace{1mm} \subfloat[][TC(1-3)]{\includegraphics[width=\sizescatter]{figures/figure7h.pdf}\label{fig:sp_h}} \hspace{0.2cm} \vsep \hspace{0.1cm} \subfloat[][TC5a]{\includegraphics[width=\sizescatter]{figures/figure7i.pdf}\label{fig:sp_i}} \hspace{0.2cm} \vsep \hspace{0.1cm} \subfloat[][TC5b]{\includegraphics[width=\sizescatter]{figures/figure7j.pdf}\label{fig:sp_j}} \vspace{-2mm} \caption{Relationship between safety evaluation metrics and some specific statistics under varying PEMs. These density scatter plots summarize all the runs of the relevant experiments, for specific test cases. Given the high number of samples (simulation runs), we highlighted the densest areas on a color scale from blue (low density) to yellow (high density).} \label{fig:scatterplots} \vspace{-3mm} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/figure8.pdf} \caption{Illustration of ego-vehicle behavior for TC5a with two \PEMs: low $\sigma_d, \sigma_\theta$ (left), and high $\sigma_d, \sigma_\theta$ (right). } \label{fig:eval_illustration_VUT_behavior_errors_ped} \vspace{-2ex} \end{figure} \subsection{TC1-3: Following a traffic vehicle} In Figures \autoref{fig:sp_a}, \autoref{fig:sp_b}, \autoref{fig:sp_f}, and \autoref{fig:sp_g}, we plot the relationship between two metrics for \textit{behavior evaluation}, namely the minimum spatial distance (m) and the minimum temporal distance (s), and two statistics of the perception error, namely the relative frequency of detection (realization of the \textit{steady-state probability}) and the maximum non-detection interval (realization of the \textit{mean sojourn time}). In Figure \autoref{fig:sp_c}, we can observe that the success rate (no collision) increases with the relative detection frequency. However, this is true only up to a threshold of $\sim75\%$, above which the success rate remains essentially constant. On the other hand, in Figure \autoref{fig:sp_h}, the duration of the non-detection intervals has a much more significant impact on the success rate. Hence, we can observe that even in a situation of low visibility (i.e., low detection probability), if the intervals of non-detection are short enough, it is possible for the vehicle to avoid a collision. This highlights the importance of including the temporal aspects of the error (critical issue I1) in the perception evaluation.
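The two perception-error statistics appearing in these plots can be computed per run from the frame-wise detection trace; the following small sketch (ours, for illustration) shows one way to do it:

\begin{verbatim}
def detection_statistics(detected, dt=0.1):
    """Per-run realizations of the PEM parameters:
    relative frequency of detection (realization of the steady-state
    probability) and maximum non-detection interval in seconds
    (realization of the mean sojourn time). `detected` is the
    per-frame boolean detection trace of one run."""
    rel_freq = sum(detected) / len(detected)
    longest = run = 0
    for d in detected:
        run = 0 if d else run + 1
        longest = max(longest, run)
    return rel_freq, longest * dt
\end{verbatim}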
\subsection{TC4-5: Pedestrian on an urban road} The second scenario offers a different insight. As illustrated in \mbox{Figures \autoref{fig:sp_d} and \autoref{fig:sp_i}}, we cannot observe major differences in safe behavior (minimum spatial clearance) under the varying positional errors generated by different $PEM$s, including the ground truth. This indicates that the system failure is not due to the $\PEM$, but rather due to a weakness in the $DP$ of the ADS in \textit{our} experimental configuration, which is unable to robustly handle TC4. In fact, the safety metrics in TC4 are not influenced by the magnitude of the positional error, since the system fails even with low or no errors. Similarly, in TC5a, the safety is not jeopardized by the error magnitude. This also relates to the other two critical issues, I2 and I3, of the current evaluation metrics. Since a positional error by itself can still allow a safe response, it is not adequate to consider IoU as a criterion for true positives. Instead, a less restrictive metric should be considered, such as a distance threshold as proposed in \cite{nuscenes2019}. In \autoref{fig:eval_illustration_VUT_behavior_errors_ped} we compare $\PEM$s with different positional errors in TC5a. Here also, the error magnitude does not have a major impact on safety, although smaller errors can lead to a more consistent AV behavior. Furthermore, the experiments for TC5b highlight how the safety decreases as $p_{tl}$ increases, as shown in \mbox{Figures \autoref{fig:sp_e} and \autoref{fig:sp_j}}. As $p_{tl}$ approaches $0.5$ and the obstacle velocity cannot be estimated, the $DP$ is unable to predict the obstacle's trajectory and does not brake to avoid it, similar to the AV accident \cite{NTSB2019}. This is in contrast to the findings in TC1-3, where frequent but short tracking errors were easier to handle than infrequent but long ones. However, it provides an interesting insight towards understanding the contextual relevance of error types depending on the scenario. In particular, in TC1-3 the obstacle is always in the path of the AV, while in cases such as TC5b their paths cross during the scenario (jaywalking in TC5b, but the same may apply to a cut-in scenario). In the latter case, proper obstacle trajectory prediction is critical to foresee the imminent collision.
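For completeness, we include minimal Python sketches of the positional-error and tracking-loss $PEM$s discussed above (our illustrations only; the function names and the fresh-identity mechanism are assumptions, not the authors' implementation):

\begin{verbatim}
import random
from itertools import count

_fresh_ids = count(10000)  # hypothetical pool of new track identities

def positional_noise(d, theta_deg, sigma_d_rel, sigma_theta_deg):
    """TC4/TC5a: Gaussian white noise on the position of w relative
    to the AV, in polar coordinates: multiplicative on the radius d,
    additive on the azimuth theta."""
    d_noisy = d * (1.0 + random.gauss(0.0, sigma_d_rel))   # sigma_d: 0-12%
    th_noisy = theta_deg + random.gauss(0.0, sigma_theta_deg)  # 0-1.5 deg
    return d_noisy, th_noisy

def tracking_loss(track_id, p_tl):
    """TC5b: perfect detection, but with probability p_tl the previously
    detected obstacle is re-issued under a fresh identity, so the DP
    treats it as a new obstacle and loses the velocity estimate."""
    return next(_fresh_ids) if random.random() < p_tl else track_id
\end{verbatim}

Applying \texttt{tracking\_loss} at every frame with $p_{tl}$ close to $0.5$ keeps most track histories too short for a velocity estimate, which reproduces the failure mode discussed above.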
\section{Introduction} The goal of this paper is to obtain information on the cohomology of bundles on some generalizations of Hopf manifolds, applying the methods of \cite{libgober} in the bundle setting. Roughly, the cohomology of the bundles $\Omega^p_{{\mathcal H}}(E)$ on a topological Hopf manifold ${\mathcal H}$, where $E$ is a bundle with a trivial pullback to the universal cover, is controlled by the ``Alexander modules'' naturally identified with the appropriate local cohomology associated with the universal cover (cf. theorem \ref{cohomologyresults}). To put things in perspective we start with a review of some results and viewpoints on manifolds which are natural generalizations of the classical construction of Hopf and Kodaira (cf. \cite{kodairanas}, \cite{kodairaamerjourn}). In particular, Kodaira showed that a surface with universal cover biholomorphic to ${\mathbb C}^2-0$ is the quotient of the latter by a group having an infinite cyclic group as a finite-index normal subgroup. In higher dimensions, fixing the homological type leaves many more possibilities for the analytic type of the universal cover: for example, Brieskorn and van de Ven found infinitely many examples of manifolds having the homeomorphism or even diffeomorphism type of $S^1 \times S^{2n-1}$ but analytically distinct universal covers. These universal covers are the non-singular loci of affine hypersurfaces (\ref{pham}) with isolated singularities whose links are (possibly exotic) odd-dimensional spheres. The action of the fundamental group in the Brieskorn-van de Ven examples is given by (\ref{homotety}). More generally, we consider generalized Hopf manifolds allowing arbitrary quotients \footnote{and not only those corresponding to the actions (\ref{homotety})} of the non-singular loci of affine varieties which are weighted homogeneous complete intersections with an isolated singularity having a ${\mathbb Q}$-sphere as its link. If this link is a ${\mathbb Z}$-sphere then the quotient is homeomorphic to $S^1 \times S^{2n-1}$. In the case when the universal cover of a primary Hopf manifold is ${\mathbb C}^n-0$, Haefliger (cf. \cite{haefl}) described the actions of the fundamental group on the universal cover (extending the case $n=2$ studied by Kodaira). In the case when the universal cover $V-0$ of a homological Hopf manifold is a complete intersection (\ref{hammcase}), we show that, if the degree of such a complete intersection is sufficiently large, then the corresponding quotients are finite quotients of $(V-0)/{\mathbb Z}$ with the action of ${\mathbb Z}$ given by (\ref{homotety}). More precisely, we will see the following: \begin{theorem}\label{universalcoverBvdV} Let ${\mathcal H}$ be a homological Hopf manifold having as its universal cover the non-singular locus of an affine hypersurface $z_1^{a_1}+...+z_n^{a_n}=0$ or a complete intersection (\ref{hammcase}). Then ${\mathcal H}$ is a quotient by a finite group of the Brieskorn-van de Ven manifold (respectively, of $(V-0)/{{\mathbb Z}}$, where $V$ is a complete intersection (\ref{hammcase}) and the action of a generator of ${\mathbb Z}$ is given by (\ref{homotety}))). \end{theorem} This is deduced from a result on the automorphism groups of affine hypersurfaces and complete intersections (\ref{hammcase}) or (\ref{pham}) (cf. \ref{autBvdV}). It is an interesting problem to find a reasonable classification of homological or topological Hopf manifolds with fixed topological, differential or almost complex type.
The analytic type of the universal cover and the action of the fundamental group on it provide discrete invariants of ``the components of the moduli space'' of complex structures (cf. \ref{propfundgroups}). The main point of this note is that, regardless of specific properties of the analytic structure on the universal cover, one has an interesting restriction on the cohomology of bundles, generalizing results of \cite{mall1}, \cite{mall2} (in the primary case) and \cite{zhou} (in the case of non-primary Hopf manifolds). The main, rather technical-looking, result is the following: \begin{theorem}\label{cohomologyresults} Let $H$ be a homological Hopf manifold with the universal covering space $p: V \rightarrow H$, where $V$ is the complement to a compact set $A$ in a Stein space $\bar V$, and let $i: V \rightarrow \bar V$ denote the inclusion. Let $f$ be an element of infinite order in $\pi_1(H)$ and let $f_V: V \rightarrow V$ be the corresponding biholomorphic automorphism of $V$. Let $E$ be a bundle on $H$. We denote by $f^k_V$ the induced automorphism of $H^k(V,p^*(E))$. Assume that the local cohomology groups $H^r_A(\bar V,i_*(p^*(E)))$ are finite dimensional. Then: a) ${\rm dim} H^k(H,E)={\rm dim} {\rm Ker} f^k_V+{\rm dim} {\rm Coker} f^{k-1}_V$. b) In particular, if $H^k(V,p^*(E))=0$ for $a \le k \le b$ then $H^k(H,E)=0$ for $a+1 \le k \le b-1$. \end{theorem} This yields the following result for the Hopf and Brieskorn-van de Ven manifolds and some homological Hopf manifolds (see the next section for an explanation of the terminology): \begin{theorem}\label{vanishing} Assume $n \ge 3$. \noindent a) (cf. \cite{zhou}) Let $H$ be a Hopf manifold which is the quotient of ${\mathbb C}^n-0$ by a group containing an element $f \in Aut ({\mathbb C}^n)$ of infinite order such that the differential of $f$ at $0$ has eigenvalues $\xi$ with $\vert \xi \vert <1$. Let $E$ be a bundle on $H$ such that its pullback to the universal cover is trivial. Then: \begin{equation}\label{vanishingformula1} H^q(H,\Omega^p(E))=0 \ \ \ {\rm for} \ q\ne 0,1,n-1,n \end{equation} \begin{equation}\label{vanishingformula2} H^0(H,\Omega^p(E))=H^1(H,\Omega^p(E)) \ \ \ {\rm and} \ \ \ H^{n-1}(H,\Omega^p(E))=H^{n}(H,\Omega^p(E)) \end{equation} \noindent b) Let ${\mathcal H}$ be a Brieskorn-van de Ven manifold or a quotient $(V-0)/{{\mathbb Z}}$, where $V$ is the complete intersection (\ref{hammcase}) and ${\mathbb Z}$ is the cyclic group generated by the automorphism (\ref{homotety}). For a pair of finite dimensional vector spaces $A,B$, denote by ${\mathcal V}_n(A,B)=\oplus_{i=0}^{i=n} V_i$ the graded vector space such that $V_0=V_1=A, V_{n-1}=V_n=B$ and $V_i=0$ for $i \ne 0,1,n-1,n$. Let ${\mathcal W}_{n,q}=\oplus_{i=0}^{i=n} W_i$ denote a graded vector space such that ${\rm rk} W_{q-1}=1, {\rm rk} W_q=2, {\rm rk} W_{q+1}=1, W_i=0 \ (i \ne q,q \pm 1)$. Let $E$ be a vector bundle on ${\mathcal H}$ such that its pullback to the universal cover is trivial. If the multiplicity of $V$ at the origin is greater than one, then for $p \ge 1$ one has an isomorphism of graded spaces: $$ \oplus_q H^q({\mathcal H},\Omega^p(E))={\mathcal V}_n\bigl(H^0(\Omega^p(E)),H^n(\Omega^p(E))\bigr) \oplus {\mathcal W}_{n,n-p-1}.$$ \end{theorem} Other results of this note are a calculation of the cohomology of local systems on homological Hopf manifolds and a study of the degeneration of the Hodge-de Rham spectral sequence. My interest in this material stemmed from the lecture of Prof. X.Y. Zhou at the Xiamen conference. I want to thank the organizers of the Xiamen conference for their hospitality and Prof.
Steven Yau for providing useful references related to the material of this paper. \section{Generalizations of Hopf manifolds and cohomology of bundles on their universal covers} \subsection{Main definitions} {\it Notations:} Below, for a compact manifold ${\mathcal H}$, $b_i$ denotes ${\rm rk} H^i({\mathcal H},{\mathbb Q})$. \begin{dfn} A (${\mathbb Q}$-){\it homological Hopf} manifold is a compact complex manifold of dimension $n$ with $b_1=b_{2n-1}=1, b_i=0, i \ne 0,1,2n-1,2n$. Such a manifold $\mathcal H$ is an integral homological Hopf manifold if its integral cohomology is isomorphic to $H^*(S^1 \times S^{2n-1},{\mathbb Z})$. A ${\mathbb Z}$-homological Hopf manifold is called primary if its fundamental group is isomorphic to ${\mathbb Z}$. \end{dfn} ${\mathbb Q}$-homological Hopf manifolds (which may not be ${\mathbb Z}$-homological) were considered, in the case $n=2$, in \cite{ebeling}. \begin{dfn} A {\it topological Hopf} manifold is a complex manifold whose universal cover is the complement of a point in a contractible Stein space. \end{dfn} \begin{dfn} A {\it Hopf} manifold is a complex manifold ${\mathcal H}$ for which the universal cover is biholomorphic to the complement in ${\mathbb C}^n$ of the origin $O \in {\mathbb C}^n$. A Hopf manifold is called primary if the Galois group of the universal covering (or, equivalently, the fundamental group of ${\mathcal H}$) is isomorphic to ${\mathbb Z}$. \end{dfn} In the last decade, many diverse constructions of non-K\"ahler manifolds were proposed (cf. \cite{messer}). An interesting problem is to classify homological or topological (primary or non-primary) Hopf manifolds. \footnote{the terminology is suggested by the more studied problem of the classification of homological projective spaces} More precisely, one would like to describe the topological, differentiable and almost complex manifolds \footnote{cf. \cite{Morita} for a discussion of invariants of almost complex structures on Brieskorn-van de Ven manifolds} which admit a complex structure yielding a homological or topological Hopf manifold. Moreover, one would like to describe the moduli space parametrizing the complex structures on such an almost complex manifold. The main results of Kodaira on the classification of Hopf surfaces can be summarized as follows (cf. \cite{kodairanas}, \cite{kodairaamerjourn}). A Hopf surface is a quotient by a group $G$ which has ${\mathbb Z}$ as its center and such that $G/{\mathbb Z}$ is finite (cf. \cite{kodairaamerjourn}, theorem 30). Any Hopf surface contains a curve. A homological Hopf surface having algebraic dimension equal to zero and containing at least one curve is a Hopf surface (\cite{kodairaamerjourn}, theorem 34; the case of algebraic dimension one is considered in \cite{ebeling}). The first examples of topological Hopf manifolds (which are not Hopf) were found by Brieskorn and van de Ven (cf. \cite{BvdV}). The contractible Stein spaces, the complement of a point in which serves in \cite{BvdV} as the universal cover of the topological Hopf manifold, are the affine hypersurfaces $V \subset {\mathbb C}^{n+1}$, where $V$ is the zero set of a weighted homogeneous polynomial \begin{equation}\label{pham} z_0^{a_0}+...+z_n^{a_n}=0 \end{equation} Here the integers $a_i$ must satisfy conditions which assure that the link of the singularity (\ref{pham}) is a topological (and possibly exotic) sphere. For example, this is the case when $n$ is odd and $a_1=3, a_2=6r-1, a_i=2$ for the remaining $i$ (varying $r$ yields all exotic spheres bounding a parallelizable manifold).
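For the reader's convenience, here is the standard computation of the weights in this example (a routine check, included by us for illustration). The polynomial (\ref{pham}) is weighted homogeneous of degree $d={\rm l.c.m.}(a_0,...,a_n)$ with respect to the weights $b_i=d/a_i$; for the exponents above ($a_1=3$, $a_2=6r-1$, the remaining exponents equal to $2$) one gets $$d={\rm l.c.m.}(3,6r-1,2)=6(6r-1), \ \ \ b_1=2(6r-1),\ b_2=6,\ b_i=3(6r-1) \ (i \ne 1,2),$$ since $6r-1$ is odd and relatively prime to $3$.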
The examples of primary topological Hopf manifolds are obtained as the quotients by the action of the restriction of the following automorphism of the complex linear space: \begin{equation}\label{homotety} T \cdot (z_1,...)= (\lambda^{1 \over {a_1}}z_1,..., \lambda^{1 \over {a_i}}z_i,...) \ \ \ (\vert \lambda \vert <1) \end{equation} (leaving the hypersurface (\ref{pham}) invariant). More generally, consider a complete intersection: \begin{equation}\label{hammcase} \sum_{\nu =1}^{n+k} \alpha_{\mu,\nu}z_{\nu}^{a_{\nu}}=0, \ \ \ \mu=1,...,k \end{equation} with generic coefficients $\alpha_{\mu,\nu}$. The latter assumption assures that (\ref{hammcase}) has an isolated singularity at the origin. Under the appropriate conditions (cf. \cite{hamm} Satz 1.1) the link of the singularity (\ref{hammcase}) is a homology sphere (over ${\mathbb Q}$ or ${\mathbb Z}$). Note that Zaidenberg conjectured that a contractible affine hypersurface $V$, given as the zero set of a polynomial with one isolated singularity, is, up to an automorphism of ${\mathbb C}^n$, the zero set of a weighted homogeneous polynomial (cf. \cite{Zaidenberg}). This would imply that the non-singular loci of the hypersurfaces (\ref{pham}) are the only covering spaces of topological Hopf manifolds having an affine hypersurface as their closure. \subsection{Topological properties of ${\mathbb Q}$-Hopf manifolds} \par \noindent Kodaira's result on the fundamental groups of Hopf surfaces can be extended to topological Hopf manifolds. We shall start by considering the question of when a quotient of $V-O$ is a homological Hopf manifold. \begin{prop}\label{propfundgroups} (i) The fundamental group $\pi_1({\mathcal H})$ of a topological Hopf manifold ${\mathcal H}$ is a central extension of ${\mathbb Z}$ by a finite group. (ii) The biholomorphic equivalence class of the universal cover is an invariant of the deformation type of a topological Hopf manifold. The deformation type of a topological Hopf manifold is given by the type of $V-0$ and the conjugacy class of the subgroup $G \subset Aut_OV$, which is a central extension of ${\mathbb Z}$ by a finite group. (iii) Let $V$ be an affine hypersurface with a ${\mathbb C}^*$-action, or a complete intersection (\ref{hammcase}), with an isolated singularity at the origin. The quotient $V-0/\pi_1$ is a ${\mathbb Q}$-homology Hopf manifold iff the subspace $H^{n-1}(V-O,{\mathbb Q})^{\pi_1}$ of invariants of the action of $\pi_1$ is trivial. In particular, if the link of $V$ is a ${\mathbb Q}$-sphere (resp. ${\mathbb Z}$-sphere) then $V-O/\pi_1$ is a ${\mathbb Q}$-homology (resp. integral) Hopf manifold. \end{prop} \begin{proof} The proof of (i) is a direct generalization of Kodaira's argument. Since $V$ is Stein, by the Remmert embedding theorem (cf. \cite{narasim}) we can assume that $V$ is a subspace of ${\mathbb C}^N$, and select a ball $B \subset {\mathbb C}^N$ centered at the image of $O$. Let $g$ be an element of infinite order in $\pi_1$ acting properly discontinuously on $V-O$. By the Hartogs theorem, $g$ extends to an automorphism of $V$ fixing $O$. Either $g$ or $g^{-1}$ takes $\partial B \cap V$ into $B \cap V$, and $\cap_n g^n(B \cap V)=O$, since the existence of $z \ne O$ in the boundary of this intersection would contradict the proper discontinuity of the action of $\pi_1$. Hence the quotient of $V-0$ by the subgroup $\{g\}$ generated by $g$ is compact. The index of the cyclic subgroup $\{g\}$ in $\pi_1({\mathcal H})$ is equal to the degree of the cover $V-0/\{g\} \rightarrow {\mathcal H}$ and hence is finite.
The subgroup of $\{g\}$ which is normal in $\pi_1({\mathcal H})$ yields the claimed presentation of the latter as a central extension. (ii) is a consequence of the result of Andreotti and Vesentini on the pseudo-rigidity of Stein manifolds (cf. \cite{andreotti}). To see (iii), consider the Hochschild-Serre spectral sequence \begin{equation}\label{hoch} E_2^{p,q}=H^p(\pi_1/{\mathbb Z},H^q({\mathbb Z},W)) \Rightarrow H^{p+q}(\pi_1,W) \end{equation} where $W$ is a ${\mathbb Q}$-vector space with a structure of a $\pi_1$-module and ${\mathbb Z}$ is the central subgroup generated by an element $g \in \pi_1$. Since $\pi_1/{\mathbb Z}$ is finite and $H^q({\mathbb Z},W)$ has no ${\mathbb Z}$-torsion, we have $E_2^{p,q}=0$ for $p \ge 1$. Moreover, $H^q({\mathbb Z},W)$ is the subspace of $g$-invariants (resp. $g$-coinvariants) of $W$ for $q=0$ (resp. $q=1$) and is trivial for $q>1$. Hence in the spectral sequence (\ref{hoch}) there are at most two non-trivial terms, and therefore \begin{equation}\label{calculation} H^q(\pi_1,W)= \begin{cases} W^g & {q =0} \\ W_g & {q=1} \\ 0 & q>1 \end{cases} \end{equation} Using a homotopy equivalence between $V-O$ and the link of the isolated singularity of $V$, we obtain $H^q(V-O,{\mathbb Q})=0$ for $q \ne 0,n-1,n,2n-1$ (cf. \cite{milnor}). Next consider the spectral sequence of the action of $\pi_1$ on the universal cover $V-0$: \begin{equation} H^p(\pi_1,H^q(V-O)) \Rightarrow H^{p+q}(V-O/\pi_1) \end{equation} Applying (\ref{calculation}) with $W={\mathbb Q}$ for $q=0,2n-1$, with $W=H^{n-1}(V-O)$ or $W=H^n(V-0)$ for $q=n-1,n$, and using the $\pi_1$-equivariant identification $H^{n}(V-O)=H^{n-1}(V-O)^*$ (which is a consequence of the Poincar\'e duality for the link of the singularity of $V$), we obtain the result. \end{proof} \subsection{Holomorphic automorphisms of universal covers} Next we shall consider the question of the existence of automorphisms of $V-O$ different from (\ref{homotety}) which generate an infinite cyclic group acting properly discontinuously with compact quotient, i.e., automorphisms yielding primary topological Hopf manifolds. We shall assume that $V$ is the zero set of an arbitrary weighted homogeneous polynomial (i.e., a sum of monomials $z_1^{i_1} \cdots z_n^{i_n}$ such that $\sum {i_k} b_k=d$, where $b_k$ are the weights and $d$ is the degree) \footnote{for the hypersurface (\ref{pham}), one has $b_i={{{\rm l.c.m.}(a_1,...,a_n)} \over {a_i}}, d={\rm l.c.m.}(a_1,...,a_n)$} with isolated singularities. In the case when $V={\mathbb C}^n$, the automorphisms generating infinite, properly discontinuously acting groups with compact quotients were described by Haefliger (cf. \cite{haefl}). \begin{lemma}\label{autBvdV} Let $V$ be the zero set of a weighted homogeneous polynomial $f$ having weights $a_i$ and degree $d$. Then one has the extension: $$0 \rightarrow {\mathbb C}^* \rightarrow Aut(V-0) \rightarrow G \rightarrow 0$$ where the group $G$ is finite if $\sum_i {1 \over {a_i}} < 1$. More generally, one has the same type of exact sequence if $V$ is a complete intersection (\ref{hammcase}) and $\sum {1 \over {a_i}} < k$. \end{lemma} \begin{proof} The affine hypersurface $V$ is an open subset in a ${\mathbb P}^1$-bundle $\bar V \rightarrow V_{\infty}$ over a hypersurface in a weighted projective space, which one can identify with the hyperplane section at infinity.
$Aut V$ is the subgroup of the group $Aut^{\circ} \bar V$ of automorphisms of $\bar V$ leaving invariant the two sections of $\bar V$ (corresponding to the exceptional set of the weighted blowup of the projective closure of $V$ and the hyperplane section at infinity $V_{\infty}$). \footnote{Automorphisms of ruled surfaces were studied in \cite{maruyama}.} Clearly, the ${\mathbb C}^*$-action provides a normal subgroup ${\mathbb C}^*$ of $Aut^{\circ} \bar V$, and the quotient is a subgroup of the group of automorphisms of $V_{\infty}$. The latter hypersurface has ample canonical class if the condition on the weights is met, and hence $V_{\infty}$ has a finite automorphism group. The argument in the case of complete intersections is the same. \end{proof} On the other hand, for the quadric hypersurface $z_1^2+\cdots+z_n^2$ one has many non-trivial automorphisms (cf. \cite{totaro}). For example, one has the automorphism of \begin{equation} X_1X_2+X_3^2+X_4^2+\cdots+X_n^2 \end{equation} given by Danilov and Gizatullin (cf. \cite{Danilov} p.101) and defined by the change of variables: \begin{equation}\label{quadricauto} X'_1=\beta X_1,\ \ \ X'_2=\beta(\alpha^2 X_2+2 \alpha X_3f(X_1)+ X_1f^2(X_1)), \end{equation} $$ X'_3=\sqrt{-1}(\alpha \beta X_3+\beta X_1f(X_1)), \ \ \ X_i'=(\alpha \beta)X_i \ \ \ (4 \le i \le n).$$ It preserves the quadric for all $f$, and no power of (\ref{quadricauto}) has a fixed point on ${\mathbb C}^n-0$. Moreover, the action of the infinite cyclic group generated by (\ref{quadricauto}) is properly discontinuous. The Jacobian matrix of (\ref{quadricauto}) at the origin, in the variables $X_1,X_2,X_3$, is \begin{equation} \begin{pmatrix} \beta & 0 & 0 \cr 0 & \beta \alpha^2 & 2\alpha f(0) \cr \beta f(0) & 0 & \alpha \beta \cr \end{pmatrix}. \end{equation} For $f(0)=0$ its eigenvalues are $\beta$, $\alpha^2 \beta$ and $\alpha \beta$; they all have absolute value less than 1 provided $\vert \beta \vert <1$ and $\vert \alpha \vert <1$. \subsection{Cohomology of bundles on universal covers} The results of sections \ref{cohovectbun} and \ref{hodgederham} deal primarily with the cohomology of bundles on topological Hopf manifolds. They depend on the vanishing of the cohomology of the bundles on the universal covers, which we now review. In the case of topological Hopf manifolds, the vanishing of the cohomology of bundles on the universal cover $V-O$ follows from the vanishing of the cohomology of coherent sheaves on Stein spaces in positive dimensions (Cartan's theorem B) and from the following two results: \begin{theorem}\label{Scheja}(Scheja; cf. \cite{Scheja}, \cite{Siuextension} p. 129) Let $V$ be a complex space, $A$ a subvariety of dimension $\le d$ and ${\mathcal F}$ a coherent sheaf on $V$ such that ${\rm codh} {\mathcal F} \ge d+q$. Then $H^k(V,{\mathcal F}) \rightarrow H^k(V-A,{\mathcal F})$ is an isomorphism for $0 \le k < q-1$ and injective for $k=q-1$. \end{theorem} \noindent and \begin{theorem}\label{siu} (Siu, cf. \cite{siuannals}) Let $V$ be a complex space, $A$ a subvariety of dimension $\le d$, $i: V-A \rightarrow V$ and ${\mathcal F}$ a coherent analytic sheaf on $V-A$. If ${\rm codh} {\mathcal F} \ge d+3$ then $i_*({\mathcal F})$ is a coherent sheaf (on $V$). \end{theorem} \begin{corollary}\label{cohomologyoncover} Let $V$ be a Stein space with an isolated singularity $O$ and let ${\mathcal F}$ be a coherent sheaf on $V-O$. Assume that ${\rm dim} V \ge 3$. Then ${\mathcal F}$ extends to a coherent sheaf on $V$ and $H^k(V-O,{\mathcal F})=0$ for $0 <k \le d-1$, where $d$ is the cohomological codimension of the stalk of this extension at $O$.
In particular, if ${\mathcal F}$ is a locally trivial bundle on $V-O$ which admits a locally trivial extension to $V$, then $H^k(V-O,{\mathcal F})=0$ for $k \ne 0, n-1$. \end{corollary} \begin{proof} Indeed, $hd_{V-O} {\mathcal F}=0$ since ${\mathcal F}$ is locally free, and hence ${\rm codh} {\mathcal F}={\rm dim} V$. Therefore $i_*({\mathcal F})$ is coherent. Taking $A=O$ in theorem \ref{Scheja} we see that $H^k(V-0,{\mathcal F}) \rightarrow H^k(V,i_*({\mathcal F}))$ is injective for $0 < k \le d-1$, and Cartan's theorem B yields the first claim. In the case when the extension is locally trivial we have $d={\rm dim} V$, and hence the second assertion follows. \end{proof} \begin{remark} Consider the case when the bundle ${\mathcal F}$ on ${\mathbb C}^n-0$ is a pullback of a bundle ${\mathcal F}'$ on ${\mathbb P}^{n-1}$ via the Hopf map $\pi: {\mathbb C}^n-0 \buildrel {\mathbb C}^* \over \rightarrow {\mathbb P}^{n-1}$. Then ${\mathcal F}$ extends to a locally trivial bundle on ${\mathbb C}^n$ if and only if ${\mathcal F}'$ is a direct sum of line bundles (\cite{serre}). The cohomology of ${\mathcal F}$ can be found using the Leray spectral sequence $H^p({\mathbb P}^{n-1},R^q\pi_*({\mathcal F})) \Rightarrow H^{p+q}({\mathbb C}^n-O,{\mathcal F})$. Since by the projection formula $R^q\pi_*(\pi^*({\mathcal F}'))=R^q\pi_*({\mathcal O}_{{\mathbb C}^n-O})\otimes {\mathcal F}'$, and since the fibers of $\pi$ are Stein (in fact just ${\mathbb C}^*$), we have $R^q\pi_*({\mathcal O}_{{\mathbb C}^n-O})=0$ for $q \ne 0$. Hence the above Leray spectral sequence degenerates, and the vanishing $H^q({\mathbb C}^n-O,{\mathcal F})=0$ for $0< q\le n-2$ follows from the well-known vanishing of the cohomology of line bundles on ${\mathbb P}^{n-1}$ in all dimensions $\ne 0,n-1$. Note that the vanishing of the cohomology of a bundle ${\mathcal F}'$ on ${\mathbb P}^{n-1}$ in the indicated range is closely related to the existence of a splitting of the bundle (cf. \cite{bundles} p.39). \end{remark} \begin{remark} It follows from the result on extensions of the bundles $\pi^*({\mathcal F})$ to ${\mathbb C}^n$ mentioned in the previous remark that $\pi^*(\Omega^p_{{\mathbb P}^{n-1}})$ cannot be extended to a {\it locally trivial} bundle on ${\mathbb C}^n$, and hence corollary \ref{cohomologyoncover} does not yield information on the cohomology of this bundle. \end{remark} \begin{remark} Note that the canonical class of the ${\mathbb Z}$-quotients of ${\mathbb C}^{n+1}-O$ or of the hypersurfaces (\ref{pham}) is given by the effective divisor: \begin{equation} K_{{\mathcal H}}=\sum_{i=0}^{i={n}} D_i \end{equation} where $D_i$ is the divisor on ${\mathcal H}$, biholomorphic to a Hopf manifold (resp. a Brieskorn-van de Ven manifold, for quotients of (\ref{pham})), which is the image of the affine hypersurface in ${\mathbb C}^{n+1}$ (resp. in (\ref{pham})) given by $z_i=0$ ($i=0,...,n$). Indeed, the form ${{dz_0} \over {z_0}} \wedge ...\wedge {{dz_{n}}\over {z_n}}$ is a meromorphic form with poles at $z_0 \cdots z_n=0$ on the universal cover, invariant under the deck transformations, and hence descends to the quotient. For the quotient of the hypersurface (\ref{pham}) one has: \begin{equation}\label{canonicalbrieskorn} K_{{\mathcal H}}=-a_0D_0+\sum_{i} D_i \end{equation} Indeed, the restriction to $V$ of the meromorphic form $\omega_1={{dz_1} \over {z_1}} \wedge ...\wedge {{dz_{n}}\over {z_n}}$, invariant under the action of (\ref{homotety}), descends to a meromorphic form on ${\mathcal H}$.
On the other hand, the form $\omega_2={\rm Res}\, {{dz_0 \wedge ... \wedge dz_n} \over {z_0^{a_0}+...+z_n^{a_n}}}= {{dz_1 \wedge ...\wedge dz_{n}}\over {z_0^{a_0-1}}}$ is holomorphic and non-vanishing on $V-0$. The formula (\ref{canonicalbrieskorn}) follows by comparison of the divisors of the forms $\omega_1$ and $\omega_2$ on ${\mathbb C}^n$. \end{remark} In the study of the cohomology of the bundles $\Omega^p(E)$ on Hopf manifolds we restrict ourselves to the case when $V-O$ is the complement of the fixed point in a hypersurface in ${\mathbb C}^{n+1}$ supporting a ${\mathbb C}^*$-action, or in a complete intersection (\ref{hammcase}). The needed results on the cohomology of sheaves of differential forms are essentially contained in \cite{Yau} in the case of hypersurfaces and in \cite{Vose} in the case of complete intersections. \begin{lemma} Let $V$ be a hypersurface (\ref{pham}) or a complete intersection (\ref{hammcase}). Then one has the following: $${{\rm rk}} H^q(V-0,\Omega^p)= \begin{cases}0 & p+q \le n-2, \ \ 1 \le q \le n-2 \\ \tau & p+q=n-1,n, \ \ 1 \le q \le n-2 \\ 0 & p+q \ge n+1, \ \ 1 \le q \le n-2 \end{cases}$$ \end{lemma} From this it follows: \begin{lemma}\label{maxideallemma} If the multiplicity of $V$ at the origin is greater than one, then for the automorphism $T^*_{H^q(V-0,\Omega^p)}$ of $H^q(V-0,\Omega^p)$ induced by (\ref{homotety}) one has $${\rm dim} {\rm Ker} (T^*_{H^q(V-0,\Omega^p)}-I)=1.$$ \end{lemma} \begin{proof} In \cite{Yau}, S. Yau obtained, for a hypersurface $V$, an isomorphism between $H^q(V-0,\Omega^p)$ and the vector space $$M_f={\mathbb C}[z_1,....,z_{n+1}]/(...{{\partial f}\over {\partial z_i}}...)$$ (or its dual). One has a splitting $M_{f}={\mathbb C} \cdot 1 \oplus {\mathcal M}$, where $\mathcal M$ is the image in the quotient ring of the maximal ideal of the local ring at the origin. This image is a vector space which has the monomials $z_1^{j_1} \cdots z_{n+1}^{j_{n+1}}$ with $0 \le j_i <a_i$, $\sum_i j_i >0$ as its basis. The action of the automorphism $T$ is given by $z_i \rightarrow \lambda_i z_i$, and the above monomials are eigenvectors of this action with eigenvalues all different from 1. Hence ${\rm Ker} (T^*_{H^q(V-0,\Omega^p)}-I)$ corresponds to the summand ${\mathbb C} \cdot 1$. A similar argument works in the case of complete intersections, using Prop. 1.3 (d) of \cite{Vose}. \end{proof} \section{Cohomology of local systems on topological Hopf manifolds} \begin{prop}\label{homologylocalsystems} Let ${\mathcal L}$ be a local system on a topological Hopf manifold ${\mathcal H}$. Then $$ H^i({\mathcal L})=0 \ \ \ {\rm for} \ \ i\ne 0,1,2n-1,2n $$ $$ {\rm dim} H^0({\mathcal L})={\rm dim} H^1({\mathcal L})={\rm dim} H^{2n-1}({\mathcal L})={\rm dim} H^{2n}({\mathcal L})=1 $$ \end{prop} \begin{proof} Let $\tilde {\mathcal H}$ be the universal covering space. The spectral sequence for the action of a subgroup ${\mathbb Z}$ (generated by $T$) of the center of $\pi_1({\mathcal H})$ on $\tilde {\mathcal H}$ yields: \begin{equation} H^p(\tilde {\mathcal H}) \buildrel T-I \over \rightarrow H^p(\tilde {\mathcal H}) \rightarrow H^p(\tilde {\mathcal H}/{\mathbb Z}) \rightarrow H^{p+1}(\tilde {\mathcal H}) \end{equation} Since $$H^p(\tilde {\mathcal H},{\mathbb Q})= \begin{cases}0 & p \ne 0,2n-1 \\ {\mathbb Q} & p=0,2n-1 \end{cases}$$ and the action of $T$ is trivial for both $p=0,2n-1$, we obtain the claim for the cohomology of the primary Hopf manifold $\tilde {\mathcal H}/{\mathbb Z}$.
The claim for the local systems on $\tilde {\mathcal H}/\pi_1({\mathcal H})$ follows from the spectral sequence (below $\sigma: \tilde {\mathcal H}/{\mathbb Z} \rightarrow {\mathcal H}$) \begin{equation} H^p(\pi_1({\mathcal H})/{\mathbb Z},H^q(\sigma^*({\mathcal L}))) \Rightarrow H^{p+q}({\mathcal H},{\mathcal L}) \end{equation} since the group $\pi_1({\mathcal H})/{\mathbb Z}$ is finite. \end{proof} \begin{remark} One can obtain the conclusion of Proposition \ref{homologylocalsystems} for homological Hopf manifolds if one makes additional assumptions on the fundamental group, e.g., for ${\mathcal H}$ such that $\pi_1({\mathcal H})$ satisfies the conclusions of Proposition \ref{propfundgroups} (i), or, more generally, assumptions on the characteristic varieties of the group $\pi_1({\mathcal H})$ (cf. \cite{libgober}). \end{remark} \section{Cohomology of vector bundles}\label{cohovectbun} Let $X$ be a complex space on which a group $G$ acts freely via holomorphic maps, and let $\pi: X \rightarrow X/G$. Let ${\mathcal F}$ be a coherent sheaf on $X/G$. Let $\bullet ^G: V \rightarrow V^G$ be the functor assigning to a ${\mathbb C}$-vector space with a $G$-action the subspace of invariants. One has a canonical isomorphism (cf. \cite{Groth}): $\Gamma(X,\pi^*({\mathcal F}))^G= \Gamma(X/G,{\mathcal F})$. The corresponding spectral sequence of the composition of functors is given by (cf. \cite{Groth}): \begin{equation}\label{spectralsequence} E_2^{p,q}=H^p(G,H^q(X,\pi^*{\mathcal F})) \Rightarrow H^{p+q}(X/G,{\mathcal F}) \end{equation} In particular, this can be applied to the case when $X$ is a complex manifold and $\pi: \tilde X \rightarrow X$ is its universal covering space. Let ${\mathcal F}$ be a coherent sheaf on $X$. The above spectral sequence becomes: \begin{equation} E_2^{p,q}= H^p({\mathbb Z},H^q(\tilde X,\pi^*{\mathcal F})) \Rightarrow H^{p+q}(X,{\mathcal F}) \end{equation} In the case when $G={\mathbb Z}$, and hence has cohomological dimension equal to $1$, this spectral sequence degenerates in the term $E_2$ (due to the vanishing $H^i({\mathbb Z},V)=0$, $i \ge 2$, for a ${\mathbb Z}$-module $V$). One has $E_2^{0,q}=H^q(X,\pi^*({\mathcal F}))^{{\mathbb Z}}={\rm Ker} (T-Id: H^q(X,\pi^*{{\mathcal F}}) \rightarrow H^q(X,\pi^*{{\mathcal F}}))$ and $E_2^{1,q}={\rm Coker} (T-Id: H^q(X,\pi^*{{\mathcal F}}) \rightarrow H^q(X,\pi^*{{\mathcal F}}))$. Hence we have the exact sequence: \begin{equation}\label{milnor} \rightarrow H^{q-1}(X,\pi^*({\mathcal F})) \buildrel T-1 \over \rightarrow H^{q-1}(X,\pi^*({\mathcal F})) \rightarrow H^q(X/G,{\mathcal F}) \end{equation} $$\rightarrow H^q(X,\pi^*({\mathcal F})) \buildrel T-1 \over \rightarrow H^{q}(X,\pi^*({\mathcal F}))$$ Note that we have $H^i(\tilde X,{\mathcal O})=0$ for $i \ne 0,n-1,n$ if $\tilde X={\mathbb C}^n-0$. {\it Proof of the theorem \ref{cohomologyresults}}. The finiteness assumption and the exact sequence (for a sheaf ${\mathcal F}$ on $\bar V$): \begin{equation} H^q(\bar V,{\mathcal F}) \rightarrow H^q(V,{\mathcal F}) \rightarrow H^{q+1}_A(\bar V,{\mathcal F}) \end{equation} yield that $H^q(V,p^*(E))$ is finite dimensional for $1 \le q \le r$, while it is infinite dimensional for $q=0$. Moreover, the space $H^0(V,p^*(E))$ is a Montel space with the semi-norms given by the compact subsets $\bigcup_i ( f^i(U) \cup f^{-i}(U))$, where $U$ is the closure of a fundamental domain of an infinite order transformation $f$ on $V$ in the center of $\pi_1(H)$. In particular, $f^*$ is compact, hence the index of $I-f^*$ acting on $H^0(V,p^*(E))$ is zero, i.e., it has equal dimensions of kernel and cokernel.
This and the exact sequence relating the cohomology of the cover and of the quotient discussed above yield the claim. \qed {\it Proof of the theorem \ref{vanishing}.} We have $H^q(V-O,\Omega^p_{V-O} \otimes \pi^*(E))=0$ unless $q=0,n-1-p,n-p,n-1$ (for $p \ge 2$). In the cases $q=0,n-1$ the spaces are infinite dimensional, but the operators on $H^q$ induced by $T$ are Fredholm with zero index. Hence the exact sequence (\ref{milnor}) yields: \begin{equation} {\rm dim} H^q(V-0/G,\Omega^p(E))={\rm dim} H^{q+1}(V-0/G,\Omega^p(E)) \ \ \ q=0,n-1 \end{equation} $$H^q(V-0/G,\Omega^p(E))=0 \ \ \ q \ne n-1-p,n-p, 0,1,n-1,n $$ $${\rm dim} H^{n-2-p}(V-0/G,\Omega^p(E))=\tau, $$ $${\rm dim} H^{n-1-p}(V-0/G,\Omega^p(E))= \tau, $$ $${\rm dim} H^{n-p}(V-0/G,\Omega^p(E))=\tau \ \ \ p \ne 0,1.$$ Indeed, for $q=0,1,n-1,n$ the cohomology groups of bundles on the Hopf manifold are the kernels and cokernels of Fredholm operators which, as was mentioned earlier, have zero index. For $q \ne 0,1,n-1,n$, the group $H^q(V-0,\Omega^p\otimes \pi^*(E))$ contributes $\tau$ to ${\rm dim} H^q$ and ${\rm dim} H^{q+1}$, since the action of the covering group on the cohomology $H^{n-2-p}(V-0,\Omega^p(E))$ is trivial and the remaining groups vanish. Now the result follows from lemma \ref{maxideallemma}. \begin{corollary} Let ${\mathcal H}$ be a homological Hopf manifold as in theorem \ref{vanishing}. Then $Pic({\mathcal H})={\mathbb C}^*$. \end{corollary} \begin{proof} We have $H^0({\mathcal H},{\mathcal O})=H^1({\mathcal H},{\mathcal O})={\mathbb C}$ from theorem \ref{vanishing}. Hence the cohomology sequence of the exponential sequence $0 \rightarrow {\mathbb Z} \rightarrow {\mathcal O} \rightarrow {\mathcal O}^* \rightarrow 0$ has the form $H^1({\mathcal H},{\mathbb Z}) \rightarrow {\mathbb C} \rightarrow H^1({\mathcal H},{\mathcal O}^*) \rightarrow H^2({\mathcal H},{\mathbb Z})$. Since $H^2({\mathcal H},{\mathbb Q})=0$, this yields the claim. \end{proof} \section{Hodge-de Rham spectral sequence}\label{hodgederham} \bigskip Recall that the category of local systems on a manifold is equivalent to the category of locally constant sheaves, which in turn is equivalent to the category of locally free sheaves with an integrable connection (cf. \cite{deligne}). For a local system ${\mathcal L}$, let $E_{{\mathcal L}}$ be the corresponding locally free sheaf and $\nabla_{{\mathcal L}}$ the corresponding flat connection on $E_{{\mathcal L}}$. \begin{prop} Let ${\mathcal H}$ be a Hopf manifold, $E$ a bundle on ${\mathcal H}$ with a trivial pullback to the universal cover, and ${\mathcal L}$ the corresponding local system. Then the Hodge-de Rham spectral sequence: \begin{equation} H^q({\mathcal H},\Omega^p(E)) \Rightarrow H^{p+q}({\mathcal H},{\mathcal L}) \end{equation} degenerates in the term $E_2$. \end{prop} \begin{proof} In order to calculate the differential $d_1$, i.e., the map $E_1^{p,q} \rightarrow E_1^{p+1,q}$ for $q=0,1$, recall that the terms $E_1^{p,0}$ and $E_1^{p,1}$ are respectively the invariants and the coinvariants of the selfmap of $\Gamma({\mathbb C}^n,\Omega^p \otimes p^*(E))$ induced by the map $T: {\mathbb C}^n \rightarrow {\mathbb C}^n$ (such that $({\mathbb C}^n-O)/T={\mathcal H}$). On the other hand, the holomorphic de Rham complex on ${\mathbb C}^n$: \begin{equation} \Gamma({\mathbb C}^n,\Omega^p) \buildrel d \over \rightarrow \Gamma({\mathbb C}^n,\Omega^{p+1}) \end{equation} is exact. The explicit construction of the holomorphic form $\eta$ such that for a given closed form $\omega$ one has $\omega=d\eta$ shows that $\eta$ is $T$-invariant if $\omega$ is.
Hence $d$ induces an exact sequence of $T$-invariant forms, i.e., the term $E_2$ is zero (in particular, one can recover the vanishing results of Prop. \ref{homologylocalsystems}). \end{proof}
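\begin{remark} For the reader's convenience we spell out the last step (a standard computation; we state it under the assumption that $T$ is linear, as is the case for (\ref{homotety})). For a holomorphic $p$-form $\omega$ on ${\mathbb C}^n$, $p \ge 1$, the radial homotopy operator
$$ (K\omega)_z(v_1,\dots,v_{p-1})=\int_0^1 t^{p-1}\,\omega_{tz}(z,v_1,\dots,v_{p-1})\,dt $$
satisfies $dK+Kd={\rm id}$, so that $\eta=K\omega$ solves $d\eta=\omega$ for a closed $\omega$. If $T$ is linear, then $T(tz)=tT(z)$ and therefore $K\circ T^*=T^*\circ K$; hence $\eta=K\omega$ is $T$-invariant whenever $\omega$ is. \end{remark}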
\section{Introduction} Kagom\'{e} lattices have attracted considerable interest in connection with geometric frustration in condensed-matter physics.\cite{atwood_kagome_2002} There is an extensive degeneracy of nondispersing resonant modes in a resonator system with kagom\'{e} symmetry. These eigenmodes form a flat band, where the resonant frequency of the band is the same for all wavevectors in the first Brillouin zone. The flat band originates purely from the topology of the lattice structure, and it remains flat even when the couplings between adjacent sites are significantly large. Furthermore, such flat bands can lead to ferromagnetism of itinerant fermions,\cite{lieb_two_1989,mielke_ferromagnetic_1991,mielke_ferromagnetism_1991,tasaki_ferromagnetism_1992} supersolidity for bosons,\cite{huber_bose_2010,moeller_correlated_2012} crystalline ordering,\cite{wu_flat_2007} and other effects. Although flat-band formation was first expected in quantum systems, it can also occur in electromagnetic systems. The presence of electromagnetic flat bands in kagom\'{e} lattices has already been predicted theoretically in some electromagnetic systems, such as two-dimensional photonic crystals\cite{takeda_flat_2004} and metallophotonic waveguide networks.\cite{endo_tight-binding_2010} In the flat band, the group velocity is slowed down in all directions and the effective mass of the photons becomes very heavy. It is important to study the flat band in an electromagnetic system with kagom\'{e} symmetry not only in terms of fundamental physics, but also from an application standpoint, such as slow light;\cite{baba_slow_2008} however, there has been no experimental demonstration of the electromagnetic flat band. Here, we focus on the flat band for a terahertz (THz) plasmonic mode. Although there is no surface plasmon of metals in the THz region, modes having a dispersion relation similar to that of surface plasmons are formed in structured metals, and are called spoof surface plasmons.\cite{pendry_mimicking_2004,garcia-vidal_surfaces_2005,williams_highly_2008,maier_terahertz_2006} In this paper, we theoretically and experimentally obtain the dispersion relation for a spoof plasmon in the metallic kagom\'{e} lattice and demonstrate the electromagnetic flat band in the THz regime. Numerical simulations are also performed to provide confirmation of the experiments. \section{Theoretical model} \begin{figure}[!b] \includegraphics[width=8.5cm]{kbdrs.eps} \caption{\label{fig:kbdrs}(Color online) (a) Schematic view of kagom\'{e}-type bar-disk resonators (KBDRs). (b) The fabricated KBDRs on a stainless-steel plate with $l=800\,\U{\mu m}$, $d=10\,\U{\mu m}$, $r=145\,\U{\mu m}$, and thickness $h=30\,\U{\mu m}$. } \end{figure} We introduce kagom\'{e}-type bar-disk resonators (KBDRs), shown in Fig.~\ref{fig:kbdrs}(a). Metallic disks and narrow bars are connected to form a kagom\'{e} lattice. KBDRs are artificially engineered metallic structures, and can be considered a kind of metamaterial.\cite{veselago_electrodynamics_1968,shelby_experimental_2001,pendry_magnetism_1999,pendry_negative_2000,leonhardt_optical_2006,pendry_controlling_2006,schurig_metamaterial_2006,liu_coupled_2009} In KBDRs, the electric charge stored on each disk oscillates in time between positive and negative values. We discuss the formation of a flat band in KBDRs by using a coupled-oscillator model. We denote the electric potential at the $i$th disk as $\phi_i$.
Introducing $\Phi_i:=\int \phi_i {\mathrm{d}} t$, we obtain the Lagrangian $\mathcal{L}$ as \begin{equation} \mathcal{L}= \frac{C}{2} \sum_{i} \dot{\Phi}_i^2 + C\sub{M} \sum_{i,j} \frac{A_{ij}}{2}\dot{\Phi}_i\dot{\Phi}_j- \frac{1}{2L}\sum_{i,j} \frac{A_{ij}}{2}(\Phi_i-\Phi_j)^2, \label{eq:1} \end{equation} with capacitance $C$ of the disk, inductance $L$ of the bar, coefficient of electric induction $C_\mathrm{M}$ between nearest disks, and adjacency matrix $A_{ij}$ of the kagom\'{e} lattice, whose element is $1$ if the $i$th and $j$th disks ($i\ne j$) are directly connected by a bar, and $0$ otherwise.\cite{biggs_algebraic_1994} The first, second, and third terms of Eq.~(\ref{eq:1}) represent the electric energy stored on the disks, the mutual electric energy stored between disks, and the magnetic energy stored around the bars, respectively. Here, we consider only the nearest mutual couplings. Using the Euler-Lagrange equation $({{\mathrm{d}}}/{{\mathrm{d}} t})({\partial \mathcal L}/{\partial \dot{\Phi}_i})- {\partial \mathcal L}/{\partial \Phi_i}=0$, we obtain the coupled charge equations \begin{equation} \ddot{q}_i + \omega_0^2\big(4q_i-\sum_jA_{ij} q_j\big)+\kappa \sum_jA_{ij}\ddot{q}_j=0, \label{eq:2} \end{equation} with stored charge $q_i=C\dot{\Phi}_i$ at the $i$th disk, resonant angular frequency $\omega_0=1/\sqrt{LC}$, and coupling coefficient $\kappa=C\sub{M}/C$. In the frequency domain, we rewrite Eq.~(\ref{eq:2}) as \begin{equation} \sum_j A_{ij} \tilde{q}_j = \frac{4-(\omega/\omega_0)^2}{1+\kappa(\omega/\omega_0)^2} \tilde{q}_i, \label{eq:3} \end{equation} where tildes represent complex amplitudes. Owing to the lattice symmetry, we can reduce Eq.~(\ref{eq:3}) to an eigenvalue problem for a $3\times3$ matrix and obtain the dispersion relation consisting of three bands as \begin{equation} \frac{\omega}{\omega_0} = \sqrt{\frac{6}{1-2\kappa}},\ \sqrt{\frac{3+2(3+F)\kappa\pm(1+4\kappa)\sqrt{3+2F}}{1+2\kappa-2(1+F)\kappa^2}}, \label{eq:4} \end{equation} where $F=\cos \vct{k}_\parallel\cdot \vct{a}_1+\cos \vct{k}_\parallel\cdot \vct{a}_2+\cos \vct{k}_\parallel\cdot (\vct{a}_1-\vct{a}_2)$ with wavevector $\vct{k}_\parallel$ in the $xy$ plane and unit-lattice vectors $\{\vct{a}_1,\ \vct{a}_2\}$ shown in Fig.~\ref{fig:kbdrs}(a). A calculated dispersion relation is shown in Fig.~\ref{fig:theoretical_band}(a) for $\kappa=0$. It is clear that the highest band $\omega/\omega_0=\sqrt{6/(1-2\kappa)}$ is flat, i.e., independent of $\vct{k}_\parallel$. It can be seen that the lowest band shows conical dispersion near the $\Gamma$ point and that the bending middle band touches the flat band at the $\Gamma$ point. The flat band is caused by the interference of spoof plasmons propagating in the kagom\'{e} lattice. The adjacency matrix $A_{ij}$ of the kagom\'{e} lattice has eigenmodes localized at hexagonal sites with an eigenvalue of $-2$. One of them is shown in Fig.~\ref{fig:theoretical_band}(b). The number of these eigenmodes is equal to the number of hexagons in the kagom\'{e} lattice. The flat band is formed from these degenerate localized modes, as they are not coupled with each other. \begin{figure}[!t] \includegraphics[width=8.5cm]{theoretical_band.eps} \caption{\label{fig:theoretical_band}(Color online) (a) Dispersion relation of KBDRs for $\kappa=0$. (b) Localized eigenmode of KBDRs. } \end{figure} \begin{figure}[!b] \includegraphics[width=8.5cm]{experiment.eps} \caption{\label{fig:experiment}(Color online) Schematic view of the transmission experiment.
The plane of a sample is represented by the coordinate system $(x,y)$ shown in Fig.~\ref{fig:kbdrs}(a). } \end{figure} \section{Experimental setup} We fabricate KBDRs on a stainless-steel plate. The dimensions depicted in Fig.~\ref{fig:kbdrs}(a) are as follows: period between bars $l=800\,\U{\mu m}$, bar width $d=10\,\U{\mu m}$, disk radius $r=145\,\U{\mu m}$, and metal thickness $h=30\,\U{\mu m}$. The size of the area patterned with KBDRs is $1.1\,\U{cm}\times1.1\,\U{cm}$. A photomicrograph of a fabricated sample is shown in Fig.~\ref{fig:kbdrs}(b). To investigate the dispersion relation experimentally, we perform THz time-domain spectroscopy (THz-TDS), shown in Fig.~\ref{fig:experiment}. A THz emitter and detector (EKSPLA Ltd.) with dipole antennas attached to Si lenses are used. These antennas are integrated on low-temperature-grown GaAs photoconductors, and driven by a femtosecond fiber laser (F-100, IMRA America, Inc.) with a wavelength of $810\,\U{nm}$ and a pulse duration of $120\,\U{fs}$. The THz beam is collimated with the Si lens near the emitter. The beam radius is about $3.7\,\U{mm}$, which covers a large number of KBDRs. The THz electric field $E(t)$ is coherently measured by the detector. We obtain the transmission spectrum $T(\omega)$ in the frequency domain from $T(\omega)=|\tilde{E}\sub{s}(\omega)/\tilde{E}\sub{ref}(\omega)|^2$, where $\tilde{E}\sub{s}(\omega)$ and $\tilde{E}\sub{ref}(\omega)$ are the Fourier-transformed electric fields with and without the sample, respectively. In order to obtain the band structure between the $\Gamma$ point and the $\mathrm{M}$ point, the sample is rotated by $\theta$ with respect to the $y$ axis from normal incidence (Fig.~\ref{fig:experiment}). The angle $\theta$ ranges from $0^\circ$ to $65^\circ$ in steps of $\Delta\theta = 2.5^\circ$. The magnitude of the wavevector $\vct{k}_\parallel$ on the sample plane is given by $k_\parallel=(\omega/c) \sin\theta,$ where $c$ is the speed of light. We perform transmission experiments for two polarizations, along the $x'$ axis (parallel configuration) and the $y$ axis (perpendicular configuration). We denote the electric field of the incident wave as $\vct{E}$, and the projection of $\vct{E}$ onto the sample plane as $\vct{E}_\parallel$. For the parallel or perpendicular configuration, $\vct{E}_\parallel$ is parallel or perpendicular, respectively, to $\vct{k}_\parallel$. Wire-grid polarizers near the emitter and detector are adjusted so that the emitted and detected fields have the same polarization. \section{Results} Figure~\ref{fig:results_para4}(a) displays the transmission spectrum for the parallel configuration. The wavevectors are estimated as $k_\parallel=(\omega/c) \sin\theta$. Transmission spectrum minima are observed from $0.21\,\U{THz}$ to $0.28\,\U{THz}$. As the wavenumber increases, the frequency of the transmission minimum decreases from $0.28\,\U{THz}$ at the $\Gamma$ point and approaches $0.21\,\U{THz}$ at the $\mathrm{M}$ point. For further investigation, we calculate the electromagnetic response of KBDRs for the parallel configuration. A commercial finite-element method solver (Ansoft HFSS) is used. In the simulation, a plane THz wave is injected onto perfectly conducting KBDRs at an incident angle $\theta$. By using periodic boundaries with appropriate phase shifts, the transmission and the electromagnetic fields in the unit cell are calculated for an obliquely incident plane wave.
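As a cross-check of the coupled-oscillator model, the three bands of Eq.~(\ref{eq:4}) can also be evaluated numerically. The following minimal Python sketch is our illustration (it is not part of the measurement or the finite-element analysis); it assumes unit-lattice vectors at $60^\circ$ to each other, for which, along the $\Gamma$--$\mathrm{M}$ line, $\vct{k}_\parallel\cdot\vct{a}_1=\vct{k}_\parallel\cdot\vct{a}_2\equiv s$ with $s\in[0,\pi]$, so that $F=1+2\cos s$:

\begin{verbatim}
import numpy as np

def kbdr_bands(F, kappa):
    # Normalized frequencies omega/omega_0 of the three bands of Eq. (4);
    # F = cos(k.a1) + cos(k.a2) + cos(k.(a1 - a2)). The first band is flat.
    flat = np.full_like(F, np.sqrt(6.0 / (1.0 - 2.0 * kappa)))
    num = 3.0 + 2.0 * (3.0 + F) * kappa
    disc = (1.0 + 4.0 * kappa) * np.sqrt(3.0 + 2.0 * F)
    den = 1.0 + 2.0 * kappa - 2.0 * (1.0 + F) * kappa**2
    return flat, np.sqrt((num + disc) / den), np.sqrt((num - disc) / den)

s = np.linspace(0.0, np.pi, 201)      # Gamma -> M
F = 1.0 + 2.0 * np.cos(s)
flat, middle, lowest = kbdr_bands(F, kappa=0.103)
\end{verbatim}

With the parameters fitted below ($\omega_0/(2\pi)=0.105\,\U{THz}$, $\kappa=0.103$), the flat band lies at $\sqrt{6/(1-2\kappa)}\,\omega_0/(2\pi) \approx 0.29\,\U{THz}$, and the middle band reaches exactly $\omega/\omega_0=2$, i.e., $0.21\,\U{THz}$, at the $\mathrm{M}$ point, consistent with the measured minima.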
The measured transmission spectra for $\theta =20^\circ$ are compared with the simulation in Fig.~\ref{fig:results_para4}(b). \begin{figure}[!tb] \includegraphics[width=8.3cm]{results_para4.eps} \caption{\label{fig:results_para4} (Color online) Parallel configuration ($\vct{E}_\parallel \parallel \vct{k}_\parallel$). (a) Experimentally obtained transmission diagram of KBDRs. Transmission minima between $0.21\,\U{THz}$ and $0.28\,\U{THz}$ are observed and theoretically fitted by a dotted line. The solid line corresponds to $\theta=20^\circ$. (b) Transmission spectrum for $\theta=20^\circ$ obtained by simulation and experiment. (c) Surface electric charge distribution obtained by simulation at the transmission minimum $0.255\,\U{THz}$ for $\theta=20^\circ$ in the unit cell. } \end{figure} The frequency of the transmission minimum and the shape of the curve for the simulation are consistent with the experimental result, which confirms the validity of the assumption of perfect conductors. Figure~\ref{fig:results_para4}(c) shows the calculated distribution of surface electric charges at a minimum ($0.255\,\U{THz}$) for $\theta=20^\circ$. This mode corresponds to the middle band. Disks along the $x$ axis are alternately charged. The in-phase currents along $\vct{a}_1$ and $\vct{a}_2$ are excited by the electromotive force due to $\vct{E}$. No resonance appears for $\theta=0$ because the current flowing into a disk is balanced by the current flowing out of it. For the excitation of this mode, a phase-shifted field in the $x$-direction is needed. By using Eq.~(\ref{eq:4}), the fitting parameters are obtained from the experimental data as $\omega_0/(2\pi)=0.105\,\U{THz}$ and $\kappa=0.103$. The resultant dispersion curve is represented as a dotted curve in Fig.~\ref{fig:results_para4}(a). Positive charges on one disk induce negative charges on the other; therefore, $\kappa<0$ is ordinarily expected in the static limit ($\omega\rightarrow 0$). It may thus seem strange that $\kappa$ is positive. In our situation, this can be explained by a retardation effect.\cite{solymar_waves_2009} The phase shift between nearest disks is given by $(\omega_c/c)\times l/\sqrt{3} \sim 0.77\times \pi$ at the center frequency $\omega_c/(2\pi)=0.25\,\U{THz}$ of the middle band. The near-$\pi$ shift leads to $\kappa>0$. Although $\kappa$ depends on frequency, we can approximately regard it as a constant between $0.2\,\U{THz}$ and $0.3\,\U{THz}$. Figure~\ref{fig:results_perp4}(a) displays the transmission spectrum for the perpendicular configuration. Unlike in the case of the parallel configuration, a flat band of transmission minima is observed around $0.28\,\U{THz}$. In order to confirm that the flat band is due to the interference of a spoof plasmon, we perform a simulation for the perpendicular configuration. The transmission spectrum calculated by simulation for $\theta=20^\circ$ is shown in Fig.~\ref{fig:results_perp4}(b) together with the experimental data. We see a good agreement in the frequency of the transmission minimum and the shape of the curve. The calculated distribution of surface electric charges at a minimum ($0.278\,\U{THz}$) for $\theta=20^\circ$ is shown in Fig.~\ref{fig:results_perp4}(c). The resonant mode has antisymmetric amplitudes on the right two disks, and there is no charge stored on the left disk. This mode can be constructed from the localized modes shown in Fig.~\ref{fig:theoretical_band}(b). Therefore, the flat transmission minima are caused by the topological nature of the kagom\'{e} lattice.
The mode is excited by the anti-phase electromotive forces caused by $\vct{E}$ along the bars parallel to $\vct{a}_1$ and $\vct{a}_2$. The electromotive force along the vertical bars does not contribute to the storage of charges on the disks because the currents flowing into and out of a disk are balanced. In the case of $\theta=0$, the current flowing into a disk is equal to the current flowing out of it, and the flat-band mode cannot be excited. A dotted line in Fig.~\ref{fig:results_perp4}(a) represents the highest band given by Eq.~(\ref{eq:4}) with the previously derived parameters $\omega_0/(2\pi)=0.105\,\U{THz}$ and $\kappa=0.103$. It fits the experimentally obtained minima well. The bending of the flat band caused by coupling to second- (or higher-) nearest sites is negligibly small, so the assumption of only nearest-neighbor disk coupling is appropriate. \begin{figure}[!tbhp] \includegraphics[width=8.3cm]{results_perp4.eps} \caption{\label{fig:results_perp4} (Color online) Perpendicular configuration ($\vct{E} \perp \vct{k}_\parallel$). (a) Experimentally obtained transmission diagram of KBDRs. A flat transmission band is observed around $0.28\,\U{THz}$ and theoretically fitted by the dotted line. The solid line corresponds to $\theta=20^\circ$. (b) Transmission spectrum for $\theta=20^\circ$ obtained by simulation and experiment. (c) Surface electric charge distribution obtained by simulation at a transmission minimum of $0.278\,\U{THz}$ for $\theta=20^\circ$ in the unit cell. } \end{figure} \section{Discussion} Our bar-disk resonators (BDRs) are obtained by inverting the metallic area and empty space of the slit-hole resonators (SHRs)\cite{liu_extraordinary_2009,zhu_electric_2010} composed of slits and holes engraved on an ultrathin metallic plate. The BDRs and SHRs are complementary structures related through Babinet's principle,\cite{jackson_classical_1999, born_principles_1999,falcone_babinet_2004} based on the electric/magnetic reciprocity of a vacuum. We denote a pair of an electric field $\vct{E}$ and a magnetic field $\vct{H}$ as $(\vct{E},\vct{H})$. Due to Babinet's principle, the transmittance of a complementary metallic screen with a complementary incident wave $(\vct{E}',\vct{H}')=(Z_0\vct{H},-\vct{E}/Z_0)$ is equal to the reflectance of the original metallic screen illuminated by an incident wave $(\vct{E},\vct{H})$, where $Z_0$ is the impedance of a vacuum. Thus, the transmission peaks in SHRs correspond to the transmission minima in BDRs. This fact shows the duality of the Lagrangians of SHRs and BDRs. Electromagnetic flat bands for all crystal directions have been reported for photonic crystals with square symmetry, theoretically\cite{altug_two-dimensional_2004} and experimentally.\cite{altug_experimental_2005} In this case, the flat bands are formed due to the good lateral confinement (high $Q$ factor) of the quadrupole modes, which lack preferential coupling directions, at defects of the photonic crystals. On the other hand, the flat band for KBDRs is not caused by highly confined modes, but by the topological nature of the kagom\'{e} lattice. The kagom\'{e} lattice prevents spoof plasmons from propagating despite the existence of strong coupling.
Thus, the physical origin of the flat band on KBDRs differs from that for photonic crystals.\cite{altug_two-dimensional_2004, altug_experimental_2005} A flat band for propagating modes has been theoretically predicted for square waveguide networks.\cite{feigenbaum_resonant_2010} Our system can be considered an experimental realization of the flat band for propagating modes. The flat band in the kagom\'{e} lattice comes from local interference effects. The global symmetry (i.e., periodicity of the lattice) is not necessarily required because local symmetries can support the localized modes. A resonance independent of the incident angle could therefore be expected for any metallic structure having localized modes with the same resonant frequency, even without periodicity. \section{conclusion} In conclusion, we studied theoretically and experimentally the electromagnetic flat band on a metallic kagom\'{e} lattice. Kagom\'{e}-type bar-disk resonators were proposed to realize the flat band. A dispersion relation composed of three bands was theoretically predicted for KBDRs. The highest band was flat for all wavevectors. Two bands formed by transmission minima depending on the polarization of the incident terahertz beams were observed experimentally. One of the bands corresponded to the flat band. Theoretical fitting showed good agreement for these modes. By simulation, we revealed that the flat band was caused by the topological nature of the kagom\'{e} lattice. The flat band can be applied to slow light, which is useful for the control of group velocity,\cite{Tamayama_2010,Tamayama_2012} highly sensitive sensing, and other applications. In the flat band, the effective mass of photons becomes very heavy, and their correlations play an important role. Multiphoton correlation effects in kagom\'{e} lattices are important in terms of fundamental physics and should be studied in the future. \begin{acknowledgments} The authors would like to thank A. Yao and T. Hikihara for technical assistance, and S. Endo for fruitful discussions. This research was supported in part by Grants-in-Aid for Scientific Research No.~22109004 and No.~22560041, the Global COE program ``Photonics and Electronics Science and Engineering'' at Kyoto University, the Program for Improvement of Research Environment for Young Researchers from the Special Coordination Funds (SCF) for Promoting Science and Technology commissioned by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan (T.O.), the Research Foundation for Opto-Science and Technology (T.O.), and research grants from the Murata Science Foundation (T.O. and T.N.). \end{acknowledgments} \nocite{*}
\section{Introduction} Recently the major LHC experiments reported the discovery of a boson with a mass of about $125$~GeV \cite{:2012gk,:2012gu}. Further experimental studies~\cite{Aad:2013xqa,Chatrchyan:2013lba} showed that this particle behaves very much like the Higgs boson of the Standard Model (SM)~\cite{SM}. Nevertheless, the question about the fundamental mechanism(s) of mass generation is still far from a final resolution. Moreover, there are a number of indirect indications that the conformal symmetry (CS) might be the proper feature of the true fundamental theory, while the SM is just an effective theory with a softly broken CS, see {\it e.g.}~\cite{Gorsky:2014una} and references therein. According to the general wisdom, all SM particles (except maybe neutrinos) acquire masses due to ``interaction'' with the Higgs boson vacuum expectation value. The latter emerges after the spontaneous breaking of the $O(4)$ symmetry in the scalar sector~\cite{Englert:1964et,Higgs:1964pj}. In the SM, one deals with the potential \begin{equation} \label{5a} V_{\rm Higgs}(\Phi)=\lambda(\Phi^\dagger\Phi)^2 + \mu^2\Phi^\dagger\Phi, \end{equation} where one component of the complex scalar doublet field $\Phi=\left(\begin{array}{c}\Phi^+\\ \Phi^0\end{array}\right)$ acquires a non-zero vacuum expectation value $\langle\Phi^0 \rangle = v/\sqrt{2}$ if $\mu^2<0$ (the vacuum stability condition $\lambda>0$ is assumed). Note that the tachyon-like mass term in the potential is crucial for this construction. In contrast to the $O(4)$ spontaneous symmetry breaking (SSB), it breaks the conformal symmetry explicitly, being the only {\em fundamental} dimensionful parameter in the SM. We recall that the explicit conformal symmetry breaking in the Higgs sector gives rise to the naturalness (or fine tuning) problem in the renormalization of the Higgs boson mass. That is certainly the most unpleasant feature of the SM. \section{The naturalness problem} Let us look at some details of the naturalness problem. In the one-loop approximation the Higgs boson mass gets huge corrections due to quadratically divergent amplitudes: \begin{eqnarray} \label{naturalness} M_H^2 = (M_H^0)^2 + \frac{3\Lambda^2}{8\pi^2v^2}\biggl[M_H^2 + 2M_W^2 + M_Z^2 - 4m_t^2 \biggr], \end{eqnarray} where $\Lambda$ is an ultraviolet cut-off parameter. Certainly it is unnatural to have a huge hierarchy between $M_H$ and $M_H^0$\footnote{We stress that $M_H^0$ here should be directly related to the tachyon-like mass parameter in the initial Lagrangian, where it appears as a {\it fundamental} scale.}. There are two general ways to solve the problem: \\ --- either to exploit some (super)symmetry to cancel out the huge terms, \\ --- or to introduce some new physics at a scale not very far from the electroweak (EW) one, {\it i.e.} keeping $\Lambda$ not large. One can find in the literature quite a lot of models for both options. Actually, since the (super)symmetry is not observed experimentally at the EW scale, the first way requires, besides the introduction of the symmetry itself, a mechanism of its breaking at some energy scale close to the EW one. On the other hand, the experimental data coming from modern accelerators and rare decay studies disfavor most scenarios of new physics with scales up to about 1~TeV and even higher.
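To get a feeling for the numbers (this is a rough illustration with representative values $M_H\simeq 125$~GeV, $M_W\simeq 80.4$~GeV, $M_Z\simeq 91.2$~GeV, $m_t\simeq 173$~GeV, $v\simeq 246$~GeV, and an assumed cut-off $\Lambda=10$~TeV, not a result of any fit), Eq.~(\ref{naturalness}) gives \begin{equation*} M_H^2 + 2M_W^2 + M_Z^2 - 4m_t^2 \approx -(288\ \mathrm{GeV})^2, \qquad \frac{3\Lambda^2}{8\pi^2 v^2} \approx 63, \end{equation*} so the one-loop shift of $M_H^2$ is of order $-(2.3\ \mathrm{TeV})^2$, and $(M_H^0)^2$ has to compensate it with a relative accuracy of about $0.3\%$ to reproduce the observed $M_H^2\approx(125\ \mathrm{GeV})^2$.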
Moreover, it was shown that the measured value of the Higgs boson mass makes the SM self-consistent up to very high energies of the order $10^{11}$~GeV~\cite{Bednyakov:2012rb} or even up to the Planck mass scale~\cite{Bezrukov:2012sa,Alekhin:2012py}. Direct and indirect experimental searches push the possible energy scale of new physical phenomena higher and higher. So the naturalness problem becomes nowadays more and more prominent. And the question of why the top quark mass, the Higgs boson mass, the vacuum expectation value $v$, and the electroweak scale itself are of the same order becomes more and more intriguing. The correction (\ref{naturalness}) comes from Feynman diagrams with boson single-propagator loops (tadpoles) and from the two-point function with two top-quark propagators. The latter actually is reduced to a top quark tadpole: \begin{eqnarray} && -N_c\int_{\Lambda_t}\frac{d^4k}{i\pi^2}\, \frac{\mathrm{Tr}(\hat{k}+m_t)((\hat{p}-\hat{k})+m_t)}{(k^2-m_t^2)((p-k)^2-m_t^2)} \to -4N_c\int_{\Lambda_t}\frac{d^4k}{i\pi^2}\,\frac{1}{k^2-m_t^2} + \mathcal{O}(m_t^2) \nonumber \\ && \quad = -4N_cA_0(m_t^2,\Lambda_t^2) + \mathcal{O}(m_t^2), \end{eqnarray} where $A_0$ is the standard Passarino-Veltman function. On the other hand, we have the following standard formal definition of the quark condensate: \begin{eqnarray} \langle \bar{q}\, q\rangle \equiv -N_c\int_{\Lambda_q}\frac{d^4k}{i\pi^2}\,\frac{\mathrm{Tr}(\hat{k}+m_q)}{k^2-m_q^2+i\varepsilon} \sim -4N_cm_q A_0(m_q^2,\Lambda_q^2). \end{eqnarray} So, the top quark contribution to Eq.~(\ref{naturalness}) is {\em formally} provided by its condensate value. Our conjecture is that this formal correspondence has a deep physical meaning and reveals itself not only here. We claim that the very existence of the quark condensate follows from the QFT rules. But its value of course depends on the details of the model. In particular, the value of the light quark condensate is rather well known from low-energy strong interactions, $\sqrt[3]{\langle \bar{q}\, q\rangle} \simeq - 250$~MeV. The possibility to extract this number from observables is provided by the presence of nontrivial non-perturbative interactions at the corresponding energy scale. On the other hand, nothing at all is known from the phenomenology concerning the value of the top quark condensate. That is just because the energy scale of the top quark mass allows only perturbative QCD interactions, which are not sensitive to the condensate\footnote{Due to the Furry theorem the coefficient before the fermion tadpole with a vector vertex is zero, while the tadpole itself (with a scalar vertex) can be non-zero.}. Note also that the scale of the light quark condensate is clearly provided by the $\Lambda_{\mathrm{QCD}}$ scale: $\langle \bar{q}\, q\rangle \sim - M_q\Lambda_{\mathrm{QCD}}^2$, where $M_q$ is the constituent quark mass, which in its turn also has the same scale $M_q \sim \Lambda_{\mathrm{QCD}}$. \section{Coleman-Weinberg effective potential in the SM} Let us now consider a simple model with one scalar field $\phi$ and one fermion field $f$. We demand the conformal symmetry for the model. The symmetry allows\footnote{For terms in the Lagrangian of a renormalizable QFT model {\em allowed} is practically equivalent to {\em must have}.} the existence of two types of interactions in this model: the $\phi^4$ self-interaction of the scalar and the Yukawa term. So we start with the classical potential \begin{eqnarray} \label{V_CW_cl} V_{\mathrm{cl}}=\lambda\phi_c^4/4!
+ y\phi_c\bar{f}_c f_c, \end{eqnarray} where we used the notation of Ref.~\cite{Coleman:1973jx}; in particular, the subscript ``c'' underlines that $\phi_c$ and $f_c$ are classical fields and they obey the conformal symmetry. The standard one-loop calculation of the effective potential gives two contributions: the one from scalar loops, and the one from fermion loops: \begin{eqnarray} \label{delta_V_sc} && \Delta V_{\mathrm{sc}}= \frac{1}{2}\int\frac{d^4k}{(2\pi)^4}\ln\left( 1 + \frac{\lambda\phi_c^2}{2k^2}\right) \to \frac{\lambda\Lambda^2}{256\pi^2}\phi_c^2 + \frac{\lambda^2\phi_c^4}{256\pi^2}\left(\ln\frac{\lambda\phi_c^2}{2\Lambda^2} -\frac{1}{2}\right), \\ \nonumber && \Delta V_{f} = -4N_c{\mathrm Tr}\int\frac{d^4k}{(2\pi)^4}\ln\left( 1 + \frac{y\phi_c(\hat{k}+m_f)}{k^2-m_f^2}\right) \to -4N_c\frac{ym_f\Lambda^2}{16\pi^2}\phi_c \\ \label{delta_V_f} && \qquad -4N_c\frac{y^2m_f^2\phi_c^2}{32\pi^2}\left(\ln\frac{ym_f\phi_c}{\Lambda^2} -\frac{1}{2}\right)+\ldots \end{eqnarray} Due to the condition of the classical conformal symmetry, we have to renormalize the first term on the right hand side of Eq.~(\ref{delta_V_sc}) to zero. On the other hand, it is clear that the effective potential possesses an infrared divergence at $\phi=0$. So, some energy scale $M$ should be introduced to renormalize the logarithmic term. As demonstrated by S.~Coleman and E.~Weinberg, that induces a spontaneous breaking of the conformal symmetry and leads to the appearance of a non-zero mass and a non-zero vacuum expectation value of $\phi$. Obviously, if we have $\langle\phi\rangle\neq 0$ in a model with Yukawa interactions, we automatically get a mass for the fermion. Then the second contribution~(\ref{delta_V_f}) to the effective potential emerges. Note that the conformal symmetry does not require us to drop ({\it i.e.} renormalize to 0) the first term on the right hand side there, since it is proportional to $m_f$, which is zero in the unbroken phase. This term is again nothing else but the tadpole contribution, {\it i.e.} the fermion condensate. In this way we have two phases: the classical one with $$ m_\phi=m_f\equiv 0,\qquad \langle \phi\rangle \equiv 0, \qquad \langle \bar{f}\, f\rangle \equiv 0$$ and the one with spontaneously broken CS: $$ m_\phi\sim m_f\sim M,\qquad \langle \phi\rangle \sim M, \qquad \langle \bar{f}\, f\rangle \sim -M^3, $$ where $M$ is the renormalization scale, and we assumed that the coupling constants are not extremely small, $\lambda\sim y\sim 1$. So, we clearly see that system~(\ref{V_CW_cl}) is unstable in the infrared region, which leads to the effect of dimensional transmutation. According to the logic of the original paper~\cite{Coleman:1973jx}, the scale comes into the model from outside the theory via the renormalization procedure. Another crucial point is the question about stability and perturbativity in the phase with the broken conformal symmetry. \section{Dimensional Transmutation in the SM} We suggested~\cite{Pervushin:2012dt} the following minimal modification of the Standard Model: let us drop the tachyon mass term from the Lagrangian. We keep the most intensive interactions of the Higgs field $h$, \begin{eqnarray} L_{\mathrm{int}} = - \frac{\lambda}{4}h^4 - \frac{y_t}{\sqrt{2}}h\;\bar{t}t, \end{eqnarray} where $h$ is related to the initial field $\Phi$ in the standard way. In this case we have a model with a classical conformal invariance.
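Before turning to the details, let us sketch schematically how a nonzero scale can emerge (this is a generic sketch of the Coleman--Weinberg argument, not a derivation specific to the potential above). Suppose the renormalized one-loop potential takes the generic form $V(h)\simeq A\,h^4\left[\ln\left(h^2/M^2\right)-c\right]$ with some constants $A>0$ and $c$. Then \begin{equation*} \frac{dV}{dh}=A\,h^3\left[4\ln\frac{h^2}{M^2}-4c+2\right]=0 \quad\Longrightarrow\quad \langle h\rangle = M\, e^{(c-1/2)/2}\neq 0, \end{equation*} i.e., the classically scale-free theory acquires a non-zero vacuum expectation value fixed entirely by the renormalization point $M$; that is dimensional transmutation at work.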
We claim that the apparent breaking of this symmetry can happen spontaneously because of the infrared instability at the quantum level. Obviously, in such a case, see {\it e.g.} Ref.~\cite{Bardeen:1995kv}, the softly broken classical symmetry will protect the Higgs boson mass from rapid running in the UV region. So let us look at a stable solution in the broken phase. The leading contribution to the Coleman-Weinberg effective potential comes from the top quark tadpole: \begin{eqnarray} V_{\rm eff}(h) \approx \frac{\lambda}{4}h^4 + \frac{y_t}{\sqrt{2}}\langle \bar t\,t\rangle h. \end{eqnarray} Naturally we choose the electroweak energy scale as the (re)normalization point. Then all dimensionful parameters in the effective potential are defined by this scale. The extremum condition for the potential, $dV_{\rm eff}/dh|_{h=v}=0$, yields the relation \begin{equation} \label{lambda} \lambda {v^3} = -\frac{y_t}{\sqrt{2}} \langle \bar t\, t\rangle. \end{equation} It follows from the fact that the Higgs field has a zero harmonic $v$ in the standard decomposition of the field $h$ over harmonics $h = v + H$, where $H$ represents excitations (non-zero harmonics) with the condition $\int d^3x H=0$. The Yukawa coupling of the top quark $y_t = 0.995(5)$ is known from the experimental value of the top quark mass $m_t=v y_t/\sqrt{2}\simeq 173.2(9)$~GeV~\cite{Agashe:2014kda}, and $v=(\sqrt{2}G_{\mathrm{Fermi}})^{-1/2}\approx 246.22$~GeV is related to the Fermi coupling constant derived from the muon life time measurements. So, the spontaneous symmetry breaking yields the potential minimum which results in the non-zero vacuum expectation value $v$ and the Higgs boson mass. In fact, the substitution $h = v + H$ gives \begin{equation} V_{\rm eff}(h) = V_{\rm eff}(v) + \frac{m_H^2}{2}H^2 + \lambda v H^3+\frac{\lambda}{4}H^4, \end{equation} which defines the scalar particle mass as \begin{equation} \label{mh} m_H^2= 3\lambda v^2. \end{equation} We stress that this relation is different from the one $(m_H^2=2\lambda v^2)$ which emerges in the SM with the standard Higgs potential~(\ref{5a}). With the aid of Eqs.~(\ref{lambda}) and (\ref{mh}), the squared scalar particle mass can be expressed in terms of the top quark condensate: \begin{equation}\label{h_mass} m^2_H = -\frac{3 y_t \langle \bar t\, t\rangle}{\sqrt{2}v}. \end{equation} To get $m_H=125.7$~GeV we need \begin{equation} \langle \bar t\, t\rangle \approx -(122\ {\mathrm{GeV}})^3. \end{equation} As discussed above, such a large value of the top quark condensate does not affect the low energy QCD phenomenology. Since heavy quark condensates do not contribute to observed QCD quantities ({\it e.g.} via sum rules), we do not have any experimental or theoretical limits on the top condensate value, see~\cite{Cvetic:1997eb} and references therein. Note that the energy scale of the top quark condensate appears to be the same as the general electroweak one. We believe that the scale of the light quark condensate is related to the scale of the conformal anomaly in QCD. At the same time those anomalous properties of the QCD vacuum lead to the constituent mass of a light quark being of the order of $300$~MeV. Concerning the top quark, some anomalous properties of the {\em relevant} vacuum give rise to the mass of this quark\footnote{Certainly, QCD effects both in the mass and in the condensate value of the top quark are small compared to the Yukawa ones.} and to the condensate being of the same energy scale.
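For completeness, the arithmetic behind the quoted estimate (using only the values of $m_H$, $y_t$, and $v$ given above) reads \begin{equation*} \langle \bar t\, t\rangle = -\frac{\sqrt{2}\,v\,m_H^2}{3 y_t} \approx -\frac{1.414\times 246.22\times (125.7)^2}{3\times 0.995}\ \mathrm{GeV}^3 \approx -1.84\times 10^{6}\ \mathrm{GeV}^3 \approx -(122\ \mathrm{GeV})^3, \end{equation*} so the condensate scale indeed comes out at the electroweak scale.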
One can note that even though we have dropped the scalar field mass term from the classical Lagrangian, it will re-appear after quantization and subsequent renormalization. In fact, such a counter-term in the Higgs sector is necessary. But as described in Ref.~\cite{Bardeen:1995kv}, the conformal symmetry of the classical Lagrangian will lead to just the proper quantity in the mass term, consistent with all other quantum effects. A similar situation takes place in QCD: the chiral symmetry at the quark level re-appears at the hadronic level even though its breaking is obvious~\cite{Witten:1983tw}. In conclusion, we suggest to apply the Coleman-Weinberg mechanism of dimensional transmutation to induce spontaneous conformal symmetry breaking in the Standard Model. This enables us to avoid the problem of the regularization of the divergent tadpole loop integrals by relating them to condensate values hopefully extracted from experimental observations. The top quark condensate can supersede the tachyon-like mass term in the Higgs potential. The suggested mechanism allows one to establish relations between condensates and masses, including the Higgs boson one. In a sense, we suggest a simple bootstrap between the Higgs and top fields (and their condensates). We underline that we consider the Higgs boson to be an elementary particle without introduction of any additional interaction beyond the SM, contrary to various technicolor models. After the spontaneous symmetry breaking in the tree level Lagrangian, the difference from the SM appears only in the value of the Higgs boson self-coupling $\lambda$. The latter can hardly be extracted from the LHC data, but it will certainly be measured at a future linear $e^+e^-$ collider. {\bf Acknowledgments} A.~Arbuzov is grateful to the Dynasty foundation for financial support.
\section{Introduction and the statements of the main results} \label{S:intro} Chemotaxis, the tendency of cells, bacteria, or organisms to orient the direction of their movements toward the increasing or decreasing concentration of a signaling chemical substance, plays a crucial role in a wide range of biological phenomena such as immune system response, embryo development, tumor growth, etc. (see \cite{ISM04}). Recent studies also describe macroscopic processes such as population dynamics or gravitational collapse, etc., in terms of chemotaxis (see \cite{DAL1991}). Because of its crucial role in the above mentioned processes and others, chemotaxis has attracted great attention in both the biological and mathematical communities. In the 1970s, Keller and Segel proposed in \cite{KS1970, KS71} a celebrated mathematical model (K-S model) made up of two parabolic equations to describe chemotaxis. Since their pioneering works, a large amount of work has been devoted to determining whether solutions exist globally or when blow-up occurs in the K-S model and its various variants. For a broad survey on the progress of various chemotaxis models and a rich selection of references, we refer the reader to the survey papers \cite{NBYTMW05, THKJP09, H03, Win2013}. But many fundamental problems are still not well understood. In particular, there is little study on chemotaxis systems with time and space dependent logistic sources. In reality, the environments of many living organisms are spatially and temporally heterogeneous. It is of both biological and mathematical interest to study chemotaxis models with certain time and space dependence. To the best of our knowledge, the first paper on chemotaxis systems with time and space dependent logistic sources is our paper \cite{ITBWS16}, where we considered a one species parabolic-elliptic chemotaxis model and proved, under the well known assumption of smallness of the chemotaxis effect compared to the logistic damping effect, the global existence and boundedness of nonnegative classical solutions, the existence of positive entire solutions, and, under some further conditions, the uniqueness and nonlinear stability of entire solutions. In the current paper, we consider the following two species parabolic-parabolic-elliptic chemotaxis system with heterogeneous Lotka-Volterra type competition terms, \begin{equation} \label{u-v-w-eq00} \begin{cases} u_t=d_1\Delta u-\chi_1\nabla\cdot (u \nabla w)+u\Big(a_0(t,x)-a_1(t,x)u-a_2(t,x)v\Big),\quad x\in \Omega\cr v_t=d_2\Delta v-\chi_2\nabla \cdot(v \nabla w)+v\Big(b_0(t,x)-b_1(t,x)u-b_2(t,x)v\Big),\quad x\in \Omega\cr 0=d_3\Delta w+k u+lv-\lambda w,\quad x\in \Omega \cr \frac{\p u}{\p n}=\frac{\p v}{\p n}=\frac{\p w}{\p n}=0,\quad x\in\p\Omega, \end{cases} \end{equation} where $\Omega \subset \mathbb{R}^n(n\geq 1)$ is a bounded domain with smooth boundary, $d_i$ ($i=1,2,3$) are positive constants, $\chi_1,\chi_2,k,l,\lambda$ are nonnegative constants, and $a_i(t,x)$ and $b_i(t,x)$ ($i=0,1,2$) are positive bounded smooth functions. Note that, in the absence of chemotaxis, that is, $\chi_1=\chi_2=0$, the dynamics of \eqref{u-v-w-eq00} is determined by the first two equations, that is, the following two species competition system, \begin{equation} \begin{cases} \label{u-v-eq00} u_t=d_1\Delta u+u\Big(a_0(t,x)-a_1(t,x)u-a_2(t,x)v\Big),\quad x\in \Omega\cr v_t=d_2\Delta v+v\Big(b_0(t,x)-b_1(t,x)u-b_2(t,x)v\Big),\quad x\in \Omega\cr \frac{\p u}{\p n}=\frac{\p v}{\p n}=0,\quad x\in\p\Omega.
\end{cases} \end{equation} Among the interesting dynamical issues in \eqref{u-v-w-eq00} and \eqref{u-v-eq00} are persistence, coexistence, and extinction. These dynamical issues for \eqref{u-v-eq00} have been extensively studied (see \cite{Ahm}, \cite{FuMa97}, \cite{HeSh}, \cite{HeSh02}, etc.). Several authors have studied these issues for system \eqref{u-v-w-eq00} with constant coefficients \cite{TBJLMM16, ITBRS17, NT13, STW13, TW12}. For example, in \cite{ITBRS17}, the authors considered a more general competitive-cooperative chemotaxis system with logistic sources involving nonlocal terms and proved both the phenomena of coexistence and of exclusion for parameters in some natural range. However, there is little study of these important issues for \eqref{u-v-w-eq00} with time and space dependent coefficients. The objective of this paper is to investigate the persistence, coexistence, and extinction dynamics of \eqref{u-v-w-eq00}. In particular, we identify the circumstances under which persistence or extinction occurs, and in the case that persistence occurs, we study the existence of coexistence states. In order to do so, we first study the global existence of classical solutions of \eqref{u-v-w-eq00} with any given nonnegative initial functions. Let $$ { C^+(\bar{\Omega})=\left\{u\in C(\bar{\Omega}) \,|\, u \geq 0 \right \}.} $$ Note that for any given $t_0\in\RR$ and $u_0,v_0 \in C^+(\bar{\Omega})$, $\eqref{u-v-eq00}$ has a unique bounded global classical solution $(u(x,t;t_0,u_0,v_0),v(x,t;t_0,u_0,v_0))$ satisfying that \begin{equation} \label{ic} (u(x,t_0;t_0,u_0,v_0),v(x,t_0;t_0,u_0,v_0))=(u_0(x),v_0(x)). \end{equation} However, it is not known whether for any given $t_0\in\RR$ and $u_0,v_0 \in { C^+(\bar{\Omega})}$, \eqref{u-v-w-eq00} has a unique bounded global classical solution $(u(x,t;t_0,u_0,v_0),v(x,t;t_0,u_0,v_0),w(x,t;t_0,u_0,v_0))$ satisfying \eqref{ic}. { It can be proved that for given $u_0,v_0 \in C^+(\bar{\Omega})$ and $t_0 \in \RR,$ \eqref{u-v-w-eq00} has a unique local classical solution $(u(x,t;t_0,u_0,v_0),v(x,t;t_0,u_0,v_0),w(x,t;t_0,u_0,v_0))$ satisfying \eqref{ic} (see Lemma \ref{lm-local-001}). If no confusion occurs, we denote $(u(x,t;t_0,u_0,v_0),v(x,t;t_0,u_0,v_0),w(x,t;t_0,u_0,v_0))$ by $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$.} To formulate our results on global existence of classical solutions of \eqref{u-v-w-eq00}, we introduce the following notations. For a given function $f_i(t,x)$ defined on $\mathbb{R} \times \bar \Omega$ we put \begin{equation*} \label{f-i-sup-inf-eq1} f_{i,\inf}=\inf _{ t \in\RR,x \in\bar{\Omega}}f_i(t,x),\quad f_{i,\sup}=\sup _{ t \in\RR,x \in\bar{\Omega}}f_i(t,x), \end{equation*} \begin{equation*} \label{f-i-sup-inf-eq2} f_{i,\inf}(t)=\inf _{x \in\bar{\Omega}}f_i(t,x),\quad f_{i,\sup}(t)=\sup _{x \in\bar{\Omega}}f_i(t,x), \end{equation*} unless specified otherwise. We also introduce the following assumptions. \smallskip \noindent {\bf (H1)} {\it $a_i(t,x)$, $b_i(t,x)$, $\chi_i$ and $d_3$, $k$ and $l$ satisfy \begin{equation} \label{global-existence-cond-eq00} a_{1,\inf}>\frac{ k\chi_1}{d_3},\quad a_{2,\inf}\geq \frac{l \chi_1}{d_3}, \quad b_{1,\inf}\geq \frac{k \chi_2}{d_3},\quad \text{and} \quad b_{2,\inf}>\frac{ l\chi_2}{d_3}.
\end{equation} } \noindent {\bf (H2)} {\it $a_i(t,x)$, $b_i(t,x)$, $\chi_i$ and $d_3$, $k$ and $l$ satisfy \begin{equation} \label{global-existence-cond-eq01} a_{1,\inf}>\frac{ k\chi_1}{d_3},\quad b_{2,\inf}>\frac{l \chi_2}{d_3},\quad {\rm and}\quad \big(a_{1,\inf}-\frac{ k\chi_1}{d_3}\big)\big( b_{2,\inf}-\frac{ l\chi_2}{d_3}\big)>\frac{k \chi_2}{d_3}\frac{l \chi_1}{d_3}. \end{equation} } \noindent {\bf (H3)} {\it $a_i(t,x)$, $b_i(t,x)$, $\chi_i$ and $d_3$, $k$ and $l$ satisfy \begin{equation} \label{global-existence-cond-eq02} a_{1,\inf}>\max\{0,\frac{\chi_1k(n-2)}{d_3n}\},\qquad a_{2,\inf}> \max\{0,\frac{\chi_1l(n-2)}{d_3n}\}, \end{equation} and \begin{equation} \label{global-existence-cond-eq03} b_{1,\inf}>\max\{0, \frac{\chi_2k(n-2)}{d_3n}\},\qquad b_{2,\inf}>\max\{0,\frac{\chi_2l(n-2)}{d_3n}\}. \end{equation} } Our results on global existence and boundedness of nonnegative classical solutions of \eqref{u-v-w-eq00} are stated in the following theorem. \begin{theorem}{ (Global Existence)} \label{thm-global-000} \begin{itemize} \item[(1)] Assume that { (H1)} holds. Then for any $t_0\in\RR$ and $u_0,v_0 \in { C^+(\bar{\Omega})}$, $\eqref{u-v-w-eq00}{ +\eqref{ic}}$ has a unique bounded global classical solution $(u(x,t;t_0),v(x,t;t_0)$,$w(x,t;t_0))$ which satisfies that \begin{equation}\label{global-existence-eq00} \lim_{t\to {t_0}^+ }\big(\|u(\cdot,t;t_0)-u_0\|_{ \infty}+\|v(\cdot,t;t_0)-v_0\|_{ \infty}\big)=0. \end{equation} Moreover, for any $\epsilon>0$, there is { $T(u_0,v_0,\epsilon)\ge 0$} such that \[0\leq u(x,t;t_0) \leq \bar A_1+\epsilon \quad {\rm and}\quad 0\leq v(x,t;t_0)\leq \bar A_2+\epsilon \] for all $t\ge t_0+T(u_0,v_0,\epsilon)$, where \begin{equation} \label{I1-I2-overbar} \bar A_1= \frac{a_{0,\sup}}{a_{1,\inf}-\frac{k\chi_1}{d_3}},\quad \bar A_2=\frac{b_{0,\sup}}{b_{2,\inf}-\frac{l\chi_2}{d_3}}. \end{equation} If $u_0\le \bar A_1+\epsilon$, $v_0\le \bar A_2+\epsilon$, then $T(u_0,v_0,\epsilon)$ can be chosen to be zero. \item[(2)] Assume that { (H2)} holds. Then for any $t_0\in\RR$ and $u_0,v_0 \in { C^+(\bar{\Omega})}$, $\eqref{u-v-w-eq00}{ +\eqref{ic}}$ has a unique bounded global classical solution $(u(x,t;t_0),v(x,t;t_0)$,$w(x,t;t_0))$ which satisfies \eqref{global-existence-eq00}. Moreover, for any $\epsilon>0$, there is { $T(u_0,v_0,\epsilon)>0$} such that $$0\leq u(x,t;t_0) \leq \bar B_1+\epsilon \quad {\rm and}\quad 0\leq v(x,t;t_0)\leq \bar B_2+\epsilon $$ for all $t\ge t_0+T(u_0,v_0,\epsilon)$, where \begin{equation} \label{A1-overbar-0}\bar {B_1}=\frac{a_{0,\sup}(b_{2,\inf}-\frac{l\chi_2}{d_3})+\frac{l\chi_1}{d_3}b_{0,\sup}}{(a_{1,\inf}-\frac{k\chi_1}{d_3})(b_{2,\inf}-\frac{l\chi_2}{d_3}) -\frac{lk\chi_1\chi_2}{d_3^2}} \end{equation} and \begin{equation} \label{A2-overbar-0} \bar {B_2}=\frac{b_{0,\sup}(a_{1,\inf}-\frac{k\chi_1}{d_3})+\frac{k\chi_2}{d_3}a_{0,\sup}}{(a_{1,\inf}-\frac{k\chi_1}{d_3})(b_{2,\inf} -\frac{l\chi_2}{d_3})-\frac{lk\chi_1\chi_2}{d_3^2}}. \end{equation} If $u_0\le \bar B_1+\epsilon$, $v_0\le \bar B_2+\epsilon$, then $T(u_0,v_0,\epsilon)$ can be chosen to be zero. \item[(3)] Assume { (H3)} holds. Then for any $t_0\in\RR$ and $u_0,v_0 \in { C^+(\bar{\Omega})}$, system $\eqref{u-v-w-eq00}{ +\eqref{ic}}$ has a unique bounded global classical solution $(u(x,t;t_0)$, $v(x,t;t_0)$, $w(x,t;t_0))$ which satisfies \eqref{global-existence-eq00}.
Moreover, \[0\leq\int_{\Omega}u(x,t;t_0)dx \leq \max\left\{\int_{\Omega}u_0,\frac{a_{0,\sup}}{a_{1,\inf}} \right\} \] and \[0\leq\int_{\Omega}v(x,t;t_0)dx\leq \max\left\{\int_{\Omega}v_0,\frac{b_{0,\sup}}{ b_{2,\inf}} \right\} \] for all $t\ge t_0$. \end{itemize} \end{theorem} \begin{remark} \label{rk-1} \begin{itemize} \item[(1)] Under the assumption (H1), $(\bar A_1,\bar A_2)$ is the unique positive equilibrium of the following decoupled system, $$ \begin{cases} u_t=u\big(a_{0,\sup}-(a_{1,\inf}-\frac{k\chi_1}{d_3}) u\big)\cr v_t=v\big(b_{0,\sup}-(b_{2,\inf}-\frac{l\chi_2}{d_3})v\big). \end{cases} $$ Under the assumption (H2), $(\bar B_1,\bar B_2)$ is the unique positive equilibrium of the following cooperative system, \begin{equation*} \begin{cases} u_t=u\big( a_{0,\sup}-(a_{1,\inf}-k\frac{\chi_1}{d_3}) u+l\frac{\chi_1}{d_3} v\big)\\ v_t=v\big( b_{0,\sup}-(b_{2,\inf}-l\frac{\chi_2}{d_3})v+k\frac{\chi_2}{d_3}u\big). \end{cases} \end{equation*} \item[(2)] Conditions {(H1)}, {(H2)}, and {(H3)} are natural in the sense that when no chemotaxis is present, i.e., $\chi_1=\chi_2=0,$ conditions {(H1)} and {(H2)} become the trivial conditions $a_{1,\inf}>0$ and $b_{2,\inf}>0$, while {(H3)} becomes $a_{1,\inf}>0,$ $a_{2,\inf}>0,$ $b_{1,\inf}>0,$ and $b_{2,\inf}>0.$ \item[(3)] By {(H3)}, finite time blow up cannot happen when $n=1$ or $n=2$. In general, it remains open whether for any $t_0 \in\RR$ and $u_0,v_0\in C^+(\bar\Omega)$, the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ satisfying \eqref{ic} exists for all $t\ge t_0$. \item[(4)] It is proved in \cite{ITBWS16} that, under the assumption (H1), (H2), or (H3), there are semitrivial entire solutions $(u^*(x,t),0,w_u^*(x,t))$ and $(0,v^*(x,t),w_v^*(x,t))$ of \eqref{u-v-w-eq00} with $$ \inf_{t\in\RR,x\in\bar\Omega} u^*(x,t)>0,\quad \inf_{t\in\RR,x\in\bar\Omega}v^*(x,t)>0. $$ In the absence of chemotaxis (i.e. $\chi_1=\chi_2=0$), such semitrivial solutions are unique. \item[(5)] The condition of global existence and boundedness of classical solutions in \cite[Theorem 1.1(1)]{ITBRS17} implies {(H2)}. Therefore Theorem \ref{thm-global-000}(2) is an improvement of the global existence result in \cite[Theorem 1.1(1)]{ITBRS17}. { Notice also that when $d_3=l=1,$ $a_1=\mu_1,$ $b_2=\mu_2,$ (H2) coincides with the boundedness condition in \cite[Lemma 2.2]{STW13}. Thus {(H2)} is a generalization of the global existence condition in \cite{STW13}. } \end{itemize} \end{remark} We now investigate the persistence, coexistence, and extinction dynamics of \eqref{u-v-w-eq00} under the assumption (H1), (H2), or (H3). A solution $(u(x,t),v(x,t),w(x,t))$ of \eqref{u-v-w-eq00} defined for all $t\in\RR$ is called an {\it entire solution}. A {\it coexistence state} of \eqref{u-v-w-eq00} is an entire positive solution $(u^{**}(x,t),v^{**}(x,t),w^{**}(x,t))$ with \begin{equation*} \label{coexistence-eq} \inf_{t\in\RR,x\in\bar\Omega} u^{**}(x,t)>0,\quad \inf_{t\in\RR,x\in\bar\Omega} v^{**}(x,t)>0. \end{equation*} We say that {\it persistence} occurs in \eqref{u-v-w-eq00} if there is $\eta>0$ such that for any $u_0,v_0\in { C^0(\bar\Omega)}$ with $u_0> 0$ and $v_0> 0$, there is $\tau(u_0,v_0)>0$ such that \begin{equation*} \label{persistence-eq} u(x,t;t_0,u_0,v_0)\ge \eta,\quad v(x,t;t_0,u_0,v_0)\ge \eta\quad \forall\,\, x\in\bar\Omega, \,\, t\ge t_0+\tau(u_0,v_0),\,\, t_0\in\RR.
\end{equation*} We say that {\it extinction of one species}, {say, species $u$}, occurs in \eqref{u-v-w-eq00} if for any $t_0\in\RR$ and { $u_0,v_0\in C^+(\bar\Omega)$ with $u_0\not \equiv 0$ and $v_0\not \equiv 0$, there holds \begin{equation*} \label{extinction-eq1} \lim_{t\to\infty} \|u(\cdot,t+t_0;t_0,u_0,v_0)\|_{\infty}=0, \,\, {\rm and}\,\, \liminf_{t\to\infty}\|v(\cdot,t;t_0,u_0,v_0)\|_\infty>0. \end{equation*} } To state our results on persistence and coexistence, we further introduce the following assumptions. \medskip \noindent {\bf (H4)} {\it $a_i(t,x)$, $b_i(t,x)$, $\chi_i$ and $d_3$, $k$ and $l$ satisfy (H1) and \begin{equation} \label{invariant-rectangle-cond-eq} a_{0,\inf}>a_{2,\sup}\bar A_2\,\,\,\,\,\text{and} \,\, \,\,\, b_{0,\inf}>b_{1,\sup}\bar A_1. \end{equation}} \noindent {\bf (H5)} {\it $a_i(t,x)$, $b_i(t,x)$, $\chi_i$ and $d_3$, $k$ and $l$ satisfy (H2) and \begin{equation} \label{invariant-rectangle-cond-eq-H5} a_{0,\inf}>\big(a_{2,\sup}-\frac{\chi_1 l}{d_3}\big)_+\bar B_2+\frac{\chi_1 l}{d_3}\bar B_2\,\,\, \,\,\text{and} \,\,\,\,\, b_{0,\inf}> \big(b_{1,\sup} -\frac{\chi_2 k}{d_3}\big)_+\bar B_1+\frac{\chi_2 k}{d_3}\bar B_1, \end{equation} where $(\cdots)_+$ represents the positive part of the expression inside the brackets. } \medskip We have the following theorem on the persistence in \eqref{u-v-w-eq00}. \begin{theorem} [Persistence] \label{thm-entire-001} \begin{itemize} \item[(1)] Assume (H4). Then there are $\underbar A_1>0$ and $\underbar A_2>0$ such that for any $\epsilon>0$ and $u_0,v_0\in { C^+(\bar \Omega)}$ with $u_0,v_0\not \equiv 0$, there exists $t_{\epsilon,u_0,v_0}$ such that \begin{equation}\label{attracting-set-eq00} \underbar A_1 \le u(x,t;t_0,u_0,v_0) \le \bar{A_1}+\epsilon,\,\,\,\underbar A_2 \le v(x,t;t_0,u_0,v_0) \le \bar{A_2}+\epsilon \end{equation} for all $x\in\bar\Omega$, $t\ge t_0+t_{\epsilon,u_0,v_0}$, and $t_0\in\RR$. \item[(2)] Assume (H5). Then there are $\underbar B_1>0$ and $\underbar B_2>0$ such that for any $\epsilon>0$ and $u_0,v_0\in { C^+(\bar \Omega)}$ with $u_0,v_0\not \equiv 0$, there exists $t_{\epsilon,u_0,v_0}$ such that \eqref{attracting-set-eq00} holds with $\underbar A_1$, $\bar A_1$, $\underbar A_2$, and $\bar A_2$ being replaced by $\underbar B_1$, $\bar B_1$, $\underbar B_2$, and $\bar B_2$, respectively. \end{itemize} \end{theorem} \begin{remark} \label{rk-2} \begin{itemize} \item[(1)] It should be pointed out that in \cite{TBJLMM16}, \cite{STW13}, \cite{TW12}, global asymptotic stability and uniqueness of coexistence states are obtained for \eqref{u-v-w-eq00} when the coefficients are constants and satisfy a certain weak competition condition (see also \cite{NT13} when the system involves nonlocal terms). In such cases, the persistence follows from the asymptotic stability and uniqueness of coexistence states. The persistence in two species chemotaxis systems without assuming the asymptotic stability of coexistence states is studied for the first time, even when the coefficients are constants. It should also be pointed out that the authors of \cite{TaoWin} studied the persistence of a parabolic-parabolic chemotaxis system with logistic source. The persistence in \eqref{u-v-w-eq00} implies the persistence of mass, that is, if persistence occurs in \eqref{u-v-w-eq00}, then for any $u_0,v_0 \in C(\bar\Omega)$ with $u_0> 0$ and $v_0> 0$, there is $m(u_0,v_0)>0$ such that \begin{equation*} \int_\Omega u(x,t;t_0,u_0,v_0)dx\ge m(u_0,v_0),\quad \int_\Omega v(x,t;t_0,u_0,v_0)dx\ge m(u_0,v_0)\quad \forall\,\, t\ge t_0,\,\, t_0\in\RR.
\end{equation*} We will study persistence in fully parabolic two species competition systems with chemotaxis elsewhere. \item[(2)] It is well known that, in the absence of chemotaxis (i.e., $\chi_1=\chi_2=0$), the instability of the unique semitrivial solutions $(u^*,0)$ and $(0,v^*)$ of \eqref{u-v-eq00} implies that the persistence occurs in \eqref{u-v-eq00}. Note that both (H4) and (H5) imply \begin{equation} \label{stability-cond-1-eq1} a_{0,\inf} b_{2,\inf}>a_{2,\sup} b_{0,\sup},\quad b_{0,\inf} a_{1,\inf}>b_{1,\sup} a_{0,\sup}, \end{equation} which implies that the semitrivial solutions $(u^*,0)$ and $(0,v^*)$ of \eqref{u-v-eq00} are unstable. When $\chi_1=\chi_2=0$, the conditions {(H4)} and {(H5)} coincide and become \eqref{stability-cond-1-eq1}, and $$\bar A_1=\bar B_1=\frac{a_{0,\sup}}{a_{1,\inf}},\quad \bar A_2=\bar B_2=\frac{b_{0,\sup}}{b_{2,\inf}}. $$ Hence { Theorem} \ref{thm-entire-001} recovers the uniform persistence result of \eqref{u-v-eq00} in \cite[Theorem E(1)]{HeSh02}. \item[(3)] The conditions {(H4)} and {(H5)} are sufficient conditions for semi-trivial positive entire solutions of \eqref{u-v-w-eq00} to be unstable. In fact, assume (H4) or (H5) and suppose that $(u^*,0,w_u^*)$ is a semi-trivial solution of \eqref{u-v-w-eq00}. Then we have the following linearized equation of \eqref{u-v-w-eq00} at $(u^*,0,w_u^*)$, \begin{equation} \label{u-v-w-lin-eq00} \begin{cases} u_t=d_1\Delta u-\chi_1\nabla\cdot (u^* \nabla w)-\chi_1\nabla\cdot (u \nabla w_u^*) \cr \qquad\quad +\Big(a_0(t,x)-2a_1(t,x)u^*\Big)u-a_2(t,x)u^*v,\quad x\in \Omega\cr v_t=d_2\Delta v-\chi_2\nabla\cdot (v \nabla w_u^*)+\Big(b_0(t,x)-b_1(t,x)u^*\Big)v,\quad x\in \Omega\cr 0=d_3\Delta w+k u+lv-\lambda w,\quad x\in \Omega \cr \frac{\p u}{\p n}=\frac{\p v}{\p n}=\frac{\p w}{\p n}=0,\quad x\in\p\Omega. \end{cases} \end{equation} Note that the second equation in \eqref{u-v-w-lin-eq00} is independent of $u$ and $w$. Assume (H4). Then $$ u^*\le \bar A_1,\quad w_u^*\le \frac{k}{\lambda} \bar A_1 $$ and \begin{align*} v_t&=d_2\Delta v-\chi_2\nabla v\cdot \nabla w_u^*-\chi_2 v \Delta w_u^* +\Big(b_0(t,x)-b_1(t,x)u^*\Big)v\\ &=d_2\Delta v-\chi_2\nabla v\cdot \nabla w_u^* +\Big(b_0(t,x)-(b_1(t,x)-\frac{\chi_2 k}{d_3})u^*-\chi_2 \frac{\lambda w_u^*}{d_3}\Big)v\\ &\ge d_2\Delta v-\chi_2\nabla v\cdot \nabla w_u^* +\Big(b_{0,\inf}-(b_{1,\sup}-\frac{\chi_2 k}{d_3})\bar A_1-\chi_2 \frac{\lambda \frac{k}{\lambda}\bar A_1}{d_3}\Big)v\\ &= d_2\Delta v-\chi_2\nabla v\cdot \nabla w_u^* +\Big(b_{0,\inf}-b_{1,\sup}\bar A_1\Big)v. \end{align*} This together with $b_{0,\inf}>b_{1,\sup}\bar A_1$ implies that $(u^*,0,w_u^*)$ is linearly unstable. Other cases can be proved similarly. The proof that (H4) or (H5) implies persistence in \eqref{u-v-w-eq00} is very nontrivial. To prove Theorem \ref{thm-entire-001}, we first prove five nontrivial lemmas (i.e. Lemmas \ref{persistence-lm1} to \ref{persistence-lm5}), some of which also play an important role in the study of coexistence. \item[(4)] Consider the following one species parabolic-elliptic chemotaxis model, \begin{equation} \begin{cases} \label{u-w-eq00} u_t=d_1\Delta u-\chi_1\nabla\cdot (u \nabla w)+u\Big(a_0(t,x)-a_1(t,x)u\Big),\quad x\in \Omega\cr 0=d_3\Delta w+k u-\lambda w,\quad x\in \Omega \cr \frac{\p u}{\p n}=\frac{\p w}{\p n}=0,\quad x\in\p\Omega \end{cases} \end{equation} and assume that \begin{equation} \label{u-w-cond} a_{1,\inf}>\frac{k \chi_1}{d_3}.
\end{equation} By the arguments of Theorem \ref{thm-entire-001}, we have the following persistence result for \eqref{u-w-eq00}, which is new. There is $\underbar A_1$ such that for any $\epsilon>0,$ $t_0\in\RR,$ $u_0\in C^0(\bar \Omega)$ with $u_0\ge 0,$ and $u_0\not \equiv 0$, there exists $ t_{\epsilon,u_0}$ such that \begin{equation}\label{attracting-set-for-u-w-eq00} \underbar A_1 \le u(x,t;t_0,u_0) \le \bar{A_1}+\epsilon \end{equation} for all $x\in\bar\Omega$ and $t\ge t_0+ t_{\epsilon,u_0}$, where $(u(x,t;t_0,u_0),w(x,t;t_0,u_0))$ is the global solution of \eqref{u-w-eq00} with $u(x,t_0;t_0,u_0)=u_0(x)$ (see Corollary \ref{u-w-cor}). \end{itemize} \end{remark} The next theorem is about the existence of coexistence states of \eqref{u-v-w-eq00}. \begin{theorem} [Coexistence] \label{thm-entire-002} \begin{itemize} \item[(1)] Assume (H4). Then there is a coexistence state $(u^{**}(x,t)$, $v^{**}(x,t)$, $ w^{**}(x,t))$ of \eqref{u-v-w-eq00}. Moreover, the following holds. \begin{itemize} \item[(i)] If there is $T>0$ such that $a_i(t+T,x)=a_i(t,x),$ $b_i(t+T,x)=b_i(t,x)$ for $i=0,1,2$, then \eqref{u-v-w-eq00} has a $T$-periodic coexistence state $(u^{**}(x,t),v^{**}(x,t), w^{**}(x,t))$, that is, $$(u^{**}(x,t+T), v^{**}(x,t+T), w^{**}(x,t+T))=(u^{**}(x,t), v^{**}(x,t), w^{**}(x,t)). $$ \item[(ii)] If $a_i(t,x)\equiv a_i(x),$ $b_i(t,x)\equiv b_i(x)$ for $i=0,1,2$, then \eqref{u-v-w-eq00} has a steady state coexistence state $$(u^{**}(x,t), v^{**}(x,t),w^{**}(x,t))\equiv (u^{**}(x), v^{**}(x),w^{**}(x)). $$ \item[(iii)] If $a_i(t,x)\equiv a_i(t),$ $b_i(t,x)\equiv b_i(t)$ for $i=0,1,2$, then \eqref{u-v-w-eq00} has a spatially homogeneous coexistence state $$(u^{**}(x,t),v^{**}(x,t), w^{**}(x,t))\equiv (u^{**}(t),v^{**}(t), w^{**}(t))$$ with $\lambda w^{**}(t)=ku^{**}(t)+lv^{**}(t)$, and if $a_i(t),$ $b_i(t)$ $(i=0,1,2)$ are periodic or almost periodic, so is $(u^{**}(t),v^{**}(t),w^{**}(t))$. \end{itemize} \item[(2)] Assume (H5). Then there is a coexistence state $(u^{**}(x,t),v^{**}(x,t), w^{**}(x,t))$ of \eqref{u-v-w-eq00} { which satisfies (i)-(iii) of (1).} \end{itemize} \end{theorem} \begin{remark} \begin{itemize} \item[(1)] By Theorem \ref{thm-entire-001}, (H4) or (H5) implies the persistence in \eqref{u-v-w-eq00}. It is known that persistence in \eqref{u-v-eq00} implies the existence of a coexistence state. In the spatially homogeneous case, persistence in \eqref{u-v-w-eq00} also implies the existence of a coexistence state by the fact that the solutions of the following system of ODEs are solutions of \eqref{u-v-w-eq00}, \begin{equation} \begin{cases} \label{u-v-w-ode} u_t=u\big(a_0(t)-a_1(t)u-a_2(t)v\big)\cr v_t=v\big(b_0(t)-b_1(t)u-b_2(t)v\big)\cr 0=k u+lv-\lambda w. \end{cases} \end{equation} In general, it is very nontrivial to prove that persistence in \eqref{u-v-w-eq00} implies the existence of a coexistence state. \item[(2)] As mentioned in Remark 1.2(1), when $\chi_1=\chi_2=0$, the conditions {(H4)} and {(H5)} coincide and become \eqref{stability-cond-1-eq1}. Hence Theorem \ref{thm-entire-002} recovers the coexistence result for \eqref{u-v-eq00} in \cite[Theorem E(1)]{HeSh02}. \item[(3)] {Under some additional conditions, the stability and uniqueness of coexistence states are proved in our recent paper \cite{ITBWS17b}. The proofs are very nontrivial; to control the length of this paper, we hence did not study the stability and uniqueness of coexistence states here. } \end{itemize} \end{remark} Our last theorem is about the extinction of { the species $u$}.
\begin{theorem} \label{thm-extinction} Assume that { (H1)} or { (H2)} holds, and suppose furthermore that \begin{equation} \label{Asymp-exclusion-eq-00} b_{2,\inf} >2\frac{\chi_2 }{d_3}l, \quad a_{2,\inf}\geq\frac{\chi_1 }{d_3}l, \end{equation} \begin{align} \label{Asymp-exclusion-eq-03} a_{2,\inf}\big(b_{0,\inf}(b_{2,\inf}-l\frac{\chi_2}{d_3})-b_{0,\sup}\frac{\chi_2}{d_3}l\big) \geq a_{0,\sup}\big((b_{2,\inf}-l\frac{\chi_2}{d_3})(b_{2,\sup}-l\frac{\chi_2}{d_3})-(l\frac{\chi_2}{d_3})^2\big), \end{align} and \begin{align}\label{Asymp-exclusion-eq-04} &\big(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}}\big)\Big(b_{0,\inf}(b_{2,\inf}-\frac{l\chi_{2}}{d_{3}})-b_{0,\sup}\frac{l\chi_2}{d_3}\Big)\nonumber\\ &>\Big[{ \Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)}(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})+\frac{l\chi_2}{d_3}{ \Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-}\Big]a_{0,\sup}. \end{align} Then for every $t_0 \in \mathbb{R}$ and $u_{0},v_0\in { C^{+}(\overline{\Omega})}$ with $\|v_0\|_\infty>0, $ the unique bounded and globally defined classical solution $(u(\cdot,\cdot;t_0),v(\cdot,\cdot;t_0)$, $w(\cdot,\cdot;t_0))$ of \eqref{u-v-w-eq00}{$+\eqref{ic}$} satisfies \begin{equation}\label{MainAsym-eq-001} \lim_{t\to\infty}\left\|u(\cdot,t+t_0;t_0)\right\|_\infty=0, \end{equation} \begin{equation}\label{MainAsym-eq-002} \alpha\leq \liminf_{t \to \infty}(\min_{x \in \bar{\Omega}}v(x,t;t_0))\leq\limsup_{t \to \infty}(\max_{x \in \bar{\Omega}}v(x,t;t_0))\leq \beta, \end{equation} \begin{equation}\label{MainAsym-eq-003} l\alpha\leq\lambda \liminf_{t \to \infty}(\min_{x \in \bar{\Omega}}w(x,t;t_0))\leq \lambda\limsup_{t \to \infty}(\max_{x \in \bar{\Omega}}w(x,t;t_0))\leq l\beta, \end{equation} where $$\beta=\frac{b_{0,\sup}(b_{2,\sup}-l\frac{\chi_2}{d_3})-l\frac{\chi_2}{d_3}b_{0,\inf}}{(b_{2,\inf}-l\frac{\chi_2}{d_3})(b_{2,\sup}-l\frac{\chi_2}{d_3})-(l\frac{\chi_2}{d_3})^2},$$ and $$\alpha=\frac{b_{0,\inf}-l\frac{\chi_2}{d_3}\beta}{b_{2,\sup}-l\frac{\chi_2}{d_3}}>0.$$ Furthermore, if there is a unique entire positive solution $(v^*(x,t;\tilde b_0,\tilde b_2),w^*(x,t;\tilde b_0,\tilde b_2))$ of \begin{equation} \label{v-w-eq00} \begin{cases} v_t=d_2\Delta v-\chi_2\nabla \cdot(v \nabla w)+v\Big(\tilde b_0(t,x)-\tilde b_2(t,x)v\Big),\quad x\in \Omega\cr 0=d_3\Delta w+lv-\lambda w,\quad x\in \Omega \cr \frac{\p v}{\p n}=\frac{\p w}{\p n}=0,\quad x\in\p\Omega \end{cases} \end{equation} for any $(\tilde b_0,\tilde b_2)\in H(b_0,b_2)$, where \begin{align*} H(b_0,b_2)=&\{(c_0(\cdot,\cdot),c_2(\cdot,\cdot))\,|\, \exists \, t_n\to\infty \,\, {\rm such\,\, that}\\ &\lim_{n\to\infty}(b_0(t+t_n,x),b_2(t+t_n,x))=(c_0(t,x),c_2(t,x))\,\, {\rm locally \,\, uniformly\,\, in}\,\, (t,x)\in\RR\times\RR^N\}, \end{align*} then \begin{equation} \label{MainAsym-eq-004} \lim_{t\to\infty} \| v(\cdot,t+t_0;t_0)-v^*(\cdot,t+t_0;b_0,b_2)\|_\infty=0. \end{equation} \end{theorem} \begin{remark} \begin{itemize} \item[(1)] \eqref{Asymp-exclusion-eq-03} and \eqref{Asymp-exclusion-eq-04} imply \begin{equation} \label{extinction-cond-eq1} \frac{a_{0,\sup}}{b_{0,\inf}}\le \frac{a_{2,\inf}}{b_{2,\sup}},\quad \frac{a_{0,\sup}}{b_{0,\inf}}< \frac{a_{1,\inf}}{b_{1,\sup}}.
\end{equation} {To see this, we first note that \eqref{Asymp-exclusion-eq-04} implies that \begin{align*} \big(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}}\big)b_{0,\inf}(b_{2,\inf}-\frac{l\chi_{2}}{d_{3}}) &>\Big[{ \Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)}(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})\Big]a_{0,\sup}\nonumber\\ &\geq b_{1,\sup}(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})a_{0,\sup}. \end{align*} Thus, since $b_{2,\inf}-\frac{\chi_{2}l}{d_{3}}>0,$ we get $$\big(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}}\big)b_{0,\inf}>b_{1,\sup}a_{0,\sup},$$ which implies the second inequality in \eqref{extinction-cond-eq1}. Second, note that \eqref{Asymp-exclusion-eq-03} implies that \begin{align*} a_{2,\inf}\big(b_{0,\inf}(b_{2,\inf}-l\frac{\chi_2}{d_3})-b_{0,\sup}\frac{\chi_2}{d_3}l\big)&\geq a_{0,\sup}\big((b_{2,\inf}-l\frac{\chi_2}{d_3})b_{2,\sup}-l\frac{\chi_2}{d_3}b_{2,\inf}\big)\nonumber\\ &\geq a_{0,\sup}(b_{2,\inf}-2l\frac{\chi_2}{d_3})b_{2,\sup}. \end{align*} This together with the fact that $a_{2,\inf}\big(b_{0,\inf}(b_{2,\inf}-l\frac{\chi_2}{d_3})-b_{0,\sup}\frac{\chi_2}{d_3}l\big)\leq a_{2,\inf}b_{0,\inf}(b_{2,\inf}-2l\frac{\chi_2}{d_3}) $ implies that $$ a_{2,\inf}b_{0,\inf}(b_{2,\inf}-2l\frac{\chi_2}{d_3}) \geq a_{0,\sup}(b_{2,\inf}-2l\frac{\chi_2}{d_3})b_{2,\sup}, $$ which, combined with $b_{2,\inf}-2l\frac{\chi_2}{d_3}>0$, implies the first inequality in \eqref{extinction-cond-eq1}.} \item[(2)] When $\chi_1=\chi_2=0$, \eqref{Asymp-exclusion-eq-00} becomes $$ b_{2,\inf}>0,\quad a_{2,\inf}>0,\quad a_{1,\inf}>0; $$ \eqref{Asymp-exclusion-eq-03} and \eqref{Asymp-exclusion-eq-04} become $$ \frac{a_{0,\sup}}{b_{0,\inf}}\le \frac{a_{2,\inf}}{b_{2,\sup}}\quad {\rm and}\quad \frac{a_{0,\sup}}{b_{0,\inf}}< \frac{a_{1,\inf}}{b_{1,\sup}}, $$ respectively. Therefore, the extinction results for \eqref{u-v-eq00} in \cite{HeSh02} are recovered. \item[(3)] When the coefficients are constants, Theorem \ref{thm-extinction} coincides with the exclusion theorem in \cite[Theorem 1.4]{ITBRS17}. Thus Theorem \ref{thm-extinction} gives a natural extension of the phenomenon of exclusion to heterogeneous media. \item[(4)] The reader is referred to \cite{ITBWS16} for the existence and uniqueness of positive entire solutions of \eqref{v-w-eq00}. \end{itemize} \end{remark} {The results established in this paper provide various conditions under which persistence or extinction occurs. All the conditions depend on the chemotaxis sensitivity coefficients $\chi_1$ and $\chi_2$, which reflect some effects of chemotaxis on the persistence and extinction of the system. However, it remains open whether chemotaxis makes it easier for the species to persist or to go extinct. This is a very interesting issue and we plan to study it elsewhere. } The rest of the paper is organized as follows. In section 2, we study the global existence of classical solutions and prove Theorem \ref{thm-global-000}. Section 3 is devoted to the study of the persistence and boundedness of classical solutions. It is here that we present the proof of Theorem \ref{thm-entire-001}. In section 4, we study the existence of coexistence states and prove Theorem \ref{thm-entire-002}. Finally in section 5, we study the phenomenon of exclusion and prove Theorem \ref{thm-extinction}.
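As a simple concrete illustration of the hypotheses (the numerical values here are chosen purely for illustration and do not come from any of the cited works): take constant coefficients $d_3=k=l=1$, $\chi_1=\chi_2=0.1$, $a_0=b_0=a_1=b_2=1$, and $a_2=b_1=0.5$. Then (H1) holds since $1>0.1$ and $0.5\ge 0.1$, and $\bar A_1=\bar A_2=\frac{1}{0.9}\approx 1.11$, so (H4) holds as well because $a_{0,\inf}=1>a_{2,\sup}\bar A_2\approx 0.56$ and $b_{0,\inf}=1>b_{1,\sup}\bar A_1\approx 0.56$. Hence, for this choice of parameters, Theorems \ref{thm-entire-001} and \ref{thm-entire-002} yield persistence and the existence of a coexistence state.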
\medskip \begin{comment} \noindent {\bf Acknowledgement.} { The authors also would like to thank the referee for valuable comments and suggestions which improved the presentation of this paper considerably.} \end{comment} \section{Global existence of bounded classical solutions} In this section, we study the existence of bounded classical solutions of system \eqref{u-v-w-eq00} and prove Theorem \ref{thm-global-000}. We start with the following important result on the local existence of classical solutions of system \eqref{u-v-w-eq00} with initial functions in ${ C^+(\bar{\Omega}).}$ \begin{lemma} \label{lm-local-001} For any given $t_0 \in \mathbb{R},$ $u_0,v_0 \in { C^+(\bar{\Omega})}$, there exists $T_{\max}(t_0,u_0,v_0) \in (0,\infty]$ such that $\eqref{u-v-w-eq00}{ +\eqref{ic}}$ has a unique nonnegative classical solution $(u(x,t;t_0)$, $v(x,t;t_0)$, $w(x,t;t_0))$ on $(t_0,t_0+T_{\max}(t_0,u_0,v_0))$ satisfying that $$\lim_{t \searrow t_0}\|u(\cdot,t;t_0)-u_0\|_{\infty}=0,\quad \lim_{t\searrow t_0}\|v(\cdot,t;t_0)-v_0\|_{ \infty}=0, $$ and moreover if $T_{\max}(t_0,u_0,v_0)< \infty,$ then \begin{equation} \label{local-eq00} \limsup_{t \nearrow T_{\max}(t_0,u_0,v_0)}\left( \left\| u(\cdot,t_0+t;t_0) \right\|_{\infty} +\left\| v(\cdot,t_0+t;t_0) \right\|_{ \infty}\right) =\infty. \end{equation} \end{lemma} \begin{proof} It follows from similar arguments as those in \cite[Lemma 2.1]{STW13}. \end{proof} Next, we consider the following system of ODEs induced from system \eqref{u-v-w-eq00}, \begin{equation} \label{ode00} \begin{cases} \overline{u}'=\frac{\chi_1}{d_3} \overline{u}\big(k \overline {u}+l\overline v-k\underline{u}-l\underline{v}\big)+ \overline{u}\big[a_{0,\sup}(t)-a_{1,\inf}(t)\overline u -a_{2,\inf}(t)\underline{v}\big]\\ \underline{u}'=\frac{\chi_1}{d_3} \underline{u}\big(k \underline {u}+l\underline v-k\overline{u}-l\overline{v}\big)+ \underline{u}\big[a_{0,\inf}(t)-a_{1,\sup}(t)\underline u -a_{2,\sup}(t)\overline{v}\big]\\ \overline{v}'=\frac{\chi_2}{d_3} \overline{v}\big(k \overline {u}+l\overline v-k\underline{u}-l\underline{v}\big)+ \overline{v}\big[b_{0,\sup}(t)-b_{2,\inf}(t)\overline v -b_{1,\inf}(t)\underline{u}\big]\\ \underline{v}'=\frac{\chi_2}{d_3} \underline{v}\big(k \underline {u}+l\underline v-k\overline{u}-l\overline{v}\big)+ \underline{v}\big[b_{0,\inf}(t)-b_{2,\sup}(t)\underline v -b_{1,\sup}(t)\overline{u}\big]. \end{cases} \end{equation} For convenience, we let \begin{align*} &\left(\overline{u}(t),\underline{u}(t),\overline{v}(t),\underline{v}(t)\right)\\ &=\left(\overline{u}\left(t;t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right),\underline{u}\left(t;t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right),\overline{v}\left(t;t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right),\underline{v}\left(t;t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0, \underline{v}_0\right)\right) \end{align*} be the solution of \eqref{ode00} with initial condition \begin{align}\label{initial-ode00} &\left(\overline{u}\left(t_0;t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right),\underline{u}\left(t_0;t_0,\overline{u}_0, \underline{u}_0,\overline{v}_0,\underline{v}_0\right),\overline{v}\left(t_0;t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right), \underline{v}\left(t_0;t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0, \underline{v}_0\right)\right)\nonumber\\ &=\left(\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right) \in \mathbb{R}^4_+.
\end{align} Then for given $t_0\in\mathbb{R}$ and $\left(\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right) \in \mathbb{R}^4_+,$ there exists $T_{\max}\left(t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right)>0$ such that \eqref{ode00} has a unique classical solution $\left(\overline{u}(t),\underline{u}(t),\overline{v}(t),\underline{v}(t)\right)$ on $(t_0,t_0+T_{\max}\left(t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right))$ satisfying \eqref{initial-ode00}. Moreover if $T_{\max}\left(t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right)<\infty,$ then \begin{equation}\label{blow-creterion-ode00} \limsup_{t \nearrow T_{\max}\left(t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right)} \left(|\overline{u}(t_0+t)|+|\underline{u}(t_0+t)|+|\overline{v}(t_0+t)|+|\underline{v}(t_0+t)|\right)=\infty. \end{equation} We now state and prove the following important lemma which provides sufficient conditions for the boundedness of classical solutions of system \eqref{ode00}. \begin{lemma}\label{lem-1-ode00} { Let} $\left(\overline{u}(t),\underline{u}(t),\overline{v}(t),\underline{v}(t)\right)$ be the solution of \eqref{ode00} which satisfies \eqref{initial-ode00}. Then \begin{itemize} \item[(i)] $0\leq\underline{u}_0 \leq \overline{u}_0\quad \text{and} \quad 0\leq \underline{v}_0 \leq \overline{v}_0$ imply $ 0\leq\underline{u}(t)\leq \overline{u}(t) \quad \text{and} \quad 0\leq \underline{v}(t)\leq \overline{v}(t)$ for all $t \in [t_0,t_0+T_{\max}\left(t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right)).$ \item[(ii)] If { (H2)} holds, then $T_{\max}\left(t_0,\overline{u}_0,\underline{u}_0,\overline{v}_0,\underline{v}_0\right)=\infty$ and $$ \limsup_{t\to\infty} \overline{u}(t) \leq \bar B_1,\quad \limsup_{t\to\infty}\overline{v}(t)\leq \bar B_2, $$ where $\bar B_1$ and $\bar B_2$ are as in \eqref{A1-overbar-0} and \eqref{A2-overbar-0}, respectively. \end{itemize} \end{lemma} \begin{proof} (i) Let $\epsilon>0$ and $\left(\overline{u}_{\epsilon}(t),\underline{u}_{\epsilon}(t),\overline{v}_{\epsilon}(t),\underline{v}_{\epsilon}(t) \right)$ { be} the solution of \eqref{ode00} { with $a_{0,\sup}(t)$ and $b_{0,\sup}(t)$ being replaced by $a_{0,\sup}(t)+\epsilon$ and $b_{0,\sup}(t)+\epsilon$, respectively, and} satisfying \eqref{initial-ode00} with $\overline{u}_0,$ $\overline{v}_0$ being replaced respectively by $\overline{u}^{\epsilon}_0=\overline{u}_0+\epsilon$ and $\overline{v}^{\epsilon}_0=\overline{v}_0+\epsilon.$ We claim first that (i) holds for $\left(\overline{u}_{\epsilon}(t),\underline{u}_{\epsilon}(t),\overline{v}_{\epsilon}(t),\underline{v}_{\epsilon}(t) \right).$ Suppose by contradiction that our claim does not hold. Then there exists $\overline{t} \in (t_0,t_0+T_{\max}\left(t_0,\overline{u}^{\epsilon}_0,\underline{u}_0,\overline{v}^{\epsilon}_0,\underline{v}_0\right))$ such that \begin{equation}\label{proof-eq00-ode00} 0\leq\underline{u}_{\epsilon}(t) < \overline{u}_{\epsilon}(t),\quad 0\leq\underline{v}_{\epsilon}(t)< \overline{v}_{\epsilon}(t) \, ,\forall t \in [t_0,\overline{t}) \end{equation} and { $$ {\rm either}\quad \underline{u}_{\epsilon}(\overline{t}) = \overline{u}_{\epsilon}(\overline{t})\quad {\rm or}\quad \underline{v}_{\epsilon}(\overline{t}) = \overline{v}_{\epsilon}(\overline{t}). $$ Without loss of generality, assume that $\underline{u}_{\epsilon}(\overline{t}) = \overline{u}_{\epsilon}(\overline{t})$.
Then} on one hand \eqref{proof-eq00-ode00} implies that \begin{equation}\label{proof-eq01-ode00} \left(\overline{u}_{\epsilon}-\underline{u}_{\epsilon}\right)^{'}(\overline{t})\leq 0, \end{equation} and on the other hand the difference between the first and the second equations of \eqref{ode00} gives \begin{align*} \left(\overline{u}_{\epsilon}-\underline{u}_{\epsilon}\right)^{'}(\overline{t})&=\overline{u}_{\epsilon}(\overline{t})\left\{a_{0,\sup}(\overline{t})+\epsilon-a_{0,\inf}(\overline{t})+(a_{1,\sup}(\overline{t})-a_{1,\inf}(\overline{t}))\overline{u}_{\epsilon}(\overline{t})+2l\frac{\chi_1}{d_3}\left(\overline{v}_{\epsilon}-\underline{v}_{\epsilon}\right)(\overline{t})\right\}\nonumber\\ &\qquad \qquad \qquad +\overline{u}_{\epsilon}(\overline{t})\left\{a_{2,\sup}(\overline{t})\overline{v}_{\epsilon}(\overline{t})-a_{2,\inf}(\overline{t}) \underline{v}_{\epsilon}(\overline{t})\right\}>0, \end{align*} which contradicts \eqref{proof-eq01-ode00}. Thus (i) holds for $\big(\overline{u}_{\epsilon}(t),\underline{u}_{\epsilon}(t)$, $\overline{v}_{\epsilon}(t)$, $\underline{v}_{\epsilon}(t) \big)$. Letting $\epsilon \to 0$, we have that (i) holds for $\left(\overline{u}(t),\underline{u}(t),\overline{v}(t),\underline{v}(t) \right)$. (ii) First from the first and third equations of \eqref{ode00} we get \begin{equation} \label{ode01} \begin{cases} \overline{u}' \leq \overline{u}\Big[a_{0,\sup}-\left(a_{1,\inf}-k\frac{\chi_1}{d_3}\right)\overline u+l\frac{\chi_1}{d_3}\overline{v}\Big]\\ \overline{v}' \leq \overline{v}\Big[b_{0,\sup}-\left(b_{2,\inf}-l\frac{\chi_2}{d_3}\right)\overline v+k\frac{\chi_2}{d_3}\overline{u}\Big]. \end{cases} \end{equation} Thus the result follows from the comparison principle for cooperative systems and the fact that $(\bar B_1, \bar B_2)$ is a uniformly asymptotically stable solution for the following system of ODEs, \begin{equation*} \begin{cases} u'=u\left\{ a_{0,\sup}-(a_{1,\inf}-k\frac{\chi_1}{d_3}) u+l\frac{\chi_1}{d_3} v\right\}\\ v'=v\left\{ b_{0,\sup}-\left(b_{2,\inf}-l\frac{\chi_2}{d_3}\right)v+k\frac{\chi_2}{d_3}u\right\}. \end{cases} \end{equation*} \end{proof} Now we prove Theorem \ref{thm-global-000}. \begin{proof} [{ Proof of Theorem \ref{thm-global-000}}] Let $u_0,v_0 \in {C^+(\bar{\Omega})}.$ (1) From the first equation of system \eqref{u-v-w-eq00}, we have that for $t\in (t_0,t_0+T_{\max}(t_0,u_0,v_0))$, \begin{align*} u_t=&d_1\Delta u-\chi_1 \nabla u \cdot \nabla w+u\left\{ a_0(t,x)-\left(a_1(t,x)-k\frac{\chi_1 }{d_3}\right)u -\left(a_2(t,x)-l\frac{\chi_1 }{d_3} \right)v-\frac{\chi_1 }{d_3}\lambda w\right\}\\ & \leq d_1\Delta u-\chi_1 \nabla u \cdot \nabla w+u\left\{ a_{0,\sup}-\left(a_{1,\inf}-k\frac{\chi_1 }{d_3}\right)u-\left(a_{2,\inf}-l\frac{\chi_1 }{d_3}\right) v - \frac{\chi_1 }{d_3}\lambda w \right\}. \end{align*} This together with { (H1)} gives for $t\in (t_0,t_0+T_{\max}(t_0,u_0,v_0))$, \begin{align} \label{aux-eq1} u_t& \leq d_1\Delta u-\chi_1 \nabla u \cdot \nabla w+u\left\{ a_{0,\sup}-\left(a_{1,\inf}-k\frac{\chi_1 }{d_3}\right)u \right\}. \end{align} { Let $u(t;\|u_0\|_\infty)$ be the solution of the ODE $$ u^{'}=u\left\{ a_{0,\sup}-\left(a_{1,\inf}-k\frac{\chi_1 }{d_3}\right)u \right\} $$ with $u(0;\|u_0\|_\infty)=\|u_0\|_\infty$. Then $u(t;\|u_0\|_\infty)$ is increasing if $\|u_0\|_\infty<\frac{a_{0,\sup}}{a_{1,\inf}-k\frac{\chi_1}{d_3}}$ and is decreasing if $\|u_0\|_\infty>\frac{a_{0,\sup}}{a_{1,\inf}-k\frac{\chi_1}{d_3}}$, and $u(t;\|u_0\|_\infty)$ converges to $\frac{a_{0,\sup}}{a_{1,\inf}-k\frac{\chi_1}{d_3}}$ as $t\to\infty$.
} Therefore by the comparison principle for parabolic equations, we get
\begin{equation} \label{global-u-eq}
0\leq u(x,t;t_0,u_0,v_0)\leq \max\left\{\|u_0\|_\infty,\frac{a_{0,\sup}}{a_{1,\inf}-k\frac{\chi_1}{d_3}} \right\}\quad \forall\,\, t\in [t_0,t_0+T_{\max}(t_0,u_0,v_0)).
\end{equation}
Similarly, the second equation of system \eqref{u-v-w-eq00} gives
\begin{equation} \label{global-v-eq}
0\leq v(x,t;t_0,u_0,v_0)\leq \max\left\{\|v_0\|_\infty,\frac{b_{0,\sup}}{b_{2,\inf}-l\frac{\chi_2}{d_3}} \right\}\quad \forall\,\, t\in [t_0,t_0+T_{\max}(t_0,u_0,v_0)).
\end{equation}
By \eqref{local-eq00}, \eqref{global-u-eq}, and \eqref{global-v-eq}, we have $T_{\max}(t_0,u_0,v_0)=\infty$. Moreover, by \eqref{aux-eq1} and the comparison principle for parabolic equations again, { for any $\epsilon>0$, there is $T_1(u_0,v_0,\epsilon)\ge 0$ such that
$$
0\le u(x,t;t_0,u_0,v_0)\le \frac{a_{0,\sup}}{a_{1,\inf}-k\frac{\chi_1}{d_3}}+\epsilon\quad \forall\,\, x\in\bar\Omega,\,\, t\ge t_0+T_1(u_0,v_0,\epsilon),
$$
and $T_1(u_0,v_0,\epsilon)$ can be chosen to be zero if $u_0\le\bar A_1+\epsilon$. Similarly, for any $\epsilon>0$, there is $T_2(u_0,v_0,\epsilon)\ge 0$ such that
$$
0\le v(x,t;t_0,u_0,v_0)\le \frac{b_{0,\sup}}{b_{2,\inf}-l\frac{\chi_2}{d_3}}+\epsilon \quad \forall\,\, x\in\bar\Omega,\,\, t\ge t_0+T_2(u_0,v_0,\epsilon),
$$
and $T_2(u_0,v_0,\epsilon)$ can be chosen to be zero if $v_0\le\bar A_2+\epsilon$. (1) thus follows with $T(u_0,v_0,\epsilon)=\max\{T_1(u_0,v_0,\epsilon),T_2(u_0,v_0,\epsilon)\}$.}

(2) Let $\overline{u}_0=\max_{x \in \bar \Omega}u_0(x)$, $\underline{u}_0=\min_{x \in \bar \Omega}u_0(x)$, $\overline{v}_0=\max_{x \in \bar \Omega}v_0(x)$, $\underline{v}_0=\min_{x \in \bar \Omega}v_0(x)$, and let $\left(\overline{u}(t),\underline{u}(t),\overline{v}(t),\underline{v}(t) \right)$ { be} the solution of \eqref{ode00} satisfying initial condition \eqref{initial-ode00}. By arguments similar to those in \cite[Theorem 1.1(1)]{ITBRS17}, under the condition { (H2)} we have
\[
0\leq \underline{u}(t) \leq u(x,t) \leq \overline{u}(t) \quad \text{and} \quad 0 \leq \underline{v}(t) \leq v(x,t) \leq \overline{v}(t)\quad \forall\, x \in \bar \Omega,\,\, t \in (t_0, t_0+T_{\max}).
\]
This together with Lemma \ref{lem-1-ode00} implies Theorem \ref{thm-global-000} (2).

(3) It follows from arguments similar to those in \cite[Theorem 1.1(2)]{ITBRS17}.
\end{proof}

\section{Persistence} \label{Persistence}

In this section, we study the persistence properties of \eqref{u-v-w-eq00} and prove Theorem \ref{thm-entire-001}. Fix $T>0$. We first prove five lemmas.

\begin{lemma} \label{persistence-lm1}
\begin{itemize}
\item[(1)] Assume (H1). For any $\epsilon>0$, there is $\delta=\delta(\epsilon)>0$ such that for any { $u_0, v_0 \in C^+(\bar{\Omega})$, the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00}+\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] If $0\le u_0\le \delta$, then $u(x,t;t_0)\le \epsilon$ for $t\in [t_0,t_0+T]$ and $x\in\bar\Omega$.
\item[(ii)] If $0\le v_0\le \delta$, then $v(x,t;t_0)\le \epsilon$ for $t\in [t_0,t_0+T]$ and $x\in\bar\Omega$.
\end{itemize}
\item[(2)] Assume (H2).
For any $\epsilon>0$, there is $\delta=\delta(\epsilon)>0$ such that for any { $u_0, v_0 \in C^+(\bar{\Omega})$,} { the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00} +\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] If $0\le u_0\le \delta$ { and $0\le v_0\le \bar B_2+\epsilon$}, then $u(x,t;t_0)\le \epsilon$ for $t\in [t_0,t_0+T]$ and $x\in\bar\Omega$.
\item[(ii)] If $0\le v_0\le \delta$ { and $0\le u_0\le \bar B_1+\epsilon$}, then $v(x,t;t_0)\le \epsilon$ for $t\in [t_0,t_0+T]$ and $x\in\bar\Omega$.
\end{itemize}
\end{itemize}
\end{lemma}

\begin{proof}
(1)(i) Assume (H1). Then
\begin{align*}
u_t&=d_1\Delta u-\chi_1 \nabla \cdot (u\nabla w)+u\Big(a_0(t,x)-a_1(t,x)u-a_2(t,x)v\Big)\\
&=d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+u\Big(a_0(t,x)-(a_1(t,x)-\frac{\chi_1 k}{d_3})u-(a_2(t,x)-\frac{\chi_1 l}{d_3})v-\frac{\chi_1 \lambda}{d_3}w\Big)\\
&\le d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+a_{0,\sup} u.
\end{align*}
Hence, by the comparison principle for parabolic equations, we have
$$
u(x,t;t_0)\le e^{a_{0,\sup}(t-t_0)} \|u_0\|\quad \forall\,\, t\ge t_0.
$$
(1)(i) thus follows with $\delta=\epsilon e^{-a_{0,\sup}T}$ for any given $\epsilon>0$.

(1)(ii) It can be proved by arguments similar to those in (1)(i).

(2)(i) By Theorem \ref{thm-global-000}(2),
$$
v(x,t+t_0;t_0)\le \bar B_2+\epsilon \quad \forall \,\, t\ge 0,\quad x\in\bar\Omega.
$$
Assume (H2). Then
\begin{align*}
u_t&=d_1\Delta u-\chi_1 \nabla \cdot (u\nabla w)+u\Big(a_0(t,x)-a_1(t,x)u-a_2(t,x)v\Big)\\
&=d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+u\Big(a_0(t,x)-(a_1(t,x)-\frac{\chi_1 k}{d_3})u-(a_2(t,x)-\frac{\chi_1 l}{d_3})v-\frac{\chi_1 \lambda}{d_3}w\Big)\\
&\le d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+\big(a_{0,\sup}+\frac{\chi_1 l}{d_3}(\bar B_2+\epsilon)\big) u.
\end{align*}
By the comparison principle for parabolic equations, we have
$$
u(x,t;t_0)\le e^{\big(a_{0,\sup}+\frac{\chi_1 l}{d_3}(\bar B_2+\epsilon)\big)(t-t_0)} \|u_0\|\quad \forall\,\, t\ge t_0.
$$
(2)(i) thus follows with $\delta=\epsilon e^{-\big(a_{0,\sup}+\frac{\chi_1 l}{d_3}(\bar B_2+\epsilon)\big)T}$ for any given $\epsilon>0$.

(2)(ii) It can be proved by arguments similar to those in (2)(i).
\end{proof}

\begin{lemma} \label{persistence-lm2}
\begin{itemize}
\item[(1)] {Assume (H4). Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that Lemma \ref{persistence-lm1}(1) holds with $\epsilon=\epsilon_0$ and $\delta=\delta_0$,
$$
a_{0,\inf}>a_{2,\sup}(\bar A_2+\epsilon_0)+\frac{\chi_1 k}{d_3}\epsilon_0,\quad b_{0,\inf}> b_{1,\sup}(\bar A_1+\epsilon_0)+\frac{\chi_2 l}{d_3}\epsilon_0,
$$
and
\begin{align*}
\delta_0<{ \min}\Big\{\frac{a_{0,\inf}-a_{2,\sup}(\bar A_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0}{a_{1,\sup}-\frac{\chi_1 k}{d_3}},\frac{b_{0,\inf}- b_{1,\sup}(\bar A_1+\epsilon_0)-\frac{\chi_2 l}{d_3}\epsilon_0}{b_{2,\sup}-\frac{\chi_2 l}{d_3}}\Big\}.
\end{align*} }
For given { $u_0, v_0 \in C^+(\bar{\Omega})$, the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00} +\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] If $0<u_0<\delta_0$ and { $0\le v_0\le \bar A_2+\epsilon_0$}, then $u(x,t+t_0;t_0)>\inf_{ \bar{\Omega}} u_0(x)\quad \forall \,\, 0<t\le T.$
\item[(ii)] If $0<v_0<\delta_0$ and { $0\le u_0\le \bar A_1+\epsilon_0$}, then $v(x,t+t_0;t_0)>\inf_{ \bar{\Omega}} v_0(x)\quad \forall \,\, 0<t\le T.$
\end{itemize}
\item[(2)] { Assume (H5). Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that Lemma \ref{persistence-lm1}(2) holds with $\epsilon=\epsilon_0$ and $\delta=\delta_0$,
$$
a_{0,\inf}>\big[\big(a_{2,\sup}-\frac{\chi_1 l}{d_3}\big)_++\frac{\chi_1 l}{d_3}\big](\bar B_2+\epsilon_0)+\frac{\chi_1 k}{d_3}\epsilon_0,
$$
$$
b_{0,\inf}>\big[\big(b_{1,\sup}-\frac{\chi_2 k}{d_3}\big)_++\frac{\chi_2 k}{d_3}\big](\bar B_1+\epsilon_0)+\frac{\chi_2 l}{d_3}\epsilon_0,
$$
and
\begin{align*}
\delta_0<{ \min}\Big\{& \frac{a_{0,\inf}-\big[\big(a_{2,\sup}-\frac{\chi_1 l}{d_3}\big)_++\frac{\chi_1 l}{d_3}\big](\bar B_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0} {a_{1,\sup}-\frac{\chi_1 k}{d_3}},\\
& \frac{b_{0,\inf}-\big[\big(b_{1,\sup}-\frac{\chi_2 k}{d_3}\big)_++\frac{\chi_2 k}{d_3}\big](\bar B_1+\epsilon_0)-\frac{\chi_2 l}{d_3}\epsilon_0} {b_{2,\sup}-\frac{\chi_2 l}{d_3}}\Big\}.
\end{align*} }
For given { $u_0, v_0 \in C^+(\bar{\Omega})$,} { the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00} +\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] If $0<u_0<\delta_0$ and { $0\le v_0\le \bar B_2+\epsilon_0$}, then $u(x,t+t_0;t_0)>\inf_{ \bar{\Omega}} u_0(x)\quad \forall \,\, 0<t\le T.$
\item[(ii)] If $0<v_0<\delta_0$ and { $0\le u_0\le \bar B_1+\epsilon_0$}, then $v(x,t+t_0;t_0)>\inf_{ \bar{\Omega}} v_0(x)\quad \forall \,\, 0<t\le T.$
\end{itemize}
\end{itemize}
\end{lemma}

\begin{proof}
(1)(i) Without loss of generality, assume $\inf_{ \bar{\Omega}}u_0(x)>0$. By Theorem \ref{thm-global-000} (1),
$$
v(x,t+t_0;t_0)\le \bar A_2+\epsilon_0 \quad \forall \,\, t\ge 0,\quad x\in\bar\Omega.
$$
This together with Lemma \ref{persistence-lm1} (1) implies that
\begin{align*}
u_t&=d_1\Delta u-\chi_1 \nabla \cdot (u\nabla w)+u\Big(a_0(t,x)-a_1(t,x)u-a_2(t,x)v\Big)\\
&=d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+u\Big(a_0(t,x)-(a_1(t,x)-\frac{\chi_1 k}{d_3})u-(a_2(t,x)-\frac{\chi_1 l}{d_3})v-\frac{\chi_1 \lambda}{d_3}w\Big)\\
&\ge d_1\Delta u-\chi_1 \nabla u\cdot\nabla w\\
&\,\, +u\Big(a_0(t,x)-(a_1(t,x)-\frac{\chi_1 k}{d_3})u-(a_2(t,x)-\frac{\chi_1 l}{d_3})(\bar A_2+\epsilon_0)-\frac{\chi_1 \lambda}{d_3}\big(\frac{k}{\lambda} \epsilon_0+\frac{l}{\lambda} (\bar A_2+\epsilon_0)\big)\Big) \\
&=d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+u\Big(a_0(t,x)-a_2(t,x)(\bar A_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0-(a_1(t,x)-\frac{\chi_1 k}{d_3})u\Big)\\
&\ge d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+u\Big(a_{0,\inf}-a_{2,\sup}(\bar A_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0-(a_{1,\sup}-\frac{\chi_1 k}{d_3})u\Big)
\end{align*}
for $0<t\le T$. Let $\tilde u(t)$ be the solution of
$$
\tilde u_t=\tilde u\Big(a_{0,\inf}-a_{2,\sup}(\bar A_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0-(a_{1,\sup}-\frac{\chi_1 k}{d_3})\tilde u\Big)
$$
with $\tilde u(t_0)=\inf_{ \bar\Omega} u_0(x)$.
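(In the first inequality of the preceding display we used, besides $0\le u\le \epsilon_0$ and $0\le v\le \bar A_2+\epsilon_0$, the pointwise bound
$$
w\le \frac{k}{\lambda}\sup_{\bar\Omega}u+\frac{l}{\lambda}\sup_{\bar\Omega}v\le \frac{k}{\lambda}\epsilon_0+\frac{l}{\lambda}(\bar A_2+\epsilon_0).
$$
We recall its elementary justification as a sanity check, writing the third equation of \eqref{u-v-w-eq00} in the form $0=d_3\Delta w+ku+lv-\lambda w$ with homogeneous Neumann boundary conditions, the form consistent with the estimates above: at a maximum point of $w$ over $\bar\Omega$ one has $\Delta w\le 0$, hence $\lambda w\le ku+lv$ there, and the bound follows.)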
Then $\tilde u(t)$ is monotonically increasing in $t\ge t_0$, since by the choice of $\delta_0$ its initial value $\tilde u(t_0)<\delta_0$ lies below the positive equilibrium, and
$$\lim_{t\to\infty} \tilde u(t)=\frac{a_{0,\inf}-a_{2,\sup}(\bar A_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0}{a_{1,\sup}-\frac{\chi_1 k}{d_3}}.
$$
By the comparison principle for parabolic equations, we have
$$
u(x,t+t_0;t_0)\ge \tilde u(t+t_0)> \inf_{ \bar\Omega} u_0(x)\quad \forall\,\, 0<t\le T.
$$

(1)(ii) It can be proved by arguments similar to those in (1)(i).

(2)(i) Again, without loss of generality, assume $\inf_{ \bar\Omega}u_0(x)>0$. By Theorem \ref{thm-global-000} (2),
$$
v(x,t+t_0;t_0)\le \bar B_2+\epsilon_0 \quad \forall \,\, t\ge 0,\quad x\in\bar\Omega.
$$
This together with Lemma \ref{persistence-lm1} (2) implies that
\begin{align*}
u_t&=d_1\Delta u-\chi_1 \nabla \cdot (u\nabla w)+u\Big(a_0(t,x)-a_1(t,x)u-a_2(t,x)v\Big)\\
&=d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+u\Big(a_0(t,x)-(a_1(t,x)-\frac{\chi_1 k}{d_3})u-(a_2(t,x)-\frac{\chi_1 l}{d_3})v-\frac{\chi_1 \lambda}{d_3}w\Big)\\
&\ge d_1\Delta u-\chi_1 \nabla u\cdot\nabla w\\
&\,\, +u\Big(a_0(t,x)-(a_1(t,x)-\frac{\chi_1 k}{d_3})u-\big(a_2(t,x)-\frac{\chi_1 l}{d_3}\big)_+(\bar B_2+\epsilon_0)-\frac{\chi_1 \lambda}{d_3}\big(\frac{k}{\lambda} \epsilon_0+\frac{l}{\lambda} (\bar B_2+\epsilon_0)\big)\Big) \\
&\ge d_1\Delta u-\chi_1 \nabla u\cdot\nabla w\\
&\,\, +u\Big(a_{0,\inf}-\big[\big(a_{2,\sup}-\frac{\chi_1 l}{d_3}\big)_++\frac{\chi_1 l}{d_3}\big](\bar B_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0-(a_{1,\sup}-\frac{\chi_1 k}{d_3})u\Big)
\end{align*}
for $0<t\le T$. Let $\tilde u(t)$ be the solution of
$$
\tilde u_t=\tilde u\Big(a_{0,\inf}-\big[\big(a_{2,\sup}-\frac{\chi_1 l}{d_3}\big)_++\frac{\chi_1 l}{d_3}\big](\bar B_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0-(a_{1,\sup}-\frac{\chi_1 k}{d_3})\tilde u\Big)
$$
with $\tilde u(t_0)=\inf_{ \bar\Omega}u_0(x)$. As before, $\tilde u(t)$ is monotonically increasing in $t\ge t_0$ and
$$\lim_{t\to\infty} \tilde u(t)=\frac{a_{0,\inf}-\big[\big(a_{2,\sup}-\frac{\chi_1 l}{d_3}\big)_++\frac{\chi_1 l}{d_3}\big](\bar B_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0}{a_{1,\sup}-\frac{\chi_1 k}{d_3}}.
$$
By the comparison principle for parabolic equations, we have
$$
u(x,t+t_0;t_0)\ge \tilde u(t+t_0)> \inf_{ \bar\Omega} u_0(x)\quad \forall\,\, 0<t\le T.
$$

(2)(ii) It can be proved by arguments similar to those in (2)(i).
\end{proof}

\begin{lemma} \label{persistence-lm3}
\begin{itemize}
\item[(1)] Assume (H1).
{ Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that Lemma \ref{persistence-lm1}(1) holds with $\epsilon=\epsilon_0$ and $\delta=\delta_0$.} There are $\underbar A_1^1>0$ and $\underbar A_2^1>0$ such that for any $t_0\in\RR$ and { $u_0, v_0 \in C^+(\bar{\Omega})$ with} $0< u_0\le { \bar A_1+\epsilon_0}$ and $0< v_0\le { \bar A_2+\epsilon_0}$, { the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00} +\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] For any $t\ge T$, if $\sup_{ \bar \Omega} u(x,t+t_0;t_0)\ge \delta_0$, then $\inf_{ \bar\Omega} u(x,t+t_0;t_0)\ge \underbar A_1^1$.
\item[(ii)] For any $t\ge T$, if $\sup_{ \bar \Omega} v(x,t+t_0;t_0)\ge \delta_0$, then $\inf_{ \bar\Omega} v(x,t+t_0;t_0)\ge \underbar A_2^1$.
\end{itemize}
\item[(2)] Assume (H2). { Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that Lemma \ref{persistence-lm1}(2) holds with $\epsilon=\epsilon_0$ and $\delta=\delta_0$.} There are $\underbar B_1^1>0$ and $\underbar B_2^1>0$ such that for any $t_0\in\RR$ and { $u_0, v_0 \in C^+(\bar{\Omega})$ with} $0< u_0\le \bar B_1+\epsilon_0$ and $0< v_0\le \bar B_2+\epsilon_0$, { the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00} +\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] For any $t\ge T$, if $\sup_{ \bar \Omega} u(x,t+t_0;t_0)\ge \delta_0$, then $\inf_{ \bar\Omega} u(x,t+t_0;t_0)\ge \underbar B_1^1$.
\item[(ii)] For any $t\ge T$, if $\sup_{ \bar \Omega} v(x,t+t_0;t_0)\ge \delta_0$, then $\inf_{ \bar\Omega} v(x,t+t_0;t_0)\ge \underbar B_2^1$.
\end{itemize}
\end{itemize}
\end{lemma}

\begin{proof}
(1)(i) Assume that (1)(i) does not hold. Then there are $t_{0n}\in\RR$, $t_n\ge T$, and $u_n,v_n$ with { $0<u_n \le \bar A_1+\epsilon_0$ and $0<v_n \le \bar A_2+\epsilon_0$ } such that
$$
{\sup_{x \in \bar \Omega}} u(x,t_n+t_{0n};t_{0n},u_n,v_n)\ge \delta_0,\quad \lim_{n\to\infty} \inf_{x\in\bar\Omega} u(x,t_n+t_{0n};t_{0n},u_n,v_n)=0.
$$
By Theorem \ref{thm-global-000}(1),
$$
{ 0<u(x,t+t_{0n};t_{0n},u_n,v_n)\le \bar A_1+\epsilon_0,\quad 0<v(x,t+t_{0n};t_{0n},u_n,v_n)\leq \bar A_2+\epsilon_0}\quad \forall\,\, t>0,\,\,\, x\in\bar\Omega.
$$
Without loss of generality, we may assume that
{
$$
\lim_{n\to\infty} a_i(t+t_n+t_{0n},x)=\tilde a_i(t,x),\quad \lim_{n\to\infty} b_i(t+t_n+t_{0n},x)=\tilde b_i(t,x)
$$}
and
$$
\lim_{n\to\infty} u(x,t+t_n+t_{0n};t_{0n},u_n,v_n)=\tilde u(x,t),\quad \lim_{n\to\infty} v(x,t+t_n+t_{0n};t_{0n},u_n,v_n)=\tilde v(x,t)
$$
{ uniformly in $x\in\bar\Omega$ and $t$ in bounded closed sets of $(-T,\infty)$.}
{Note that
$$
u(x,t+t_n+t_{0n};t_{0n},u_n,v_n)=u(x,t+t_n+t_{0n};t_n+t_{0n},u(\cdot,t_n+t_{0n};t_{0n},u_n,v_n),v(\cdot,t_n+t_{0n};t_{0n},u_n,v_n)),
$$
and
$$
v(x,t+t_n+t_{0n};t_{0n},u_n,v_n)=v(x,t+t_n+t_{0n};t_n+t_{0n},u(\cdot,t_n+t_{0n};t_{0n},u_n,v_n),v(\cdot,t_n+t_{0n};t_{0n},u_n,v_n)).
$$
Therefore
$$
\tilde u(x,t)= \tilde u(x,t;0,\tilde u(\cdot,0),\tilde v(\cdot,0)), \quad \tilde v(x,t)= \tilde v(x,t;0,\tilde u(\cdot,0),\tilde v(\cdot,0)),
$$
where $\left( \tilde u(x,t;0,\tilde u(\cdot,0),\tilde v(\cdot,0)),\tilde v(x,t;0,\tilde u(\cdot,0),\tilde v(\cdot,0)), \tilde w(x,t;0,\tilde u(\cdot,0),\tilde v(\cdot,0))\right)$ is the solution of \eqref{u-v-w-eq00} on $(-T,\infty)$ with $a_i $ being replaced by $\tilde a_i$ and $b_i $ being replaced by $\tilde b_i $, and
$$\big( \tilde u(x,0;0,\tilde u(\cdot,0),\tilde v(\cdot,0)),\tilde v(x,0;0,\tilde u(\cdot,0),\tilde v(\cdot,0))\big)=\big(\tilde u(x,0),\tilde v(x,0)\big).
$$
Moreover,} { we have
$$
\tilde u(x,-\frac{T}{2})=\lim_{n\to\infty} u(x,-\frac{T}{2}+t_n+t_{0n};t_{0n},u_n,v_n),
$$
and
$$
\tilde v(x,-\frac{T}{2})=\lim_{n\to\infty} v(x,-\frac{T}{2}+t_n+t_{0n};t_{0n},u_n,v_n).
$$}
Hence $\tilde u(x,-T/2)\ge 0$, $\tilde v(x,-T/2)\ge 0$ for $x\in\bar\Omega$, and $\sup_{ \bar\Omega}\tilde u(x,0)\ge \delta_0,\quad \inf_{ \bar\Omega} \tilde u(x,0)=0,$ which is a contradiction { by the comparison principle for parabolic equations, since a nonnegative solution which is not identically zero must be strictly positive on $\bar\Omega$ at time $0>-\frac{T}{2}$}. Hence (1)(i) holds.

{ (1)(ii), (2)(i), and (2)(ii) can be proved by arguments similar to those in (1)(i).}
\end{proof}

\begin{lemma} \label{persistence-lm4}
\begin{itemize}
\item[(1)] { Assume (H4). Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that Lemma \ref{persistence-lm1}(1) and Lemma \ref{persistence-lm2}(1) hold with $\epsilon=\epsilon_0$ and $\delta=\delta_0$.} There are $\underbar A_1^2>0$ and $\underbar A_2^2>0$ such that for any $t_0\in\RR$ and { $u_0, v_0 \in C^+(\bar{\Omega})$ with} $0< u_0\le { \bar A_1+\epsilon_0}$ and $0< v_0\le { \bar A_2+\epsilon_0}$, { the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00} +\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] For any $\underbar A_1\le \underbar A_1^2$, if $\inf_{ \bar\Omega} u_0(x)\ge \underbar A_1$, then $\inf_{ \bar\Omega} u(x,T+t_0;t_0)\ge \underbar A_1$.
\item[(ii)] For any $\underbar A_2\le \underbar A_2^2$, if $\inf_{ \bar\Omega}v_0(x)\ge \underbar A_2$, then $\inf_{ \bar\Omega} v(x,T+t_0;t_0)\ge \underbar A_2$.
\end{itemize}
\item[(2)] { Assume (H5). Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that Lemma \ref{persistence-lm1}(2) and Lemma \ref{persistence-lm2}(2) hold with $\epsilon=\epsilon_0$ and $\delta=\delta_0$.} There are $\underbar B_1^2>0$ and $\underbar B_2^2>0$ such that for any $t_0\in\RR$ and { $u_0, v_0 \in C^+(\bar{\Omega})$ with} $0< u_0\le \bar B_1+\epsilon_0$ and $0< v_0\le \bar B_2+\epsilon_0$, { the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00} +\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] For any $\underbar B_1\le \underbar B_1^2$, if $\inf_{ \bar\Omega}u_0(x)\ge \underbar B_1$, then $\inf_{ \bar\Omega} u(x,T+t_0;t_0)\ge \underbar B_1$.
\item[(ii)] For any $\underbar B_2\le \underbar B_2^2$, if $\inf_{ \bar\Omega}v_0(x)\ge \underbar B_2$, then $\inf_{ \bar\Omega} v(x,T+t_0;t_0)\ge \underbar B_2$.
\end{itemize}
\end{itemize}
\end{lemma}

\begin{proof}
(1)(i) We prove it by suitably modifying the arguments of \cite[Lemma 5.3]{ITBWS16}. Assume that (1)(i) does not hold. Then there are $\underbar A_{1,n}\to 0$, { $u_n, v_n \in C^+(\bar{\Omega})$ with $0<u_n \le \bar A_1+\epsilon_0$ and $0<v_n \le \bar A_2+\epsilon_0$ }, $t_n\in\RR$, and $x_n\in\Omega$ such that
$$
u_n(x)\ge \underbar A_{1,n}\quad \forall\,\, x\in\bar\Omega \quad {\rm and}\quad u(x_n,T+t_n;t_n,u_n,v_n)<\underbar A_{1,n}.
$$
Let
$$
\Omega_n=\{x\in\Omega\,|\, u_n(x)\ge \frac{\delta_0}{2}\}.
$$
Without loss of generality, we may assume that $\lim_{n\to\infty}|\Omega_n|$ exists. Let
$$
m_0=\lim_{n\to\infty}|\Omega_n|.
$$
{Assume that $m_0=0$. Then there are $\tilde u_n\in C^0(\bar\Omega)$ such that
$$
\underbar A_{1,n}\le \tilde u_n(x)\le \frac{\delta_0}{2}\quad {\rm and}\quad \lim_{n\to\infty} \|u_n-\tilde u_n\|_{L^p(\Omega)}=0\quad \forall\,\, 1\le p<\infty.
$$
This implies that
$$
\lim_{n\to\infty} \|\phi^1_n(\cdot,t)\|_{L^p(\Omega)}+\lim_{n\to\infty} \|\phi^2_n(\cdot,t)\|_{L^p(\Omega)}=0
$$
uniformly in $t\in[t_n,t_n+T]$ for all $1\le p<\infty,$ where $\phi^1_n(\cdot,t)=u(\cdot,t;t_n,u_n,v_n)-u(\cdot,t;t_n,\tilde u_n,v_n)$ and $\phi^2_n(\cdot,t)=v(\cdot,t;t_n,u_n,v_n)-v(\cdot,t;t_n,\tilde u_n,v_n).$ Indeed, let
$$G^1_n(\cdot,t)=u(\cdot,t;t_n,u_n,v_n),\,\, G^2_n(\cdot,t)=v(\cdot,t;t_n,u_n,v_n),\,\, W_n(\cdot,t)=w(\cdot,t;t_n, u_n,v_n),
$$
$$
\tilde G^1_n(\cdot,t)=u(\cdot,t;t_n,\tilde u_n,v_n),\,\, \tilde G^2_n(\cdot,t)=v(\cdot,t;t_n,\tilde u_n,v_n),\,\, \tilde W_n(\cdot,t)=w(\cdot,t;t_n, \tilde u_n,v_n),
$$
and
$$\hat W_n(\cdot,t)=w(\cdot,t;t_n, u_n,v_n)-w(\cdot,t;t_n,\tilde u_n,v_n).$$
Then
\begin{align}
\label{prooflemm3.4-eq1}
\phi^1_n(\cdot,t)=& e^{-A(t-t_n)}\big(u_n-\tilde u_n\big)-\chi_1\int_{t_n}^t e^{-A(t-s)}\nabla \cdot \big[\phi^1_n(\cdot,s) \nabla W_n(\cdot,s)+\tilde G^1_n(\cdot,s) \nabla \hat W_n(\cdot,s) \big]ds\nonumber\\
&+\int_{t_n}^t e^{-A(t-s)}\phi^1_n(\cdot,s)\Big(1+ a_0(s,\cdot)-a_1(s,\cdot)(G^1_n(\cdot,s)+\tilde G^1_n(\cdot,s))-a_2(s,\cdot)G^2_n(\cdot,s) \Big)ds\nonumber\\
&-\int_{t_n}^t e^{-A(t-s)}a_2(s,\cdot)\big( \tilde G^1_n(\cdot,s)\big)\phi^2_n(\cdot,s)ds,
\end{align}
and
\begin{align}
\label{prooflemm3.4-eq2}
\phi^2_n(\cdot,t)=& -\chi_2\int_{t_n}^t e^{-A(t-s)}\nabla\cdot \big[\phi^2_n(\cdot,s) \nabla W_n(\cdot,s)+\tilde G^2_n(\cdot,s) \nabla \hat W_n(\cdot,s) \big]ds\nonumber\\
&+\int_{t_n}^t e^{-A(t-s)}\phi^2_n(\cdot,s)\Big(1+ b_0(s,\cdot)-b_2(s,\cdot)(G^2_n(\cdot,s)+\tilde G^2_n(\cdot,s))-b_1(s,\cdot)G^1_n(\cdot,s) \Big)ds\nonumber\\
&-\int_{t_n}^t e^{-A(t-s)}b_1(s,\cdot)\big( \tilde G^2_n(\cdot,s)\big)\phi^1_n(\cdot,s)ds,
\end{align}
where $A=-\Delta +I$ with $D(A)=\Big\{ u \in W^{2,p}(\Omega) \, | \, \frac{\p u}{\p n}=0 \quad \text{on } \, \p \Omega \Big\} $ (it is known that $A$ is a sectorial operator in { $X=L^p(\Omega)$}). Now, fix $1<p <\infty.$ By regularity and a priori estimates for elliptic equations, \cite[Theorem 1.4.3]{DH77}, \cite[Lemma 2.2]{ITBWS16}, \eqref{prooflemm3.4-eq1}, and \eqref{prooflemm3.4-eq2}, for any $\epsilon \in (0, \frac{1}{2}),$ there is $C=C(\epsilon)>0$ such that
\vspace{-0.1in}\begin{align}
\label{prooflemm3.4-eq3}
&\|\phi^1_n(\cdot,t)\|_{L^p(\Omega)}\nonumber\\
&\leq \|u_n-\tilde u_n\|_{L^p(\Omega)}+ C\chi_1\max_{t_n\leq s\leq t_n+T}\|\nabla W_n(\cdot,s)\|_{ \infty}\int_{t_n}^t(t-s)^{-\epsilon-\frac{1}{2}}\|\phi^1_n(\cdot,s)\|_{L^p(\Omega)}ds\nonumber\\
&\,\, +C\chi_1\max_{t_n\leq s\leq t_n+T}\|\hat W_n(\cdot,s)\|_{ \infty}\int_{t_n}^t(t-s)^{-\epsilon-\frac{1}{2}}(\|\phi^1_n(\cdot,s)\|_{L^p(\Omega)}+\|\phi^2_n(\cdot,s)\|_{L^p(\Omega)})ds\nonumber\\
&\,\, +C\int_{t_n}^t \{1+ a_{0,\sup}+a_{1,\sup}[\max_{t_n\leq s\leq t_n+T}(\| G^1_n(\cdot,s)\|_{ \infty}+\|\tilde G^1_n(\cdot,s)\|_{\infty})]\}\|\phi^1_n(\cdot,s)\|_{L^p(\Omega)}ds\nonumber \\
&\,\, +Ca_{2,\sup}\max_{t_n\leq s\leq t_n+T}\| G^2_n(\cdot,s)\|_{ \infty}\int_{t_n}^t \|\phi^1_n(\cdot,s)\|_{L^p(\Omega)}ds\nonumber\\
&\,\, +Ca_{2,\sup}\max_{t_n\leq s\leq t_n+T}\|\tilde G^1_n(\cdot,s)\|_{ \infty}\int_{t_n}^t \|\phi^2_n(\cdot,s)\|_{L^p(\Omega)}ds
\end{align}
and
\vspace{-0.1in}\begin{align}
\label{prooflemm3.4-eq4}
&\|\phi^2_n(\cdot,t)\|_{L^p(\Omega)}\nonumber\\
&\le C\chi_2\max_{t_n\leq s\leq t_n+T}\|\nabla W_n(\cdot,s)\|_{\infty}\int_{t_n}^t(t-s)^{-\epsilon-\frac{1}{2}}\|\phi^2_n(\cdot,s)\|_{L^p(\Omega)}ds\nonumber\\
&\,\, +C\chi_2\max_{t_n\leq s\leq t_n+T}\|\hat W_n(\cdot,s)\|_{ \infty}\int_{t_n}^t(t-s)^{-\epsilon-\frac{1}{2}}(\|\phi^1_n(\cdot,s)\|_{L^p(\Omega)}+\|\phi^2_n(\cdot,s)\|_{L^p(\Omega)})ds\nonumber\\
&\,\, +C\int_{t_n}^t \{1+ b_{0,\sup}+b_{2,\sup}[\max_{t_n\leq s\leq t_n+T}(\| G^2_n(\cdot,s)\|_{\infty}+\|\tilde G^2_n(\cdot,s)\|_{\infty})]\}\|\phi^2_n(\cdot,s)\|_{L^p(\Omega)}ds\nonumber \\
&\,\, +C b_{1,\sup}\max_{t_n\leq s\leq t_n+T}\| G^1_n(\cdot,s)\|_{ \infty}\int_{t_n}^t\|\phi^2_n(\cdot,s)\|_{L^p(\Omega)}ds\nonumber\\
&\,\, +Cb_{1,\sup}\max_{t_n\leq s\leq t_n+T}\|\tilde G^2_n(\cdot,s)\|_{ \infty}\int_{t_n}^t\|\phi^1_n(\cdot,s)\|_{L^p(\Omega)}ds.
\end{align}
Therefore there exists a positive constant $C_0$ independent of $n$ such that
\begin{align}
\label{prooflemm3.4-eq5}
&\|\phi^1_n(\cdot,t+t_n)\|_{L^p(\Omega)}+\|\phi^2_n(\cdot,t+t_n)\|_{L^p(\Omega)}\nonumber\\
&\leq \|u_n-\tilde u_n\|_{L^p(\Omega)}+ C_0\int_{0}^{t}(t-s)^{-\epsilon-\frac{1}{2}}(\|\phi^1_n(\cdot,s+t_n)\|_{L^p(\Omega)}+\|\phi^2_n(\cdot,s+t_n)\|_{L^p(\Omega)})ds
\end{align}
for all $t\in [0,T]$. By \eqref{prooflemm3.4-eq5} and the generalized Gronwall's inequality (see \cite[page 6]{DH77}), we get
$$
\lim_{n\to\infty} (\|\phi^1_n(\cdot,t)\|_{L^p(\Omega)}+\|\phi^2_n(\cdot,t)\|_{L^p(\Omega)})=0
$$
uniformly in $t\in[t_n,t_n+T]$ for all $1\leq p<\infty.$ This implies that
$$
\lim_{n\to\infty}\|w(\cdot,t;t_n,u_n,v_n)-w(\cdot,t;t_n,\tilde u_n,v_n)\|_{C^1(\bar \Omega)}=0
$$
uniformly in $t\in[t_n,t_n+T]$. Note that $v(x,t;t_n,\tilde u_n,v_n)\le \bar A_2+\epsilon_0$ for $t\in[t_n,t_n+T]$ and, by Lemma \ref{persistence-lm1}(1), $u(x,t;t_n,\tilde u_n,v_n)\le \epsilon_0$ for $t\in [t_n,t_n+T]$. Hence
$$
w(x,t;t_n,\tilde u_n,v_n)\le \frac{k}{\lambda}\epsilon_0+\frac{l}{\lambda}(\bar{A_2}+\epsilon_0)
$$
for all $t\in [t_n,t_n+T]$ and $x\in\Omega$. It then follows that for any $\epsilon>0$,
$$
w(x,t;t_n,u_n,v_n)\le (\frac{k}{\lambda}+\epsilon)\epsilon_0+\frac{l}{\lambda}(\bar{A_2}+\epsilon_0)
$$
for all $t\in [t_n,t_n+T]$, $x\in\Omega$, and $n\gg 1$. Then by the arguments of Lemma \ref{persistence-lm2}, $\inf_{\bar\Omega} u(\cdot,t_n+T;t_n,u_n,v_n)\ge \underbar A_{1,n}$, which is a contradiction. Therefore, $m_0\not =0$.

Since $m_0\not =0$, by the comparison principle for parabolic equations, without loss of generality, we may assume that
$$
\liminf_{n\to\infty}\|e^{-At}u_n\|_{\infty}>0\quad \forall\,\, t\in [0,T].
$$
This implies that there are $T_0\in (0,T)$ and $\delta_\infty>0$ such that
$$\sup_{x\in\bar\Omega} u(x,t_n+T_0;t_n,u_n,v_n)\ge \delta_\infty$$
for all $n\gg 1$. By a priori estimates for parabolic equations, without loss of generality, we may assume that
$$
u(\cdot,t_n+T_0;t_n,u_n,v_n)\to u_0^*,\quad v(\cdot,t_n+T_0;t_n,u_n,v_n)\to v_0^*
$$
and
$$
u(\cdot,t_n+T;t_n,u_n,v_n)\to u^*,\quad v(\cdot,t_n+T;t_n,u_n,v_n)\to v^*
$$
as $n\to\infty$. Without loss of generality, we may also assume that
$$
a_i(t+t_n,x)\to a_i^*(t,x),\quad b_i(t+t_n,x)\to b_i^*(t,x)
$$
as $n\to\infty$ locally uniformly in $(t,x)\in\RR\times\bar\Omega$.
Then we have
$$
u^*(x)=u^*(x,T;T_0,u_0^*,v_0^*),\quad v^*(x)=v^*(x,T;T_0,u_0^*,v_0^*)
$$
and
$$
\inf_{\bar\Omega} u^*(x)=0,\quad \inf_{ \bar\Omega }v^*(x)\ge 0,
$$
where $(u^*(x,t;T_0,u_0^*,v_0^*), v^*(x,t;T_0,u_0^*,v_0^*),w^*(x,t;T_0,u_0^*,v_0^*))$ is the solution of \eqref{u-v-w-eq00} with $a_i(t,x)$ and $b_i(t,x)$ being replaced by $a_i^*(t,x)$ and $b_i^*(t,x)$, and $(u^*(x,T_0;T_0,u_0^*,v_0^*), v^*(x,T_0;T_0,u_0^*,v_0^*)) =(u_0^*(x),v_0^*(x))$. By the comparison principle, we must have $u_0^*\equiv 0$. But $\sup_{\bar\Omega} u_0^*\ge \delta_\infty,$ which is a contradiction. }

(1)(ii) It can be proved by arguments similar to those in (1)(i).

(2) It follows by arguments similar to those in (1).
\end{proof}

Let
\begin{equation} \label{I-underbar-eq}
\underbar A_1=\min\{\underbar A_1^1,\underbar A_1^2\},\quad \underbar A_2=\min\{\underbar A_2^1,\underbar A_2^2\}
\end{equation}
and
\begin{equation} \label{A-underbar-eq}
\underbar B_1=\min\{\underbar B_1^1,\underbar B_1^2\},\quad \underbar B_2=\min\{\underbar B_2^1,\underbar B_2^2\}.
\end{equation}

\medskip

{ Note that the constants $\underbar A_1$, $\underbar A_2$, $\underbar B_1$ and $\underbar B_2$ depend on $T$ and $\epsilon_0$. }

\begin{lemma} \label{persistence-lm5}
\begin{itemize}
\item[(1)] { Assume (H4). Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that Lemma \ref{persistence-lm1}(1) and Lemma \ref{persistence-lm2}(1) hold with $\epsilon=\epsilon_0$ and $\delta=\delta_0$.} For any $t_0\in\RR$ and { $u_0, v_0 \in C^+(\bar{\Omega})$ with} $0< u_0\le {\bar A_1+\epsilon_0}$ and $0< v_0\le {\bar A_2+\epsilon_0}$, {the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00} +\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] If $\inf_{ \bar\Omega}u_0(x)\ge \underbar A_1$, then
\begin{equation} \label{lower-bound-eq1}
\underbar A_1\le u(x,t+t_0;t_0)\le \bar A_1+\epsilon_0\quad \forall\,\, t\ge T,\,\,\, x\in\bar\Omega.
\end{equation}
\item[(ii)] If $\inf_{ \bar\Omega}v_0(x)\ge \underbar A_2$, then
\begin{equation} \label{lower-bound-eq2}
\underbar A_2\le v(x,t+t_0;t_0)\le \bar A_2+\epsilon_0\quad \forall\,\, t\ge T,\,\,\, x\in\bar\Omega.
\end{equation}
\end{itemize}
\item[(2)] {Assume (H5). Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that Lemma \ref{persistence-lm1}(2) and Lemma \ref{persistence-lm2}(2) hold with $\epsilon=\epsilon_0$ and $\delta=\delta_0$.} For any $t_0\in\RR$ and {$u_0, v_0 \in C^+(\bar{\Omega})$ with} $0< u_0\le \bar B_1+\epsilon_0$ and $0< v_0\le \bar B_2+\epsilon_0$, { the solution $(u(x,t;t_0),v(x,t;t_0),w(x,t;t_0))$ of $\eqref{u-v-w-eq00} +\eqref{ic}$ satisfies the following.}
\begin{itemize}
\item[(i)] If $\inf_{\bar\Omega}u_0(x)\ge \underbar B_1$, then
\begin{equation} \label{lower-bound-eq1-2}
\underbar B_1\le u(x,t+t_0;t_0)\le \bar B_1+\epsilon_0\quad \forall\,\, t\ge T,\,\,\, x\in\bar\Omega.
\end{equation}
\item[(ii)] If $\inf_{\bar\Omega}v_0(x)\ge \underbar B_2$, then
\begin{equation} \label{lower-bound-eq2-2}
\underbar B_2\le v(x,t+t_0;t_0)\le \bar B_2+\epsilon_0\quad \forall\,\, t\ge T,\,\,\, x\in\bar\Omega.
\end{equation}
\end{itemize}
\end{itemize}
\end{lemma}

\begin{proof}
(1)(i) First of all, by Lemma \ref{persistence-lm4}(1), we have
$$
\underbar A_1\le u(x,T+t_0;t_0)\le \bar A_1+\epsilon_0\quad \forall\,\, x\in\bar\Omega.
$$
Note that we have
$$
{\rm either}\,\,\, \sup_{\bar \Omega} u(x,T+t_0;t_0)> \delta_0\,\,\, {\rm or}\,\,\, \sup_{\bar \Omega} u(x,T+t_0;t_0) \le \delta_0.
$$
In the former case, if $\sup_{\bar \Omega} u(x,t+T+t_0;t_0)> \delta_0$ for all $0\le t\le T$, by Lemma \ref{persistence-lm3}, \eqref{lower-bound-eq1} holds for all $T\le t\le 2T$. If there is $t^*\in (T,2T)$ such that $\sup_{\bar \Omega} u(x,t+t_0;t_0)> \delta_0$ for $T\le t\le t^*$ and $\sup_{\bar \Omega} u(x,t^*+t_0;t_0)= \delta_0$, then by Lemma \ref{persistence-lm3}, \eqref{lower-bound-eq1} holds for all $T\le t\le t^*$, which together with Lemma \ref{persistence-lm2} implies that \eqref{lower-bound-eq1} also holds for all $t^*\le t\le 2T$. In the latter case, by Lemma \ref{persistence-lm2}, \eqref{lower-bound-eq1} also holds for all $T\le t\le 2T$. Therefore, in any case, \eqref{lower-bound-eq1} holds for all $T\le t\le 2T$. Repeating the above process, we conclude that \eqref{lower-bound-eq1} holds for all $t\ge T$.

(1)(ii) It can be proved by arguments similar to those in (1)(i).

(2) It follows from arguments similar to those in (1).
\end{proof}

We now prove Theorem \ref{thm-entire-001}.

\begin{proof}[Proof of Theorem \ref{thm-entire-001}]
(1) Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that Lemma \ref{persistence-lm1}(1) and Lemma \ref{persistence-lm2}(1) hold with $\epsilon=\epsilon_0$ and $\delta=\delta_0$. Let $\underbar A_1$, $\bar A_1$, $\underbar A_2$, and $\bar A_2$ be as in Lemma \ref{persistence-lm5}(1). By the assumption that $u_0\not \equiv 0$, $v_0\not\equiv 0$, and the comparison principle for parabolic equations, without loss of generality, we may assume that $\inf_{\bar\Omega}u_0(x)>0$ and $\inf_{\bar\Omega} v_0(x)>0$.

First, by Theorem \ref{thm-global-000}, there is $T_1=T_1(u_0,v_0,\epsilon_0)$ such that
$$
u(x,t+t_0;t_0)\le \bar A_1+\epsilon_0,\,\, \, v(x,t+t_0;t_0)\le \bar A_2+\epsilon_0 \quad \forall \,\, t\ge T_1,\quad x\in\bar\Omega.
$$
Observe that if $\sup_{\bar\Omega} u(x,t+t_0;t_0)<\delta_0$, then
\begin{align*}
u_t&=d_1\Delta u-\chi_1 \nabla \cdot (u\nabla w)+u\Big(a_0(t,x)-a_1(t,x)u-a_2(t,x)v\Big)\\
&=d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+u\Big(a_0(t,x)-(a_1(t,x)-\frac{\chi_1 k}{d_3})u-(a_2(t,x)-\frac{\chi_1 l}{d_3})v-\frac{\chi_1 \lambda}{d_3}w\Big)\\
&\ge d_1\Delta u-\chi_1 \nabla u\cdot\nabla w\\
&\,\, +u\Big(a_0(t,x)-(a_1(t,x)-\frac{\chi_1 k}{d_3})u-(a_2(t,x)-\frac{\chi_1 l}{d_3})(\bar A_2+\epsilon_0)-\frac{\chi_1 \lambda}{d_3}(\frac{k}{\lambda} \delta_0+\frac{l}{\lambda} (\bar A_2+\epsilon_0))\Big) \\
&\ge d_1\Delta u-\chi_1 \nabla u\cdot\nabla w+u\Big(a_0(t,x)-a_2(t,x)(\bar A_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0-(a_1(t,x)-\frac{\chi_1 k}{d_3})u\Big)
\end{align*}
(in the last inequality we used $\delta_0\le \epsilon_0$). Let $\tilde u(t;\tilde u_0)$ be the solution of
$$
\tilde u_t=\tilde u\Big(a_{0,\inf}-a_{2,\sup}(\bar A_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0-(a_{1,\sup}-\frac{\chi_1 k}{d_3})\tilde u\Big)
$$
with $\tilde u(0;\tilde u_0)=\tilde u_0\in (0,\delta_0)$. Then $\tilde u(t;\tilde u_0)$ is monotonically increasing in $t\ge 0$ and
\begin{equation} \label{new-add-eq1}
\lim_{t\to\infty} \tilde u(t;\tilde u_0)=\frac{a_{0,\inf}-a_{2,\sup}(\bar A_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0}{(a_{1,\sup}-\frac{\chi_1 k}{d_3})} >\delta_0.
\end{equation}
Observe also that
\begin{equation} \label{new-add-eq2}
\inf_{t_0\in\RR} \inf_{x \in \bar\Omega}u(x,T+t_0;t_0)>0.
\end{equation}
Indeed, we have either $\sup_{\bar\Omega} u_0 <\delta_0 $ or $\sup_{\bar\Omega} u_0 \geq \delta_0 $. If $\sup_{\bar\Omega} u_0 <\delta_0 $, we have by Lemma \ref{persistence-lm2}(1)(i) that $ \inf_{\bar\Omega}u(x,T+t_0;t_0)\geq \inf_{\bar\Omega} u_0 >0$ for all $t_0\in \RR$, and then \eqref{new-add-eq2} follows. If $\sup_{\bar\Omega} u_0\ge \delta_0$ but \eqref{new-add-eq2} does not hold, then there are $t_{0n}\in\RR$ and $x_n\in\bar\Omega$ such that
$$
\lim_{n\to\infty} u(x_n,T+t_{0n};t_{0n},u_0,v_0)=0.
$$
Let $a_i^n(t,x)=a_i(t+t_{0n},x)$ and $b_i^n(t,x)=b_i(t+t_{0n},x)$ for $i=0,1,2$. Then
\begin{align*}
&(u(x,t+t_{0n};t_{0n},u_0,v_0),v(x,t+t_{0n};t_{0n},u_0,v_0),w(x,t+t_{0n};t_{0n},u_0,v_0))\\
&=(u^n(x,t;u_0,v_0),v^n(x,t;u_0,v_0),w^n(x,t;u_0,v_0))
\end{align*}
for $t\ge 0$, where $(u^n(x,t;u_0,v_0),v^n(x,t;u_0,v_0),w^n(x,t;u_0,v_0))$ is the solution of \eqref{u-v-w-eq00} with $a_i$ and $b_i$ ($i=0,1,2$) being replaced by $a_i^n$ and $b_i^n$ ($i=0,1,2$) and $(u^n(x,0;u_0,v_0),v^n(x,0;u_0,v_0))=(u_0(x),v_0(x))$. Without loss of generality, we may assume that
$$
\lim_{n\to\infty}a_i^n(t,x)=a_i^\infty(t,x),\quad \lim_{n\to\infty} b_i^n(t,x)=b_i^\infty(t,x)
$$
uniformly in $x\in\bar\Omega$ and $t$ in bounded sets of $\RR$, and
$$
\lim_{n\to\infty} x_n=x_\infty.
$$
Then
\begin{align*}
&\lim_{n\to\infty} (u^n(x,t;u_0,v_0),v^n(x,t;u_0,v_0),w^n(x,t;u_0,v_0))\\
&=(u^\infty(x,t;u_0,v_0),v^\infty(x,t;u_0,v_0),w^\infty(x,t;u_0,v_0))
\end{align*}
uniformly in $x\in\bar\Omega$ and $t$ in bounded sets of $[0,\infty)$, where $(u^\infty(x,t;u_0,v_0),v^\infty(x,t;u_0,v_0)$, $w^\infty(x,t;u_0,v_0))$ is the solution of \eqref{u-v-w-eq00} with $a_i$ and $b_i$ ($i=0,1,2$) being replaced by $a_i^\infty$ and $b_i^\infty$ ($i=0,1,2$) and $(u^\infty(x,0;u_0,v_0),v^\infty(x,0;u_0,v_0))=(u_0(x),v_0(x))$. It then follows that
$$
\inf_{\bar\Omega} u_0(x)>0\quad {\rm and}\quad u^\infty(x_\infty,T;u_0,v_0)=0,
$$
which is a contradiction. Hence if $\sup_{\bar\Omega} u_0\ge \delta_0$, \eqref{new-add-eq2} also holds.
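{ As an illustrative aside (not needed for the proof), the explicit logistic formula recalled after \eqref{aux-eq1} quantifies the waiting time that appears below. Writing, for this remark only, $\alpha=a_{0,\inf}-a_{2,\sup}(\bar A_2+\epsilon_0)-\frac{\chi_1 k}{d_3}\epsilon_0$, $\beta=a_{1,\sup}-\frac{\chi_1 k}{d_3}$, and $\eta=\inf_{t_0\in\RR} \inf_{x \in \bar\Omega}u(x,T+t_0;t_0)$, which is positive by \eqref{new-add-eq2}, and assuming $\eta<\delta_0$ (otherwise no waiting is needed) and that the differential inequality above remains in force while $\sup_{\bar\Omega}u<\delta_0$, the comparison function $\tilde u(t;\eta)$ reaches $\delta_0$ by time
$$
t_*=\frac{1}{\alpha}\log\Big(\frac{\delta_0}{\eta}\cdot\frac{\alpha-\beta\eta}{\alpha-\beta\delta_0}\Big),
$$
so the time $T_2(u_0,v_0,\epsilon_0)$ introduced below may be thought of as being of this order.}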
{ Note that we have either $\sup _{\bar\Omega} u(x,T+t_0;t_0)\ge \delta_0$ or $\sup _{\bar\Omega} u(x,T+t_0;t_0)< \delta_0$.}
{ If $\sup_{\bar\Omega} u(x,T+t_0;t_0)<\delta_0$, by \eqref{new-add-eq1}, \eqref{new-add-eq2}, and the comparison principle for parabolic equations, there are $ T_2(u_0,v_0, \epsilon_0)\ge T$ and $T\le\tilde T_2(u_0,v_0,\epsilon_0)\le T_2(u_0,v_0,\epsilon_0)$ such that
$$
\sup_{\bar\Omega} u(x,\tilde T_2(u_0,v_0,\epsilon_0)+t_0;t_0)=\delta_0.
$$
Hence, in either case, there is $\tilde T_2(u_0,v_0,\epsilon_0)\in [T, T_2(u_0,v_0,\epsilon_0)]$ such that
\begin{equation} \label{new-add-eq3}
\sup_{\bar\Omega} u(x, \tilde T_2(u_0,v_0,\epsilon_0)+t_0;t_0) \geq\delta_0.
\end{equation}}
{By \eqref{new-add-eq3} and Lemma \ref{persistence-lm3},
$$
\inf_{\bar\Omega}u(x,\tilde T_2(u_0,v_0,\epsilon_0)+t_0;t_0)\ge \underbar A_1.
$$}
Then by Lemma \ref{persistence-lm5}(1),
\begin{equation} \label{thm2-proof-eq1}
\underbar A_1\le u(x,t+t_0;t_0)\le \bar A_1+\epsilon_0\quad \forall \,\, t\ge \max\{T_1(u_0,v_0,\epsilon_0),{ T+T_2(u_0,v_0,\epsilon_0)}\}.
\end{equation}
Similarly, we can prove that there are $T_1'(u_0,v_0,\epsilon_0)>0$ and $T_2'(u_0,v_0,\epsilon_0)\ge T$ such that
\begin{equation} \label{thm-proof-eq2}
\underbar A_2\le v(x,t+t_0;t_0)\le \bar A_2+\epsilon_0\quad \forall \,\, t\ge \max\{T_1'(u_0,v_0,\epsilon_0),T+ T_2'(u_0,v_0,\epsilon_0)\}.
\end{equation}
By Theorem \ref{thm-global-000}, \eqref{thm2-proof-eq1}, and \eqref{thm-proof-eq2}, for any $\epsilon>0$, there is $t_{\epsilon,u_0,v_0}$ such that \eqref{attracting-set-eq00} holds.

(2) It follows from arguments similar to those in (1).
\end{proof}

\begin{corollary} \label{u-w-cor}
Consider \eqref{u-w-eq00} and assume \eqref{u-w-cond}. There is $\underbar A_1$ such that for any $\epsilon>0,$ $t_0\in\RR,$ $u_0\in C^0(\bar \Omega)$ with $u_0\ge 0$ and $u_0\not \equiv 0$, there exists $t_{\epsilon,u_0}$ such that
\begin{equation*}
\underbar A_1 \le u(x,t;t_0,u_0) \le \bar{A_1}+\epsilon
\end{equation*}
for all $x\in\bar\Omega$ and $t\ge t_0+t_{\epsilon,u_0}$, where $(u(x,t;t_0,u_0),w(x,t;t_0,u_0))$ is the global solution of \eqref{u-w-eq00} with $u(x,t_0;t_0,u_0)=u_0(x)$.
\end{corollary}

\begin{proof}{ We outline the proof in the following six steps.

\smallskip

{\bf Step 1.} Fix $T>0.$ By the arguments of Lemma \ref{persistence-lm1} (1), we have that, for any $\epsilon>0$, there is $\delta=\delta(\epsilon,T)>0$ such that for any $u_0\in C^+(\bar\Omega)$, if $0\le u_0\le \delta$, then $u(x,t;t_0,u_0)\le \epsilon$ for $t\in [t_0,t_0+T]$ and $x\in\bar\Omega$.

\smallskip

{\bf Step 2.} By the arguments of Lemma \ref{persistence-lm2} (1), the following holds. Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0,T)$ be such that {\bf Step 1} holds with $\epsilon=\epsilon_0$ and $\delta=\delta_0$, and
$$
a_{0,\inf}>\frac{\chi_1 k}{d_3}\epsilon_0\quad {\rm and}\quad \delta_0<\frac{a_{0,\inf}-\frac{\chi_1 k}{d_3}\epsilon_0}{a_{1,\sup}-\frac{\chi_1 k}{d_3}}.
$$
For given $u_0 \in C^+(\bar\Omega)$, if $0<u_0<\delta_0$, then $u(x,t+t_0;t_0,u_0)>\inf_{\bar\Omega} u_0(x)\quad \forall \,\, 0<t\le T.$

\smallskip

{\bf Step 3.} By the arguments of Lemma \ref{persistence-lm3} (1), the following holds. Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0,T)$ be such that {\bf Step 2} holds with $\epsilon=\epsilon_0$ and $\delta=\delta_0$.
There is $\underbar A_1^1>0$ such that for any $t_0\in\RR$ and $u_0 \in C^+(\bar\Omega)$, for any $t\ge T$, if $\sup_{\bar \Omega} u(x,t+t_0;t_0,u_0)\ge \delta_0$, then $\inf_{\bar\Omega} u(x,t+t_0;t_0,u_0)\ge \underbar A_1^1$.

\smallskip

{\bf Step 4.} By the arguments of Lemma \ref{persistence-lm4} (1), the following holds. Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0)$ be such that {\bf Steps 1 and 2} hold with $\epsilon=\epsilon_0$ and $\delta=\delta_0$. There is $\underbar A_1^2>0$ such that for any $t_0\in\RR$ and $u_0 \in C^+(\bar\Omega)$ with $u_0>0$, for any $\underbar A_1\le \underbar A_1^2$, if $\inf_{\bar\Omega} u_0(x)\ge \underbar A_1$, then $\inf_{\bar\Omega} u(x,T+t_0;t_0,u_0)\ge \underbar A_1$.

\smallskip

{\bf Step 5.} By the arguments of Lemma \ref{persistence-lm5} (1), the following holds. Let $\epsilon_0$ and $\delta_0=\delta_0(\epsilon_0,T)$ be such that {\bf Steps 1 and 2} hold with $\epsilon=\epsilon_0$ and $\delta=\delta_0$. For any $u_0 \in C^+(\bar\Omega)$ with $u_0>0$, if $\inf_{\bar\Omega}u_0(x)\ge \underbar A_1$, then
\begin{equation*} \label{u-w-lower-bound-eq1}
\underbar A_1\le u(x,t+t_0;t_0,u_0)\le \bar A_1+\epsilon_0\quad \forall\,\, t\ge T,\,\,\, x\in\bar\Omega.
\end{equation*}

\smallskip

{\bf Step 6.} Complete the proof by combining {\bf Step 5} and the arguments of equation \eqref{thm2-proof-eq1} in the proof of Theorem \ref{thm-entire-001}.}
\end{proof}

\section{Coexistence} \label{Coexistenec}

In this section, we study the existence of coexistence states in \eqref{u-v-w-eq00} and prove Theorem \ref{thm-entire-002}. We first prove a lemma.

\begin{lemma} \label{persistence-lm6}
Consider
\begin{equation}
\begin{cases} \label{u-v-ode}
u_t=u\big(a_0(t)-a_1(t)u-a_2(t)v\big)\cr
v_t=v\big(b_0(t)-b_1(t)u-b_2(t)v\big).
\end{cases}
\end{equation}
Assume \eqref{stability-cond-1-eq1} is satisfied. Then there is a positive entire solution $(u^{**}(t),v^{**}(t))$ of \eqref{u-v-ode}. Moreover, for any $u_0,v_0>0$ and $t_0\in\RR$,
$$
(u(t;t_0,u_0,v_0),v(t;t_0,u_0,v_0))-(u^{**}(t),v^{**}(t))\to 0
$$
as $t\to\infty$, where $(u(t;t_0,u_0,v_0),v(t;t_0,u_0,v_0))$ is the solution of \eqref{u-v-ode} with $(u(t_0;t_0,u_0,v_0),v(t_0;t_0,u_0,v_0))=(u_0,v_0)$. In addition, if $a_i(t)$ and $b_i(t)$ are almost periodic, then so is $(u^{**}(t),v^{**}(t))$.
\end{lemma}

\begin{proof}
First, let
$$s_1=\frac{b_{2,\inf}a_{0,\inf}-a_{2,\sup}b_{0,\sup}}{b_{2,\inf}a_{1,\sup}-a_{2,\sup}b_{1,\inf}},\quad r_1=\frac{b_{2,\sup}a_{0,\sup}-a_{2,\inf}b_{0,\inf}}{b_{2,\sup}a_{1,\inf}-a_{2,\inf}b_{1,\sup}},$$
and
$$
r_2=\frac{a_{1,\inf}b_{0,\inf}-b_{1,\sup}a_{0,\sup}}{a_{1,\inf}b_{2,\sup}-b_{1,\sup}a_{2,\inf}},\quad s_2=\frac{a_{1,\sup}b_{0,\sup}-b_{1,\inf}a_{0,\inf}}{a_{1,\sup}b_{2,\inf}-b_{1,\inf}a_{2,\sup}}.
$$
Then
$$0<s_1\leq r_1 \quad \text{and} \quad 0<r_2\leq s_2.$$
Next, for given $t_0 \in \mathbb{R}$ and $u_0,v_0\in\RR$, if $0<u_0\leq r_1$ and ${ v_0\geq r_2}$, by \cite[Lemma 3.1]{Ahm}, we have
\begin{equation} \label{aux-existence-eq1}
0<u(t;t_0,u_0,v_0)\leq r_1\quad \text{and}\quad v(t;t_0,u_0,v_0)\geq { r_2}\quad \forall\,\, t\ge t_0.
\end{equation}
If $u_0\geq s_1$ and $0<v_0\leq { s_2}$, by \cite[Lemma 3.2]{Ahm},
\begin{equation} \label{aux-existence-eq2}
u(t;t_0,u_0,v_0)\geq s_1\quad \text{and} \quad 0<v(t;t_0,u_0,v_0)\leq { s_2}\quad \forall\,\, t\ge t_0.
\end{equation}
{ Next, by the pullback method, there exists a positive entire solution $(u(t),v(t))$ of \eqref{u-v-ode} which satisfies $s_1\leq u(t)\leq r_1$ and $r_2\leq v(t)\leq s_2$ for all $t \in \mathbb{R}.$ We omit the proof here because a similar argument will be given in the proof of Theorem \ref{thm-entire-002}.}

Finally, we prove the stability of positive entire solutions, and their almost periodicity when the coefficients are almost periodic. Let $(u^{**}(t),v^{**}(t))$ be a positive entire solution of \eqref{u-v-ode} and let $u_0,v_0>0$ and $t_0\in\RR.$ It follows from \cite[Theorem 1]{Ahm} that
$$
(u(t;t_0,u_0,v_0),v(t;t_0,u_0,v_0))-(u^{**}(t),v^{**}(t))\to 0 \quad \text{as} \,\, t\to \infty.
$$
By \cite[Theorem C]{HeSh}, when $a_i(t)$ and $b_i(t)$ ($i=0,1,2$) are almost periodic in $t$, positive entire solutions of \eqref{u-v-ode} are unique and almost periodic. The lemma thus follows.
\end{proof}

We now prove Theorem \ref{thm-entire-002}. Let $T>0$ be fixed and $\underbar A_i$, $\bar A_i$, $\underbar B_i$, and $\bar B_i$ ($i=1,2$) be as in the previous section.

\begin{proof}[Proof of Theorem \ref{thm-entire-002}]
(1) We first prove the existence of positive entire solutions.
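{ In outline, the argument is the standard pullback construction: we solve \eqref{u-v-w-eq00} forward from the initial times $t_n=-n$, use the uniform bounds of Theorem \ref{thm-global-000} and Lemma \ref{persistence-lm5} together with parabolic regularity to pass to the limit along a suitable subsequence $n_k\to\infty$, and obtain the entire solution as
$$
(u^{**}(\cdot,t),v^{**}(\cdot,t))=\lim_{k\to\infty}\big(u(\cdot,t;-n_k,u_0,v_0),\,v(\cdot,t;-n_k,u_0,v_0)\big),\qquad t\in\RR.
$$
The details are as follows.}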
{ Let $u_0, v_0 \in C^0(\bar \Omega)$ be such that $0<\underbar A_1\leq u_0(x)\leq \bar A_1 \, \text{and} \,0<\underbar A_2\leq v_0(x)\leq \bar A_2.$ By Theorem \ref{thm-global-000}(1) and Lemma \ref{persistence-lm5}(1),
\begin{equation} \label{proof-entire-eq0}
0<\underbar A_1\leq u(x,t+t_0;t_0,u_0,v_0)\leq \bar A_1 \quad \text{and} \quad 0<\underbar A_2\leq v(x,t+t_0;t_0,u_0,v_0)\leq \bar A_2
\end{equation}
for all $x \in \bar \Omega$, $t\geq T$, and $t_0 \in \mathbb{R}$. For $n \in \mathbb N$ with $n>T$, set $t_n=-n,$ $u_n=u(\cdot,0;t_n,u_0,v_0)$ and $v_n=v(\cdot,0;t_n,u_0,v_0).$ Then by parabolic regularity there exist a subsequence $\{n_k\}$ and $u^{**}_0, \, v^{**}_0 \in C^0(\bar{\Omega})$ such that
$$u_{n_k} \to u^{**}_0 \quad \text{and} \quad v_{n_k} \to v^{**}_0 \quad \text{in}\,\, C^0(\bar{\Omega}).$$
We have $u(\cdot,t;t_{n_k},u_0,v_0)=u(\cdot,t;0,u(\cdot,0;t_{n_k},u_0,v_0),v(\cdot,0;t_{n_k},u_0,v_0))$ and $v(\cdot,t;t_{n_k},u_0,v_0)=v(\cdot,t;0,u(\cdot,0;t_{n_k},u_0,v_0),v(\cdot,0;t_{n_k},u_0,v_0)).$ Thus for $t\geq 0$ we have
$$ (u(\cdot,t;t_{n_k},u_0,v_0),v(\cdot,t;t_{n_k},u_0,v_0)) \to (u(\cdot,t;0,u^{**}_0,v^{**}_0),v(\cdot,t;0,u^{**}_0,v^{**}_0)) \,\, \text{in} \,\, C^0(\bar{\Omega})\times C^0(\bar{\Omega}).$$
Moreover
$$
0<\underbar A_1\leq u(x,t;0,u^{**}_0,v^{**}_0)\leq \bar A_1 \quad \text{and} \quad 0<\underbar A_2\leq v(x,t;0,u^{**}_0,v^{**}_0)\leq \bar A_2\quad \forall\,\, x \in \Omega,\,\, t \geq 0.
$$
We now prove that $(u(\cdot,t;0,u^{**}_0,v^{**}_0),v(\cdot,t;0,u^{**}_0,v^{**}_0))$ has backward extension. In order to prove that, fix $m \in \mathbb{N}$ and define $u^m_n=u(\cdot,-m;t_n,u_0,v_0)$ and $v^m_n=v(\cdot,-m;t_n,u_0,v_0)$ for all $n>m +T.$ Then by parabolic regularity, without loss of generality, we may assume that there exist $u^{**}_m, \, v^{**}_m \in C^0(\bar{\Omega})$ such that
$$
u^m_{n_k} \to u^{**}_m \quad \text{and} \quad v^m_{n_k} \to v^{**}_m \quad \text{in}\,\, C^0(\bar{\Omega}).
$$
Furthermore we have $u(\cdot,t;t_{n_k},u_0,v_0)=u(\cdot,t;-m,u(\cdot,-m;t_{n_k},u_0,v_0),v(\cdot,-m;t_{n_k},u_0,v_0))$ and $v(\cdot,t;t_{n_k},u_0,v_0)=v(\cdot,t;-m,u(\cdot,-m;t_{n_k},u_0,v_0),v(\cdot,-m;t_{n_k},u_0,v_0)).$ Therefore we have
$$
(u(\cdot,t;t_{n_k},u_0,v_0),v(\cdot,t;t_{n_k},u_0,v_0)) \to (u(\cdot,t;-m,u^{**}_m,v^{**}_m),v(\cdot,t;-m,u^{**}_m,v^{**}_m)) \,\, \text{in} \,\, C^0(\bar{\Omega})\times C^0(\bar{\Omega})
$$
for all $t\geq -m$, which implies that $(u(\cdot,t;0,u^{**}_0,v^{**}_0),v(\cdot,t;0,u^{**}_0,v^{**}_0))$ has backward extension in the sense that
$$
(u(\cdot,t;0,u^{**}_0,v^{**}_0),v(\cdot,t;0,u^{**}_0,v^{**}_0))=(u(\cdot,t;-m,u^{**}_m,v^{**}_m),v(\cdot,t;-m,u^{**}_m,v^{**}_m))
$$
for all { $t> -m$ } and $m \in \mathbb{N}.$ Moreover
$$
0<\underbar A_1\leq u(x,t;-m,u^{**}_m,v^{**}_m)\leq \bar A_1 \quad \text{and} \quad 0<\underbar A_2\leq v(x,t;-m,u^{**}_m,v^{**}_m)\leq \bar A_2\,\,\, \forall\, x \in \Omega,\,\, t \geq -m.
$$
Set { $u^{**}(x,t)=u(x,t;0,u^{**}_0,v^{**}_0)$, $v^{**}(x,t)=v(x,t;0,u^{**}_0,v^{**}_0)$}, and $w^{**}=(-d_3\Delta+\lambda I)^{-1}(ku^{**}+lv^{**}).$ Then $(u^{**}(x,t),v^{**}(x,t),w^{**}(x,t))$ is a positive bounded entire solution of \eqref{u-v-w-eq00}. }

\medskip

(i) Assume that $a_i(t+T,x)=a_i(t,x)$ and $b_i(t+T,x)=b_i(t,x)$ for $i=0,1,2$. Set
\begin{equation} \label{entire-solut-eq1}
E(T)=\{(u_0,v_0) \in C^0(\bar{\Omega})\times C^0(\bar{\Omega}) \,|\, 0<\underbar A_1\leq u_0(x)\leq \bar A_1 \, \text{and} \,0<\underbar A_2\leq v_0(x)\leq \bar A_2\}.
\end{equation}
Note that $E(T)$ is a nonempty, closed, convex, and bounded subset of $C^0(\bar{\Omega})\times C^0(\bar{\Omega}).$ Define the map $ \mathcal{T}(T):E(T) \to C^0(\bar{\Omega})\times C^0(\bar{\Omega})$ by
$$\mathcal{T}(T)(u_0,v_0)=(u(\cdot,T;0,u_0,v_0),v(\cdot,T;0,u_0,v_0)).
$$
Note that $\mathcal{T}(T)$ is well defined, $\mathcal{T}(T) E(T) \subset E(T),$ and $\mathcal{T}(T)$ is continuous by continuity with respect to initial conditions. Moreover, by regularity and the Arzel\`a-Ascoli theorem, $\mathcal{T}(T)$ is completely continuous, and therefore by the Schauder fixed point theorem there exists $(u_T,v_T) \in E(T) $ such that $(u(\cdot,T;0,u_T,v_T),v(\cdot,T;0,u_T,v_T))=(u_T,v_T).$ Then $(u(\cdot,t;0,u_T,v_T),v(\cdot,t;0,u_T,v_T),w(\cdot,t;0,u_T,v_T))$ is a positive periodic solution of \eqref{u-v-w-eq00} with period $T.$

(ii) Assume that $a_i(t,x)\equiv a_i(x)$ and $b_i(t,x)\equiv b_i(x)$ $(i=0,1,2$). In this case, each $\tau>0$ is a period for $a_i$ and $b_i$. By (i), there exist $(u^\tau,v^\tau) \in E(\tau)$ such that $(u(\cdot,t;0,u^\tau,v^\tau),v(\cdot,t;0,u^\tau,v^\tau),w(\cdot,t;0,u^\tau,v^\tau))$ is a positive periodic solution of \eqref{u-v-w-eq00} with period $\tau$. Observe that $C^0(\bar\Omega)\subset L^p(\Omega)$ for any $1\le p<\infty$. Choose $p>1$ and $\alpha\in (1/2,1)$ such that $X^\alpha \hookrightarrow C^1(\bar\Omega)$, where $X^\alpha =D(A^\alpha)$ with the graph norm $\|u\|_\alpha=\|A^\alpha u\|_{L^p(\Omega)}$ and $A=I-\Delta$ with domain $D(A)=\{u\in W^{2,p}(\Omega)\,|\, \frac{\p u}{\p n}=0$ on $\p\Omega\}$. Note that there is $\tilde M>0$ such that for each $\tau>0$ and $(u_0,v_0) \in E(\tau),$ $\|u(\cdot,t;0,u_0,v_0)\|_{\alpha}+\|v(\cdot,t;0,u_0,v_0)\|_{\alpha} \leq \tilde M $ for each $1\leq t \leq 2.$ Let $\tau_n=\frac{1}{n}$; then there exists $(u_n,v_n) \in E(\frac{1}{n})$ such that $(u(\cdot,t;0,u_n,v_n),v(\cdot,t;0,u_n,v_n),w(\cdot,t;0,u_n,v_n))$ is periodic with period $\tau_n$ and
\begin{equation} \label{entire-eq5}
\|u_n\|_{\alpha}+\|v_n\|_{\alpha}=\|u(\cdot, N\tau_n;0, u_n,v_n)\|_{\alpha}+\|v(\cdot, N\tau_n;0, u_n,v_n)\|_{\alpha} \leq \tilde M,
\end{equation}
where $N$ is such that $1\leq N\tau _n \leq 2.$

{ We claim that there are $\delta_1>0$ and $\delta_2>0$ such that
\vspace{-0.05in}\begin{equation} \label{entire-eq6}
\|u_n\|_{\infty} \ge \delta_1 \quad \forall\,\, n\ge 1
\vspace{-0.05in}\end{equation}
and
\vspace{-0.05in}\begin{equation} \label{entire-eq6bis}
\|v_n\|_{\infty} \ge \delta_2 \quad \forall\,\, n\ge 1.
\vspace{-0.05in}\end{equation}
Since the proofs of \eqref{entire-eq6} and \eqref{entire-eq6bis} are similar, we only prove \eqref{entire-eq6}. Suppose by contradiction that \eqref{entire-eq6} does not hold. Then there exists a subsequence $\{n_k\}$ such that $\|u_{n_k}\|_{\infty} < \frac{1}{n_k}$ for every $k \ge 1$. Let $k_0$ be such that $\frac{1}{n_{k}}<\delta_0$ for all $k\ge k_0.$ By Lemma \ref{persistence-lm1} and the proof of Lemma \ref{persistence-lm2}, we get that $u(\cdot,t;0,u_{n_{k}},v_{n_{k}}) \ge u(t;\inf u_{n_k})$ for all $t>0$ and $k\ge k_0,$ where $u(t;\inf u_{n_k})$ is the solution of
$$
u_t= u\Big(a_{0,\inf}-a_{2,\sup}\bar A_2-\frac{\chi_1 k}{d_3}\epsilon_0-(a_{1,\sup}-\frac{\chi_1 k}{d_3}) u\Big)
$$
with $u(0;\inf u_{n_k})=\inf u_{n_k}$. Let $\delta_*=\frac{ a_{0,\inf}-a_{2,\sup}\bar A_2-\frac{\chi_1 k}{d_3}\epsilon_0}{2(a_{1,\sup}-\frac{\chi_1 k}{d_3})}$ and choose $k$ large enough such that $\frac{1}{n_k}<\delta_*$. There is $t_0>0$ (depending on $k$) such that $ u(t;\inf u_{n_k})>\delta_*$ for all $t\ge t_0$.
Then we have
\vspace{-0.05in}
$$
u_{n_k}(x)=u(x,m \tau_{n_k};0,u_{n_k},v_{n_k})\ge u(m \tau_{n_k};\inf u_{n_k})>\delta_*
\vspace{-0.05in}
$$
for all $m\in \NN$ satisfying $m\tau_{n_k}>t_0$. This is a contradiction. Therefore, \eqref{entire-eq6} holds.

By \eqref{entire-eq5} and the Arzel\`a-Ascoli theorem, there exist $\{n_k\}$ and $(u^{**},v^{**}) \in C^0(\bar{\Omega})\times C^0(\bar{\Omega})$ such that $(u_{n_k},v_{n_k})$ converges to $(u^{**},v^{**})$ in $C^0(\bar{\Omega})\times C^0(\bar{\Omega})$. By \eqref{entire-eq6} and \eqref{entire-eq6bis}, we have that $\|u^{**}(\cdot)\|_{\infty}\ge \delta_1 $ and $\|v^{**}(\cdot)\|_{\infty}\ge \delta_2.$}

We claim that $(u(\cdot,t;0,u^{**},v^{**}),v(\cdot,t;0,u^{**},v^{**}),w(\cdot,t;0,u^{**},v^{**}))$ is a steady state solution of \eqref{u-v-w-eq00}, that is,
\begin{equation} \label{entire-eq7}
u(\cdot,t;0,u^{**},v^{**})=u^{**}(\cdot)\quad \text{and}\quad v(\cdot,t;0,u^{**},v^{**})=v^{**}(\cdot)\quad \text{for all} \,\, t\ge 0.
\vspace{-0.05in}\end{equation}
In fact, let $\epsilon>0$ be fixed and let $t>0$. Note that
\vspace{-0.05in}
$$
[n_k t]\tau_{n_k}=\frac{[n_kt]}{n_k}\leq t \leq \frac{[n_kt]+1}{n_k}=([n_k t]+1)\tau_{n_k}.
$$
Then, we can choose $k$ large enough such that
\[|u(x,t;0,u^{**},v^{**})-u(x,t;0,u_{n_k},v_{n_k})|< \epsilon,\quad |u_{n_k}(x)-u^{**}(x)|< \epsilon,\quad |v_{n_k}(x)-v^{**}(x)|< \epsilon,\]
\[ |v(x,t;0,u^{**},v^{**})-v(x,t;0,u_{n_k},v_{n_k})|< \epsilon,\quad|v(x,\frac{[n_kt]}{n_k};0,u_{n_k},v_{n_k})-v(x,t;0,u_{n_k},v_{n_k})|< \epsilon,\]
\[ |u(x,\frac{[n_kt]}{n_k};0,u_{n_k},v_{n_k})-u(x,t;0,u_{n_k},v_{n_k})|< \epsilon\]
for all $x\in\bar\Omega$. We then have
\vspace{-0.05in}\begin{align*}
|u(x,t;0,u^{**},v^{**})-u^{**}|&\le |u(x,t;0,u^{**},v^{**})-u(x,t;0,u_{n_k},v_{n_k})| +|u_{n_k}(x)-u^{**}(x)|\\
&\quad +|u(x,t;0,u_{n_k},v_{n_k})-u(x,[n_k t]\tau_{n_k};0,u_{n_k},v_{n_k})|<3 \epsilon\quad \forall\,\, x\in\bar\Omega,
\end{align*}
and
\vspace{-0.05in}\begin{align*}
|v(x,t;0,u^{**},v^{**})-v^{**}|&\le |v(x,t;0,u^{**},v^{**})-v(x,t;0,u_{n_k},v_{n_k})| +|v_{n_k}(x)-v^{**}(x)|\\
&\quad +|v(x,t;0,u_{n_k},v_{n_k})-v(x,[n_k t]\tau_{n_k};0,u_{n_k},v_{n_k})|<3 \epsilon\quad \forall\,\, x\in\bar\Omega.
\end{align*}
Letting $\epsilon\to 0$, \eqref{entire-eq7} follows.

(iii) { Note that solutions of the following system,
$$
\begin{cases}
u_t=u(a_0(t)-a_1(t)u-a_2(t)v)\cr
v_t=v(b_0(t)-b_1(t)u-b_2(t)v)\cr
0=ku(t)+lv(t)-\lambda w(t)
\end{cases}
$$
correspond to spatially homogeneous solutions $(u(t),v(t),w(t))$ of \eqref{u-v-w-eq00}.} By (H4) and Remark \ref{rk-2}, \eqref{stability-cond-1-eq1} is satisfied. (iii) then follows { from} Lemma \ref{persistence-lm6}.

(2) It follows from arguments similar to those in (1).
\end{proof}

\section{Extinction of one of the species}

In this section, our aim is to find conditions on the parameters which guarantee the extinction of { the species $u$}. First we prove a lemma.

Assume (H1) or (H2). For given { $u_0,v_0\in C^+(\bar\Omega)$}, let
$$L_1(t_0,u_0,v_0)=\limsup_{t \to \infty}(\max_{x \in \bar{\Omega}}u(x,t;t_0,u_0,v_0)),\,\,\, l_1(t_0,u_0,v_0)=\liminf_{t \to \infty}(\min_{x \in \bar{\Omega}}u(x,t;t_0,u_0,v_0)),$$
and
$$L_2(t_0,u_0,v_0)=\limsup_{t \to \infty}(\max_{x \in \bar{\Omega}}v(x,t;t_0,u_0,v_0)),\,\,\, l_2(t_0,u_0,v_0)=\liminf_{t \to \infty}(\min_{x \in \bar{\Omega}}v(x,t;t_0,u_0,v_0)).$$
If no confusion occurs, we may write $L_i(t_0,u_0,v_0)$ and $l_i(t_0,u_0,v_0)$ as $L_i$ and $l_i$ ($i=1,2$) respectively.
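Throughout this section we use the positive- and negative-part notation appearing in the estimates below (recalled here for the reader's convenience): for $s\in\RR$,
$$
s_+=\max\{s,0\},\qquad s_-=\max\{-s,0\},\qquad s=s_+-s_-,\qquad s_\pm\ge 0,
$$
so that, for instance, $-\big(b_1(t,x)-k\tfrac{\chi_2}{d_3}\big)u\le \big(b_{1,\inf}-k\tfrac{\chi_2}{d_3}\big)_-\,L_1$ whenever $0\le u\le L_1$, up to an $\epsilon$-correction coming from \eqref{eq-005}.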
By Theorem \ref{thm-global-000} we have
$$0\leq l_1 \leq L_1 <\infty,\quad 0\leq l_2 \leq L_2 < \infty.$$
Furthermore, using the definitions of $\limsup$ and $\liminf$, and elliptic regularity, we get that, given $\epsilon >0$, there exists $T_{\epsilon}>0$ such that
\begin{equation} \label{eq-005} l_1-\epsilon \leq u(x,t) \leq L_1+\epsilon , \quad l_2-\epsilon \leq v(x,t) \leq L_2+\epsilon,\quad \forall \,\,t>T_{\epsilon}. \end{equation}

\begin{lemma} \label{lem-extinction-01}
\begin{itemize}
\item[(1)] Assume $a_{1,\inf}>\frac{k \chi_1}{d_3}$ and $a_{2,\inf}\ge \frac{l \chi_1}{d_3}$. Then
\begin{equation}\label{extinction-eq-000} L_1\leq \frac{\left\{ a_{0,\sup}-a_{2,\inf} l_2\right\}_{+}}{a_{1,\inf}-\frac{\chi_{1}k}{d_{3}}}. \end{equation}
\item[(2)] Assume $b_{2,\inf}>\frac{l \chi_2}{d_3}$. Then
\begin{equation}\label{extinction-eq-04} L_2\leq \frac{\left\{ b_{0,\sup}-\frac{\chi_{2}l}{d_{3}}l_2+\Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-L_1\right\}_{+}}{b_{2,\inf}-\frac{\chi_{2}l}{d_{3}}}, \end{equation}
and
\begin{equation}\label{extinction-eq-05} l_2\geq \frac{\left\{ b_{0,\inf}-\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)L_1-\frac{\chi_{2}l}{d_{3}}L_2\right\}_{+}}{b_{2,\sup}-\frac{\chi_{2}l}{d_{3}}}. \end{equation}
\end{itemize}
\end{lemma}

\begin{proof}
(1) From the first equation of \eqref{u-v-w-eq00}, \eqref{eq-005}, and the fact that $a_{2,\inf}\geq \frac{\chi_1l}{d_3}$, we have
\begin{align*} &u_t-d_1\Delta u+\chi_1 \nabla u \cdot \nabla w\nonumber\\ &=u\left\{ a_0(t,x)-(a_1(t,x)-\frac{\chi_1 }{d_3}k)u -(a_2(t,x)-l\frac{\chi_1 }{d_3})v-\frac{\chi_1 }{d_3}\lambda w\right\}\nonumber\\ & \leq u\left\{ a_{0,\sup}-(a_{1,\inf}-\frac{\chi_1 }{d_3}k)u-a_{2,\inf}l_2+\left(a_{2,\sup}+k\frac{\chi_1}{d_3}\right)\epsilon\right\} \end{align*}
for $t\ge T_\epsilon$, and thus, since $a_{1,\inf}>\frac{\chi_{1}k}{d_{3}}$, \eqref{extinction-eq-000} follows from the parabolic comparison principle.

(2) From the second equation of \eqref{u-v-w-eq00} and \eqref{eq-005}, we have that
\begin{align*} &v_t-d_2\Delta v+\chi_2 \nabla v \cdot \nabla w\nonumber\\ &=v\left\{ b_0(t,x)-(b_2(t,x)-\frac{\chi_2 }{d_3}l)v -(b_1(t,x)-k\frac{\chi_2 }{d_3})u-\frac{\chi_2 }{d_3}\lambda w\right\}\nonumber\\ & \leq v\left\{ b_{0,\sup}-(b_{2,\inf}-\frac{\chi_2 }{d_3}l)v+\Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-L_1-l\frac{\chi_2}{d_3}l_2+\Big((k+l)\frac{\chi_{2}}{d_{3}} +\Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-\Big)\epsilon\right\} \end{align*}
for $t\ge T_\epsilon$, and \eqref{extinction-eq-04} follows from the parabolic comparison principle. Similarly, we have
\begin{align*} &v_t-d_2\Delta v+\chi_2 \nabla v \cdot \nabla w\nonumber\\ &=v\left\{ b_0(t,x)-(b_2(t,x)-\frac{\chi_2 }{d_3}l)v -(b_1(t,x)-k\frac{\chi_2 }{d_3})u-\frac{\chi_2 }{d_3}\lambda w\right\}\nonumber\\ &\geq v\left\{ b_{0,\inf}-(b_{2,\sup}-\frac{\chi_2 }{d_3}l)v-\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_+L_1-k\frac{\chi_2}{d_3}L_1-l\frac{\chi_2}{d_3}L_2 -\Big(l\frac{\chi_{2}}{d_{3}}+\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_+\Big)\epsilon\right\} \end{align*}
for $t\ge T_\epsilon$, and \eqref{extinction-eq-05} thus follows from the parabolic comparison principle.
\end{proof}

Now we prove Theorem \ref{thm-extinction}.

\begin{proof}[Proof of Theorem \ref{thm-extinction}]
We first prove that $L_1=0$. Suppose by contradiction that $L_1>0$. Then by \eqref{extinction-eq-000} and \eqref{Asymp-exclusion-eq-00}, we have
\begin{equation}\label{Asymp-exclusion-eq-08} l_2<\frac{a_{0,\sup}}{a_{2,\inf}}.
\end{equation}
By \eqref{Asymp-exclusion-eq-03}, we have
\begin{align*} a_{2,\inf}\big(b_{0,\inf}(b_{2,\inf}-l\frac{\chi_2}{d_3})-b_{0,\sup}\frac{\chi_2}{d_3}l\big) &\geq a_{0,\sup}\big((b_{2,\inf}-l\frac{\chi_2}{d_3})(b_{2,\sup}-l\frac{\chi_2}{d_3})-(l\frac{\chi_2}{d_3})^2\big)\nonumber\\ &=a_{0,\sup}\big((b_{2,\inf}-l\frac{\chi_2}{d_3})b_{2,\sup}-l\frac{\chi_2}{d_3}b_{2,\inf}\big)\nonumber\\ &\geq a_{0,\sup}(b_{2,\inf}-2l\frac{\chi_2}{d_3})b_{2,\sup}. \end{align*}
Combining this with the fact that $a_{2,\inf}\big(b_{0,\inf}(b_{2,\inf}-l\frac{\chi_2}{d_3})-b_{0,\sup}\frac{\chi_2}{d_3}l\big)\leq a_{2,\inf}b_{0,\sup}(b_{2,\inf}-2l\frac{\chi_2}{d_3})$, we get
$$ a_{2,\inf}b_{0,\sup}(b_{2,\inf}-2l\frac{\chi_2}{d_3}) \geq a_{0,\sup}(b_{2,\inf}-2l\frac{\chi_2}{d_3})b_{2,\sup}, $$
which, combined with $b_{2,\inf}-2l\frac{\chi_2}{d_3}>0$, implies
$$ a_{2,\inf}b_{0,\sup} \geq a_{0,\sup}b_{2,\sup} \geq a_{0,\sup} 2l\frac{\chi_2}{d_3}. $$
Therefore
\begin{equation} \label{Asymp-exclusion-eq-09} b_{0,\sup}-\frac{\chi_{2}l}{d_{3}}l_2> b_{0,\sup}-\frac{\chi_{2}l}{d_{3}}\frac{a_{0,\sup}}{a_{2,\inf}}\geq 0. \end{equation}
From \eqref{extinction-eq-05}, we get
\begin{equation*} \frac{ l\chi_2}{d_3}L_2\geq b_{0,\inf}-\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)L_1-(b_{2,\sup}-\frac{\chi_{2}}{d_{3}}l)l_2. \end{equation*}
Thus, from \eqref{extinction-eq-000} and $L_1>0$, we get
\begin{equation*} \frac{l\chi_2}{d_3}L_2\geq b_{0,\inf} -\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)\frac{\left\{ a_{0,\sup}-a_{2,\inf} l_2\right\}}{a_{1,\inf}-\frac{\chi_{1}k}{d_{3}}}-(b_{2,\sup}-\frac{\chi_{2}}{d_{3}}l)l_2. \end{equation*}
Therefore
\begin{align*} \frac{l\chi_2}{d_3}(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})L_2 &\geq b_{0,\inf}(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})- \Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)a_{0,\sup}\nonumber\\ &-\Big((a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})(b_{2,\sup}-\frac{\chi_{2}}{d_{3}}l)-\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)a_{2,\inf}\Big)l_2. \end{align*}
It follows from the last inequality, \eqref{Asymp-exclusion-eq-09}, and \eqref{extinction-eq-04} that
\begin{align*} & \frac{l\chi_2}{d_3}(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})\frac{\Big\{ b_{0,\sup}-\frac{\chi_{2}l}{d_{3}}l_2+\Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-L_1\Big\}}{b_{2,\inf}-\frac{\chi_{2}l}{d_{3}}}\nonumber\\ &\geq b_{0,\inf}(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})- \Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)a_{0,\sup}\nonumber\\ &-\Big((a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})(b_{2,\sup}-\frac{\chi_{2}}{d_{3}}l)-\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)a_{2,\inf}\Big)l_2. \end{align*}
Therefore, from \eqref{extinction-eq-000}, we get
\begin{align*} & \frac{l\chi_2}{d_3}(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})\frac{\Big\{ b_{0,\sup}-\frac{\chi_{2}l}{d_{3}}l_2+\Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-\frac{\Big\{ a_{0,\sup}-a_{2,\inf} l_2\Big\}}{a_{1,\inf}-\frac{\chi_{1}k}{d_{3}}}\Big\}}{b_{2,\inf}-\frac{\chi_{2}l}{d_{3}}}\nonumber\\ &\geq b_{0,\inf}(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})- \Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)a_{0,\sup}\nonumber\\ &-\Big((a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})(b_{2,\sup}-\frac{\chi_{2}}{d_{3}}l)-\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)a_{2,\inf}\Big)l_2.
\end{align*}
Thus
\begin{align}\label{ra-00001} & \Big\{\underbrace{(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})\Big[(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})(b_{2,\sup}-\frac{\chi_{2}l}{d_{3}})-(l\frac{\chi_2}{d_3})^2\Big]}_{B_1}\Big\}l_2\nonumber\\ &-\Big\{\underbrace{\Big[\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})+\frac{l\chi_2}{d_3}\Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-\Big]a_{2,\inf}}_{B_2}\Big\}l_2\nonumber\\ &\geq \underbrace{\left(b_{0,\inf}(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})-l\frac{\chi_2}{d_3}b_{0,\sup}\right)\left(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}}\right)}_{A_1}\nonumber\\ &-\underbrace{\Big[\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})+\frac{l\chi_2}{d_3}\Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-\Big]a_{0,\sup}}_{A_2}. \end{align}
Inequality \eqref{ra-00001} is equivalent to
\begin{equation}\label{ra-aa1} B l_2\geq A, \end{equation}
with $B=B_1-B_2$ and $A=A_1-A_2$. Note that \eqref{Asymp-exclusion-eq-04} yields $A>0$. This, combined with \eqref{ra-aa1}, implies $B>0$. Therefore, inequality \eqref{ra-aa1} becomes
$$ l_2\geq \frac{A}{B}. $$
Then, thanks to \eqref{Asymp-exclusion-eq-08}, we get
$$ B>\frac{a_{2,\inf}}{a_{0,\sup}}A. $$
That is,
\begin{align*} &a_{0,\sup}(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}})\Big[(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})(b_{2,\sup}-\frac{\chi_{2}l}{d_{3}})-(l\frac{\chi_2}{d_3})^2\Big]\nonumber\\ &-a_{0,\sup}\Big[\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})+\frac{l\chi_2}{d_3}\Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-\Big]a_{2,\inf}\nonumber\\ &>a_{2,\inf}\left(b_{0,\inf}(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})-l\frac{\chi_2}{d_3}b_{0,\sup}\right)\left(a_{1,\inf}-\frac{\chi_{1}k}{d_{3}}\right)\nonumber\\ &-\Big[\Big(\Big(b_{1,\sup}-k\frac{\chi_2}{d_3}\Big)_++k\frac{\chi_2}{d_3}\Big)(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})+\frac{l\chi_2}{d_3}\Big(b_{1,\inf}-k\frac{\chi_2}{d_3}\Big)_-\Big]a_{0,\sup}a_{2,\inf}. \end{align*}
Thus
$$ a_{0,\sup}\Big[(b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})(b_{2,\sup}-\frac{\chi_{2}l}{d_{3}})-(l\frac{\chi_2}{d_3})^2\Big]>a_{2,\inf}\left(b_{0,\inf} (b_{2,\inf}-\frac{\chi_{2}l}{d_{3}})-l\frac{\chi_2}{d_3}b_{0,\sup}\right), $$
which contradicts \eqref{Asymp-exclusion-eq-03}. Hence $L_1=0$.

Next, we prove \eqref{MainAsym-eq-002} and \eqref{MainAsym-eq-003}. Since $L_1=0$, we get from \eqref{extinction-eq-04} and \eqref{extinction-eq-05}, respectively, that
\begin{equation}\label{Asymp-exclusion-eq-010} L_2\leq \frac{ b_{0,\sup}-\frac{\chi_{2}l}{d_{3}}l_2}{b_{2,\inf}-\frac{\chi_{2}l}{d_{3}}} \end{equation}
and
\begin{equation}\label{Asymp-exclusion-eq-011} l_2\geq \frac{ b_{0,\inf}-\frac{\chi_{2}l}{d_{3}}L_2}{b_{2,\sup}-\frac{\chi_{2}l}{d_{3}}}. \end{equation}
Then \eqref{MainAsym-eq-002} follows from \eqref{Asymp-exclusion-eq-010} and \eqref{Asymp-exclusion-eq-011}. Furthermore, \eqref{MainAsym-eq-003} follows from \eqref{MainAsym-eq-001}, \eqref{MainAsym-eq-002} and the elliptic comparison principle.

Finally, assume that \eqref{v-w-eq00} has a unique positive entire solution $(v^*(x,t;\tilde b_0,\tilde b_2),w^*(x,t;\tilde b_0,\tilde b_2))$ for any $(\tilde b_0,\tilde b_2)\in H(b_0,b_2)$. We claim that \eqref{MainAsym-eq-004} holds. Indeed, suppose that \eqref{MainAsym-eq-004} does not hold.
Then there are $\tilde \epsilon_0>0$ and $t_n\to\infty$ such that
$$ \|v(\cdot,t_n+t_0;t_0,u_0,v_0)-v^*(\cdot,t_n+t_0;b_0,b_2)\|_\infty\ge \tilde \epsilon_0\quad \forall\,\, n=1,2,\cdots. $$
Without loss of generality, we may assume that
$$ \lim_{n\to\infty} (b_0(t+t_n+t_0,x),b_2(t+t_n+t_0,x))=(\tilde b_0(t,x),\tilde b_2(t,x)) $$
and
$$ \lim_{n\to\infty} (u(x,t+t_n+t_0;t_0,u_0,v_0),v(x,t+t_n+t_0;t_0,u_0,v_0),w(x,t+t_n+t_0;t_0,u_0,v_0))=(0,\tilde v(x,t),\tilde w(x,t)) $$
locally uniformly in $(t,x)\in\RR\times \bar{\Omega}$. Then $(\tilde v(x,t),\tilde w(x,t))$ is a positive entire solution of \eqref{v-w-eq00} and
$$ \|\tilde v(\cdot,0)-v^*(\cdot,0;\tilde b_0,\tilde b_2)\|_\infty\ge \tilde \epsilon_0, $$
which is a contradiction, since the uniqueness of positive entire solutions of \eqref{v-w-eq00} forces $\tilde v(\cdot,\cdot)=v^*(\cdot,\cdot;\tilde b_0,\tilde b_2)$. Hence \eqref{MainAsym-eq-004} holds.
\end{proof}
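As a quick sanity check on the exclusion mechanism established above, one can look at the spatially homogeneous, chemotaxis-free case ($\chi_1=\chi_2=0$) with constant coefficients, where the system reduces to the classical Lotka--Volterra competition ODEs and extinction of $u$ occurs under the classical criteria $\frac{a_0}{a_2}<\frac{b_0}{b_2}$ and $\frac{a_0}{a_1}<\frac{b_0}{b_1}$. The following minimal numerical sketch (the parameter values are illustrative only and are not taken from the hypotheses of the theorem) exhibits $u\to 0$ and $v\to b_0/b_2$:
\begin{verbatim}
# Illustrative check of competitive exclusion in the spatially
# homogeneous, chemotaxis-free case (chi_1 = chi_2 = 0); the parameter
# values are made up and satisfy a0/a2 < b0/b2 and a0/a1 < b0/b1.
import numpy as np

a0, a1, a2 = 1.0, 1.0, 1.0   # growth, self-limitation, competition for u
b0, b1, b2 = 2.0, 0.5, 1.0   # growth, competition, self-limitation for v

def rhs(y):
    u, v = y
    return np.array([u * (a0 - a1 * u - a2 * v),
                     v * (b0 - b1 * u - b2 * v)])

y = np.array([0.8, 0.1])     # initial densities (u_0, v_0)
dt, steps = 1e-3, 200_000    # forward Euler suffices for this smooth system
for _ in range(steps):
    y = y + dt * rhs(y)

print(y)  # approaches (0, b0/b2) = (0, 2): the species u is driven extinct
\end{verbatim}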
\section*{\Large Appendix}
\input{sections/appendix}
\end{document}
\section{Conclusion}
We propose {\texttt{Magic3D}}\xspace, a fast and high-quality text-to-3D generation framework. We benefit from both efficient scene models and high-resolution diffusion priors in a coarse-to-fine approach. In particular, the 3D mesh models scale nicely with image resolution and enjoy the benefits of higher-resolution supervision brought by the latent diffusion model without sacrificing speed. It takes 40 minutes from a text prompt to a high-quality 3D mesh model ready to be used in graphics engines. With extensive user studies and qualitative comparisons, we show that {\texttt{Magic3D}}\xspace is preferred by 61.7\% of raters over DreamFusion, while enjoying a $2\times$ speed-up. Lastly, we propose a set of tools for better controlling style and content in 3D generation. We hope that {\texttt{Magic3D}}\xspace can help democratize 3D synthesis and open up everyone's creativity in 3D content creation.
\vspace{8pt}
\noindent\textbf{Acknowledgements.} We would like to thank Frank Shen, Yogesh Balaji, Seungjun Nah, James Lucas, David Luebke, Clement Fuji-Tsang, Charles Loop, Qinsheng Zhang, Zan Gojcic, and Jonathan Tremblay for helpful discussions and paper proofreading. We would also like to thank Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall for providing additional implementation details of DreamFusion.

\section{Background: DreamFusion}
DreamFusion~\cite{poole2022dreamfusion} achieves text-to-3D generation with two key components: a neural scene representation, which we refer to as the scene model, and a pre-trained text-to-image diffusion-based generative model. The scene model is a parametric function $x = g(\theta)$, which can produce an image $x$ at the desired camera pose. Here, $g$ is a volumetric renderer of choice, and $\theta$ is a coordinate-based MLP representing a 3D volume. The diffusion model $\phi$ comes with a learned denoising function $\epsilon_\phi(x_t;y,t)$ that predicts the sampled noise $\epsilon$ given the noisy image $x_t$, noise level $t$, and text embedding $y$. It provides the gradient direction to update $\theta$ such that all rendered images are pushed to the high probability density regions conditioned on the text embedding under the diffusion prior. Specifically, DreamFusion introduces Score Distillation Sampling (SDS), which computes the gradient:
\begin{equation} \nabla_\theta \mathcal{L}_\text{SDS}(\phi, g(\theta)) = \mathbb{E}_{t, \epsilon} \! \! \left[ w(t)(\epsilon_\phi(x_t;y,t) - \epsilon)\frac{\partial x}{\partial \theta} \right]. \end{equation}
Here, $w(t)$ is a weighting function. We view the scene model $g$ and the diffusion model $\phi$ as modular components of the framework, amenable to choice. In practice, the denoising function $\epsilon_\phi$ is often replaced with another function $\tilde{\epsilon}_\phi$ that uses classifier-free guidance~\cite{ho2021classifierfree}, which allows one to carefully weigh the strength of the text conditioning (see Sec.~\ref{sec:controllable}). DreamFusion relies on large classifier-free guidance weights to obtain results with better quality. DreamFusion adopts a variant of Mip-NeRF 360~\cite{barron2022mip} with an explicit shading model for the scene model and Imagen~\cite{saharia2022photorealistic} as the diffusion model. These choices result in two key limitations. First, high-resolution geometry or textures cannot be obtained since the diffusion model only operates on $64 \times 64$ images. Second, the utility of a large global MLP for volume rendering is both computationally expensive and memory-intensive, making this approach scale poorly with the increasing resolution of images.

\section{Introduction}
3D digital content has been in high demand for a variety of applications, including gaming, entertainment, architecture, and robotics simulation. It is slowly finding its way into virtually every possible domain: retail, online conferencing, virtual social presence, education, etc. However, creating professional 3D content is not a task for just anyone --- it requires immense artistic and aesthetic training with 3D modeling expertise. Developing these skill sets takes a significant amount of time and effort. Augmenting 3D content creation with natural language could considerably help democratize 3D content creation for novices and turbocharge expert artists.
Image content creation from text prompts~\cite{nichol2021glide,saharia2022photorealistic,ramesh2022hierarchical,balaji2022ediffi} has seen significant progress with the advances of diffusion models~\cite{sohl2015deep,song2019generative,ho2020denoising} for generative modeling of images. The key enablers are large-scale datasets comprising billions of samples (images with text) scraped from the Internet and massive amounts of compute. In contrast, 3D content generation has progressed at a much slower pace. Existing 3D object generation models~\cite{chan2022efficient,gao2022get3d,zeng2022lion} are mostly categorical. A trained model can only be used to synthesize objects for a single class, with early signs of scaling to multiple classes shown recently by Zeng~\etal~\cite{zeng2022lion}. Therefore, what a user can do with these models is extremely limited and not yet ready for artistic creation. This limitation is largely due to the lack of diverse large-scale 3D datasets --- compared to image and video content, 3D content is much less accessible on the Internet. This naturally raises the question of whether 3D generation capability can be achieved by leveraging powerful text-to-image generative models.

Recently, DreamFusion~\cite{poole2022dreamfusion} demonstrated a remarkable ability for text-conditioned 3D content generation by utilizing a pre-trained text-to-image diffusion model~\cite{saharia2022photorealistic} as a strong image prior. The diffusion model acts as a critic to optimize the underlying 3D representation. The optimization process ensures that rendered images from a 3D model, represented by Neural Radiance Fields (NeRF)~\cite{mildenhall2020nerf}, match the distribution of photorealistic images across different viewpoints, given the input text prompt. Since the supervision signal in DreamFusion operates on very low-resolution images ($64\times64$), DreamFusion cannot synthesize high-frequency 3D geometric and texture details. Due to the use of inefficient MLP architectures for the NeRF representation, practical high-resolution synthesis may not even be possible, as the required memory footprint and computation budget grow quickly with the resolution. Even at a resolution of $64\times64$, optimization times are in hours (1.5 hours per prompt on average using TPUv4).

\input{figures/teaser_intro.tex}

In this paper, we present a method that can synthesize highly detailed 3D models from text prompts within a reduced computation time. Specifically, we propose a coarse-to-fine optimization approach that uses multiple diffusion priors at different resolutions to optimize the 3D representation, enabling the generation of both view-consistent geometry as well as high-resolution details. In the first stage, we optimize a coarse neural field representation akin to DreamFusion, but with a memory- and compute-efficient scene representation based on a hash grid~\cite{muller2022instant}. In the second stage, we switch to optimizing mesh representations, a critical step that allows us to utilize diffusion priors at resolutions as high as $512\times 512$. As 3D meshes are amenable to fast graphics renderers that can render high-resolution images in real-time, we leverage an efficient differentiable rasterizer~\cite{nvdiffrec,gao2022get3d} and make use of camera close-ups to recover high-frequency details in geometry and texture.
As a result, our approach produces high-fidelity 3D content (see Fig.~\ref{fig:teaser_2x4}) that can conveniently be imported and visualized in standard graphics software, and does so at 2$\times$ the speed of DreamFusion. Furthermore, we showcase various creative controls over the 3D synthesis process by leveraging the advancements developed for text-to-image editing applications~\cite{balaji2022ediffi, ruiz2022dreambooth}. Our approach, dubbed {{\texttt{Magic3D}}\xspace}, endows users with unprecedented control in crafting their desired 3D objects with text prompts and reference images, bringing this technology one step closer to democratizing 3D content creation.

In summary, our work makes the following contributions:
\begin{itemize}[leftmargin=*] \setlength\itemsep{0pt}
\item We propose {\texttt{Magic3D}}\xspace, a framework for high-quality 3D content synthesis using text prompts by improving several major design choices made in DreamFusion. It consists of a coarse-to-fine strategy that leverages both low- and high-resolution diffusion priors for learning the 3D representation of the target content. {\texttt{Magic3D}}\xspace, which synthesizes 3D content with 8$\times$ higher resolution supervision, is also 2$\times$ faster than DreamFusion. 3D content synthesized by our approach is significantly preferred by users (61.7\%).
\item We extend various image editing techniques developed for text-to-image models to 3D object editing and show their applications in the proposed framework.
\end{itemize}

\section{Controllable 3D Generation}\label{sec:controllable}
As certain styles and concepts are difficult to express in words but easy to convey with images, it is desirable to have a mechanism to influence the text-to-3D model generation with images. We explore different image conditioning techniques as well as a prompt-based editing approach to provide users more control over the 3D generation outputs.

\vspace{4pt}
\noindent\textbf{Personalized text-to-3D.} DreamBooth~\cite{ruiz2022dreambooth} describes a method to personalize text-to-image diffusion models by fine-tuning a pre-trained model on several images of a subject. The fine-tuned model can learn to tie the subject to a unique identifier string, denoted as $\mathbf{V}$, and generate images of the subject when $\mathbf{V}$ is included in the text prompt. In the context of text-to-3D generation, we would like to generate a 3D model of a subject. This can be achieved by first fine-tuning our diffusion prior models with the DreamBooth approach, and then using the fine-tuned diffusion priors with the $\mathbf{V}$ identifier as part of the conditioning text prompt to provide the learning signal when optimizing the 3D model. To demonstrate the applicability of DreamBooth in our framework, we collect 11 images of one cat and 4 images of one dog. We fine-tune {eDiff-I}\xspace~\cite{balaji2022ediffi} and LDM~\cite{Rombach_2022_CVPR}, binding the text identifier $\mathbf{V}$ to the given subject. Then, we optimize the 3D model with $\mathbf{V}$ in the text prompts. We use a batch size of 1 for all fine-tuning. For {eDiff-I}\xspace, we use the Adam optimizer with learning rate $1\times 10^{-5}$ for 1,500 iterations; for LDM, we fine-tune with learning rate $1\times 10^{-6}$ for 800 iterations. Fig.~\ref{fig:dreambooth} shows our personalized text-to-3D results: we are able to successfully modify the 3D models while preserving the subjects in the given input images.
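To make the fine-tuning step concrete, below is a minimal sketch of DreamBooth-style fine-tuning in PyTorch. The \texttt{unet}, \texttt{encode\_text}, and \texttt{alphas\_cumprod} objects stand in for the prior's denoiser, text encoder, and noise schedule; these names, the identifier spelling, and the loop structure are illustrative assumptions rather than our exact training code.
\begin{verbatim}
import torch

def finetune_on_subject(unet, encode_text, alphas_cumprod, images,
                        prompt="a photo of V* cat", lr=1e-5, iters=1500):
    """Bind a rare identifier token (written V* here) to the subject
    shown in `images` by fine-tuning the denoiser on them."""
    opt = torch.optim.Adam(unet.parameters(), lr=lr)
    y = encode_text(prompt)                  # text embedding with identifier
    for _ in range(iters):
        x0 = images[torch.randint(len(images), (1,))]  # batch of 1 image
        t = torch.randint(0, len(alphas_cumprod), (1,))
        eps = torch.randn_like(x0)
        a = alphas_cumprod[t].view(-1, 1, 1, 1)
        xt = a.sqrt() * x0 + (1 - a).sqrt() * eps      # forward diffusion
        loss = ((unet(xt, t, y) - eps) ** 2).mean()    # eps-prediction loss
        opt.zero_grad(); loss.backward(); opt.step()
    return unet
\end{verbatim}
The fine-tuned prior is then used unchanged in the 3D optimization, with $\mathbf{V}$ inserted into the conditioning prompt.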
\vspace{4pt}
\noindent\textbf{Style-guided text-to-3D.} We also explore controlling the 3D generation with multi-modal conditioning. The {eDiff-I}\xspace diffusion prior~\cite{balaji2022ediffi} is designed such that it can condition on a reference image when performing text-to-image generation. Such an image conditioning design makes it easy to change the style of the generated output. However, we find that na\"ively feeding the style image as input to the model when computing the SDS gradient can result in a poor 3D model that essentially overfits to the input image. We hypothesize that the conditioning signal from the image is significantly stronger than the text prompt during optimization. To better balance the guidance strength between image and text conditioning, we extend our model's classifier-free guidance scheme~\cite{ho2021classifierfree} and compute the final $\tilde{\epsilon}_\phi(x_t; y_{\textrm{text}}, y_{\textrm{image}}, t)$:
\begin{align} \label{eq:cfg_extended} \tilde{\epsilon}_\phi(x_t; &~ y_{\textrm{text}}, y_{\textrm{image}}, t) = \epsilon_\phi(x_t; t) \nonumber \\ &+ \omega_{\textrm{text}} [\epsilon_\phi(x_t; y_{\textrm{text}}, t) - \epsilon_\phi(x_t; t)] \nonumber \\ &+ \omega_{\textrm{joint}} [\epsilon_\phi(x_t; y_{\textrm{text}}, y_{\textrm{image}}, t) - \epsilon_\phi(x_t; t)] \;, \end{align}
where $y_{\textrm{text}}$ and $y_{\textrm{image}}$ are the text and image conditioning, respectively, and $\omega_{\textrm{text}}$ and $\omega_{\textrm{joint}}$ are the guidance weights for text and joint text-and-image conditioning, respectively. Note that for $\omega_{\textrm{joint}} = 0$, the scheme is equivalent to standard classifier-free guidance with respect to text conditioning only. Fig.~\ref{fig:style_transfer} shows our style-guided text-to-3D generation results. When optimizing the 3D model, we feed the reference image to the {eDiff-I}\xspace model. We set $(\omega_{\textrm{text}},\omega_{\textrm{joint}})=(50,50)$ or $(40,60)$ and apply the image guidance when $t<0.5$ only (see the sketch at the end of this section). We do not provide high-resolution results for this experiment because LDM does not support reference image conditioning.

\vspace{4pt}
\noindent\textbf{Prompt-based editing through fine-tuning.} Another way to control the generated 3D content is by fine-tuning a learned coarse model with a new prompt. Our prompt-based editing includes three stages. (a) We train a coarse model with a base prompt. (b) We modify the base prompt and fine-tune the coarse model with the LDM. This stage provides a well-initialized NeRF model for the next step; directly applying mesh optimization on a new prompt would generate highly detailed textures but could only slightly deform the geometry. (c) We optimize the mesh with the modified text prompt. Our prompt-based editing can modify the texture of the shape or transform the geometry and texture according to the text. The resulting scene models preserve the layout and overall structure. Such an editing capability makes 3D content creation with {\texttt{Magic3D}}\xspace more controllable. In Fig.~\ref{fig:fine-tuneSD-prompt}, we show two coarse NeRF models trained with the base prompts for the ``bunny'' and the ``squirrel''. We modify the base prompt, fine-tune the NeRF model in high resolution, and optimize the mesh. The results show that we can tune the scene model according to the prompt, \eg changing the ``baby bunny'' to a ``stained glass bunny'' or a ``metal bunny'' results in similar geometry but with a different texture.
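For concreteness, the guidance combination of Eq.~\ref{eq:cfg_extended} can be sketched as follows; \texttt{eps\_model} is an assumed denoiser interface rather than a specific library API, and the default weights and threshold mirror the settings reported above.
\begin{verbatim}
import torch

def guided_eps(eps_model, x_t, t, y_text, y_image,
               w_text=50.0, w_joint=50.0, image_t_max=0.5):
    """Extended classifier-free guidance with text and image terms."""
    eps_uncond = eps_model(x_t, t)               # eps_phi(x_t; t)
    eps_text = eps_model(x_t, t, text=y_text)    # text-conditioned
    eps = eps_uncond + w_text * (eps_text - eps_uncond)
    # Apply the joint text+image term only at low noise levels so the
    # image conditioning does not overpower the text prompt.
    if t < image_t_max:
        eps_joint = eps_model(x_t, t, text=y_text, image=y_image)
        eps = eps + w_joint * (eps_joint - eps_uncond)
    return eps
\end{verbatim}
With $\omega_{\textrm{joint}}=0$ (or at high noise levels), this reduces to standard text-only classifier-free guidance.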
\section{Related Work}

\vspace{4pt}
\noindent\textbf{Text-to-image generation.} We have witnessed significant progress in text-to-image generation with diffusion models in recent years. With improvements in modeling and data curation, diffusion models can compose complex semantic concepts from text descriptions (nouns, adjectives, artistic styles, etc.) to generate high-quality images of objects and scenes~\cite{ramesh2022hierarchical,saharia2022photorealistic,Rombach_2022_CVPR, balaji2022ediffi}. Sampling images from diffusion models is time-consuming. To generate high-resolution images, these models either utilize a cascade of super-resolution models~\cite{saharia2022photorealistic,balaji2022ediffi} or sample from a lower-resolution latent space and decode the latents into high-resolution images~\cite{Rombach_2022_CVPR}. Despite the advances in high-resolution image generation, using language to describe and control 3D properties (\eg camera viewpoints) while maintaining coherency in 3D remains an open, challenging problem.

\vspace{4pt}
\noindent\textbf{3D generative models.} There is a large body of work on 3D generative modeling, exploring different types of 3D representations such as 3D voxel grids~\cite{wu2016learning,gadelha20173d,henzler2019platonicgan,lunz2020inverse,smith2017improved}, point clouds~\cite{achlioptas2018learning, pointflow,mo2019structurenet,zhou2021pvd,luo2021diffusion,zeng2022lion}, meshes~\cite{zhang2020image,gao2022get3d}, implicit~\cite{occnet,chen2019learning}, or octree~\cite{ibing2021octree} representations. Most of these approaches rely on training data in the form of 3D assets, which are hard to acquire at scale. Inspired by the success of neural volume rendering~\cite{mildenhall2020nerf}, recent works started investigating 3D-aware image synthesis~\cite{nguyen2019hologan,chan2021pi,niemeyer2021giraffe,hao2021GANcraft,gu2021stylenerf,chan2022efficient,orel2021styleSDF,schwarz2022voxgraf}, which has the advantage of learning 3D generative models directly from images --- a more widely accessible resource. However, volume rendering networks are typically slow to query, leading to a trade-off between long training times~\cite{chan2021pi,niemeyer2021giraffe} and lack of multi-view consistency~\cite{gu2021stylenerf}. EG3D~\cite{chan2022efficient} partially mitigates this problem by utilizing a dual discriminator. While obtaining promising results, these works remain limited to modeling objects within a single object category, such as cars, chairs, or human faces, thus lacking the scalability and creative control desired for 3D content creation. In our paper, we focus on text-to-3D synthesis, aiming to generate a 3D renderable representation of a scene based on a text prompt.

\vspace{4pt}
\noindent\textbf{Text-to-3D generation.} With the recent success of text-to-image generative modeling, text-to-3D generation has also gained a surge of interest from the learning community. Earlier works such as CLIP-forge~\cite{sanghi2022clip} synthesize objects by learning a normalizing flow model to sample shape embeddings from textual input. However, this requires 3D assets in voxel representations during training, making it challenging to scale with data.
DreamField~\cite{jain2021dreamfields} and CLIP-mesh~\cite{khalid2022clip} mitigate the training data issue by relying on a pre-trained image-text model~\cite{radford2021learning} to optimize the underlying 3D representations (NeRFs and meshes), such that all 2D renderings reach high text-image alignment scores. While these approaches avoid the requirement of expensive 3D training data and mostly rely on pre-trained large-scale image-text models, they tend to produce less realistic 2D renderings. Recently, DreamFusion~\cite{poole2022dreamfusion} showcased an impressive capability in text-to-3D synthesis by utilizing a powerful pre-trained text-to-image diffusion model~\cite{saharia2022photorealistic} as a strong image prior. We build upon this work and improve over several design choices to bring significantly higher-fidelity 3D models into the hands of users at a much reduced generation time.

\section{Experiments} \label{sec:experiments4}
We focus on comparing our method with DreamFusion~\cite{poole2022dreamfusion} on 397 text prompts taken from the website of DreamFusion\footnote{\url{https://dreamfusion3d.github.io/gallery.html}}. We train {\texttt{Magic3D}}\xspace on all of the text prompts and compare our results with those provided on the website.

\vspace{4pt}
\noindent\textbf{Speed evaluation.} Unless otherwise noted, the coarse stage is trained for 5000 iterations with 1024 samples along the ray (subsequently filtered by the sparse octree) with a batch size of 32, with a total runtime of around $15$ minutes (upwards of $8$ iterations per second, variable due to differences in sparsity). The fine stage is trained for 3000 iterations with a batch size of 32 with a total runtime of $25$ minutes ($2$ iterations per second). The combined runtime is $40$ minutes. All runtimes were measured on 8 NVIDIA A100 GPUs.

\vspace{4pt}
\noindent\textbf{Qualitative comparisons.} We provide qualitative examples in Fig.~\ref{fig:compare_dreamfusion}. Qualitatively, our models achieve much higher 3D quality in terms of both geometry and texture. Notice that our model can generate candies on ice cream cones, highly detailed sushi-like cars, vivid strawberries, and birds. We also note that our resulting 3D models can be directly imported and visualized in standard graphics software.

\vspace{4pt}
\noindent\textbf{User studies.} We conduct user studies on Amazon MTurk to evaluate the different methods based on user preferences. We show users two videos side by side, rendered from a canonical view by two different algorithms using the same text prompt, and ask them to select the one that is more realistic and detailed. Each prompt is evaluated by $3$ different users, resulting in $1191$ pairwise comparisons. As shown in Table~\ref{tbl:user_study}, users favor 3D models generated by {\texttt{Magic3D}}\xspace, with 61.7\% of the users considering our results to be of higher quality.

\input{figures/dreambooth.tex}
\input{figures/image_style_transfer.tex}
\input{figures/prompt_edit.tex}

\vspace{4pt}
\noindent\textbf{Can single-stage optimization work with the LDM prior?} We ablate scene models optimized with the high-resolution LDM prior in a single-stage optimization setup. We find that 3D meshes as the scene model fail to generate high-quality results if optimized from scratch. This leaves our memory-efficient sparse 3D representation as the ideal candidate for the scene model. However, rendering $512\times512$ images is still too memory-intensive for modern GPUs.
Therefore, we render lower-resolution images from the scene model and upsample them to $512\times 512$ as input to the LDM. We find that this generates objects with worse shapes. Fig.~\ref{fig:scratch_vs_2stage} shows two examples with scene rendering resolutions of $64 \times 64$ and $256 \times 256$, respectively (top row). While this setup generates furry details, the shape is worse than that of the coarse model.

\vspace{4pt}
\noindent\textbf{Can we use NeRF for the fine model?} Yes. While optimizing a NeRF from scratch does not work well, we can follow the coarse-to-fine framework but replace the second-stage scene model with a NeRF. In the bottom right of Fig.~\ref{fig:scratch_vs_2stage}, we show the result of a fine NeRF model initialized with the coarse model on its left and fine-tuned with $256 \times 256$ rendered images. The two-stage approach retains the good geometry of the initial model and adds more details, showing superior quality to its one-stage counterpart.

\vspace{4pt}
\noindent\textbf{Coarse models vs. fine models.} Fig.~\ref{fig:ablate_mesh_two_stage_nerf} provides more visual results contrasting coarse and fine models. We try both NeRF and mesh as scene models and fine-tune them from the same coarse model above. We see significant quality improvements for both NeRF and mesh models, suggesting our coarse-to-fine approach works for general scene models.

\section{High-Resolution 3D Generation}
{\texttt{Magic3D}}\xspace is a two-stage coarse-to-fine framework that uses efficient scene models to enable high-resolution text-to-3D synthesis (Fig.~\ref{fig:diagram}). We describe our method and the key differences from DreamFusion~\cite{poole2022dreamfusion} in this section.

\subsection{Coarse-to-fine Diffusion Priors} \label{diffusion_prior}
{\texttt{Magic3D}}\xspace uses two different diffusion priors in a coarse-to-fine fashion to generate high-resolution geometry and textures. In the first stage, we use the base diffusion model described in {eDiff-I}\xspace~\cite{balaji2022ediffi}, which is similar to the base diffusion model of Imagen~\cite{saharia2022photorealistic} used in DreamFusion. This diffusion prior is used to compute gradients of the scene model via a loss defined on rendered images at a low resolution of $64\times 64$. In the second stage, we use the latent diffusion model (LDM)~\cite{Rombach_2022_CVPR}, which allows backpropagating gradients into rendered images at a high resolution of $512\times 512$; in practice, we choose to use the publicly available \textit{Stable Diffusion} model~\cite{Rombach_2022_CVPR}. Despite generating high-resolution images, the computation of the LDM is manageable because the diffusion prior acts on the latent $z_t$ with resolution $64\times 64$:
\begin{equation} \label{eq:sds_high} \hspace*{-8pt} \nabla_\theta \mathcal{L}_\text{SDS}(\phi, g(\theta)) = \mathbb{E}_{t, \epsilon} \!\! \left[ w(t)(\epsilon_\phi(z_t;y,t) \!-\! \epsilon)\frac{\partial z}{\partial x}\frac{\partial x}{\partial \theta} \! \right]. \end{equation}
The increase in computation time mainly comes from computing $\partial x/\partial \theta$ (the gradient of the high-resolution rendered image) and $\partial z/\partial x$ (the gradient of the encoder in the LDM).

\input{figures/compare_google.tex}

\subsection{Scene Models} \label{sec:scene_model}
We tailor two different 3D scene representations to the two diffusion priors at coarse and fine resolutions, accommodating the increased resolution of the rendered images required as input by the high-resolution prior, as discussed below.
\vspace{4pt}
\noindent\textbf{Neural fields as coarse scene models.} The initial coarse stage of the optimization requires finding the geometry and textures from scratch. This can be challenging as we need to accommodate complex topological changes in the 3D geometry and depth ambiguities from the 2D supervision signals. In DreamFusion~\cite{poole2022dreamfusion}, the scene model is a neural field (a coordinate-based MLP) based on Mip-NeRF 360~\cite{barron2022mip} that predicts albedo and density. This is a suitable choice as neural fields can handle topological changes in a smooth, continuous fashion. However, Mip-NeRF 360~\cite{barron2022mip} is computationally expensive as it is based on a large global coordinate-based MLP. As volume rendering requires dense samples along a ray to accurately render high-frequency geometry and shading, the cost of having to evaluate a large neural network at every sample point quickly adds up. For this reason, we opt to use the hash grid encoding from Instant NGP~\cite{muller2022instant}, which allows us to represent high-frequency details at a much lower computational cost (a toy sketch of this encoding is given at the end of this subsection). We use the hash grid with two single-layer neural networks, one predicting albedo and density and the other predicting normals. We additionally maintain a spatial data structure that encodes scene occupancy and enables empty space skipping~\cite{liu2020neural,takikawa2021neural}. Specifically, we use the density-based voxel pruning approach from Instant NGP~\cite{muller2022instant} with an octree-based ray sampling and rendering algorithm~\cite{KaolinWispLibrary}. With these design choices, we drastically accelerate the optimization of coarse scene models while maintaining quality.

\vspace{4pt}
\noindent\textbf{Textured meshes as fine scene models.} In the fine stage of optimization, we need to be able to accommodate very high-resolution rendered images to fine-tune our scene model with high-resolution diffusion priors. Using the same scene representation (the neural field) from the initial coarse stage of optimization could be a natural choice, since the weights of the model can directly carry over. Although this strategy can work to some extent (Figs.~\ref{fig:scratch_vs_2stage} and \ref{fig:ablate_mesh_two_stage_nerf}), it struggles to render very high-resolution (\eg, $512 \times 512$) images within reasonable memory constraints and computation budgets. To resolve this issue, we use textured 3D meshes as the scene representation for the fine stage of optimization. In contrast to volume rendering for neural fields, rendering textured meshes with differentiable rasterization can be performed efficiently at very high resolutions, making meshes a suitable choice for our high-resolution optimization stage. Using the neural field from the coarse stage as the initialization for the mesh geometry, we can also sidestep the difficulty of learning large topological changes in meshes. Formally, we represent the 3D shape using a deformable tetrahedral grid $(V_T, T)$, where $V_T$ denotes the vertices of the grid $T$~\cite{gao2020deftet,dmtet}. Each vertex $\mathbf{v}_{i} \in V_T \subset \mathbb{R}^3$ carries a signed distance field (SDF) value $s_i \in \mathbb{R}$ and a deformation $\Delta \mathbf{v}_{i} \in \mathbb{R}^3$ of the vertex from its initial canonical coordinate. We then extract a surface mesh from the SDF using a differentiable marching tetrahedra algorithm~\cite{dmtet}. For textures, we use the neural color field as a volumetric texture representation.
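To illustrate why the hash-grid scene model is cheap to query, here is a toy multi-resolution hash encoding in PyTorch. The hash function, table sizes, and growth factor are simplified stand-ins (Instant NGP uses an XOR-based spatial hash and much larger tables), so treat this as a sketch of the idea rather than the implementation used in our system.
\begin{verbatim}
import torch

PRIMES = torch.tensor([1, 2654435761, 805459861])  # spatial-hash primes

def hash_grid_encode(x, tables, base_res=16, growth=1.5):
    """x: (N, 3) points in [0, 1]^3; tables: list of (T, F) feature tables."""
    feats = []
    for lvl, table in enumerate(tables):
        res = int(base_res * growth ** lvl)   # grid resolution at this level
        T = table.shape[0]
        pos = x * res
        lo = pos.floor().long()               # lower corner of the cell
        w = pos - lo.float()                  # trilinear weights in [0, 1)
        out = 0.0
        for corner in range(8):               # loop over the 8 cell corners
            offs = torch.tensor([(corner >> i) & 1 for i in range(3)])
            idx = ((lo + offs) * PRIMES).sum(-1) % T   # toy hash (sum, not XOR)
            cw = torch.prod(torch.where(offs.bool(), w, 1.0 - w), dim=-1)
            out = out + cw[:, None] * table[idx]       # weighted feature lookup
        feats.append(out)
    return torch.cat(feats, dim=-1)  # input to the tiny albedo/density MLPs
\end{verbatim}
Each query thus costs only a handful of small table lookups followed by single-layer MLPs, which is what makes dense volume rendering affordable in the coarse stage.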
\subsection{Coarse-to-fine Optimization} \label{sec:c2fgen}
We describe our coarse-to-fine optimization procedure, which first operates on a coarse neural field representation and subsequently on a high-resolution textured mesh.

\vspace{4pt}
\noindent\textbf{Neural field optimization.} Similarly to Instant NGP~\cite{muller2022instant}, we initialize an occupancy grid of resolution $256^3$ with all values set to 20 to encourage shapes to grow in the early stages of optimization. We update the grid every 10 iterations and generate an octree for empty space skipping. We decay the occupancy grid by 0.6 in every update and follow Instant NGP with the same update and thresholding parameters. Instead of estimating normals from density differences, we use an MLP to predict the normals. Note that this does not violate geometric properties, since volume rendering is used instead of surface rendering; as such, the orientation of particles at continuous positions need not align with the level set surface. This helps us significantly reduce the computational cost of optimizing the coarse model by avoiding the use of finite differencing. Accurate normals can be obtained in the fine stage of optimization, where we use a true surface rendering model. Similar to DreamFusion, we also model the background using an environment map MLP, which predicts RGB colors as a function of ray directions. Since our sparse representation does not support scene reparametrization as in Mip-NeRF 360~\cite{barron2022mip}, the optimization has a tendency to ``cheat'' by learning the essence of the object with the background environment map. As such, we use a tiny MLP for the environment map (hidden dimension size of 16) and scale down its learning rate by $10\times$ to allow the model to focus more on the neural field geometry.

\vspace{4pt}
\noindent\textbf{Mesh optimization.} To optimize a mesh from the neural field initialization, we convert the (coarse) density field to an SDF by subtracting a non-zero constant from it, yielding the initial $s_i$. We additionally initialize the volume texture field directly with the color field optimized in the coarse stage. During optimization, we render the extracted surface mesh into high-resolution images using a differentiable rasterizer~\cite{nvdiffrast,nvdiffrec}. We optimize both $s_i$ and $\Delta \mathbf{v}_{i}$ for each vertex $\mathbf{v}_{i}$ via backpropagation using the high-resolution SDS gradient (Eq.~\ref{eq:sds_high}). When rendering the mesh to an image, we also track the 3D coordinates of each corresponding pixel projection, which are used to query colors from the texture field for joint optimization. When rendering the mesh, we increase the focal length to zoom in on object details, which is a critical step towards recovering high-frequency details. We keep the same pre-trained environment map from the coarse stage of optimization and composite the rendered background with the rendered foreground object using differentiable antialiasing~\cite{nvdiffrast}. To encourage the smoothness of the surface, we further regularize the angular differences between adjacent faces on the mesh. This allows us to obtain well-behaved geometry even under supervision signals with high variance, such as the SDS gradient $\nabla_\theta \mathcal{L}_\text{SDS}$ (a sketch of one fine-stage update follows below).
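The following is a minimal sketch of a single fine-stage update implementing the latent SDS gradient of Eq.~\ref{eq:sds_high}. Here \texttt{render}, \texttt{encoder}, and \texttt{eps\_model} stand in for the differentiable rasterizer, the frozen LDM encoder, and the frozen denoiser, and the noise injection is written for a simplified schedule; it is a sketch under those assumptions, not the exact implementation.
\begin{verbatim}
import torch

def sds_step(params, render, encoder, eps_model, y, sigma_of_t,
             optimizer, t_min=0.02, t_max=0.5):
    """One SDS update of the mesh/texture parameters `params`."""
    x = render(params)                 # 512x512 differentiable rasterization
    z = encoder(x)                     # 64x64 latent; gradients flow through
    t = torch.empty(1).uniform_(t_min, t_max)
    sigma = sigma_of_t(t)
    eps = torch.randn_like(z)
    z_t = z + sigma * eps              # noised latent (schedule simplified)
    with torch.no_grad():
        eps_hat = eps_model(z_t, t, y) # frozen diffusion prior
    w = sigma ** 2                     # fine-stage weighting w(t) = sigma_t^2
    # SDS: treat (eps_hat - eps) as constant and backpropagate it through
    # dz/dx and dx/dparams, matching the gradient formula above.
    loss = (w * (eps_hat - eps).detach() * z).sum()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
\end{verbatim}
The timestep range and the $w(t)=\sigma_t^2$ weighting match the fine-stage settings given in the implementation details.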
\input{tables/user_study.tex}
\input{figures/scratch_vs_2stage.tex}
\input{figures/compare_nerf_two_stage.tex}

\section{Author Contributions}
\textbf{All authors} have significant contributions to ideas, explorations, and paper writing. Specifically, \textbf{CHL} and \textbf{TYL} led the research, developed fundamental code for experiments, and organized team efforts. \textbf{JG} led the experiments on generating high-resolution mesh models. \textbf{LT} led the experiments on using the high-resolution diffusion prior. \textbf{TT} led the experiments on sparse scene representations. \textbf{XZ} and \textbf{KK} led the experiments on controllable generation. \textbf{XH} conducted the user study. \textbf{SF} and \textbf{MYL} advised the research direction and designed the scope of the project.

\section{Implementation Details}
We follow the implementation details described by Poole~\etal~\cite{poole2022dreamfusion} as closely as possible. We refer readers to the DreamFusion paper~\cite{poole2022dreamfusion} for context and list the major differences below.

\vspace{4pt}
\noindent\textbf{Architectural details.} As mentioned in the main paper, we adopt the multi-resolution hash grid encoding architecture from Instant NGP~\cite{muller2022instant} instead of using a large global coordinate-based MLP architecture. We use 16 levels of hash dictionaries of size $2^{19}$ and dimension $4$, spanning 3D grid resolutions from $2^4$ to $2^{12}$ with an exponential growth rate. We use single-layer MLPs with $32$ hidden units to predict the RGB color, volume density, and normals, where the inputs to the MLPs are the concatenated feature vectors from the multi-resolution hash encoding sampled with trilinear interpolation (we refer readers to the Instant NGP paper~\cite{muller2022instant} for more details on this representation). We perform density-based pruning to sparsify the Instant NGP representation with an octree structure every 10 iterations. This allows us to more efficiently render pixels using empty space skipping, even with 3D points as dense as $1024$ samples per ray. We do not use the contracting reparametrization of unbounded scenes from Mip-NeRF 360~\cite{barron2022mip} as it is not supported by our sparse representation.

\vspace{4pt}
\noindent\textbf{Scene representation.} For the coarse neural field representation, we use a bounding sphere of radius $2$ for our experiments. We use the $\mathrm{softplus}$ activation for the density prediction and follow Poole~\etal~\cite{poole2022dreamfusion} in adding an initial spatial density bias to encourage the optimization to focus on the object-centric neural field. We empirically found that using a linear form of spatial density bias helps stabilize the optimization, more formally written as
\begin{align} \tau_\text{init}(\boldsymbol{\mu}) = \lambda_\tau \cdot \left( 1 - \frac{\|\boldsymbol{\mu}\|_2}{c} \right) \;, \end{align}
where $\boldsymbol{\mu}$ is the 3D location, $\lambda_\tau = 10$ is the density bias scale, and $c=0.5$ is an offset scale. Different from DreamFusion, however, we add this density bias to the \emph{pre}-activation; as a result, the post-activation density prediction varies continuously from $\mathrm{softplus}(\lambda_\tau)$ to $0$ as a function of $\|\boldsymbol{\mu}\|_2$.

\vspace{4pt}
\noindent\textbf{Camera and light augmentations.} We follow Poole~\etal~\cite{poole2022dreamfusion} in adding random augmentations to the camera and light sampling for rendering the shaded images.
Differently, (a) we sample the point light location such that the angular distance from the random camera center location (w.r.t.\ the origin) is sampled from $\psi_\text{cam} \sim \mathcal{U}(0, \pi/3)$ with a random point light distance $r_\text{cam} \sim \mathcal{U}(0.8, 1.5)$; (b) we use a ``soft'' version of the textureless and albedo-only augmentations, such that various strengths of shading in the rendered images are seen during optimization; and (c) we sample the camera distance from $\mathcal{U}(1.5, 2)$ and the focal length from $\mathcal{U}(0.7, 1.35)$. When training with the high-resolution diffusion prior, we increase the focal length and sample it from $\mathcal{U}(1.2, 1.8)$.

\vspace{4pt}
\noindent\textbf{Optimization.} Unless otherwise specified, we optimize the coarse model with batch size $32$ using the Adam optimizer~\cite{kingma2014adam} with a learning rate of $1\times 10^{-2}$ without warmup and decay. Note that the large global coordinate-based MLP architecture in DreamFusion~\cite{poole2022dreamfusion} limits its optimization to an effective batch size of only $8$. For the coarse model, we add the opacity regularization suggested by Poole~\etal~\cite{poole2022dreamfusion} to encourage sparsity in the volume density field, but we do not add the orientation regularization, as we empirically found it to hurt optimization.

\vspace{4pt}
\noindent\textbf{Score Distillation Sampling.} In the first stage, we sample the timestep $t\sim \mathcal{U}(0, 1)$ and set $w(t)=1$. In the second stage, we find that the range of the timestep $t$ used in SDS affects the quality. We sample $t \sim \mathcal{U}(0.02, 0.5)$ in our experiments. In general, setting $t_\text{max}$ in the range of $[0.5, 0.7]$ works well. We set $w(t)=\sigma_t^2$ in this stage.

\begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figures/sr_nerf.pdf} \vspace{-6mm} \caption{Fine-tuning NeRF with the SR prior fails to add high-resolution details. $t_\text{max}$ is the maximum timestep sampled in SDS. } \label{fig:sr_nerf} \end{figure}

\section{Alternative High-Resolution Prior}
In addition to the LDM, we also consider using a super-resolution (SR) diffusion prior~\cite{balaji2022ediffi, saharia2022photorealistic} for increasing the resolution of a coarse model. This diffusion model is trained to generate a high-resolution image conditioned on a low-resolution input image. In SDS, this model predicts the noise added in high resolution, i.e., $\epsilon_\phi(x_t;y,t,x_\text{low})$, where $x_\text{low}$ denotes a $64 \times 64$ low-resolution image. We render $x_\text{low}$ with a frozen coarse model to optimize the second-stage fine model. Fig.~\ref{fig:sr_nerf} shows that this approach fails to add high-quality details to the input coarse model.

\section{Ablation Studies}

\vspace{4pt}
\noindent\textbf{Style-guided text-to-3D: guidance weight and noise level threshold.} We ablate different combinations of guidance weights and noise level thresholds in Figs.~\ref{fig:aba_style_transfer_guidance} and~\ref{fig:aba_style_sigma}, respectively. The guidance weights $\omega_{\textrm{text}}$ and $\omega_{\textrm{joint}}$ balance the guidance strength during optimization (see Eq.~\ref{eq:cfg_extended}). A similar guidance formulation has also been used by Liu~\etal for compositional text-to-image generation~\cite{liu2022compositional}. We also find that applying the image conditioning only below a certain noise level threshold can help control the style transfer.
The intuition is that image-based style guidance is most relevant for optimizing the generated 3D object's details, which are modeled at lower noise levels. Notice that we do not provide high-resolution results for this experiment because LDM does not support image conditioning inputs.

\input{figures/sup_style_guide_guidance_weight.tex}
\input{figures/sup_style_guide_noise_level.tex}

\vspace{4pt}
\noindent\textbf{Style-guided text-to-3D: content image as reference.} We also explore using multiple images as inputs during 3D synthesis to transfer the content of the images to the 3D model, as shown in Fig.~\ref{fig:sup_style_guide_content_transfer}: Given a text prompt, we first ask the eDiff-I model to generate front-view, side-view, and back-view images. When optimizing the 3D model for the same text prompt from different views, we then feed the corresponding generated view image as input to guide the 3D synthesis. This approach requires some degree of consistency with respect to subject identity across the different view images, which can be achieved by generating a set of different view images first and choosing accordingly. Overall, the experiment shows that we can apply the text-to-image diffusion model to generate images that can be used for guidance during 3D model optimization. As we see, this not only provides enhanced control by preserving the identity of the subjects in the images, but also improves output quality and 3D consistency. Generally, depending on the image type, image conditioning can be used either for object-centric content transfer to 3D (Fig.~\ref{fig:sup_style_guide_content_transfer}) or for abstract 3D stylization (Figs.~\ref{fig:style_transfer}, \ref{fig:aba_style_transfer_guidance}, and \ref{fig:aba_style_sigma}).

\input{figures/sup_style_guide_content_transfer.tex}

\section{More Qualitative Results}
We provide more qualitative comparisons with DreamFusion~\cite{poole2022dreamfusion} in Figs.~\ref{fig:suppl_compare_dreamfusion_1},~\ref{fig:suppl_compare_dreamfusion_2},~\ref{fig:suppl_compare_dreamfusion_3},~\ref{fig:suppl_compare_dreamfusion_4}, and~\ref{fig:suppl_compare_dreamfusion_5}. Our {\texttt{Magic3D}}\xspace achieves much higher quality in terms of 3D geometry and texture. We also show more results on prompt-based editing in Fig.~\ref{fig:sup-fine-tuneSD-prompt}. {\texttt{Magic3D}}\xspace enables high-quality editing of 3D content through simple text prompt modifications.

\input{figures/sup_prompt_edit.tex}
\input{figures/compare_google_suppl.tex}
\section{Introduction} The reionization of the universe was the last global transition of the Intergalactic Medium (IGM), from fully-neutral after cosmic recombination at $z\sim1100$ to fully-ionized as we see it today, caused by the radiation from the first stars. Currently there are still only a few direct observational constraints on this epoch. The lack of a Gunn-Peterson trough in the spectra of high-redshift sources indicates a low mean neutral fraction $x_{\rm HI}\lesssim10^{-4}$ out to redshift $z\sim6$, which implies that overlap was achieved sometime before that, at $z_{\rm ov}>6$. On the other hand, the WMAP 3-year data \citep{2007ApJS..170..377S} yielded a fairly high value for the integrated Thomson electron scattering optical depth to the surface of last scattering, $\tau_{\rm es}=0.09\pm0.03$. This requires a significant ionized fraction out to high redshifts, $z>12$, and thus implies an extended reionization. The optical depth by itself does not put very stringent constraints on the possible reionization histories, however. The reason for this is the self-regulated nature of the reionization process \citep{2007MNRAS.376..534I}, whereby the Jeans-mass filtering of low-mass sources in the ionized regions results in $\tau_{\rm es}$ and the overlap redshift, $z_{\rm ov}$, being only loosely related. The overlap redshift $z_{\rm ov}$ is determined by the abundances and efficiencies of the high-mass sources, whose formation is not suppressed by reionization, while $\tau_{\rm es}$ depends on both high- and low-mass sources. Thus, varying the ionizing efficiencies of the small sources yields a wide range of $\tau_{\rm es}$ values for the same value of $z_{\rm ov}$.

This relative lack of observational data is set to change dramatically in the near future due to a number of large observational projects which are currently under way. The 21-cm data from high redshifts potentially contain the richest set of information, since the signal is inherently three-dimensional, on the sky and in redshift/frequency \citep[e.g.][see \citet{2006PhR...433..181F} for a detailed recent review]{1997ApJ...475..429M,2000ApJ...528..597T,2002ApJ...572L.123I, 2003MNRAS.341...81I,2004ApJ...608..622Z,2006MNRAS.372..679M, 2006ApJ...646..681S,wmap3}. The features that could be derived include the full reionization history, geometry, statistics and individual bright features \citep{2006MNRAS.372..679M, 2006PhR...433..181F,wmap3}. There are significant challenges to be overcome, however, particularly related to the precise subtraction of the very strong foregrounds present at low frequencies \citep[e.g.][]{2006PhR...433..181F,2006ApJ...648..767M,wmap3}.

The patchiness of reionization creates secondary temperature anisotropies in the CMB through the kinetic Sunyaev-Zel'dovich effect \citep[][]{1998ApJ...508..435G,2000ApJ...529...12H,2001ApJ...551....3G, 2003ApJ...598..756S,2005ApJ...630..643M,kSZ}, as well as polarization anisotropies \citep{2000ApJ...529...12H,2003ApJ...595....1H, 2003ApJ...598..756S,mortonson06,cmbpol}. Unlike the 21-cm signal, the reionization signatures in the CMB are integrated over the reionization history and contain no frequency information. However, the typical scales of reionization are reflected in a characteristic peak of the kSZ anisotropy signal \citep{kSZ}, and the shape of the power spectrum is dependent on the reionization parameters (source efficiencies and small-scale gas clumping).
CMB anisotropy observations can therefore provide us with key information about the process of reionization, and since their systematics are different they would be an important complement to the 21-cm studies. One could also combine these observations more directly, by using 21-cm observations to derive the Thomson optical depth fluctuations \citep{pol21}. Small-scale CMB anisotropy and polarization measurements would be quite difficult due to the weakness of these signals, but are within the abilities of modern detectors \citep{kSZ,cmbpol}.

Narrow-band searches for high-redshift Ly-$\alpha$ emitters have been very successful at finding sources at ever higher redshifts, currently up to $z\sim7$ \citep[e.g.][]{2002ApJ...568L..75H,2005pgqa.conf..363H,2003PASJ...55L..17K, 2005PASJ...57..165T,2003AJ....125.1006R,2004ApJ...604L..13S,2004ApJ...617L...5M, 2006NewAR..50...94B,2006PASJ...58..313S,2006ApJ...648....7K,2008ApJ...677...12O}. Together with studies of the Ly-$\alpha$ resonant absorption in the IGM \citep[e.g.][]{2002AJ....123.1247F,2003AJ.126..1W,2004AJ....128..515F, 2006AJ....132..117F} they provide important independent approaches for studying reionization \citep[see][for a recent review]{2006ARA&A..44..415F}. The optical depth for Ly-$\alpha$ resonant absorption in neutral IGM at high redshifts is quite large \citep{1964AZh....41..801S,1965ApJ...142.1633G}; thus, absorption studies are mostly sensitive to very low hydrogen neutral fractions, typically below $x_{\rm HI}\sim10^{-4}$, since otherwise the absorption saturates. This technique is therefore best suited for studying highly-ionized regions and the very end of reionization, and has been very successful in setting lower limits on the redshift at which reionization was completed. On the other hand, Ly-$\alpha$ emitter surveys do not require a very low average hydrogen neutral fraction and thus in principle can probe further into the reionization epoch \citep{2004ApJ...617L...5M}. Other related probes include damping wing measurements \citep[e.g.][]{1998ApJ...501...15M,2004ApJ...611L..69M, 2006PASJ...58..485T}, quasar HII region sizes and their evolution \citep[e.g.][]{2007ApJ...660..923M,2004ApJ...610..117W,2006AJ....132..117F, 2007MNRAS.374..493B,2007MNRAS.376L..34M}, and sizes of dark gaps in high-z spectra \citep[e.g.][]{2006AJ....132..117F,2008MNRAS.386..359G}.

The correct interpretation of these data for reionization is far from straightforward, however. The strong dependence of Ly-$\alpha$ absorption on the neutral fraction and gas density means that both should be modelled with some precision. High-redshift sources are rare and strongly clustered \citep[see e.g.][]{2004ApJ...609..474B,2004ApJ...613....1F,2007MNRAS.376..534I}. As a result, H~II regions generally contain multiple ionizing sources and grow much larger than the ones created by individual sources. This minimizes the effect of the damping wings and increases the transmission, allowing the detection of more and fainter sources than would be naively expected \citep{2005ApJ...625....1W}. However, while this is the generic expectation, the actual effect depends on the exact geometry of reionization; e.g. sources lying close behind a neutral region will be damped even if they are inside a very large H~II region.
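To make the saturation argument quantitative, recall the standard Gunn-Peterson estimate \citep[cf.][]{1965ApJ...142.1633G,2006AJ....132..117F}; the form below is quoted only as a rough guide, with the prefactor depending on the adopted cosmology:
\begin{equation}
\tau_{\rm GP}(z)\simeq1.8\times10^{5}\,h^{-1}\Omega_m^{-1/2}
\left(\frac{\Omega_b h^2}{0.02}\right)
\left(\frac{1+z}{7}\right)^{3/2}x_{\rm HI},
\end{equation}
so that even a neutral fraction of $x_{\rm HI}\sim10^{-4}$ yields $\tau_{\rm GP}\sim50$ at $z\sim6$, i.e. essentially complete absorption at the line center.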
Simplified models typically assume spherical ionized regions and either ignore source clustering or assume linear bias \citep[e.g.][]{2000ApJ...542L..75C,2004MNRAS.349.1137S,2004MNRAS.354..695F, 2005ApJ...623..627H,2005ApJ...628..575W,2006ApJ...649..570K,2007MNRAS.374..960W}. In practice the large ionized regions form by local percolation of multiple smaller ones and as a consequence are highly nonspherical \citep{2006MNRAS.369.1625I,mellema06,2007MNRAS.376..534I}. The inhomogeneous cosmological density fields and non-equilibrium chemistry effects (particularly in recently-ionized gas) further complicate the picture and point to the need of following the cosmological structure formation and the reionization history of a given region. A proper account of all these effects can only be achieved through detailed cosmological radiative transfer simulations. A number of radiative transfer methods have been developed in recent years; they are now reaching a certain level of maturity and are producing fairly reliable results \citep{comparison1}. However, performing large-scale reionization simulations, as required for Ly-$\alpha$ studies, is still technically very challenging. Recently, \citet{2007MNRAS.381...75M} used large-scale structure formation simulations post-processed with radiative transfer to study the observability of Lyman-$\alpha$ emitters at high redshifts and what these can tell us about reionization. In order to achieve high dynamic range, these authors employed a subgrid model for the collapse of the smallest halos. Another, semi-numerical approach was used by \citet{2008MNRAS.385.1348M}, who took the linear density and velocity fields at early times (essentially the initial conditions for an N-body simulation) and used an excursion-set approach combined with first-order Lagrangian theory to ``paint'' the H~II regions on the density field. This procedure provides large dynamic range at a low cost, but at the expense of making significant approximations, and thus cannot fully replace full numerical simulations.

In this paper we use the results of large-scale numerical simulations to study the transfer of Ly-$\alpha$ through the IGM. In \S~2 we briefly describe our simulation method. In \S~3 we describe the evolution and environment of a rare, massive source, similar to the ones that are currently observed. In \S~4 we address the observability of Lyman-$\alpha$ emitters, considering both the reduction of the transmitted line flux due to absorption in the IGM and the resulting luminosity functions. In \S~5 we describe the effects of the patchiness of reionization on the angular correlation function of Lyman-$\alpha$ emitters, and, finally, in \S~6 we sum up our conclusions.

\section{Simulations} Our simulation results follow the full, self-consistent reionization history in a large volume of $(100\,\rm h^{-1}Mpc)^3$ and were described in detail in \citet{2006MNRAS.369.1625I,2006MNRAS.372..679M} and \citet{2007MNRAS.376..534I}. While this volume is too small to allow us to consider the rarest, most luminous sources like the SDSS QSOs \citep{2002AJ....123.1247F,2006AJ....132..117F}, we have sufficient resolution to locate the majority of sources responsible for reionization and take explicit account of their radiation, and to derive good quality absorption spectra. Of the range of simulations presented in \citet{2007MNRAS.376..534I} we here consider one specific run, labelled f250C.
Our simulations were performed using a combination of two very efficient computational tools, a cosmological particle-mesh code called PMFAST \citep{2005NewA...10..393M} for following the structure formation, whose outputs are then post-processed using our radiative transfer and non-equilibrium chemistry code called C$^2$-Ray \citep{methodpaper}. The parameter $f_\gamma=250$ characterizes the emissivity of the ionizing sources: how many ionizing photons per gas atom in the (resolved) halos are produced and manage to escape from the host halo within $\sim20$~Myr, which is the time between two consecutive density slices, equal to two radiative transfer timesteps. The label 'C' indicates that this run models the gas clumping at small (sub-radiative transfer grid) scales based on the fit \begin{equation} C_{\rm sub-grid}(z)= 26.2917e^{-0.1822z+0.003505\,z^2}, \label{clumpfact_fit3} \end{equation} derived for the WMAP3 cosmology used here (a good fit for $6<z<30$). This fit to the small-scale clumping factor is a more precise version of the one we presented in \citet{2005ApJ...624..491I}. We derived it from a PMFAST simulation with a computational mesh of $3248^3$, a particle number of $1624^3$, and a computational volume of $(3.5\,\rm h^{-1}~Mpc)^3$. These parameters correspond to a particle mass of $10^3M_\odot$, a minimum resolved halo mass of $10^5M_\odot$, and a spatial resolution of $\sim1$~kpc comoving. Our $(100\,\rm h^{-1}~Mpc)^3$ volume simulation resolves all halos with mass above $2.2\times10^9M_\odot$, higher than the mass above which atomic-line cooling of hydrogen becomes effective, which is $\sim10^8M_\odot$. As a consequence, our treatment does not include the contribution of low-mass sources to reionization. Higher-resolution, smaller-box simulations which do include all ionizing sources above the atomic-line cooling limit \citep{2007MNRAS.376..534I} indicate that the effects of low-mass sources are primarily confined to the earliest stages of reionization, when such sources are dominant. Throughout most of the reionization process, and especially during its late stages, the low-mass sources, which are strongly clustered around the high density peaks, are heavily suppressed due to Jeans-mass filtering in the ionized regions and thus have limited effect on the reionization progress and large-scale geometry. Since the Ly-$\alpha$ observations largely probe the later stages of reionization, where the neutral gas fraction is $\sim30\%$ or less \citep{2004ApJ...617L...5M}, we do not expect that our conclusions will be strongly affected by the absence of low-mass sources. Larger simulations, currently in progress \citep{2008arXiv0806.2887I,2008arXiv0806.3091S}, which resolve all atomically-cooling halos in $\sim100$~Mpc boxes, will settle these uncertainties. Throughout this work we assume a flat ($\Omega_k=0$) $\Lambda$CDM cosmology with $(\Omega_m,\Omega_\Lambda,\Omega_b,h,\sigma_8,n)=(0.24,0.76,0.042,0.73,0.74,0.95)$, based on the WMAP 3-year results \citep{2007ApJS..170..377S}, hereafter WMAP3. Here $\Omega_m$, $\Omega_\Lambda$, and $\Omega_b$ are the total matter, vacuum, and baryonic densities in units of the critical density, $\sigma_8$ is the rms density fluctuation extrapolated to the present on the scale of $8 h^{-1}{\rm Mpc}$ according to linear perturbation theory, and $n$ is the index of the primordial power spectrum of density fluctuations.
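As a concrete illustration, the clumping fit of Eq.~(\ref{clumpfact_fit3}) and the $f_\gamma$ source normalization can be evaluated as in the minimal sketch below; the conversion from halo mass to atom number (pure hydrogen, baryon fraction $\Omega_b/\Omega_m$) is our own simplifying assumption.
\begin{verbatim}
import numpy as np

def clumping_subgrid(z):
    """Sub-grid clumping factor fit from the text; good for 6 < z < 30."""
    return 26.2917 * np.exp(-0.1822 * z + 0.003505 * z**2)

def photons_per_timestep(m_halo_msun, f_gamma=250.0,
                         omega_b=0.042, omega_m=0.24):
    """Ionizing photons escaping a halo over one ~20 Myr timestep,
    assuming f_gamma photons per gas atom and a hydrogen-only gas."""
    m_p_msun = 8.41e-58          # proton mass in solar masses
    n_atoms = m_halo_msun * (omega_b / omega_m) / m_p_msun
    return f_gamma * n_atoms

print(clumping_subgrid(6.0))        # ~10 at the end of reionization
print(photons_per_timestep(2.2e9))  # smallest resolved source
\end{verbatim}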
\section{Luminous high-redshift sources and their environment: properties, evolution and reionization history} The luminous sources at high redshift, the ones typically seen in current surveys, are hosted by rare, massive halos which form at the location of the highest peaks of the density field. The statistics of Gaussian fields predict that such high density peaks are rare and highly clustered in space, more strongly so at high redshifts. As a consequence, each high-redshift, massive galaxy should be surrounded by numerous smaller ionizing sources. Self-consistent reionization history simulations of such regions require following a sufficiently large volume, in order to obtain the correct statistics and biasing of the rare peaks, while at the same time resolving all the low-mass halos which are the main drivers of the reionization process. Our current radiative transfer simulations are able to achieve this. We also note that correctly modelling the nonlinear bias of the rare peaks in semi-analytical models is a difficult and still unsolved problem. As a consequence, the halo clustering in semi-analytical models is typically underestimated, and in some cases even ignored. This often yields incorrect results, e.g. in estimates of the suppression of low-mass sources by Jeans-mass filtering \citep[e.g.][]{2007MNRAS.376..534I}. \begin{figure} \includegraphics[width=3.5in]{mass_accr_hist_ln.eps} \caption{Mass accretion history of the three most massive halos found in our computational volume at redshift $z=6$. The mass growth is roughly exponential in redshift, and is well-approximated by $M(z)[M_\odot]=\exp(A-\alpha z)$, where $A=31.2, \alpha=0.52$ for the most massive halo (solid, red), $A=32.5, \alpha=0.81$ for the next most massive halo (short-dashed, blue) and $A=32.0, \alpha=0.75$ for the third most massive halo (long-dashed, green). Fits are shown by the thin straight lines (with corresponding line types and colors). \label{mass_accr}} \end{figure} \subsection{Mass accretion history of massive halos at high redshift} In Figure~\ref{mass_accr} we show the mass-growth history of the three most massive (at redshift $z=6$) halos found in our computational volume, with final masses of $1.5\times10^{12}M_\odot$, $9\times10^{11}M_\odot$ and $8.2\times10^{11}M_\odot$, respectively. All correspond to very rare, $\approx4.5-5\sigma$ peaks of the density field. The first progenitors of these halos resolved in our simulation ($M_{\rm halo}>2\times10^9M_\odot$) form very early, at $z\sim16$. Thereafter, the halo masses grow roughly exponentially with redshift, $M\propto\exp(-\alpha z)$. This behaviour is in good agreement with previous results on the growth of dark matter halos in hierarchical $\Lambda$CDM models \citep{2002ApJ...568...52W}, as well as with similar results obtained in more generic, idealized situations \citep{2004astro.ph..9173S}. The slopes $\alpha$ for our halos exhibit much smaller scatter ($0.52<\alpha<0.81$; see Figure~\ref{mass_accr} caption for the complete fitting formulae) than the ones found by \citet{2002ApJ...568...52W} ($0.4<\alpha<1.6$). The exponential fits are excellent during periods when no major mergers (i.e. mergers with halos of similar mass) occur. This is the case e.g. at late times ($z<11$), by which time these halos have grown enough to dominate their surroundings and no similar-mass halos remain in their vicinity. The exception here is the second most massive halo (short-dashed line). Its last major merger occurs at $z\sim8$.
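As a quick consistency check, the quoted fits can be evaluated directly (a minimal sketch; the normalization $A$ is in $\ln(M/M_\odot)$):
\begin{verbatim}
import numpy as np

def halo_mass(z, A, alpha):
    """Exponential mass-growth fit M(z) = exp(A - alpha z), in M_sun."""
    return np.exp(A - alpha * z)

# Fit parameters quoted in the caption of the mass-accretion figure:
for A, alpha in [(31.2, 0.52), (32.5, 0.81), (32.0, 0.75)]:
    print(f"M(z=6) = {halo_mass(6.0, A, alpha):.2e} M_sun")
# Recovers ~1.6e12, ~1.0e12 and ~8.8e11 M_sun, close to the final
# halo masses quoted in the text.
\end{verbatim}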
At early times ($z>12$) the mass evolution of all three halos does not follow the overall exponential trend. Instead, the most massive halo grows faster than its long-term trend, through a series of rapid major mergers, while the other two halos initially grow more slowly than their long-term trend. \begin{figure*} \begin{center} \includegraphics[width=2.in]{simage_thin_yz_12.915.ps} \includegraphics[width=2.in]{simage_thin_yz_10.078.ps} \includegraphics[width=2.in]{simage_thin_yz_9.034.ps} \includegraphics[width=2.in]{ionrate_v2_yz_12.915.eps} \includegraphics[width=2.in]{ionrate_v2_yz_10.078.eps} \includegraphics[width=2.in]{ionrate_v2_yz_9.034.eps} \includegraphics[width=2.in]{simage_thin_yz_7.926.ps} \includegraphics[width=2.in]{simage_thin_yz_7.042.ps} \includegraphics[width=2.in]{simage_thin_yz_6.000.ps} \includegraphics[width=2.in]{ionrate_v2_yz_7.926.ps} \includegraphics[width=2.in]{ionrate_v2_yz_7.042.ps} \includegraphics[width=2.in]{ionrate_v2_yz_6.000.ps} \caption{The reionization history of a high density peak. The images are centered on the most massive (at $z=6$) halo in our computational volume and are of size $100\,h^{-1}$Mpc to the side. The snapshots are at: (top row) $z=12.9$ (left; global mass-weighted ionized fraction $x_m=0.001$), $z=10.1$ (middle; $x_m=0.10$), $z=9.0$ (right; $x_m=0.28$), and (third row) $z=7.9$ (left; $x_m=0.66$), $z=7.0$ (middle; $x_m=0.94$), and $z=6.0$ (right; $x_m=0.9999$). The underlying cosmological density field (dark, green) is superimposed with the ionized fraction (light, orange) and the cells containing ionizing sources (dark, blue dots); the slice thickness is $0.5\,h^{-1}$~Mpc (1 cell) for the density and ionized fraction fields and $10\,h^{-1}$~Mpc for the sources. The corresponding images of the (non-equilibrium) photoionization rates ($0.5\,h^{-1}$~Mpc thickness) are shown in the second and bottom rows. \label{peak_evol}} \end{center} \end{figure*} \subsection{Reionization history} In Figure~\ref{peak_evol} (top and third row) we illustrate the main stages of the reionization history of a high density peak and its intergalactic environment. The particular peak we show here is the one surrounding the largest-mass halo found in our computational volume at redshift $z=6$. For clarity and the reader's convenience we have shifted the peak to the box center using the periodicity of our computational volume. The first resolved source in this region forms at $z\sim16$. By redshift $z=12.9$ (top left; the mass-weighted ionized fraction is $x_m=0.001$ at this time) the central halo has already undergone several major mergers with nearby halos and its total mass has grown to $1.5\times10^{10}M_\odot$. The combined effect of the central halo and the nearby members of the same source cluster is to create a substantial ionized region (of size $R\sim1\,\rm h^{-1}Mpc$, defined as the radius at which the integrated continuum optical depth from the source at the Lyman limit reaches unity). At this time there are only a few ionized regions in our computational volume (and just two intersect the plane shown). By $z=10.1$ (top middle; $x_m=0.10$) many more H~II regions appear and both the sources and the ionized regions are strongly clustered in space. The central region still remains the largest one. By redshift $z=9.0$ (top right; $x_m=0.28$) many more haloes have formed, most of them in large clustered groups.
The H~II region surrounding the central peak remains among the largest ($R\sim6\,\rm h^{-1}Mpc$), but several other ionized bubbles reach comparable sizes. At redshift $z=7.9$ (third row, left; $x_m=0.66$) some quite sizable regions, more than ten Mpc across, have percolated, reaching local overlap. The central bubble has grown to a size of $R\sim8\,\rm h^{-1}Mpc$. The reionization geometry becomes quite complex, with most ionized bubbles becoming interconnected, while leaving large neutral patches in-between. By $z=7.0$ (third row, middle; $x_m=0.94$) the notion of isolated, quasi-spherical H~II regions becomes rather meaningless, since all ionized regions have already merged into one topologically-connected region, although substantial neutral patches still remain interspersed throughout our volume. The volume remains on average quite optically-thick to ionizing continuum radiation. Finally, at $z=6.0$ (third row, right; $x_m=0.9999$; $R=18.7\,\rm h^{-1}Mpc$) our volume is well beyond overlap (which we define by $x_m>99\%$). Only by that time does the volume become, on average, optically thin to ionizing radiation. \begin{figure} \includegraphics[width=3in]{bubble_sizes_bw.eps} \caption{Histogram of the bubble size distributions along LOS. Shown are the distances from the source (the most massive halo in our volume at $z=6$) at which the continuum optical depth (at the hydrogen ionization threshold) surpasses $\tau=1$ (solid lines), or $\tau=4.6$ (dotted lines), at several illustrative redshifts, as labelled. \label{hist_fig}} \end{figure} The corresponding panels in the second and fourth row of Figure~\ref{peak_evol} show images of the (non-equilibrium) ionization rate distribution at the same redshifts. The distribution is highly inhomogeneous, following the patchiness of reionization and peaking strongly in the vicinity of large source clusters. The volume-averaged photoionization rates at redshifts $z=(12.9;10.1; 9.0; 7.9; 7.0; 6.0)$ in units of $10^{-12}\,s^{-1}$ are $\Gamma_{-12}=(7.0\times10^{-2};1.3\times10^{-1};2.3\times10^{-1}; 7.6\times10^{-1};1.3;4.9)$, growing strongly with time as a larger fraction of the volume becomes ionized and ever more sources form. Except for the earliest times, the peak photoionization rate values on the grid (corresponding to the brightest points of the images) remain fairly constant with time; at the same redshifts they are $\Gamma_{-12}=(1.0\times10^{2};2.0\times10^{3}; 2.0\times10^{3};2.5\times10^{3}; 2.8\times10^{3};5.1\times10^{3})$. The photoionization rate distributions and evolution are discussed further in \S~\ref{photoion_rates_sect}. \begin{figure*} \includegraphics[width=3in]{nearby_sources_z7_1Mpc.eps} \includegraphics[width=3in]{nearby_sources_z7_5Mpc.ps} \caption{Projected distributions at $z=7$ of halos within 1~$\rm h^{-1}Mpc$ comoving (left) and within 5~$\rm h^{-1}Mpc$ comoving from the most massive object in the box. Circle areas are proportional to the mass of the corresponding halo. \label{halo_dist_fig}} \end{figure*} To further characterize the shape and boundary sharpness of the central H~II region, in Figure~\ref{hist_fig} we show a histogram of the radial distance $R$ from the central source at which the cumulative continuum optical depth at the ionizing threshold of hydrogen along each LOS reaches unity (solid lines) and 4.6 (dotted lines; this optical depth value corresponds to 1\% transmission).
A spherically-symmetric H~II region would correspond to a single value of the radius for a given optical depth, regardless of direction, while any scatter around the peak is a measure of the non-sphericity of the ionized region. Furthermore, a comparison between the distributions for the two optical depths measures the sharpness of the H~II region boundary, as follows. A sharp transition from ionized to neutral gas along a given LOS results in both optical depth values being surpassed simultaneously (since the neutral gas is extremely optically-thick), in which case the histograms would coincide. On the other hand, if the boundary of the ionized region is not clearly defined due to local percolation, the gas is highly ionized, the optical depth along the LOS increases only slowly, and thus the values of 1 and 4.6 are reached at very different distances from the source. At redshift $z=9$ ($x_m=0.28$) the H~II region surrounding the central source is largely spherical and its boundary is well-defined, albeit with some modest spread around the peak, indicating some departures from sphericity, in agreement with what was seen in Fig.~\ref{peak_evol}. At redshift $z=8$ ($x_m=0.62$) the H~II region remains fairly spherical, slightly more so for $\tau=1$. The $\tau=4.6$ histogram has a long tail at large values of $R$, up to $\sim25\,h^{-1}$Mpc, meaning that a small percentage of the LOS reach 99\% opacity only at such fairly large distances. Both distributions are somewhat wider than before and start to depart from each other, reflecting the increasing ``fuzziness'' of the bubble boundary as it merges with other nearby bubbles due to local percolation. At $z=7$ ($x_m=0.94$) the $\tau=1$ distribution still retains a well-defined peak, albeit one accompanied by a long high-$R$ tail reaching out to tens of Mpc. However, the $\tau=4.6$ histogram changes its character completely, becoming very broad, with only a low peak close to the $\tau=1$ peak and several secondary peaks. A significant number ($\sim14\%$) of the LOS do not reach $\tau=4.6$ within $50\rm\,h^{-1}Mpc$ of the central source (these are collected in the last, $R=50\rm\,h^{-1}Mpc$ bin of the histogram). Finally, at $z=6$ ($x_m=0.9999$), well beyond the overlap epoch ($z\sim6.6$, $x_m=0.99$), there is no indication of a defined ionized bubble. Both distributions are very broad, with no clear peak. At this time the optical depth is determined by the tiny remaining neutral fraction, which in turn is dictated by the density variations of the Cosmic Web. Either value of $\tau$ is reached at a wide range of distances in different directions, from $\sim10\rm\,h^{-1}Mpc$ to over $50\rm\,h^{-1}Mpc$. The IGM becomes largely optically-thin to ionizing radiation, and thus for most LOS neither $\tau=1$ nor $\tau=4.6$ is reached within 50 $\rm\, h^{-1} Mpc$ from the source. We also note that these results on the H~II region boundaries are valid for soft, stellar (either Pop. II or Pop. III) spectra. Should hard sources, e.g. QSOs, contribute significantly to reionization, the ionized region boundaries will inevitably be thicker and the transition from ionized to neutral smoother, due to the longer mean free paths of the hard photons. However, most current observational indications are that stellar sources dominate reionization, in which case the main effect of the relatively few hard photons present will be to heat the neutral IGM, but not to ionize it significantly.
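The LOS construction behind Figure~\ref{hist_fig} can be sketched as follows (a minimal pseudo-implementation under our own array conventions, not the actual analysis code):
\begin{verbatim}
import numpy as np

def bubble_radius(tau_cells, dr, tau_thresh=1.0):
    """Distance from the source at which the cumulative continuum
    optical depth along one LOS first exceeds tau_thresh.
    tau_cells: per-cell optical depths ordered outward from the source;
    dr: cell size in comoving h^-1 Mpc. Returns np.inf if never reached."""
    tau_cum = np.cumsum(tau_cells)
    hit = np.nonzero(tau_cum >= tau_thresh)[0]
    return (hit[0] + 1) * dr if hit.size else np.inf

# Histogramming bubble_radius over many LOS, for tau_thresh = 1 and
# tau_thresh = 4.6 (1% transmission), gives the distributions shown.
\end{verbatim}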
\begin{figure} \includegraphics[width=3in]{emission_r.eps} \caption{Histograms in radial bins of the total cumulative emissivity from all halos within distance $R$, $N_{\rm ph, tot}(R)$, measured from the central object, here the most massive object in the computational volume, in units of the central object's emissivity, $N_{\rm ph,central}$. Shown are (bottom to top on the right) $z=12.9,9,8,7$ and 6.25 ($x_m=0.001,0.28,0.62,0.94$ and 0.998). The last radial bin contains the total emissivity of the computational box at that time. The vertical lines on top (corresponding line types and colors) indicate the average size of the H~II region at the corresponding redshift. \label{emiss_fig}} \end{figure} \subsection{Halo clustering in the vicinity of a luminous source} Taking a closer look at the halo clustering in the immediate vicinity of a high density peak, in Fig.~\ref{halo_dist_fig} we show the projected spatial distribution of halos around the most massive halo at $z=7$. There are 31 resolved halos within 1~$h^{-1}$Mpc from the most massive halo and 360 resolved halos within 5~$h^{-1}$Mpc (both including the largest halo itself). The area of each circle is proportional to the mass (and thus, in accordance with our source model, to the luminosity) of the corresponding halo. Halos are distributed very anisotropically, concentrating preferentially along the density filaments and sheets of the local Cosmic Web. The mass of the most massive halo is $9\times10^{11}M_\odot$ at that time, well above any other halo in its vicinity. Nonetheless, the low-mass but numerous halos surrounding the peak contribute significantly to the total ionizing emissivity. In order to quantify this point further, in Figure~\ref{emiss_fig} we show the cumulative emissivity vs. radial distance from the central halo for several redshifts spanning the whole range of interest here. At all redshifts the most massive halo dominates the total photoionizing emission (i.e. contributes more than 50\% of the cumulative total emission) only within $\sim2\,\rm h^{-1}Mpc$ of itself. The exception is the highest redshift ($z=12.9$), in which case the surrounding cluster of sources, rather than the central source, dominates the total flux even within $1\,\rm h^{-1}Mpc$ of the peak. The reason for this is the nearby presence of another halo of almost the same mass as the central one, which shortly thereafter merges with it. Beyond that distance the cumulative emission is dominated by the contribution of the small halos. Within $10\,\rm h^{-1}Mpc$ the central source contributes only 10-30\% of the total emission. As more and more halos form and the lowest-mass ones become relatively common, the fractional contribution of the most luminous source to the total emissivity gradually decreases in time for all radii larger than a few comoving Mpc. At $30\,\rm h^{-1}Mpc$ the central source is outshone by the rest by 1.5-2 orders of magnitude. As a fraction of the total emissivity of all sources in our computational volume (shown in the last bin of the histogram), the most luminous one contributes only $\sim2\%$ of the total at $z=12.9$, decreasing to $\sim0.1\%$ at $z=6.25$.
\begin{figure} \includegraphics[width=3in]{ave_profiles_z6.6.eps} \caption{Spherically-averaged profiles of (bottom to top on the left/right): the comoving number density of hydrogen (in ${\rm cm}^{-3}$; short-dashed, blue), the neutral hydrogen fraction (solid, red), the radial velocity, $v_r$ (in $\rm 100\, km\,s^{-1}$; long-dashed, green), and the continuum optical depth at $h\nu=13.6$~eV integrated from the source ($R=0$) outward, all at redshift $z=6.6$ ($x_m=0.99$), vs. $R$, the comoving radial distance from the most massive galaxy. \label{spher_aver}} \end{figure} In Figure~\ref{emiss_fig} we also indicate the current H~II region mean radius (vertical lines of corresponding line types and colors). Within its own bubble the central source contributes $\sim50\%$ of the emission at $z=12.9$, decreasing to $\sim10\%$ at $z=7$ and $\sim1\%$ at $z=6.25$. Recently, \citet{2005ApJ...625....1W} presented a semi-analytical model of the source clustering at high redshift. They found that a central galaxy at $z=7$ with a velocity dispersion similar to that of our most massive source ($\sigma_V\approx200\rm\,km\,s^{-1}$) contributes $\sim40-70\%$ of the H~II region radius, or $\sim10-35\%$ of its volume (see their Fig. 2, right panel), a factor of a few larger than our results, where we find that the central galaxy contribution to the total flux is $\sim1-10\%$. This discrepancy is most probably due to an underestimate of the source clustering in their bias model, and possibly also to differences in the assumed source efficiencies. Furthermore, we have only considered a single massive source, which of course does not account for random statistical variations from source to source. Nevertheless, this result underlines the importance of considering the luminous sources individually (as opposed to considering an averaged ``mean'' source) and of using simulations in order to account for nonlinear bias effects. \subsection{IGM Environment of luminous sources at high redshift} \begin{figure} \includegraphics[width=3.2in]{deltav_r_z6.eps} \caption{Mean radial velocity at $z=6$ of the IGM with respect to the source (red solid line; negative means towards the halo, positive away from it) and its rms variation (error bars), both plotted vs. (comoving) radius from the source. \label{vel_r_fig}} \end{figure} The Ly-$\alpha$ absorption is strongly influenced by the local IGM environment -- density, velocity and ionization structure -- around the source. As an illustrative example, let us consider a particular redshift, $z=6.6$, similar to the highest redshifts at which there are currently Ly-$\alpha$ source detections. At this time the universe in our simulation is already highly-ionized, with a mean mass-weighted ionized fraction of $x_m=0.939$, and a volume ionized fraction of $x_v=0.925$. In Figure~\ref{spher_aver} we show the spherically-averaged profiles around the central source of the gas number density, $n$, neutral fraction, $x_{\rm HI}$, integrated continuum optical depth ($\tau$, at $h\nu=13.6$~eV) and radial velocity, $v_r$. The radiative transfer cell which contains the central source is highly-overdense, with $\delta=n/\bar{n}-1=256$, which indicates that this cell roughly coincides with the source halo itself. The radial density profile declines steeply away from the source. The overdensity is $\delta=3.3$ at $R\sim1 \rm \,h^{-1}Mpc$, decreasing to $\delta=1$ at $R\sim2.5 \rm \,h^{-1}Mpc$, and approaching the mean density at distances beyond $R\sim10\rm \,h^{-1}Mpc$.
The radial velocity profile shows an extended ($R\sim20\,\rm h^{-1}Mpc$) infall region, with the mean radial velocity peaking at $\sim150\,\rm km\,s^{-1}$ before dropping to zero inside the source halo itself. The proximity region of the central source ($R<15\rm\,h^{-1}Mpc$) is highly-ionized, with neutral fraction $x_{\rm HI}<10^{-4}$. The rest of the volume still has an appreciable neutral fraction ($\sim0.1-1\%$), however, and is thus on average still optically-thick, with the mean optical depth reaching $\tau=63$ at $R=50\rm\,h^{-1}Mpc$. However, the spherically-averaged quantities provide only limited information about the state of the IGM surrounding each source. All quantities are distributed highly-anisotropically, and thus affect the Ly-$\alpha$ emission differently along each LOS. In particular, the effect of the relative velocities of the IGM and the source is relatively poorly studied at present. An optically-thick medium at rest with respect to a Ly-$\alpha$ source would absorb the blue wing of the line and transmit its red wing, at longer wavelengths than the line centre at $\lambda_0=1215\,$\AA\, \citep[e.g.][]{1998ApJ...501...15M}. A relative motion of the IGM gas and the source along the LOS would result in either more or less transmission, depending on the direction of the motion. E.g. gas infall towards the source along the LOS from the observer would redshift the gas relative to the source, resulting in some absorption of the red wing of the line. We note that in order to evaluate these velocity effects with any precision, much higher resolution simulations (and ones including outflows) will be required. Our radiative transfer grid has a cell size of $\sim0.5/h$~Mpc comoving, or $\sim70/h$~kpc physical at $z=6$, which roughly corresponds to the size of the largest halos found in our box. The velocity and density fields used are at twice that resolution ($\sim0.25/h$~Mpc comoving). Therefore, our results here should be considered as guidance, illustrating that the velocity effects are quite important and should not be ignored. In Figure~\ref{vel_r_fig} we show the average IGM gas velocity relative to the source (line) vs. distance from it, and the variance of that average velocity (error bars). As noted above, in the vicinity of the source the IGM on average infalls towards the halo. However, there are large variations, of order hundreds of $\rm km\,s^{-1}$, around this mean. E.g. a velocity offset of 200 $\rm km\,s^{-1}$ at $z=6$ corresponds to $\Delta\lambda\sim6\,$\AA\ (since $\Delta\lambda=\lambda_0(1+z)\Delta v/c$), of the same order as the typical observed line widths, and thus a relative motion of the IGM and the source of this order could have a very significant effect on the observed line. In the next section we will quantify the effect of peculiar velocities on the Ly-$\alpha$ line. \begin{figure*} \includegraphics[width=3.2in]{tau12.915.ps} \includegraphics[width=3.2in]{spec12.915.ps} \includegraphics[width=3.2in]{tau9.034.ps} \includegraphics[width=3.2in]{spec9.034.ps} \caption{Sample LOS at redshifts $z=12.9$ (top; $x_m=0.001$) and 9.0 (bottom; $x_m=0.28$) vs. $\lambda$/comoving distance from the most massive galaxy. Shown are (left panels) the optical depth (solid), neutral fraction $x_{\rm HI}=1-x$ ($\times10^5$; dotted) and density in units of the mean (dashed), and (right panels) the corresponding transmission. The vertical lines show the position of the central source (in redshift space, i.e. accounting for its peculiar velocity along the LOS). The horizontal lines on the left indicate the optical depth equivalent to 1\% transmission.
On the right, the shaded region is the transmission in the case where the unabsorbed spectrum is flat (the horizontal dotted line). \label{spectra}} \end{figure*} \section{Observability of high-$z$ Ly-$\alpha$ sources} \subsection{Absorption spectra of luminous sources} \label{spectra:sect} \begin{figure*} \includegraphics[width=3.2in]{tau8.072.ps} \includegraphics[width=3.2in]{spec8.072.ps} \includegraphics[width=3.2in]{tau7.042.ps} \includegraphics[width=3.2in]{spec7.042.ps} \caption{Same as Fig.~\ref{spectra}, but at redshifts $z=8.1$ (top; $x_m=0.62$) and 7.0 (bottom; $x_m=0.94$). \label{spectra2}} \end{figure*} Much of the information about high-redshift Ly-$\alpha$ sources and the IGM is based on absorption spectra. We thus start our discussion of the observable signatures by presenting and discussing some sample spectra intersecting the position of the most luminous source in our computational volume. In Figures~\ref{spectra}-\ref{spectra4} we show sample spectra along three random lines-of-sight at a few selected redshifts spanning the complete range of interest here. On the left panels we show the distributions of the Ly-$\alpha$ Gunn-Peterson optical depth, $\tau_{\rm GP}$, neutral fraction, $x_{\rm HI}=1-x$ (multiplied by $10^5$ for clarity), and gas density in units of the mean, $\Delta=n/\bar{n}$. On the right panels we show the corresponding Gunn-Peterson transmission spectra for a continuum flux level of unity, $\exp(-\tau_{\rm GP})$. The horizontal lines on the left panels indicate the optical depth of $\tau=4.6$, which corresponds to 1\% transmission. For reference, this value is roughly equal to the optical depth of hydrogen gas with a neutral fraction of $x_{\rm HI}=10^{-5}$ at the mean density at redshift $z=6.6$. All quantities shown are in redshift/wavelength space and in the observer ($z=0$) frame. For reference, on the top axis of each figure we show the approximate corresponding distances in real space. Finally, in Figure~\ref{meanF_fig} we show the mean transmission spectra, averaged over all random LOS, at the same redshifts, for the most massive source (left) and the mean over all sources (right). The Lyman-$\alpha$ absorption as a function of wavelength is computed using the standard procedure \citep[e.g.][]{1998MNRAS.301..478T}. The optical depth and transmission results include the redshift-space distortions due to the local peculiar velocities, relative to the peculiar velocity of the source (i.e., after applying peculiar velocity distortions, the whole spectrum has been shifted slightly so the source is returned to its real-space position). The temperature of the gas is assumed to be $10^4$ K when computing thermal broadening, consistent with the assumption adopted for the simulations. The nominal resolution of our spectra is $R\sim6000-12000$ at $z=6$ (higher at higher redshifts), based on our grid resolutions of $203^3$ (radiative transfer) and $406^3$ (density and velocity fields). This roughly corresponds to the resolution of medium-resolution observed spectra. In reality the situation is more complicated. Due to the non-linear transformations between our raw simulation data and the final spectra, the limited simulation resolution can affect the results even if it were better than the observational resolution.
A separate issue pointing in the same direction is that real data have effectively infinite resolution in the transverse direction (i.e., the width of the light beam); thus, to be accurate, we would have to resolve essentially all of the transverse structure, which is not the case for the current simulations. As a result, our spectra should not be considered completely realistic predictions, but rather as guidance showing some important features to be expected from real spectra, as discussed below. Future higher-resolution, more detailed simulations will be better suited to make realistic predictions of the actual detailed spectral properties. \begin{figure*} \includegraphics[width=3.2in]{tau6.585.ps} \includegraphics[width=3.2in]{spec6.585.ps} \includegraphics[width=3.2in]{tau6.254.ps} \includegraphics[width=3.2in]{spec6.254.ps} \caption{Same as Fig.~\ref{spectra}, but at redshifts $z=6.6$ (top; $x_m=0.99$) and 6.25 (bottom; $x_m=0.998$). \label{spectra3}} \end{figure*} \begin{figure*} \includegraphics[width=3.2in]{tau6.000.ps} \includegraphics[width=3.2in]{spec6.000.ps} \caption{Same as Fig.~\ref{spectra}, but at redshift $z=6.0$ ($x_m=0.9999$). \label{spectra4}} \end{figure*} At early times ($z=12.9$; Figure~\ref{spectra}, top) the H~II region surrounding the most massive source is still quite small (see also Figs.~\ref{peak_evol}, \ref{hist_fig} and \ref{emiss_fig}) and the Ly-$\alpha$ emission of the source is completely suppressed by the damping wing of the Ly-$\alpha$ line profile, rendering it unobservable. The ionized region grows quickly after that and by $z=9$ reaches a size of $\sim6\rm\,h^{-1}Mpc$ (Figure~\ref{spectra}, bottom). Nevertheless, the Ly-$\alpha$ optical depth even within the source proximity region remains quite high, at a few up to $\sim10$, which allows only very weak transmission through. The damping wing weakens slightly compared to the higher redshifts, but is still quite substantial, and still depresses most of the red wing of the emission line. Some of the continuum immediately behind the source in redshift space (within a few \AA) is absorbed due to gas infall towards the density peak. Because of this additional velocity towards the source, the gas in front of it is slightly redshifted and thus absorbs at wavelengths redward of the source position. The line center at the luminous source's position is completely dark due to the high density in the middle of the density peak, despite the very low neutral fraction there. These features persist throughout the evolution and are very characteristic of all luminous sources, since they are always associated with high density peaks and surrounded by infall. At redshift $z=8.1$ (Figure~\ref{spectra2}, top) a weak damping wing is still present and essentially no transmission occurs on the blue side of the line. The sources at this time may potentially be visible with very deep observations. Only by redshift $z=7$ (Figure~\ref{meanF_fig}) is the ionized region sufficiently large for the damping wing to effectively disappear. However, the gas in the ionized region still has a significant neutral fraction, and is sufficiently dense to absorb all photons on the blue side of the Ly-$\alpha$ line along most LOS. On average a weak transmission at the few percent level starts coming through in the proximity region of the source (Figure~\ref{meanF_fig}).
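To make the standard spectrum computation described above concrete, the following is a minimal sketch of the optical depth and transmitted-line calculation; it assumes pure thermal broadening at $10^4$~K, ignores the Ly-$\alpha$ damping wings included in the full calculation, and uses our own (hypothetical) variable conventions:
\begin{verbatim}
import numpy as np

C = 2.9979e10        # speed of light [cm/s]
SIGMA0 = 1.105e-2    # pi e^2 f / (m_e c) for Ly-alpha [cm^2 Hz]
NU0 = 2.466e15       # Ly-alpha frequency [Hz]
B_TH = 1.29e6        # thermal b-parameter at 10^4 K [cm/s]

def tau_lya(u, n_hi, v_hub, v_pec, dl):
    """Ly-alpha optical depth at observed velocity u [cm/s] along one
    LOS: a sum of thermally broadened Gaussian profiles over cells with
    proper neutral density n_hi [cm^-3], Hubble velocity v_hub,
    peculiar velocity v_pec (all cm/s) and proper path length dl [cm].
    For Ly-beta/gamma, scale SIGMA0 down by the factors 6.2 and 17.9
    quoted in Sec. 'Evolution of the mean transmissivity'."""
    v_cell = v_hub + v_pec                       # redshift-space position
    phi = np.exp(-((u - v_cell) / B_TH)**2) / (np.sqrt(np.pi) * B_TH)
    return np.sum(SIGMA0 * (C / NU0) * n_hi * dl * phi)

def transmitted_line(u_grid, tau_grid, sigma_line=1.6e7):
    """Gaussian intrinsic line (rms 160 km/s, unit peak) times e^-tau."""
    return np.exp(-0.5 * (u_grid / sigma_line)**2) * np.exp(-tau_grid)
\end{verbatim}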
As more and more sources form and the ionizing flux rises, the first transmission gaps start to appear at $z<7$, both in the mean IGM away from the peak and in the source proximity (Figure~\ref{spectra3}). At redshift $z=6.6$, our nominal overlap time (Figures~\ref{spectra3} and \ref{meanF_fig}), the proximity region extends for $\sim30$~\AA\ and has become fairly optically-thin, allowing up to 30-40\% transmission. Most of the volume still remains optically-thick, but some substantial transmission regions appear in the IGM away from the peak. We also note that there are significant variations between the different LOS, with some allowing much more transmission than others. The size and properties of the proximity region also vary due to its asymmetry and the anisotropy of nearby structures. Finally, during the post-overlap epoch (Figures~\ref{spectra3}, bottom, \ref{spectra4} and \ref{meanF_fig}) the IGM slowly becomes more optically-thin to Ly-$\alpha$ and gradually approaches the state of the Ly-$\alpha$ forest. There are no more clearly-defined H~II bubbles. Only a few isolated low-density regions remain neutral. This is due to the inside-out character of the reionization process, whereby the high-density regions are preferentially ionized first, while the voids, where structure formation is delayed, are ionized last. \begin{figure*} \includegraphics[width=3.2in]{meanF.ps} \includegraphics[width=3.2in]{allmeanF.eps} \caption{Mean radially-averaged transmission around the most massive source (left) and averaged over all sources (right) at several representative redshifts, as labelled. The corresponding ionized fractions by mass are $x_m=0.001, 0.105, 0.279, 0.618, 0.939, 0.992, 0.9986$ and 0.9999. \label{meanF_fig}} \end{figure*} Comparing the radially-averaged mean transmission around a luminous source (high density peak) and the average for all sources (i.e. the mean behaviour around a typical source; Figure~\ref{meanF_fig}), we see both some similarities and several notable differences. The mean damping wing has a similar evolution with redshift in the two cases: strong at high redshift ($z>10$), gradually becoming weaker ($z\sim8-9$) and finally disappearing at later times ($z<7$). Naively, one might expect that compared to an average source the luminous ones would suffer from weaker damping, since such sources typically reside in the middle of large H~II regions, far from their boundaries. In fact, this expectation proves only partially correct. The damping is somewhat more pronounced around an average source than around a luminous one, but the differences we observe are rather modest, reflecting the fact that weaker sources are strongly clustered around the same density peaks that host the luminous ones, and thus largely share the damping (or lack thereof) with the central source. For the same reason the damping becomes irrelevant at about the same time in both cases, which would not have been the case if the weak sources were residing in smaller, isolated bubbles. The infall around the high density peak leads to some redshifted absorption that appears redward of the redshift-space position of the source. The same behaviour is not seen for a typical source. Such smaller halos tend to move more in tandem with their surrounding IGM, often towards the nearest high density peak. Some local infall should exist for these halos as well, but this is at very small scales, unresolved here.
However, these scales are small compared to the typical emission line width (see next section) and thus we do not expect such local infall to have a significant effect on the emission line. One final important difference between a luminous and an average source is that the latter does not typically have a proximity transmission region on the blue side of the line. The spectra of the luminous sources, on the other hand, exhibit extended high-transmission (10-60\% transmission) regions within $5\,\rm h^{-1}Mpc$ ($\sim30$~\AA). This behaviour is again due to the high source clustering around the density peak. The combined effect of all sources is to achieve a much lower neutral gas fraction despite the higher gas density there. The line center coinciding with a high density peak remains optically-thick, however, unlike the line center of a typical source. Away from the proximity region the absorption is largely saturated, but there are a number of transmission gaps with up to a few per cent transmission. Future work will quantify the statistics of these features and their evolution. \begin{figure*} \includegraphics[width=3.2in]{line8.072.eps} \includegraphics[width=3.2in]{tauline8.072.eps} \caption{Emission lines: (left panels) the intrinsic line, assumed to be a Gaussian with rms width of $160\,\rm km\,s^{-1}$ (black, top), and the transmitted one (red, bottom) for three sample LOS at redshift $z=8.1$ ($x_m=0.62$), and (right panels) the corresponding optical depth (solid, black), neutral fraction $x_{\rm HI}=1-x$ ($\times10^5$; dotted, red) and density in units of the mean (dashed, green), all in redshift space (i.e. accounting for the relative velocities). The intrinsic emission line is also shown for reference. \label{line8}} \end{figure*} \begin{figure*} \includegraphics[width=3.2in]{line6.585.eps} \includegraphics[width=3.2in]{tauline6.585.eps} \caption{Same as Fig.~\ref{line8}, but at redshift $z=6.6$ ($x_m=0.99$). \label{line6.6}} \end{figure*} \subsection{Emission line shape and its evolution} In order to study the effect of IGM absorption on the line profile shape, we model the intrinsic Ly-$\alpha$ line as a Gaussian with an rms width of $160\,\rm km\,s^{-1}$ and peak amplitude normalized to unity. In Figures~\ref{line8}, \ref{line6.6} and \ref{line6} we show sample results for several LOS through the most luminous source at redshifts $z=8.1$, 6.6 and 6, respectively. These examples are picked to illustrate the typical cases for the observed line shape. On the left panels we show the assumed intrinsic (black) and transmitted (red) emission lines, while on the right panels we show the corresponding distributions of the Ly-$\alpha$ Gunn-Peterson optical depth, $\tau_{\rm GP}$, neutral fraction, $x_{\rm HI}=1-x$ (multiplied by $10^5$ for clarity), and gas density in units of the mean, $\Delta=n/\bar{n}$, and again (for reference) the intrinsic emission line. \begin{figure*} \includegraphics[width=3.2in]{line6.000.eps} \includegraphics[width=3.2in]{tauline6.000.eps} \caption{Same as Fig.~\ref{line8}, but at redshift $z=6$ ($x_m=0.9999$). \label{line6}} \includegraphics[width=3.2in]{meanFline.ps} \includegraphics[width=3.2in]{allmeanFline.eps} \caption{Evolution of the mean emission lines for the most massive source (left panels) and averaged over all sources (right panels).
Shown are the intrinsic emission line (dotted, red; assumed to be a Gaussian with rms width of $160\,\rm km\,s^{-1}$, normalized to one at the peak), the transmitted line (solid, red), and the mean absorption (solid, black) at several redshifts, as labelled. \label{meanFline_fig}} \end{figure*} At redshift $z=8.1$ (Figure~\ref{line8}) the emission line shape is fairly regular and does not vary strongly for different LOS. The blue wing is generally highly absorbed and no appreciable flux comes through, and much of the red wing is absorbed as well. The reasons for this behaviour become clear from the right panels. The neutral fraction is fairly uniform throughout this region, at $x_{\rm HI}\sim4\times10^{-5}$; thus the GP optical depth largely follows the local density field. The high density of the peak and its immediate vicinity results in optical depths of $\tau_{\rm GP}>10$ everywhere and $\tau>100$ at the peak itself. The gas infall towards the peak leads to a significant absorption of the red wing. The only transmitted flux comes from the far red side of the line (slightly depressed by the weak remaining damping wing). At overlap (1\% global neutral fraction by mass; $z=6.6$) the line shape becomes much more irregular and varies significantly between the different LOS (Figure~\ref{line6.6}). A significantly larger fraction of the flux is transmitted, both on the red and on the blue side of the line. The neutral fraction in the vicinity of the source is still fairly uniform, but much lower than at the earlier time, at $x_{\rm HI}\lesssim10^{-5}$. Thus the GP optical depth again largely follows the density distribution. The effect of the gas infall towards the peak is still present and some of the red wing is absorbed, but much less so than at higher redshifts, since the infalling gas is more highly-ionized. The line center remains absorbed, while all damping wing effects have disappeared. After overlap (Figure~\ref{line6}) the neutral fraction gradually declines, decreasing the optical depth and allowing ever more flux to be transmitted. The neutral fraction is still fairly uniform and thus the optical depth mostly follows the density fluctuations. The line shapes remain rather irregular, with significant variations between the different LOS. There is transmission in both the red and the blue wing of the line. In Figure~\ref{meanFline_fig} we show the evolution of the mean (i.e. averaged over all LOS) observed emission line shape for the most luminous source in our volume (left) and averaged over all sources (right). In both cases the line starts completely damped ($z=12.9$). The later evolution of the mean line shape differs significantly, however. By redshifts $z=9-10$ a significant fraction of the red wing of the line is transmitted for the luminous source, except for the line center, while much less of the red wing is transmitted for an average source. At later times ($z\sim7-8$) this situation is reversed: practically all of the red wing of the line is transmitted for an average source, but much of the flux is still absorbed for the luminous source, due to the high density peak in the middle and its surrounding infall. As a word of caution we should note that some of this effect is in fact not physical but numerical, since our simulations do not resolve well the detailed structure around the smaller halos.
However, as we also mentioned above, these resolution effects should be modest, considering that the emission line is fairly wide and thus reasonably well-resolved, and any corrections due to smaller-scale structure will not affect much of the line. On the blue side of the line there are further important differences between the luminous and average sources. The strong clustering of sources around the density peaks results in very high fluxes and thus a more pronounced highly-ionized proximity region blueward of the line center. Thus, significantly more of the blue wing of the luminous source line is transmitted, up to 10\% on average at $z=6$, vs. only $\sim1-2\%$ for an average source. An interesting consequence of the very high absorption observed at the line center for massive sources and the redshift-space distortions due to the gas infall (the latter similar to the one studied theoretically in a more idealized setup by \citet{2004ApJ...601...64B}) is that the Ly-$\alpha$ line naturally takes on a double-peaked (or, in some cases, multiple-peaked) observed profile. This suggests that in principle it might be possible to use the line profiles of bright Ly-$\alpha$ sources to study the infall surrounding their host halos. In practice this might be difficult due to a number of complications. The line structure is quite different along different LOS, partly (as we pointed out above in \S~3.4) due to the very anisotropic velocity structure surrounding the source, as well as its own peculiar motion. Furthermore, our analysis only takes into account the effects of the IGM on the line shape, while realistic lines will also be affected by the host galaxy's internal structure, outflows, etc. Modelling all those effects correctly will require high-resolution radiative-hydrodynamic simulations, which is well beyond the scope of this work. \subsection{Evolution of the mean transmissivity} \label{meanFabg_sect} Figure~\ref{meanFabg} shows the mean transmission fraction as a function of redshift, for the Lyman-$\alpha$, $\beta$, and $\gamma$ transitions. The latter two are, respectively, 6.2 and 17.9 times weaker than Lyman-$\alpha$, so transmission can be seen in cases where Lyman-$\alpha$ would be opaque \citep{2006AJ....132..117F}. At the low redshift end of our simulation data these quantities have been observed in the highest redshift quasar spectra \citep{2006AJ....132..117F}. Fig.~\ref{meanFabg} also shows the measurements of \citet{2006AJ....132..117F} (points with error bars). At $z>6$, the Ly-$\alpha$ and Ly-$\beta$ transmission measurements are only upper limits. It is unclear exactly what error bar should be assigned to the Ly-$\gamma$ point. \citet{2006AJ....132..117F} presented two upper limits and a detection in Ly-$\gamma$, which, taken together, support a detected mean transmission level (plotted in the figure) well below our prediction, with relatively small ($\sim 40$\%) errors; however, with only these three points we cannot be sure that the sample variance isn't substantially larger. It appears that the reionization model in our simulations reproduces roughly the correct tail-end of reionization, but the data favor somewhat less transmission than is present in the model, i.e., a weaker radiation background and possibly slightly later reionization. \begin{figure} \includegraphics[width=3.in]{meanFabg.eps} \caption{Overall mean transmission fraction (not necessarily near a source) in Lyman-$\alpha$ (solid, black), Lyman-$\beta$ (dashed, red), and Lyman-$\gamma$ (dotted, green).
The points with horizontal error bars show the measurements of \citet{2006AJ....132..117F}. The lower (black) points with vertical error bars show Ly-$\alpha$, the upper (red) ones show Ly-$\beta$, while the green dot shows Ly-$\gamma$ (see text). \label{meanFabg}} \end{figure} \begin{figure} \includegraphics[width=3.in]{PDF_Gamma_f250C.eps} \caption{PDF of the photoionization rate at $z=10.1$ (blue; $x_m=0.105$), $z=7.0$ (red; $x_m=0.94$), and $z=6.0$ (green; $x_m=0.9999$). We show the actual, non-equilibrium rates (solid) and the corresponding equilibrium rates (dotted, same color at each redshift). All PDFs are normalized to have an area of unity below the curve. \label{Gamma_pdf}} \end{figure} \subsection{Photoionization Rates} \label{photoion_rates_sect} In Figure~\ref{Gamma_pdf} we show the normalized probability density distributions (PDFs) of the nonequilibrium photoionization rates for all cells in our computational volume at three representative redshifts, $z=10.1$ ($x_m=0.105$), 7.0 ($x_m=0.94$) and 6.0 ($x_m=0.9999$) (early times, late times, and well after overlap), in linear (top) and log (bottom) scales. For comparison we also plot the photoionization rates if the corresponding cells were in ionization equilibrium. \begin{figure*} \includegraphics[width=3.2in]{contours_ionrate_dens_z9.eps} \includegraphics[width=3.2in]{contours_ionrate_dens_z6.eps} \vspace{-0.5cm} \caption{Photoionization rate--overdensity correlation at $z=9$ (left; $x_m=0.28$) and $z=6$ (right; $x_m=0.9999$). Contours are logarithmic, starting at 10 cells and spaced every 0.5 dex. \label{Gamma_distr}} \includegraphics[width=2.3in]{lum9.034.eps} \includegraphics[width=2.3in]{lum7.042.eps} \includegraphics[width=2.3in]{lum6.000.eps} \caption{ (bottom panels) Ly-$\alpha$ luminosity function of high-redshift sources without (black) and with absorption included (red) at redshifts $z=9$ (left; global mass-weighted ionized fraction $x_m=0.28$), $z=7$ (middle; $x_m=0.94$) and $z=6.0$ (right; $x_m=0.9999$). For reference, the green, dotted line shows the result if each source is assumed 50\% absorbed, which would be the case if e.g. all of the blue wing of the emission line were absorbed, while all of the red wing were transmitted. The error bar in each bin reflects the number of sources in that bin found in our computational volume. (top panels) Bin-by-bin ratio of the observed to the intrinsic luminosity function. \label{lum_funct:fig}} \end{figure*} The right peak of each PDF reflects the most common photoionization rate values in the ionized regions. The peak position remains at $\Gamma_{-12}\sim1$ throughout the evolution, with only slight shifts. The distributions are quasi-Gaussian, but with long non-Gaussian tails at both high and (especially) low values of $\Gamma$. As could have been expected, the fraction of high-$\Gamma$ cells, all of which either contain sources or are in the immediate vicinity of a source, grows strongly with time, as ever more sources form. The highest photoionization rate values we find reach $\Gamma_{-12}\sim10^3-10^4$. This peak value rises over time, as a consequence of the growth of galaxies and the large number of sources forming in and around the density peaks. In the ionized regions the equilibration time is short and photoionization equilibrium is generally a good approximation. The cells with lower values of $\Gamma$ (below $\Gamma_{-12}\sim0.01-0.1$) correspond to the ionization fronts or neutral regions.
There are many more cells in I-fronts at $z=10.1$ than at later times, when most of the IGM is already ionized. The equilibrium and actual rates differ widely in those regions, indicating that the assumption of ionization equilibrium would be a very poor approximation there. The photoionization rate--density correlations at $z=9$ and $z=6$ are shown in Figure~\ref{Gamma_distr} as contour plots. At high densities there is a clear, and fairly tight, correlation between the density and the photoionization rate. This reflects the fact that many more sources form in overdense regions, resulting in higher photoionization rates. At densities just above the mean the correlation becomes much less tight, and around the mean density it becomes nonexistent. At high redshifts regions with $1+\delta\sim1$ can have any value of $\Gamma_{-12}$ from $10$ (for cells close to sources) down to essentially 0 (in neutral and shielded cells). There are three broad peaks of the distribution, at $\Gamma_{-12}\sim1$ (the H~II regions), $\Gamma_{-12}\sim10^{-3}$ (cells at and around I-fronts), and $\Gamma_{-12}\sim0$ (neutral regions). By $z=6.0$, which is well after overlap, both the neutral and self-shielded regions and the I-fronts have mostly disappeared and $\Gamma_{-12}>0.1$ almost everywhere, rising with time as more galaxies form. These values are fairly high compared to the photoionization rate values found from the Lyman-$\alpha$ forest at $z\sim2-4$ \citep{2002ApJ...570..457C,2004ApJ...617....1T,2006AJ....132..117F, 2007astro.ph..3306B}, in agreement with the high mean transmitted flux we found in \S~\ref{meanFabg_sect}. Both point to somewhat lower ionizing source efficiencies than the ones assumed here and to a correspondingly later end of reionization. \subsection{Luminosity function of high-z Ly-$\alpha$ sources} The luminosity function is an important statistical measure of the properties of high-redshift galaxies. It depends on both the intrinsic luminosity of the galaxies and the absorption in the surrounding IGM. In Figure \ref{lum_funct:fig} we show our results for the size of the effect of absorption on the luminosity function of high-$z$ objects. For this fiducial case we assume that the Ly-$\alpha$ luminosity is simply proportional to the mass of our halos (similar to our model for the ionizing sources). We compute the reduction in luminosity of each halo due to absorption (Figures~\ref{spectra}-\ref{spectra4} show examples of this suppression). We assume that the intrinsic Lyman-$\alpha$ emission line is a Gaussian with an rms of $160\,\rm km\,s^{-1}$. The figure shows the luminosity function of high-redshift sources without absorption (in this fiducial case just the halo mass function) and with absorption included, at redshifts $z=9$, 7 and 6, together with the reference case in which each source is assumed 50\% absorbed, as would occur if e.g. all of the blue wing of the emission line were absorbed while all of the red wing were transmitted. The top panels show the bin-by-bin ratios of the observed to the intrinsic luminosity function. Note that due to the binning at fixed luminosity (intrinsic or observed) this ratio is not the same as the average suppression per source of a given mass (i.e. the absorption shifts the curve both down and to the left).
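To make this construction concrete, the following Python sketch illustrates how the intrinsic and observed luminosity functions and their bin-by-bin ratio are assembled under the fiducial assumption $L\propto M$. This is not our analysis code: the halo masses and per-source transmission fractions below are toy stand-ins for the simulation outputs. \begin{verbatim}
import numpy as np

# Toy stand-ins for the simulation outputs: halo masses (M_sun) and a
# per-source IGM transmission fraction f in [0, 1].
rng = np.random.default_rng(0)
masses = 10.0 ** rng.uniform(9.5, 12.0, size=10000)
f_trans = rng.uniform(0.2, 0.8, size=masses.size)

L_int = masses               # fiducial case: L proportional to halo mass
L_obs = f_trans * L_int      # luminosities after IGM absorption
L_50 = 0.5 * L_int           # reference case: each source 50% absorbed

bins = np.logspace(9.5, 12.0, 26)
phi_int, _ = np.histogram(L_int, bins=bins)
phi_obs, _ = np.histogram(L_obs, bins=bins)

# Bin-by-bin ratio of the observed to the intrinsic luminosity function
# (binned at fixed luminosity, so this is not the mean suppression per
# source of a given mass).
ratio = np.divide(phi_obs.astype(float), phi_int,
                  out=np.zeros(phi_int.size), where=phi_int > 0)
\end{verbatim}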
\begin{figure*} \includegraphics[width=3.2in]{lum_nopecv8.072.eps} \includegraphics[width=3.2in]{lum_nopecv6.000.eps} \caption{Luminosity function at redshifts $z=8.1$ (left; $x_m=0.62$) and $z=6$ (right; $x_m=0.9999$) if peculiar velocities are ignored (blue, long-dashed). For reference, we also show the data from Fig.~\ref{lum_funct:fig}, same notation. \label{lum_funct_nopecv:fig}} \end{figure*} \begin{figure*} \includegraphics[width=3.2in]{lum_varwid8.072.eps} \includegraphics[width=3.2in]{lum_varwid6.000.eps} \caption{Luminosity function at redshifts $z=8.1$ (left; $x_m=0.62$) and $z=6$ (right; $x_m=0.9999$) for variable emission line width (blue, long-dashed; see text for details). For reference, we also show the data from Fig.~\ref{lum_funct:fig}, same notation. \label{lum_funct_varwid:fig}} \end{figure*} The luminosity function exhibits clear evolution from high to low redshift. As we discussed in \S~\ref{spectra:sect}, at high redshift the damping wings are strong and thus not only the blue side, but also a significant part of the red side of the line is absorbed, as evidenced by the significant difference between the dashed and dotted lines. As a result of this absorption, the number of sources per luminosity bin drops by one to two orders of magnitude. During the later stages of reionization the damping wing effectively disappears. As a consequence, the change in the faint end of the luminosity function due to IGM absorption is on average well-represented by simply reducing each source luminosity by 50\%, which would be the case if the blue half of the line were absorbed and the red half were not. The situation is different at the bright end of the luminosity function, however, where on average significantly more than half of the intrinsic flux is absorbed at both $z=7$ and $z=6$. The shape of the luminosity function shows some evolution as well, which is in part due to an evolution in the shape of the halo mass function and in part to the higher mean absorption for the more massive sources. The higher average absorption levels for the luminous sources are a consequence of the infall which surrounds the high density peaks they are in. In order to demonstrate this, we re-calculated the luminosity function at $z=8.1$ and $z=6$ using exactly the same data, but setting all peculiar velocities to zero. The results are shown in Figure~\ref{lum_funct_nopecv:fig}. With no peculiar velocities present, the intrinsic emission of all sources is absorbed on average at roughly the same level, by a factor of $\sim 10$ at $z=8.1$ and by a factor of 2 at $z=6$. The resulting luminosity function at late times agrees well with the one where we simply assumed 50\% absorption. Early on this is not the case, as a consequence of the still-present damping wing. \begin{figure*} \includegraphics[width=2.3in]{supdist9.034.eps} \includegraphics[width=2.3in]{supdist7.042.eps} \includegraphics[width=2.3in]{supdist6.000.eps} \caption{Transmission fraction as a function of luminosity at redshifts $z=9.0$ ($x_m=0.28$; left), $z=7.0$ ($x_m=0.94$; center) and $z=6.0$ ($x_m=0.9999$; right). Shown are (dashed lines, top to bottom) the 0.023, 0.16, 0.5, 0.84, and 0.977 percentiles (e.g., 2.3\% of points have suppression less than the uppermost line). The center line is the median of all the LOS. We required at least 50 LOS in each bin for sampling the distribution properly. There are 10 random LOS per source. We also plot the mean (solid line) for all LOS in each bin.
\label{sup_dist:fig}} \end{figure*} \begin{figure*} \vspace{-0.3cm} \includegraphics[width=2.3in]{supdist_nopecv9.034.eps} \includegraphics[width=2.3in]{supdist_nopecv7.042.eps} \includegraphics[width=2.3in]{supdist_nopecv6.000.eps} \caption{Same as Fig.~\ref{sup_dist:fig}, but ignoring any peculiar velocities of the halos and the IGM. \label{sup_dist_nopecv:fig}} \end{figure*} \begin{figure*} \includegraphics[width=2.3in]{supdist_varwid9.034.eps} \includegraphics[width=2.3in]{supdist_varwid7.042.eps} \includegraphics[width=2.3in]{supdist_varwid6.000.eps} \caption{Same as Fig.~\ref{sup_dist:fig}, but for a variable Ly-$\alpha$ emission line width, as discussed in the text. \label{sup_dist_varwid:fig}} \end{figure*} How important is our assumption that all sources have the same rms width of their intrinsic emission line profile? To check this, we replaced this assumption with one where the rms line width varies with the halo mass as $133\,{\rm km\,s^{-1}}\,(M/10^{11}M_{\odot})^{1/3}$ \citep{2007MNRAS.377.1175D}. Results, again at $z=8.1$ and $z=6$ and in head-to-head comparison with our fiducial case of fixed line width, are shown in Figure~\ref{lum_funct_varwid:fig}. The faint end of the luminosity function proves insensitive to the line width, which is easy to understand: as we have shown above, on average the IGM completely absorbs the blue half of the line for the weaker sources and completely transmits the red half. However, the variable line width has some effect on the absorption of bright sources. Their lines become wider under this assumption and thus are less affected by the absorption due to the infalling gas, resulting in higher transmission by a factor of $\sim2$. The probability distribution of the transmission fraction per source of a given mass/luminosity is shown in Fig.~\ref{sup_dist:fig}. There are 10 random LOS per source and we required at least 50 LOS in each bin for sampling the distribution properly. We also plot the bin-by-bin average transmission. For the luminosity bins which do not contain our minimum number of LOS we plot only the mean. Several interesting trends emerge. The distributions are fairly wide at all times and for all sources, reflecting the large variations in opacity from source to source and from LOS to LOS. The former is due to the different environments sources are found in, while the latter reflects the anisotropies around each source. At all redshifts the mean and the median curves are very similar for all bins, and are essentially identical at the bright end. The distribution itself changes its character as the evolution progresses. At early times the distribution is much wider for the fainter sources and the mean and median are gently rising towards the bright end, reflecting the fact that bright sources are found in the middle of larger H~II regions, while fainter sources are found in a variety of environments. Thus, during the early evolution the main factor shaping the distribution is the local variation of the neutral fraction around each source. At late times ($z<7$), however, the situation changes to the opposite, with the distribution becoming wider at the bright end and the mean and median decreasing there as well. By that time the IGM is already largely ionized and the main environmental dependence is due to the anisotropies of the density field and, even more so, of the infall around the bright peaks, as discussed above.
This is clearly demonstrated in Fig.~\ref{sup_dist_nopecv:fig}, where we show the distributions as they would be if there were no peculiar velocities. At early times the results are largely unchanged, while at late times the variations between the different LOS essentially disappear and all the curves become flat, showing that the distribution is shaped mainly by the effects of the peculiar velocities. Finally, in Fig.~\ref{sup_dist_varwid:fig} we show the effect of a varying intrinsic line width on the distributions. Compared to our fiducial case of constant line width, at high redshift the distributions are hardly affected, except for slightly higher absorption of the faintest sources. However, at later times the varying line width has more significant effects. At $z=7$ the mean and the median values for the majority of sources decrease from $\sim40-45\%$ down to $\sim30\%$, but the brightest sources are affected much less. As a result the curves for the mean and median become largely flat rather than decreasing towards the bright end. Furthermore, the distribution becomes wider at the faint end, with many more LOS being absorbed by a factor of 10 or more. A similar effect is seen at $z=6$, except in this case the brightest sources are even less absorbed due to their wider emission lines, in agreement with what we observed in the luminosity functions. \begin{figure} \includegraphics[width=3.2in]{lum_funct_vs_observ.eps} \caption{Simulated luminosity function at $z=6.6$ (dashed; $x_m=0.99$) vs. best fits of \citet{2006ApJ...637..631K} (at $z=6.56$) for (top to bottom) faint-end slopes of $\alpha=-2,-1.5,-1$ (solid). \label{lum_funct_vs_obs:fig}} \end{figure} Our derived luminosity functions assume a constant mass-to-light ratio and are in arbitrary units (halo mass $\times$ absorption by the IGM), proportional to a yet undetermined mass-to-observed-light ratio. We can roughly determine the latter by comparing the number densities of observed and simulated objects. \citet{2006ApJ...637..631K} currently provide the best set of data at $z>6$. They have fit it with a Schechter function, \begin{equation} \phi(L)dL=\phi_*\left(\frac{L}{L_*}\right)^\alpha \exp\left(-\frac{L}{L_*}\right)\frac{dL}{L_*}. \end{equation} Since the high-redshift data still have large uncertainties, particularly in terms of the faint-end slope, the data are fit by assuming $\alpha=(-2,-1.5,-1)$, with best-fit parameters at $z=6.56$ given by $\log(L_*/\rm h^{-2}_{70}\,erg\,s^{-1})=(42.74,42.60,42.48)$ and $\log(\phi_*/\rm Mpc^{-3}h^{2}_{70})=(-3.14,-2.88,-2.74)$, respectively. We plot these fits in Figure~\ref{lum_funct_vs_obs:fig} against our derived luminosity function. The latter was obtained by rescaling our arbitrary luminosity units to physical ones using a constant ratio, \begin{equation} L\,[{\rm erg\,s^{-1}}]=L\,[M_\odot]\times10^{30.9}, \label{ml_equ} \end{equation} so as to match it to the observed luminosity function for the same number densities of objects. The fit assuming $\alpha=-1.5$ provides by far the best match to our luminosity function. The two agree in both amplitude and shape over the whole available range. The differences at the luminous and the faint ends are both to be expected, the former due to cosmic variance and the latter due to both numerical resolution and lack of reliable observational data. Matching the other two faint-end slopes would require us to relax our constant mass-to-light ratio assumption.
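For concreteness, the quoted best fits are straightforward to evaluate. The short Python sketch below implements the Schechter form above with the $\alpha=-1.5$ parameters; the function name and the sample luminosity are ours, and the units follow the conventions quoted in the text. \begin{verbatim}
import numpy as np

# Best-fit Schechter parameters at z = 6.56 for alpha = -1.5 (see text).
alpha = -1.5
L_star = 10.0 ** 42.60        # erg/s, from log L_* = 42.60 (h_70^-2)
phi_star = 10.0 ** (-2.88)    # Mpc^-3 (h_70 conventions as quoted)

def schechter(L):
    """Differential number density phi(L), per unit luminosity."""
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * np.exp(-x)

# Example: number density per unit luminosity at L = 10^42 erg/s.
print(schechter(1.0e42))
\end{verbatim}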
Taking equation~\ref{ml_equ} at face value, we can now make an approximate correspondence between observed luminosities and masses of the underlying halos. The sources observed with Subaru at $z=6.56$ by \citet{2006ApJ...637..631K} have luminosities $\sim 10^{42}-10^{42.7}\,{\rm erg\,s^{-1}}\,h^{-2}$. This corresponds to the luminous end of our LF, for effective masses (halo mass $\times$ absorption by the IGM) of order $10^{11}M_\odot$ or larger, i.e. relatively rare halos. The observations are not yet sufficiently sensitive to detect the faint end, which contributes most of the ionizing emissivity during reionization. Hence, claims that observations show that there are not enough ionizing photons at $z\sim6$ to reionize the universe appear premature. \section{Correlation Functions} \begin{figure*} \vspace{-5cm} \includegraphics[width=4.2in]{corr_images_z9.eps} \includegraphics[width=2.3in]{corr_z9_9.5.eps} \caption{Projection of the sources as seen by a mock flux-limited survey with $L>10^{9.5}M_\odot$ (198 sources in total) at $z=9$ (left panel; $x_m=0.28$), sources with the same number density if the IGM absorption were ignored (middle panel), and the 2-point 3D correlation functions (right panels) of the distribution with IGM absorption (solid) and without (dashed), together with their ratio (top). \label{corr_z9}} \end{figure*} \begin{figure*} \vspace{-5cm} \includegraphics[width=4.2in]{corr_images_z7.eps} \includegraphics[width=2.3in]{corr_z7_10.eps} \caption{Projection of the sources as seen by a mock flux-limited survey with $L>10^{10}M_\odot$ (1617 sources in total) at $z=7$ (left panel; $x_m=0.94$), sources with the same number density if the IGM absorption were ignored (middle panel), and the 2-point 3D correlation functions (right panels) of the distribution with IGM absorption (solid) and without (dashed), together with their ratio (top). \label{corr_z7}} \end{figure*} As we discussed above, the high-redshift haloes are highly clustered in space, and Ly-$\alpha$ sources should be clustered as well. The latter clustering has recently been observed at $z\sim5.7$ \citep{2007ApJS..172..523M}. An interesting question to ask is whether the absorption due to the surrounding IGM affects this clustering. If this were the case, then measuring the correlation function of high-redshift Ly-$\alpha$ sources could give us information about the state of the IGM at that time. We derive the correlation functions as follows. First we calculate the total luminosity of each source with and without IGM absorption using the same method as above, but instead of random LOS directions we only consider parallel LOS, as would be seen by a faraway observer. For simplicity we consider LOS parallel to the axes of our computational box. We mimic a flux-limited survey by imposing a cutoff on the observed luminosity. We compare the resulting correlation function to the one obtained for the same number of sources, now selected as the brightest by intrinsic luminosity (i.e. the ones hosted by the most massive halos, thus ignoring IGM absorption in this latter case). We calculate the 3D correlation functions by direct summation over all pairs of halos, as described in \citet{1991ApJ...366..353M}. The results at redshift $z=9$ (with cutoff $L_{\rm min}(M)=10^{9.5}M_\odot$) and $z=7$ (with cutoff $L_{\rm min}(M)=10^{10}M_\odot$) are shown in Figs.~\ref{corr_z9} and \ref{corr_z7}.
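As a schematic illustration of the direct pair summation, the Python sketch below computes a two-point correlation function with a simple natural estimator and minimum-image periodic boundaries. These are simplifying assumptions of the sketch, valid only for separations well below the box size; the actual calculation follows \citet{1991ApJ...366..353M}. \begin{verbatim}
import numpy as np

def xi_direct(pos, box, r_bins):
    """3D two-point correlation function of points pos (shape [N, 3])
    in a periodic box of side `box`, by direct summation over all
    pairs: xi(r) = DD(r) / RR_expected(r) - 1."""
    n = len(pos)
    # All pairwise separations, with the minimum-image convention.
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(axis=-1))[np.triu_indices(n, k=1)]
    dd, _ = np.histogram(r, bins=r_bins)
    # Expected pair counts for a uniform random distribution.
    shell = 4.0 / 3.0 * np.pi * np.diff(np.asarray(r_bins) ** 3)
    rr = 0.5 * n * (n - 1) * shell / box ** 3
    return dd / rr - 1.0
\end{verbatim}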
In both cases the luminosity cutoffs were chosen so as to maximize the difference in the correlation functions with and without IGM absorption, while at the same time allowing for a sufficient number of halos above the cutoff to reduce the noise of the correlation. In Figure~\ref{corr_z9} we show the projection at $z=9$ of the two source distributions onto the Y-Z plane (left), and the corresponding 3D correlation functions and their ratio (right). The projection shows that the two source populations differ, but cluster in the same spatial regions. This visual impression is confirmed by the correlation functions. We find that IGM absorption introduces only small variations in the correlation of sources. The difference is largest at small scales, for separations below 1 comoving Mpc. Even there it never exceeds 10\%. At intermediate scales, $R\sim2-10$~Mpc, the departures are up to 5\%. There are no appreciable differences at large scales. At redshift $z=7$, close to overlap (Figure~\ref{corr_z7}), there are many more sources, even with the luminosity cutoff raised to $10^{10}M_\odot$. The two source distributions remain different, but are still clustered in a very similar way, resulting in largely identical correlation functions, which never differ by more than 0.7\%. The small effect of IGM absorption on the correlation function appears counterintuitive. The reason for this is that the sources at high redshift, especially the most massive/luminous ones, are strongly clustered and the ionization field is closely correlated with the galaxy field. The luminous sources are typically found in the inner parts of the largest ionized bubbles, where they are typically unaffected by damping from the remaining (few) neutral patches. This is also supported by the profiles in Fig.~\ref{meanF_fig}, which show the damping wing effect being important over the same time interval for both luminous and average sources. The damping affects average sources more strongly, since these are more often found closer to a neutral patch than the more massive sources. However, this does not change the correlation function significantly. Basically, the reionization patchiness has little effect on the source clustering properties. The reason is that while the IGM absorption diminishes the flux from all sources, the same source clusters (although not necessarily the same individual sources) are seen as they would be without IGM absorption. At late times any clearly defined ionized regions have already disappeared and the effect of IGM absorption is to replace some sources above the luminosity cutoff with other ones essentially at random, due to small local variations of the residual neutral fraction and the gas velocities. Therefore, the reionization patchiness ultimately has little effect on the correlation function. That of course does not mean that the source population is not modified by the IGM: a flux-limited survey will see many fewer sources than if IGM absorption were not present, but the clustering properties of those sources are almost the same as with no absorption for the same number density of sources. Recently, \citet{2007MNRAS.381...75M} claimed that reionization patchiness has a significant effect on observed source clustering, in apparent contradiction to our results \citep[see also][]{2006MNRAS.365.1012F,2008MNRAS.386.1990M}. However, they compared the clustering properties of Ly-$\alpha$ sources with and without (i.e.
intrinsic) IGM absorption for {\it a fixed} luminosity cutoff, rather than at a fixed number density of sources, as we did. When the IGM absorption is accounted for, the sources remaining above the imposed flux limit are many fewer than in the case without absorption and they are also much brighter on average, hosted by more massive halos. This naturally results in a higher bias of the observed sources compared to all sources with intrinsic luminosity above that cutoff. However, it is not necessarily related to reionization patchiness. For example, the source bias will also increase if the small residual neutral fraction in the ionized IGM increases, boosting the IGM opacity and causing dimmer, less clustered sources to fall below the luminosity cutoff; no neutral patches are required for this to happen. In fact, our simulation results show almost no neutral patches below $z\sim6.6$ (at which time the global mean neutral fraction by mass is below 1\%). Therefore, while our conclusions disagree with the ones of \citet{2007MNRAS.381...75M}, at least some of the differences could be attributed to the different comparisons we make, but some of the variations are possibly real, since \citet{2007MNRAS.381...75M} claimed that the trend is present even for fixed number densities, albeit with no quantitative details. We would like to stress, however, that we observe the same qualitative trend of enhanced clustering of Ly-$\alpha$ sources due to patchiness during the early stages of reionization, but the quantitative level of the effect is different, being much weaker in our case. There are a number of possible explanations. In particular, in our simulations the pre-absorption clustering between ionized regions and sources appears to be more important than in the simulations of \citet{2007MNRAS.381...75M}. There are also notable differences between our and their modelling of the Ly-$\alpha$ sources and the IGM absorption. \citet{2007MNRAS.381...75M} (as many other current studies do) assume complete resonance absorption of the blue side of the Ly-$\alpha$ line, study only the suppression due to the IGM damping wing, and do not include velocity effects in their analysis. They also assume a particular duty cycle for their Ly-$\alpha$ emitters (that only 25\% of halos host emitters), which we do not do in this work. \citet{2007MNRAS.381...75M} also ran simulations with different minimum source halo mass cutoffs, either significantly higher ($M_{\rm min}=4\times10^{10}M_\odot$) or significantly lower ($M_{\rm min}=10^{8}M_\odot$) than the one we have here ($M_{\rm min}=2.2\times10^{9}M_\odot$). It is possible, therefore, that our results are more relevant than these previous works, if the low-mass sources missing in our current simulation are in fact strongly suppressed during the late stages of reionization due to Jeans-mass filtering. This can only be resolved conclusively by detailed simulations which actually follow the complicated radiative feedback effects on low-mass halos, which is well beyond the scope of this work. We have run several tests in order to try to understand these differences, following suggestions by the referee. One possibility we investigated was that our H~II region size distribution is more strongly peaked and narrower than the one found in the above works, which, had it been the case, might have explained some of the clustering differences.
We found, however, that the H~II region size distributions derived from our simulations are in fair agreement with the ones from the \citet{2007MNRAS.381...75M} simulations, and hence this offers no plausible explanation of the differences. We also compared the Ly-$\alpha$ damping wing optical depth distributions for sources of different mass at a range of reionization stages, as derived by \citet{2008MNRAS.386.1990M} (their Figures 2 and 3), and again found no significant differences between our results and theirs. We conclude that more detailed and direct comparisons will be required in order to evaluate and understand any differences between our respective results. \section{Summary and Conclusions} We considered the effects that the reionizing IGM has on observations of high-redshift Ly-$\alpha$ sources. For this we utilized detailed structure formation and radiative transfer simulations, which allowed us to evaluate many features that can only be studied with such simulations, as well as to quantify better a number of previously-proposed effects. We followed the full reionization history self-consistently and accounted for the actual source distribution, neutral fraction, density and velocity fields. We find that the density, neutral fraction and velocity fields are all highly anisotropic, which results in large variations in the IGM transmission and source visibility among different LOS. The velocity effects, both gas infall and source peculiar velocity, are most important for massive, luminous sources. The most luminous sources are found in the highest peaks of the density field, which at late times are significantly overdense out to $\sim10$~comoving Mpc (cMpc) and are surrounded by infall extending to $\sim20$~cMpc. The infalling gas is blueshifted in frequency space, which results in significant absorption on the red side of the line center, while the peculiar velocity of the source itself can either alleviate or exacerbate this effect, depending on the alignment of the halo and infall velocities. The spherically-averaged local density enhancement and gas infall have been modelled analytically in approximate ways \citep{2004MNRAS.347...59B}, and thus can be incorporated in semi-analytical models \citep[e.g.][]{2007MNRAS.377.1175D}. However, such models are unable to account for the strong intrinsic anisotropies of the neutral fraction, density and velocity fields. The analytical and semianalytical models typically assume spherical symmetry and full ionization inside the H~II regions, both of which are quite unrealistic. The Ly-$\alpha$ lines we derive are generally asymmetric and vary hugely from LOS to LOS. The luminous sources form at the highest density peaks and as a consequence their line centers are always highly absorbed, even though their proximity regions are very highly ionized, with typical neutral fractions $x_{\rm HI}\sim10^{-5}-10^{-6}$. The luminous sources are also more affected by infall and exhibit a more pronounced proximity region, with higher transmission of the blue wing of the line. High-redshift sources are strongly clustered around the high peaks of the density field. The central source contributes the majority of the ionizing flux only in its immediate vicinity, within 1-2 comoving Mpc. Beyond that distance the ionizing flux is dominated by the fainter sources clustered around it.
This dominance is particularly strong at late times, when many more sources have formed and the ionized regions have become larger, resulting in the fainter sources contributing up to 2 orders of magnitude more photons than the central source. Compared to single-source ionized bubbles, the larger H~II regions from clustered sources diminish the effects from the damping wing of the line. Nevertheless, these remain significant until fairly late (ionized mass fraction $x_m=0.3-0.7$, which for the simulation considered here corresponds to redshifts $z\sim9-8$). Interestingly, the average damping wing effect is similar for luminous and typical sources, even though naively one might expect that damping could be weaker for the former, since they are typically in the middle of large bubbles, away from the neutral patches, unlike the fainter sources, which are more evenly distributed. Both the mean IGM transmission and the typical photoionization rates we find are high compared to observations at $z\sim6$, indicating that our adopted source efficiencies are also high. The mean IGM transmissivity decreases only slowly towards the higher redshifts and GP transparency occurs significantly after the actual overlap epoch. For the simulation considered here overlap (defined as 1\% neutral fraction) occurs at $z=6.6$, while an average neutral fraction of $10^{-4}$ is reached only by $z=6$, and even then a relatively small fraction (a few to 10\%) of the flux is transmitted. By overlap the spectra start showing significant transmission gaps in the mean IGM (i.e. away from the proximity region of a luminous source). We find that for a given number density of sources (e.g. as determined by observations) the clustering of these sources depends only weakly on the IGM absorption during reionization. As a consequence, the reionization patchiness has little effect on the observed Ly-$\alpha$ source clustering, which implies that source clustering is not a good indicator of reionization patchiness. Our derived luminosity function assuming a constant mass-to-light ratio provides an excellent match to the shape of the observed luminosity function at $z=6.6$ with a faint-end slope of $\alpha=-1.5$. The resulting mass-to-light ratio implies that the majority of sources responsible for reionization are too faint to be observed by the current surveys. \section*{Acknowledgments} We thank Hugo Martel for letting us use and modify his correlation function code and X. Fan for useful discussions. This work was partially supported by NASA Astrophysical Theory Program grants NAG5-10825 and NNG04G177G, Swiss National Science Foundation grant 200021-116696/1, and Swedish Research Council grant 60336701.
\section{Introduction} The theory of polygonal billiards concerns the uniform motion of a point mass (billiard ball) in a polygonal plane domain (billiard table). We define collisions with the boundary to be elastic: the angle of incidence equals the angle of reflection. It is natural to consider the behavior of periodic billiard orbits in polygons. Indeed, a long-standing open problem in polygonal billiards is whether every polygon contains a periodic billiard orbit (see \cites{gutkin, gutkint} for surveys). Much work has been devoted to this particular problem, leading to significant progress (see e.g., \cites{masur, schwartz, galperin}), although many questions remain unanswered. \begin{figure}[h] \begin{subfigure}{0.5\textwidth} \centering \includegraphics[scale=.5]{Images/short1.png} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[scale=.5]{Images/long3.png} \end{subfigure} \caption{A short trajectory (left) and a long trajectory (right) in the regular pentagon billiard table beginning in the periodic direction 132.} \label{fig:pent_trajs} \end{figure} One fruitful line of study has come from billiards in polygons whose interior angles are rational multiples of $\pi$ (rational polygons), and their correspondence with straight line flow on translation surfaces via \textit{unfolding} (see \cites{wrightsurv, MasurTabach, billiards} for review). This correspondence has led to much development in both billiards and translation surfaces. For instance, Masur in \cite{masur} used this correspondence to show that every rational polygon admits countably many periodic billiard trajectories. It is also interesting to characterize the behavior of periodic billiard orbits in particular classes of polygons (see e.g. \cites{dft, dl}). In \cite{dl}, Davis and Leli\`evre study the rich behavior of periodic billiard trajectories in the regular pentagon by using the correspondence of the regular pentagon billiard table with the double pentagon and golden L translation surfaces. Their work demonstrated that any periodic billiard trajectory in the regular pentagon is long, is short, or hits a corner. To illustrate, a short trajectory and a long trajectory for a particular periodic direction are shown in Figure \ref{fig:pent_trajs}. As a consequence, McMullen \cite{mcmullenQ} asked: given a periodic direction and the midpoint of an edge, is there a way to determine whether the trajectory is long, is short, or hits a corner without having to explicitly draw the trajectory? The following theorem, proved in this paper, resolves this question. \begin{theorem} \label{thm2} Let $\vec{v}$ be a periodic billiard direction in the regular pentagon billiard table. For any choice of midpoint on a side, Algorithm \ref{alg:MainAlgorithm} gives a process to determine whether the billiard trajectory in the direction of $\vec{v}$ is long, is short, or hits a corner point. \end{theorem} The proof of Theorem \ref{thm2} is an algorithm which translates the problem of billiards on the pentagon to straight-line flow on the golden L translation surface, and determines if the associated trajectory is in a long or short cylinder. This paper is organized as follows. In \S \ref{sec1} we provide background definitions and theory necessary for stating the proof of Theorem \ref{thm2}, which we give in \S \ref{sec2}. Finally, a corollary that often reduces the steps needed in the algorithm is explored in \S \ref{sec3}.
\section{Preliminaries}\label{sec1} \subsection{Translation surfaces and Veech groups} Billiards are characterized by their elastic reflection off the boundary of the table according to the mirror law of reflection. This allows us to relate billiard dynamics to linear flow on translation surfaces. In particular, reflecting across the edges of the table until each edge is paired with an opposite parallel edge gives rise to an equivalent representation on a translation surface. These surfaces are typically easier to study because they allow us to restrict our attention to a single fixed direction. We begin by giving the definition of translation surfaces and their corresponding properties. For a complete review of translation surfaces see \cites{wrightsurv, MasurTabach}. \begin{definition} A \emph{translation surface}, denoted $(X, \omega)$, is a disjoint union of polygons in $\mathbb{C}$ with opposite, parallel sides identified by translation. In particular, a translation surface is embedded in $\mathbb{C}$ with embedding fixed only up to translation. In the polygonal representation, we consider translation surfaces to be equivalent if we can obtain one from the other through cut-and-paste operations. \end{definition} \begin{remark} The notation $(X, \omega)$ of a translation surface comes from an equivalent definition of a translation surface. Namely, a translation surface $(X, \omega)$ is a nonzero Abelian differential $\omega$ on a Riemann surface $X$. \end{remark} The vertices of the polygons defining a translation surface are called \textit{cone points}, and the segments joining pairs of cone points without any cone points in the interior are called \textit{saddle connections}. Trajectories that hit the corner of a given polygonal billiard table correspond to saddle connections in the translation surface obtained via unfolding said table. A \textit{cylinder} of a translation surface is a maximal family of periodic trajectories (closed geodesics) in a periodic direction, with boundaries given by saddle connections. See Figure \ref{DPSurf} for a visual demonstration of cylinders. \begin{figure}[h] \centering \begin{tikzpicture}[scale=1] \node[regular polygon,regular polygon sides=8, minimum size=5cm, draw, fill=gray] at (0,0) {}; \node[rectangle, fill=lightgray, minimum width = 4.6cm, minimum height = 1.87cm] (r) at (0,0) {}; \draw[ultra thick, dashed] (-2.3, 0.95)--(2.3, 0.95); \draw[ultra thick, dashed] (-2.3, -0.95)--(2.3, -0.95); \draw (2.3, 0) node[anchor=west] {c}; \draw (-2.3, 0) node[anchor=east] {c}; \draw (-1.6, 1.6) node[anchor=south east] {d}; \draw (1.6, 1.6) node[anchor=south west] {b}; \draw (1.6, -1.6) node[anchor=north west] {d}; \draw (-1.6, -1.6) node[anchor=north east] {b}; \draw (0, 2.3) node[anchor=south] {a}; \draw (0, -2.3) node[anchor=north] {a}; \end{tikzpicture} \caption{The cylinder decomposition of the regular octagon translation surface in the horizontal direction. A longer cylinder (dark gray) and a shorter cylinder (light gray) are separated by saddle connections (black dashes).} \label{DPSurf} \end{figure} Translation surfaces admit an action by $\operatorname{SL}_2\mathbb{R}$. If $g \in \operatorname{SL}_2\mathbb{R}$ and $(X, \omega)$ is a translation surface given as a collection of polygons, then $g(X, \omega)$ is the translation surface obtained by acting linearly by $g$ on the polygons determining $(X, \omega)$.
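Since the action is linear on the defining polygons, it is easy to experiment with numerically. The following Python sketch applies an element $g \in \operatorname{SL}_2\mathbb{R}$ to the vertices of a polygon; the shear matrix and the unit square are illustrative choices only, not tied to any particular translation surface. \begin{verbatim}
import numpy as np

# An illustrative element of SL(2, R): a horizontal shear, det = 1.
g = np.array([[1.0, 0.5],
              [0.0, 1.0]])

# Vertices of a toy polygon (the unit square), one row per vertex.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

# g acts linearly on each vertex; side identifications are preserved,
# since parallel sides stay parallel under a linear map.
sheared = square @ g.T
print(sheared)
\end{verbatim}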
\begin{definition} We denote the stabilizer of $(X, \omega)$ under the action of $\operatorname{SL}_2\mathbb{R}$ by $\operatorname{SL}(X, \omega)$. The \emph{Veech Group} of $(X, \omega)$ is the image of $\operatorname{SL}(X, \omega)$ in $\operatorname{PSL}_2\mathbb{R}$. If $\operatorname{SL}(X, \omega)$ is a lattice, then $(X, \omega)$ is a \emph{Veech surface}. \end{definition} \subsection{Regular pentagon billiard table and double pentagon surface} \label{sec:unfoldingToDoublePent} Straight line flow on a translation surface corresponds to billiard flow in a polygon through \textit{unfolding}. That is, instead of reflecting the billiard off a side of a billiard polygon, reflect the polygon across that side and unfold the trajectory to a straight line, as shown in Figure \ref{unfolding}. Reflecting polygons in such a manner can generate translation surfaces, and hence we obtain a correspondence between billiard flow in a polygon and straight line flow on a translation surface. For a detailed description of unfolding a billiard into straight line flow on a translation surface, see \cite[\S 1.3]{MasurTabach}. The regular pentagon table can be unfolded to the \textit{necklace}, a 5-fold cover of the double pentagon translation surface. As a consequence, we can examine straight-line flow on the double pentagon translation surface to understand billiard dynamics in the regular pentagon. \begin{figure} \centering \includegraphics[scale=.23]{Images/Unfolding.png} \caption{An unfolding of a billiard trajectory in the regular pentagon table to a flow in the double pentagon surface. The red trajectory shows the billiard trajectory reflected off the edge.} \label{unfolding} \end{figure} \subsection{Going from the double pentagon to the golden L} \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.86] \draw (0,0) -- (1.61803, 0) -- (1.61803^2, 0) -- (1.61803^2, 1.61803) -- (1.61803, 1.61803) -- (1.61803, 1.61803^2) -- (0, 1.61803^2) -- (0, 1.61803) -- (0,0); \draw (-0.1, 1.61803) -- (0.1, 1.61803); \draw (1.61803, -0.1) -- (1.61803, 0.1); \draw (0, 1.61803/2) node[anchor=east] {b}; \draw (1.61803^2, 1.61803/2) node[anchor=west] {b}; \draw (0, 1.61803 + 1/2) node[anchor=east] {a}; \draw (1.61803, 1.61803 + 1/2) node[anchor=west] {a}; \draw (1.61803/2, 1.61803^2) node[anchor=south] {c}; \draw (1.61803/2, 0) node[anchor=north] {c}; \draw (1.61803 + 1/2, 0) node[anchor=north] {d}; \draw (1.61803 + 1/2, 1.61803) node[anchor=south] {d}; \end{tikzpicture} \caption{The golden L with edge identifications labeled.} \label{fig:goldenL} \end{figure} Recall the golden ratio $\phi$, defined to be $$\phi = \frac{1+\sqrt{5}}{2} \approx 1.618,$$ which is obtained as one of the solutions of $x^2 - x - 1 = 0$. Hence, it satisfies a useful identity: $\phi^2 = \phi +1$. We construct the \textit{golden L} by taking a $\phi \times \phi$ square, gluing two $1\times \phi$ rectangles to adjacent sides, and identifying opposite and parallel sides (see Figure \ref{fig:goldenL}). We construct the \textit{double pentagon} by gluing two pentagons together and identifying opposite and parallel sides, as in Figure \ref{unfolding}. Note this construction is equivalent to the double pentagon obtained from the unfolding in \S \ref{sec:unfoldingToDoublePent} (see \cite[\S 2.1]{dl} for further information on the double pentagon).
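A quick numerical check of this construction (a sketch only; the vertex list is read off the boundary of Figure \ref{fig:goldenL}, including the marked edge points): \begin{verbatim}
import numpy as np

phi = (1 + np.sqrt(5)) / 2
assert np.isclose(phi ** 2, phi + 1)   # the identity phi^2 = phi + 1

# Boundary points of the golden L: a phi x phi square with two
# 1 x phi rectangles glued to adjacent sides.
golden_L = np.array([
    (0, 0), (phi, 0), (phi ** 2, 0), (phi ** 2, phi),
    (phi, phi), (phi, phi ** 2), (0, phi ** 2), (0, phi),
])
print(golden_L)
\end{verbatim}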
As in \cite[Definition 2.2]{dl}, let $$P = \begin{pmatrix} 1 & \cos{\pi/5} \\ 0 & \sin{\pi/5} \end{pmatrix}.$$ \begin{lemma} \cite[Lemma 2.3]{dl} \label{lem:LongShortConnec} The matrix $P$ takes the golden L surface to the double pentagon surface, and its inverse $P^{-1}$ takes the double pentagon to the golden L. In particular, they take long cylinder vectors to long cylinder vectors, and the same for short cylinder vectors. \end{lemma} \subsection{Periodic trajectories in the regular pentagon, the double pentagon, and the golden L} \label{sec:PerDirections} We begin with the \text{golden L}, pictured in Figure \ref{fig:goldenL}. The following matrices generate the Veech group of the Golden L \cite[Lemma 2.6]{dl}: \begin{center} $\sigma _0 = \begin{pmatrix} 1 & \phi \\ 0 & 1 \end{pmatrix}$, $\sigma _1 = \begin{pmatrix} \phi & \phi \\ 1 & \phi \end{pmatrix}$, $\sigma _2 = \begin{pmatrix} \phi & 1 \\ \phi & \phi \end{pmatrix}$, $\sigma _3 = \begin{pmatrix} 1 & 0 \\ \phi & 1 \end{pmatrix}$ \end{center} The images of each $\sigma_i$ acting on the golden L are pictured in Figures \ref{sigma0app} - \ref{sigma3app}. Since these are elements of the Veech group of the golden L, we know that they send the golden L to itself, up to a cut and paste. See Figure \ref{cutandpaste} for a visual demonstration of how the cut and paste operation works on the golden L after applying $\sigma_0$. Note the cut and paste operation respects side identifications and the cut lines may become new side identifications. \par \begin{figure}[h] \begin{subfigure}[h]{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (0,0) -- (1.61803, 0); \draw (1.61803, 0) -- (1.61803+1, 0); \draw[thick, loosely dotted] (1.61803+1, 0) -- (2*1.61803+2, 1.61803); \draw[thick, loosely dotted] (1.61803+1, 1.61803) -- (0,0); \draw (2*1.61803+2, 1.61803) -- (2*1.61803+1, 1.61803); \draw[thick, densely dotted] (2*1.61803+1, 1.61803) -- (3*1.61803+1, 1.61803+1); \draw[thick, densely dotted] (2*1.61803+1, 1.61803+1) -- (1.61803+1, 1.61803); \draw (3*1.61803+1, 1.61803+1) -- (2*1.61803+1, 1.61803+1); \draw[blue,thick,dashed] (1.61803+1, 0) -- (1.61803+1, 1.61803); \draw[blue,thick,dashed] (2*1.61803+1, 1.61803) -- (2*1.61803+1, 1.61803+1); \draw (2.5*1.61803 + 1, 1.61803 +1/2) node[anchor=north west, shift={(-1mm,1mm)}] {a}; \draw (1.5*1.61803 + 1, 1.61803 +1/2) node[anchor=south east, shift={(1mm,-1mm)}] {a}; \draw (1.5*1.61803 + 3/2, 1.61803/2) node[anchor=north west, shift={(-1mm,1mm)}] {b}; \draw (1.61803/2 + 1/2, 1.61803/2) node[anchor=south east, shift={(1mm,-1mm)}] {b}; \end{tikzpicture} \end{subfigure} \begin{subfigure}[h]{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.86]\textbf{} \draw[thick, loosely dotted] (1.61803+1, 1.61803) -- (0,0); \draw (0,0) -- (1.61803, 0) -- (1.61803+1, 0); \draw (1.61803+1, 1.61803) -- (1.61803, 1.61803); \draw[thick, densely dotted] (0, 1.61803) -- (1.61803, 1.61803+1); \draw (1.61803, 1.61803+1) -- (0, 1.61803+1); \draw[blue,thick,dashed] (1.61803+1, 0) -- (1.61803+1, 1.61803); \draw[blue,thick,dashed] (0, 0) -- (0, 1.61803+1); \draw[blue,thick,dashed] (1.61803, 1.61803) -- (1.61803, 1.61803+1); \draw (1.61803/2, 1.61803 +1/2) node[anchor=south east, shift={(1mm,-1mm)}] {a}; \draw (1.61803/2 + 1/2, 1.61803/2) node[anchor=south east, shift={(1mm,-1mm)}] {b}; \end{tikzpicture} \end{subfigure} \caption{$\sigma_0$ acting on the golden L before and after cutting and pasting along the blue dotted lines.} \label{cutandpaste} \end{figure} Next, consider a trajectory within the double 
pentagon. We say a trajectory in a translation surface is a \textit{periodic trajectory} if it forms a closed curve. In other words, the trajectory eventually comes back to where it started and repeats. It was shown by Davis and Leli\`evre in \cite{dl} that periodic trajectories in the golden L are precisely those with slope in $\mathbb{Q}[\sqrt{5}]$. The following theorem from \cite{dl} gives us a way to express any periodic direction as a product of finitely many $\sigma_i$'s and the horizontal vector $\big(\begin{smallmatrix} 1\\ 0 \end{smallmatrix}\big)$: \begin{theorem} \cite[Theorem 2.22]{dl} \label{thm:PerDirec} Corresponding to any periodic direction vector $\vec{v}$ in the first quadrant on the golden L is a unique sequence $a_v=(k_1, k_2, \dots, k_n)$ of sectors such that $\vec{v} = \ell_v \sigma_{k_n}\dots \sigma_{k_1}\big(\begin{smallmatrix} 1\\ 0 \end{smallmatrix}\big)$ for some length $\ell_v$. \end{theorem} We refer to this sequence of integers $k_1 k_2 \dots k_n$, $k_j \in \{0,1,2,3\}$, as a \textit{tree word}. See Example \ref{ex:treeword} below for an example of calculating the periodic direction vector from a tree word. \begin{example} \label{ex:treeword} The vector $\vec{v}$ in the direction $132$ is given by \begin{align*} \vec{v} &= \sigma_2 \cdot \sigma_3 \cdot \sigma_1 \cdot \begin{pmatrix} 1\\ 0 \end{pmatrix} \\ &= \begin{pmatrix} \phi & 1 \\ \phi & \phi \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ \phi & 1 \end{pmatrix} \cdot \begin{pmatrix} \phi & \phi \\ 1 & \phi \end{pmatrix} \cdot \begin{pmatrix} 1\\ 0 \end{pmatrix} \\ &= \begin{pmatrix} 2\phi^2 + 1 \\ 2\phi^2 + 2\phi \end{pmatrix} = \begin{pmatrix} 2\phi + 3 \\ 4\phi + 2 \end{pmatrix} \tag{Recall $\phi^2 = \phi +1$}. \end{align*} \end{example} Using the following corollary, we can pass between periodic directions on the double pentagon and periodic directions on the golden L: \begin{corollary} \cite[Corollary 2.4]{dl} \label{cor:PinvIsPer} A direction $\vec{v}$ is periodic on the double pentagon if and only if the direction $P^{-1}\vec{v}$ is periodic on the golden L. \end{corollary} \subsection{Weierstrass points and the Veech group} \label{weierstrass} Next, we inscribe a pentagon within the golden L as pictured in Figure \ref{fig:goldLWP}, and label the midpoints of the sides as shown. These midpoints coincide with the \emph{Weierstrass points}, which are also the midpoints of the edges in the double pentagon. Moreover, these are the labeled points in the regular pentagon as in Figure \ref{fig:goldLWP}.
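Before proceeding, we note that the computation in Example \ref{ex:treeword} is easy to automate. The following Python sketch (a floating-point check rather than an exact symbolic computation; the function name is ours) applies the matrices $\sigma_{k_1}, \dots, \sigma_{k_n}$ to the horizontal vector in the order prescribed by Theorem \ref{thm:PerDirec}: \begin{verbatim}
import numpy as np

phi = (1 + np.sqrt(5)) / 2
sigma = [np.array([[1, phi], [0, 1]]),      # sigma_0
         np.array([[phi, phi], [1, phi]]),  # sigma_1
         np.array([[phi, 1], [phi, phi]]),  # sigma_2
         np.array([[1, 0], [phi, 1]])]      # sigma_3

def direction(word):
    """Direction sigma_{k_n} ... sigma_{k_1} (1, 0)^T for the tree
    word (k_1, ..., k_n); sigma_{k_1} is applied first."""
    v = np.array([1.0, 0.0])
    for k in word:
        v = sigma[k] @ v
    return v

# Tree word 132: expect (2 phi + 3, 4 phi + 2) ~ (6.236, 8.472).
print(direction((1, 3, 2)))
\end{verbatim}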
\begin{figure}[h] \centering \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (0,0) -- (1.61803, 0) -- (1.61803^2, 0) -- (1.61803^2, 1.61803) -- (1.61803, 1.61803) -- (1.61803, 1.61803^2) -- (0, 1.61803^2) -- (0, 1.61803) -- (0,0); \draw [red] (1.61803, 0) -- (1.61803^2, 0) -- (1.61803, 1.61803) -- (0, 1.61803^2) -- (0, 1.61803) -- (1.61803, 0); \filldraw[red] (0, 1.61803+0.5) circle (2pt) node[anchor=west] {1}; \filldraw[red] (1.61803/2, 1.61803+0.5) circle (2pt) node[anchor=west] {2}; \filldraw[red] (1.61803/2, 1.61803/2) circle (2pt) node[anchor=west] {3}; \filldraw[red] (1.61803+0.5, 1.61803/2) circle (2pt) node[anchor=west] {4}; \filldraw[red] (1.61803+0.5, 0) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \node[regular polygon,regular polygon sides=5, minimum size=2.6cm, draw] at (0,0) {}; \filldraw[red] (-0.77, 0.94) circle (2pt) node[anchor=west] {1}; \filldraw[red] (0.77, 0.94) circle (2pt) node[anchor=east] {2}; \filldraw[red] (1.16, -0.4) circle (2pt) node[anchor=east] {4}; \filldraw[red] (-1.16, -0.4) circle (2pt) node[anchor=west] {3}; \filldraw[red] (0, -1.2) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure} \caption{Left: The golden L with the inscribed pentagon and Weierstrass points labeled. Right: The corresponding points labeled on the regular pentagon.} \label{fig:goldLWP} \end{figure} We say a point of a Veech surface is \textit{periodic} if it has finite orbit under the action of the Veech group. The Weierstrass points are the only periodic points of the golden L under the action of the Veech group (see \cite[\S 2]{wrightsurv} for a review of periodic points on Veech surfaces and L tables). As a consequence, the action of the Veech group elements on the golden L permutes these points, as seen in Figures \ref{sigma0app} - \ref{sigma3app}.
\section{Introduction} The theory of polygonal billiards concerns the uniform motion of a point mass (billiard ball) in a polygonal plane domain (billiard table). We define collisions with the boundary to be elastic: the angle of incidence equals the angle of reflection. It is natural to consider the behavior of periodic billiard orbits in polygons. Indeed, a long-standing open problem in polygonal billiards is whether every polygon contains a periodic billiard orbit (see \cites{gutkin, gutkint} for surveys). Much work has been devoted to this particular problem, leading to significant progress (see e.g., \cites{masur, schwartz, galperin}), although many questions remain unanswered. \begin{figure}[h] \begin{subfigure}{0.5\textwidth} \centering \includegraphics[scale=.5]{Images/short1.png} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[scale=.5]{Images/long3.png} \end{subfigure} \caption{A short trajectory (left) and a long trajectory (right) in the regular pentagon billiard table beginning in the periodic direction 132.} \label{fig:pent_trajs} \end{figure} One fruitful line of study has come from billiards in polygons whose interior angles are rational multiples of $\pi$ (rational polygons), and their correspondence with straight line flow on translation surfaces via \textit{unfolding} (see \cites{wrightsurv, MasurTabach, billiards} for review). This correspondence has led to much development in both billiards and translation surfaces. For instance, Masur in \cite{masur} used this correspondence to show that every rational polygon admits countably many periodic billiard trajectories. It is also interesting to characterize the behavior of periodic billiard orbits in particular classes of polygons (see e.g. \cites{dft, dl}). In \cite{dl}, Davis and Leli\`evre study the rich behavior of periodic billiard trajectories in the regular pentagon by using the correspondence of the regular pentagon billiard table with the double pentagon and golden L translation surfaces. Their work demonstrated how any periodic billiard trajectory in the regular pentagon is either long, short, or hits a corner. To illustrate, a short trajectory and a long trajectory for a particular periodic direction are shown in Figure \ref{fig:pent_trajs}.
As a consequence, McMullen \cite{mcmullenQ} asked the following: given a periodic direction and a midpoint of an edge, is there a way to determine whether the trajectory is long, short, or hits a corner without having to explicitly draw the trajectory? The following theorem proved in this paper resolves this question. \begin{theorem} \label{thm2} Let $\vec{v}$ be a periodic billiard direction in the regular pentagon billiard table. For any choice of midpoint on a side, Algorithm \ref{alg:MainAlgorithm} gives a process to determine if the billiard trajectory in the direction of $\vec{v}$ is long, short, or hits a corner point. \end{theorem} The proof of Theorem \ref{thm2} is an algorithm which translates the problem of billiards on the pentagon to straight-line flow on the golden L translation surface, and determines if the associated trajectory is in a long or short cylinder. This paper is organized as follows. In \S \ref{sec1} we provide background definitions and theory necessary for stating the proof of Theorem \ref{thm2}, which we give in \S \ref{sec2}. Finally, a corollary that often reduces the steps needed in the algorithm is explored in \S \ref{sec3}. \section{Preliminaries}\label{sec1} \subsection{Translation surfaces and Veech groups} Billiards are characterized by their elastic reflection off the boundary of the table according to the mirror law of reflection. This allows us to relate billiard dynamics to linear flow on translation surfaces. In particular, reflecting across the edges of the table until each edge is paired with an opposite parallel edge gives rise to an equivalent representation on a translation surface. These surfaces are typically easier to study because they allow us to restrict our attention to a single fixed direction. We begin by giving the definition of translation surfaces and their corresponding properties. For a complete review of translation surfaces see \cites{wrightsurv, MasurTabach}. \begin{definition} A \emph{translation surface}, denoted $(X, \omega)$, is a disjoint union of polygons in $\mathbb{C}$ with opposite, parallel sides identified by translation. In particular, a translation surface is embedded in $\mathbb{C}$ with embedding fixed only up to translation. In the polygonal representation, we consider translation surfaces to be equivalent if we can achieve one from the other through cut-and-paste operations. \end{definition} \begin{remark} The notation $(X, \omega)$ of a translation surface comes from an equivalent definition of a translation surface. Namely, a translation surface $(X, \omega)$ is a nonzero Abelian differential $\omega$ on a Riemann surface $X$. \end{remark} The vertices of the polygons defining a translation surface are called \textit{cone points}, and the segments joining pairs of cone points without any cone points in the interior are called \textit{saddle connections}. Trajectories that hit the corner of a given polygonal billiard table correspond to saddle connections in the translation surface obtained via unfolding said table. A \textit{cylinder} of a translation surface is a maximal family of periodic trajectories (closed geodesics) in a periodic direction, with boundaries given by saddle connections. See Figure \ref{DPSurf} for a visual demonstration of cylinders.
\begin{figure}[h] \centering \begin{tikzpicture}[scale=1] \node[regular polygon,regular polygon sides=8, minimum size=5cm, draw, fill=gray] at (0,0) {}; \node[rectangle, fill=lightgray, minimum width = 4.6cm, minimum height = 1.87cm] (r) at (0,0) {}; \draw[ultra thick, dashed] (-2.3, 0.95)--(2.3, 0.95); \draw[ultra thick, dashed] (-2.3, -0.95)--(2.3, -0.95); \draw (2.3, 0) node[anchor=west] {c}; \draw (-2.3, 0) node[anchor=east] {c}; \draw (-1.6, 1.6) node[anchor=south east] {d}; \draw (1.6, 1.6) node[anchor=south west] {b}; \draw (1.6, -1.6) node[anchor=north west] {d}; \draw (-1.6, -1.6) node[anchor=north east] {b}; \draw (0, 2.3) node[anchor=south] {a}; \draw (0, -2.3) node[anchor=north] {a}; \end{tikzpicture} \caption{The cylinder decomposition of the regular octagon translation surface in the horizontal direction. A longer cylinder (dark gray) and a shorter cylinder (light gray) are separated by saddle connections (black dashes).} \label{DPSurf} \end{figure} Translation surfaces admit an action by $\operatorname{SL}_2\mathbb{R}$. If $g \in \operatorname{SL}_2\mathbb{R}$ and $(X, \omega)$ is a translation surface given as a collection of polygons, then $g(X, \omega)$ is the translation surface obtained by acting linearly by $g$ on the polygons determining $(X, \omega)$. \begin{definition} We denote the stabilizer of $(X, \omega)$ under the action of $\operatorname{SL}_2\mathbb{R}$ by $\operatorname{SL}(X, \omega)$. The \emph{Veech Group} of $(X, \omega)$ is the image of $\operatorname{SL}(X, \omega)$ in $\operatorname{PSL}_2\mathbb{R}$. If $\operatorname{SL}(X, \omega)$ is a lattice, then $(X, \omega)$ is a \emph{Veech surface}. \end{definition} \subsection{Regular pentagon billiard table and double pentagon surface} \label{sec:unfoldingToDoublePent} Straight line flow on a translation surface corresponds to billiard flow in a polygon through \textit{unfolding}. That is, instead of reflecting the billiard off a side of a billiard polygon, reflect the polygon on a side and unfold the trajectory to a straight line, as shown in Figure \ref{unfolding}. Reflecting polygons in such a manner can generate translation surfaces, and hence we obtain a correspondence between billiard flow in a polygon and straight line flow on a translation surface. For a detailed description of unfolding a billiard into straight line flow on a translation surface, see \cite[\S 1.3]{MasurTabach}. The regular pentagon table can be unfolded to the \textit{necklace}, a 5-fold cover of the double pentagon translation surface. As a consequence, we can examine straight-line flow on the double pentagon translation surface to understand billiard dynamics in the regular pentagon. \begin{figure} \centering \includegraphics[scale=.23]{Images/Unfolding.png} \caption{An unfolding of a billiard trajectory in the regular pentagon table to a flow in the double pentagon surface. 
The red trajectory shows the billiard trajectory reflected off the edge.} \label{unfolding} \end{figure} \subsection{Going from the double pentagon to the golden L} \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.86] \draw (0,0) -- (1.61803, 0) -- (1.61803^2, 0) -- (1.61803^2, 1.61803) -- (1.61803, 1.61803) -- (1.61803, 1.61803^2) -- (0, 1.61803^2) -- (0, 1.61803) -- (0,0); \draw (-0.1, 1.61803) -- (0.1, 1.61803); \draw (1.61803, -0.1) -- (1.61803, 0.1); \draw (0, 1.61803/2) node[anchor=east] {b}; \draw (1.61803^2, 1.61803/2) node[anchor=west] {b}; \draw (0, 1.61803 + 1/2) node[anchor=east] {a}; \draw (1.61803, 1.61803 + 1/2) node[anchor=west] {a}; \draw (1.61803/2, 1.61803^2) node[anchor=south] {c}; \draw (1.61803/2, 0) node[anchor=north] {c}; \draw (1.61803 + 1/2, 0) node[anchor=north] {d}; \draw (1.61803 + 1/2, 1.61803) node[anchor=south] {d}; \end{tikzpicture} \caption{The golden L with edge identifications labeled.} \label{fig:goldenL} \end{figure} Recall the golden ratio $\phi$, defined to be $$\phi = \frac{1+\sqrt{5}}{2} \approx 1.618,$$ which is the positive solution of $x^2 - x - 1 = 0$. Hence, it satisfies a useful identity: $\phi^2 = \phi +1$. We construct the \textit{golden L} by taking a $\phi \times \phi$ square, gluing two $1\times \phi$ rectangles to adjacent sides, and identifying opposite and parallel sides (see Figure \ref{fig:goldenL}). We construct the \textit{double pentagon} by gluing two pentagons together and identifying opposite and parallel sides, as in Figure \ref{unfolding}. Note this construction is equivalent to the double pentagon obtained from the unfolding in \S \ref{sec:unfoldingToDoublePent} (see \cite[\S 2.1]{dl} for further information on the double pentagon). As in \cite[Definition 2.2]{dl}, let $$P = \begin{pmatrix} 1 & \cos{\pi/5} \\ 0 & \sin{\pi/5} \end{pmatrix}.$$ \begin{lemma} \cite[Lemma 2.3]{dl} \label{lem:LongShortConnec} The matrix $P$ takes the golden L surface to the double pentagon surface, and its inverse $P^{-1}$ takes the double pentagon to the golden L. In particular, they take long cylinder vectors to long cylinder vectors, and the same for short cylinder vectors. \end{lemma} \subsection{Periodic trajectories in the regular pentagon, the double pentagon, and the golden L} \label{sec:PerDirections} We begin with the golden L, pictured in Figure \ref{fig:goldenL}. The following matrices generate the Veech group of the golden L \cite[Lemma 2.6]{dl}: \begin{center} $\sigma _0 = \begin{pmatrix} 1 & \phi \\ 0 & 1 \end{pmatrix}$, $\sigma _1 = \begin{pmatrix} \phi & \phi \\ 1 & \phi \end{pmatrix}$, $\sigma _2 = \begin{pmatrix} \phi & 1 \\ \phi & \phi \end{pmatrix}$, $\sigma _3 = \begin{pmatrix} 1 & 0 \\ \phi & 1 \end{pmatrix}$ \end{center} The images of each $\sigma_i$ acting on the golden L are pictured in Figures \ref{sigma0app} - \ref{sigma3app}. Since these are elements of the Veech group of the golden L, we know that they send the golden L to itself, up to a cut and paste. See Figure \ref{cutandpaste} for a visual demonstration of how the cut and paste operation works on the golden L after applying $\sigma_0$. Note the cut and paste operation respects side identifications and the cut lines may become new side identifications.
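As a quick check that these matrices lie in $\operatorname{SL}_2\mathbb{R}$, note that the identity $\phi^2 = \phi + 1$ gives each generator determinant one; for instance, $$\det \sigma_1 = \phi \cdot \phi - \phi \cdot 1 = \phi^2 - \phi = 1,$$ and similarly $\det \sigma_0 = \det \sigma_2 = \det \sigma_3 = 1$.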
\par \begin{figure}[h] \begin{subfigure}[h]{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (0,0) -- (1.61803, 0); \draw (1.61803, 0) -- (1.61803+1, 0); \draw[thick, loosely dotted] (1.61803+1, 0) -- (2*1.61803+2, 1.61803); \draw[thick, loosely dotted] (1.61803+1, 1.61803) -- (0,0); \draw (2*1.61803+2, 1.61803) -- (2*1.61803+1, 1.61803); \draw[thick, densely dotted] (2*1.61803+1, 1.61803) -- (3*1.61803+1, 1.61803+1); \draw[thick, densely dotted] (2*1.61803+1, 1.61803+1) -- (1.61803+1, 1.61803); \draw (3*1.61803+1, 1.61803+1) -- (2*1.61803+1, 1.61803+1); \draw[blue,thick,dashed] (1.61803+1, 0) -- (1.61803+1, 1.61803); \draw[blue,thick,dashed] (2*1.61803+1, 1.61803) -- (2*1.61803+1, 1.61803+1); \draw (2.5*1.61803 + 1, 1.61803 +1/2) node[anchor=north west, shift={(-1mm,1mm)}] {a}; \draw (1.5*1.61803 + 1, 1.61803 +1/2) node[anchor=south east, shift={(1mm,-1mm)}] {a}; \draw (1.5*1.61803 + 3/2, 1.61803/2) node[anchor=north west, shift={(-1mm,1mm)}] {b}; \draw (1.61803/2 + 1/2, 1.61803/2) node[anchor=south east, shift={(1mm,-1mm)}] {b}; \end{tikzpicture} \end{subfigure} \begin{subfigure}[h]{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw[thick, loosely dotted] (1.61803+1, 1.61803) -- (0,0); \draw (0,0) -- (1.61803, 0) -- (1.61803+1, 0); \draw (1.61803+1, 1.61803) -- (1.61803, 1.61803); \draw[thick, densely dotted] (0, 1.61803) -- (1.61803, 1.61803+1); \draw (1.61803, 1.61803+1) -- (0, 1.61803+1); \draw[blue,thick,dashed] (1.61803+1, 0) -- (1.61803+1, 1.61803); \draw[blue,thick,dashed] (0, 0) -- (0, 1.61803+1); \draw[blue,thick,dashed] (1.61803, 1.61803) -- (1.61803, 1.61803+1); \draw (1.61803/2, 1.61803 +1/2) node[anchor=south east, shift={(1mm,-1mm)}] {a}; \draw (1.61803/2 + 1/2, 1.61803/2) node[anchor=south east, shift={(1mm,-1mm)}] {b}; \end{tikzpicture} \end{subfigure} \caption{$\sigma_0$ acting on the golden L before and after cutting and pasting along the blue dotted lines.} \label{cutandpaste} \end{figure} Next, consider a trajectory within the double pentagon. We say a trajectory in a translation surface is a \textit{periodic trajectory} if it forms a closed curve. In other words, the trajectory eventually comes back to where it started and repeats. It was shown by Davis-Leli\`evre in \cite{dl} that periodic trajectories in the golden L are precisely those with slope in $\mathbb{Q}[\sqrt{5}]$. The following theorem from \cite{dl} gives us a way to express any periodic direction as a product of finitely many $\sigma_i$'s and the horizontal vector $\big(\begin{smallmatrix} 1\\ 0 \end{smallmatrix}\big)$: \begin{theorem} \cite[Theorem 2.22]{dl} \label{thm:PerDirec} Corresponding to any periodic direction vector $\vec{v}$ in the first quadrant on the golden L is a unique sequence $a_v=(k_1, k_2, \dots, k_n)$ of sectors such that $\vec{v} = \ell_v \sigma_{k_n}\dots \sigma_{k_1}\big(\begin{smallmatrix} 1\\ 0 \end{smallmatrix}\big)$ for some length $\ell_v$. \end{theorem} We refer to this sequence of integers $k_1 k_2 \dots k_n$, $k_j \in \{0,1,2,3\}$, as a \textit{tree word}. See Example \ref{ex:treeword} below for an example of calculating the periodic direction vector from a tree word.
\begin{example} \label{ex:treeword} The vector $\vec{v}$ in the direction $132$ is given by \begin{align*} \vec{v} &= \sigma_2 \cdot \sigma_3 \cdot \sigma_1 \cdot \begin{pmatrix} 1\\ 0 \end{pmatrix} \\ &= \begin{pmatrix} \phi & 1 \\ \phi & \phi \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ \phi & 1 \end{pmatrix} \cdot \begin{pmatrix} \phi & \phi \\ 1 & \phi \end{pmatrix} \cdot \begin{pmatrix} 1\\ 0 \end{pmatrix} \\ &= \begin{pmatrix} 2\phi^2 + 1 \\ 2\phi^2 + 2\phi \end{pmatrix} = \begin{pmatrix} 2\phi + 3 \\ 4\phi + 2 \end{pmatrix} \tag{Recall $\phi^2 = \phi +1$}. \end{align*} \end{example} Using the following corollary, we can pass between periodic directions in the double pentagon and periodic directions in the golden L: \begin{corollary} \cite[Corollary 2.4]{dl} \label{cor:PinvIsPer} A direction $\vec{v}$ is periodic on the double pentagon if and only if the direction $P^{-1}\vec{v}$ is periodic on the golden L. \end{corollary} \subsection{Weierstrass points and the Veech group} \label{weierstrass} Next, we inscribe a pentagon within the golden L as pictured in Figure \ref{fig:goldLWP}, and label the midpoints of the sides as shown. These midpoints coincide with the \emph{Weierstrass points}, which are also the midpoints of the edges in the double pentagon. Moreover, these are the labeled points in the regular pentagon as in Figure \ref{fig:goldLWP}. \begin{figure}[h] \centering \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (0,0) -- (1.61803, 0) -- (1.61803^2, 0) -- (1.61803^2, 1.61803) -- (1.61803, 1.61803) -- (1.61803, 1.61803^2) -- (0, 1.61803^2) -- (0, 1.61803) -- (0,0); \draw [red] (1.61803, 0) -- (1.61803^2, 0) -- (1.61803, 1.61803) -- (0, 1.61803^2) -- (0, 1.61803) -- (1.61803, 0); \filldraw[red] (0, 1.61803+0.5) circle (2pt) node[anchor=west] {1}; \filldraw[red] (1.61803/2, 1.61803+0.5) circle (2pt) node[anchor=west] {2}; \filldraw[red] (1.61803/2, 1.61803/2) circle (2pt) node[anchor=west] {3}; \filldraw[red] (1.61803+0.5, 1.61803/2) circle (2pt) node[anchor=west] {4}; \filldraw[red] (1.61803+0.5, 0) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \node[regular polygon,regular polygon sides=5, minimum size=2.6cm, draw] at (0,0) {}; \filldraw[red] (-0.77, 0.94) circle (2pt) node[anchor=west] {1}; \filldraw[red] (0.77, 0.94) circle (2pt) node[anchor=east] {2}; \filldraw[red] (1.16, -0.4) circle (2pt) node[anchor=east] {4}; \filldraw[red] (-1.16, -0.4) circle (2pt) node[anchor=west] {3}; \filldraw[red] (0, -1.2) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure} \caption{Left: The golden L with the inscribed pentagon and Weierstrass points labeled. Right: The corresponding points labeled on the regular pentagon.} \label{fig:goldLWP} \end{figure} We say a point of a Veech surface is \textit{periodic} if it has finite orbit under the action of the Veech group. The Weierstrass points are the only periodic points of the golden L under the action of the Veech group (see \cite[\S 2]{wrightsurv} for a review of periodic points on Veech surfaces and L tables). As a consequence, the action of the Veech group elements on the golden L permutes these points, as seen in Figures \ref{sigma0app} - \ref{sigma3app}.
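Once the permutations induced on these five points are recorded (Lemma \ref{prop:permutation} below), the classification procedure of \S\ref{sec2} reduces to composing permutations. The following short Python sketch (ours, not from \cite{dl}) hardcodes the $\tau_i$ of Lemma \ref{prop:permutation} and carries out steps (2)--(3) of Algorithm \ref{alg:MainAlgorithm}; we apply $\tau_{k_n}$ first, the convention that reproduces the worked example for the tree word $21$ in \S\ref{sec2}.
\begin{verbatim}
# Minimal sketch (ours): compose the Weierstrass-point permutations tau_i
# and classify a midpoint, following the algorithm stated below.
# TAU[i][j] is the image of point j under tau_i.
TAU = {
    0: {1: 2, 2: 1, 3: 4, 4: 3, 5: 5},  # tau_0 = (1 2)(3 4)
    1: {1: 3, 2: 5, 3: 1, 4: 4, 5: 2},  # tau_1 = (1 3)(2 5)
    2: {1: 4, 2: 2, 3: 5, 4: 1, 5: 3},  # tau_2 = (1 4)(3 5)
    3: {1: 1, 2: 3, 3: 2, 4: 5, 5: 4},  # tau_3 = (2 3)(4 5)
}

def classify(tree_word, J):
    """Long, short, or saddle connection for midpoint J in this direction."""
    x = J
    for k in reversed(tree_word):  # tau = tau_{k1} ... tau_{kn}, right-to-left
        x = TAU[k][x]
    if x in (1, 2):
        return "short"
    if x in (3, 4):
        return "long"
    return "saddle connection"

# Tree word 21: midpoints 4, 5 short; 2, 3 long; 1 on a saddle connection.
print({J: classify([2, 1], J) for J in range(1, 6)})
\end{verbatim}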
\begin{figure}[t] \centering \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (0,0) -- (1.61803, 0) -- (1.61803+1, 0) -- (2*1.61803+2, 1.61803) -- (2*1.61803+1, 1.61803) -- (3*1.61803+1, 1.61803+1) -- (2*1.61803+1, 1.61803+1) -- (1.61803+1, 1.61803) -- (0,0); \draw[red] (1.61803, 0) -- (1.61803+1, 0) -- (2*1.61803+1, 1.61803) -- (2*1.61803+1, 1.61803+1) -- (1.61803+1, 1.61803) -- (1.61803, 0); \filldraw[red] (1.61803^2 + 1.61803/2, 1.61803+0.5) circle (2pt) node[anchor=east] {1}; \filldraw[red] (2*1.61803+1, 1.61803+0.5) circle (2pt) node[anchor=west] {2}; \filldraw[red] (1.61803+0.5, 1.61803/2) circle (2pt) node[anchor=west] {3}; \filldraw[red] (1.61803+0.5+1.61803^2/2, 1.61803/2) circle (2pt) node[anchor=west] {4}; \filldraw[red] (1.61803+0.5, 0) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure}% \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (1.61803+1, 1.61803) -- (0,0) -- (1.61803, 0) -- (1.61803+1, 0); \draw (0, 0) -- (1.61803+1, 1.61803) -- (1.61803, 1.61803); \draw (0, 1.61803) -- (1.61803, 1.61803+1) -- (0, 1.61803+1) -- (0, 1.61803); \draw (1.61803+1, 0) -- (1.61803+1, 1.61803); \draw (0, 0) -- (0, 1.61803+1); \draw (1.61803, 1.61803) -- (1.61803, 1.61803+1); \draw[red] (1.61803+1, 1.61803) -- (1.61803, 0) -- (1.61803+1, 0); \draw[red] (0, 0) -- (1.61803, 1.61803); \draw[red] (0, 1.61803+1) -- (0, 1.61803) -- (1.61803, 1.61803+1); \filldraw[red] (1.61803/2, 1.61803+0.5) circle (2pt) node[anchor=west] {1}; \filldraw[red] (0, 1.61803+0.5) circle (2pt) node[anchor=west] {2}; \filldraw[red] (1.61803+0.5, 1.61803/2) circle (2pt) node[anchor=west] {3}; \filldraw[red] (1.61803/2, 1.61803/2) circle (2pt) node[anchor=west] {4}; \filldraw[red] (1.61803+0.5, 0) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure} \caption{Image under $\sigma_0$ before and after cut and paste.} \label{sigma0app} \end{figure} \begin{figure}[t] \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (0,0) -- (1.61803+1, 1.61803) -- (2*1.61803+1, 1.61803+1) -- (3*1.61803+2, 2*1.61803+2) -- (2*1.61803+2, 2*1.61803+1) -- (3*1.61803+2, 3*1.61803+1) -- (2*1.61803+1, 2*1.61803+1) -- (1.61803+1, 1.61803+1) -- (0,0); \draw [red] (1.61803+1, 1.61803) -- (2*1.61803+1, 1.61803+1) -- (2*1.61803+2, 2*1.61803+1) -- (2*1.61803+1, 2*1.61803+1) -- (1.61803+1, 1.61803+1) -- (1.61803+1, 1.61803); \filldraw[red] (1.5*1.61803+1, 1.5*1.61803+1) circle (2pt) node[anchor=west] {1}; \filldraw[red] (2*1.61803+1.5, 2*1.61803+1) circle (2pt) node[anchor=north] {2}; \filldraw[red] (1.61803+1, 1.61803+0.5) circle (2pt) node[anchor=west] {3}; \filldraw[red] (2*1.61803+1.5, 1.5*1.61803+1) circle (2pt) node[anchor=east] {4}; \filldraw[red] (1.5*1.61803+1, 1.61803+0.5) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure}% \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (1.61803, 1.61803+1) -- (0, 1); \draw (1.61803+1, 1) -- (1.61803,0) -- (1.61803+1, 1/1.61803); \draw (0, 1/1.61803) -- (1.61803, 1.61803); \draw (0, 1.61803) -- (1.61803, 1.61803+1); \draw (1.61803+1, 1/1.61803) -- (1.61803, 0); \draw (1.61803, 1.61803) -- (0, 0); \draw (0, 1.61803+1) -- (1.61803, 1.61803+1); \draw (1.61803+1, 1.61803) -- (1.61803, 1.61803); \draw (1.61803+1, 0) -- (0, 0); \draw (1.61803, 1.61803) -- (1.61803, 1.61803+1); \draw (0, 0) -- (0, 1.61803+1); \draw (1.61803+1, 0) -- (1.61803+1, 1.61803); \draw[red] (1.61803, 1.61803+1) -- (1.61803, 1.61803); \draw[red] (0, 1.61803) -- 
(1.61803, 1.61803+1); \draw[red] (1.61803, 0) -- (1.61803+1, 1.61803) -- (1.61803, 1.61803) -- (0, 0); \filldraw[red] (0.5*1.61803, 0.5*1.61803) circle (2pt) node[anchor=west] {1}; \filldraw[red] (1.61803+0.5, 0) circle (2pt) node[anchor=north] {2}; \filldraw[red] (0, 1.61803+0.5) circle (2pt) node[anchor=west] {3}; \filldraw[red] (1.61803+0.5, 0.5*1.61803) circle (2pt) node[anchor=east] {4}; \filldraw[red] (0.5*1.61803, 1.61803+0.5) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure} \caption{Image under $\sigma_1$ before and after cut and paste.} \label{sigma1app} \end{figure} \begin{figure}[t] \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (0, 0) -- (1.61803+1, 1.61803+1) -- (2*1.61803+1, 2*1.61803+1) -- (3*1.61803+1, 3*1.61803+2) -- (2*1.61803+1, 2*1.61803+2) -- (2*1.61803+2, 3*1.61803+2) -- (1.61803+1, 2*1.61803+1) -- (1.61803, 1.61803+1) -- (0, 0); \draw [red] (1.61803+1, 1.61803+1) -- (2*1.61803+1, 2*1.61803+1) -- (2*1.61803+1, 2*1.61803+2) -- (1.61803+1, 2*1.61803+1) -- (1.61803, 1.61803+1) -- (1.61803+1, 1.61803+1); \filldraw[red] (1.61803+0.5, 1.5*1.61803+1) circle (2pt) node[anchor=west] {1}; \filldraw[red] (1.5*1.61803+1, 2*1.61803+1.5) circle (2pt) node[anchor=north] {2}; \filldraw[red] (1.61803+0.5, 1.61803+1) circle (2pt) node[anchor=south] {3}; \filldraw[red] (2*1.61803+1, 2*1.61803+1.5) circle (2pt) node[anchor=east] {4}; \filldraw[red] (1.5*1.61803+1, 1.5*1.61803+1) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure}% \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (1.61803+1, 1.61803) -- (1.61803, 0); \draw (1.61803, 1.61803+1) -- (1, 1.61803) -- (2, 1.61803+1); \draw (2, 0) -- (1.61803+2, 1.61803); \draw (1, 0) -- (1.61803+1, 1.61803); \draw (1, 1.61803) -- (1.61803, 1.61803+1); \draw (1.61803, 0) -- (1.61803+1, 1.61803) -- (1, 0); \draw (1.61803+1, 0) -- (1.61803+2, 1.61803) -- (2, 0); \draw (2, 1.61803+1) -- (1, 1.61803); \draw (1.61803+2, 1.61803) -- (1.61803+1, 0); \draw (1, 0) -- (3, 0); \draw (1.61803+2, 1.61803) -- (1.61803+1, 1.61803); \draw (1, 1.61803+1) -- (1.61803+1, 1.61803+1); \draw (1,0) -- (1, 1.61803+1); \draw (1.61803+2, 0) -- (1.61803+2, 1.61803); \draw[red] (1, 0) -- (1.61803+1, 1.61803) -- (1.61803+1, 1.61803+1) -- (1, 1.61803); \draw[red] (1.61803+2, 1.61803) -- (1.61803+1, 0) -- (1.61803+2, 0); \filldraw[red] (1.61803+1.5, 0.5*1.61803) circle (2pt) node[anchor=west] {1}; \filldraw[red] (0.5*1.61803+1, 1.61803+0.5) circle (2pt) node[anchor=north] {2}; \filldraw[red] (1.61803+1.5, 0) circle (2pt) node[anchor=north] {3}; \filldraw[red] (1, 1.61803+0.5) circle (2pt) node[anchor=east] {4}; \filldraw[red] (0.5*1.61803+1, 0.5*1.61803) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure} \caption{Image under $\sigma_2$ before and after cut and paste.} \label{sigma2app} \end{figure} \begin{figure}[t] \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (0,0) -- (1.61803, 1.61803+1) -- (1.61803+1, 2*1.61803+1) -- (1.61803+1, 3*1.61803+1) -- (1.61803, 2*1.61803+1) -- (1.61803, 2*1.61803+2) -- (0, 1.61803+1) -- (0, 1.61803) -- (0,0); \draw [red] (1.61803, 1.61803+1) -- (1.61803+1, 2*1.61803+1) -- (1.61803, 2*1.61803+1) -- (0, 1.61803+1) -- (0, 1.61803) -- (1.61803, 1.61803+1); \filldraw[red] (0, 1.61803+0.5) circle (2pt) node[anchor=west] {1}; \filldraw[red] (1.61803/2, 1.5*1.61803+1) circle (2pt) node[anchor=west] {2}; \filldraw[red] (1.61803/2, 1.61803+0.5) circle (2pt) node[anchor=west] {3}; 
\filldraw[red] (1.61803+0.5, 2*1.61803+1) circle (2pt) node[anchor=south] {4}; \filldraw[red] (1.61803+0.5, 1.5*1.61803+1) circle (2pt) node[anchor=south] {5}; \end{tikzpicture} \end{subfigure}% \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale=0.86] \draw (0, 1.61803+1) -- (0, 1.61803) -- (0,0) -- (1.61803, 1.61803+1); \draw (1.61803, 0) -- (1.61803+1, 1.61803); \draw (1.61803+1, 0) -- (1.61803+1, 1.61803) -- (1.61803, 0); \draw (1.61803, 1.61803) -- (1.61803, 1.61803+1) -- (0, 0); \draw[red] (1.61803, 0) -- (1.61803+1, 1.61803) -- (1.61803, 1.61803) -- (0, 0); \draw[red] (0, 1.61803+1) -- (0, 1.61803) -- (1.61803, 1.61803+1); \filldraw[red] (0, 1.61803+0.5) circle (2pt) node[anchor=west] {1}; \filldraw[red] (1.61803/2, 0.5*1.61803) circle (2pt) node[anchor=west] {2}; \filldraw[red] (1.61803/2, 1.61803+0.5) circle (2pt) node[anchor=west] {3}; \filldraw[red] (1.61803+0.5, 0) circle (2pt) node[anchor=north] {4}; \filldraw[red] (1.61803+0.5, 0.5*1.61803) circle (2pt) node[anchor=south] {5}; \draw (1.61803, 1.61803) -- (1.61803+1, 1.61803); \draw (0, 0) -- (1.61803+1, 0); \draw (0, 1.61803+1) -- (1.61803, 1.61803+1); \end{tikzpicture} \end{subfigure} \caption{Image under $\sigma_3$ before and after cut and paste.} \label{sigma3app} \end{figure} \begin{lemma} \label{prop:permutation} For each $\sigma_i$, the permutation of the Weierstrass points (as labeled in Figure \ref{fig:goldLWP}) is a product of disjoint transpositions in the symmetric group $S_5$ given by the following: \begin{center} $\tau_0 = (1\; 2)(3\; 4)$, $\tau_1 = (1\; 3)(2\; 5)$, $\tau_2 = (1\; 4)(3\; 5)$, $\tau_3 = (2\; 3)(4\; 5)$ \end{center} with $\tau_i$ being the permutation associated with $\sigma_i$. \end{lemma} \begin{proof} This can be verified by the image of the golden L under each $\sigma_i$ after cut and paste, as shown in Figures \ref{sigma0app} - \ref{sigma3app}. \end{proof} \begin{remark} Since $\tau_i$ is a product of disjoint transpositions, it has order 2, so $\tau_i^{-1} = \tau_i$ for all $i$. As a result, the permutation associated with $\sigma_i^{-1}$ is $\tau_i$. \label{rem:order} \end{remark} \subsection{Long and short cylinders} Davis and Leli\`evre in \cite{dl} show that both the golden L and double pentagon decompose into exactly $2$ cylinders in any given periodic direction. In particular, the decomposition gives a \textit{long cylinder} and a \textit{short cylinder}, and the ratio of the circumferences of these two cylinders is always $\phi$ (see \cite[\S 2.1]{dl}). \section{Statement of algorithm and proof}\label{sec2} The purpose of this section is to address the following question stated by McMullen \cite{mcmullenQ}: \begin{question} Given an edge midpoint and periodic direction on the regular pentagon billiard table, will the trajectory be long, short, or a saddle connection? \end{question} We find that we can define an explicit procedure for determining which midpoints will give a short or long trajectory for any periodic direction. The procedure also shows that starting from one of the midpoints will give a section of a saddle connection. \begin{algo} \label{alg:MainAlgorithm} Given a vector $\vec{v}$ in a periodic direction in the regular pentagon and a starting midpoint on side $J \in \{1,2,3,4,5\}$, as labeled in Figure \ref{fig:goldLWP}, we can determine if the trajectory is long, short, or a saddle connection through the following steps: \begin{enumerate} \item Find the tree word $a = k_1 k_2 \dots k_n$ associated with $P^{-1}\vec{v}$.
\item Compute the product $$\tau := \tau_{k_1} \cdot \tau_{k_{2}} \cdots \tau_{k_n} \in S_5.$$ \item Then, \begin{itemize} \item If $\tau(J) \in \{1, 2\}$, then the trajectory is short. \item If $\tau(J) \in \{3, 4\}$, then the trajectory is long. \item If $\tau(J) = 5$, then the trajectory is a saddle connection. \end{itemize} \end{enumerate} \end{algo} \begin{proof} Corollary \ref{cor:PinvIsPer} tells us $P^{-1}\vec{v}$ is periodic in the golden L, and Theorem \ref{thm:PerDirec} gives us a corresponding tree word $k_1 k_2 \cdots k_n$ such that $P^{-1}\vec{v} = \ell_{\vec{v}}\sigma_{k_n}\cdots \sigma_{k_2} \sigma_{k_1} \big(\begin{smallmatrix} 1\\ 0 \end{smallmatrix}\big)$ for some length $\ell_{\vec{v}}$. We then apply the following sequence of inverse matrices to both the golden L and the direction vector $P^{-1}\vec{v}$: $$\sigma_{k_1}^{-1} \sigma_{k_2}^{-1} \cdots \sigma_{k_n}^{-1}.$$ Two things happen. First, the direction vector is changed: all the $\sigma_i$'s cancel out, leaving us with a horizontal vector $\ell_{\vec{v}}\big(\begin{smallmatrix} 1\\ 0 \end{smallmatrix}\big)$. Second, the golden L is acted on by this product. Recall $\sigma_i$ is in the Veech group of the golden L, so $\sigma_i^{-1}$ must also be in the Veech group. Then by closure, the product above must also be in the Veech group. As a result, we may perform cut and paste operations and get back to the golden L. Moreover, by Lemma \ref{prop:permutation} and Remark \ref{rem:order}, the labeled midpoints (the Weierstrass points) will be permuted by the product $\tau := \tau_{k_1} \cdot \tau_{k_{2}} \cdots \tau_{k_n}$. Thus, midpoint $J$ will be sent to midpoint $\tau(J)$. We then have a horizontal direction originating from midpoint $\tau(J)$ on the golden L. Figure \ref{horizL} shows us clearly whether midpoint $\tau(J)$ is in a long cylinder, in a short cylinder, or on a saddle connection in the horizontal direction. Finally, we use Lemma \ref{lem:LongShortConnec} to jump back to the regular pentagon. The result follows immediately. \end{proof} \begin{figure}[h] \centering \begin{tikzpicture}[scale=1.45] \draw (0,0) -- (1.61803, 0) -- (1.61803^2, 0) -- (1.61803^2, 1.61803) -- (1.61803, 1.61803) -- (1.61803, 1.61803^2) -- (0, 1.61803^2) -- (0, 1.61803) -- (0,0); \draw [red] (1.61803, 0) -- (1.61803^2, 0) -- (1.61803, 1.61803) -- (0, 1.61803^2) -- (0, 1.61803) -- (1.61803, 0); \filldraw[red] (0, 1.61803+0.5) circle (2pt) node[anchor=south east] {1}; \filldraw[red] (1.61803/2, 1.61803+0.5) circle (2pt) node[anchor=south west] {2}; \filldraw[red] (1.61803/2, 1.61803/2) circle (2pt) node[anchor=south west] {3}; \filldraw[red] (1.61803+0.5, 1.61803/2) circle (2pt) node[anchor=south west] {4}; \filldraw[red] (1.61803+0.5, 0) circle (2pt) node[anchor=south] {5}; \draw [ultra thick, <->] (0, 1.61803/2) -- (1.61803^2, 1.61803/2); \draw [ultra thick, <->] (0, 1.61803 + 1/2) -- (1.61803, 1.61803 + 1/2); \draw [ultra thick, <->] (1.61803, 0) -- (1.61803^2, 0); \end{tikzpicture} \caption{A short trajectory (top), long trajectory (middle), and saddle connection (bottom) in the horizontal direction in the golden L.} \label{horizL} \end{figure} In the following example, we illustrate the algorithm using the tree word $21$: \begin{example} Consider the direction $21$ in the regular pentagon. The algorithm tells us to evaluate $$\tau = \tau_2 \cdot \tau_1 =(1\; 4)(3\; 5) \cdot (1\; 3)(2\; 5) = (1\; 5 \; 2\; 3\; 4).$$ Then $\tau(1) = 5$, $\tau(2) = 3$, $\tau(3)=4$, $\tau(4)=1$, and $\tau(5)=2$.
Thus, the algorithm tells us that midpoints 4 and 5 are in the short cylinder, midpoints 2 and 3 are in the long cylinder, and midpoint 1 lies on a saddle connection. To check this, we can draw in exactly what the trajectories look like in the regular pentagon. We first find the direction vector $\vec{v}$ in the golden L that corresponds to the tree word $21$ (following the procedure of Example \ref{ex:treeword}). That turns out to be $\vec{v} = (2\phi + 2, 2\phi + 1)$. We then calculate the corresponding direction on the regular pentagon: $$P\vec{v} = \begin{pmatrix} 1 & \cos{\pi/5} \\ 0 & \sin{\pi/5} \end{pmatrix} \cdot \begin{pmatrix} 2\phi + 2 \\ 2\phi + 1 \end{pmatrix} \approx \begin{pmatrix} 8.66 \\ 2.49 \end{pmatrix}.$$ We can then draw in the trajectory using the standard rules of billiards. The long and short trajectories can be seen in Figure \ref{fig:LongShortEx}, along with the midpoints the trajectories hit. \end{example} \begin{figure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{Images/21LongFromSides23.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{Images/21ShortFromSides45.png} \end{subfigure} \caption{A long trajectory (left) and a short trajectory (right) in the direction $21$.} \label{fig:LongShortEx} \end{figure} This problem may be extended in a number of interesting directions. One interesting question is the following. \begin{question} How does this algorithm generalize to regular $(2n+1)$-gons? \end{question} We remark that this method only readily generalizes to surfaces with periodic points under the Veech group that are exactly the Weierstrass points of the surface, which are also the midpoints of edges of the regular polygon. \section{Simplifying Tree Words}\label{sec3} For a given finite tree word $a = k_1k_2 \dots k_n$, we define the \textit{derived word} $a'$ to be $a$ with the removal of repeated pairs of numbers. That is, if $k_i=k_{i+1}$, then both $k_i$ and $k_{i+1}$ will be removed. We say $a$ is a \textit{base word} if $a=a'$. Thus, every tree word $a$ has a base word, and that base word can be reached by deriving $a$ until there are no more pairs of numbers. Let $\Omega$ be the space of all possible tree words. Let $\langle e \rangle$ denote the empty word. Define $\mu : \Omega \to \Omega$ to be the function that takes a tree word to its base word. \begin{example} Consider the tree word $a=231221$. Then $a'=2311$. We may derive again and get $a''=23$. Notice $a''$ does not have any pairs of numbers, so it cannot be derived further. Thus, the base word for $a=231221$ is $a''=23$. Furthermore, we write $\mu(231221)=23$. \end{example} We now state a corollary to our main theorem that may help reduce the calculations needed for determining the long and short cylinder decomposition for long tree words. \begin{corollary} For any given tree word $a$, the corresponding base word $\mu(a)$ and $a$ have the same midpoints in the long cylinder, short cylinder, and on a saddle connection. \end{corollary} \begin{proof} Consider a tree word with a pair of numbers $k_i = k_{i+1}$. By our algorithm, we then consider the product $\tau= \cdots \tau_{k_i}\tau_{k_{i+1}} \cdots $. But by Remark \ref{rem:order}, $\tau_{k_i}\tau_{k_{i+1}}=(1)$, the identity element of $S_5$. Thus, this pair of numbers in our tree word cancels out, resulting in the derived tree word having the same midpoints in the same cylinders as our original tree word.
This may then be repeated until we reach the base word, proving the corollary. \end{proof} This has an interesting consequence. That is, when we run the algorithm on a tree word, we may first reduce it to its base word and then run the algorithm on that base word. This will give the same result as if we ran it on the original word. As a result, for any tree word whose base word is $\langle e \rangle$, it is immediate that the midpoints $1,2$ lie in the short cylinder, midpoints $3,4$ lie in the long cylinder, and midpoint $5$ lies on a saddle connection. Thus, we ask the following question: \begin{question} Given a tree word of length $2n$, what is the probability that the base word will be the empty word $\langle e \rangle$, i.e., what is the probability that $\mu(a)=\langle e \rangle$ for a given tree word $a$ of length $2n$? \end{question} \subsection*{Acknowledgements} We would like to thank Diana Davis, Samuel Leli\`evre, Jane Wang, and Sunrose Shrestha for helping us work on this problem in the Summer@ICERM 2021 REU. The idea for this algorithm was presented to us by Leli\`evre, who continually helped us understand and explore this problem. \printbibliography \end{document}
\section{Introduction} Proper scoring rules are scoring functions that incentivize truthful information elicitation: an agent who has a subjective belief about an uncertain event maximizes his expected score by making a prediction according to his belief. When the agent can acquire costly information to refine his belief, which proper scoring rules maximally incentivize the agent's information acquisition? We formalize this question and show that the popular quadratic and log scoring rules are optimal for information acquisition under different scenarios. Suppose a principal wants to elicit a probabilistic prediction about an uncertain state $W$ (e.g. the outcome of a future coin flip). An agent can report his prior $P(W)$ (e.g. uniform on each side). Alternatively, the agent can obtain additional costly information $S$, say by tossing the coin several times, and then update his prediction to $P(W|S=s)$. Any proper scoring rule can elicit the agent's prediction truthfully once it is formed. But not all scoring rules incentivize the agent to gather costly information and improve his posterior prediction. In particular, the \emph{information gain} --- the expected difference in payment between the posterior prediction and the prior prediction --- may be too small for the agent to obtain information. We propose an optimization framework to study the principal's problem of designing a proper scoring rule to incentivize the agent to acquire costly information, subject to a bounded payment constraint. The principal hopes to maximize the agent's information gain, with respect to the chosen scoring rule, for the worst information structure that the agent may have. This worst-case consideration is natural. For example, if the principal wants to design a scoring rule to incentivize her students to grade assignments with effort, the principal needs to ensure her scoring rule can incentivize all students to work hard when students have heterogeneous information structures. We study two settings of this scoring rule design problem: static and asymptotic settings. In the static setting, the agent can access one signal. The principal's problem (\cref{prob:opt}) is to design a proper scoring rule that 1) maximizes the information gain of the worst information structure in a known set $\mathcal{P}$ that the agent's information structure belongs to, and 2) is subject to a bounded expected payment constraint. We require the expected score to always be bounded, as otherwise the problem is meaningless: we could always scale the scoring rule. Our setting is similar to that of \citet{hartline2020optimization}, but they require that the ex-post payment be bounded. See the related work for more comparison. We also consider the reverse of \cref{prob:opt} (our \cref{def:settle}) in the static setting and ask, for any given proper scoring rule, whether there exists a set of information structures that makes the scoring rule uniquely optimal for incentivizing information acquisition. In the asymptotic setting (\cref{prob:limit}), the agent can adaptively and indefinitely refine his prediction. For instance, suppose we want to design a scoring rule and use it as a market scoring rule in a prediction market to encourage a sequence of forecasters to exert effort and report their posterior prediction. Each forecaster's incentive depends on the previous forecaster's information, and they can adaptively decide whether to spend the effort.
Thus we need to design a scoring rule that optimizes for a sequence of collections of information structures where the $n$-th collection represents all possible information structures of any $k$-th forecaster with $k\le n$. Another example is the large sample theory in statistics, which studies estimators with an indefinitely growing sample size. In this context, \cref{prob:limit} asks what the optimal proper scoring rule is when the agent can adaptively acquire sample points to improve his prediction. \subsection{Our Technique and Contributions} Because any proper scoring rule can be specified as a convex function of the prediction and its sub-gradient \cite{mccarthy1956measures,savage1971elicitation,gneiting2007strictly}, the design space of the optimization in \cref{prob:opt} and \ref{prob:limit} can be converted to the set of bounded convex functions, and the information gain becomes the Jensen gap of the associated convex function. This gives us a geometric interpretation of our optimization problem: the information gain amounts to how curved (an integral of the second derivative) the associated convex function $H$ is at the information structure. However, the double iterated integral of the second derivative of $H$ is bounded, since $H$ is bounded. \Cref{sec:static} studies the static setting. For \cref{prob:opt}, we first consider the extreme case when the principal knows the agent's information structure exactly (i.e. $\mathcal{P}$ is a singleton), and show in \cref{sec:singleton} that the optimal convex function is an upside-down pyramid (\cref{fig:v_shape}). Then \cref{sec:finite} considers the case that the collection of information structures is finite: both the number of information structures and the support of any information structure are finite. \cref{thm:finite} shows we can solve the optimization problem efficiently and that the optimal $H$ is a convex piecewise linear function. \cref{sec:unique} looks at \cref{def:settle}: Given any convex $H$, is there a collection of information structures for which $H$ is (uniquely) optimal? We show a necessary and sufficient condition for a convex piecewise linear function to be uniquely optimal in \cref{thm:lin_suff}. However, we show the quadratic scoring rule cannot be uniquely optimal in \cref{prob:opt}. \Cref{sec:seq} considers the asymptotic setting, which generalizes the static setting and evaluates a scoring rule relative to the best scoring rule, a measure called the relative information gain (\cref{prob:limit}). \Cref{sec:quadr} studies general Bayesian estimation with a large number of i.i.d. samples. Inspired by the (Bayesian) Cram\'er-Rao bound~\cite{bj/1186078362}, which shows the Bayes estimator with $n$ samples has variance of order $\Theta(1/n)$, \cref{sec:quadr} considers sequences of collections of information structures with vanishing covariance. \Cref{thm:quadr} shows that the quadratic scoring rule optimizes the relative information gain against any smooth function. To prove this, we relate the information gain to the strong convexity of our convex function and show that the quadratic scoring rule is the ``most strongly convex'' function. Finally, \cref{sec:exp} studies information structures that are in an exponential family with conjugate prior, namely Beta-Bernoulli and Dirichlet-categorical information structures, and shows that the log scoring rule is optimal.
Specifically, \cref{sec:beta} studies a sequence of collections of information structures that converges to the collection of all Beta-Bernoulli distributions and shows that the relative information gain of the log scoring rule is optimal against all smooth functions. Additionally, through simulation, \cref{fig:diff} shows that the optimal scoring rule for Beta-Bernoulli distributions with a large enough sample size is close to the log scoring rule. \Cref{sec:dir} extends this result to non-binary settings and shows that the log scoring rule optimizes the relative information gain against all functions that are smooth up to the boundary. To achieve these results, we first show that the limit of the information gain is an elliptic differential operator applied to the convex function. Then we use a maximum principle to show that the log scoring rule maximizes the elliptic differential operator's value~\cite{evans10,alias2016maximum}. \subsection{Related Work} Our problem can be seen as purchasing predictions from strategic people. The work on this topic can be roughly divided into two categories according to whether agents can misreport their signal or prediction. Below we focus on the relationship of our work to the most relevant technical scholarship. In the first category, to ensure that agents report their signals or predictions truthfully, there are two settings according to whether money is used for incentive alignment. In the first setting, the analyst uses monetary payments to incentivize agents to reveal their data truthfully. The challenge is to ensure truth-telling gets the highest payments. Existing works verify agents' reports by using either an observable ground truth (proper scoring rules) or peers' reports (peer prediction). In the second setting, individuals' utilities directly depend on an inference or learning outcome (e.g. they want a regression line to be as close to their own data point as possible) and hence they have incentives to manipulate their reported data to influence the outcome~\cite{dekel2010incentive,meir2012algorithms,perote2004strategy,hardt2015strategic,chen2018strategyproof}. Our setting is inspired by \citet{hartline2020optimization}, where there is an observable ground truth to verify an agent's reports and we want to maximize the agent's incentive under a budget constraint. However, they require that the ex-post payment be bounded, while we ask that the ex-ante payment be bounded. This different boundedness condition contributes to the distinction between our results and theirs. In particular, ex-post bounded payment excludes the log scoring rule, which is arguably one of the most important proper scoring rules. Furthermore, we show the log scoring rule is indeed optimal when the agent's belief is in an exponential family with a conjugate prior. Additionally, our paper focuses more on unknown information structures, while theirs focuses more on known information structures. They show that if the information structure is exactly known, the optimal scoring rule is v-shaped in the one-dimensional case, which is qualitatively similar to our \cref{prop:v-shape}. They also consider general information structures and show that the quadratic scoring rule is optimal up to a constant factor, which also holds for any strongly convex function. In contrast, we show the quadratic scoring rule is uniquely optimal. \citet{neyman2020binary} also study an optimal scoring rule design problem, but their work is more closely related to sequential methods~\cite{ghosh2011sequential}.
Instead of imposing bounded payment conditions, their objective comprises both payment and accuracy. They consider a special case of Beta-Bernoulli information structures where the prior is uninformative, and design a scoring rule for an agent to sequentially acquire samples from a Bernoulli distribution. In the second category, agents cannot misreport their signal or prediction. The problem of purchasing data from people has been investigated with different focuses, e.g., privacy concerns~\cite{ghosh2011selling, fleischer2012approximately,ghosh2014buying,nissim2014redrawing,cummings2015truthful,waggoner2015market}, effort and cost of data providers~\cite{aaron2012conducting,cai2014optimum,abernethy2015actively,chen2017optimal,zheng2017active,chen2018prior}, and reward allocation~\cite{ghorbani2019data,agarwal2019market}. \section{Problem Description}\label{sec:setting} A \emph{principal} wants to collect a high quality prediction of an unknown state of the world $W$ from an \emph{agent} who can access a costly signal $S$. The state space $\Omega$ is finite with $|\Omega| = d$ and has a typical realization $\omega \in \Omega$. The signal space $\mathcal{S}$ is a measurable set with a typical realization $s\in \mathcal{S}$. We refer to a joint distribution $P$ on $\Omega\times \mathcal{S}$ as an \emph{information structure}. Because accessing the signal $S$ is costly for the agent, the principal wants to design a scoring rule that incentivizes the agent to exert the effort and report the posterior prediction instead of the prior prediction. However, the principal only knows a collection of information structures $\mathcal{P}\subseteq \Delta(\Omega\times\mathcal{S})$ that the agent's information structure falls into. Before stating our objective, we first simplify the notation by calibrating the agent's signal. Specifically, let the agent's prediction after observing the signal be a random variable on $\Delta(\Omega)$, $X = X(s) := P(W\mid S = s)$ when $S = s$; thus, his prediction without the signal is $\E X = P(W)$. For the principal, an information structure $P\in \mathcal{P}$ affects only the random variable of predictions $X$. Hence, we will use $P$ and $X$ interchangeably for information structures in this paper, and let $\mathcal{X}$ be the collection of random variables of predictions induced from information structures $P$ in $\mathcal{P}$. With the above definitions, the principal wants to maximize the difference in expected payment between reporting the prior prediction $\E X$ and reporting the posterior prediction $X$.
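To make the calibration concrete, the following minimal Python sketch computes the calibrated predictions $X(s)$ and the prior $\E X$ from a joint table; the joint distribution below is made up purely for illustration.
\begin{verbatim}
import numpy as np

# Hypothetical joint distribution P on Omega x S with d = 2 states (rows)
# and two signal values (columns); the entries sum to 1.
P = np.array([[0.30, 0.10],
              [0.15, 0.45]])

P_S = P.sum(axis=0)   # marginal distribution of the signal S
X = P / P_S           # column s is the posterior X(s) = P(W | S = s)
EX = X @ P_S          # prediction without the signal

assert np.allclose(EX, P.sum(axis=1))   # calibration: E[X] = P(W)
\end{verbatim}
The assertion is simply the law of total probability: averaging the posteriors over the signal marginal recovers the prior, so $\E X = P(W)$ as claimed above.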
Because a proper scoring rule $PS:\Omega\times \Delta(\Omega)\to \mathbb{R}$ can be associated with a convex function $H:\Delta(\Omega)\to \mathbb{R}$ so that \begin{equation}\label{eq:ps} PS(\omega, x) = H(x)+\partial H(x)\cdot (\mathbf{1}_\omega- x) \end{equation} for all $x\in \Delta(\Omega)$ and $\omega\in \Omega$ (see \cref{sec:ps} for more properties of scoring rules), given an information structure $X$, the value of observing the signal under the proper scoring rule with $H$ is $$\J_H(X) := \E_{X}[H(X)]-H(\E X).$$ We call $\J_H(X)$ the \emph{information gain} of $X$ on $H$, which measures the difference in expected scores between agents who exert effort and those who do not.\footnote{$\J_H(X)$ can be seen as the value of information for the agent~\cite{howard1966information}, or the Jensen gap of the random variable $X$ on $H$~\cite{abramovich2016some}.} Given $\Omega$ and $\mathcal{X}$, the principal chooses a convex function $H$ that maximizes the worst-case information gain over $\mathcal{X}$, which we call the information gain of $\X$ on $H$, $$\Obj_{\X}(H):=\inf_{X\in \mathcal{X}} \J_H(X).$$ We require the scoring rules to be \emph{bounded} so that the ex-ante payment of truth-telling is always in $[0,1]$. By a simple calculation, a bounded scoring rule has $H$ in $\mathcal{H} := \{H:0\le H(x) \le 1, \forall x\in \Delta(\Omega)\}$. Now we are ready to define our max-min optimization problem for the principal. First, \cref{sec:static} considers a max-min optimization problem given a collection of information structures. \begin{prob}[name = static,label = prob:opt] Given a set of information structures $\mathcal{X}$ and $\Omega$, find a bounded $H$ that maximizes the information gain of $\X$, \begin{equation}\label{eq:obj} \Obj_{\mathcal{X}}(H) = \max_{\tilde{H}\in \mathcal{H}}\Obj_{\mathcal{X}}(\tilde{H}). \end{equation} We further define $\Opt(\mathcal{X}):= \max_{H\in \mathcal{H}}\Obj_{\mathcal{X}}(H)$.\footnote{\cref{eq:obj} attains the maximum because the space $\mathcal{H}$ with the sup norm $\|\cdot\|_\infty$ is complete and, for any $\mathcal{X}$, the functional $\Obj_{\mathcal{X}}$ is continuous on the normed space $(\mathcal{H}, \|\cdot\|_\infty)$.} \end{prob} We can also ask the converse of \cref{prob:opt}: given any convex $H$, is there a collection of information structures $\mathcal{X}$ for which $H$ is (uniquely) optimal in \cref{prob:opt}? These problems can provide an incentive-based understanding of why certain proper scoring rules, e.g., the log scoring rule and the quadratic scoring rule, are commonly used. We say a collection of information structures $\X$ \emph{settles} a convex function $H\in \mathcal{H}$ if $H$ is optimal for $\X$. Given $U\subseteq\Delta_d$, we say $\X$ \emph{uniquely settles $H$ on $U$} if for all $H^\star\in \argmax_{\tilde{H}}\Obj_\X(\tilde{H})$, $H^\star(x) = H(x)$ for all $x\in U$, and \emph{uniquely settles} $H$ when $U = \Delta_d$. Equivalently, $H$ is the unique optimum for $\X$: $\{H\} = \argmax_{\tilde{H}} \Obj_\X(\tilde{H})$. \begin{prob}[label=def:settle] Given $U\subseteq \Delta_d$ and a convex function $H$, find a collection of information structures $\mathcal{X}$ so that $\X$ uniquely settles $H$, i.e., $H$ is the unique optimum for $\X$. \end{prob} Note that \cref{def:settle} requires unique optimality. Otherwise, if a collection of information structures contains a \emph{constant information structure} $X$ with $X = \E X$, any $H$ has zero information gain on $X$ and is thus optimal.
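As a simple illustration of the information gain, consider a binary state space and the quadratic convex function $Q_2(x) = 4(x-\tfrac{1}{2})^2$ that appears in \cref{sec:unique}. Since $Q_2$ is quadratic, its Jensen gap is exactly a scaled variance, $$\J_{Q_2}(X) = \E\left[4(X-\tfrac{1}{2})^2\right] - 4(\E X-\tfrac{1}{2})^2 = 4\left(\E[X^2]-(\E X)^2\right) = 4\Var(X),$$ so under the associated quadratic scoring rule, the agent's value of acquiring the signal is proportional to how much the signal moves his prediction in mean square.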
The agent may adaptively conduct a sequence of experiments and access a sequence of signals. Hence the principal may want the agent to access as many signals as possible, and needs to ensure the information gain of each additional signal is as large as possible. \begin{prob}[name = asymptotic,label = prob:limit] Let $\X_0,\dots$ be a sequence of collections of information structures, none of which contains a constant information structure, and let $\bar{\X}_n:= \cup_{k\le n} \X_k$ for all $n$. The principal chooses $H$ to maximize the \emph{relative information gain} of the sequence $\bar{\X}_0,\dots$, \begin{equation}\label{eq:obj_comp} \inf_{\tilde{H}\in \mathcal{H}}\lim_{n\to \infty}\frac{\Obj_{\bar{\X}_n}(H)}{\Obj_{\bar{\X}_n}(\tilde{H})}. \end{equation} \end{prob} As in the example in the introduction, to predict the outcome of a biased coin, the agent can observe any number of samples from the coin. The principal has a sequence of collections of information structures $\X_0,\X_1,\dots$ where each element in $\X_n$ is a possible information structure of the $(n+1)$-th sample. Specifically, given an outcome $s^{(\le n)} = (s^{(1)},\dots, s^{(n)})\in \{0,1\}^n$ of the first $n$ signals $S^{(\le n)}$, let $X_{s^{(\le n)}}(s) = P(W\mid S^{(\le n)} = s^{(\le n)}, S^{(n+1)} = s)$ be the \emph{posterior on $W$ with the $(n+1)$-th signal conditional on $S^{(\le n)} = s^{(\le n)}$}, or \emph{posterior with the $(n+1)$-th signal} for short. Then $\X_n = \{X_{s^{(\le n)}}, \forall s^{(\le n)}\in \{0,1\}^n\}$. As asymptotic theory in statistics studies estimators with indefinitely growing sample size, \cref{prob:limit} evaluates proper scoring rules when the agent may collect an indefinite amount of data. The notion of relative information gain is particularly useful when the cost of each signal is small, or the number of signals is indefinite. \Cref{eq:obj_comp}, intuitively, measures the information gain of $\lim_{n\to \infty} \bar{\X}_n$ on $H$. Indeed, \cref{prop:opt2limit} shows that \cref{prob:limit} contains \cref{prob:opt} as a special case when $\Opt(\lim_n \bar{\X}_n)>0$. Furthermore, using the ratio of information gains, \cref{eq:obj_comp} is meaningful even when $\Opt(\lim_n \bar{\X}_n)=0$. \begin{proposition}\label{prop:opt2limit} Given an increasing sequence $\X_1, \dots$ with $\X:=\lim_{n\to \infty} \X_n$, if $\Opt(\X)>0$, then $H\in \mathcal{H}$ is optimal for $\X$ in \cref{prob:opt} if and only if $H$ is optimal for the sequence $\X_1,\dots$ in \cref{prob:limit}. \end{proposition} \section{Preliminaries}\label{sec:pre} Here we list some notation. Given positive integers $d$ and $n$, let $[n] := \{1, \ldots, n\}$, and let $\Delta = \Delta_{d} = \Delta(\Omega)\subset \R^d$ be the probability simplex over $\Omega = [d]$. The vertices of the simplex $\Delta_d$ are $\hat{e}_j$ for $j\in [d]$, which are also the standard basis vectors of $\R^d$. We set the interior and the boundary of $\Delta_d$ to be $\relint(\Delta_d) := \{x\in \Delta_d:x_k>0,\forall k\in [d]\}$ and $bd(\Delta) := \Delta\setminus\relint(\Delta)$, respectively. We say a function $h\in \mathcal{C}^\infty(U)$ is smooth on an open set $U$ if $h$ has derivatives of all orders in $U$, and $h\in \mathcal{C}^\infty(\overline{U})$ is smooth on a closed set $\overline{U}$ if all derivatives of $h$ are uniformly continuous in $\overline{U}$. We further write the all-ones vector $\mathbf{1} := (1, \ldots, 1)\in\R^d$ and $c := \frac{1}{d}\mathbf{1}$. The $2$-norm of a matrix $A$ is $\|A\|_2 := \max_{v:\|v\|_2 = 1} \|Av\|_2$.
\subsection{Proper Scoring Rules}\label{sec:ps} A \emph{scoring rule} for a random variable $W\in \Omega$ is a function $PS:\Omega\times\Delta(\Omega)\to \R$ where $PS(\omega, \hat{x})$ is the score assigned to a prediction $\hat{x}\in \Delta(\Omega)$ when $W = \omega$. The scoring rule is (strictly) proper if for every random variable $W$ on $\Omega$ with distribution $x$, setting $\hat{x} = x$ (uniquely) maximizes the expected score $\E_{W\sim x} PS(W,\hat{x})$. In other words, if $W$ is distributed according to $x$, then truthfully reporting $x$ maximizes the expected score. \begin{theorem}[Savage representation~\cite{mccarthy1956measures,savage1971elicitation,gneiting2007strictly}]\label{thm:ps} For every (strictly) proper scoring rule $PS$, there exists a (strictly) convex function $H:\Delta(\Omega)\to \R$ so that for all $\omega\in \Omega$ and $x\in \Delta(\Omega)$ $$PS(\omega, x) = H(x)+ \partial H(x)\cdot(\mathbf{1}_\omega-x)$$ where $\partial H(x)$ is a sub-gradient of $H$ at $x$ and $\mathbf{1}_\omega\in \Delta(\Omega)$ is the indicator function with $\mathbf{1}_\omega(\omega') = 1$ if $\omega' = \omega$ and $0$ otherwise. Conversely, for every (strictly) convex function $H: \Delta(\Omega)\to \R$, there exists a (strictly) proper scoring rule such that the above condition holds. \end{theorem} We list some common proper scoring rules with their associated convex functions, scaled to be in $\mathcal{H}$. \begin{description}\itemsep 0pt\topsep-6pt\partopsep-6pt \item[quadratic scoring rule] $Q(x) = \frac{d}{d-1}\sum_{k = 1}^d (x_k-1/d)^2$ \item[spherical scoring rule] $H_s(x) = \frac{\sqrt{d}}{\sqrt{d}-1}\sqrt{\sum_k x_k^2}-\frac{1}{\sqrt{d}-1}$ \item[log scoring rule] $H_{\ln}(x) = \frac{1}{\ln d}\sum_k x_k\ln x_k+1$ \end{description} Note that the convex functions $Q$ and $H_s$ for the quadratic and spherical scoring rules are smooth on the closed set $\Delta_d$, but $H_{\ln}$ for the log scoring rule is only smooth on the open set $\relint(\Delta_d)$. When the state space $\Omega$ is binary, the associated convex function is one-dimensional. For every proper scoring rule $PS$, there exists a convex function $H:[0,1]\to\R$ so that for all $x\in [0,1]$ and binary events $\omega\in \{0,1\}$ \begin{equation}\label{eq:ps_bin} PS(\omega, x) = \begin{cases} H(x)+ \partial H(x)\cdot (1-x)&\text{ when } \omega = 1,\\ H(x)-\partial H(x)\cdot x&\text{ when } \omega = 0.\end{cases} \end{equation} \subsection{Convex Functions}\label{sec:conv} In \cref{sec:quadr,sec:exp}, we will restrict our functions to be smooth and utilize their analytic properties ($k$-th order derivatives). Though the domain of our convex functions is the probability simplex $\Delta_d$, which is a submanifold of $\R^d$, we can extend any smooth function $h$ on $\Delta_d$ to a smooth function on $\R^d_{\ge 0} := \{x\in \R^d: x_k\ge 0, \forall k\in [d]\}$,\footnote{Besides using Whitney's Extension Theorem, for example, we can also define $(x_1,\dots,x_d)\mapsto \left(\sum_{k = 1}^d x_k\right)\cdot h\left(\frac{x_1}{\sum_{k = 1}^d x_k},\dots,\frac{x_d}{\sum_{k = 1}^d x_k}\right)$ when $x\neq \mathbf{0}$, and $0$ when $x = \mathbf{0}$. Furthermore, several common scoring rules are already defined on $\R^d_{\ge 0}$, e.g., the log scoring rule and the quadratic scoring rule.} and use standard notation to discuss the $k$-th derivative of $h$, instead of using cumbersome coordinate charts. We first define the notion of strong convexity on $\relint(\Delta)$.
\begin{definition}\label{def:strongly} $H$ is \emph{$\alpha$-strongly convex} on $\relint(\Delta)$ if for all $x, y\in \relint(\Delta)$ \begin{equation}\label{eq:stronglyconvex} H(y)\ge H(x)+\nabla H(x)(y-x)+\frac{\alpha}{2}\|y-x\|^2. \end{equation} \end{definition} Note that the pair $x$ and $y$ is restricted to $\relint(\Delta)$, so the equivalent condition on the Hessian of $H$ is also restricted to the tangent space of $\Delta$. Formally: \begin{lemma}[Hessian]\label{lem:hessian} Suppose $H\in \mathcal{H}\cap \mathcal{C}^\infty(\relint(\Delta))$. Then $H$ is $\alpha$-strongly convex on $\relint(\Delta)$ if and only if for all $x\in \relint(\Delta)$ and $v\in \R^d$ with $\mathbf{1}^\top v = 0$, $v^\top \nabla^2 H(x) v\ge \alpha \|v\|^2$. \end{lemma} \subsection{PDE and Maximum Principle} We will associate the objective in \cref{prob:limit} with elliptic differential operators and exploit the maximum principles of our differential operator. \begin{definition}[\citet{evans10}]\label{def:diffop} Let $U\subset \R^d$ be an open and bounded set. $\mathcal{L}$ is a \emph{second order differential operator} with $A:U\to \R^{d\times d}$ on a field $u:U\to \R$ if for all $x\in U$ $$\mathcal{L}u(x) = \sum_{i,j} A_{ij}(x)\partial_{ij}^2 u.$$ We say $\mathcal{L}$ is \emph{elliptic} if $A(x) = (A_{ij}(x))$ is positive definite for all $x\in U$, and \emph{uniformly elliptic} if there exists $\rho>0$ such that $A(x)-\rho\mathbb{I}$ is still positive definite for all $x\in U$. \end{definition} The following theorem shows that a sub-solution of a second order elliptic differential equation can attain its maximum only on the boundary. We will strengthen this result to functions on a probability simplex (\cref{lem:maxpsd}). \begin{theorem}[maximum principle of elliptic PDE~\cite{evans10}]\label{thm:max_pd} Suppose that $\mathcal{L}$ is uniformly elliptic and $\mathcal{L}u\ge 0$ for a function $u\in \mathcal{C}^2(U)$. Then the function $u$ must attain its maximum on the boundary, $$\max_{cl(U)} u = \max_{bd (U)} u.$$ Moreover, if $u$ has an interior maximum, $u$ is a constant. \end{theorem} \section{Static Setting}\label{sec:static} \subsection{Singleton Information Structure}\label{sec:singleton} As a warm-up, let us consider the case where the principal knows the agent's information structure exactly, so that $\X = \{X\}$ is a singleton. We show the optimal $H$ can be an upside-down pyramid (\cref{fig:v_shape}). \begin{theorem}[name = singleton, label = thm:singleton] If $\X = \{X\}$ is a singleton and the state space is $\Omega = [d]$, there exists an optimal scoring rule associated with an upside-down pyramid $H^\star$ such that the epigraph of $H^\star$ is the convex hull with vertices $(\E X,0)$ and $(\hat{e}_k, 1)$ for all $k\in [d]$. \end{theorem} Since the information gain $\J_H(X)$ is the expectation of $H(x)-H(\E X)$ over $x\sim X$ for any $H$, we can prove \cref{thm:singleton} via a pointwise inequality, $H(x)-H(\E X)\le H^\star(x)-H^\star(\E X)$ for all $x$. \begin{proof}[Proof of \cref{thm:singleton}] Let $H^\star$ be the upside-down pyramid in \cref{thm:singleton}, and let $H\in \mathcal{H}$ be an arbitrary bounded convex function. Since $\X = \{X\}$ is a singleton, it is sufficient to prove $\J_{H}(X)\le \J_{H^\star}(X)$. Let $h_0 := H(\E X)$. If $h_0 = 1$, $H$ is a constant function and $0 = \J_H(X)\le \J_{H^\star}(X)$ by Jensen's inequality. Now we consider $h_0<1$.
Let $\tilde{H}$ be a convex piecewise linear function whose epigraph has vertices at $(\E X, h_0)$ and $(\hat{e}_k, 1)$ for all $k\in [d]$. First, because $H\in \mathcal{H}$, $H(\hat{e}_k)\le \tilde{H}(\hat{e}_k) = 1$ for all $k\in [d]$, and $H(\E X) = \tilde{H}(\E X) = h_0$. Then, the epigraph of $\tilde{H}$ is contained in the epigraph of $H$, so \begin{equation}\label{eq:singleton1} H(x)-H(\E X)\le \tilde{H}(x)-\tilde{H}(\E X) \text{ for all }x\in \Delta_d. \end{equation} Second, because the epigraphs of $H^\star$ and $\tilde{H}$ are both upside-down pyramids with aligned vertices, we can convert $\tilde{H}$ to $H^\star$ through an affine transformation: $H^\star(x) = \frac{\tilde{H}(x)-h_0}{1-h_0}$ for all $x\in \Delta_d$. Therefore, for all $x\in \Delta_d$, \begin{equation}\label{eq:singleton2} \tilde{H}(x)-\tilde{H}(\E X) = (1-h_0) \left(H^\star(x)-H^\star(\E X)\right)\le H^\star(x)-H^\star(\E X), \end{equation} because $0\le h_0<1$ and both sides are non-negative. Combining \cref{eq:singleton1,eq:singleton2}, we have $\J_H(X) = \E_X\left[H(X)-H(\E X)\right]\le \E_X\left[H^\star(X)-H^\star(\E X)\right] = \J_{H^\star}(X)$, which completes the proof. \end{proof} When the state space is binary, $\Omega = \{0,1\}$, by \cref{eq:ps_bin} the upside-down pyramid $H$ is \emph{v-shaped}, and \cref{thm:singleton} yields the following corollary. \begin{corollary}\label{prop:v-shape} If the state space $\Omega = \{0,1\}$ and $\X = \{X\}$, there exists an optimal scoring rule associated with a v-shaped $H$ such that $$H(x) = \begin{cases} \frac{-1}{\E X}(x-\E X) &\text{ if } x<\E X\\ \frac{1}{1-\E X}(x-\E X)&\text{ if } x\ge \E X\end{cases}$$ \end{corollary} These results suggest the principal should choose an $H$ that is ``curved'' at the prior in order to incentivize the agent to acquire the signal and move away from the prior. This intuition will be useful in later sections. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.4\textwidth]{images/v_shape.png} &\includegraphics[width=0.4\textwidth]{images/trian.png}\\ \end{tabular} \caption{The left panel shows several examples of the optimal $H$ in \cref{prop:v-shape} with $\E X = 0.2, 0.5$, and $0.7$, respectively. The right panel shows the optimal $H^\star$ in \cref{thm:singleton} when $d = 3$, $\X = \{X\}$, and $\Pr[X = 1] = 0.2,\Pr[X = 2] = 0.5, \Pr[X = 3] = 0.3$.} \label{fig:v_shape} \end{figure} \subsection{Finite Information Structures}\label{sec:finite} In this section, we give a polynomial-time algorithm that computes an optimal scoring rule when the collection of information structures is finite, as defined below. \begin{definition}\label{def:finte} We call a collection of information structures $\X$ \emph{finite} if $|\X|$ is finite and every $X\in \X$ has finite support, $|\supp(X)|<\infty$. When $\X$ is finite, let $\overline{\supp}(X) := \supp(X)\cup \{\E X\}\subset \Delta(\Omega)$ for all $X\in \X$, and $\overline{\supp}(\X) := \cup_{X\in \X} \overline{\supp}(X)$. \end{definition} The notion of a finite collection of information structures is natural when there is a finite number of heterogeneous agents, each with a finite set of information. \begin{theorem}\label{thm:finite} If the state space is $\Omega = [d]$ and $\X$ is finite with $|\overline{\supp}(\X)| = m$, there exists an algorithm that computes an optimal bounded proper scoring rule in time polynomial in $d$ and $m$.
\end{theorem} The main idea is that when $\X$ is finite, $\Obj_\X(H)$ in \cref{eq:obj} only depends on the evaluations of $H$ on $\overline{\supp}(\X)$. Thus, instead of searching over all possible bounded scoring rules, we can reduce the dimension of \cref{prob:opt} and use a linear program whose variables contain the evaluations of $H$ on $\overline{\supp}(\X)$, with linear constraints ensuring that those evaluations can be extended to a convex function. This observation allows us to solve the problem in weakly polynomial time, i.e., in time polynomial in $d$, $m$, and the representation size of the input numbers. \begin{proof}[Proof of \cref{thm:finite}] The idea is to construct a linear program whose variables contain the evaluations of $H$ on $\overline{\supp}(\X)$. To formulate this, we introduce some notation. Given $|\X| = n$, we write $\X = \{X_i: i = 1, \ldots, n\}$. For each $i\in [n]$, let the support of $X_i$ be $\supp(X_i) = \{x_{i,j}:j\in [m_i]\}$ with size $|\supp(X_i)| = m_i$. Additionally, let the expectation be $x_{i,0} = \E X_i$, and $\Pr(X_i = x_{i,j}) = p_{i,j}$. Hence $\overline{\supp}(X_i) = \{x_{i,j}: j = 0, \ldots, m_i\}$ and $\overline{\supp}(\X) = \{x_{i,j}:i\in [n], j = 0, \ldots, m_i\}$. We further use $\mathcal{A} = \{(i,j):i\in [n], j = 0, \ldots, m_i\}$ to denote the set of indices. Finally, we let the vertices of the probability simplex $\Delta_d$ be $x_{k} = \hat{e}_k$ for all $k\in [d]$, and $\bar{\mathcal{A}} :=\mathcal{A}\cup [d]$. To simplify notation, we assume $\overline{\supp}(\X)$ does not contain any vertex $\hat{e}_k$ of $\Delta_d$ for $k = 1, \ldots, d$, and that for all distinct $\alpha, \alpha'$ in $\mathcal{A}$, $x_{\alpha}\neq x_{\alpha'}$.\footnote{Otherwise, we just need to add some equality constraints. For instance, if $x_\alpha = x_{\alpha'}$, we need to set $h_\alpha = h_{\alpha'}$ as a constraint.\label{fn:equality}} Note that the objective value only depends on a finite number of values. Specifically, given $H(x_{\alpha}) = h_\alpha$ for any $\alpha\in \mathcal{A}$, the objective, \cref{eq:obj}, is \begin{equation}\label{eq:bin2} \Obj_\X(H) = \min_{i\in [n]} \left(\sum_{j = 1}^{m_i}p_{i,j}h_{i,j}-h_{i,0}\right). \end{equation} Thus, we can first choose the $h_\alpha$ to maximize \cref{eq:bin2}, and ``connect'' the points $(x_\alpha, h_\alpha)$ to construct a piecewise linear function. To ensure the resulting function is convex, we further require that there exists a supporting hyperplane at each $(x_\alpha, h_\alpha)$: for each $\alpha$ there exists $g_{\alpha}\in \R^d$ such that $h_{\alpha'}\ge h_{\alpha}+ g_{\alpha}^\top(x_{\alpha'}-x_{\alpha})$ for all $\alpha'\neq \alpha$. In summary, we set the convex function to be \begin{equation}\label{eq:opt_bin} H(x) = \max\left\{\max_{\alpha\in \bar{\mathcal{A}}} \left(h_{\alpha}+g_{\alpha}^\top(x-x_\alpha)\right), \min_\alpha h_\alpha\right\}, \end{equation} where the collection of $h_{\alpha}$ and $g_{\alpha}$ is a solution of the following linear program, \begin{equation}\label{eq:lp_bin} \begin{aligned} & \max && \min_{i\in [n]} \sum_{j = 1}^{m_i}p_{i,j}h_{i,j}-h_{i,0},\\ & \text{subject to} && h_{\alpha}\in [0,1], &\forall \alpha\in \bar{\mathcal{A}},\\ &&& h_{\alpha'}\ge h_{\alpha}+g_{\alpha}^\top(x_{\alpha'}-x_\alpha),&\forall \alpha, \alpha' \in \bar{\mathcal{A}}. \end{aligned} \end{equation} The above linear program has $(1+d)|\bar{\mathcal{A}}| = O(d(m+d))$ variables and $|\bar{\mathcal{A}}|+|\bar{\mathcal{A}}|^2 = O((m+d)^2)$ constraints, so we can solve it in time polynomial in $m$ and $d$.
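As a concrete illustration (not part of the proof), the linear program \eqref{eq:lp_bin} can be set up directly in Python with the \texttt{cvxpy} package, assuming it is available; the single binary-state information structure below is a made-up toy instance.
\begin{verbatim}
import numpy as np
import cvxpy as cp

# Toy instance with d = 2: one information structure X supported on
# {(0.2, 0.8), (0.8, 0.2)} with probability 1/2 each, so E[X] = (0.5, 0.5).
# Rows of xs: index 0 is E[X], indices 1-2 the support, 3-4 the vertices.
xs = np.array([[0.5, 0.5], [0.2, 0.8], [0.8, 0.2], [1.0, 0.0], [0.0, 1.0]])
p = np.array([0.5, 0.5])
n_pts, d = xs.shape

h = cp.Variable(n_pts)        # h_alpha = H(x_alpha)
g = cp.Variable((n_pts, d))   # g_alpha = subgradient at x_alpha
t = cp.Variable()             # epigraph variable for the inner min

cons = [h >= 0, h <= 1, p @ h[1:3] - h[0] >= t]
# supporting-hyperplane constraints ensuring convexity
for a in range(n_pts):
    for b in range(n_pts):
        if a != b:
            cons.append(h[b] >= h[a] + g[a] @ (xs[b] - xs[a]))

cp.Problem(cp.Maximize(t), cons).solve()
print(h.value)   # evaluations of an optimal piecewise linear H
\end{verbatim}
For this instance, the solution should recover the v-shape of \cref{prop:v-shape}: $H = 0$ at the mean, $H = 1$ at the two vertices, and $H = 0.6$ at the two support points.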
Now we need to show $H$ is 1) convex, 2) bounded in $[0,1]$, and 3) optimal. It is easy to see that for all $\alpha\in \bar{\mathcal{A}}$, \begin{equation}\label{eq:bin1} H(x_{\alpha}) = h_{\alpha}, \end{equation} because \begin{align*} H(x_{\alpha}) =& \max\left\{h_{\alpha}, \max_{\alpha'\neq \alpha} h_{\alpha'}+g_{\alpha'}^\top(x_{\alpha}-x_{\alpha'}), \min_{\alpha'} h_{\alpha'}\right\}\tag{$h_{\alpha}+g_{\alpha}^\top(x_{\alpha}-x_{\alpha}) = h_{\alpha}$}\\ =& \max\left\{h_{\alpha}, \max_{\alpha'\neq \alpha} h_{\alpha'}+g_{\alpha'}^\top(x_\alpha-x_{\alpha'})\right\}\tag{$h_{\alpha}\ge \min_{\alpha'} h_{\alpha'}$}\\ =& h_{\alpha}\tag{by the constraints in \cref{eq:lp_bin}} \end{align*} First, because $H$ is the maximum of a collection of linear functions, $H$ is convex. Second, for the lower bound, by the constraints in \cref{eq:lp_bin}, $\min_\alpha h_\alpha\ge0$, so $0\le \min_\alpha h_\alpha\le H(x)$ by \cref{eq:opt_bin}. For the upper bound, because $H$ is convex, for all $x\in \Delta_d$, $H(x)\le \max_k \{H(\hat{e}_k)\} = \max_k \{h_{k}\}\le 1$ by \cref{eq:bin1,eq:lp_bin}. Finally, for any bounded convex function $\tilde{H}\in \mathcal{H}$, we set $\tilde{h}_\alpha = \tilde{H}(x_\alpha)$ for $\alpha\in \bar{\mathcal{A}}$. At each $x_\alpha$ we can find a vector $\tilde{g}_\alpha$ such that $\tilde{H}(x)\ge \tilde{H}(x_\alpha)+\tilde{g}_\alpha^\top(x-x_\alpha)$ for all $x\in \Delta_d$.\footnote{Specifically, we can construct $\tilde{g}_\alpha$ by finding a supporting hyperplane to the epigraph of $\tilde{H}$ at $(x_\alpha, \tilde{h}_{\alpha})$; the vector $\tilde{g}_\alpha$ is called a subgradient.} Since $\tilde{H}$ is convex and in $\mathcal{H}$, the collection of $\tilde{h}_\alpha$ and $\tilde{g}_\alpha$ is a feasible solution to \cref{eq:lp_bin}, so $\Obj_\X(\tilde{H})\le \Obj_\X(H)$. \end{proof} Note that if $\X$ contains a constant information structure $X$, the resulting linear program~\eqref{eq:lp_bin} will output an arbitrary piecewise linear convex function, because the objective value is always zero (\cref{fn:equality}). \subsection{Unique Optimality of a Convex Function}\label{sec:unique} In this section, we ask: given any convex $H$, is there a collection of information structures $\mathcal{X}$ that uniquely settles $H$ (\cref{def:settle})? First, \cref{prop:any_nece} gives a simple necessary condition for $H$ to be settled by some $\mathcal{X}$. For convex piecewise linear functions, \cref{thm:lin_suff} shows this necessary condition is also sufficient, and we can construct a finite collection of information structures (\cref{def:finte}) that uniquely settles such functions. Similarly, \cref{prop:open_settle} shows any convex $H$ satisfying the necessary condition can be uniquely settled on any closed set $U$ that does not contain a minimum or maximum point of $H$. However, \cref{ex:impossible} shows no collection of information structures uniquely settles a quadratic function, which contrasts with the unique optimality of the quadratic scoring rule in the asymptotic setting of \cref{sec:quadr}. \begin{proposition}[A necessary condition]\label{prop:any_nece} For any bounded convex function $H$ on $\Delta_d$, there exists a collection of information structures $\mathcal{X}_H$ with $\Opt(\X_H)>0$ that settles $H$ only if \begin{equation}\label{eq:any_nece} \min_x H(x) = 0\text{, and } H(\hat{e}_j) = 1, \forall j\in [d]. \end{equation} \end{proposition} The formal proof is in \cref{app:unique}.
Intuitively, if one of the conditions does not hold, we can construct another convex function that achieves at least the same information gain on every information structure. \paragraph{Convex piecewise linear functions} Now, we consider sufficient conditions for piecewise linear functions $H$. \Cref{thm:lin_suff} shows the necessary condition~\eqref{eq:any_nece} is also sufficient for any convex piecewise linear function. \begin{theorem}\label{thm:lin_suff} If $H$ is a convex piecewise linear function satisfying \cref{eq:any_nece}, there exists a finite collection of information structures $\mathcal{X}_H$ that uniquely settles $H$. \end{theorem} Our proof is constructive: given a point $\theta\in \Delta_d$, we can create information structures that upper and lower bound the evaluation of an optimal convex function at $\theta$. The proofs and details are in \cref{app:unique}. \paragraph{General convex functions} We can also use the same idea on a general convex function. \Cref{prop:open_settle} shows that we can construct a collection of information structures such that the optimal $H^\star$ agrees with $H$ everywhere except at points $\theta\in \Delta_d$ where $H(\theta)$ is extremely large or small. \begin{proposition}\label{prop:open_settle} If $H$ is a convex function satisfying \cref{eq:any_nece}, for any $\delta>0$ there exists a collection of information structures $\mathcal{X}_{H,\delta}$ which uniquely settles $H$ on $U_{H,\delta}:=\{\theta\in \Delta_d:\delta<H(\theta)<1-\delta\}$. Moreover, for every closed set $U\subset \Delta_d$ that does not contain a minimum or maximum point of $H$, there exists $\mathcal{X}_{H,U}$ that uniquely settles $H$ on $U$. \end{proposition} However, \cref{ex:impossible} shows this limitation is inevitable: no collection of information structures uniquely settles the quadratic scoring rule, even though it satisfies \cref{eq:any_nece}. Finally, it is not hard to extend this example to show that no collection of information structures can uniquely settle any strictly convex function. \begin{proposition}\label{ex:impossible} For any collection of information structures $\mathcal{X}$ that settles the quadratic scoring rule on a binary state space, $Q_2(x) = 4(x-\frac{1}{2})^2$, there exists another optimal $H$ for $\X$ with $H\neq Q_2$. \end{proposition} \begin{proof}[Proof of \cref{ex:impossible}] Let $\delta:=\Opt(\mathcal{X})$. If $\delta = 0$, any convex function is optimal and we are done. If $\delta>0$, we define $x_\delta:=\frac{1-\sqrt{1-\delta}}{2}$ and $$H(x) =\begin{cases}Q_2(x)& \text{if }x\ge x_\delta \\1-\frac{2\delta}{1-\sqrt{1-\delta}}x &\text{otherwise.}\end{cases}$$ It is easy to check that $H$ is convex, $H(x)> Q_2(x)$ for all $0<x<x_\delta$, and $H(x) = Q_2(x)$ otherwise. Now we show a stronger result: $\J_H(X)\ge \J_{Q_2}(X)$ for all $X\in \X$. Suppose there exists $X\in \mathcal{X}$ with $\J_H(X)< \J_{Q_2}(X)$. If $\E X\ge x_\delta$, then $\J_H(X) = \E H(X)-H(\E X) = \E H(X)-Q_2(\E X)\ge \E Q_2(X)-Q_2(\E X) = \J_{Q_2}(X)$, which contradicts $\J_H(X)< \J_{Q_2}(X)$. If $\E X< x_\delta$, then $\J_{Q_2}(X) = \E Q_2(X)-Q_2(\E X) \le 1-Q_2(\E X) < \delta$, which contradicts $X\in \mathcal{X}$ because $\Opt(\X) = \delta$. \end{proof} \section{Asymptotic Setting}\label{sec:seq} Now we study optimal scoring rules for eliciting estimates from Bayesian agents when the number of samples $n$ is large.
Because the mean squared error (variance) of any consistent estimator vanishes as the number of samples increases, $\Opt(\lim_{n\to \infty}\X_n) = 0$, and we use the relative information gain in \cref{prob:limit} to evaluate a proper scoring rule. \Cref{sec:quadr} considers a sequence of collections of information structures with vanishing covariance, and shows the quadratic scoring rule is uniquely optimal. In \cref{sec:exp}, the data are generated from exponential families with conjugate priors (Beta-Bernoulli and Dirichlet-categorical processes), and the log scoring rule is uniquely optimal. Though both \cref{sec:quadr,sec:exp} consider eliciting the Bayes estimator with many samples, \cref{sec:quadr} models general priors and observations, which can be continuous, while the setting in \cref{sec:exp} is more restricted, e.g., the observation is in $[d]$. \subsection{Vanishing Covariance Information Structures}\label{sec:quadr} Suppose the agent can access $n$ i.i.d. samples to estimate the distribution of the state of the world $W$. What is a natural sequence of collections of information structures to model this process? By the Bayesian Cram\'er-Rao lower bound~\cite{bj/1186078362}, the variance (covariance) of the posterior distribution $P(W|S^{(1)}, \dots S^{(n)})$ is of order $\Theta(1/n)$ for large $n$. Thus, the variance of the posterior with the $n$-th signal given the first $n-1$ samples is of order $\Theta(1/n^2)$.\footnote{Formally, the average variance of the posterior with the $n$-th signal is $\E_{S^{(\le n)}}[(P(W|S^{(\le n)})-P(W|S^{(\le n-1)}))^2]$, which equals $\Var[P(W|S^{(\le n)})]-\Var[P(W|S^{(\le n-1)})] = \Theta(1/n^2)$, because $\E_{S^{(\le n-1)}}[\E_{S^{(n)}}[P(W|S^{(\le n)})P(W|S^{(\le n-1)})]] = \E_{S^{(\le n-1)}}[P(W|S^{(\le n-1)})^2]$.} This motivates us to consider a sequence of collections of information structures where the norm of the covariance matrix is bounded below: $$\mathcal{X}_n := \{X: \|\Cov(X)\|_2\ge 1/{n^2}\},$$ where $\X_n$ emulates the set of posteriors with the $k$-th sample for $k\le n$. Additionally, as $n$ goes to infinity, $\lim_{n\to\infty}\X_n$ contains all non-constant information structures. Finally, the sequence is homogeneous and isotropic: whether an information structure $X$ is in $\X_n$ is independent of position and direction, so translations and rotations do not affect membership. Our main results in this section show that the quadratic scoring rule \begin{equation}\label{eq:quadr} Q(x):= \frac{d}{d-1}\|x-c\|^2 \text{ where } c := (1/d, \ldots, 1/d) \end{equation} is optimal for the sequence $\X_n$. The first result (\cref{thm:quadr}) shows that the information gain~\eqref{eq:obj} is of order $1/n^2$ for any convex function, and the constant is maximized if and only if $H$ is the quadratic function $Q$. Then, we translate the result into relative information gain, and show in \cref{cor:quadr} that the quadratic scoring rule maximizes the relative information gain against all smooth functions on $\relint(\Delta_d)$ for $(\X_n)_{n\in \mathbb{Z}_{>0}}$. \begin{theorem}\label{thm:quadr} Let $\Omega=[d]$ and $H\in \mathcal{H}\cap \mathcal{C}^{\infty}(\relint (\Delta_d))$. Then $$\lim_{n\to \infty}n^{2}\Obj_{\X_n}(H) \le \frac{d}{d-1},$$ and the equality holds if and only if $H(x) = Q(x)$ for all $x\in \Delta_d$.
\end{theorem} \begin{corollary}\label{cor:quadr} For any finite set $\Omega$ with $|\Omega| = d$ and any smooth $H^\star \in \mathcal{H}$, $H^\star$ maximizes the relative information gain $$R_{\rm uni}(H):=\inf_{\tilde{H}\in \mathcal{H}\cap \mathcal{C}^{\infty}(\relint (\Delta_d))}\lim_{n \to \infty} \frac{\Obj_{\X_n}(H)}{\Obj_{\X_n}(\tilde{H})}$$ if and only if $H^\star(x) = Q(x)$ for all $x\in \Delta$. \end{corollary} To prove \cref{thm:quadr}, the main idea is that the information gain $\J_H(X)$ only depends on the ``curvature'' of $H$ at $\E X$, that is, the strong convexity of $H$ (\cref{lem:strongly}). Then, it is sufficient to show that the quadratic function $Q$ has the largest possible curvature in $\mathcal{H}$ (\cref{lem:quadr}). The following lemma shows it is sufficient to consider random variables $X$ with $\|\Cov(X)\|_2 = 1/n^2$. \begin{lemma}[name=scaling,label=lem:scaling,restate=scaling] Let $H\in \mathcal{H}$ and $n\in \mathbb{Z}_{>0}$. For all $X\in \X_{n}$, there exists $\tilde{X}\in \X_{n}$ so that $\|\Cov(\tilde{X})\|_2 = 1/n^2$ and $\J_H(X)\ge \J_H(\tilde{X})$. \end{lemma} Then we connect the information gain of $\X_n$ to the strong convexity of a function. The proofs of \cref{lem:scaling,lem:strongly} are in \cref{app:quadr}. \begin{lemma}[name=strongly convex,restate=strongly,label=lem:strongly] If $H\in \mathcal{H}\cap \mathcal{C}^{\infty}(\relint (\Delta_d))$, then $H$ is $\alpha$-strongly convex in $\relint(\Delta)$ if and only if $\inf_{X\in \X_n:\|\Cov(X)\|_2 = 1/n^2}\J_H(X)\ge \frac{\alpha}{2n^2}+o\left(\frac{1}{n^2}\right)$. \end{lemma} Finally, we show the most ``strongly convex'' function in $\mathcal{H}$ is the quadratic function $Q$. Intuitively, if a function $H$ were more strongly convex than $Q$, the difference $H-Q$ would be convex, and $H$ would exceed $Q$ at one of the vertices $\hat{e}_k$, a contradiction because $Q(\hat{e}_k) = 1$ for all $k\in [d]$. \begin{lemma}[restate=quadr,label=lem:quadr] Let $H\in \mathcal{H}\cap \mathcal{C}^{\infty}(\relint (\Delta_d))$. Then $H$ is $\frac{2d}{d-1}$-strongly convex on $\Delta$ if and only if $H(x) = Q(x)$ for all $x\in \Delta$. \end{lemma} \begin{proof}[Proof of \cref{lem:quadr}] If $H\in \mathcal{H}$ is $\frac{2d}{d-1}$-strongly convex, we define the symmetrization of $H$ as $$H_{sym}:(x_1, \ldots, x_d)\mapsto \frac{1}{d!}\sum_{\sigma\in S_d} H(x_{\sigma(1)}, \ldots, x_{\sigma(d)})$$ where $S_d$ is the set of all permutations of $[d]$. We first prove \begin{equation}\label{eq:quadr0} H(c) = 0\text{ and } \nabla H(c) = 0. \end{equation} Since the symmetrization $H_{sym}$ is also $\frac{2d}{d-1}$-strongly convex, smooth, and in $\mathcal{H}$, the following function is convex on $\Delta$: $K(x):=H_{sym}(x)- H_{sym}(c)-Q(x)$. Because $H_{sym}$ is convex and symmetric with respect to $c$, $H_{sym}(c) = \min_{x\in \Delta} H_{sym}(x)$ and $\nabla H_{sym}(c) = 0$, which proves the second part of \cref{eq:quadr0}. For the first part of \cref{eq:quadr0}, $\nabla K(c) = \nabla H_{sym}(c)-\nabla Q(c) = 0$, and $K(x)\ge K(c) = 0$ for all $x\in \Delta$ by \cref{eq:stronglyconvex}. As a result, for any vertex $\hat{e}_j$, $$1\ge H_{sym}(\hat{e}_j)\ge H_{sym}(c)+Q(\hat{e}_j)\ge 1,$$ so $H_{sym}(c) = 0$, and at the vertices $K(\hat{e}_j) = 0$ for all $j = 1, \ldots, d$. Because $K\ge 0$ is convex and its evaluations at the vertices are zero, $K(x) = 0$ for all $x\in \Delta$, so \begin{equation}\label{eq:quadr2} H_{sym}(x) = Q(x), \text{ for all }x\in \Delta.
\end{equation} Finally, because $H$ is smooth and $\frac{2d}{d-1}$-strongly convex, by \cref{lem:hessian}, for all $x\in \relint(\Delta)$ and $v\in \R^d$ with $\mathbf{1}^\top v = 0$, $v^\top \nabla^2 H(x) v\ge \frac{2d}{d-1} \|v\|^2$. Thus $$v^\top \nabla^2 H_{sym}(x)v = \frac{1}{d!} \sum_\sigma v^\top \nabla^2 H(x_{\sigma(1)}, \ldots, x_{\sigma(d)})v \ge \frac{1}{d!}\sum_\sigma \frac{2d}{d-1} \|v\|^2 = \frac{2d}{d-1} \|v\|^2.$$ However, by \cref{eq:quadr2}, $v^\top \nabla^2 H_{sym}(x)v\le \frac{2d}{d-1} \|v\|^2$, so $v^\top \nabla^2 H(x) v = \frac{2d}{d-1} \|v\|^2$ for all $x\in \relint(\Delta)$. Thus, we obtain a second order differential equation with the initial conditions in \cref{eq:quadr0}, and we get $H(x) = Q(x)$ for all $x\in \Delta$. \end{proof} \begin{proof}[Proof of \cref{thm:quadr}] First, $Q$ is $\frac{2d}{d-1}$-strongly convex and $Q\in \mathcal{H}$. By \cref{lem:scaling,lem:strongly}, $\Obj_{\X_n}(Q) = \frac{d}{d-1}\frac{1}{n^2}+o(\frac{1}{n^2})$, so $\lim_{n\to \infty}n^2\Obj_{\X_n}(Q) = \frac{d}{d-1}$. Conversely, suppose $H\in \mathcal{H}$ is smooth and $\lim_{n\to \infty}n^2\Obj_{\X_n}(H) = \frac{d}{d-1}$. Then $\Obj_{\X_n}(H) = \frac{d}{d-1}\frac{1}{n^2}+o(\frac{1}{n^2})$. By \cref{lem:strongly}, $H$ is $\frac{2d}{d-1}$-strongly convex in $\relint(\Delta)$. By \cref{lem:quadr}, $H = Q$ in $\Delta$. This completes the proof. \end{proof} \subsection{Exponential Family Information Structures}\label{sec:exp} Now we consider two common families of information structures in Bayesian inference: Beta-Bernoulli and Dirichlet-categorical information structures. Specifically, the principal wants to predict the outcome of a $d$-sided die. First, the principal announces her proper scoring rule to encourage the agent to collect signals by rolling the die once (first setting) or a fixed number of times (second setting). The agent then reports his prediction. Finally, the die is rolled and the agent is paid according to the outcome and the proper scoring rule. In \cref{sec:beta}, we consider $d = 2$. As a warm-up, we begin with the static setting where the agent can only collect a fixed number of samples, and apply \cref{thm:finite} to compute the optimal scoring rules. Then we study the asymptotic setting and show the log scoring rule is optimal. We further extend this result to a general $d$-sided die in \cref{sec:dir}. \subsubsection{Beta-Bernoulli Information Structure}\label{sec:beta} Suppose we want to collect predictions on the outcome of a coin. Given $p\in [0,1]$, the outcome $W$ follows the Bernoulli distribution $Bern(p)$ with $\Pr[W = 1] = p$, but the true value of $p$ is unknown. We consider the following two cases. \begin{enumerate} \item Knowing that the agent privately observes $N$ i.i.d. samples from the coin, we want to incentivize the agent to make one additional observation. \item We want to incentivize the agent to collect $N+1$ samples adaptively. That is, for any sequence of $n\le N$ samples from the coin, the information gain of the $(n+1)$-th sample is large enough to incentivize the agent to observe it. \end{enumerate} In both cases, the agent starts with an uninformative prior on the coin's parameter $p$, namely the uniform distribution on $[0,1]$. To formalize this, let $\theta\in \Theta = [0,1]$, $\Omega =\mathcal{S} = \{0,1\}$, and $m\in \N_{>0}$. Let $Beta(\theta, m)$ be the Beta distribution whose probability density at $p\in [0,1]$ is proportional to $p^{\theta m-1}(1-p)^{(1-\theta)m-1}$.
Here $\theta$ is the mean and $m$ is the effective sample size. We define the \emph{Beta-Bernoulli information structure} with $(\theta, m)$ as follows: $p$ is sampled from $Beta(\theta, m)$, and $W$ and $S$ are sampled independently and identically from $Bern(p)$. \paragraph{Static Setting} In the first case, if the agent has already observed $n_1$ heads and $n-n_1$ tails, his prior on $W = 1$ is $\theta = \frac{n_1+1}{n+2}$, and the posterior predictive distribution $X_\theta^{(n)} := P(W = 1\mid S)$ is a Beta-Bernoulli information structure with $(\theta, n+2)$, \begin{equation}\label{eq:beta_bern} X_\theta^{(n)} =\begin{cases} \frac{(n+2)\theta+1}{n+3} &\text{with probability }\theta\\ \frac{(n+2)\theta}{n+3} &\text{with probability } 1-\theta \end{cases}. \end{equation} Note that because the agent starts with the uniform prior, his effective sample size after $n$ observations is $m = n+2$. By \cref{eq:ps_bin}, the information gain of the $(n+1)$-th observation under a convex function $H:[0,1]\to \mathbb{R}$ is $\J_H(X^{(n)}_\theta) = \E[H(X^{(n)}_\theta)]-H(\theta)$. Thus, to elicit the $(N+1)$-th observation, the objective value in \cref{eq:obj} is \begin{equation}\label{eq:beta_finite} \Obj_{\left\{X^{(N)}_\theta:\theta = \frac{1}{N+2}, \dots, \frac{N+1}{N+2}\right\}}(H) = \inf_{\theta = \frac{1}{N+2}, \dots, \frac{N+1}{N+2}}\J_H(X^{(N)}_\theta). \end{equation} When $N\in \mathbb{Z}_{\ge 0}$ is finite, the collection of information structures $\left\{X^{(N)}_\theta:\theta = \frac{1}{N+2}, \dots, \frac{N+1}{N+2}\right\}$ is finite. Hence, an optimal $H$ for \cref{eq:beta_finite} can be computed with the algorithm of \cref{sec:finite}. For the second case, to incentivize the agent to collect $N+1$ costly observations adaptively, we need to ensure that for any sequence of $n\le N$ samples, the agent's expected information gain of the $(n+1)$-th observation is large. Thus, the collection of information structures is $\left\{X^{(n)}_\theta:n\le N,\theta = \frac{1}{n+2}, \ldots, \frac{n+1}{n+2}\right\}$, which is also finite. \Cref{fig:finite_beta} shows examples of the optimal scoring rules in the above two cases with a finite effective sample size $N$. \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{images/beta5.png} &\includegraphics[width=0.3\textwidth]{images/beta10.png}\\ $N = 5$ & $N = 10$\\ \includegraphics[width=0.3\textwidth]{images/farey5.png} &\includegraphics[width=0.3\textwidth]{images/farey10.png}\\ $N = 5$ & $N = 10$\\ \end{tabular} \end{center} \caption{We use \cref{thm:finite} to solve for the optimal $H$ for Beta-Bernoulli information structures with various $N$; the top row shows the first case and the bottom row the second case.}\label{fig:example} \label{fig:finite_beta} \end{figure} However, as $N$ increases, the size of the corresponding linear program grows quickly. For instance, if we want to incentivize the agent to collect $N+1$ samples adaptively, the number of variables is $\Theta(N^2)$, which is the length of the Farey sequence of order $N+1$.\footnote{The Farey sequence of order $m$ is the sequence of completely reduced fractions between 0 and 1 whose denominators are less than or equal to $m$.} In the next section, we show that the optimal scoring rule converges to the \emph{log scoring rule}~\eqref{eq:logscoring2} as $N\to \infty$.
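As a quick numeric preview of this convergence (a minimal sketch; the choice $\theta = 0.3$ is arbitrary), one can evaluate the information gain of the log scoring rule $H_{\ln}$ of \cref{eq:logscoring2} on the posteriors \eqref{eq:beta_bern} and watch the scaled gain approach the asymptotic constant $\frac{1}{2\ln 2}$ of \cref{thm:beta} below.
\begin{verbatim}
import numpy as np

def H_ln(x):
    # binary log scoring rule scaled to [0, 1]
    return (x * np.log(x) + (1 - x) * np.log(1 - x)) / np.log(2) + 1

def info_gain(theta, N):
    # posterior with the (N+1)-th signal, as in eq. (beta_bern)
    hi = ((N + 2) * theta + 1) / (N + 3)   # with probability theta
    lo = ((N + 2) * theta) / (N + 3)       # with probability 1 - theta
    return theta * H_ln(hi) + (1 - theta) * H_ln(lo) - H_ln(theta)

for N in [10, 100, 1000]:
    print(N, (N + 3) ** 2 * info_gain(0.3, N))  # -> 1/(2 ln 2) = 0.7213...
\end{verbatim}
The same check with other interior values of $\theta$ gives the same limit, reflecting that $x(1-x)H_{\ln}''(x)$ is constant on $(0,1)$ (see \cref{lem:betacurv}).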
\paragraph{Asymptotic Setting} Instead of the static setting with a fixed number of samples, we now consider an agent who can collect samples indefinitely, and define a sequence of collections of Beta-Bernoulli information structures as follows, \begin{equation}\label{eq:x_beta} \bar{\X}_{Beta}^{\le N} := \left\{X^{(n)}_\theta:n\le N, \delta_N<\theta<1-\delta_N\right\}, \end{equation} which is the collection of all Beta-Bernoulli information structures with effective sample size at most $N$ and mean $\delta_N<\theta<1-\delta_N$, where $\delta_N \in o(1)$ and $\delta_N \in \omega(1/N)$. Note that $\bar{\X}_{Beta}^{\le N}$ is an increasing sequence and $\lim_{N\to \infty}\bar{\X}_{Beta}^{\le N} = \X_{Beta} := \{X_{\theta}^{(N)}:0<\theta<1, N\in \mathbb{N}\}$ contains all Beta-Bernoulli information structures. Our main result in this section shows that the log scoring rule \begin{equation}\label{eq:logscoring2} H_{\ln}(x):= \frac{x\ln(x)}{\ln 2}+\frac{(1-x)\ln(1-x)}{\ln 2}+1 \end{equation} is optimal for the sequence $\bar{\X}^{\le n}_{Beta}$. \begin{theorem}\label{thm:beta} Let $H\in \mathcal{H}\cap \mathcal{C}^\infty((0,1))$ be smooth on $(0,1)$. Then $$\lim_{N\to \infty} (N+3)^2 \Obj_{\bar{\X}_{Beta}^{\le N}}(H) \le \frac{1}{2\ln 2}.$$ Moreover, the equality holds if and only if $H(x) = H_{\ln}(x)$ for all $x$ in $[0,1]$. \end{theorem} We can also rewrite this result in terms of relative information gain, and show the log scoring rule maximizes the following relative information gain on the sequence in \cref{eq:x_beta} against any function in $\mathcal{H}\cap \mathcal{C}^{\infty}((0,1))$. \begin{corollary}\label{cor:beta} For all $H^\star \in \mathcal{H}\cap \mathcal{C}^{\infty}((0,1))$, $H^\star$ maximizes the relative information gain $$R_{\rm Beta}(H):=\inf_{\tilde{H}\in \mathcal{H}\cap \mathcal{C}^{\infty}((0,1))}\lim_{n \to \infty} \frac{\Obj_{\bar{\X}_{Beta}^{\le n}}(H)}{\Obj_{\bar{\X}_{Beta}^{\le n}}(\tilde{H})}$$ if and only if $H^\star(x) = H_{\ln}(x)$ for all $x\in (0,1)$. \end{corollary} In contrast to the vanishing covariance information structures in \cref{sec:quadr}, the variance of a Beta-Bernoulli information structure depends on its mean. For example, the posterior $X_{0.5}^{(N)}$ with mean $0.5$ changes rapidly, but $X_{0.98}^{(N)}$ with mean $0.98$ hardly changes. The movement becomes small as the prior mean approaches the vertices $0$ or $1$, so a good scoring rule should be more curved there. The proof structure is similar to \cref{thm:quadr}. We defer the proofs of the following two lemmas to \cref{app:exp}. \begin{lemma}[name=scaling,restate=betascaling,label=lem:betascaling] Let $H\in \mathcal{H}$. For all $\theta\in [0,1]$ and $n\le N$, we have $\J_H(X_\theta^{(n)})\ge \J_H(X_\theta^{(N)})$. \end{lemma} The next lemma relates the objective value to the second order differential operator $\Dbeta$ defined by $$\Dbeta h(x) := x(1-x)h''(x)$$ for any smooth function $h:(0,1)\to \mathbb{R}$, where $h''$ is the second derivative of $h$. Note that for the log scoring rule, $\Dbeta H_{\ln}(x) = \frac{1}{\ln 2}$ for all $x\in (0,1)$. \begin{lemma}[restate=betacurv,label=lem:betacurv] For any smooth convex $H\in \mathcal{H}\cap \mathcal{C}^{\infty}((0,1))$ and $X\in \bar{\X}_{Beta}^{N}$, $$\J_H(X) = \frac{1}{2(N+3)^2}\Dbeta H(\E X)+O\left(\frac{1}{(N+3)^3}\right)$$ when $N$ is large enough. Moreover, for the log scoring rule, $\Obj_{\bar{\X}^{N}_{Beta}}(H_{\ln}) = \frac{1}{2(\ln 2)(N+3)^2}+o\left(\frac{1}{(N+3)^2}\right)$.
\end{lemma} Informally, the first part of \cref{lem:betacurv} is a ``pointwise'' characterization: for each $\theta$, the information gain of $X^{(N)}_\theta$ converges pointwise to $\frac{\Dbeta H(\theta)}{2(N+3)^2}$. The moreover part is an ``almost uniform'' characterization: the information gain converges to the limit almost uniformly on $(0,1)$, because $(\delta_N, 1-\delta_N)\to (0,1)$ as $N\to \infty$. Finally, we show the log scoring rule $H_{\ln}$ has the largest $\inf_x\Dbeta H(x)$ among all smooth $H\in \mathcal{H}$. \begin{lemma}[name=optimal,restate=betaopt,label=lem:beta] If $H\in \mathcal{H}\cap \mathcal{C}^{\infty}((0,1))$, $$\inf_{x\in(0,1)}\Dbeta H(x)\le \frac{1}{\ln 2}.$$ Moreover, if $\inf_{x\in(0,1)}\Dbeta H(x) = \frac{1}{\ln 2}$, then $H$ equals the log scoring rule $H_{\ln}$ on $(0,1)$. \end{lemma} \begin{proof}[Proof of \cref{lem:beta}] Because $H_{\ln}''(x) = \frac{1}{(\ln 2)x(1-x)}$ for all $x\in (0,1)$, we have $\Dbeta H_{\ln}(x) = 1/\ln 2$. Additionally, $H_{\ln}(0) = H_{\ln}(1) = 1$. Now we show that for any smooth $H\in \mathcal{H}$, the infimum cannot exceed $1/\ln 2$, and if it equals $1/\ln 2$ then $H = H_{\ln}$. Suppose \begin{equation}\label{eq:betaopt1} \inf \Dbeta H\ge 1/\ln 2. \end{equation} First, by an argument similar to the proof of \cref{lem:quadr}, without loss of generality we may assume $H$ is symmetric about $0.5$ and $H(0.5) = 0$. Let $K:=H-H_{\ln}$, which is a smooth function on $(0,1)$ with $K(0.5) = H(0.5)-H_{\ln}(0.5) = 0$. By \cref{eq:betaopt1}, $\Dbeta K(x) = x(1-x)K''(x)\ge 0$ for all $x\in (0,1)$, which yields $K''(x)\ge 0$. Because $K$ is convex, its maximum is attained at the boundary point $0$ or $1$ and is at least $K(0.5) = 0$, so $$1+K(0.5)\le 1+\max \{K(0), K(1)\} = 1+\max \{H(0)-H_{\ln}(0), H(1)-H_{\ln}(1)\} = \max \{H(0), H(1)\}.$$ Additionally, since $H\in \mathcal{H}$, $H(0), H(1)\le 1$. Combining these two, we have $K(0) = K(0.5) = K(1) = 0$, so $K(x) = 0$ for all $x\in [0,1]$. Therefore, $H(x) = H_{\ln}(x)$ for all $x$. \end{proof} Now, let us put everything together. \begin{proof}[Proof of \cref{thm:beta}] First, for any smooth $H\in \mathcal{H}$ not equal to $H_{\ln}$, $\lim_{n\to \infty}2 (n+3)^2\J_H(X^{(n)}_\theta) = \Dbeta H(\theta)$ for all $\theta\in (0,1)$ by \cref{lem:betacurv}. By \cref{lem:beta,lem:betascaling}, we have $$\lim_{N\to \infty}(N+3)^2\Obj_{\bar{\X}_{Beta}^{\le N}}(H)\le \inf_\theta \frac{1}{2} \Dbeta H(\theta) <\frac{1}{2\ln 2}.$$ The converse holds by \cref{lem:betacurv,lem:betascaling}. \end{proof} Note that for any $N$, the set $\bar{\X}^{\le N}_{Beta}$ does not contain all Beta-Bernoulli information structures with effective sample size at most $N$; it additionally requires the mean $\theta\in(\delta_N, 1-\delta_N)$ to be bounded away from the endpoints $0$ and $1$. This restriction is due to the log scoring rule's singular behavior near the endpoints. In particular, in \cref{lem:betacurv} the scaled information gain $2(n+3)^2\J_{H_{\ln}}(X_\theta^{(n)})$ only converges pointwise to $\Dbeta H_{\ln}(\theta)$, and the constant in the error term increases as $\theta$ approaches the endpoints. In other words, \cref{thm:beta} ensures the log scoring rule is optimal in an almost uniform manner, while \cref{thm:quadr} shows the quadratic scoring rule is optimal in a uniform manner. In \cref{fig:diff}, we numerically show that the optimal scoring rules in the first case with finite $N$ converge to the log scoring rule.
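For intuition, the leading-order term in \cref{lem:betacurv} can be recovered by a heuristic second-order Taylor expansion (this sketch is not the formal proof in \cref{app:exp}). The posterior $X^{(N)}_\theta$ in \cref{eq:beta_bern} has mean $\theta$, and its two values differ by $\frac{1}{N+3}$, so $\Var(X^{(N)}_\theta) = \frac{\theta(1-\theta)}{(N+3)^2}$. Expanding $H$ around $\theta$, $$\J_H(X^{(N)}_\theta) = \E\big[H(X^{(N)}_\theta)\big]-H(\theta) \approx \frac{1}{2}H''(\theta)\Var(X^{(N)}_\theta) = \frac{\theta(1-\theta)H''(\theta)}{2(N+3)^2} = \frac{\Dbeta H(\theta)}{2(N+3)^2}.$$ For $H_{\ln}$, $\Dbeta H_{\ln}\equiv \frac{1}{\ln 2}$, which matches the moreover part of \cref{lem:betacurv}.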
\begin{figure}[ht] \centering \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{images/beta10log.png} &\includegraphics[width=0.3\textwidth]{images/beta20log.png}&\includegraphics[width=0.3\textwidth]{images/diff.png}\\ $N = 10$ & $N = 20$& difference \end{tabular} \caption{We plot the log scoring rule $H_{\ln}$ and the optimal scoring rules under the static setting (the first case) with finite $N$. The rightmost panel shows the difference between those optimal scoring rules and the log scoring rule.}\label{fig:diff} \end{figure} \subsubsection{Dirichlet-Categorical Information Structure}\label{sec:dir} We will show the log scoring rule is still the optimal scoring rule for collections of Dirichlet-categorical information structures. Given $|\Omega| = d$, let $\theta\in \Theta = \Delta(\Omega)$ and $m\in \mathbb{N}_{>0}$. Let $Dir(\theta, m)$ be the Dirichlet distribution whose probability density at $p\in \Delta(\Omega)$ is proportional to $\Pi_{k = 1}^d p_k^{\theta_k m-1}$. Here $\theta$ is the mean and $m$ is the effective sample size. We similarly define the \emph{Dirichlet-categorical information structure with $(\theta, m)$} as follows: $p$ is sampled from $Dir(\theta, m)$, and the state of the world $W$ and the signal $S$ are sampled independently from $Cat(p)$, where the outcome is $k$ with probability $p_k$. We now generalize the Beta-Bernoulli information structures~\eqref{eq:beta_bern} and the log scoring rule~\eqref{eq:logscoring2} from $d = 2$ to the general $d$-dimensional case. When the agent's prior is $Dir(\theta, n+d)$, the information structure of the $(n+1)$-th sample is $X^{(n)}_\theta$, where for all $k\in [d]$ \begin{equation}\label{eq:dir_cat} \Pr\left[X^{(n)}_\theta = \frac{(n+d)}{n+d+1}\theta+\frac{1}{n+d+1}\hat{e}_k\right] = \theta_k. \end{equation} Furthermore, we consider the following sequence of collections of information structures \begin{equation}\label{eq:x_dir} \bar{\X}_{dir}^{\le N} := \left\{X^{(n)}_\theta:n\le N, \delta_N<\theta_k, \forall k\in [d]\right\} \end{equation} where $\delta_N\in \omega(1/N)$ and $\delta_N\in o(1)$. Our main results (\cref{thm:dir,cor:dir}) show that the log scoring rule \begin{equation}\label{eq:logscoring} H_{\ln} (x) = \frac{1}{\ln d}\sum_{k = 1}^d x_k\ln x_k+1 \end{equation} is optimal for the sequence $\bar{\X}^{\le n}_{dir}$. However, we only show $H_{\ln}$ is optimal against functions that are smooth on the closed set $\Delta_d$, instead of the relative interior $\relint(\Delta_d)$. Notice that $H_{\ln}$ is smooth on $\relint(\Delta_d)$ but not on $\Delta_d$, while several standard proper scoring rules are smooth on $\Delta_d$, e.g., the quadratic and spherical scoring rules. Therefore, our results still provide a relevant comparison among common scoring rules. \begin{theorem}\label{thm:dir} Given $\Omega = [d]$, let $H\in \mathcal{H}\cap \mathcal{C}^\infty(\Delta_d)$ be smooth on $\Delta_d$. Then $$\lim_{N\to \infty} (N+d+1)^2 \Obj_{\bar{\X}_{dir}^{\le N}}(H) < \frac{d-1}{2\ln d} = \lim_{N\to \infty} (N+d+1)^2 \Obj_{\bar{\X}_{dir}^{\le N}}(H_{\ln}).$$ \end{theorem} We can also rewrite this result in terms of relative information gain as \cref{cor:dir}.
\begin{corollary}\label{cor:dir} For all $H^\star \in \mathcal{H}\cap \left(\mathcal{C}^\infty(\Delta_d)\cup\{H_{\ln}\}\right)$, $H^\star$ maximizes the relative information gain $$R_{\rm dir}(H):=\inf_{\tilde{H}\in \mathcal{H}\cap \left(\mathcal{C}^\infty(\Delta_d)\cup\{H_{\ln}\}\right)}\lim_{n \to \infty} \frac{\Obj_{\bar{\X}_{dir}^{\le n}}(H)}{\Obj_{\bar{\X}_{dir}^{\le n}}(\tilde{H})}$$ if and only if $H^\star(x) = H_{\ln}(x)$ for all $x\in \relint(\Delta_d)$. \end{corollary} The proof structure is mostly identical to that of \cref{thm:beta}. However, the main challenge is \cref{lem:dir}, which shows the log scoring rule maximizes the differential operator $\Ddir$ defined in \cref{eq:ddir}. We prove \cref{lem:dir} in \cref{sec:lem:dir} and omit the rest of the proofs. \begin{lemma}[name=scaling,restate=dirscaling,label=lem:dirscaling] Let $H\in \mathcal{H}$. For all $\theta\in \Delta(\Omega)$ and $n\le N$, we have $\J_H(X^{(n)}_\theta)\ge \J_H(X^{(N)}_\theta)$. \end{lemma} The next lemma relates the information gain to the second order differential operator $\Ddir$ defined, for all smooth $h\in \mathcal{H}$ and $x\in \Delta(\Omega)$, by \begin{equation}\label{eq:ddir} \Ddir h(x) := \sum_{k = 1}^d x_k (\hat{e}_k-x)^\top\nabla^2h(x) (\hat{e}_k-x), \end{equation} where $(\hat{e}_k)_{k = 1}^d$ are the vertices of the simplex $\Delta(\Omega)$. The proof is essentially identical to that of \cref{lem:betacurv}. \begin{lemma}[restate=dircurv,label=lem:dircurv] For any smooth convex $H\in \mathcal{H}\cap \mathcal{C}^\infty(\relint(\Delta_d))$ and Dirichlet-categorical information structure $X = X_{\theta}^{(N)}$, $$\J_H(X) = \frac{1}{2(N+d+1)^2}\Ddir H(\E X)+O\left(\frac{1}{(N+d+1)^3}\right)$$ when $N$ is large enough. Moreover, for the log scoring rule, $\Obj_{\bar{\X}^{N}_{dir}}(H_{\ln}) = \frac{d-1}{2(\ln d)(N+d+1)^2}+o\left(\frac{1}{(N+d+1)^2}\right)$. \end{lemma} Finally, we show the infimum of $\Ddir$ on $H_{\ln}$ is greater than the infimum of $\Ddir$ on any $H\in \mathcal{H}$ that is smooth on $\Delta_d$. The idea stems from the maximum principle of elliptic second order differential equations. \begin{lemma}[name=optimal,restate=diropt,label=lem:dir] If $H\in \mathcal{H}\cap \mathcal{C}^\infty(\Delta_d)$, $$\inf_{x\in \relint(\Delta_d)}\Ddir H(x) < \frac{d-1}{\ln d}.$$ Moreover, $\inf_{x\in \relint(\Delta_d)}\Ddir H_{\ln}(x) = \frac{d-1}{\ln d}$. \end{lemma} \paragraph{Proof of \cref{lem:dir}}\label{sec:lem:dir} Despite being similar to \cref{lem:quadr,lem:beta}, the proof of \cref{lem:dir} is quite challenging. To see this, \cref{lem:beta} only needs to handle functions on a one-dimensional space, and a lower bound on $\Dbeta$ is sufficient to characterize such a function fully; in particular, once we know $\Dbeta h(x)$, we can compute the second derivative of $h$ at $x$. Similarly, \cref{lem:quadr} handles functions on a $d$-dimensional space, but the notion of strong convexity is also rich enough to fully determine the function. However, $\Ddir$ is a scalar-valued differential operator, so we cannot reconstruct $h$ from the values of $\Ddir h$. To address this, we use the idea of the maximum principle (\cref{thm:max_pd}) to prove \cref{lem:dir}. Note that $\Ddir H_{\ln}(x) = \frac{d-1}{\ln d}$ for all $x$ and $H_{\ln}\in [0,1]$. Suppose there exists a function $H\neq H_{\ln}$ in $\mathcal{H}\cap \mathcal{C}^\infty(\Delta_d)$ such that $\Ddir H(x)\ge \frac{d-1}{\ln d}$ for all $x\in \relint(\Delta_d)$.
We will show there exists an extreme point of the simplex $\hat{e}_j$ such that \begin{equation}\label{eq:dir0} H(\hat{e}_j)>H_{\ln}(\hat{e}_j) = 1, \end{equation} and prove \cref{lem:dir} by contradiction. By an argument similar to the proofs of \cref{lem:quadr} and \ref{lem:beta}, we can assume that $H$ is symmetric with respect to $c = (1/d, \dots, 1/d)$, and $H(c) = 0$. Let $u:=H-H_{\ln}$. We have $u(c) = H(c)-H_{\ln}(c) = 0$ and \begin{equation}\label{eq:dir1} \Ddir u(x) = \Ddir H(x)-\Ddir H_{\ln}(x) = \Ddir H(x)-\frac{d-1}{\ln d}\ge 0, \end{equation} for all $x\in \relint(\Delta_d)$. Let $\theta^*\in \Delta_d$ be a maximum point of $u$. We will show that $\max_x u(x)>0$ and that $\theta^* = \hat{e}_1$ can be taken as a maximum point, which proves \cref{eq:dir0} and completes the proof. Because $\Ddir$ in \cref{eq:ddir} is a second-order differential operator with coefficient matrix \begin{equation}\label{eq:dir_a} A_{dir}(x) = \sum_k x_k (\hat{e}_k-x) (\hat{e}_k-x)^\top \end{equation} where $A_{dir}(x)_{ii} = x_i(1-x_i)$ and $A_{dir}(x)_{ij} = -x_ix_j$ for $i\neq j$, \cref{eq:dir1} is a partial differential inequality on $u$. Ideally, we want to use the maximum principle (\cref{thm:max_pd}) and show that the maximum happens at a vertex, so that $u(\hat{e}_k)>u(c) = 0$, which implies \cref{eq:dir0} and completes the proof. However, since $A_{dir}(x)\mathbf{1} = \mathbf{0}$ for all $x$, $\Ddir$ is not elliptic, and we cannot use the standard maximum principle. Thus we use a maximum principle that holds under a weaker condition. \begin{lemma}[restate=maxpsd,label=lem:maxpsd] Let $\mathcal{V}\subset \R^d$ be a linear subspace, and let $W\subset \R^d$ be a bounded set such that $\{x-w: x\in W\}\subset \mathcal{V}$ is relatively open in $\mathcal{V}$ for some $w\in \R^d$. Suppose a second-order differential operator $\mathcal{L}$ with coefficient matrix $A$ satisfies the following two conditions: 1) $\mathcal{L}$ is uniformly elliptic on $W$ along $\mathcal{V}$, i.e., there exists $\rho>0$ such that for all $x\in W$ and $v\in \mathcal{V}$, $v^\top A(x)v\ge \rho\|v\|^2_2$; 2) the image of $A(x)$ lies in $\mathcal{V}$ and $null(A(x))= \mathcal{V}^\perp$ for all $x\in W$. Then every super-solution $h$ that satisfies $\mathcal{L} h\ge 0$ and is continuous on $cl(W)$ obeys $$\max_{cl(W)} h = \max_{bd(W)} h.$$ Moreover, if $h$ attains an interior maximum, then $h$ is constant. \end{lemma} This result can be derived from \cref{thm:max_pd}, because we can use $d-1$ coordinates to encode the subspace. The first condition says that $\mathcal{L}$ is uniformly elliptic on the subspace, and the second condition ensures that the differential operator is well-defined. Alternatively, this is also a special case of the maximum principle on Riemannian manifolds \cite{padilla1997principal}. We include a proof in \cref{app:exp} for completeness. Returning to the proof of \cref{lem:dir}: though $\Ddir$ is not uniformly elliptic on $\relint(\Delta_d)$, it is uniformly elliptic on $U_\epsilon:=\{x\in \Delta_d:x_k\ge \epsilon, \forall k\in [d]\}$ for any $\epsilon>0$. We set $\mathcal{V}_{dir} = \{v\in \mathbb{R}^d: \mathbf{1}^\top v = 0\}$, so that $\{x-c: x\in U_\epsilon\}\subset\mathcal{V}_{dir}$, and by \cref{lem:maxpsd} the maximum value of $u$ on $U_\epsilon$ is attained on the boundary of $U_\epsilon$. If $u$ had an interior maximum $\theta^*$, we could set $\epsilon$ small enough that $\theta^*\in \relint(U_\epsilon)$, which reaches a contradiction.
Therefore, $\theta^*$ can only lie on the boundary, and, due to the symmetry of $u$, we can assume $$\theta^*\in \Delta_{d-1}\subset bd(\Delta(\Omega)) = bd(\Delta_d).$$ Since $u(c) = H(c)-H_{\ln}(c) = 0$ and $u$ is not constant, $u(\theta^*)> 0$. Now we apply the maximum principle recursively to show that the maximum can only be at $\hat{e}_1$, which completes our proof. Formally, it is sufficient to prove $\sum_{k = 1}^{d-1} x_k (\hat{e}_k-x)^\top\nabla^2 u(x)(\hat{e}_k-x)> 0$ for all $x\in \relint(\Delta_{d-1})$, where the left-hand side is $\Ddir$ without the last coordinate. To prove this, note that $\lim_{x'\to x}\Ddir H(x') = \Ddir H(x)\ge \frac{d-1}{\ln d}$ since $H\in \mathcal{C}^\infty(\Delta(\Omega))$, and that $\Ddir H_{\ln}(x) = \frac{d-2}{\ln d}$ by direct computation. Therefore, we can apply \cref{lem:maxpsd} again and show that $\theta^*\in \Delta_{d-2}$. By induction, $\theta^* = \hat{e}_1$, which completes our proof. \bibliographystyle{plainnat}
\section{Introduction} The stochastic approximation method first appeared in \cite{robbins1951stochastic} for solving a root-finding problem. Nowadays, its first-order version, or the stochastic gradient method (SGM), has been extensively used to solve machine learning problems that involve huge amounts of given data as well as stochastic problems that involve uncertain streaming data. Complexity results of SGMs have been well established for convex problems. Many recent research papers on SGMs focus on nonconvex cases. In this paper, we consider the regularized nonconvex stochastic programming \begin{equation}\label{eq:stoc-prob} \Phi^*= \Min_{{\mathbf{x}}\in\RR^n} ~\Phi({\mathbf{x}}):= \big\{F({\mathbf{x}})\equiv\EE_\xi [f({\mathbf{x}};\xi)]\big\} + r({\mathbf{x}}), \end{equation} where $f(\,\cdot\,; \xi)$ is a smooth nonconvex function almost surely for $\xi$, and $r$ is a closed convex function on $\RR^n$. Examples of \eqref{eq:stoc-prob} include the sparse online matrix factorization \cite{mairal2010online}, the online nonnegative matrix factorization \cite{zhao2016online}, and the streaming PCA (by a unit-ball constraint) \cite{mitliagkas2013memory}. In addition, when $\xi$ follows a uniform distribution on a finite set $\Xi=\{\xi_1,\ldots,\xi_N\}$, \eqref{eq:stoc-prob} recovers the so-called finite-sum structured problem. It includes most regularized machine learning problems such as the sparse bilinear logistic regression \cite{shi2014sparse}, the sparse convolutional neural network \cite{liu2015sparse}, and the group sparse regularized deep neural networks \cite{scardapane2017group}. \subsection{Background} When $r\equiv 0$, the recent work \cite{arjevani2019lower} gives an $O(\varepsilon^{-3})$ lower complexity bound of SGMs to produce a stochastic $\varepsilon$-stationary solution of \eqref{eq:stoc-prob} (see Definition~\ref{def:eps-sol} below), by assuming the so-called mean-squared smoothness condition (see Assumption~\ref{assump-smooth}). Several variance-reduced SGMs \cite{tran2021hybrid, wang2019spiderboost, fang2018spider, cutkosky2019momentum} have achieved an $O(\varepsilon^{-3})$ or $\tilde O(\varepsilon^{-3})$ complexity result\footnote{Throughout the paper, we use $\tilde O$ to suppress an additional polynomial term of $|\log\varepsilon|$.}. Among them, \cite{fang2018spider, cutkosky2019momentum} only consider smooth cases, i.e., $r\equiv 0$ in \eqref{eq:stoc-prob}, and \cite{tran2021hybrid, wang2019spiderboost} study nonsmooth problems in the form of \eqref{eq:stoc-prob}. To reach an $O(\varepsilon^{-3})$ complexity result, the Hybrid-SGD method in \cite{tran2021hybrid} needs $O(\varepsilon^{-1})$ samples at the initial step and then at least two samples at each update, while \cite{wang2019spiderboost, fang2018spider} require $O(\varepsilon^{-2})$ samples after every fixed number of updates. The STORM method in \cite{cutkosky2019momentum} requires one single sample of $\xi$ at each update, but it only applies to smooth problems. In practice, when training a (deep) machine learning model, small-batch training is often used to obtain better generalization \cite{masters2018revisiting, keskar2016large}. In addition, for certain applications such as reinforcement learning \cite{sutton2018reinforcement}, often only a single sample can be obtained at a time, depending on the stochastic environment and the current decision. Furthermore, regularization terms can improve the generalization of a machine learning model, even when training a neural network \cite{wei2019regularization}.
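As a concrete instance of \eqref{eq:stoc-prob} for the discussion that follows, consider $\ell_1$-regularized logistic regression on streaming samples $\xi=({\mathbf{a}},b)$ with feature vector ${\mathbf{a}}$ and label $b\in\{-1,+1\}$. The sketch below is ours and purely illustrative (the function names are not from any cited work); it implements the stochastic gradient oracle $\nabla f({\mathbf{x}};\xi)$ and the regularizer $r$, which later sketches reuse:
\begin{verbatim}
import numpy as np

# One instance of (eq:stoc-prob): l1-regularized logistic regression.
# A sample is xi = (a, b) with feature vector a and label b in {-1, +1}:
#   f(x; xi) = log(1 + exp(-b * <a, x>)),   r(x) = lam * ||x||_1.
def stoch_grad(x, xi):
    """Unbiased stochastic gradient nabla f(x; xi)."""
    a, b = xi
    s = 1.0 / (1.0 + np.exp(b * (a @ x)))   # sigmoid(-b * <a, x>)
    return -b * s * a

def reg(x, lam=0.01):
    """The closed convex regularizer r(x)."""
    return lam * np.abs(x).sum()
\end{verbatim}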
We aim at designing a new SGM for solving the nonconvex nonsmooth problem \eqref{eq:stoc-prob} and achieving a (near)-optimal\footnote{By ``optimal'', we mean that the complexity result reaches the lower bound; a result is ``near-optimal'' if it exceeds the lower bound by an additional logarithmic term or a polynomial of a logarithmic term.} complexity result by using $O(1)$ samples (possibly just \emph{one}) at each update. \subsection{Mirror-prox Algorithm} Our algorithm is a mirror-prox SGM, and we adopt the momentum technique to reduce the variance of the stochastic gradient in order to achieve a (near)-optimal complexity result. Let $w$ be a continuously differentiable and 1-strongly convex function on $\dom(r)$, i.e., $$w({\mathbf{y}}) \ge w({\mathbf{x}}) + \langle \nabla w({\mathbf{x}}), {\mathbf{y}}-{\mathbf{x}}\rangle + \frac{1}{2}\|{\mathbf{y}}-{\mathbf{x}}\|^2,\, \forall\, {\mathbf{x}},{\mathbf{y}}\in \dom(r).$$ The Bregman divergence induced by $w$ is defined as \begin{equation} V({\mathbf{x}},{\mathbf{z}})=w({\mathbf{x}})-w({\mathbf{z}}) - \langle \nabla w({\mathbf{z}}), {\mathbf{x}}-{\mathbf{z}}\rangle. \end{equation} At each iteration of our algorithm, we obtain one or a few samples of $\xi$, compute stochastic gradients at the previous and current iterates using the same samples, and then perform a mirror-prox momentum stochastic gradient update. The pseudocode is shown in Algorithm~\ref{alg:prox-storm}. We name it PStorm, as it can be viewed as a proximal version of the Storm method in \cite{cutkosky2019momentum}. Notice that when $\beta_k=1,\forall\, k\ge0$, the algorithm reduces to the non-accelerated stochastic proximal gradient method. However, our analysis does not apply to this case, for which an innovative analysis can be found in \cite{davis2019stochastic}. \begin{algorithm}[h] \caption{Momentum-based variance-reduced proximal stochastic gradient method for \eqref{eq:stoc-prob}}\label{alg:prox-storm} \DontPrintSemicolon \textbf{Input:} max iteration number $K$, minibatch size $m$, and positive sequences $\{\beta_k\}\subseteq(0,1)$ and $\{\eta_k\}$.\; \textbf{Initialization:} choose ${\mathbf{x}}^0\in\dom(r)$ and let ${\mathbf{d}}^0 = \frac{1}{m_0}\sum_{\xi\in B_0}\nabla f({\mathbf{x}}^0;\xi)$ with $m_0$ i.i.d. samples $B_0=\{\xi_1^0,\ldots,\xi_{m_0}^0\}$\; \For{$k=0, 1,\ldots, K-1$}{ Update ${\mathbf{x}}$ by \begin{equation}\label{eq:update-x} {\mathbf{x}}^{k+1}=\argmin_{{\mathbf{x}}} \left\{\langle {\mathbf{d}}^k, {\mathbf{x}}\rangle + \frac{1}{\eta_k}V({\mathbf{x}},{\mathbf{x}}^k) + r({\mathbf{x}})\right\}.\vspace{-0.3cm} \end{equation} \; Obtain $m$ i.i.d. samples $B_{k+1}=\{\xi_1^{k+1},\ldots,\xi_m^{k+1}\}$ and let \begin{equation}\label{eq:def-v-u-k} \textstyle {\mathbf{v}}^{k+1} = \frac{1}{m}\sum_{\xi\in B_{k+1}}\nabla f({\mathbf{x}}^{k+1}; \xi),\quad {\mathbf{u}}^{k+1} = \frac{1}{m}\sum_{\xi\in B_{k+1}}\nabla f({\mathbf{x}}^{k}; \xi).\vspace{-0.3cm} \end{equation}\; Let ${\mathbf{d}}^{k+1} = {\mathbf{v}}^{k+1} + (1-\beta_{k})({\mathbf{d}}^{k}-{\mathbf{u}}^{k+1})$.\; } Return ${\mathbf{x}}^\tau$ with $\tau$ selected from $\{0,1,\ldots,K-1\}$ uniformly at random or by the distribution \begin{equation}\label{eq:select-tau} \textstyle \Prob(\tau = k) = \frac{\frac{\eta_k}{4}(1-\eta_k L)-\frac{\eta_k^2}{5m\eta_{k+1}}(1-\beta_k)^2}{\sum_{j=0}^{K-1}\left(\frac{\eta_j}{4}(1-\eta_j L)-\frac{\eta_j^2}{5m\eta_{j+1}}(1-\beta_j)^2\right)},\, k = 0,1,\ldots,K-1.
\end{equation} \vspace{-0.2cm} \end{algorithm} \subsection{Related Works} Many efforts have been devoted to analyzing the convergence and complexity of SGMs for solving nonconvex stochastic problems, e.g., \cite{ghadimi2016accelerated, ghadimi2013stochastic, xu2015block-sg, davis2019stochastic, davis2020stochastic, wang2019spiderboost, cutkosky2019momentum, fang2018spider, allen2018natasha, tran2021hybrid}. We list comparison results on the complexity in Table~\ref{table:comparison}. \begin{table} \begin{center} \resizebox{0.99\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{problem}} & \multirow{2}{*}{\textbf{key assumption}} & \textbf{\#samples} & \multirow{2}{*}{\textbf{complexity}}\\ & & &\textbf{at $k$-th iteration} &\\\hline\hline \multirow{2}{*}{accelerated prox-SGM \cite{ghadimi2016accelerated}} & \multirow{2}{*}{$\min_{\mathbf{x}}\{\EE_\xi[f({\mathbf{x}};\xi)]+r({\mathbf{x}})\}$} & $\EE_\xi[f({\mathbf{x}};\xi)]$ is smooth & \multirow{2}{*}{$\Theta(k)$} & \multirow{2}{*}{$O(\varepsilon^{-4})$}\\ & & $r$ is convex & & \\\hline \multirow{3}{*}{stochastic subgradient \cite{davis2019stochastic}} & \multirow{3}{*}{$\min_{\mathbf{x}}\{\EE_\xi[f({\mathbf{x}};\xi)]+r({\mathbf{x}})\}$} & $\EE_\xi[f({\mathbf{x}};\xi)]$ is weakly-convex & \multirow{3}{*}{$O(1)$} & \multirow{3}{*}{$O(\varepsilon^{-4})$}\\ & & $r$ is convex & & \\ & & bounded stochastic subgrad. & & \\\hline \multirow{2}{*}{Spider \cite{fang2018spider}} & \multirow{2}{*}{$\min_{\mathbf{x}}\{\EE_\xi[f({\mathbf{x}};\xi)]\}$} & mean-squared smoothness & $\Theta(\varepsilon^{-2})$ & \multirow{2}{*}{$O(\varepsilon^{-3})$} \\ & & see Assumption~\ref{assump-smooth} & or $\Theta(\varepsilon^{-1})$ & \\\hline \multirow{2}{*}{Storm \cite{cutkosky2019momentum} } & \multirow{2}{*}{$\min_{\mathbf{x}}\{\EE_\xi[f({\mathbf{x}};\xi)]\}$ } & $f(\,\cdot\,;\xi)$ is smooth a.s. & \multirow{2}{*}{1} & \multirow{2}{*}{$\tilde O(\varepsilon^{-3})$} \\ & & bounded stochastic grad.$^*$ & & \\\hline \multirow{2}{*}{Spiderboost \cite{wang2019spiderboost} }& \multirow{2}{*}{$\min_{\mathbf{x}}\{\EE_\xi[f({\mathbf{x}};\xi)]+r({\mathbf{x}})\}$ }& mean-squared smoothness & $\Theta(\varepsilon^{-2})$ & \multirow{2}{*}{ $O(\varepsilon^{-3})$}\\ & & $r$ is convex & or $\Theta(\varepsilon^{-1})$ & \\\hline \multirow{2}{*}{Hybrid-SGD \cite{tran2021hybrid} } & \multirow{2}{*}{$\min_{\mathbf{x}}\{\EE_\xi[f({\mathbf{x}};\xi)]+r({\mathbf{x}})\}$} & mean-squared smoothness & $\Theta(\varepsilon^{-1})$ if $k=0$ & \multirow{2}{*}{$O(\varepsilon^{-3})$}\\ & & $r$ is convex & $O(1)$ but at least 2 if $k>0$ & \\\hline \multirow{4}{*}{PStorm (\textbf{This paper})} & \multirow{4}{*}{$\min_{\mathbf{x}}\{\EE_\xi[f({\mathbf{x}};\xi)]+r({\mathbf{x}})\}$} & & $O(1)$ and can be 1 & \multirow{2}{*}{$\tilde O(\varepsilon^{-3})$}\\ & & mean-squared smoothness & varying stepsize & \\\cline{4-5} & & $r$ is convex & $O(1)$ and can be 1 & \multirow{2}{*}{$O(\varepsilon^{-3})$} \\ & & & constant stepsize & \\\hline \end{tabular} } \end{center} \caption{Comparison of the complexity results of several methods in the literature with our method for producing a stochastic $\varepsilon$-stationary solution of a nonconvex stochastic optimization problem. To obtain the listed results, all the compared methods assume unbiasedness and variance boundedness of the stochastic (sub)gradients. The results only show the dependence on $\varepsilon$.
All other constants (e.g., the smoothness constant $L$ and the initial objective error) are hidden in the big-$O$. More complete results of the proposed method are given in Remarks~\ref{rm:dep-sigma-dynamic} and \ref{rm:dep-sigma-const}. } $^*$: The boundedness assumption on the stochastic gradient made by Storm \cite{cutkosky2019momentum} can be lifted if a bound $\sigma$ on the variance of the stochastic gradient is known. \label{table:comparison} \end{table} The work \cite{ghadimi2013stochastic} appears to be the first to conduct complexity analysis of SGMs for nonconvex stochastic problems. It introduces a randomized SGM. For a smooth nonconvex problem, the randomized SGM can produce a stochastic $\varepsilon$-stationary solution within $O(\varepsilon^{-4})$ SG iterations. The same-order complexity result is then extended in \cite{ghadimi2016accelerated} to nonsmooth nonconvex stochastic problems in the form of \eqref{eq:stoc-prob}. To achieve an $O(\varepsilon^{-4})$ complexity result, the accelerated prox-SGM in \cite{ghadimi2016accelerated} needs to take $\Theta(k)$ samples at the $k$-th update for each $k$. Assuming a weak-convexity condition and using the tool of the Moreau envelope, \cite{davis2019stochastic} establishes an $O(\varepsilon^{-4})$ complexity result of the stochastic subgradient method for solving more general nonsmooth nonconvex problems to produce a near-$\varepsilon$ stochastic stationary solution (see \cite{davis2019stochastic} for the precise definition). In general, the $O(\varepsilon^{-4})$ complexity result cannot be improved for smooth nonconvex stochastic problems, as \cite{arjevani2019lower} shows that for the problem $\min_{\mathbf{x}} F({\mathbf{x}})$ where $F$ is smooth, any SGM that can access unbiased SG with bounded variance needs $\Omega(\varepsilon^{-4})$ SGs to produce a solution $\bar{\mathbf{x}}$ such that $\EE\big[\|\nabla F(\bar{\mathbf{x}})\|\big] \le \varepsilon$. However, with one additional mean-squared smoothness condition on each unbiased SG, the complexity can be reduced to $O(\varepsilon^{-3})$, which has been reached by a few variance-reduced SGMs \cite{tran2021hybrid, wang2019spiderboost, fang2018spider, cutkosky2019momentum, pham2020proxsarah}. These methods are closely related to ours. Below we briefly review them. \noindent\textbf{Spider.} To find a stochastic $\varepsilon$-stationary solution of \eqref{eq:stoc-prob} with $r\equiv0$, \cite{fang2018spider} proposes the Spider method with the update: ${\mathbf{x}}^{k+1}={\mathbf{x}}^k - \eta_k {\mathbf{v}}^k$ for each $k\ge0$. Here, ${\mathbf{v}}^k$ is set to \begin{equation}\label{eq:vk-spider} {\mathbf{v}}^k = \left\{ \begin{array}{ll} \frac{1}{|B_k|}\sum_{\xi\in B_k}\left(\nabla f({\mathbf{x}}^k;\xi)-\nabla f({\mathbf{x}}^{k-1};\xi)\right) + {\mathbf{v}}^{k-1}, & \text{ if } \mathrm{mod}(k, q)\neq 0, \\[0.2cm] \frac{1}{|C_k|}\sum_{\xi\in C_k}\nabla f({\mathbf{x}}^k;\xi), & \text{ otherwise}, \end{array} \right. \end{equation} where $|B_k|=\Theta(\frac{1}{q\varepsilon^{2}})$, $|C_k|=\Theta(\varepsilon^{-2})$, and $q=\Theta(\varepsilon^{-1})$ or $q=\Theta(\varepsilon^{-2})$. Under the mean-squared smoothness condition (see Assumption~\ref{assump-smooth}), the Spider method can produce a stochastic $\varepsilon$-stationary solution with $O(\varepsilon^{-3})$ sample gradients, by choosing an appropriate learning rate $\eta_k$ (roughly on the order of $\frac{1}{q\|{\mathbf{v}}^k\|}$).
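To make the recursion \eqref{eq:vk-spider} concrete, here is a minimal sketch of one Spider estimator step (ours, illustrative only; it reuses the hypothetical \texttt{stoch\_grad} oracle sketched in the introduction, and the batch sizes are left to the caller per the choices stated above):
\begin{verbatim}
import numpy as np

def spider_v(k, q, x_curr, x_prev, v_prev, batch_B, batch_C, stoch_grad):
    """One step of the Spider estimator (eq:vk-spider).

    batch_B, batch_C: lists of i.i.d. samples xi, with
    |B_k| = Theta(1/(q*eps^2)) and |C_k| = Theta(eps^-2);
    stoch_grad(x, xi) returns nabla f(x; xi).
    """
    if k % q != 0:
        # Recursive, variance-reduced correction on the small batch B_k.
        diffs = [stoch_grad(x_curr, xi) - stoch_grad(x_prev, xi)
                 for xi in batch_B]
        return np.mean(diffs, axis=0) + v_prev
    # Full refresh on the large batch C_k.
    return np.mean([stoch_grad(x_curr, xi) for xi in batch_C], axis=0)
\end{verbatim}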
\noindent\textbf{Storm.} \cite{cutkosky2019momentum} focuses on a smooth nonconvex stochastic problem, i.e., \eqref{eq:stoc-prob} with $r\equiv0$. It proposes the Storm method, which can be viewed as a special case of Algorithm~\ref{alg:prox-storm} with $m_0=m=1$ applied to the smooth problem. However, its analysis and algorithm design rely on knowledge of a uniform bound on $\{\|\nabla f({\mathbf{x}};\xi)\|\}$ or on a bound of the variance of the stochastic gradient. In addition, because the learning rate of Storm depends on the sampled stochastic gradient, its analysis needs almost-sure uniform smoothness of $f({\mathbf{x}};\xi)$. This assumption is significantly stronger than the mean-squared smoothness condition, and the uniform smoothness constant can be much larger than an averaged one. \noindent\textbf{Spiderboost.} \cite{wang2019spiderboost} extends Spider to solve a nonsmooth nonconvex stochastic problem in the form of \eqref{eq:stoc-prob} by proposing the so-called Spiderboost method. Spiderboost iteratively performs the update \begin{equation}\label{eq:spiderboost} \textstyle {\mathbf{x}}^{k+1}=\argmin_{\mathbf{x}} \langle {\mathbf{v}}^k, {\mathbf{x}}\rangle + \frac{1}{\eta}V({\mathbf{x}},{\mathbf{x}}^k) + r({\mathbf{x}}), \end{equation} where $V$ denotes the Bregman divergence induced by a strongly-convex function, and ${\mathbf{v}}^k$ is set by \eqref{eq:vk-spider} with $q=|B_k| = \Theta(\varepsilon^{-1})$ and $|C_k|=\Theta(\varepsilon^{-2})$. Under the mean-squared smoothness condition, Spiderboost reaches a complexity result of $O(\varepsilon^{-3})$ by choosing $\eta=\frac{1}{2L}$, where $L$ is the smoothness constant. \noindent\textbf{Hybrid-SGD.} \cite{tran2021hybrid} considers a nonsmooth nonconvex stochastic problem in the form of \eqref{eq:stoc-prob}. It proposes a proximal stochastic method, called Hybrid-SGD, as a hybrid of SARAH \cite{nguyen2017sarah} and an unbiased SGD. The Hybrid-SGD performs the update ${\mathbf{x}}^{k+1} = (1-\gamma_k){\mathbf{x}}^k + \gamma_k \hat{\mathbf{x}}^{k+1}$ for each $k\ge 0$, where $$\textstyle \hat{\mathbf{x}}^{k+1}=\argmin_{\mathbf{x}} \langle {\mathbf{v}}^k, {\mathbf{x}}\rangle + \frac{1}{2\eta_k}\|{\mathbf{x}}-{\mathbf{x}}^k\|^2 + r({\mathbf{x}}).$$ Here, the sequence $\{{\mathbf{v}}^k\}$ is set by ${\mathbf{v}}^0=\frac{1}{|B_0|}\sum_{\xi\in B_0}\nabla f({\mathbf{x}}^0;\xi)$ with $|B_0| = \Theta(\varepsilon^{-1})$ for a given $\varepsilon>0$ and \begin{equation}\label{eq:hyb-sgd-v} \textstyle {\mathbf{v}}^k = \beta_{k-1}{\mathbf{v}}^{k-1}+\beta_{k-1}\big(\nabla f({\mathbf{x}}^k;\xi_k)-\nabla f({\mathbf{x}}^{k-1};\xi_k)\big) + (1-\beta_{k-1})\nabla f({\mathbf{x}}^k;\zeta_k), \end{equation} where $\xi_k$ and $\zeta_k$ are two independent samples of $\xi$. A mini-batch version of Hybrid-SGD is also given in \cite{tran2021hybrid}. By choosing appropriate constant parameters $\{(\beta_k,\gamma_k,\eta_k)\}$, Hybrid-SGD can reach an $O(\varepsilon^{-3})$ complexity result. Although the update of ${\mathbf{v}}^k$ requires only two or $O(1)$ samples, its initial setting needs $O(\varepsilon^{-1})$ samples. As explained in \cite[Remark 3]{tran2021hybrid}, if the initial minibatch size is $|B_0|=O(1)$, then the complexity result of Hybrid-SGD worsens to $O(\varepsilon^{-4})$. It is possible to reduce the $O(\varepsilon^{-4})$ complexity by using an adaptive $\beta_k$, as mentioned in \cite[Remark 3]{tran2021hybrid}, adopting the technique in \cite{tran2020hybrid-minimax}.
This way, a near-optimal $\tilde O(\varepsilon^{-3})$ result may be shown for Hybrid-SGD without a large initial minibatch. Notice that with $\xi_k=\zeta_k, \forall\, k$, the stochastic gradient estimator of Hybrid-SGD reduces to that of Storm, and further with $\gamma_k=1,\, \forall\, k$, the update of Hybrid-SGD recovers ours. However, the analysis in \cite{tran2020hybrid-minimax, tran2021hybrid} relies on the independence of $\xi_k$ and $\zeta_k$ and the condition $\gamma_k\in (0,1)$, and thus it does not apply to our algorithm. \noindent\textbf{More.} There are many other works analyzing complexity results of SGMs for solving nonconvex finite-sum structured problems, e.g., \cite{allen2016variance, reddi2016stochastic, lei2017non, huo2016asynchronous}. These results often emphasize the dependence on the number of component functions and the target error tolerance $\varepsilon$. In addition, several works have analyzed adaptive SGMs for nonconvex finite-sum or stochastic problems, e.g., \cite{chen2018convergence, zhou2018convergence, xu2020-APAM}. Moreover, along the direction of accelerating SGMs, some works (e.g., \cite{zhang2019stochastic, tran2020hybrid-minimax, xu2021katyusha, zhang2021stochastic}) have considered minimax structured or compositional optimization problems. An exhaustive review of all these works is impossible and beyond the scope of this paper. We refer interested readers to those papers and the references therein. \subsection{Contributions} Our main contributions are about the algorithm design and analysis. \begin{itemize} \item We design a momentum-based variance-reduced mirror-prox stochastic gradient method for solving nonconvex nonsmooth stochastic problems. The proposed method generalizes Storm in \cite{cutkosky2019momentum} from smooth cases to nonsmooth cases. In addition, with a single data sample per iteration, it achieves, by taking varying stepsizes, the same near-optimal complexity result $\tilde O(\varepsilon^{-3})$ under a mean-squared smoothness condition, which is weaker than the almost-sure uniform smoothness condition assumed in \cite{cutkosky2019momentum}. \item When constant stepsizes are adopted, the proposed method can achieve the optimal $O(\varepsilon^{-3})$ complexity result by using one single or $O(1)$ data samples per iteration. While Spiderboost \cite{wang2019spiderboost} can also achieve the optimal $O(\varepsilon^{-3})$ complexity result for stochastic nonconvex nonsmooth problems, it needs $\Theta(\varepsilon^{-2})$ data samples every $\Theta(\varepsilon^{-1})$ iterations and $\Theta(\varepsilon^{-1})$ samples for every other iteration. To achieve the optimal $O(\varepsilon^{-3})$ complexity result, Hybrid-SGD \cite{tran2021hybrid} needs $\Theta(\varepsilon^{-1})$ data samples for the first iteration and at least two samples for all other iterations. However, if only $O(1)$ samples can be obtained initially, the worst-case complexity result of Hybrid-SGD with constant stepsize increases to $O(\varepsilon^{-4})$. Our proposed method is the first one that uses only one or $O(1)$ samples per iteration and can still reach the optimal complexity result, and thus it can be applied to online learning problems that need real-time decisions based on possibly one or several new data samples. \item Furthermore, the proposed method only needs an estimate of the smoothness parameter and is easy to tune for good performance.
Empirically, we observe that it converges faster than a vanilla SGD and can give higher testing accuracy than Spiderboost and Hybrid-SGD on training sparse neural networks. \end{itemize} \subsection{Notation, Definitions, and Outline} We use bold lowercase letters ${\mathbf{x}},{\mathbf{y}},{\mathbf{g}},\ldots$ for vectors. $\EE_{B_{k}}$ denotes the expectation over a mini-batch set $B_{k}$ conditional on all previous history, and $\EE$ denotes the full expectation. $|B_k|$ counts the number of elements in the set $B_k$. We use $\|\cdot\|$ for the Euclidean norm. A differentiable function $F$ is called $L$-smooth if $\|\nabla F({\mathbf{x}})-\nabla F({\mathbf{y}})\|\le L\|{\mathbf{x}}-{\mathbf{y}}\|$ for all ${\mathbf{x}}$ and ${\mathbf{y}}$. \begin{definition}[proximal gradient mapping]\label{def:prox-map} Given ${\mathbf{d}}$, ${\mathbf{x}}\in\dom(r)$, and $\eta>0$, we define $P({\mathbf{x}},{\mathbf{d}},\eta)=\frac{1}{\eta}({\mathbf{x}}-{\mathbf{x}}^+)$, where ${\mathbf{x}}^+ =\argmin_{\mathbf{y}} \textstyle \left\{\langle {\mathbf{d}}, {\mathbf{y}}\rangle +\frac{1}{\eta}V({\mathbf{y}},{\mathbf{x}}) + r({\mathbf{y}})\right\}.$ \end{definition} By the proximal gradient mapping, if a point $\bar{\mathbf{x}}\in\dom(r)$ is an optimal solution of \eqref{eq:stoc-prob}, then it must satisfy $P(\bar{\mathbf{x}},\nabla F(\bar{\mathbf{x}}),\eta)=\vzero$ for any $\eta>0$. Based on this observation, we define a near-stationary solution as follows. This definition is standard and has been adopted in other papers, e.g., \cite{wang2019spiderboost}. \begin{definition}[stochastic $\varepsilon$-stationary solution]\label{def:eps-sol} Given $\varepsilon>0$, a random vector ${\mathbf{x}}\in \dom(r)$ is called a stochastic $\varepsilon$-stationary solution of \eqref{eq:stoc-prob} if for some $\eta>0$, it holds $\EE[\|P({\mathbf{x}},\nabla F({\mathbf{x}}),\eta)\|^2]\le \varepsilon^2$. \end{definition} From \cite[Lemma 1]{ghadimi2016mini}, it holds that \begin{equation}\label{eq:lem1-lan} \big\langle {\mathbf{d}}, P({\mathbf{x}},{\mathbf{d}},\eta)\big\rangle \ge \|P({\mathbf{x}},{\mathbf{d}},\eta)\|^2 + \frac{1}{\eta}\big(r({\mathbf{x}}^+)-r({\mathbf{x}})\big). \end{equation} In addition, the proximal gradient mapping is nonexpansive in its second argument by \cite[Proposition 1]{ghadimi2016mini}, i.e., \begin{equation}\label{eq:nonexp-P} \|P({\mathbf{x}}, {\mathbf{d}}_1, \eta) - P({\mathbf{x}}, {\mathbf{d}}_2, \eta)\| \le \|{\mathbf{d}}_1-{\mathbf{d}}_2\|, \ \forall\, {\mathbf{d}}_1,{\mathbf{d}}_2, \, \forall\, {\mathbf{x}}\in \dom(r), \, \forall\, \eta > 0. \end{equation} For each $k\ge0$, we denote \begin{equation}\label{eq:vg-vgbar} {\mathbf{g}}^k = P({\mathbf{x}}^k, {\mathbf{d}}^k, \eta_k),\quad \bar{\mathbf{g}}^k = P({\mathbf{x}}^k, \nabla F({\mathbf{x}}^k), \eta_k). \end{equation} Notice that $\|\bar{\mathbf{g}}^k\|$ measures the violation of stationarity of ${\mathbf{x}}^k$. The gradient error is represented by \begin{equation}\label{eq:error-grad} {\mathbf{e}}^k = {\mathbf{d}}^k - \nabla F({\mathbf{x}}^k). \end{equation} \noindent\textbf{Outline.} The rest of the paper is organized as follows. In section~\ref{sec:analysis}, we establish complexity results of Algorithm~\ref{alg:prox-storm}. Numerical experiments are conducted in section~\ref{sec:numerical}, and we conclude the paper in section~\ref{sec:conclusion}. \section{Convergence Analysis}\label{sec:analysis} In this section, we analyze the complexity result of Algorithm \ref{alg:prox-storm}.
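Before proceeding, we illustrate Definition~\ref{def:prox-map} with a concrete special case: when $w({\mathbf{x}})=\frac{1}{2}\|{\mathbf{x}}\|^2$ (so that $V({\mathbf{y}},{\mathbf{x}})=\frac{1}{2}\|{\mathbf{y}}-{\mathbf{x}}\|^2$) and $r=\lambda\|\cdot\|_1$, the point ${\mathbf{x}}^+$ is obtained by soft-thresholding, and both ${\mathbf{x}}^+$ and $P({\mathbf{x}},{\mathbf{d}},\eta)$ have closed forms. A minimal sketch (ours, for illustration only):
\begin{verbatim}
import numpy as np

def prox_step(x, d, eta, lam=0.01):
    """x^+ of Definition (def:prox-map) with V(y, x) = ||y - x||^2 / 2
    and r = lam * ||.||_1: a soft-thresholded gradient step."""
    z = x - eta * d                       # unconstrained gradient step
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)

def prox_map(x, d, eta, lam=0.01):
    """The proximal gradient mapping P(x, d, eta) = (x - x^+) / eta."""
    return (x - prox_step(x, d, eta, lam)) / eta
\end{verbatim}
In this instance, $\|P({\mathbf{x}},\nabla F({\mathbf{x}}),\eta)\|$ is exactly the stationarity measure whose second moment is bounded in Definition~\ref{def:eps-sol}.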
Part of our analysis is inspired by that in \cite{cutkosky2019momentum} and \cite{wang2019spiderboost}. In addition, we give a novel analysis that enables us to obtain the optimal $O(\varepsilon^{-3})$ complexity result by using $O(1)$ samples at every iteration. Throughout our analysis, we make the following assumptions. \begin{assumption}[finite optimal objective]\label{assump-obj} The optimal objective value $\Phi^*$ of \eqref{eq:stoc-prob} is finite. \end{assumption} \begin{assumption}[mean-squared smoothness]\label{assump-smooth} The function $f(\,\cdot\,;\xi)$ satisfies the mean-squared smoothness condition: $\EE_\xi \big[\|\nabla f({\mathbf{x}};\xi) - \nabla f({\mathbf{y}};\xi)\|^2\big] \le L^2 \|{\mathbf{x}}-{\mathbf{y}}\|^2,\, \forall\, {\mathbf{x}},{\mathbf{y}}\in \dom(r).$ \end{assumption} \begin{assumption}[unbiasedness and variance boundedness]\label{assump-sgd} There is $\sigma>0$ such that \begin{align} \EE_\xi [\nabla f({\mathbf{x}}; \xi)] = \nabla F({\mathbf{x}}),\quad \EE[\|\nabla f({\mathbf{x}}; \xi) - \nabla F({\mathbf{x}})\|^2] \le \sigma^2, \ \forall\, {\mathbf{x}}\in \dom(r). \end{align} \end{assumption} It is easy to show that under Assumptions~\ref{assump-smooth} and \ref{assump-sgd}, the function $F({\mathbf{x}})=\EE_\xi [f({\mathbf{x}};\xi)]$ is $L$-smooth; see the arguments at the end of section~2.2 of \cite{tran2021hybrid}. We first show a few lemmas. The lemma below estimates the one-iteration progress. Its proof follows that of \cite{wang2019spiderboost}. \begin{lemma}[one-iteration progress]\label{lem:obj-diff} Let $\{{\mathbf{x}}^k\}$ be generated from Algorithm~\ref{alg:prox-storm}. If $F$ is $L$-smooth, then $$\Phi({\mathbf{x}}^{k+1})-\Phi({\mathbf{x}}^k) \le \frac{\eta_k}{2}(2-\eta_k L)\|{\mathbf{e}}^k\|^2 - \frac{\eta_k}{4}(1-\eta_k L)\|\bar{\mathbf{g}}^k\|^2,\, \forall\, k\ge 0,$$ where $\bar{\mathbf{g}}^k$ is defined in \eqref{eq:vg-vgbar}. \end{lemma} \begin{proof} By the $L$-smoothness of $F$ and the definition of ${\mathbf{g}}^k$ in \eqref{eq:vg-vgbar}, we have \begin{equation}\label{eq:lip-F-ineq} F({\mathbf{x}}^{k+1})-F({\mathbf{x}}^k)\le \langle \nabla F({\mathbf{x}}^k), {\mathbf{x}}^{k+1}-{\mathbf{x}}^k\rangle + \frac{L}{2}\|{\mathbf{x}}^{k+1}-{\mathbf{x}}^k\|^2 = -\eta_k\langle \nabla F({\mathbf{x}}^k), {\mathbf{g}}^k\rangle + \frac{\eta_k^2L}{2}\|{\mathbf{g}}^k\|^2. \end{equation} Using the definition of ${\mathbf{e}}^k$ in \eqref{eq:error-grad} and the inequality in \eqref{eq:lem1-lan}, we have $$-\langle \nabla F({\mathbf{x}}^k), {\mathbf{g}}^k\rangle = \langle {\mathbf{e}}^k, {\mathbf{g}}^k\rangle - \langle {\mathbf{d}}^k, {\mathbf{g}}^k\rangle \le \langle {\mathbf{e}}^k, {\mathbf{g}}^k\rangle - \|{\mathbf{g}}^k\|^2 + \frac{1}{\eta_k}\big(r({\mathbf{x}}^k)-r({\mathbf{x}}^{k+1})\big).$$ Plugging the above inequality into \eqref{eq:lip-F-ineq} and rearranging terms give $$\Phi({\mathbf{x}}^{k+1})-\Phi({\mathbf{x}}^k) \le \eta_k\langle {\mathbf{e}}^k, {\mathbf{g}}^k\rangle - \eta_k \|{\mathbf{g}}^k\|^2+ \frac{\eta_k^2L}{2}\|{\mathbf{g}}^k\|^2.$$ By the Cauchy--Schwarz and Young's inequalities, it holds that $\eta_k\langle {\mathbf{e}}^k, {\mathbf{g}}^k\rangle \le \frac{\eta_k}{2}\|{\mathbf{e}}^k\|^2 + \frac{\eta_k}{2}\|{\mathbf{g}}^k\|^2$, which together with the above inequality implies \begin{equation}\label{eq:chg-obj-0} \Phi({\mathbf{x}}^{k+1})-\Phi({\mathbf{x}}^k) \le \frac{\eta_k}{2}\|{\mathbf{e}}^k\|^2 - \frac{\eta_k}{2}(1-\eta_k L)\|{\mathbf{g}}^k\|^2.
\end{equation} From \eqref{eq:nonexp-P} and the definitions of ${\mathbf{g}}^k$ and $\bar{\mathbf{g}}^k$ in \eqref{eq:vg-vgbar}, it follows that \begin{equation}\label{eq:tri-g} -\|{\mathbf{g}}^k\|^2 \le -\frac{1}{2}\|\bar{\mathbf{g}}^k\|^2 + \|{\mathbf{g}}^k-\bar{\mathbf{g}}^k\|^2\le -\frac{1}{2}\|\bar{\mathbf{g}}^k\|^2 + \|{\mathbf{d}}^k -\nabla F({\mathbf{x}}^k)\|^2 = -\frac{1}{2}\|\bar{\mathbf{g}}^k\|^2 + \|{\mathbf{e}}^k\|^2. \end{equation} Plugging the above inequality into \eqref{eq:chg-obj-0} gives the desired result. \end{proof} The next lemma gives a recursive bound on the gradient error vector sequence $\{{\mathbf{e}}^k\}$. Its proof follows that of \cite[Lemma 2]{cutkosky2019momentum}. \begin{lemma}[recursive bound on gradient error]\label{lem:bd-ek} Under Assumptions~\ref{assump-smooth} and \ref{assump-sgd}, it holds $$\EE\big[\|{\mathbf{e}}^{k+1}\|^2\big] \le \frac{2\beta_k^2\sigma^2}{m} + \frac{4(1-\beta_k)^2\eta_{k}^2 L^2}{m} \EE\big[\|\bar{\mathbf{g}}^{k}\|^2\big] + (1-\beta_k)^2\left(\textstyle 1+\frac{4\eta_{k}^2 L^2}{m}\right)\EE\big[\|{\mathbf{e}}^{k}\|^2\big], \forall\, k\ge0,$$ where $\bar{\mathbf{g}}^k$ and ${\mathbf{e}}^k$ are defined in \eqref{eq:vg-vgbar} and \eqref{eq:error-grad}. \end{lemma} \begin{proof} First, notice that $\EE_{B_{k+1}}[\langle {\mathbf{v}}^{k+1}, {\mathbf{e}}^{k}\rangle] = \langle\nabla F({\mathbf{x}}^{k+1}), {\mathbf{e}}^{k}\rangle$ and $\EE_{B_{k+1}}[\langle {\mathbf{u}}^{k+1}, {\mathbf{e}}^{k}\rangle] = \langle\nabla F({\mathbf{x}}^{k}), {\mathbf{e}}^{k}\rangle$, and thus \begin{equation}\label{eq:crs-term0} \EE_{B_{k+1}}[\langle {\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1}), {\mathbf{e}}^{k}\rangle] = 0,\quad \EE_{B_{k+1}}[\langle {\mathbf{u}}^{k+1} - \nabla F({\mathbf{x}}^{k}), {\mathbf{e}}^{k}\rangle] = 0. \end{equation} Hence, by writing ${\mathbf{e}}^{k+1} = {\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1}) + (1-\beta_k)(\nabla F({\mathbf{x}}^{k})- {\mathbf{u}}^{k+1})+(1-\beta_k){\mathbf{e}}^{k}$, we have \begin{equation}\label{eq:one-step-e} \EE_{B_{k+1}}\big[\|{\mathbf{e}}^{k+1}\|^2\big] = \EE_{B_{k+1}}\big[\|{\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1}) + (1-\beta_k)(\nabla F({\mathbf{x}}^{k})- {\mathbf{u}}^{k+1})\|^2\big] + (1-\beta_k)^2\|{\mathbf{e}}^{k}\|^2. \end{equation} By Young's inequality, it holds that \begin{align}\label{eq:step2} &~\|{\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1}) + (1-\beta_k)(\nabla F({\mathbf{x}}^{k})- {\mathbf{u}}^{k+1})\|^2 \cr = &~ \big\|\beta_k\big({\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1})\big) + (1-\beta_k)\big({\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1}) + \nabla F({\mathbf{x}}^{k})- {\mathbf{u}}^{k+1}\big)\big\|^2\cr \le &~ 2\beta_k^2\|{\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1})\|^2 + 2(1-\beta_k)^2\|{\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1}) + \nabla F({\mathbf{x}}^{k})- {\mathbf{u}}^{k+1}\|^2.
\end{align} From the definitions of ${\mathbf{v}}^{k+1}$ and ${\mathbf{u}}^{k+1}$ in \eqref{eq:def-v-u-k}, we have \begin{align}\label{eq:step23} &~\EE_{B_{k+1}}\big[\|{\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1}) + \nabla F({\mathbf{x}}^{k})- {\mathbf{u}}^{k+1}\|^2\big] \cr =&~\frac{1}{m^2}\EE_{B_{k+1}}\left\| \sum_{\xi\in B_{k+1}}\left(\nabla f({\mathbf{x}}^{k+1};\xi)-\nabla f({\mathbf{x}}^{k};\xi) - \nabla F({\mathbf{x}}^{k+1})+\nabla F({\mathbf{x}}^{k})\right) \right\|^2 \cr =&~\frac{1}{m^2}\sum_{j=1}^m\EE_{\xi_j^{k+1}}\left\| \nabla f({\mathbf{x}}^{k+1};\xi_j^{k+1})-\nabla f({\mathbf{x}}^{k};\xi_j^{k+1}) - \nabla F({\mathbf{x}}^{k+1})+\nabla F({\mathbf{x}}^{k}) \right\|^2 \cr \le&~\frac{1}{m^2}\sum_{j=1}^m\EE_{\xi_j^{k+1}}\left\| \nabla f({\mathbf{x}}^{k+1};\xi_j^{k+1})-\nabla f({\mathbf{x}}^{k};\xi_j^{k+1}) \right\|^2 \cr \le&~\frac{L^2}{m}\|{\mathbf{x}}^{k+1}-{\mathbf{x}}^{k}\|^2, \end{align} where the second equality holds because of the i.i.d. samples in $B_{k+1}$ and the zero mean of the random vector $\nabla f({\mathbf{x}}^{k+1};\xi_j^{k+1})-\nabla f({\mathbf{x}}^{k};\xi_j^{k+1}) - \nabla F({\mathbf{x}}^{k+1})+\nabla F({\mathbf{x}}^{k})$ resulting from the unbiasedness in Assumption~\ref{assump-sgd}, the first inequality is due to the fact that the variance of a random vector is upper-bounded by its second moment, and the last inequality follows from Assumption~\ref{assump-smooth}. Now, take the conditional expectation on both sides of \eqref{eq:step2}, use \eqref{eq:step23}, and substitute it into \eqref{eq:one-step-e}. We have $$\EE_{B_{k+1}}\big[\|{\mathbf{e}}^{k+1}\|^2\big] \le (1-\beta_k)^2\|{\mathbf{e}}^{k}\|^2 +2\beta_k^2\EE_{B_{k+1}} \big[\|{\mathbf{v}}^{k+1} - \nabla F({\mathbf{x}}^{k+1})\|^2\big] + \frac{2(1-\beta_k)^2 L^2}{m} \EE_{B_{k+1}}\big[\|{\mathbf{x}}^{k+1}-{\mathbf{x}}^{k}\|^2\big] .$$ Taking a full expectation over the above inequality and using Assumption~\ref{assump-sgd}, we have \begin{align}\label{eq:step32} \EE\big[\|{\mathbf{e}}^{k+1}\|^2\big] \le &~ (1-\beta_k)^2\EE\big[\|{\mathbf{e}}^{k}\|^2\big]+ \frac{2\beta_k^2\sigma^2}{m} + \frac{2(1-\beta_k)^2 L^2}{m} \EE\big[\|{\mathbf{x}}^{k+1}-{\mathbf{x}}^{k}\|^2\big]\cr = & ~ (1-\beta_k)^2\EE\big[\|{\mathbf{e}}^{k}\|^2\big]+ \frac{2\beta_k^2\sigma^2}{m} + \frac{2(1-\beta_k)^2 \eta_k^2 L^2}{m} \EE\big[\|{\mathbf{g}}^{k}\|^2\big], \end{align} where we have used ${\mathbf{x}}^{k+1}-{\mathbf{x}}^{k} = -\eta_{k} {\mathbf{g}}^{k}$ in the equality. By arguments similar to those in \eqref{eq:tri-g}, it holds that $$\|{\mathbf{g}}^{k}\|^2 \le 2\|\bar{\mathbf{g}}^{k}\|^2 + 2\|{\mathbf{g}}^{k}-\bar{\mathbf{g}}^{k}\|^2\le 2\|\bar{\mathbf{g}}^{k}\|^2 + 2\|{\mathbf{e}}^{k}\|^2.$$ Plugging the above inequality into \eqref{eq:step32}, we obtain the desired result. \end{proof} \subsection{Results with Varying Stepsize} In this subsection, we show the convergence results of Algorithm~\ref{alg:prox-storm} by taking varying stepsizes. Using Lemmas~\ref{lem:obj-diff} and \ref{lem:bd-ek}, we first show a convergence rate result under a general condition on the parameters. Then we specify the choice of the parameters.
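For reference, the following compact sketch (ours, illustrative only; it takes $m_0=m$ and reuses the hypothetical \texttt{stoch\_grad} and \texttt{prox\_step} routines sketched earlier) implements the loop of Algorithm~\ref{alg:prox-storm} for generic parameter sequences:
\begin{verbatim}
import numpy as np

def pstorm(x0, K, m, etas, betas, sample_batch, stoch_grad, prox_step):
    """Sketch of Algorithm 1 (PStorm).

    sample_batch(m) -> list of m i.i.d. samples xi;
    stoch_grad(x, xi) -> nabla f(x; xi);
    prox_step(x, d, eta) -> x^{k+1} from the update (eq:update-x).
    """
    x = x0
    d = np.mean([stoch_grad(x, xi) for xi in sample_batch(m)], axis=0)
    iterates = [x0]
    for k in range(K):
        x_new = prox_step(x, d, etas[k])
        B = sample_batch(m)                 # same samples for v and u
        v = np.mean([stoch_grad(x_new, xi) for xi in B], axis=0)
        u = np.mean([stoch_grad(x, xi) for xi in B], axis=0)
        d = v + (1.0 - betas[k]) * (d - u)  # momentum variance reduction
        x = x_new
        iterates.append(x)
    return iterates  # return x^tau with tau drawn as in (eq:select-tau)
\end{verbatim}
The stepsize and momentum sequences \texttt{etas} and \texttt{betas} can be precomputed, e.g., from the varying-stepsize choice \eqref{eq:dynamic-para} below.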
\begin{theorem}\label{thm:generic-vary} Under Assumptions~\ref{assump-obj} through \ref{assump-sgd}, let $\{{\mathbf{x}}^k\}$ be the iterate sequence from Algorithm~\ref{alg:prox-storm}, with the parameters $\{\eta_k\}$ and $\{\beta_k\}$ satisfying the condition: \begin{equation}\label{eq:cond-eta-beta} \frac{1}{4}(1-\eta_k L)-\frac{\eta_k }{5m\eta_{k+1}}(1-\beta_k)^2 > 0,~\text{and}~ \frac{\eta_k}{2}(2-\eta_k L)-\frac{1}{20\eta_k L^2}+\frac{(1-\beta_k)^2(1+\frac{4\eta_k^2 L^2}{m})}{20\eta_{k+1} L^2} \le 0, \,\forall\, k\ge0. \end{equation} Let $\{\bar{\mathbf{g}}^k\}$ be defined in \eqref{eq:vg-vgbar}. Then \begin{equation}\label{eq:rate-grad} \sum_{k=0}^{K-1}\left(\frac{\eta_k}{4}(1-\eta_k L)-\frac{\eta_k^2}{5m\eta_{k+1}}(1-\beta_k)^2\right)\EE[\|\bar{\mathbf{g}}^k\|^2] \le \Phi({\mathbf{x}}^{0}) - \Phi^* + \frac{\sigma^2}{20 m_0\eta_0 L^2} + \sum_{k=0}^{K-1}\frac{\beta_k^2\sigma^2}{10m\eta_{k+1} L^2}. \end{equation} \end{theorem} \begin{proof} From Lemmas~\ref{lem:obj-diff} and \ref{lem:bd-ek}, it follows that \begin{align}\label{eq:diff-merit} &\EE\left[\Phi({\mathbf{x}}^{k+1})+\frac{\|{\mathbf{e}}^{k+1}\|^2}{20\eta_{k+1} L^2}-\Phi({\mathbf{x}}^{k})-\frac{\|{\mathbf{e}}^{k}\|^2}{20\eta_k L^2}\right] \le \EE\left[\frac{\eta_k}{2}(2-\eta_k L)\|{\mathbf{e}}^k\|^2 - \frac{\eta_k}{4}(1-\eta_k L)\|\bar{\mathbf{g}}^k\|^2-\frac{\|{\mathbf{e}}^{k}\|^2}{20\eta_k L^2}\right]\nonumber\\ &\hspace{2cm}+ \frac{1}{20\eta_{k+1} L^2}\EE\left[\frac{2\beta_k^2\sigma^2}{m} + \frac{4(1-\beta_k)^2\eta_k^2 L^2}{m} \|\bar{\mathbf{g}}^{k}\|^2 + (1-\beta_k)^2\left(1+\frac{4\eta_k^2 L^2}{m}\right)\|{\mathbf{e}}^{k}\|^2\right]. \end{align} We have from the condition on $\{\beta_k\}$ that the coefficient of the term $\|{\mathbf{e}}^k\|^2$ on the right-hand side of \eqref{eq:diff-merit} is nonpositive, and thus we obtain from \eqref{eq:diff-merit} that $$\EE\left[\Phi({\mathbf{x}}^{k+1})+\frac{\|{\mathbf{e}}^{k+1}\|^2}{20\eta_{k+1} L^2}-\Phi({\mathbf{x}}^{k})-\frac{\|{\mathbf{e}}^{k}\|^2}{20\eta_k L^2}\right] \le \frac{\beta_k^2\sigma^2}{10m\eta_{k+1} L^2} -\left(\frac{\eta_k}{4}(1-\eta_k L)-\frac{\eta_k^2 }{5m\eta_{k+1}}(1-\beta_k)^2\right)\EE[\|\bar{\mathbf{g}}^k\|^2].$$ Summing up the above inequality from $k=0$ through $K-1$ gives \begin{align*} &~\EE\left[\Phi({\mathbf{x}}^{K})+\frac{\|{\mathbf{e}}^{K}\|^2}{20\eta_{K} L^2}-\Phi({\mathbf{x}}^{0})-\frac{\|{\mathbf{e}}^{0}\|^2}{20\eta_0 L^2}\right] \\ \le &~ \sum_{k=0}^{K-1}\frac{\beta_k^2\sigma^2}{10m\eta_{k+1} L^2}-\sum_{k=0}^{K-1}\left(\frac{\eta_k}{4}(1-\eta_k L)-\frac{\eta_k^2}{5m\eta_{k+1}}(1-\beta_k)^2\right)\EE[\|\bar{\mathbf{g}}^k\|^2], \end{align*} which implies the inequality in \eqref{eq:rate-grad} by using $\EE[\|{\mathbf{e}}^0\|^2]\le \frac{\sigma^2}{m_0}$, $\Phi({\mathbf{x}}^{K})\ge \Phi^*$, and $\|{\mathbf{e}}^{K}\|^2\ge 0$. \end{proof} Below we specify the choice of parameters and establish complexity results of Algorithm~\ref{alg:prox-storm}. \begin{theorem}[convergence rate with varying stepsizes]\label{thm:rate-dynamic} Under Assumptions~\ref{assump-obj} through \ref{assump-sgd}, let $\{{\mathbf{x}}^k\}$ be the iterate sequence from Algorithm~\ref{alg:prox-storm}, with $m_0=m$ and the parameters $\{\eta_k\}$ and $\{\beta_k\}$ set to \begin{equation}\label{eq:dynamic-para} \eta_k = \frac{\eta}{L(k+4)^{\frac{1}{3}}},\quad \beta_k = \frac{1+24\eta_k^2 L^2-\frac{\eta_{k+1}}{\eta_k}}{1+4\eta_k^2 L^2}, \,\forall\, k\ge 0, \end{equation} where $\eta\le \frac{\sqrt[3]{4}}{8}$ is a positive number.
If $\tau$ is selected according to \eqref{eq:select-tau}, then \begin{equation}\label{eq:rate-grad-dynamic} \EE[\|\bar{\mathbf{g}}^\tau\|^2] \le \frac{2 \left( L(\Phi({\mathbf{x}}^0)-\Phi^*)+\frac{\sqrt[3]{4}\sigma^2}{20 m \eta}+\frac{\sigma^2}{10m}\big(1152\eta^3 (\frac{5}{4})^{\frac{1}{3}}(\log(K+3)-\log 3) + \frac{1}{3\sqrt[3]{9}\eta}\big)\right)}{3\big(\frac{7}{32}-\frac{1}{5}(\frac{5}{4})^{\frac{1}{3}}\big)\eta\big((K+4)^{\frac{2}{3}}-4^{\frac{2}{3}}\big)}. \end{equation} \end{theorem} \begin{proof} Since $\eta\le \frac{\sqrt[3]{4}}{8}$, it holds that $\eta_k\le \frac{1}{8L}$. Also, notice that $\frac{\eta_k}{\eta_{k+1}}\le (\frac{5}{4})^{\frac{1}{3}}$, or equivalently $\frac{\eta_{k+1}}{\eta_{k}}\ge (\frac{4}{5})^{\frac{1}{3}}$, for all $k\ge0$. Hence, it is straightforward to verify that $\beta_k\in(0,1)$ and thus $(1-\beta_k)^2 \le 1-\beta_k$ for each $k\ge0$. Now notice that $\frac{5m\eta_{k+1}}{4\eta_k}(1-\eta_k L)\ge \frac{5}{4}(\frac{4}{5})^{\frac{1}{3}}\frac{7}{8}>1 \ge (1-\beta_k)^2$, so the first inequality in \eqref{eq:cond-eta-beta} holds. In addition, to ensure the second inequality in \eqref{eq:cond-eta-beta}, it suffices to have $(1-\beta_k)(1+\frac{4\eta_k^2 L^2}{m})\le \frac{\eta_{k+1}}{\eta_{k}} - 10\eta_k\eta_{k+1}L^2(2-\eta_k L)$. Because $20\eta_k^2 L^2 \ge 10\eta_k\eta_{k+1}L^2(2-\eta_k L)$, this inequality is implied by $(1-\beta_k)(1+\frac{4\eta_k^2 L^2}{m})\le \frac{\eta_{k+1}}{\eta_{k}} -20\eta_k^2 L^2$, which is further implied by the choice of $\beta_k$ in \eqref{eq:dynamic-para}. Therefore, both conditions in \eqref{eq:cond-eta-beta} hold, and thus we have \eqref{eq:rate-grad}. Next we bound the coefficients in \eqref{eq:rate-grad}. First, from $1-\eta_k L \ge\frac{7}{8}$ and $\frac{\eta_k}{\eta_{k+1}}\le (\frac{5}{4})^{\frac{1}{3}}$ for all $k$, we have \begin{equation}\label{eq:bd-sum-eta-2} \sum_{k=0}^{K-1}\left(\frac{\eta_k}{4}(1-\eta_k L)-\frac{\eta_k^2}{5m\eta_{k+1}}(1-\beta_k)^2\right)\ge c\sum_{k=0}^{K-1}\eta_k \ge \frac{c\eta}{L}\int_{0}^{K}(x+4)^{-\frac{1}{3}}dx = \frac{3c\eta}{2 L}\left((K+4)^{\frac{2}{3}}-4^{\frac{2}{3}}\right), \end{equation} where $c=\frac{7}{32}-\frac{1}{5}(\frac{5}{4})^{\frac{1}{3}} > 0$. Second, \begin{align}\label{eq:sum-beta-term1} \sum_{k=0}^{K-1}\frac{\beta_k^2}{\eta_{k+1}} \le \frac{L}{\eta}\sum_{k=0}^{K-1} (k+5)^{\frac{1}{3}}\left(1+24\eta_k^2 L^2-\frac{\eta_{k+1}}{\eta_k}\right)^2 = \frac{L}{\eta}\sum_{k=0}^{K-1} (k+5)^{\frac{1}{3}}\left(1+24\eta_k^2 L^2-\frac{(k+4)^{\frac{1}{3}}}{(k+5)^{\frac{1}{3}}} \right)^2. \end{align} Note that \begin{align}\label{eq:k-eta-t1} \sum_{k=0}^{K-1} (k+5)^{\frac{1}{3}}\eta_k^4= \frac{\eta^4}{L^4}\sum_{k=0}^{K-1} (k+5)^{\frac{1}{3}}(k+4)^{-\frac{4}{3}}\le \frac{\eta^4}{L^4}(\tfrac{5}{4})^{\frac{1}{3}}\sum_{k=0}^{K-1}(k+4)^{-1}\le \frac{\eta^4}{L^4}(\tfrac{5}{4})^{\frac{1}{3}}(\log(K+3)-\log 3). \end{align} Furthermore, by $a^3-b^3 = (a-b)(a^2+ab+b^2)$ for any $a,b\in\RR$, we have $$1-\frac{(k+4)^{\frac{1}{3}}}{(k+5)^{\frac{1}{3}}}=(k+5)^{-\frac{1}{3}}\left((k+5)^{\frac{1}{3}}-(k+4)^{\frac{1}{3}}\right)=\frac{(k+5)^{-\frac{1}{3}}}{(k+5)^{\frac{2}{3}}+(k+5)^{\frac{1}{3}}(k+4)^{\frac{1}{3}}+(k+4)^{\frac{2}{3}}},$$ and thus \begin{align}\label{eq:k-eta-t2} \sum_{k=0}^{K-1} (k+5)^{\frac{1}{3}}\left({\textstyle 1-\frac{(k+4)^{\frac{1}{3}}}{(k+5)^{\frac{1}{3}}} }\right)^2= &~ \sum_{k=0}^{K-1} \frac{(k+5)^{-\frac{1}{3}}}{\left((k+5)^{\frac{2}{3}}+(k+5)^{\frac{1}{3}}(k+4)^{\frac{1}{3}}+(k+4)^{\frac{2}{3}}\right)^2}\cr \le &~ \frac{1}{9}\sum_{k=0}^{K-1} (k+4)^{-\frac{5}{3}} \le \frac{1}{6\sqrt[3]{9}}.
\end{align} Now applying the inequality $(a+b)^2 \le 2a^2+2b^2$ to \eqref{eq:sum-beta-term1} and then using \eqref{eq:k-eta-t1} and \eqref{eq:k-eta-t2}, we obtain \begin{align}\label{eq:sum-beta-term2} \sum_{k=0}^{K-1}\frac{\beta_k^2}{\eta_{k+1}} \le 1152\eta^3 L(\tfrac{5}{4})^{\frac{1}{3}}(\log(K+3)-\log 3) + \frac{L}{3\sqrt[3]{9}\eta}. \end{align} Therefore, plugging \eqref{eq:bd-sum-eta-2} and \eqref{eq:sum-beta-term2} into \eqref{eq:rate-grad} and by the selection of $\tau$ in \eqref{eq:select-tau}, we obtain the desired result. \end{proof} \begin{remark}\label{rm:rate-dynamic} The result in Theorem~\ref{thm:rate-dynamic} does not include the noiseless case, i.e., $\sigma=0$. Nevertheless, in that case we can simply choose $\eta_k=\Theta(\frac{1}{L})$ and $\beta_k=1$ for all $k\ge0$. This way, Algorithm~\ref{alg:prox-storm} reduces to the deterministic mirror-prox method, and we can easily obtain $\min_{0\le k < K}\|\bar{\mathbf{g}}^k\|^2 = O(\frac{1}{K})$ from \eqref{eq:rate-grad}. \end{remark} Based on Theorem~\ref{thm:rate-dynamic}, we next estimate the complexity result of Algorithm~\ref{alg:prox-storm} to produce a stochastic $\varepsilon$-stationary solution. \begin{corollary}[complexity result with varying stepsizes]\label{cor:complexity-dynamic} Let $\varepsilon>0$ be given and suppose $\sigma>0$. Then under the same conditions as in Theorem~\ref{thm:rate-dynamic}, Algorithm~\ref{alg:prox-storm} can produce a stochastic $\varepsilon$-stationary solution of \eqref{eq:stoc-prob} with a total complexity $$T_{\mathrm{total}} = mK=O\left(\max\left\{m\varepsilon^{-3}\big(L(\Phi({\mathbf{x}}^0)-\Phi^*)\big)^{\frac{3}{2}},\ \varepsilon^{-3}(|\log\varepsilon|+|\log\sigma|)^{\frac{3}{2}}\frac{\sigma^3}{\sqrt{m}}\right\}\right).$$ \end{corollary} \begin{proof} By Theorem~\ref{thm:rate-dynamic} with $\eta=\frac{\sqrt[3]{4}}{8}$, we have \begin{equation}\label{eq:bd-vg-tau-dynamic} \EE[\|\bar{\mathbf{g}}^\tau\|^2] = O\left(K^{-\frac{2}{3}}\big(\textstyle L(\Phi({\mathbf{x}}^0)-\Phi^*) + \frac{\sigma^2\log K}{m}\big)\right). \end{equation} Hence, it suffices to let $K=\Theta\left(\max\left\{\varepsilon^{-3}\big(L(\Phi({\mathbf{x}}^0)-\Phi^*)\big)^{\frac{3}{2}},\ \varepsilon^{-3}(|\log\varepsilon|+|\log\sigma|)^{\frac{3}{2}}\frac{\sigma^3}{m^{\frac{3}{2}}}\right\}\right)$ to have $\EE[\|\bar{\mathbf{g}}^\tau\|^2]\le \varepsilon^2$. This completes the proof. \end{proof} \begin{remark}\label{rm:dep-sigma-dynamic} If $m=1$ or $m=O(1)$ independent of $\sigma$, then the total complexity will be $$T_{\mathrm{total}}=O\left(\max\left\{\varepsilon^{-3}\big(L(\Phi({\mathbf{x}}^0)-\Phi^*)\big)^{\frac{3}{2}}, \, \varepsilon^{-3}\sigma^3(|\log\varepsilon|+|\log\sigma|)^{\frac{3}{2}}\right\}\right).$$ If $\sigma \ge1$ is large and can be estimated, we can take $m=\Theta(\sigma^2)$. This way, we obtain the total complexity $O\left(\varepsilon^{-3} \sigma^2\big((|\log\varepsilon|+\log\sigma)^{\frac{3}{2}} +(L(\Phi({\mathbf{x}}^0)-\Phi^*))^{\frac{3}{2}}\big)\right)$. This result is near-optimal in the sense that its dependence on $\varepsilon$ carries an additional logarithmic term $|\log\varepsilon|^{\frac{3}{2}}$ compared to the lower bound result in \cite{arjevani2019lower}. In the remaining part of this section, we show that with constant stepsizes, Algorithm~\ref{alg:prox-storm} can achieve the optimal complexity result $O(\varepsilon^{-3})$.
\end{remark} \subsection{Results with Constant Stepsize} In this subsection, we show convergence results of Algorithm~\ref{alg:prox-storm} with constant stepsizes, i.e., $\eta_k = \eta_0,\forall \, k\ge1$. In order to track the dependence on the quantities $L$, $\Phi({\mathbf{x}}^0)-\Phi^*$, and $\sigma^2$, we give two settings that yield two different results, but each result has the same dependence on the target accuracy $\varepsilon$. The first result is obtained from Theorem~\ref{thm:generic-vary} by taking constant stepsizes. \begin{theorem}[convergence rate I with constant stepsizes]\label{thm:rate-const} Under Assumptions~\ref{assump-obj} through \ref{assump-sgd}, let $\{{\mathbf{x}}^k\}$ be the iterate sequence from Algorithm~\ref{alg:prox-storm}, with the parameters $\{\eta_k\}$ and $\{\beta_k\}$ set to \begin{equation}\label{eq:const-para} \eta_k = \frac{\eta}{L\sqrt[3]{K}},\quad \beta_k = \beta=\frac{4\eta^2/m+10\eta^2(2-\eta /K^{\frac{1}{3}})}{K^{\frac{2}{3}}+4\eta^2/m}, \,\forall\, k\ge 0, \end{equation} where $\eta < \frac{\sqrt[3]{K}}{5}$ is a positive number. If $\tau$ is selected from $\{0,1,\ldots,K-1\}$ uniformly at random, then \begin{equation}\label{eq:rate-grad-const} \EE[\|\bar{\mathbf{g}}^\tau\|^2] \le \frac{1}{ K^{\frac{2}{3}}\left(\textstyle\frac{1}{4}\big(1- \frac{\eta}{\sqrt[3]{K}}\big) - \frac{1}{5}\right)} \left(\frac{L\big(\Phi({\mathbf{x}}^{0}) - \Phi^*\big)}{\eta} + \frac{\sigma^2 \sqrt[3]{K}}{20 m_0\eta^2}+\frac{24^2\sigma^2\eta^2}{10m}\right). \end{equation} \end{theorem} \begin{proof} First note that $\frac{\eta}{\sqrt[3]{K}} < \frac{1}{5}$ and thus $\beta \in (0,1)$. It is then easy to verify, using $(1-\beta)^2 < 1-\beta$, that the conditions in \eqref{eq:cond-eta-beta} are satisfied. Hence, the result in \eqref{eq:rate-grad} holds. Second, by the choice of $\eta_k$ and $\beta_k$, we have \begin{equation}\label{eq:sum-coeff1} \begin{aligned} &~\sum_{k=0}^{K-1}\left(\textstyle\frac{\eta_k}{4}(1-\eta_k L)-\frac{\eta_k^2}{5m\eta_{k+1}}(1-\beta_k)^2\right)\cr \ge&~ \sum_{k=0}^{K-1} \left(\textstyle\frac{\eta}{4L\sqrt[3]{K}}\big(1- \frac{\eta }{\sqrt[3]{K}}\big) - \frac{\eta}{5L\sqrt[3]{K}}\right) = \textstyle \frac{\eta}{L} K^{\frac{2}{3}} \left(\textstyle\frac{1}{4}\big(1- \frac{\eta }{\sqrt[3]{K}}\big) - \frac{1}{5}\right), \end{aligned} \end{equation} and \begin{equation}\label{eq:sum-coeff2} \begin{aligned} \sum_{k=0}^{K-1}\frac{\beta_k^2\sigma^2}{10m\eta_{k+1} L^2} \le \sum_{k=0}^{K-1} \frac{\sigma^2\sqrt[3]{K}}{10m\eta L}\left(\frac{4\eta^2+20\eta^2}{K^{\frac{2}{3}}}\right)^2 = \frac{24^2\sigma^2\eta^3}{10m L}. \end{aligned} \end{equation} Plugging \eqref{eq:sum-coeff1} and \eqref{eq:sum-coeff2} into \eqref{eq:rate-grad}, we obtain the desired result by the selection of $\tau$ (with constant parameters, the distribution in \eqref{eq:select-tau} reduces to the uniform one). \end{proof} From \eqref{eq:rate-grad-const}, we see that in order to have the $O(K^{-\frac{2}{3}})$ convergence rate, we need to set $m_0=\Theta(\sqrt[3]{K})$. Next we set $m_0$ in this way and estimate the complexity result of Algorithm~\ref{alg:prox-storm} with constant stepsizes. \begin{corollary}[complexity result I with constant stepsizes]\label{cor:complexity-const} Let $\varepsilon>0$ be given. Under Assumptions~\ref{assump-obj} through \ref{assump-sgd}, let $\{{\mathbf{x}}^k\}$ be the iterate sequence from Algorithm~\ref{alg:prox-storm} with $m_0\ge c_0 \sqrt[3]{K}$ and the parameters $\{\eta_k\}$ and $\{\beta_k\}$ set to those in \eqref{eq:const-para} where $\eta \le \frac{\sqrt[3]{K}}{10}$.
Let $\tau$ be selected from $\{0,1,\ldots,K-1\}$ uniformly at random. Then ${\mathbf{x}}^\tau$ is a stochastic $\varepsilon$-stationary solution of \eqref{eq:stoc-prob} if \begin{equation}\label{eq:K-const-1} K = \left\lceil\frac{40^{\frac{3}{2}}\left(\frac{L\big(\Phi({\mathbf{x}}^{0}) - \Phi^*\big)}{\eta} + \frac{\sigma^2}{20 c_0\eta^2}+\frac{24^2\sigma^2\eta^2}{10m}\right)^{\frac{3}{2}}}{\varepsilon^3}\right\rceil. \end{equation} \end{corollary} \begin{proof} When $\eta \le \frac{\sqrt[3]{K}}{10}$, it holds that $\frac{1}{4}\big(1- \frac{\eta}{\sqrt[3]{K}}\big) - \frac{1}{5}\ge \frac{1}{40}$. Hence, \eqref{eq:rate-grad-const} with $m_0\ge c_0 \sqrt[3]{K}$ implies $$\EE[\|\bar{\mathbf{g}}^\tau\|^2] \le \frac{40}{K^{\frac{2}{3}}} \left( \frac{L\big(\Phi({\mathbf{x}}^{0}) - \Phi^*\big)}{\eta} + \frac{\sigma^2}{20 c_0\eta^2}+\frac{24^2\sigma^2\eta^2}{10m}\right),$$ which together with the condition on $K$ in \eqref{eq:K-const-1} gives $\EE[\|\bar{\mathbf{g}}^\tau\|^2] \le\varepsilon^2$. This completes the proof. \end{proof} \begin{remark}\label{rm:dep-sigma-const} Suppose that $\sigma\ge1$ and can be estimated. Also, assume $L=\Omega(1)$ and $\Phi({\mathbf{x}}^{0}) - \Phi^*=\Omega(1)$. In this case, we let $\eta=\Theta\left(\sigma^{-\frac{2}{3}}\big( L(\Phi({\mathbf{x}}^{0}) - \Phi^*)\big)^{\frac{1}{3}}\right)$, $c_0=\Theta(\sigma^{\frac{8}{3}})$, and $m=O(1)$ independent of $\sigma$. Then from \eqref{eq:K-const-1}, we have $K=O\left(\varepsilon^{-3}\sigma L(\Phi({\mathbf{x}}^{0}) - \Phi^*)\right)$. With this choice, the total number of sample gradients will be \begin{equation}\label{eq:dep-sigma-const} T_{\mathrm{total}}=m_0 + m(K-1) = O\left(\varepsilon^{-1}\sigma^3\big(L(\Phi({\mathbf{x}}^{0}) - \Phi^*)\big)^{\frac{1}{3}}+\varepsilon^{-3}\sigma L(\Phi({\mathbf{x}}^{0}) - \Phi^*)\right). \end{equation} The dependence on the pair $(\varepsilon, \sigma)$ matches the result in \cite{tran2021hybrid}. \end{remark} The complexity result given in \eqref{eq:dep-sigma-const} has a low dependence on $(\varepsilon, \sigma, L(\Phi({\mathbf{x}}^{0}) - \Phi^*))$ in the sense that $\varepsilon^{-3}$ multiplies only $\sigma L(\Phi({\mathbf{x}}^{0}) - \Phi^*)$ and not a higher-order term. However, the drawback is that the initial batch size $m_0$ must be on the order of $\varepsilon^{-1}$ to obtain the complexity result $O(\varepsilon^{-3})$. Our second result with constant stepsizes relaxes this requirement. We utilize the momentum accumulation in \eqref{eq:step32} and give a novel convergence analysis by introducing the following quantity \begin{equation}\label{eq:gamma} \Gamma_k = \left\{ \begin{array}{ll} \prod_{i=0}^{k-1}(1-\beta_i)^2, & \text{ if } k\ge 1, \\[0.2cm] 1, & \text{ if } k=0. \end{array} \right. \end{equation} We first give a generic result below under certain conditions on the parameters. Then, we specify the choice of parameters to satisfy these conditions. \begin{theorem}\label{thm:main} Under Assumptions~\ref{assump-obj} through \ref{assump-sgd}, let $\{{\mathbf{x}}^k\}$ be the iterate sequence from Algorithm~\ref{alg:prox-storm}.
Suppose there are constants $A$ and $B$ such that the parameters $\{\eta_k\}$ and $\{\beta_k\}$ satisfy the conditions: \begin{equation}\label{eq:cond-eta-beta1} 2\eta_k L+ \frac{4 L^2}{m} \frac{\eta_{k}}{\Gamma_{k} } \sum_{j=k+1}^{K-1}\eta_j\Gamma_{j} \le 1, \,\forall\, k=0,\ldots,K-1, \end{equation} \begin{equation}\label{eq:cond-eta-beta2} \sum_{k=1}^{K-1}\eta_k\Gamma_{k}\le A,~\text{and}~\sum_{k=1}^{K-1}\eta_k\Gamma_{k}\sum_{j=0}^{k-1}\frac{\beta_j^2}{\Gamma_{j+1} }\le B, \end{equation} where $K$ is the maximum number of iterations in Algorithm~\ref{alg:prox-storm}. Let $\{\bar{\mathbf{g}}^k\}$ be defined in \eqref{eq:vg-vgbar}. Then \begin{equation}\label{eq:mainthm} \sum_{k=0}^{K-1} \eta_k \EE\big[\|\bar{\mathbf{g}}^{k}\|^2\big] \le 12\big[\Phi({\mathbf{x}}^0)-\Phi^*\big] +4(2A+3)\frac{\sigma^2}{ m_0}+ 16B\frac{\sigma^2}{m} . \end{equation} \end{theorem} \begin{proof} We begin by taking the total expectation and telescoping the inequality in \eqref{eq:chg-obj-0} over $k=0,\ldots,K-1$ to obtain \begin{align*} \EE\big[\Phi({\mathbf{x}}^{K})\big]-\Phi({\mathbf{x}}^0) \le&~ \sum_{k=0}^{K-1}\frac{\eta_k}{2} \EE\big[\|{\mathbf{e}}^k\|^2\big] -\sum_{k=0}^{K-1} \frac{\eta_k}{2}(1-\eta_k L)\EE\big[\|{\mathbf{g}}^k\|^2\big]\\ \le&~ \frac{\sigma^2}{m_0}+ \sum_{k=1}^{K-1}\frac{\eta_k}{2} \EE\big[\|{\mathbf{e}}^k\|^2\big] -\sum_{k=0}^{K-1} \frac{\eta_k}{2}(1-\eta_k L)\EE\big[\|{\mathbf{g}}^k\|^2\big], \end{align*} where we have used $\EE\big[\|{\mathbf{e}}^{0}\|^2\big]\le \frac{\sigma^2}{m_0}$ by Assumption~\ref{assump-sgd}. Since $\Phi({\mathbf{x}}^{K}) \ge \Phi^*$ from Assumption~\ref{assump-obj}, the above inequality implies \begin{equation}\label{eq:main1} \sum_{k=0}^{K-1} \frac{\eta_k}{2}(1-\eta_k L)\EE\big[\|{\mathbf{g}}^k\|^2\big] \le \Phi({\mathbf{x}}^0)-\Phi^* +\frac{\sigma^2}{m_0}+ \sum_{k=1}^{K-1}\frac{\eta_k}{2} \EE\big[\|{\mathbf{e}}^k\|^2\big]. \end{equation} In addition, we divide both sides of \eqref{eq:step32} by $\Gamma_{k+1}$ and obtain from the definition of $\Gamma_{k+1}$ in \eqref{eq:gamma} that $$\frac{1}{\Gamma_{k+1} }\EE\big[\|{\mathbf{e}}^{k+1}\|^2\big] \le \frac{1}{\Gamma_{k} }\EE\big[\|{\mathbf{e}}^{k}\|^2\big] +\frac{1}{\Gamma_{k+1} }\frac{2\beta_k^2\sigma^2}{m} + \frac{1}{\Gamma_{k} }\frac{2\eta_{k}^2 L^2}{m} \EE\big[\|{\mathbf{g}}^{k}\|^2\big],\, \forall\, k\ge 0.$$ Re-indexing by $j$ and telescoping the above inequality over $j=0,\ldots,k-1$, we obtain $$\frac{1}{\Gamma_{k} }\EE\big[\|{\mathbf{e}}^{k}\|^2\big] \le \EE\big[\|{\mathbf{e}}^{0}\|^2\big] +\sum_{j=0}^{k-1}\frac{1}{\Gamma_{j+1} }\frac{2\beta_j^2\sigma^2}{m} + \sum_{j=0}^{k-1}\frac{1}{\Gamma_{j} }\frac{2\eta_{j}^2 L^2}{m} \EE\big[\|{\mathbf{g}}^{j}\|^2\big],\, \forall\, k\ge 1.$$ Multiplying both sides of the above inequality by $\Gamma_{k}$ and rearranging gives $$ \EE\big[\|{\mathbf{e}}^{k}\|^2\big] \le \Gamma_{k}\bigg(\frac{\sigma^2}{m_0} +\frac{2\sigma^2}{m}\sum_{j=0}^{k-1}\frac{\beta_j^2}{\Gamma_{j+1} } \bigg)+ \frac{2 L^2}{m}\sum_{j=0}^{k-1}\frac{\Gamma_{k}}{\Gamma_{j} } \eta_{j}^2 \EE\big[\|{\mathbf{g}}^{j}\|^2\big],\, \forall\, k\ge 1,$$ where we have used $\EE\big[\|{\mathbf{e}}^{0}\|^2\big]\le \frac{\sigma^2}{m_0}$ again.
Now multiplying the above inequality by $\eta_k$ and summing over $k=1,\ldots,K-1$ gives \begin{align}\label{eq:main2} \sum_{k=1}^{K-1}\eta_k\EE\big[\|{\mathbf{e}}^{k}\|^2\big] \le&~ \sigma^2\sum_{k=1}^{K-1}\eta_k\Gamma_{k}\bigg(\frac{1}{m_0} +\frac{2}{m} \sum_{j=0}^{k-1}\frac{\beta_j^2}{\Gamma_{j+1} } \bigg)+\frac{2 L^2}{m}\sum_{k=1}^{K-1} \sum_{j=0}^{k-1}\frac{\eta_k\Gamma_{k}}{\Gamma_{j} } \eta_{j}^2 \EE\big[\|{\mathbf{g}}^{j}\|^2\big] \cr =&~ \sigma^2\sum_{k=1}^{K-1}\eta_k\Gamma_{k}\bigg(\frac{1}{m_0} +\frac{2}{m} \sum_{j=0}^{k-1}\frac{\beta_j^2}{\Gamma_{j+1} } \bigg)+\frac{2 L^2}{m}\sum_{j=0}^{K-2}\frac{\eta_{j}^2}{\Gamma_{j} }\bigg(\sum_{k=j+1}^{K-1}\eta_k\Gamma_{k}\bigg) \EE\big[\|{\mathbf{g}}^{j}\|^2\big] \cr =&~ \sigma^2\sum_{k=1}^{K-1}\eta_k\Gamma_{k}\bigg(\frac{1}{m_0} +\frac{2}{m} \sum_{j=0}^{k-1}\frac{\beta_j^2}{\Gamma_{j+1} } \bigg)+\frac{2 L^2}{m}\sum_{k=0}^{K-1}\frac{\eta_{k}^2}{\Gamma_{k} }\bigg(\sum_{j=k+1}^{K-1}\eta_j\Gamma_{j}\bigg) \EE\big[\|{\mathbf{g}}^{k}\|^2\big], \end{align} where the first equality follows by swapping the order of summation, and the second equality is obtained by swapping indices and noticing that the coefficient of $\EE\big[\|{\mathbf{g}}^{K-1}\|^2\big]$ vanishes. Substituting \eqref{eq:main2} into \eqref{eq:main1} and rearranging terms, we have $$ \sum_{k=0}^{K-1} \frac{\eta_k}{2}\bigg(1-\eta_k L- \frac{2 L^2}{m} \frac{\eta_{k}}{\Gamma_{k} } \sum_{j=k+1}^{K-1}\eta_j\Gamma_{j}\bigg)\EE\big[\|{\mathbf{g}}^k\|^2\big] \le \Phi({\mathbf{x}}^0)-\Phi^* +\frac{\sigma^2}{m_0}+ \frac{\sigma^2}{2}\sum_{k=1}^{K-1}\eta_k\Gamma_{k}\bigg(\frac{1}{m_0} +\frac{2}{m} \sum_{j=0}^{k-1}\frac{\beta_j^2}{\Gamma_{j+1} } \bigg), $$ which together with the conditions in \eqref{eq:cond-eta-beta1} and \eqref{eq:cond-eta-beta2} gives the bound for ${\mathbf{g}}^k$: \begin{equation}\label{eq:main3} \sum_{k=0}^{K-1} \eta_k \EE\big[\|{\mathbf{g}}^k\|^2\big] \le 4\big[\Phi({\mathbf{x}}^0)-\Phi^*\big] + 2(A+2)\frac{\sigma^2}{ m_0}+ 4B\frac{\sigma^2}{m} . \end{equation} Using \eqref{eq:cond-eta-beta1} again and substituting \eqref{eq:main3} into \eqref{eq:main2}, we obtain the bound for ${\mathbf{e}}^k$: \begin{equation}\label{eq:main4} \sum_{k=0}^{K-1} \eta_k \EE\big[\|{\mathbf{e}}^k\|^2\big] \le A\frac{\sigma^2}{ m_0}+ 2B\frac{\sigma^2}{m} +\sum_{k=0}^{K-1}\frac{\eta_{k}}{2} \EE\big[\|{\mathbf{g}}^{k}\|^2\big] \le 2\big[\Phi({\mathbf{x}}^0)-\Phi^*\big] +2(A+1)\frac{\sigma^2}{ m_0}+ 4B\frac{\sigma^2}{m} . \end{equation} Finally, we have from \eqref{eq:tri-g} that $\|\bar{\mathbf{g}}^{k}\|^2 \le 2\|{\mathbf{g}}^{k}\|^2 + 2\|{\mathbf{e}}^{k}\|^2.$ Summing this inequality over $k=0,\ldots,K-1$ and substituting \eqref{eq:main3} and \eqref{eq:main4} into the summation, we obtain the result in \eqref{eq:mainthm}. \end{proof} Below we specify the choice of parameters and establish complexity results of Algorithm~\ref{alg:prox-storm}. The following lemma will be used to verify the conditions in \eqref{eq:cond-eta-beta1} and \eqref{eq:cond-eta-beta2}. \begin{lemma}\label{lem:keylem} Let \begin{equation}\label{eq:beta-def} \beta_{k}=3\big[(k+3)^{1/3}-(k+2)^{1/3}\big],\quad k\ge 0. \end{equation} Then we have \begin{equation}\label{eq:keylem1} \sum_{j=k+1}^{K-1}\frac{\Gamma_{j}}{\Gamma_{k}} \le \frac{1}{2}(k+2)^{2/3}+\frac{1}{6}(k+2)^{1/3}+\frac{1}{36}. \end{equation} \end{lemma} \begin{proof} By the fact $a^3-b^3 = (a-b)(a^2+ab+b^2)$, we have \begin{equation}\label{eq:beta-bound} \beta_{k}=3\big[(k+3)^{1/3}-(k+2)^{1/3}\big]= \frac{3}{(k+3)^{2/3}+(k+3)^{1/3}(k+2)^{1/3}+(k+2)^{2/3}}.
\end{equation} Hence, $\beta_{k}\in \big[(k+3)^{-2/3},(k+2)^{-2/3}\big]$ for all $k\ge 0$, and it is a decreasing sequence. In addition, by the definition of $\Gamma_k$ and $\beta_k$, it holds for all $j>k\ge 0$ that \begin{equation}\label{eq:keystring} \frac{\Gamma_{j}}{\Gamma_{k}} = \frac{\prod_{l=0}^{j-1}(1-\beta_l)^2}{\prod_{l=0}^{k-1}(1-\beta_l)^2} = \prod_{l=k}^{j-1}(1-\beta_l)^2 \le e^{-2\sum_{l=k}^{j-1}\beta_l} = e^{-6\big[(j+2)^{1/3}-(k+2)^{1/3}\big]}, \end{equation} where the inequality holds because $0\le 1+x\le e^x, \forall\, x\ge -1$. Therefore we have that for any $k\ge 0,$ \begin{equation}\label{eq:key-beta-ineq1} \sum_{j=k+1}^{K-1}\frac{\Gamma_{j}}{\Gamma_{k}} \le \sum_{j=k+1}^{K-1} e^{-6\big[(j+2)^{1/3}-(k+2)^{1/3}\big]} = e^{6(k+2)^{1/3}}\sum_{j=k+1}^{K-1} e^{-6 (j+2)^{1/3} }. \end{equation} Since $e^{-6x^{1/3}}$ is a decreasing function and has an anti-derivative $-\frac{1}{36} e^{-6x^{1/3}} (18x^{2/3}+6x^{1/3}+1)$, we have \begin{equation}\label{eq:key-beta-ineq2} \sum_{j=k+1}^{K-1} e^{-6 (j+2)^{1/3} } \le \int_{k+2}^{K+1} e^{-6x^{1/3}}dx \le \frac{1}{36} e^{-6(k+2)^{1/3}} (18(k+2)^{2/3}+6(k+2)^{1/3}+1) . \end{equation} Substituting \eqref{eq:key-beta-ineq2} into \eqref{eq:key-beta-ineq1} gives \eqref{eq:keylem1} and completes the proof. \end{proof} Now we are ready to show the second convergence rate result with constant stepsizes. \begin{theorem}[convergence rate II with constant stepsizes]\label{thm:rate-const2} Under Assumptions~\ref{assump-obj} through \ref{assump-sgd}, let $\{{\mathbf{x}}^k\}$ be the iterate sequence from Algorithm~\ref{alg:prox-storm} with $ \eta_k =\frac{\eta}{L \sqrt[3]{K}}$ and $\{\beta_k\}$ set by \eqref{eq:beta-def}, where $\eta \le \frac{1}{4}$ is a positive number. If $\tau$ is selected from $\{0,1,\ldots,K-1\}$ uniformly at random, then \begin{equation}\label{eq:rate-grad-const2} \EE[\|\bar{\mathbf{g}}^\tau\|^2] \le \frac{1}{K^{\frac{2}{3}}} \left( \frac{12 L}{\eta}\big[\Phi({\mathbf{x}}^0)-\Phi^*\big] + \Big( 2^{-1/3}+\frac{1}{6}2^{1/3}+\frac{1}{36}\Big) \frac{8}{\sqrt[3]{K}} \frac{\sigma^2}{ m_0} + \frac{12\sigma^2 L}{ \eta m_0} + \frac{ 32}{(1-2^{-2/3})^2} \frac{\sigma^2}{m}\right). \end{equation} \end{theorem} \begin{proof} We show the desired result by verifying the conditions in Theorem~\ref{thm:main}. First, with $ \eta_k =\frac{\eta}{L \sqrt[3]{K}}$, the condition in \eqref{eq:cond-eta-beta1} becomes $$\frac{2 \eta }{\sqrt[3]{K}} + \frac{4 }{m} \frac{\eta^2}{K^{2/3}} \sum_{j=k+1}^{K-1}\frac{\Gamma_{j}}{\Gamma_{k} } \le 1, \quad k=0,\ldots,K-1.$$ Notice that when $k=K-1$ the summation above is empty. Hence by \eqref{eq:keylem1}, it suffices to require $$\frac{2\eta }{\sqrt[3]{K}} + \frac{4 }{m} \frac{\eta^2 }{K^{2/3}} \bigg(\frac{1}{2}K^{2/3}+\frac{1}{6}K^{1/3}+\frac{1}{36}\bigg) \le 1, $$ which is guaranteed when $\eta\le\frac{1}{4}$ and $K\ge1$. Therefore, the condition in \eqref{eq:cond-eta-beta1} holds.
Secondly, by letting $k=0$ in \eqref{eq:keylem1} and recalling $\Gamma_0=1$, we have $\sum_{k=1}^{K-1}\eta_k\Gamma_{k}\le \big(\frac{1}{2}2^{2/3}+\frac{1}{6}2^{1/3}+\frac{1}{36}\big) \frac{ \eta}{L\sqrt[3]{K}}.$ Hence, the first condition in \eqref{eq:cond-eta-beta2} holds with $A=\big(2^{-1/3}+\frac{1}{6}2^{1/3}+\frac{1}{36}\big) \frac{ \eta}{L\sqrt[3]{K}}.$ Finally, notice \begin{align}\label{eq:last1} \sum_{k=1}^{K-1}\eta_k\Gamma_{k}\sum_{j=0}^{k-1}\frac{\beta_j^2}{\Gamma_{j+1} } =&~\sum_{j=0}^{K-2}\frac{\beta_j^2}{(1-\beta_j)^2 } \sum_{k=j+1}^{K-1}\eta_k\frac{\Gamma_{k}}{\Gamma_{j}} \cr \le&~ \frac{ \eta}{L\sqrt[3]{K}} \sum_{j=0}^{K-2}\frac{\beta_j^2}{(1-\beta_0)^2 } \bigg(\frac{1}{2}(j+2)^{2/3}+\frac{1}{6}(j+2)^{1/3}+\frac{1}{36}\bigg) \cr \le&~ \frac{ \eta}{(1-\beta_0)^2 L \sqrt[3]{K}} \sum_{j=0}^{K-2} \bigg(\frac{1}{2}(j+2)^{-2/3}+\frac{1}{6}(j+2)^{-1}+\frac{1}{36} (j+2)^{-4/3}\bigg) \cr \le&~\frac{ \eta}{(1-\beta_0)^2 L \sqrt[3]{K}} \bigg(\frac{3}{2}(K^{1/3}-1)+\frac{1}{6}\log K+\frac{1}{12} (1-K^{-1/3})\bigg) \cr \le&~ \frac{ 2\eta}{(1-2^{-2/3})^2 L}, \end{align} where the first inequality follows from \eqref{eq:keylem1}, the decreasing monotonicity of $\beta_k$, and the setting of $\eta_k$; the second inequality holds by $\beta_j \le (j+2)^{-2/3}$; and the last inequality is obtained from $\beta_0\le 2^{-2/3}$ and the fact that $3x^{1/3} > \log x, \forall\, x>0$. Thus the second condition in \eqref{eq:cond-eta-beta2} holds with $B = \frac{ 2\eta}{(1-2^{-2/3})^2 L}$. Therefore, \eqref{eq:rate-grad-const2} follows from \eqref{eq:mainthm} and the uniformly random choice of $\tau$. \end{proof} From Theorem~\ref{thm:rate-const2}, we can immediately obtain the next complexity result of Algorithm~\ref{alg:prox-storm} with the constant stepsize. \begin{corollary}[complexity result II with constant stepsizes]\label{cor:complexity-const} Let $\varepsilon>0$ be given. Under Assumptions~\ref{assump-obj} through \ref{assump-sgd}, let $\{{\mathbf{x}}^k\}$ be the iterate sequence from Algorithm~\ref{alg:prox-storm} with $ \eta_k =\frac{\eta}{L \sqrt[3]{K}}$ and $\{\beta_k\}$ set by \eqref{eq:beta-def}, where $\eta \le \frac{1}{4}$ is a positive number. Let $\tau$ be selected from $\{0,1,\ldots,K-1\}$ uniformly at random. Then ${\mathbf{x}}^\tau$ is a stochastic $\varepsilon$-stationary solution of \eqref{eq:stoc-prob} if \begin{equation}\label{eq:K-const-2} K =\left\lceil\frac{ \left(\frac{12 L}{\eta}\big[\Phi({\mathbf{x}}^0)-\Phi^*\big] + \big( 2^{-1/3}+\frac{1}{6}2^{1/3}+\frac{1}{36}\big) \frac{8\sigma^2}{ m_0} + \frac{12\sigma^2 L}{ \eta m_0} + \frac{ 32}{(1-2^{-2/3})^2} \frac{\sigma^2}{m} \right)^{3/2} }{\varepsilon^3}\right\rceil. \end{equation} \end{corollary} \begin{remark}\label{rm:dep-sigma-const-2} Let $m_0=O(1)$ and $m=O(1)$. Then \eqref{eq:K-const-2} gives $K= O(\varepsilon^{-3})$, ignoring the dependence on other quantities, and the total sample complexity is $m_0+m(K-1) = O(\varepsilon^{-3})$, which matches the lower bound in \cite{arjevani2019lower}. However, as we need $\eta \le \frac{1}{4}$, the dependence on $L\big(\Phi({\mathbf{x}}^0)-\Phi^*\big)$ will not be as good as in \eqref{eq:dep-sigma-const}. \end{remark} \section{Numerical Experiments}\label{sec:numerical} In this section, we test Algorithm~\ref{alg:prox-storm}, named PStorm, on solving three problems. The first problem is the nonnegative principal component analysis (NPCA) problem \cite{reddi2016proximal}, and the other two involve training neural networks.
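Before presenting the experiments, we remark that the momentum schedule \eqref{eq:beta-def} and the bound \eqref{eq:keylem1} underlying Theorem~\ref{thm:rate-const2} are straightforward to check numerically. The following minimal NumPy sketch (ours, purely for illustration; the horizon $K$ and the tolerance are arbitrary) computes $\beta_k$ and $\Gamma_k$ and verifies \eqref{eq:keylem1}:
\begin{verbatim}
import numpy as np

K = 200                                             # illustrative horizon
k = np.arange(K)
beta = 3.0 * ((k + 3) ** (1/3) - (k + 2) ** (1/3))  # schedule (eq:beta-def)

# Gamma_0 = 1 and Gamma_k = prod_{i<k} (1 - beta_i)^2   (eq:gamma)
Gamma = np.ones(K)
Gamma[1:] = np.cumprod((1.0 - beta[:-1]) ** 2)

# check (eq:keylem1): sum_{j=k+1}^{K-1} Gamma_j / Gamma_k <= bound(k)
for i in range(K - 1):
    lhs = Gamma[i + 1:].sum() / Gamma[i]
    rhs = 0.5 * (i + 2) ** (2/3) + (i + 2) ** (1/3) / 6 + 1/36
    assert lhs <= rhs + 1e-12, (i, lhs, rhs)
print("bound (eq:keylem1) verified for all k < K =", K)
\end{verbatim}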
We compare PStorm to the vanilla proximal SGD, Spiderboost \cite{wang2019spiderboost}, and Hybrid-SGD \cite{tran2021hybrid}. Spiderboost and Hybrid-SGD both achieve optimal complexity results, and the vanilla proximal SGD is used as a baseline for the comparison. For NPCA, all methods were implemented in MATLAB 2021a on a quad-core iMac with 40 GB memory, and for training neural networks, all methods were implemented using PyTorch on a Dell workstation with 32 CPU cores, 2 GPUs, and 64 GB memory. \subsection{Nonnegative Principal Component Analysis (NPCA)}\label{sec:npca} In this subsection, we compare the four methods on solving the NPCA problem: \begin{equation}\label{eq:npca} \Max_{{\mathbf{x}}\in\RR^n} \frac{1}{2}\EE_{\mathbf{z}}[{\mathbf{x}}^\top({\mathbf{z}}\vz^\top){\mathbf{x}}], \mbox{ s.t. } \|{\mathbf{x}}\|\le 1, {\mathbf{x}}\ge\vzero, \end{equation} where ${\mathbf{z}}\in\RR^n$ represents a random data point following a certain distribution, and $\EE_{\mathbf{z}}$ denotes expectation with respect to ${\mathbf{z}}$. The problem \eqref{eq:npca} can be put into the form of \eqref{eq:stoc-prob} by negating the objective and adding an indicator function of the constraint set. Two datasets were used in this test. The first one takes ${\mathbf{z}}=\frac{{\mathbf{w}}}{\|{\mathbf{w}}\|}$ where ${\mathbf{w}}\sim{\mathcal{N}}(\vone, {\mathbf{I}})$, and we solved a stochastic problem; for the second one, we used the normalized training and testing datasets of \verb|realsim| from LIBSVM \cite{chang2011libsvm}, and we solved a deterministic finite-sum problem. For both datasets, each sample function in the objective of \eqref{eq:npca} is 1-smooth, and thus we used the Lipschitz constant $L=1$ for all methods. \noindent\textbf{Random dataset:}~~For the randomly generated dataset, we set the dimension $n=100$ and the minibatch size to $m=10$ for PStorm, the vanilla proximal SGD, and Hybrid-SGD. For Spiderboost, we set $\varepsilon=5\times10^{-3}$, and for each iteration $k$, it accessed $q=\varepsilon^{-1}$ data samples if $\text{mod}(k,q)\neq 0$ and $\varepsilon^{-2}$ data samples otherwise. Each method could access at most $10^6$ data samples. The stepsize of PStorm was set according to \eqref{eq:dynamic-para} with $\eta$ tuned from $\{0.1,0.2,0.5,1\}$, of which $\eta=0.1$ turned out to be the best. The stepsize of the vanilla proximal SGD was set to $\frac{\eta}{\sqrt{k+1}}$ for each iteration $k\ge0$ with $\eta$ tuned from $\{0.1,0.2,0.5,1\}$, of which $\eta=0.5$ turned out to be the best. The stepsize of Spiderboost was set to $\eta=0.5$. Hybrid-SGD has a few more parameters to tune. As suggested by \cite[Theorem 4]{tran2021hybrid} and also its numerical experiments, we set $\gamma_k$, $\beta_k$, $\eta_k$, and the initial batch size to \begin{equation}\label{eq:para-rand-set} \gamma_k\equiv\gamma = \frac{3c_0 m^{\frac{3}{4}}}{\sqrt{13}m_0 (K+1)^{\frac{1}{4}}},\ \beta_k\equiv\beta=1-\frac{\sqrt{m}}{\sqrt{m_0 K}}, \ \eta_k\equiv\eta = \frac{2}{L(3+\gamma)},\ m_0=\frac{c_1^2}{\lceil m (K+1)^{\frac{1}{3}}\rceil}, \end{equation} where $K$ is the maximum number of iterations. We tuned $c_0$ to 10 and $c_1$ to 5. To evaluate the performance of the tested methods, we randomly generated $10^7$ data samples following the same distribution described above, and at the iterates of the methods we computed the violation of stationarity for the sample-approximation problem.
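As noted above, after negating the objective, the nonsmooth term of \eqref{eq:npca} is the indicator function of the set $\{{\mathbf{x}}: \|{\mathbf{x}}\|\le 1, {\mathbf{x}}\ge\vzero\}$, so each proximal step reduces to a Euclidean projection onto this set. Since the nonnegative orthant is a cone and the ball is centred at the origin, the projection can be computed by first clipping the negative entries and then rescaling. A minimal NumPy sketch of one projected stochastic-gradient step (the function names are ours):
\begin{verbatim}
import numpy as np

def project_npca(x):
    """Euclidean projection onto {x : x >= 0, ||x|| <= 1}.

    Clipping to the orthant and then rescaling is exact here because
    the orthant is a cone and the ball is centred at the origin."""
    y = np.maximum(x, 0.0)
    nrm = np.linalg.norm(y)
    return y / nrm if nrm > 1.0 else y

def npca_step(x, Z, eta):
    # gradient of the negated sample objective -0.5 * x'(Z'Z/m)x
    g = -(Z.T @ (Z @ x)) / Z.shape[0]
    return project_npca(x - eta * g)
\end{verbatim}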
Since the compared methods use different learning rates, to make a fair comparison, we measured the \emph{violation of stationarity} at ${\mathbf{x}}$ by $\|P({\mathbf{x}}, \nabla F, 1)\|$, where $P$ is the proximal mapping defined in Definition~\ref{def:prox-map}, and $F$ is the sample-approximated objective. Also, to obtain the ``optimal'' objective value, we ran the projected gradient method for 1,000 iterations on the deterministic sample-approximation problem. The results in terms of the number of samples are plotted in Figure~\ref{fig:snpca}, which clearly shows the superiority of PStorm over the other three methods. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{pics/NPCA/SNPCA_obj.pdf} & \includegraphics[width=0.3\textwidth]{pics/NPCA/SNPCA_res.pdf} \end{tabular} \end{center} \caption{Objective error and the violation of stationarity by PStorm, the vanilla SGD, Spiderboost, and Hybrid-SGD on solving \eqref{eq:npca} with a randomly generated dataset.}\label{fig:snpca} \end{figure} \noindent\textbf{realsim dataset:}~~The \verb|realsim| dataset has $N=72,309$ samples in total. We set the minibatch size to $m=64$ for PStorm, the vanilla proximal SGD, and Hybrid-SGD. For each iteration $k$ of Spiderboost, we set $|B_k| = q=\lceil \sqrt{N} \rceil =269$ in \eqref{eq:vk-spider}, as suggested by \cite[Theorem 3]{wang2019spiderboost}. The stepsizes of PStorm and the vanilla proximal SGD were tuned in the same way as above, and the best $\eta$ was 0.2 for the former and 0.5 for the latter. The stepsize for Spiderboost was still set to $0.5$ as the smoothness constant is $L=1$. For Hybrid-SGD, we set its parameters to $$\gamma_k\equiv\gamma = 0.95,\ \beta_k\equiv\beta=1-\frac{\sqrt{m}}{\sqrt{m_0 K}}, \ \eta_k\equiv\eta = \frac{2}{L(3+\gamma)},\ m_0=\max\left\{N, \frac{c_1^2}{\lceil m (K+1)^{\frac{1}{3}}\rceil}\right\},$$ where $K$ is the maximum number of iterations and $c_1$ was tuned to 15. Notice that, different from \eqref{eq:para-rand-set}, here we simply fix $\gamma=0.95$. This choice of $\gamma$ was also adopted in \cite{tran2021hybrid}, and it turned out that this setting resulted in the best performance of Hybrid-SGD for this test. We ran each method for 100 epochs, where one epoch is equivalent to one pass over all data samples. The results in terms of epoch number are shown in Figure~\ref{fig:npca}, where the violation of stationarity was again measured by $\|P({\mathbf{x}}, \nabla F, 1)\|$ and the ``optimal'' objective value was given by running the projected gradient method for 1,000 iterations. For this test, we found that Spiderboost converged extremely fast and gave much smaller errors than those by the other methods, and thus we plot the results by Spiderboost in separate figures. PStorm performed better than the vanilla proximal SGD and Hybrid-SGD. We also tested the methods on the datasets \verb|w8a| and \verb|gisette| from LIBSVM. Their comparative performance was similar to that on \verb|realsim|.
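The stationarity measure $\|P({\mathbf{x}}, \nabla F, 1)\|$ used throughout this section is cheap to evaluate. We do not restate Definition~\ref{def:prox-map} here; the sketch below assumes the common gradient-mapping convention $P({\mathbf{x}},{\bf g},\eta)=\frac{1}{\eta}\big({\mathbf{x}}-\mathrm{prox}_{\eta r}({\mathbf{x}}-\eta{\bf g})\big)$, under which the measure is one line of code (for NPCA, \texttt{prox} is the projection of the previous sketch):
\begin{verbatim}
import numpy as np

def stationarity_violation(x, grad_F, prox, eta=1.0):
    """||P(x, grad F(x), eta)|| under the assumed gradient-mapping
    convention; grad_F and prox are callables supplied by the user."""
    return np.linalg.norm((x - prox(x - eta * grad_F(x))) / eta)
\end{verbatim}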
\begin{figure}[h] \begin{center} \includegraphics[width=0.24\textwidth]{pics/NPCA/NPCA_realsim_obj.pdf} \includegraphics[width=0.24\textwidth]{pics/NPCA/NPCA_realsim_obj_sp.pdf} \includegraphics[width=0.24\textwidth]{pics/NPCA/NPCA_realsim_res.pdf} \includegraphics[width=0.24\textwidth]{pics/NPCA/NPCA_realsim_res_sp.pdf} \end{center} \caption{Objective error and the violation of stationarity by PStorm, the vanilla SGD, Spiderboost, and Hybrid-SGD on solving \eqref{eq:npca} with the realsim dataset.}\label{fig:npca} \end{figure} \subsection{Regularized Feedforward Fully-connected Neural Network}\label{sec:fnn} In this subsection, we compare different methods on solving an $\ell_1$-regularized 3-layer feedforward fully-connected neural network, formulated as \begin{equation}\label{eq:sparseDNN} \min_{{\bm{\theta}}} \frac{1}{N}\sum_{i=1}^N \ell\Big(\mathrm{softmax}\big({\mathbf{W}}_3\sigma ({\mathbf{W}}_2\sigma({\mathbf{W}}_1{\mathbf{x}}_i))\big), y_i\Big) + \lambda\big(\|{\mathbf{W}}_1\|_1+\|{\mathbf{W}}_2\|_1+\|{\mathbf{W}}_3\|_1\big). \end{equation} Here $\{({\mathbf{x}}_i,y_i)\}_{i=1}^N$ is a $c$-class training data set with $y_i\in \{1,\ldots,c\}$ for each $i$, ${\bm{\theta}}:=({\mathbf{W}}_1,{\mathbf{W}}_2,{\mathbf{W}}_3)$ contains the parameters of the neural network, $\sigma(\cdot)$ is an activation function, $\ell$ denotes a loss function, $\mathrm{softmax}({\mathbf{z}}):= \frac{1}{\sum_{j=1}^c e^{z_j}}[e^{z_1}; \ldots; e^{z_c} ]\in\RR^c, \forall\, {\mathbf{z}}\in\RR^c$, and $\lambda\ge0$ is a regularization parameter to trade off the loss and sparsity. In the test, we used the MNIST dataset \cite{lecun1998gradient} of hand-written-digit images. The training set has 60,000 images, and the testing set has 10,000 images. Each image was originally $28\times 28$ and was vectorized into a vector of dimension $784$. We set ${\mathbf{W}}_1\in\RR^{784\times 120}, {\mathbf{W}}_2\in\RR^{120\times 84}$, and ${\mathbf{W}}_3\in\RR^{84\times 10}$, whose initial values were set to the default ones in \verb|libtorch|, a C++ distribution of PyTorch. We used the hyperbolic tangent activation function $\sigma(x)=\frac{e^x - e^{-x}}{e^x + e^{-x}}$ and the cross entropy $\ell({\mathbf{q}}, y_i)=-\log q_{y_i}$ for any distribution ${\mathbf{q}}\in\RR^c$. The parameters of PStorm were set according to \eqref{eq:dynamic-para} with $L=1$ and $\eta=\frac{\sqrt[3]{4}}{8}\approx 0.198$. Notice that the gradient of the loss function in \eqref{eq:sparseDNN} is not uniformly Lipschitz continuous, and its Lipschitz constant depends on ${\bm{\theta}}$. More specifically, the gradient is Lipschitz continuous over any bounded set of ${\bm{\theta}}$. Nevertheless, PStorm with this parameter setting performed well. The learning rate of the vanilla SGD was set to $\eta_k = \frac{\eta}{\sqrt{k+1}}, \forall\, k\ge0$ with $\eta=\frac{\sqrt[3]{4}}{8}$. We also tried $\eta=0.5$, and it turned out that the performance of the vanilla SGD was not as good as that with $\eta=\frac{\sqrt[3]{4}}{8}$ when $\lambda>0$ in \eqref{eq:sparseDNN}. For Spiderboost, we set $q =\lceil \sqrt{60000}\rceil =245$ in \eqref{eq:vk-spider} as specified by \cite[Theorem 2]{wang2019spiderboost} and its learning rate to $\eta=0.02$ in \eqref{eq:spiderboost}. We also tried $\eta=0.1$ and $\eta=0.01$. It turned out that Spiderboost diverged with $\eta=0.1$ and converged too slowly with $\eta=0.01$.
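All of the compared methods handle the $\ell_1$ term in \eqref{eq:sparseDNN} through its proximal operator, which is entrywise soft-thresholding. A minimal PyTorch-style sketch of one such proximal step is given below; the helper names are ours, and for PStorm and Hybrid-SGD the gradient fields would hold the respective variance-reduced estimators rather than the plain stochastic gradient:
\begin{verbatim}
import torch

def soft_threshold(w, tau):
    """Proximal operator of tau*||.||_1 (entrywise soft-thresholding)."""
    return torch.sign(w) * torch.clamp(w.abs() - tau, min=0.0)

@torch.no_grad()
def prox_step(params, lr, lam):
    """Gradient step on the smooth loss (gradients already populated
    by loss.backward()), followed by the l1 proximal step."""
    for w in params:
        w -= lr * w.grad
        w.copy_(soft_threshold(w, lr * lam))
\end{verbatim}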
For Hybrid-SGD, we fixed its parameter $\gamma=0.95$ as suggested in the numerical experiments of \cite{tran2021hybrid}, and we set $\beta_k=\beta=1-\frac{1}{\sqrt{K+1}},\forall k\ge0$ in \eqref{eq:hyb-sgd-v}, where $K$ is the maximum number of iterations. Its learning rate was set to $\eta=\frac{2}{4+L\gamma}$. Then we chose the initial mini-batch size $m_0$ from $\{256, 2560, 30000, 60000\}$ and $L$ from $\{5, 10, 50, 100\}$. The best results were reported. We ran each method for 100 epochs. The mini-batch size was set to 32 for PStorm, the vanilla SGD, and Hybrid-SGD. Again, to make a fair comparison, we measured the \emph{violation of stationarity} at ${\bm{\theta}}$ by $\|P({\bm{\theta}}, \nabla F, 1)\|$, where $P$ is the proximal mapping defined in Definition~\ref{def:prox-map}, and $F$ is the smooth term in the objective of \eqref{eq:sparseDNN}. Table~\ref{tbl: sp-DNN-32} and Figure~\ref{fig:sp-DNN-32} show the results by the compared methods. Each result in the table is the average of those at the last five epochs. For Hybrid-SGD, the best results were obtained with $(m_0, L) = (60000, 50)$ when $\lambda=0$ and with $(m_0, L) = (60000, 100)$ when $\lambda>0$. From the results, we see that PStorm and Hybrid-SGD give similar training loss and testing accuracies, while the vanilla SGD and Spiderboost yield higher loss and lower accuracies. The lower accuracies by Spiderboost may be caused by the larger batch size required in \cite{wang2019spiderboost}, and the lower accuracies by the vanilla SGD are due to its slower convergence. In addition, PStorm produced sparser solutions than the other methods in all regularized cases. In terms of the violation of stationarity, the solutions by PStorm are of better quality than those by the other methods. Furthermore, we notice that the model \eqref{eq:sparseDNN} trained by PStorm with $\lambda = 5\times 10^{-4}$ is much sparser than that without the $\ell_1$ regularizer, yet the sparse model gives only slightly lower testing accuracy than the dense one. This is important because a sparser model reduces the inference time when the model is deployed to predict new data. \begin{table}[h] \begin{center} \resizebox{0.99\textwidth}{!}{ \begin{tabular}{|c||cccc||cccc||cccc||cccc|} \hline Method & \multicolumn{4}{|c||}{PStorm} & \multicolumn{4}{|c||}{vanilla SGD} & \multicolumn{4}{|c||}{Spiderboost} & \multicolumn{4}{|c|}{Hybrid-SGD}\\\hline\hline $\lambda$ & train & test & grad & density & train & test & grad & density & train & test & grad & density & train & test & grad & density \\\hline 0.00 & 3.61e-3 & \textbf{98.01} & \textbf{3.45e-3} & 100 & 6.91e-2 & 97.09 & 3.42e-2 & 100 & 4.24e-2 & 97.41 & 1.57e-2 & 100& \textbf{1.50e-3} & 97.11 & 3.64e-3 & 100 \\ 2e-4 & 4.38e-2 & 97.60 & \textbf{1.60e-2} & \textbf{14.06} & 1.08e-1 & 96.62 & 5.77e-2 & 99.47 & 8.70e-2 & 97.24 & 1.87e-2 & 27.17& \textbf{4.08e-2} & \textbf{97.78} & 9.53e-2 & 28.29\\ 5e-4 & 8.86e-2 & \textbf{97.12} & \textbf{1.94e-2} & \textbf{6.16} & 1.69e-1 & 95.54 & 5.96e-2 & 92.86 & 1.41e-1 & 96.16 & 2.18e-2 & 10.62& \textbf{8.34e-2} & \textbf{97.12} & 1.11e-1 & 12.69 \\\hline \end{tabular} } \end{center} \caption{Results by the proposed method PStorm, the vanilla SGD, Hybrid-SGD, and Spiderboost on training the model \eqref{eq:sparseDNN}. The first three methods use mini-batch $m = 32$. Each method runs for 100 epochs.
``train'' is for training loss; ``test'' is for testing accuracy; ``grad'' is for the violation of stationarity; ``density'' is for the percentage of nonzeros in the solution. The best results for ``test'', ``grad'', and ``density'' are highlighted in \textbf{bold}.}\label{tbl: sp-DNN-32} \end{table} \begin{figure}[h] \begin{center} \begin{tabular}{ccc} $\lambda = 0$ & $\lambda = 2\times 10^{-4}$ & $\lambda = 5\times 10^{-4}$ \\ \includegraphics[width=0.25\textwidth]{pics/batch32/train_acc_lam_1} & \includegraphics[width=0.25\textwidth]{pics/batch32/train_acc_lam_2} & \includegraphics[width=0.25\textwidth]{pics/batch32/train_acc_lam_3} \\ \includegraphics[width=0.25\textwidth]{pics/batch32/test_acc_lam_1} & \includegraphics[width=0.25\textwidth]{pics/batch32/test_acc_lam_2} & \includegraphics[width=0.25\textwidth]{pics/batch32/test_acc_lam_3} \\ \includegraphics[width=0.25\textwidth]{pics/batch32/grad_lam_1} & \includegraphics[width=0.25\textwidth]{pics/batch32/grad_lam_2} & \includegraphics[width=0.25\textwidth]{pics/batch32/grad_lam_3} \\ \includegraphics[width=0.25\textwidth]{pics/batch32/density_lam_1}& \includegraphics[width=0.25\textwidth]{pics/batch32/density_lam_2} & \includegraphics[width=0.25\textwidth]{pics/batch32/density_lam_3} \end{tabular} \end{center} \caption{Results in terms of epoch by the proposed method PStorm, the vanilla SGD, Hybrid-SGD, and Spiderboost on training the model \eqref{eq:sparseDNN}. The first three methods use mini-batch $m = 32$.}\label{fig:sp-DNN-32} \end{figure} \subsection{Regularized Convolutional Neural Network} In this subsection, we compare different methods on solving an $\ell_1$-regularized convolutional neural network, formulated as \begin{equation}\label{eq:sparseAllCNN} \min_{{\bm{\theta}}} \frac{1}{N}\sum_{i=1}^N \ell\Big(\log \big(\mathrm{softmax}(\phi({\mathbf{x}}_i))\big), y_i\Big) + \lambda\|{\bm{\theta}}\|_1. \end{equation} Similar to \eqref{eq:sparseDNN}, $\{({\mathbf{x}}_i,y_i)\}_{i=1}^N$ is a $c$-class training data set with $y_i\in \{1,\ldots,c\}$ for each $i$, ${\bm{\theta}}$ contains all parameters of the neural network, $\ell$ denotes a loss function, the $\log$ function takes the component-wise logarithm, $\phi$ represents the nonlinear transformation by the neural network, and $\lambda\ge0$ is a regularization parameter to trade off the loss and sparsity. In the test, we used the Cifar10 dataset \cite{krizhevsky2009learning}, which has 50,000 training images and 10,000 testing images. In addition, we set $\ell$ to the cross entropy loss and $\phi$ to the all convolutional neural network (AllCNN) in \cite{springenberg2014striving} without data augmentation. The AllCNN has 9 convolutional layers. We ran each method for 200 epochs. The mini-batch size was set to 100 for PStorm, the vanilla SGD, and Hybrid-SGD. The stepsizes of PStorm and the vanilla proximal SGD were tuned in the same way as in Section~\ref{sec:npca}. For Spiderboost, we set $q=\lceil \sqrt{50000}\rceil =224 $ in \eqref{eq:vk-spider}, and its learning rate $\eta$ in \eqref{eq:spiderboost} was tuned by picking the best one from $\{0.01, 0.1, 0.5\}$. For Hybrid-SGD, we set its parameters in a way similar to that in Section~\ref{sec:fnn} but chose the best pair of $(L, m_0)$ from $\{1, 10, 100\}\times \{10^2, 10^3, 10^4\}$. Results produced by the four methods are shown in Table~\ref{fig:sp-CNN} and Figure~\ref{fig:allcnn}. Again, each result in the table is the average of those at the last five epochs.
From the results, we see that PStorm and Hybrid-SGD give similar training loss and testing accuracies. PStorm is slightly better than Hybrid-SGD, and the advantage of the former is more significant when $\lambda = 5\times 10^{-4}$. Spiderboost gave a small violation of stationarity, but it tended to have significantly higher loss and lower accuracies, possibly because of the larger batch size it used. \begin{table}[h] \begin{center} \resizebox{0.99\textwidth}{!}{ \begin{tabular}{|c||cccc||cccc||cccc||cccc|} \hline Method & \multicolumn{4}{|c||}{PStorm} & \multicolumn{4}{|c||}{vanilla SGD} & \multicolumn{4}{|c||}{Spiderboost} & \multicolumn{4}{|c|}{Hybrid-SGD}\\\hline\hline $\lambda$ & train & test & grad & density & train & test & grad & density & train & test & grad & density & train & test & grad & density \\\hline 0.0 & \textbf{2.30e-2} & \textbf{89.74} & \textbf{0.10} & 100 & 2.45e-1 & 85.61 & 0.76 & 100 & 1.86 & 36.63 & 0.12 & 100 & 5.26e-2 & 88.17 & 0.19 & 100 \\ 2e-4 & \textbf{7.61e-1} & \textbf{89.40} & 4.17 & \textbf{44.91} & 9.42e-1 & 88.76 & 2.78 & 89.64 & 2.93 & 20.43 & \textbf{0.89} & 53.79 & 8.15e-1 & 88.03 & 1.68 & 72.78 \\ 5e-4 & \textbf{1.15} & \textbf{88.53} & 5.94 & \textbf{19.87} & 2.15 & 86.62 & 5.55 & 40.64 & 4.69 & 18.62 & \textbf{0.81} & 32.21 & 1.75 & 86.71 & 6.09 & 60.78\\\hline \end{tabular} } \end{center} \caption{Results in terms of epoch by the proposed method PStorm, the vanilla SGD, Hybrid-SGD, and Spiderboost on training the model \eqref{eq:sparseAllCNN}. The first three methods use mini-batch $m = 100$. Each method runs for 200 epochs. ``train'' is for training loss; ``test'' is for testing accuracy; ``grad'' is for the violation of stationarity; ``density'' is for the percentage of nonzeros in the solution. The best results for ``test'', ``grad'', and ``density'' are highlighted in \textbf{bold}.}\label{fig:sp-CNN} \end{table} \begin{figure}[h] \begin{center} \begin{tabular}{ccc} $\lambda = 0$ & $\lambda = 2\times 10^{-4}$ & $\lambda = 5\times 10^{-4}$ \\ \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_train_loss.pdf} & \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_train_loss_reg.pdf} & \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_train_loss_reg_lam5.pdf}\\ \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_test_acc.pdf} & \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_test_acc_reg.pdf} & \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_test_acc_reg_lam5.pdf} \\ \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_res.pdf} & \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_res_reg.pdf} & \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_res_reg_lam5.pdf} \\ \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_density.pdf}& \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_density_reg.pdf} & \includegraphics[width=0.24\textwidth]{pics/allcnn/AllCNN_density_reg_lam5.pdf} \end{tabular} \end{center} \caption{Results in terms of epoch by the proposed method PStorm, the vanilla SGD, Hybrid-SGD, and Spiderboost on training the model \eqref{eq:sparseAllCNN}. The first three methods use mini-batch $m = 100$.}\label{fig:allcnn} \end{figure} \section{Conclusions}\label{sec:conclusion} We have presented a momentum-based variance-reduced mirror-prox stochastic gradient method for solving nonconvex nonsmooth problems, where the nonsmooth term is assumed to be closed and convex. The method, named PStorm, requires only one data sample for each update.
It is the first $O(1)$-sample-based method that achieves the optimal complexity result $O(\varepsilon^{-3})$ under a mean-squared smoothness condition for solving nonconvex nonsmooth problems. The $O(1)$-sample update is important in machine learning because small-batch training can lead to good generalization. On training $\ell_1$-regularized neural networks, PStorm performed better than two other optimal stochastic methods and consistently better than the vanilla stochastic gradient method. \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} Oil and water are immiscible fluids under normal conditions of temperature and pressure. Their phase separation behaviour in binary mixtures has been extensively studied both experimentally and theoretically, and has become a testbed for many fluid simulation methods~\cite{bib:bray,bib:jury}. However, the addition of amphiphile (or surfactant) to these systems produces much more complex behaviour. A general review of the equilibrium phase behaviour of these fluids and theoretical methods for studying them was provided by Gompper and Schick~\cite{bib:gs2}. The complex phase behaviour of binary and ternary amphiphilic fluids arises as a result of the physical and chemical properties of amphiphilic molecules. Amphiphile molecules possess a hydrophilic head and a hydrophobic tail. As a result, it is usually energetically favourable for the amphiphile molecules to be adsorbed at oil-water interfaces, effectively reducing the interfacial tension. Complex morphologies, termed mesophases, occur in both binary (surfactant and water or oil) and ternary amphiphilic systems. In general, these structures depend on the concentration and chemical nature of the surfactant molecules (length of hydrocarbon tail, size of head group, and so on) as well as on the temperature. The equilibrium phase diagrams for these systems have been obtained from experimental investigation~\cite{bib:kshks}. Of considerable interest are the {\it microemulsion} phases; an example is the very low surface tension which exists between the ``middle'' microemulsion phase and the bulk oil and water phases~\cite{bib:gs2}, but there are many other fascinating phenomena including the viscoelastic properties of wormlike micelles and the formation of vesicles~\cite{bib:gs2,bib:cw}. Their intrinsic complexity and wide application make these systems appropriate for detailed scientific inquiry. Although the equilibrium phase behaviour of these systems is well understood, relatively little work has been done on their non-equilibrium properties. Much of the work which has addressed these dynamic properties has been based on molecular dynamics methods~\cite{bib:mdpl,bib:mdshe,bib:mdmlc}. However, such atomistic approaches are too computationally demanding to allow investigation of the important large-time dynamics and extended spatial structures of these systems. Indeed, a large part of the fascination of amphiphilic fluids is related to the dependence of their macroscopic properties on the underlying molecular and mesoscopic dynamics, which calls for the development of techniques which can efficiently bridge the length and time scale gaps from micro to macro. In the present paper we report on the formulation and implementation of a three-dimensional hydrodynamic lattice gas model, which has been extensively studied in two dimensions~\cite{bib:bce,bib:em1,bib:em2,bib:em3,bib:em4}. Compared with molecular dynamics, the computational simplicity of lattice gases makes them an ideal method for modelling complex fluid behaviour from the mesoscale upwards. They have been used extensively for modelling hydrodynamics since Frisch, Hasslacher, and Pomeau~\cite{bib:fchc}, and Wolfram~\cite{bib:w4}, showed that it is possible to simulate the incompressible Navier-Stokes equations using discrete Boolean elements on a lattice. Rothman and Keller subsequently generalised the basic lattice gas method to allow simulation of immiscible fluids~\cite{bib:rk}, and we have used their model as the starting point for our own work.
Notwithstanding the simplifications engendered by invoking the lattice-gas paradigm, the simulation of the non-equilibrium behaviour of ternary amphiphilic fluids in three dimensions is a highly demanding area of computational science; indeed, the results presented in this paper have been made possible only by the recent availability of sufficiently powerful parallel computing architectures, as well as sophisticated visualisation methods. The purpose of the present paper is to describe the implementation of our three-dimensional model, and to establish its validity. In particular, we show that our model can reproduce the well-known features of amphiphilic fluid phenomenology. In Sections~\ref{sec:model} and \ref{sec:amph_hamil} we describe our model, emphasising in particular the differences between the 2D and 3D lattice-gas realisations, and briefly describe the computational requirements of the work. Section~\ref{sec:algy} outlines the basic structure of the algorithm, while Section~\ref{sec:couple} specifies the coupling constants which are used in our simulations. The results of the simulations are presented in Section~\ref{sec:simul}. These simulations demonstrate the ability of our model to represent binary immiscible behaviour, binary water-surfactant self-assembly and ternary amphiphilic behaviour. Finally, we close the paper with some conclusions in Section~\ref{sec:conclusions}. \section{Amphiphilic Lattice-Gas Dynamics} \label{sec:model} Our lattice-gas model of amphiphilic fluids consists of three different species of particles moving about on a $D$-dimensional lattice ${\cal L}$ in discrete time steps. The three species are the two immiscible fluids (oil and water) denoted by red ($R$) and blue ($B$) colours, respectively, and the amphiphile $A$. Lattice-gas particles can have velocities ${\bf c}_i$, where $1\leq i\leq b$, and $b$ is the number of velocities per site. We shall measure discrete time in units of one lattice update, so that a particle emerging from a collision at site ${\bf x}$ and time $t$ with velocity ${\bf c}_i$ will stream to site ${\bf x}+{\bf c}_i$ where it may undergo the next collision. The ${\bf c}_i$ must thus be integer multiples of the lattice vectors; it is also possible to have ${\bf c}_i={\bf 0}$ for some $i$ to allow for ``rest particles'' with zero speed. We let $n^\alpha_i(\bfx, t)\in\{0,1\}$ denote the presence ($1$) or absence ($0$) of a particle of species $\alpha\in\{R,B,A\}$ with velocity ${\bf c}_i$, at lattice site ${\bf x}\in{\cal L}$ and time step $t$. The $n^\alpha_i(\bfx, t)$ are not all independent since an ``exclusion rule'' is enforced whereby there can be no more than one particle (of any species) with a given velocity at a given lattice site at a given time. The collection of all $n^\alpha_i(\bfx, t)$ for $1\leq i\leq b$ shall be denoted by ${\bf n}^\alpha(\bfx, t)$. This is not to be confused with the total number of particles of a given colour, \begin{equation} n^\alpha(\bfx, t)\equiv\sum_{i=1}^b n^\alpha_i(\bfx, t). \end{equation} We shall sometimes need the {\it colour charge} associated with a given site, \begin{equation} q_i(\bfx, t)\equiv n^R_i(\bfx, t)-n^B_i(\bfx, t), \end{equation} as well as the total colour charge at a site, \begin{equation} q(\bfx, t)\equiv\sum_{i=1}^b q_i(\bfx, t).
\end{equation} Finally, the collection of all ${\bf n}^\alpha(\bfx, t)$ for $\alpha\in\{R,B,A\}$ will be called the {\it population state} of the site; it is denoted by \begin{equation} {\bf n}(\bfx, t)\in{\cal N} \end{equation} where we have introduced the notation ${\cal N}$ for the (finite) set of all distinct population states. In addition to the specification of the particle populations at a site, the amphiphile particles have an orientation, denoted by $\mbox{\boldmath $\sigma$}_i(\bfx, t)$. This orientation vector, which has fixed magnitude $\sigma$, specifies the orientation of the director of the amphiphile particle at site ${\bf x}$ and time step $t$ with velocity ${\bf c}_i$. (Of course, if there is no amphiphile particle at that site and time step with that velocity, then the value of $\mbox{\boldmath $\sigma$}_i(\bfx, t)$ there is not defined.) In our work, the values of the $\mbox{\boldmath $\sigma$}_i(\bfx, t)$ vectors may vary continuously~\footnote{It is also possible to construct models with a discrete set of values for the $\mbox{\boldmath $\sigma$}_i$, but we do not consider that possibility in this paper.} on a $(D-1)$-sphere, denoted by $S^{D-1}$; thus, they take their values on a circle for $D=2$ and a sphere for $D=3$. The collection of the $b$ vectors $\mbox{\boldmath $\sigma$}_i(\bfx, t)$ at a given site ${\bf x}$ and time step $t$ is called the {\it orientation state}, \begin{equation} {\mbox{\boldmath $\Sigma$}}(\bfx, t) \equiv \left( \mbox{\boldmath $\sigma$}_1(\bfx, t), \mbox{\boldmath $\sigma$}_2(\bfx, t), \ldots \mbox{\boldmath $\sigma$}_b(\bfx, t) \right) \in \bigotimes^b S^{D-1}, \end{equation} where $\otimes^b$ denotes the $b$-fold Cartesian product. This is not to be confused with the {\it total director} of a site \begin{equation} \mbox{\boldmath $\sigma$}(\bfx, t)\equiv\sum_{i=1}^b\mbox{\boldmath $\sigma$}_i(\bfx, t). \end{equation} We shall also find it useful to define the following scalar director field \begin{equation} S(\bfx, t)\equiv\sum_{i=1}^b{\bf c}_i\cdot\mbox{\boldmath $\sigma$}_i(\bfx, t). \label{eq:sdef} \end{equation} Together, the population state and the orientation state completely specify (in fact, as noted above, they overspecify) the {\it total state} of the site, \begin{equation} s\equiv({\bf n},{\mbox{\boldmath $\Sigma$}}), \end{equation} where we have omitted the site and time-step specification for brevity. The time evolution of the system is an alternation between a streaming or {\it propagation} step and a {\it collision step}. In the first of these, the particles move in the direction of their velocity vectors to new lattice sites. This is described mathematically by the replacements \begin{equation} n^\alpha_i\left({\bf x}+{\bf c}_i,t+1\right) \leftarrow n^\alpha_i(\bfx, t) \label{eq:propp} \end{equation} \begin{equation} \mbox{\boldmath $\sigma$}_i\left({\bf x}+{\bf c}_i,t+1\right) \leftarrow \mbox{\boldmath $\sigma$}_i(\bfx, t), \label{eq:propo} \end{equation} for all ${\bf x}\in{\cal L}$, $1\leq i\leq b$ and $\alpha\in\{ R,B,A\}$. That is, particles with velocity ${\bf c}_i$ simply move from point ${\bf x}$ to point ${\bf x}+{\bf c}_i$ in one time step. In the collision step, the newly arrived particles interact, resulting in new momenta and directors.
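Before describing the collision step in detail, we note that the propagation step of Eqs.~(\ref{eq:propp}) and (\ref{eq:propo}) amounts to a periodic shift of each velocity channel; our Fortran 90 implementation performs it with the CSHIFT intrinsic (see Section~\ref{sec:algy}). The following NumPy sketch (ours, for illustration only) shows the same operation, assuming the occupation or director fields are stored as one array per velocity channel:
\begin{verbatim}
import numpy as np

def propagate(n, c):
    """Streaming step on a periodic D-dimensional lattice.

    n : array of shape (b, *lattice) -- one field per velocity channel i
    c : integer array of shape (b, D) -- the lattice velocities c_i"""
    out = np.empty_like(n)
    for i, ci in enumerate(c):
        # the value at x moves to x + c_i, with periodic wrap-around
        out[i] = np.roll(n[i], shift=tuple(ci),
                         axis=tuple(range(n.ndim - 1)))
    return out
\end{verbatim}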
The collisional change in the state of a lattice site ${\bf x}$ is required to conserve the mass of each species present \begin{equation} \rho^\alpha(\bfx, t)\equiv\sum_i^b n^\alpha_i(\bfx, t), \end{equation} as well as the $D$-dimensional momentum vector \begin{equation} {\bf p}(\bfx, t)\equiv\sum_\alpha\sum_i^b{\bf c}_i n^\alpha_i(\bfx, t) \end{equation} (where we have assumed for simplicity that the particles all carry unit mass). Thus, the set ${\cal N}$ of population states at each site is partitioned into {\it equivalence classes} of population states having the same values of these conserved quantities. For a site with given masses ${\mbox{\boldmath $\rho$}}\equiv\left(\rho^R,\rho^B,\rho^A\right)$ and momentum ${\bf p}$, we denote the set of allowed population states by $E({\mbox{\boldmath $\rho$}},{\bf p})\subset{\cal N}$. Since the conservation laws do not restrict the orientations of the directors, the set of allowed total states is then \begin{equation} {\cal E}({\mbox{\boldmath $\rho$}},{\bf p})=E({\mbox{\boldmath $\rho$}},{\bf p})\bigotimes^b S^{D-1}. \end{equation} Given a precollision total state $s\in {\cal E}({\mbox{\boldmath $\rho$}},{\bf p})$ with masses ${\mbox{\boldmath $\rho$}}$ and momentum ${\bf p}$, the postcollision total state $s'$ must belong to the set \begin{equation} s'=\left({\bf n}',{\mbox{\boldmath $\Sigma$}}'\right) \in {\cal E}({\mbox{\boldmath $\rho$}}(s),{\bf p}(s)). \end{equation} Henceforth, primed quantities are understood always to refer to the postcollision state. In our model, $s'$ is sampled from a probability density ${\cal P}(s')$, sometimes equivalently denoted ${\cal P}({\bf n}',{\mbox{\boldmath $\Sigma$}}')$, imposed upon this set. We assume that the characteristic time for collisional and orientational relaxation is sufficiently short in comparison to that of the propagation that we can model this probability density as the Gibbsian equilibrium corresponding to a Hamiltonian function; that is \begin{equation} {\cal P}(s') = \frac{1}{{\cal Z}}\exp\left[-\beta H(s')\right], \label{eq:beta_defn} \end{equation} where $\beta$ is an inverse temperature, $H(s')$ is the energy associated with collision outcome $s'$, and ${\cal Z}$ is the equivalence-class partition function, \begin{equation} {\cal Z} ({\mbox{\boldmath $\rho$}},{\bf p},\beta) \equiv \sum_{{\bf n}'\in E({\mbox{\boldmath $\rho$}},{\bf p})} \int d{\mbox{\boldmath $\Sigma$}}'\;\exp\left[-\beta H(s')\right], \end{equation} and we have defined the measure on the set of orientational states \begin{equation} \int d{\mbox{\boldmath $\Sigma$}}\equiv\bigotimes_{i=1}^b\int d\mbox{\boldmath $\sigma$}_i. \end{equation} In practice, we sample $s'=\left({\bf n}',{\mbox{\boldmath $\Sigma$}}'\right)$ by first sampling the postcollision population state ${\bf n}'$ from the reduced probability density \begin{equation} P\left({\bf n}'\right) = \int d{\mbox{\boldmath $\Sigma$}}'\;{\cal P}(s'). \label{eq:redp} \end{equation} We then sample the postcollision orientation state by sampling the $b$ orientations $\mbox{\boldmath $\sigma$}_i' $ from each of \begin{equation} \pi_i\left(\mbox{\boldmath $\sigma$}'_i\right) = \prod_{j\neq i}^b\int d\mbox{\boldmath $\sigma$}'_j\;{\cal P}\left({\bf n}',{\mbox{\boldmath $\Sigma$}}'\right). \label{eq:redo} \end{equation} for $1\leq i\leq b$; these are, as we shall see, independent distributions, so the $b$ samples may each be taken without regard for the other outcomes. This completes the collision process.
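In practice, then, a collision consists of enumerating the allowed outcomes in an equivalence class, computing their energies, and sampling from the Gibbs distribution of Eq.~(\ref{eq:beta_defn}). A schematic Python sketch of this last step (ours, for illustration only), assuming the candidate states and their energies $H(s')$ have already been enumerated, as is done with the lookup tables of Section~\ref{sec:algy}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def sample_outcome(energies, beta):
    """Index of a postcollision state drawn with probability
    proportional to exp(-beta*H(s')) over the equivalence class."""
    w = np.exp(-beta * (energies - energies.min()))  # shift for stability
    return rng.choice(len(energies), p=w / w.sum())
\end{verbatim}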
\section{Local Amphiphilic Lattice-Gas Hamiltonian} \label{sec:amph_hamil} It remains to specify the local Hamiltonian used in the collision outcome selection process. Such a Hamiltonian has been derived and described in detail for the two-dimensional version of the model~\cite{bib:bce}, and we use the same one here~\footnote{Actually, the forms given for the various fields in reference~\cite{bib:bce} are somewhat more general than those given here. In this presentation we restrict ourselves to nearest neighbour interactions for simplicity.}. It is \begin{equation} H(s') = {\bf J}\cdot (\alpha{\bf E}+\mu{\bf P}) + \mbox{\boldmath $\sigma$}'\cdot (\epsilon{\bf E}+\zeta{\bf P}) + {\cal J} : (\epsilon{\cal E}+\zeta{\cal P})+{\delta \over 2} \left|{\bf v}({\bf x},t)\right|^{2}, \label{eq:hamil} \end{equation} where we have introduced the {\it colour flux} vector of an outgoing state \begin{equation} {\bf J}(\bfx, t)\equiv\sum_{i=1}^b{\bf c}_i q'_i(\bfx, t), \label{eq:cflux} \end{equation} the {\it dipolar flux} tensor of an outgoing state \begin{equation} {\cal J}(\bfx, t)\equiv\sum_{i=1}^b{\bf c}_i\mbox{\boldmath $\sigma$}'_i(\bfx, t), \end{equation} the {\it colour field} vector \begin{equation} {\bf E}(\bfx, t)\equiv\sum_{i=1}^b{\bf c}_i q\left(\bfx+\bfc_i,t\right), \label{eq:bfEdef} \end{equation} the {\it dipolar field} vector \begin{equation} {\bf P}(\bfx, t)\equiv-\sum_{i=1}^b{\bf c}_i S\left(\bfx+\bfc_i,t\right), \end{equation} the {\it colour field gradient} tensor \begin{equation} {\cal E}(\bfx, t)\equiv\sum_{i=1}^b{\bf c}_i{\bf E}\left(\bfx+\bfc_i,t\right), \end{equation} the {\it dipolar field gradient} tensor~\footnote{Note that this definition differs from that used in the reference~\cite{bib:bce}. The two definitions can be reconciled by a redefinition of the coupling constants.} \begin{equation} {\cal P}(\bfx, t)\equiv-\sum_{i=1}^b{\bf c}_i{\bf c}_i S\left(\bfx+\bfc_i,t\right), \label{eq:calPdef} \end{equation} and the kinetic energy of the particles at a site, \begin{equation} {\delta \over 2}\left|{\bf v}({\bf x},t)\right|^2, \end{equation} where ${\bf v}$ is the average velocity of all particles at a site, the mass of the particles is taken as unity, and $\alpha$, $\mu$, $\epsilon$, $\zeta$ and $\delta$ are coupling constants. We note that the change in kinetic energy was not included by Rothman and Keller in their immiscible lattice gas model. We include it here for completeness, and set $\delta =1.0$; although it will not affect the equilibrium properties of the model, it may impact the non-equilibrium properties. It should be further noted that the inverse temperature-like parameter $\beta$ is not related in the conventional way to the kinetic energy. For a discussion of the introduction of this parameter into lattice gases to reproduce critical behaviour we refer the reader to the original work by Chan and Liang~\cite{bib:chli}. Eqs.~(\ref{eq:hamil} - \ref{eq:calPdef}) were derived by assuming that there is an interaction potential between colour charges, and that the directors are like ``colour dipoles'' in this context~\cite{bib:bce}.
The terms containing $\alpha$ model the interaction of colour charges with surrounding colour charges as in the original Rothman-Keller model~\cite{bib:rk}; those containing $\mu$ model the interaction of colour charges with surrounding colour dipoles; those containing $\epsilon$ model the interaction of colour dipoles with surrounding colour charges (alignment of surfactant molecules across oil-water interfaces); those containing $\zeta$ model the interaction of colour dipoles with surrounding colour dipoles (interfacial bending energy or ``stiffness''). Note that the field quantities depend only on the precollision state, whereas the flux quantities depend on the postcollision state. Thus, the fields may be computed once at every site, just after the propagation step. The fluxes, on the other hand, must be computed for each possible outgoing state. It follows that the Hamiltonian may be written in the form \begin{equation} H(s')=H_0({\bf n}')+\sum_{i=1}^b n^{A\prime}_i{\bf A}_i\cdot\mbox{\boldmath $\sigma$}'_i, \end{equation} where we have defined \begin{equation} H_0({\bf n}')\equiv {\bf J}\cdot (\alpha{\bf E}+\mu{\bf P}) \end{equation} and \begin{equation} {\bf A}_i\equiv\sigma \left[ \epsilon{\bf E}+\zeta{\bf P}+(\epsilon{\cal E}+\zeta{\cal P})\cdot{\bf c}_i \right]. \label{eq:auxvec} \end{equation} The reduced probability density for ${\bf n}'$ is then \begin{equation} P({\bf n}') = \int d{\mbox{\boldmath $\Sigma$}}'{\cal P}(s') = \frac{\exp\left[-\beta H_0({\bf n}')\right]}{{\cal Z}} \prod_{i=1}^b\int d\mbox{\boldmath $\sigma$}'_i\; \exp\left(-\beta n^{A\prime}_i{\bf A}_i\cdot\mbox{\boldmath $\sigma$}_i\right), \end{equation} and this demands evaluation of integrals of the form \begin{equation} \int d\mbox{\boldmath $\sigma$}\;\exp\left(-n\beta{\bf A}\cdot\mbox{\boldmath $\sigma$}\right). \end{equation} In two dimensions ($D=2$), we can adopt a polar coordinate $\theta$, chosen so that $\theta=0$ is the direction of ${\bf A}$, so that this integral becomes \begin{equation} \int_0^{2\pi} d\theta\;\exp\left(-n\beta\sigma |{\bf A}|\cos\theta\right) = 2\pi I_0\left(n\beta\sigma |{\bf A}|\right), \end{equation} where $I_0$ is the modified Bessel function of the first kind~\cite{bib:bce}. So, from Eq.~(\ref{eq:redp}), we get the reduced probability density \begin{equation} P({\bf n}') = \frac{(2\pi)^b\exp\left[-\beta H_0({\bf n}')\right]}{{\cal Z}} \prod_{i=1}^b I_0\left(n^{A\prime}_i\beta\sigma |{\bf A}_i|\right) \label{eq:rpdfa} \end{equation} which must be evaluated numerically for every possible outgoing state. In three dimensions ($D=3$), we adopt spherical coordinates with polar axis in the direction of ${\bf A}$, so that the integral over the orientational degrees of freedom is of the form \begin{equation} \int_0^{2\pi}d\phi\int_0^\pi d\theta\;\sin\theta \exp\left(-n\beta\sigma |{\bf A}|\cos\theta\right) =4\pi \left\{ \begin{array}{ll} 1 & \mbox{if $n=0$}\\ \frac{\sinh\left(\beta\sigma |{\bf A}|\right)}{\beta\sigma |{\bf A}|} & \mbox{if $n=1$,} \end{array} \right. \end{equation} where $\theta$ is the colatitude and $\phi$ is the azimuthal angle. The reduced probability density of Eq.~(\ref{eq:redp}) is then \begin{equation} P({\bf n}') = \frac{(4\pi)^b\exp\left[-\beta H_0({\bf n}')\right]}{{\cal Z}} \prod_{i=1}^b \left\{ 1+n^{A\prime}_i \left[ \frac{\sinh\left(\beta\sigma |{\bf A}_i|\right)}{\beta\sigma |{\bf A}_i|}-1 \right] \right\}.
\label{eq:rpdfb} \end{equation} Once the postcollision population state ${\bf n}'$ has been sampled from the reduced probability density $P({\bf n}')$, we turn our attention to the determination of the new orientation state. For $D=2$, the reduced probability density for the angle $\theta'_i$ is given by \begin{equation} \pi_i(\theta'_i) = \frac{\exp\left(-\beta\sigma\left|{\bf A}_i\right|\cos\theta'_i\right)} {2\pi I_0\left(\beta\sigma\left|{\bf A}_i\right|\right)}. \label{eq:opdfa} \end{equation} For $D=3$, the probability density for the colatitude $\theta'_i$ and azimuthal angle $\phi'_i$ is then \begin{equation} \pi_i(\theta'_i,\phi'_i) = \frac{ \exp\left(-\beta\sigma\left|{\bf A}_i\right|\cos\theta'_i\right) \sin\theta'_i} {4\pi\left[\frac{\sinh\left(\beta\sigma\left|{\bf A}_i\right|\right)} {\beta\sigma\left|{\bf A}_i\right|}\right]}. \label{eq:opdfb} \end{equation} \section{Algorithmic Considerations} \label{sec:algy} A computer implementation of our hydrodynamic lattice-gas model requires a choice of data representation for the population state ${\bf n}$ and the orientation state ${\mbox{\boldmath $\Sigma$}}$. We consider these in turn. As noted above, the variables $n^\alpha_i(\bfx, t)$ are not all independent because of the ``exclusion rule'' that only one particle of any type may have a given velocity $i$ at a given site ${\bf x}$ and time step $t$. Thus, it is inefficient to store these variables directly. Rather, we note that for a given $i$, ${\bf x}$ and $t$ there are precisely four possibilities: there can be a particle of type $R$, type $B$, type $A$, or nothing at all. These four possibilities can be encoded in two bits of information as follows: \begin{center} \begin{tabular}{|l|l|l|} \hline High Bit & Low Bit & Description \\ \hline 0 & 0 & Nothing \\ 0 & 1 & $R$ particle \\ 1 & 0 & $B$ particle \\ 1 & 1 & $A$ particle \\ \hline \end{tabular} \end{center} Thus, the population state of a given site can be represented by $2b$ bits of information. For $D=2$, our current implementation uses a triangular lattice (coordination number 6) and one rest particle, so $b=7$ and 14 bits are needed to store the population state~\cite{bib:bce}. For $D=3$, our current implementation uses a projected face-centered hypercubic (PFCHC) lattice (coordination number 24) and two rest particles, so $b=26$ and 52 bits are needed to store the population state~\cite{bib:fchc,bib:cam}. The PFCHC lattice will be described in some detail later in this section. For now we note that in either case, the population state easily fits in a single integer variable; more precisely, for $D=2$ it fits in a ``short'' integer of 16 bits, and for $D=3$ it fits in a ``double-precision'' integer of 64 bits. The orientation state requires much more storage because we have chosen to allow the orientation angles to be continuous~\footnote{As noted in an earlier footnote, this choice is not thought to be essential, and it is possible that much storage space may be saved by requiring the orientation angles to be discrete.}. For $D=2$ we must store the $b$ polar angles $\theta_i$, each as an IEEE-format, single precision, floating-point variable of 32 bits. For $D=3$ we must likewise store the two spherical angles for each velocity for a total of $2b$ floating-point variables.
While it is true that these angular variables are not defined unless the corresponding $n^{A\prime}_i=1$, our current implementation provides for storage for all angles at all sites and velocities, because the computational price of dynamically allocating and deallocating variables was not thought to be worth the savings in storage; for very low surfactant concentrations, this assumption may be invalid, and so the existing code might be improved. In a language that allows for user-defined types, such as Fortran 90, C++ or Java, it is useful to create a type for the total state of a site that includes both the population and orientation information. Given this data representation, the implementation of the propagation step is fairly obvious. The substitutions of Eqs.~(\ref{eq:propp}) and (\ref{eq:propo}) are made throughout the lattice. In Fortran 90, the CSHIFT intrinsic accomplishes periodic shifts on arrays, and it is natural to use this to construct a subroutine that accepts an array of type 'total state' and performs the propagation procedure on this array as a side effect. The above-described representation for the population state is somewhat inconvenient in this regard, as the bit pairs corresponding to a particular velocity must be extracted from the integer variable before it is shifted in that direction. When the propagation step is completed, the new fields $S$, ${\bf E}$, ${\bf P}$, ${\cal E}$ and ${\cal P}$ are computed at each site using Eqs.~(\ref{eq:sdef}) and (\ref{eq:bfEdef}) - (\ref{eq:calPdef}). Next, the possible outcomes for the population state are enumerated using a lookup-table procedure. A list of all possible outgoing states that has been sorted according to equivalence class $E({\mbox{\boldmath $\rho$}},{\bf p})$ is precomputed and stored. This list is of length $2^{14}=16384$ for the $D=2$ case. A full list for the 52-bit population state representation in $D=3$ would have length $2^{52}$ and this is obviously much too large to store on any existing or contemplated computer; a method of shortening this list will be described below. For now, assume that such a list could be stored, and that two other lists of the same size could be maintained that accept the current population state ${\bf n}$ as a key, and return a pointer to the initial position and the number of elements of the equivalence class $E({\mbox{\boldmath $\rho$}},{\bf p})$ in the table of sorted states. This makes it possible to enumerate the postcollision states ${\bf n}'$ that respect the required conservation laws. Note that the length of this list may vary from site to site. Next, each site loops over its set $E({\mbox{\boldmath $\rho$}},{\bf p})$ of allowed postcollision states and computes the outgoing colour flux vector ${\bf J}$ and the $b$ auxiliary vectors ${\bf A}_i$ for each one of them, using Eqs.~(\ref{eq:cflux}) and (\ref{eq:auxvec}) respectively. These are then used to compute the reduced probability density $P({\bf n}')$, given by Eq.~(\ref{eq:rpdfa}) for $D=2$, or Eq.~(\ref{eq:rpdfb}) for $D=3$. Given these probabilities, a final population state ${\bf n}'$ is sampled. Once ${\bf n}'$ is known, the ${\bf A}_i$ are recalculated for that final state, and the final orientation angles are sampled from Eq.~(\ref{eq:opdfa}) for $D=2$, or Eq.~(\ref{eq:opdfb}) for $D=3$. In the latter case, we note that $\pi_i(\theta'_i,\phi'_i)$ is independent of $\phi'_i$ so this may simply be sampled uniformly in $(0,2\pi)$. 
The colatitude $\theta'_i$ is then found by equating its cumulative distribution function to a random number $r$ uniformly distributed between 0 and 1, and solving for $\theta'_i$. The result is \begin{equation} \theta'_i = \arccos \left\{ \frac{-1}{\beta\sigma |{\bf A}_i|} \ln \left[ re^{+\beta\sigma |{\bf A}_i|}+(1-r)e^{-\beta\sigma |{\bf A}_i|} \right] \right\}. \end{equation} For $D=2$, the sampling procedure is not so simple, because of the integral leading to the modified Bessel function. In this case we proceed numerically; for large values of the parameter $\beta\sigma |{\bf A}_i|$ we approximate the distribution by a Gaussian centered at its maximum, and for small values of $\beta\sigma |{\bf A}_i|$ we employ rejection sampling~\cite{bib:bce}. It remains to describe the PFCHC lattice and our method for making the size of the lookup table more manageable. The face-centered hypercubic (FCHC) lattice is a regular, self-dual lattice in four dimensions with coordination number 24. It can be described as all integer-valued tetrads $(i,j,k,l)$ such that $i+j+k+l$ is even. The motivation for using this lattice is that it is known to yield isotropic Navier-Stokes behaviour for a single-phase fluid~\cite{bib:fchc}. The lattice vectors are then the 24 neighbours of the site $(0,0,0,0)$, and these can be partitioned into a subset of eight lattice vectors called Group 1, namely $(\pm 1,0,0,\pm 1)$ and $(0,\pm 1,\pm 1,0)$, a subset of eight lattice vectors called Group 2, namely $(0,\pm 1,0,\pm 1)$ and $(\pm 1,0,\pm 1,0)$, and a subset of eight lattice vectors called Group 3, namely $(0,0,\pm 1,\pm 1)$ and $(\pm 1,\pm 1,0,0)$. The virtue of this partition is that the sixteen lattice vectors of any two groups lie on the corners of a regular four-dimensional hypercube, and the eight lattice vectors of the remaining group point to the face centers of that hypercube~\cite{bib:cam}. The projection of the FCHC lattice to the three-dimensional PFCHC lattice can be accomplished by simply ignoring the fourth coordinate of the 24 vectors described above. The partition into three groups of eight vectors is still useful to maintain, as we shall see. One feature of this projection is that distinct vectors of the FCHC lattice can project to the same vector on the PFCHC lattice; for example, $(1,0,0,1)$ and $(1,0,0,-1)$ both project to $(1,0,0)$. We take these 24 three-vectors as the particle velocities in our $D=3$ model, and add two rest particles to them for a total of 26 particle velocities ($b=26$). In our computer implementation, we append the (same) two rest particles to each of our three groups of eight lattice vectors, to obtain three groups of ten velocities ($b=10$) each. The idea is then to allow collisions within each group of ten velocities separately, updating the state of the rest particles in all three groups whenever they change, thereby letting the rest particles provide mass and momentum transfer between the three groups.
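Before tabulating the velocity groups, we note that the $D=3$ orientation update just derived amounts to only a few lines of code. A minimal sketch (Python with NumPy, for illustration only; the function name is hypothetical and its argument stands for the product $\beta\sigma|{\bf A}_i|$):
\begin{verbatim}
import numpy as np

def sample_orientation_3d(x, rng=np.random.default_rng()):
    """Sample (theta, phi) for D=3; x is beta*sigma*|A_i| for one slot."""
    r = rng.uniform()
    # Inverse of the cumulative distribution, as derived above.
    theta = np.arccos(-np.log(r * np.exp(x) + (1.0 - r) * np.exp(-x)) / x)
    # The density is independent of the azimuth, so sample it uniformly.
    phi = rng.uniform(0.0, 2.0 * np.pi)
    return theta, phi

# Sanity check: for strong fields the colatitudes concentrate near pi;
# the mean of cos(theta) should approach 1/x - coth(x) (about -0.80 at x=5).
thetas = [sample_orientation_3d(5.0)[0] for _ in range(100000)]
print(np.mean(np.cos(thetas)))
\end{verbatim}
The FCHC construction and its three-group partition are equally easy to generate and verify programmatically, as a second sketch (again Python, for illustration only) shows:
\begin{verbatim}
import itertools

# The 24 nearest neighbours of the origin on the FCHC lattice: integer
# 4-vectors of squared length 2 whose coordinate sum is even.
fchc = [v for v in itertools.product((-1, 0, 1), repeat=4)
        if sum(c * c for c in v) == 2 and sum(v) % 2 == 0]
assert len(fchc) == 24

def group(v):
    """Assign an FCHC lattice vector to Group 1, 2 or 3 as defined above."""
    i, j, k, l = (abs(c) for c in v)
    if (i and l) or (j and k):    # (+-1,0,0,+-1) or (0,+-1,+-1,0)
        return 1
    if (j and l) or (i and k):    # (0,+-1,0,+-1) or (+-1,0,+-1,0)
        return 2
    return 3                      # (0,0,+-1,+-1) or (+-1,+-1,0,0)

# Project to three dimensions by dropping the fourth coordinate.
pfchc = {g: [v[:3] for v in fchc if group(v) == g] for g in (1, 2, 3)}
assert all(len(pfchc[g]) == 8 for g in (1, 2, 3))
# Distinct FCHC vectors project to the same 3-vector, e.g. (1,0,0,+-1):
assert pfchc[1].count((1, 0, 0)) == 2
\end{verbatim}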
The three groups of velocities described above are summarized in the following table: \vspace{1cm} \begin{center} \begin{tabular}{|l|l|rrr|l|l|rrr|l|l|rrr|} \hline Group & Lattice & \multicolumn{3}{l|}{Components} & Group & Lattice & \multicolumn{3}{l|}{Components} & Group & Lattice & \multicolumn{3}{l|}{Components} \\ & Vector & $x$, & $y$, & $z$ & & Vector & $x$, & $y$, & $z$ & & Vector & $x$, & $y$, & $z$\\ \hline & ${\bf c}_{ 1}$ & 1,& 0,& 0 & & ${\bf c}_{ 1}$ & 0,& 1,& 0 & & ${\bf c}_{ 1}$ & 0,& 0,& 1\\ & ${\bf c}_{ 2}$ & 1,& 0,& 0 & & ${\bf c}_{ 2}$ & 0,& 1,& 0 & & ${\bf c}_{ 2}$ & 0,& 0,& 1\\ & ${\bf c}_{ 3}$ & -1,& 0,& 0 & & ${\bf c}_{ 3}$ & 0,&-1,& 0 & & ${\bf c}_{ 3}$ & 0,& 0,&-1\\ & ${\bf c}_{ 4}$ & -1,& 0,& 0 & & ${\bf c}_{ 4}$ & 0,&-1,& 0 & & ${\bf c}_{ 4}$ & 0,& 0,&-1\\ 1 & ${\bf c}_{ 5}$ & 0,& 1,& 1 & 2 & ${\bf c}_{ 5}$ & 1,& 0,& 1 & 3 & ${\bf c}_{ 5}$ & 1,& 1,& 0\\ & ${\bf c}_{ 6}$ & 0,& 1,&-1 & & ${\bf c}_{ 6}$ & -1,& 0,& 1 & & ${\bf c}_{ 6}$ & 1,&-1,& 0\\ & ${\bf c}_{ 7}$ & 0,&-1,& 1 & & ${\bf c}_{ 7}$ & 1,& 0,&-1 & & ${\bf c}_{ 7}$ & -1,& 1,& 0\\ & ${\bf c}_{ 8}$ & 0,&-1,&-1 & & ${\bf c}_{ 8}$ & -1,& 0,&-1 & & ${\bf c}_{ 8}$ & -1,&-1,& 0\\ & ${\bf c}_{ 9}$ & 0,& 0,& 0 & & ${\bf c}_{ 9}$ & 0,& 0,& 0 & & ${\bf c}_{ 9}$ & 0,& 0,& 0\\ & ${\bf c}_{10}$ & 0,& 0,& 0 & & ${\bf c}_{10}$ & 0,& 0,& 0 & & ${\bf c}_{10}$ & 0,& 0,& 0\\ \hline \end{tabular} \end{center} \vspace{1cm} Since two bits of information are required to represent the population state for each velocity, a total of 20 bits will specify the state within each group. This results in tables of length $2^{20}=1048576$. Since the results can be stored as single-precision (32-bit, or four-byte) integers, the tables each require a total of 4 Mbytes of storage. Since there are three tables, as described above, a grand total of 12 Mbytes are devoted to table storage. This amount of storage is not a significant problem on modern multiprocessor supercomputers. Once again, the use of user-defined types can simplify the implementation of the above decomposition. The population state can be made a user-defined type, constructed from three integer variables. Subroutines for propagation, table lookup, and other collision-related operations can then be overloaded to accommodate the new type. Despite the advantage gained by using the hydrodynamic lattice-gas method described above, the simulation of large three-dimensional amphiphilic systems remains extremely computationally demanding. The algorithm described above has been implemented in Fortran 90, and parallelised using the MPI (Message Passing Interface) library. Simulations for the parameter search described in Section~\ref{sec:couple} were performed on a 512-processor Cray T3D at the Edinburgh Parallel Computing Centre (EPCC). The code was then ported to SGI Origin 2000 machines at Schlumberger Cambridge Research, the National Computational Science Alliance (NCSA) in Illinois, and the Oxford Supercomputing Centre. The Cray T3D and T3E systems are Massively Parallel Processor (MPP) machines, whereas the SGI O2000 machines have cache-coherent Non-Uniform Memory Architecture (ccNUMA). These two parallel architectures are very different, and the ease of moving from one to the other illustrates the portability of modern parallelised codes. A performance improvement of two orders of magnitude was obtained on going from the T3D to the O2000 machines, and all the simulations described in Section~\ref{sec:simul} were performed on the latter platforms.
Baseline performance for a $64^{3}$ system running on 8 processors is 3.3 timesteps per minute for a binary oil-water system. Performance scales superlinearly out to 64 processors for $64^{3}$ and $128^{3}$ systems. Computational complexity increases as $L^{3}$ as one increases the linear system size $L$. The largest attainable system size is currently $128^{3}$, due to memory limitations of the O2000 machines available to us. However, $256^{3}$ and $512^{3}$ systems are attainable within the limits of the O2000 architecture. A port to the Cray T3E (MPP architecture) has also been performed, and a comparison on a processor-by-processor basis gives a performance of 30--50\% of the O2000, with linear as opposed to superlinear scalability. However, the larger number of processors available on a typical T3E system more than compensates for this relative fall-off -- in one case we had access to a 1200-node T3E for a period of one month. A full description of the computational aspects of this model will be provided elsewhere. In addition to the demanding nature of the simulations themselves, visualising the results produced can be as -- and in some aspects more -- computationally demanding. In particular, the generation of geometrical information required to plot 3D isosurfaces of individual species concentration requires RAM resources of at least 1 Gbyte. Although visualisation is today sometimes still regarded as a subsidiary activity to numerical simulation, we have found it absolutely vital to check that the code was working, to distinguish different morphologies and to gain intuition about the very complex dynamics of these systems. Visualisation of the largest and most complex systems attainable by simulation using our hydrodynamic lattice gas demands use of the most advanced graphics engines currently available. \section{Definition of Coupling Constants} \label{sec:couple} The Hamiltonian used to determine the collision outcomes has been specified in Eq.~(\ref{eq:hamil}). We now describe the choice of the coupling constants $\alpha$, $\mu$, $\epsilon$, $\zeta$. In addition to the terms in Eq.~(\ref{eq:hamil}) there is a kinetic energy term, so that the Hamiltonian becomes: \begin{equation} H(s') = {\bf J}\cdot (\alpha{\bf E}+\mu{\bf P}) + \mbox{\boldmath $\sigma$}'\cdot (\epsilon{\bf E}+\zeta{\bf P}) + {\cal J} : (\epsilon{\cal E}+\zeta{\cal P})+{\delta \over 2} \left|{\bf v}({\bf x},t)\right|^{2}, \end{equation} where ${\bf v}$ is the average velocity of all particles at a site, and the mass of the particles is set to unity ($\delta = 1.0$). Our model reduces to the Rothman-Keller model for binary immiscible fluids in the limit $\alpha\rightarrow\infty$. The three remaining parameters control the surfactant interactions. These were chosen by performing an extensive parameter search using binary water-surfactant systems, and measuring the structure factor for these systems to look for signs of self-assembly. In particular, we sought strong structure-factor peaks indicative of spherical or wormlike micelles of a characteristic size. These simulations were performed with a surfactant:water ratio of 1:2, well above the critical concentration for the formation of micelles. The physical contributions of each term in Eq.~(\ref{eq:hamil}), and therefore the effects of varying each parameter, are described below: \begin{itemize} \item{$\epsilon$ controls the interaction of outgoing dipoles with the surrounding colour field.
In ternary phases this term will send surfactant to oil-water interfaces. In binary water-surfactant phases this interaction will tend to favour flat interfaces between the phases;} \item{$\mu$ controls the interaction of outgoing coloured particles with the surrounding dipolar field. This term will favour the bending of dipoles around a central colour charge, and will be important in creating micellar phases;} \item{$\zeta$ controls the surfactant-surfactant interaction. For positive $\zeta$ this produces an attractive, ferromagnetic interaction between the dipoles. This term is of limited importance in the formation of self-assembled phases, and was set to 0.5 for the simulations described below.} \end{itemize} The key to locating the micellar phases is to find the correct balance between $\epsilon$ and $\mu$. Strongly peaked structure functions were obtained for the following values of $\epsilon$ and $\mu$: \begin{center} $ 0.75 < \mu < 2.0 $\\ $ 0.25 < \epsilon < 2.0 $ \end{center} In order to maximise the desired behaviour of sending surfactant to oil-water interfaces, while retaining the necessary micellar binary phases, we chose a canonical set of coupling constants which are kept fixed throughout all the following simulations, except in the short vesicle study described in Section~\ref{sec:vesicle}. The values of these constants are: \begin{center} $\alpha=1.0$, $\epsilon=2.0$, $\mu=0.75$, $\zeta=0.5 $ \end{center} The inverse-temperature parameter, $\beta$, was held constant at 1.0 for all of the ensuing simulations. \section{Simulations} \label{sec:simul} The equilibrium properties and phase diagrams of a wide variety of real amphiphilic fluids are well known~\cite{bib:gs2}. These phase diagrams have many features unique to three-dimensional systems. To establish the general validity of our model it is essential that we can reproduce the complex phase behaviour observed in real amphiphiles. In this section we describe some of this phenomenology, and present results of the simulations designed to reproduce it. \subsection{Oil-Water System} The spinodal decomposition of immiscible fluids has been extensively studied in two dimensions, and rather less so in three~\cite{bib:bray,bib:jury}. After a quench into the two-phase region an initially homogeneous mixture separates into relatively pure single-phase domains. \begin{figure}[htp] \centering \includegraphics[width=0.4\textwidth]{BIPS_200} \includegraphics[width=0.4\textwidth]{BIPS_400} \includegraphics[width=0.4\textwidth]{BIPS_600} \includegraphics[width=0.4\textwidth]{BIPS_1000} \caption[fig:BOWS_fig1]{Binary phase separation in an oil-water system. Timesteps (clockwise from top left) 200, 400, 600, 1000. The red isosurface shows the interface between the oil and water phases. The system size is $64^{3}$.}\label{fig:bows_fig1} \end{figure} With no surfactant particles present in the system, the only term in the local site Hamiltonian, Eq.~(\ref{eq:hamil}), that contributes numerically to the collision process is $\alpha \Delta H_{\mbox{\scriptsize cc}}$. With parameters $\alpha=1.0$, $\beta=1.0$ we performed simulations starting from an initial configuration in which the lattice vectors at each site are populated randomly with oil or water particles with equal probability (a so-called critical quench). This initial condition corresponds to a homogenised 1:1 oil-water mixture. The reduced density (i.e.\ the proportion of lattice vectors at each site that contain a particle) is $0.5$.
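The critical-quench initial condition just described is straightforward to generate. A minimal sketch (Python with NumPy, for illustration only; the slot ordering and all names are hypothetical) follows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()
L, b = 64, 26                 # linear lattice size; velocities per site
EMPTY, R, B = 0, 1, 2         # two-bit particle codes, as above

# Critical quench: each of the b slots at each site is occupied with
# probability 0.5 (reduced density 0.5), and occupied slots receive oil
# (R) or water (B) with equal probability.
occupied = rng.random((L, L, L, b)) < 0.5
species = rng.choice([R, B], size=(L, L, L, b))
state = np.where(occupied, species, EMPTY)

# Pack the two-bit codes into one integer per site, as described earlier.
packed = np.zeros((L, L, L), dtype=np.int64)
for i in range(b):
    packed |= state[..., i].astype(np.int64) << (2 * i)
\end{verbatim}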
In this parameter regime, the model exhibits phase separation with positive surface tension, as is evident from Fig.~\ref{fig:bows_fig1}, which illustrates the nonequilibrium behaviour of the immiscible lattice gas. If left to run for a large enough time the system would eventually reach the completely separated state of two distinct layers of fluid. To make a detailed comparison between the immiscible oil-water fluid behaviour shown here and later simulations in which we introduce surfactant to the system, we make use of {\it spherically averaged structure functions}. We first calculate the three-dimensional structure factor of the colour charge, $s({\bf k}, t)$, \[ s({\bf k}, t) = \left<\frac{1}{N}\left|\sum_{{\bf x}} (q({\bf x}) - q^{av}) e^{i{\bf k}\cdot{\bf x}}\right|^2\right>, \] where ${\bf k} = \left(2 \pi / L \right) \left( p {\bf i} + q {\bf j} + r{\bf k} \right)$, $p,q,r = 1,2,...,L$, $q({\bf x})$ is the total colour charge at a site, $q^{av}$ is the average value of the colour charge, $L$ is the length of the system and $N = L^3$ is the number of lattice sites in the simulation box. \begin{figure}[htp] \centering \includegraphics[width=1.0\textwidth]{SR_BOW} \caption[fig:tif]{Structure factor for timesteps 200, 400, 600, 800, 1000 in a binary phase-separating system. The system size is $128^{3}$. The structure factor measurements are averaged over ten independent simulations.}\label{fig:bips_sr} \end{figure} We actually compute the spherically averaged structure factor, given by \begin{equation} S(k, t) = \frac{\sum_{\hat k}s({\bf k}, t)}{\sum_{\hat k} 1}, \label{eq:castf} \end{equation} where $k = 2\pi n / L, n = 0,1,2,...,L$, and the sum $\sum_{\hat k}$ is over a spherical shell defined by $(n - \frac{1}{2}) \le \frac{|{\bf k}|L}{2\pi} < (n + \frac{1}{2})$. The resolution of $S(k, t)$ depends on $k_c$, the Nyquist cutoff frequency associated with the lattice (for a real-space sampling interval of $\Delta$ the cutoff frequency is $1/(2\Delta)$); above this value of the frequency there is only spurious information due to aliasing. In our case, $k_c = (2\pi/L)n_c$, where we have chosen $n_c$ to be the maximum possible value, which is half the lattice size. Fig.~\ref{fig:bips_sr} shows the temporal evolution of $S(k, t)$ for the case of two immiscible fluids. As time increases, the peak of $S(k, t)$ shifts to progressively smaller wave numbers and its height increases. This behaviour is characteristic of domain coarsening and serves as a useful comparison for the surfactant-containing systems described below. \subsection{Binary amphiphilic fluid phases: from monomers to sponges.} In two dimensions the behaviour of the binary water-amphiphile lattice-gas fluid is characterised by a {\it critical micelle concentration} (CMC), below which the fluid is a solution of monomeric amphiphiles, and above which the monomers form circular micelles. Further increase in amphiphile concentration in 2D simply results in more micelles, the micelles retaining their characteristic size. The situation in three dimensions is more complex. A CMC is still present for the formation of spherical micelles, but at higher concentrations new structures appear. As the concentration of surfactant increases the number of spherical micelles increases, until a second critical concentration is reached, beyond which wormlike micelles are the preferred structure. When the concentration of surfactant is high enough that both the surfactant and water phases percolate throughout the system, a sponge phase results.
We identify this sponge phase with the $ L_3 $ phase described by Gompper and Schick~\cite{bib:gs2}. These concentrations have been determined in our model for $ \beta =1 $ as: \begin{center} $0.006\leq\rho_{\mathrm{spherical}} \leq 0.012$\\ $0.08 \leq \rho_{\mathrm{wormlike}} \leq 0.25$\\ $0.25 \leq \rho_{L_{3}}$\\ \end{center} where $\rho$ is the reduced density; that is, the fraction of lattice sites occupied by surfactant particles. The description of these different regimes as distinct phases, with `critical' concentrations separating them, may be somewhat misleading. There is a large degree of phase coexistence. Individual monomers may join or leave a micelle in the spherical micelle regime, and the kinetics of simple micelle formation can be modelled theoretically on the basis of a Becker-D\"{o}ring theory~\cite{bib:cw,bib:mdmlc}. The critical concentration for the formation of spherical micelles is well defined in our model, with no micelles seen below this concentration. The formation of more complex structures, however, appears to take place by more general (Smoluchowski-type) aggregation processes; wormlike micelles are formed from the coalescence of spherical micelles. Such behaviour reflects the highly dynamic nature of our model. \begin{figure}[htp] \centering \includegraphics[width=0.4\textwidth,height=0.4\textwidth]{S_micelles_32} \includegraphics[width=0.4\textwidth,height=0.4\textwidth]{S_micelles_64} \caption[fig:tif]{Spherical micelles in water at surfactant concentration 0.008, timestep 1000. The left figure shows a $32^{3}$ system. Arrows represent the average surfactant direction per site, overlaid on an isosurface of surfactant concentration. The right figure shows a $64^{3}$ system. The left half of this figure displays arrows of the average surfactant direction at a site, while the right half shows an isosurface of surfactant concentration.}\label{fig:bws_fig1} \end{figure} We performed simulations designed to access the monomeric, spherical micelle, wormlike micelle and sponge phases. All started with random initial conditions and the coupling constants defined in Section~\ref{sec:couple}. \begin{figure}[htp] \centering \includegraphics[width=0.6\textwidth]{SR_BWS} \caption[fig:tif]{Average value of the magnitude of the wavevector $k$ as a function of time for varying surfactant concentrations on a $64^{3} $ lattice. The monomeric surfactant solution (solid line) is at a reduced surfactant concentration of 0.001; spherical micelles at a reduced surfactant concentration of 0.01; wormlike micelles at a reduced surfactant concentration of 0.1; and the $L_{3}$ sponge phase at a reduced surfactant concentration of 0.3.}\label{fig:bws_fig2} \end{figure} To determine the stability of these structures, we calculated spherically-averaged structure functions of the surfactant density. In Fig.~\ref{fig:bws_fig2}, we plot the temporal evolution of the characteristic wave number, \[ \langle k(t) \rangle = \frac{\sum_{k=0}^{k_c} k S(k, t)} {\sum_{k=0}^{k_c} S(k, t)}, \] the inverse of which is a measure of the average domain size. We see that in the low amphiphile concentration case the characteristic size remains consistent with a random configuration of solubilised monomers. For the cases where visualisation establishes the presence of more complex structures, there is fast initial growth of the surfactant domains which soon levels off to some constant size; see Figs.~\ref{fig:bws_fig3} and~\ref{fig:bws_fig4}.
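All of the diagnostics used in this section, the structure factor $s({\bf k},t)$, its spherical average $S(k,t)$ of Eq.~(\ref{eq:castf}), and the characteristic wave number $\langle k(t)\rangle$, reduce to a few lines once a fast Fourier transform is available. A minimal sketch (Python with NumPy, for illustration only; this is not the analysis code used to produce the figures):
\begin{verbatim}
import numpy as np

def spherical_structure_factor(q):
    """S(k) of a scalar field q on an L^3 periodic lattice."""
    L = q.shape[0]
    s3d = np.abs(np.fft.fftn(q - q.mean()))**2 / q.size  # s(k) on the grid
    m = np.fft.fftfreq(L) * L                            # integer indices
    mx, my, mz = np.meshgrid(m, m, m, indexing="ij")
    # Shell n collects (n - 1/2) <= |k| L/(2 pi) < (n + 1/2), up to Nyquist.
    n = np.rint(np.sqrt(mx**2 + my**2 + mz**2)).astype(int)
    n_c = L // 2
    mask = n <= n_c
    S = np.bincount(n[mask], weights=s3d[mask], minlength=n_c + 1)
    S /= np.maximum(np.bincount(n[mask], minlength=n_c + 1), 1)
    return 2.0 * np.pi * np.arange(n_c + 1) / L, S

def mean_wavenumber(k, S):
    """<k>; its inverse measures the average domain size."""
    return np.sum(k * S) / np.sum(S)

# Example: an uncorrelated colour-charge field has a flat spectrum.
rng = np.random.default_rng(0)
k, S = spherical_structure_factor(rng.choice([-1.0, 1.0], (64, 64, 64)))
print(mean_wavenumber(k, S))
\end{verbatim}
Tracking $\langle k(t)\rangle$ as a simulation evolves then distinguishes coarsening (a drifting peak) from arrested self-assembly (a plateau).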
\begin{figure}[htp] \centering \includegraphics[width=1.0\textwidth]{W-micelles_32} \caption[fig:bws_fig3]{Wormlike micelles in water at a reduced surfactant concentration of 0.1 for a $32^3$ system. The same structures are present in larger systems, but we display this snapshot for clarity. The green isosurface shows the surfactant concentration at a level of 5 particles per lattice site.}\label{fig:bws_fig3} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=1.0\textwidth]{L3_Sponge} \caption[fig:bws_fig4]{$L_{3}$ sponge phase at a reduced surfactant concentration of 0.3. The green isosurface shows the surfactant concentration at a level of 5 particles per site. The system size is $64^{3}$.}\label{fig:bws_fig4} \end{figure} \subsection{Ternary phases: lamellae} We first investigate the stability of a lamellar structure, which is composed of alternating layers of oil-rich and water-rich phases separated by a monolayer of surfactant molecules. We look at the system with and without surfactant present, so that a comparison can be made. It is not clear what influence surfactant will have on such a lamellar structure in three dimensions. We are interested in the ability of surfactant to stabilise large areas of interface by lowering the interfacial tension. We set up the initial configuration of the system with layers of oil and water four sites wide, all sites having a reduced density of $0.5$. If our model exhibits the correct behaviour, we expect there to be a critical density of surfactant required at the oil-water interfaces in order for the layered structure to be stable. Consequently, we set up a simulation with a single-site-wide layer of surfactant at each of the oil-water interfaces. Snapshots from these simulations are shown in Figs.~\ref{fig:tern_fig1} and \ref{fig:tern_fig2}; the former is the pure oil-water case with coupling coefficient $\alpha = 1.0$ and inverse temperature $\beta = 1.0$, while the latter has surfactant monolayers present. \begin{figure}[htp] \centering \includegraphics[width=0.4\textwidth]{BL_48_03_0000} \includegraphics[width=0.4\textwidth]{BL_48_03_1000} \caption[fig:tif]{Binary oil-water lamellae at timesteps $t=0$ (left) and $t=500$ (right) for a $48^{3}$ system. The red and blue colourings on the slice planes indicate oil and water respectively, while the blue isosurface shows the interface between oil and water. For clarity the isosurface is shown for only half of the system.}\label{fig:tern_fig1} \end{figure} The fluctuations present in the model enable oil and water particles from the initially separated layers to move; they thus come under the influence of the colour field gradients produced by other layers of the same fluid type. For these parameter choices there is an inherent tendency for the oil and water to act as immiscible fluids and phase separate ({\it cf.}\ Fig.~\ref{fig:bows_fig1}), precisely as is shown in Fig.~\ref{fig:tern_fig1}. Fig.~\ref{fig:tern_fig2} shows the case where surfactant monolayers coat the oil-water interfaces. Here we see that the initial periodic structure is stabilised despite the presence of large amounts of oil-water interface. \begin{figure}[htp] \centering \includegraphics[width=0.4\textwidth]{ternary_lam_0000} \includegraphics[width=0.4\textwidth]{ternary_lam_1000} \caption[fig:tif]{Ternary lamellae at $t=0$ (left), and $t=1000$ (right). The isosurface shows the oil-water interface.
The system size is $40^{3}$.}\label{fig:tern_fig2} \end{figure} \subsection{Ternary phases: oil-in-water (water-in-oil) and bicontinuous microemulsions.} \begin{figure}[htp] \centering \includegraphics[width=0.3\textwidth]{Droplet_oil} \includegraphics[width=0.3\textwidth]{Droplet_surf} \caption[fig:ternary3_fig1]{Oil-in-water droplet phase arising from a random initial condition. Oil (red, left) and surfactant (green, right) isosurfaces.} \label{fig:ternary3_fig1} \end{figure} The ternary behaviour of amphiphilic systems shows the same increase in complexity between two and three dimensions that we have seen in the binary water-surfactant systems. One simplifying feature of our model is the symmetry between oil and water in the interactions producing the phase behaviour. We discuss here the two most basic of the many ternary phases, the oil-in-water droplet and the bicontinuous {\it microemulsion} phases. In the oil-in-water droplet phase, oil exists as the minority phase in a continuous background of water and surfactant. If sufficient surfactant is present in the system the spinodal decomposition of a quenched uniform mixture of the three components will be arrested, and the system will reach an equilibrium state consisting of droplets of oil in water with surfactant stabilising the oil-water interfaces. This phase may be regarded as a perturbation of a binary spherical micelle phase, the micelles now being swollen with oil. If one increases the oil concentration, a regime is reached where both the oil and water domains percolate throughout the system. Such a phase is the ternary extension of the $L_{3}$ sponge phase in the binary system. \begin{figure}[htp] \centering \includegraphics[width=0.7\textwidth]{ternary_sr} \caption[fig:ternary3_fig2]{Structure factor mean $k$ value for the binary phase separation, ternary coalescing droplet phase, swollen wormlike micellar phase, and bicontinuous microemulsion phases.}\label{fig:ternary3_fig2} \end{figure} In both these cases, the complete separation of the oil and water phases, which is the equilibrium state for a binary immiscible fluid, is prevented by the presence of the surfactant. In order to quantify this result, and to compare with the previous binary case, we calculate the mean $k$ value of the spherically averaged structure factor of the colour charge, and plot the result in Fig.~\ref{fig:ternary3_fig2}. In order to reproduce the oil-in-water droplet phase, we set up two simulations with a random initial configuration and a reduced density of oil of 0.05. The concentration of surfactant was varied between the two simulations, the reduced surfactant density being 0.01 and 0.21 respectively. The total reduced density for both simulations was 0.5; to maintain consistency between our various simulations, we continue to use the numerical values for the coupling constants defined in Section~\ref{sec:couple}. Fig.~\ref{fig:ternary3_fig1} displays a snapshot of the simulation with a reduced surfactant density of 0.01 at timestep 1000, showing the oil (red) and surfactant (green) isosurfaces. The surfactant is clearly adsorbed at the interface. However, Fig.~\ref{fig:ternary3_fig2} shows, through the time evolution of the average $k$ value of the spherically averaged structure factor, that although the phase separation is slowed by the presence of surfactant, the average domain size in the system is increasing with time.
For the case with reduced surfactant density 0.21, one observes complete arrest of the phase separation, with wormlike oil domains remaining at a constant size, as shown by the dashed line in Fig.~\ref{fig:ternary3_fig2}. We refer to this morphology as a swollen wormlike micellar phase. A snapshot of this simulation taken at timestep 4000 is shown in Fig.~\ref{fig:ternary3_fig4}. \begin{figure}[htp] \centering \includegraphics[width=0.3\textwidth]{Sponge_t_water} \includegraphics[width=0.3\textwidth]{Sponge_t_surf} \includegraphics[width=0.3\textwidth]{Sponge_t_oil} \caption[fig:ternary3_fig3]{Bicontinuous microemulsion phase at timestep 4000. Left: water isosurface; middle: surfactant isosurface; right: oil isosurface. The system size is $64^{3}$. Isosurfaces display concentrations of water, surfactant and oil, respectively, at a level of five particles per site.}\label{fig:ternary3_fig3} \end{figure} To obtain the bicontinuous sponge regime within the model's phase diagram, we simply increase the relative amount of oil present in the system. Hence this simulation has a random initial mixture with a reduced density $0.5$ and a $0.83:1.0:0.83$ oil-to-surfactant-to-water ratio. Fig.~\ref{fig:ternary3_fig3} displays the results of this simulation at timestep 4000, showing the percolating oil (red) and water (blue) isosurfaces, as well as the surfactant (green) isosurface. \begin{figure}[htp] \centering \includegraphics[width=0.4\textwidth]{Droplet3_4000} \includegraphics[width=0.4\textwidth]{Droplet3_4000_sur} \caption[fig:ternary3_fig4]{Stable oil-in-water droplet phase at timestep 4000. Red and green isosurfaces show the oil and surfactant concentrations at a level of five particles per site. The system size is $64^{3}$.}\label{fig:ternary3_fig4} \end{figure} These results show the ability of our model to correctly reproduce the effects of surfactant on the phase separation dynamics of binary immiscible fluids. A quantitative study of the domain growth behaviour for both binary and ternary systems will be described in a future paper. \subsection{Vesicles} \label{sec:vesicle} Membrane theory treatments of amphiphilic systems predict that for large binary water-surfactant systems a bilayer can reduce its energy by closing its boundary, creating a large structure known as a vesicle~\cite{bib:gs2}. Such structures can exist because the energy required to bend the bilayer into a sphere is less than that required to maintain the bilayer edge. We do not expect vesicles to self-assemble in our current simulations, in part because such structures are usually metastable in real life and require the input of energy, for example via shearing or sonication. We can, however, construct such a structure as an initial condition and study its stability, or lack thereof, using our model. The initial condition we have used is shown in Fig.~\ref{fig:vesicle_fig1}. The concentration of surfactant is initially zero except within a spherical shell, 5 sites wide and 32 sites in radius, where the reduced density of surfactant is 1.0. All other sites contain water particles with a reduced density of 0.5. The total system size is $128^{3}$. For the following simulations, we have not used the canonical choice of coupling constants described in Section~\ref{sec:couple}.
Rather, in order to study the mechanisms by which these vesicles disintegrate, we have performed three separate simulations, with the following parameter choices: \begin{center} $\alpha=1.0$, $\epsilon=0.01$, $\mu=0.01$, $\zeta=1.0 $\\ $\alpha=1.0$, $\epsilon=0.01$, $\mu=1.00$, $\zeta=0.01 $\\ $\alpha=1.0$, $\epsilon=1.00$, $\mu=0.01$, $\zeta=0.01 $\\ \end{center} These three simulations should separate the effects of the three surfactant interactions on the vesicles. We also wish to characterise the timescale over which the vesicles are stable. In order to do this, we performed one simulation with oil particles replacing the surfactant particles in the initial condition, and with $\alpha=0.0$. This essentially labels the particles starting in the vesicle region, and allows us to track their subsequent motion. Not surprisingly, this simulation shows that the initial vesicle configuration is destroyed in the absence of interactions in less than ten timesteps. The initial condition and the state of the system at timestep 200 for the above three simulations are shown in Fig.~\ref{fig:vesicle_fig1}. These simulations show that the mechanisms driving the breakup of the vesicles are phase separation and micellisation. The vesicles are also unstable against fluctuations in concentration which produce holes in the structure. Once a hole has been produced it is observed to grow until the vesicle is destroyed. \begin{figure}[htp] \centering \includegraphics[width=0.4\textwidth]{Vesicle_init} \includegraphics[width=0.4\textwidth]{Vesicle_u1_200} \includegraphics[width=0.4\textwidth]{Vesicle_e1_200} \includegraphics[width=0.4\textwidth]{Vesicle_z1_200} \caption[fig:tif]{Vesicle disintegration. Clockwise from top left: initial condition; simulation with $\epsilon=0.01$, $\mu=1.00$, $\zeta=0.01 $ at timestep 200; simulation with $\epsilon=1.00$, $\mu=0.01$, $\zeta=0.01 $ at timestep 200; simulation with $\epsilon=0.01$, $\mu=0.01$, $\zeta=1.0 $ at timestep 200. All snapshots display an isosurface showing the reduced surfactant density at a level of 5 particles per site for half the vesicle.}\label{fig:vesicle_fig1} \end{figure} \section{Conclusions} \label{sec:conclusions} We have extended our hydrodynamic lattice-gas model for the dynamics of binary and ternary amphiphilic fluids from two to three dimensions. We have shown that our model exhibits the correct 3D phenomenology using a combination of visual and analytic techniques. Experimentally observed self-assembling structures form in our simulations in a consistent manner when the relative concentrations of the three components are varied. Binary immiscible, binary amphiphilic and ternary amphiphilic behaviour are all captured using a single set of coupling constants. We have also shown that studies of vesicle metastability are possible using this model with different choices of the coupling constants. Work is currently in progress on a wide range of amphiphilic fluid systems, including measurements of viscosity and surface tension, and the study of growth laws in amphiphilic self-assembly processes. Studies of amphiphilic fluid flow in porous media, which have previously been performed in 2D~\cite{bib:pvcjb}, are now underway using the 3D model~\cite{bib:pvc}. Indeed there is a rich seam of problems related to amphiphilic fluids which may be mined using this model. Our work confirms the suitability of lattice-gas automata for the modelling and simulation of such complex fluid problems in both two and three dimensions.
\section*{Acknowledgements} We are indebted to numerous people and organisations for their support of and contributions to this work. They include Jean-Bernard Maillet and David Bailey at Schlumberger Cambridge Research, Silicon Graphics Incorporated (particularly Bart van Bloemann Waanders, Daron Green and Rob Jenkins), Oxford Supercomputing Centre (particularly Jeremy Martin and Kathryn Measures), the EPSRC E7 High Performance Computing Consortium (particularly Sujata Krishna), the Edinburgh Parallel Computing Centre (particularly Mario Antonioletti), and the National Computational Science Alliance in Illinois, USA. PJL would like to thank EPSRC and Schlumberger Cambridge Research for funding his CASE studentship award. The collaboration between PVC and BMB was supported by NATO grant number CRG950356. BMB was supported in part by the United States Air Force Office of Scientific Research under grant number F49620-95-1-0285.
\section{Introduction} Let $\Omega\subset \mathbb{R}^d$, $d=2,3$, be an open polygonal or polyhedral domain. We consider the following modified Cahn-Hilliard-Darcy-Stokes problem with natural and no-flux/no-flow boundary conditions~\cite{choksi03,choksi11,ohta86,onuki97,zvelindovsky98a,zvelindovsky98b}: \begin{subequations} \begin{eqnarray} \partial_t \phi = \varepsilon\Delta \mu -\nabla\cdot\left({\bf u} \phi \right) ,\,\,&&\text{in} \,\Omega_T , \label{eq:CH-mixed-a-alt} \\ \mu = \varepsilon^{-1}\left(\phi^3-\phi\right)-\varepsilon \Delta \phi + \xi,\,\,&&\text{in} \,\Omega_T, \label{eq:CH-mixed-b-alt} \\ -\Delta \xi = \theta\left(\phi-\overline{\phi}_0\right),\,\,&&\text{in} \,\Omega_T, \label{eq:CH-mixed-c-alt} \\ \omega\partial_t{\bf u} -\lambda\Delta{\bf u} +\eta{\bf u} +\nabla p = \gamma \mu \nabla\phi ,\,\,&&\text{in} \,\Omega_T , \label{eq:CH-mixed-d-alt} \\ \nabla\cdot {\bf u} = 0 ,\,\,&&\text{in} \,\Omega_T , \label{eq:CH-mixed-e-alt} \\ \partial_n \phi =\partial_n \mu = \partial_n \xi = 0, \,{\bf u} = {\bf 0}, \,\,&&\text{on} \, \partial \Omega\times (0,T) , \label{eq:CH-mixed-f-alt} \end{eqnarray} \end{subequations} where $\overline{\phi}_0 = \frac{1}{|\Omega|} \int_{\Omega} \phi_0({\bf x}) d{\bf x}$. We assume that the model parameters satisfy $\varepsilon, \gamma, \lambda >0$ and $\eta, \omega, \theta \ge 0$. In the singular limit $\omega = 0$, we drop the initial conditions for ${\bf u}$. We remark that it is possible to replace the right-hand-side of Eq.~\eqref{eq:CH-mixed-d-alt}, the excess forcing due to surface tension, by the term $-\gamma \phi\nabla\mu$. The equivalence of the resulting PDE model with that above can be seen by redefining the pressure appropriately. The latter form is what was used in the recent paper~\cite{collins13}. The model~\eqref{eq:CH-mixed-a-alt} -- \eqref{eq:CH-mixed-f-alt} can be used to describe the flow of a very viscous block copolymer fluid~\cite{choksi03, choksi11, ohta86, onuki97, zvelindovsky98a, zvelindovsky98b}. In Figure~\ref{fig1}, we show simulation snapshots using the equations to describe the phase separation of a block-copolymer in shear flow. The parameters are given in the caption. \begin{figure}[h!] \begin{center} \includegraphics[width=6in]{./fig1} \caption{Phase separation of a two-dimensional, very viscous block-copolymer fluid in shear flow. The parameters are $\Omega = (0,8)\times (0,4)$, $\epsilon =0.02$, $\gamma = 0.4$, $\theta = 15000$, $\omega = 0$, $\eta = 0$, $\overline{\phi}_0 = -0.1$. The shear velocity on the top and bottom is $\mp 2.0$, respectively. Periodic boundary conditions are assumed in the $x$-direction. The time unit referenced above is $\tau = 0.02$. The long-range $\theta$ term suppresses phase separation and coarsening, relative to the case $\theta=0$, and relatively long and thin phase domains emerge. This simulation compares well with other studies~\cite{zvelindovsky98a, zvelindovsky98b} that use a different dynamic density functional approach. 
With a slightly larger value of $\theta$, the phase domains remain as dots and can form into hexagonal patterns, as in~\cite{zvelindovsky98b}.} \label{fig1} \end{center} \end{figure} A weak formulation of \eqref{eq:CH-mixed-a-alt} -- \eqref{eq:CH-mixed-f-alt} may be written as follows: find $(\phi,\mu,\xi,{\bf u},p)$ such that \begin{eqnarray} \phi &\in& L^\infty\left(0,T;H^1(\Omega)\right)\cap L^4\left(0,T;L^\infty(\Omega)\right), \\ \partial_t \phi &\in& L^2\bigl(0,T; H^{-1}(\Omega)\bigr), \\ \mu &\in& L^2\bigl(0,T;H^1(\Omega)\bigr), \\ {\bf u} &\in& L^2\left(0,T;{\bf H}_0^1(\Omega)\right)\cap L^\infty\left(0,T;{\mathbf L}^2(\Omega)\right), \\ \partial_t{\bf u} &\in& L^2\left(0,T;{\bf H}^{-1}(\Omega) \right), \\ p &\in& L^2\left(0,T; L_0^2(\Omega)\right), \end{eqnarray} and there hold for almost all $t\in (0,T)$ \begin{subequations} \begin{align} \langle \partial_t \phi ,\nu \rangle + \varepsilon \,\aiprd{\mu}{\nu} + \bform{\phi}{{\bf u}}{\nu} &= 0 &&\qquad \forall \,\nu \in H^1(\Omega), \label{weak-mch-a} \\ \iprd{\mu}{\psi}-\varepsilon \,\aiprd{\phi}{\psi} - \varepsilon^{-1}\iprd{\phi^3-\phi}{\psi} - \iprd{\xi}{\psi}&= 0 &&\qquad \forall \,\psi\in H^1(\Omega), \label{weak-mch-b} \\ \aiprd{\xi}{\zeta}-\theta \,\iprd{\phi-\overline{\phi}_0}{\zeta}&= 0 &&\qquad \forall \,\zeta\in H^1(\Omega), \label{weak-mch-c} \\ \omega \, \langle \partial_t{\bf u},{\bf v}\rangle+ \lambda\,\aiprd{{\bf u}}{{\bf v}} + \eta\iprd{ {\bf u}}{ {\bf v}} -\cform{{\bf v}}{p} - \gamma \,\bform{\phi}{{\bf v}}{\mu}&= 0 &&\qquad \forall \,{\bf v}\in {\bf H}_0^1(\Omega), \label{weak-mch-d} \\ \cform{{\bf u}}{q} &= 0 &&\qquad \forall \,q\in L_0^2(\Omega), \label{weak-mch-e} \end{align} \end{subequations} where \begin{equation} \aiprd{u}{v} := \iprd{\nabla u}{\nabla v}, \quad \bform{\psi}{{\bf v}}{\nu} := \iprd{\nabla \psi \cdot {\bf v} }{\nu}, \quad \cform{{\bf v}}{q} := \iprd{\nabla \cdot{\bf v}}{q} , \end{equation} with the ``compatible'' initial data \begin{align} \phi(0) = \phi_0 \in H^2_N(\Omega) &:= \left\{v\in H^2(\Omega) \,\middle| \,\partial_n v = 0 \,\mbox{on} \,\partial\Omega \right\}, \\ {\bf u}(0) = {\bf u}_0 \in {\bf V} &:= \left\{{\bf v}\in {\bf H}_0^1(\Omega) \,\middle| \,\cform{{\bf v}}{q} = 0 , \,\forall \,q\in L_0^2(\Omega) \right\}. \end{align} Here we use the notations $H^{-1}(\Omega) := \left(H^1(\Omega)\right)^*$, ${\bf H}_0^1(\Omega):= \left[H_0^1(\Omega)\right]^d$, ${\bf H}^{-1}(\Omega) := \left({\bf H}_0^1(\Omega)\right)^*$, and $\langle \, \cdot \, , \, \cdot \, \rangle$ is the duality pairing between $H^{-1}(\Omega)$ and $H^1(\Omega)$ in the first instance and the duality pairing between ${\bf H}^{-1}(\Omega)$ and ${\bf H}_0^1(\Omega)$ in the second. Throughout the paper, we use the notation $\chi(t) := \chi(\, \cdot \, , t)\in X$, which views a spatiotemporal function as a map from the time interval $[0,T]$ into an appropriate Banach space, $X$. The system \eqref{weak-mch-a} -- \eqref{weak-mch-e} is mass conservative: for almost every $t\in[0,T]$, $\iprd{\phi(t)-\phi_0}{1} = 0$. This observation rests on the fact that $b(\phi,{\bf u},1) = 0$, for all $\phi\in L^2(\Omega)$ and all ${\bf u}\in{\bf V}$. Observe that the homogeneous Neumann boundary conditions associated with the phase variables $\phi$, $\mu$, and $\xi$ are natural in this mixed weak formulation of the problem. The existence of weak solutions is a straightforward exercise using the compactness/energy method. See, for example,~\cite{feng12}. To define the energy for this system, we need a norm on a subspace of $H^{-1}(\Omega)$.
With $L_0^2(\Omega)$ denoting those functions in $L^2(\Omega)$ with zero mean, we set \begin{equation} \mathring{H}^1(\Omega) = H^1(\Omega)\cap L_0^2(\Omega) \quad \text{and} \quad \mathring{H}^{-1}(\Omega) :=\left\{v\in H^{-1}(\Omega) \,\middle| \,\langle v , 1 \rangle = 0 \right\}. \end{equation} We define a linear operator $\mathsf{T} : \mathring{H}^{-1}(\Omega) \rightarrow \mathring{H}^1(\Omega)$ via the following variational problem: given $\zeta\in \mathring{H}^{-1}(\Omega)$, find $\mathsf{T}(\zeta)\in \mathring{H}^1(\Omega)$ such that \begin{equation} \aiprd{\mathsf{T}(\zeta)}{\chi} = \langle \zeta, \chi\rangle \qquad \forall \,\chi\in \mathring{H}^1(\Omega). \end{equation} $\mathsf{T}$ is well-defined, as is guaranteed by the Riesz Representation Theorem. The following facts are easily established. \begin{lem} \label{lem-negative-norm} Let $\zeta,\, \xi \in\mathring{H}^{-1}(\Omega)$ and, for such functions, set \begin{equation} \left(\zeta,\xi\right)_{H^{-1}} :=\aiprd{ \mathsf{T}(\zeta)}{\mathsf{T}(\xi)} =\iprd{\zeta}{\mathsf{T}(\xi)} = \iprd{\mathsf{T}(\zeta)}{\xi}. \label{crazy-inner-product} \end{equation} $\left(\, \cdot\, ,\, \cdot\, \right)_{H^{-1}}$ defines an inner product on $\mathring{H}^{-1}(\Omega)$, and the induced norm is equal to the operator norm: \begin{equation} \norm{\zeta}{H^{-1}} := \sqrt{\left(\zeta,\zeta\right)_{H^{-1}}} = \sup_{0\ne \chi\in\mathring{H}^1} \frac{\langle \zeta , \chi\rangle}{\norm{\nabla\chi}{L^2}} . \label{crazy-norm-minus-one} \end{equation} Consequently, for all $\chi\in H^1(\Omega)$ and all $\zeta\in\mathring{H}^{-1}(\Omega)$, \begin{equation} \left|\langle \zeta , \chi\rangle\right| \le \norm{\zeta}{H^{-1}} \norm{\nabla\chi}{L^2} . \end{equation} Furthermore, for all $\zeta\in L_0^2(\Omega)$, we have the Poincar\'{e} type inequality \begin{equation} \norm{\zeta}{H^{-1}} \le C \norm{\zeta}{L^2}, \end{equation} where $C>0$ is the usual Poincar\'{e} constant. \end{lem} Consider the energy \begin{eqnarray} E({\bf u},\phi) &=& \frac{\omega}{2\gamma} \norm{{\bf u}}{L^2}^2+ \frac{1}{4\varepsilon}\norm{\phi^2-1}{L^2}^2 +\frac{\varepsilon}{2} \norm{\nabla \phi}{L^2}^2 +\frac{\theta}{2}\norm{\phi-\overline{\phi}_0}{H^{-1}}^2 \nonumber \\ &=& \frac{\omega}{2\gamma} \norm{{\bf u}}{L^2}^2+ \frac{1}{4\varepsilon}\norm{\phi}{L^4}^4 - \frac{1}{2\varepsilon}\norm{\phi}{L^2}^2 + \frac{|\Omega|}{4\varepsilon} +\frac{\varepsilon}{2} \norm{\nabla \phi}{L^2}^2 +\frac{\theta}{2}\norm{\phi-\overline{\phi}_0}{H^{-1}}^2 , \label{continuous-energy} \end{eqnarray} which is defined for all ${\bf u} \in {\bf L}^2(\Omega)$ and $\phi\in\mathcal{A}:= \left\{\phi\in H^1(\Omega)\,\middle| \,\iprd{\phi-\overline{\phi}_0}{1}=0\right\}$. Clearly, if $\theta \ge 0$, then $E({\bf u},\phi) \ge 0$ for all ${\bf u} \in {\bf L}^2(\Omega)$ and $\phi\in \mathcal{A}$. For arbitrary $\theta\in\mathbb{R}$, $\varepsilon >0$, ${\bf u} \in {\bf L}^2(\Omega)$, and $\phi\in\mathcal{A}$, there exist positive constants $M_1=M_1(\varepsilon,\theta)$ and $M_2=M_2(\varepsilon,\theta)$ such that \begin{equation} 0 < M_1\left(\norm{{\bf u}}{L^2}^2+ \norm{\phi}{H^1}^2 \right) \le E({\bf u},\phi) +M_2. \end{equation} It is straightforward to show that weak solutions of \eqref{weak-mch-a} -- \eqref{weak-mch-e} dissipate the energy \eqref{continuous-energy}. In other words, \eqref{eq:CH-mixed-a-alt} -- \eqref{eq:CH-mixed-f-alt} is a conserved gradient flow with respect to the energy \eqref{continuous-energy}. 
Precisely, for any $t\in[0,T]$, we have the energy law \begin{equation} E({\bf u}(t),\phi(t)) +\int_0^t\left(\frac{\lambda}{\gamma}\norm{\nabla{\bf u}(s)}{L^2}^2 + \frac{\eta}{\gamma}\norm{{\bf u}(s)}{L^2}^2+\varepsilon\norm{\nabla\mu(s)}{L^2}^2\right) ds = E({\bf u}_0,\phi_0) . \label{pde-energy-law} \end{equation} Formally, one can also easily demonstrate that $\mu$ in \eqref{eq:CH-mixed-b-alt} is the variational derivative of $E$ with respect to $\phi$. In symbols, $\mu = \delta_\phi E$. The energy typically ``prefers'' the fluid phase states $\phi \approx \pm 1$ (the pure phases) separated by a diffuse interface of (small) thickness $\varepsilon$. However, the long-range energy described by the last term can change this picture~\cite{aristotelous13a, choksi11, choksi03, ohta86}. Specifically, when $\theta > 0$, the energy term $\frac{\theta}{2}\norm{\phi-\overline{\phi}_0}{H^{-1}}^2$ in \eqref{continuous-energy} is convex, and it tends to stabilize (or suppress) both the phase separation and the coarsening processes. This is observed in Fig.~\ref{fig1}. If $\theta < 0$ the term is concave and destabilizing. In this case, the process of phase separation will be enhanced. Herein we assume that $\theta \ge 0$. In the case that ${\bf u}\equiv 0$ -- which occurs if $\gamma=0$ -- the model \eqref{eq:CH-mixed-a-alt} -- \eqref{eq:CH-mixed-f-alt} reduces to the modified Cahn-Hilliard equation~\cite{choksi03,choksi11}, which was analyzed by Aristotelous~\emph{et al.}~\cite{aristotelous13a}. Their scheme consisted of a convex splitting method for time discretization and a discontinuous Galerkin finite element method for space discretization. They showed that their mixed, fully discrete scheme was unconditionally energy stable, unconditionally uniquely solvable, and optimally convergent in the energy norm in two dimensions. Collins~\emph{et al.}~\cite{collins13} used a convex-splitting method in time and a finite difference method in space to devise an energy stable method for a system similar to \eqref{eq:CH-mixed-a-alt} -- \eqref{eq:CH-mixed-f-alt}, though they did not prove convergence or establish error estimates. In the present work, we wish to generalize both~\cite{aristotelous13a, collins13}, in a certain sense, and prove convergence and optimal error estimates for a convex-splitting scheme for \eqref{eq:CH-mixed-a-alt} -- \eqref{eq:CH-mixed-f-alt} using a standard Galerkin finite element discretization of space, as opposed to the discontinuous Galerkin finite element setting used in~\cite{aristotelous13a} or the finite difference setting used in~\cite{collins13}. We remark that, as in~\cite{aristotelous13a}, the $\theta$ ``growth'' term makes the analysis somewhat different from other works involving the Cahn-Hilliard equations. Specifically, analysis of this term requires the introduction of a discrete negative norm, as in~\cite{aristotelous13a}. The goal of this paper is to construct a fully discrete, unconditionally energy stable, unconditionally uniquely solvable, and convergent mixed finite element scheme for the Cahn-Hilliard-Darcy-Stokes system \eqref{eq:CH-mixed-a-alt} -- \eqref{eq:CH-mixed-f-alt}. Energy stability means that the discrete solutions dissipate the energy in time, in analogy with the PDE model. The convex-splitting approach that we will employ was popularized by Eyre~\cite{eyre98} and has been used by many others \cite{collins13, elliott89b, feng12, wang11, wise09a}.
In the convex-splitting framework, one treats the contribution from the convex part implicitly and the contribution from the concave part explicitly. This treatment promotes the scheme's energy stability and unique solvability, both properties being unconditional with respect to the time and space step sizes. Galerkin numerical methods for the Cahn-Hilliard-Navier-Stokes and the Allen-Cahn-Navier-Stokes systems have been investigated in the recent papers~\cite{abels12, feng06, feng07b, feng12, kay08, kay07, grun13, grun14, shen10a, shen10b}. The rigorous analyses of numerical schemes -- mostly for the matched-density \emph{CHNS} system -- can be found in~\cite{feng06, feng07b, feng12, kay08, kay07, grun13, shen10a, shen10b}. Specifically, there have been convergence proofs for these schemes, but all of these analyses focus on two types of limited convergence results: (i) error estimates and convergence rates for the semi-discrete setting (time continuous)~\cite{feng07b, kay08} and/or (ii) abstract convergence results with no convergence rates~\cite{feng07b, feng12, grun13, kay08}. \emph{Optimal error estimates for the fully discrete schemes are lacking in the literature.} Kay~\emph{et al.} developed both a semi-discrete and a fully discrete mixed finite element method for the Cahn-Hilliard-Navier-Stokes system of equations. For the semi-discrete model, they were able to show unconditional stabilities resulting from the discrete energy law. For the fully discrete model, they used a first-order implicit-explicit Euler method to discretize time and were able to show conditional energy stability, with a restriction on the time step. They obtained optimal error (convergence) rates for the semi-discrete model, but only an abstract convergence result for the fully discrete model. In \cite{grun13}, Gr\"{u}n proved the abstract convergence of a fully discrete finite element scheme for a diffuse interface model for two-phase flow of incompressible, viscous fluids with different mass densities. No convergence rates were presented in his paper. Feng~\cite{feng06} presented a fully discrete mixed finite element method for the Cahn-Hilliard-Navier-Stokes system of equations. The time discretization used is a first-order implicit Euler method, with the exception of a stabilization term which is treated explicitly. Conditional stability for the basic energy law is established, along with abstract convergence of the finite element model to the PDE model. However, no additional stability estimates are presented beyond the estimates achieved from the energy law. Feng~\emph{et al.}~\cite{feng07b} developed both semi-discrete and fully discrete finite element models for the non-steady Stokes-Allen-Cahn system of equations. For both the semi-discrete and fully discrete models, conditional energy stability is established. Optimal error estimates are obtained for the semi-discrete (time-continuous) scheme, while abstract convergence is proven for the fully discrete model. Shen and Yang~\cite{shen10b} derived a phase field model for two-phase incompressible flows with variable density, without assuming a small density ratio, and constructed several efficient time discretization schemes. For each of the schemes, a discrete energy law similar to the continuous energy law is established. Finally, they presented several numerical results to illustrate the effectiveness and correctness of their numerical schemes. However, no convergence or error analyses are presented in the paper.
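To make the preceding discussion concrete for the energy \eqref{continuous-energy}, write $E = E_c - E_e$, where
\begin{displaymath}
E_c({\bf u},\phi) := \frac{\omega}{2\gamma} \norm{{\bf u}}{L^2}^2+ \frac{1}{4\varepsilon}\norm{\phi}{L^4}^4 + \frac{|\Omega|}{4\varepsilon} +\frac{\varepsilon}{2} \norm{\nabla \phi}{L^2}^2 +\frac{\theta}{2}\norm{\phi-\overline{\phi}_0}{H^{-1}}^2 , \qquad E_e(\phi) := \frac{1}{2\varepsilon}\norm{\phi}{L^2}^2 .
\end{displaymath}
Both parts are convex, the former because $\theta \ge 0$. Treating the variational derivative of $E_c$ implicitly and that of $E_e$ explicitly is precisely what produces the term $\varepsilon^{-1}\left(\left(\phi_h^m\right)^3 - \phi_h^{m-1}\right)$ in the scheme of Section~\ref{subsec-defn} below.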
The work presented in this paper on the modified Cahn-Hilliard-Darcy-Stokes system is unique in the following sense. We are able to prove unconditional unique solvability, unconditional energy stability, and optimal error estimates for a fully discrete finite element scheme in three dimensions. Specifically, the stability and solvability statements we prove are \emph{completely unconditional with respect to the time and space step sizes}. In fact, all of our \emph{a priori} stability estimates hold completely independently of the time and space step sizes. We use a bootstrapping technique to leverage the energy stabilities to achieve unconditional $L^{\infty}(0,T; L^{\infty}(\Omega))$ stability for the phase field variable $\phi_h$ and unconditional $L^{\infty}(0,T; L^2(\Omega))$ stability for the chemical potential $\mu_h$. With these stabilities in hand we are able to prove optimal error estimates for $\phi_h$ and $\mu_h$ in the appropriate energy norms. The paper is organized as follows. In Section~\ref{sec:defn-and-properties}, we define our mixed finite element convex-splitting scheme and prove the unconditional solvability and stability of the scheme. In Section~\ref{sec-error-estimates}, we prove error estimates for the scheme under suitable regularity assumptions for the PDE solution. In Section~\ref{sec:numerincal-experiments}, we present the results of numerical tests that confirm the rates of convergence predicted by the error estimates. \section{A Mixed Finite Element Convex Splitting Scheme} \label{sec:defn-and-properties} \subsection{Definition of the Scheme} \label{subsec-defn} Let $M$ be a positive integer and $0=t_0 < t_1 < \cdots < t_M = T$ be a uniform partition of $[0,T]$, with $\tau = t_i-t_{i-1}$, $i=1,\ldots ,M$. Suppose ${\mathcal T}_h = \left\{ K \right\}$ is a conforming, shape-regular, quasi-uniform family of triangulations of $\Omega$. For $r\in\mathbb{Z}^+$, define $\mathcal{M}_r^h := \left\{v\in C^0(\Omega) \, \middle| \,v|_K \in {\mathcal P}_r(K), \,\forall \,\, K\in \mathcal{T}_h \right\}\subset H^1(\Omega)$ and $\mathcal{M}_{r,0}^h :=\mathcal{M}_r^h\cap H_0^1(\Omega)$. For a given positive integer $q$, we set $S_h := \mathcal{M}_q^h$; $\mathring{S}_h := S_h\cap L_0^2(\Omega)$; \begin{displaymath} {\bf X}_h := \left\{ {\bf v}\in \left[C^0(\Omega)\right]^d \,\middle| \,v_i \in \mathcal{M}_{q+1,0}^h, \,i = 1, \ldots , d \right\}; \ {\bf V}_h := \left\{{\bf v} \in {\bf X}_h \,\middle| \,\left(\nabla\cdot {\bf v}, w \right) = 0, \forall \,\, w\in \mathring{S}_h \right\}. \end{displaymath} Note that ${\bf V}_h \not\subset {\bf V}$, in general. 
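For readers who wish to experiment with these spaces, they are easy to assemble in a standard finite element library. The following is a minimal sketch in legacy FEniCS/DOLFIN syntax (Python), included for illustration only: the mesh, the degree $q=1$, and all names are placeholders, and this is not the code used for the computations reported in Section~\ref{sec:numerincal-experiments}.
\begin{verbatim}
from fenics import *

mesh = UnitCubeMesh(16, 16, 16)   # placeholder for a triangulation T_h
q = 1                             # polynomial degree for S_h

# S_h: continuous piecewise polynomials of degree q (for phi, mu, xi, p).
P_q  = FiniteElement("Lagrange", mesh.ufl_cell(), q)
# X_h: continuous piecewise vector polynomials of degree q+1 (velocity).
P_q1 = VectorElement("Lagrange", mesh.ufl_cell(), q + 1)

# Mixed space for (phi, mu, xi, u, p); the velocity-pressure pair is the
# inf-sup-stable Taylor-Hood element.  The zero-mean constraints defining
# the spaces for xi and p would be enforced separately, e.g. with a real
# ("R") Lagrange multiplier element.
W = FunctionSpace(mesh, MixedElement([P_q, P_q, P_q, P_q1, P_q]))

# Homogeneous Dirichlet (no-flow) condition on the velocity component.
bc_u = DirichletBC(W.sub(3), Constant((0.0, 0.0, 0.0)), "on_boundary")
\end{verbatim}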
Our mixed convex-splitting scheme is defined as follows: for any $1\le m\le M$, given $\phi_h^{m-1} \in S_h$, ${\bf u}_h^{m-1}\in{\bf X}_h$, find $\phi_h^m,\mu_h^m\in S_h$, $\xi_h^m, p_h^m \in \mathring{S}_h$, and ${\bf u}_h^m\in{\bf X}_h$, such that \begin{subequations} \begin{align} \iprd{\delta_\tau \phi_h^m}{\nu} + \varepsilon \,\aiprd{\mu_h^m}{\nu} + \bform{\phi_h^{m-1}}{{\bf u}_h^m}{\nu}= \,& 0 , \, & \forall \, \nu \in S_h , \label{scheme-a} \\ \varepsilon^{-1} \,\iprd{\left(\phi_h^m\right)^3 -\phi_h^{m-1}}{\psi} + \varepsilon \,\aiprd{\phi_h^m}{\psi} -\iprd{\mu_h^m}{\psi} +\iprd{\xi_h^m}{\psi} = \,& 0 , \,&\forall \, \psi\in S_h , \label{scheme-b} \\ \aiprd{\xi_h^m}{\zeta}-\theta \,\iprd{\phi_h^m-\overline{\phi}_0}{\zeta} = \,& 0 , \,&\forall \, \zeta\in S_h , \label{scheme-c} \\ \iprd{\delta_\tau{\bf u}^m_h}{{\bf v}}+ \lambda \,\aiprd{{\bf u}^m_h}{{\bf v}} + \eta \,\iprd{ {\bf u}_h^m}{ {\bf v}} -\cform{{\bf v}}{p^m_h} - \gamma \,\bform{\phi_h^{m-1}}{{\bf v}}{\mu_h^m} = \,& 0 ,\, & \forall \, {\bf v}\in {\bf X}_h, \label{scheme-d} \\ \cform{{\bf u}_h^m}{q} = \,& 0 ,\, & \forall \, q\in \mathring{S}_h, \label{scheme-e} \end{align} \end{subequations} where \begin{equation} \delta_\tau \phi_h^m := \frac{\phi_h^m-\phi_h^{m-1}}{\tau}, \quad \phi_h^0 := R_h \phi_0 , \quad {\bf u}_h^0 := {\bf P}_h{\bf u}_0. \end{equation} The operator $R_h: H^1(\Omega) \to S_h$ is the Ritz projection: \begin{equation} \aiprd{R_h\phi - \phi}{\chi} = 0, \quad \forall \, \chi\in S_h, \quad \iprd{R_h \phi-\phi}{1}=0. \end{equation} The operator $\left({\bf P}_h,P_h\right): {\bf V}\times L_0^2 \to {\bf V}_h\times \mathring{S}_h$ is the Darcy-Stokes projection: \begin{align} \lambda \,\aiprd{{\bf P}_h {\bf u}- {\bf u}}{{\bf v}} + \eta \,\iprd{{\bf P}_h {\bf u}- {\bf u}} {{\bf v}} - \cform{{\bf v}}{P_h p -p} &= 0, \quad \forall \, {\bf v}\in {\bf X}_h, \\ \cform{{\bf P}_h {\bf u}- {\bf u}}{q} &= 0, \quad \forall \, q\in \mathring{S}_h. \end{align} \begin{rem} To shorten the presentation, we have set $\omega = 1$ (appearing in \eqref{eq:CH-mixed-d-alt}). We remark that the more general case $\omega >0$ can be considered in the analysis without any significant changes. Additionally, with some slight modifications, the singular limit case, $\omega = 0$, can be covered in the analysis that follows. In this case, one loses the stability ${\bf u}_h\in L^\infty(0,T;L^2(\Omega))$. For perspective, the analysis of Feng~\emph{et al.}~\cite{feng07b} requires ${\bf u}_h\in L^\infty(0,T;L^2(\Omega))$. \end{rem} \begin{rem} Note that $\iprd{\phi_h^0-\overline{\phi}_0}{1} =0$, where $\overline{\phi}_0$ is the initial mass average, which, in the typical case, satisfies $|\overline{\phi}_0| \le 1$. We also point out that, appealing to \eqref{scheme-a} and \eqref{scheme-e}, we have $\iprd{\phi_h^m-\overline{\phi}_0}{1} =0$, for all $m = 1, \ldots , M$, which follows because $\aiprd{\mu}{1} = 0$, for all $\mu\in S_h$, and $\bform{\phi_h}{{\bf u}}{1} = 0$, for all $\phi_h\in S_h$ and all ${\bf u}\in {\bf V}_h$. \end{rem} \begin{rem} \label{rem:initial-projection} The elliptic projections are used in the initialization for simplicity in the forthcoming analysis. We can use other (simpler) projections in the initialization step, as long as they have good approximation properties. \end{rem} \begin{rem} Note that it is not necessary for solvability and some basic energy stabilities that the $\mu$--space and the $\phi$--space be equal.
However, the proofs of the higher-order stability estimates, in particular those in Lemma~\ref{lem-a-priori-stability}, do require the equivalence of these spaces. Mass conservation of the scheme requires some compatibility of the $p$--space with that of the $\phi$--space, to obtain $\bform{\phi_h}{{\bf u}}{1} = 0$. For the flow problem we have chosen the inf-sup-stable Taylor-Hood element. One can also use the simpler MINI element. Recall that the stability of the Taylor-Hood element typically requires that the family of meshes ${\mathcal T}_h$ has the property that no tetrahedron/triangle in the mesh has more than one face/edge on the boundary~\cite{brenner08}. \end{rem} Now, we can define a scheme that is equivalent to that above. For any $1\le m\le M$, given $\varphi_h^{m-1} \in S_h$, ${\bf u}_h^{m-1}\in{\bf X}_h$, find $\varphi_h^m,\mu_h^m\in S_h$, $\xi_h^m \in \mathring{S}_h$, ${\bf u}_h^{m,0},{\bf u}_h^{m,1}\in{\bf X}_h$, $p_h^{m,0},p_h^{m,1}\in \mathring{S}_h$, such that \begin{subequations} \begin{align} \lambda \,\aiprd{{\bf u}^{m,0}_h}{{\bf v}} +\left(\eta+\frac{1}{\tau}\right) \iprd{ {\bf u}_h^{m,0}}{ {\bf v}} -\cform{{\bf v}}{p^{m,0}_h} -\frac{1}{\tau} \,\iprd{{\bf u}^{m-1}_h}{{\bf v}} = \, & 0 ,\, & \forall \, {\bf v}\in {\bf X}_h, \label{scheme-d-mean-zero-u0} \\ \cform{{\bf u}_h^{m,0}}{q} = \,& 0 ,\, & \forall \, q\in \mathring{S}_h, \label{scheme-e-mean-zero-u0} \end{align} \end{subequations} and \begin{subequations} \begin{align} \iprd{\frac{\varphi_h^m - \varphi_{h,\star}^{m-1}}{\tau}}{\nu} + \varepsilon \,\aiprd{\mu_h^m}{\nu} + \bform{\varphi_h^{m-1}}{{\bf u}_h^{m,1}}{\nu} = \,& 0 , & \forall \, \nu \in S_h , \label{scheme-a-mean-zero} \\ \varepsilon^{-1} \,\iprd{\left(\varphi_h^m+\overline{\phi}_0 \right)^3 -\varphi_h^{m-1}-\overline{\phi}_0}{\psi} + \varepsilon \,\aiprd{\varphi_h^m}{\psi} -\iprd{\mu_h^m}{\psi} +\iprd{\xi_h^m}{\psi} = \,& 0 , &\forall \, \psi\in S_h , \label{scheme-b-mean-zero} \\ \aiprd{\xi_h^m}{\zeta} - \theta \,\iprd{\varphi_h^m}{\zeta} = \,& 0 , &\forall \, \zeta\in S_h , \label{scheme-c-mean-zero} \\ \lambda \,\aiprd{{\bf u}^{m,1}_h}{{\bf v}} + \left( \eta +\frac{1}{\tau}\right) \iprd{{\bf u}^{m,1}_h}{ {\bf v}} - \cform{{\bf v}}{p^{m,1}_h} - \gamma \,\bform{\varphi_h^{m-1}}{{\bf v}}{\mu_h^m} = \,& 0 , & \forall \, {\bf v}\in {\bf X}_h, \label{scheme-d-mean-zero} \\ \cform{{\bf u}_h^{m,1}}{q} = \,& 0 , & \forall \, q \in \mathring{S}_h, \label{scheme-e-mean-zero} \end{align} \end{subequations} where \begin{equation} \varphi_{h,\star}^{m-1} := \varphi_h^{m-1} - \tau\mathcal{Q}_h\left( \nabla \varphi_h^{m-1} \cdot {\bf u}_h^{m,0}\right) \in S_h, \label{phi-m-1-star} \end{equation} and $\mathcal{Q}_h: L^2(\Omega) \to S_h$ is the $L^2$ projection, \emph{i.e.}, $\iprd{\mathcal{Q}_h \nu-\nu}{\chi}=0$, for all $\chi\in S_h$. For the initial data, we set \begin{equation} \varphi_h^0 := R_h \phi_0 - \overline{\phi}_0 , \quad {\bf u}_h^0 := {\bf P}_h{\bf u}_0. \end{equation} Hence, $\iprd{\varphi_h^0}{1}=0$. By setting $\nu\equiv 1$ in \eqref{scheme-a} and \eqref{scheme-a-mean-zero} and observing that $\aiprd{\varphi}{1} = 0$ for all $\varphi\in S_h$, one finds that, provided solutions for the two schemes exist, they are related via \begin{equation} \varphi_h^m +\overline{\phi}_0 = \phi_h^m, \quad \varphi_h^m\in\mathring{S}_h, \quad {\bf u}_h^m = {\bf u}_h^{m,0}+ {\bf u}_h^{m,1} \in {\bf X}_h, \quad p^m_h = p^{m,0}_h + p^{m,1}_h \in \mathring{S}_h \end{equation} for all $1\le m\le M$. The variables $\mu_h^m$ and $\xi_h^m$ are the same as before. 
Note that the average mass of $\mu_h^m$ will change with the time step $m$, \emph{i.e.}, $\iprd{\mu_h^m}{1} \ne \iprd{\mu_h^{m-1}}{1}$, in general. \begin{rem} The utility of this new, equivalent formulation is that we can straightforwardly show its unconditional unique solvability by convex optimization methods. Our arguments require that the velocity ${\bf u}_h^{m,1}$ is a linear function of $\mu_h^m$, as is the case in \eqref{scheme-d-mean-zero}. (See Lemma~\ref{lem-bilinear-ell}.) This was not the case in \eqref{scheme-d}, where ${\bf u}_h^m$ is an affine function of $\mu_h^m$. \end{rem} \subsection{Unconditional Solvability} \label{subsec-solvability} In this subsection, we show that our schemes are unconditionally uniquely solvable. We begin by defining some machinery for the solvability, as well as the stability and convergence analyses discussed later. First, consider the invertible linear operator $\mathsf{T}_h : \mathring{S}_h \rightarrow \mathring{S}_h$ defined via the following variational problem: given $\zeta\in \mathring{S}_h$, find $\mathsf{T}_h(\zeta)\in \mathring{S}_h$ such that \begin{equation} \aiprd{\mathsf{T}_h(\zeta)}{\chi} = \iprd{\zeta}{\chi} \qquad \forall \, \chi\in \mathring{S}_h. \end{equation} This clearly has a unique solution because $\aiprd{\, \cdot \, }{ \, \cdot \, }$ is an inner product on $\mathring{S}_h$. We now wish to define a mesh-dependent ``$-1$'' norm, \emph{i.e.}, a discrete analogue to the $H^{-1}$ norm. We omit the details of the next result, the discrete analogue of Lemma~\ref{lem-negative-norm}, for brevity. \begin{lem} \label{lem-negative-norm-discrete} Let $\zeta,\, \xi \in \mathring{S}_h$ and set \begin{equation} \left(\zeta,\xi\right)_{-1,h} :=\aiprd{\mathsf{T}_h(\zeta)}{\mathsf{T}_h(\xi)} =\iprd{\zeta}{\mathsf{T}_h(\xi)} = \iprd{\mathsf{T}_h(\zeta)}{\xi}. \label{crazy-inner-product-h} \end{equation} $\left(\, \cdot\, ,\, \cdot\, \right)_{-1,h}$ defines an inner product on $\mathring{S}_h$, and the induced negative norm satisfies \begin{equation} \norm{\zeta}{-1,h} := \sqrt{\left(\zeta,\zeta\right)_{-1,h}} = \sup_{0\ne \chi\in\mathring{S}_h} \frac{\iprd{\zeta}{\chi}}{\norm{\nabla\chi}{L^2}} . \label{crazy-norm-h} \end{equation} Consequently, for all $\chi\in S_h$ and all $\zeta\in\mathring{S}_h$, \begin{equation} \left|\iprd{\zeta}{\chi}\right| \le \norm{\zeta}{-1,h} \norm{\nabla\chi}{L^2} . \label{plus-1-minus-1-estimate} \end{equation} The following Poincar\'{e}-type estimate holds: \begin{equation} \norm{\zeta}{-1,h} \le C \norm{\zeta}{L^2}, \quad \forall \, \,\zeta\in\mathring{S}_h, \end{equation} for some $C >0$ that is independent of $h$. Finally, if $\mathcal{T}_h$ is globally quasi-uniform, then the following inverse estimate holds: \begin{equation} \norm{\zeta}{L^2} \le C h^{-1} \norm{\zeta}{-1,h}, \quad \forall \, \,\zeta\in\mathring{S}_h, \end{equation} for some $C >0$ that is independent of $h$.
\end{lem} \begin{lem} \label{lem-bilinear-ell} Given $\varphi_h^{m-1}\in\mathring{S}_h$, define the bilinear form $\ell_h^m:\mathring{S}_h\times\mathring{S}_h\to \mathbb{R}$ via \begin{equation} \label{weak-form-L-h} \ell_h^m(\mu,\nu):= \varepsilon \,\aiprd{\mu}{\nu} + \bform{\varphi_h^{m-1}}{{\bf u}}{\nu}, \end{equation} where, for each fixed $\mu\in\mathring{S}_h$, ${\bf u} = {\bf u}(\mu) \in {\bf X}_h$ and $p = p(\mu)\in \mathring{S}_h$ solve \begin{subequations} \begin{align} \lambda \,\aiprd{{\bf u}}{{\bf v}} + \left(\eta+\frac{1}{\tau}\right)\iprd{{\bf u}}{ {\bf v}} -\cform{{\bf v}}{p} - \gamma \,\bform{\varphi_h^{m-1}}{{\bf v}}{\mu} = \,& 0 ,\, & \forall \, {\bf v}\in {\bf X}_h, \label{symm-u} \\ \cform{{\bf u}}{q} = \,& 0 ,\, & \forall \, q\in \mathring{S}_h. \label{symm-p} \end{align} \end{subequations} Then $\ell_h^m(\, \cdot \, , \, \cdot \, )$ is a coercive, symmetric bilinear form and, therefore, an inner product on $\mathring{S}_h$. \end{lem} \begin{proof} The solvability and stability of the flow problem follow from the fact that $\left({\bf X}_h, \mathring{S}_h\right)$ form a stable pair for the Darcy-Stokes problem. Now, let $\mu_i\in\mathring{S}_h$, $i = 1,2$. Set ${\bf u}_i = {\bf u}(\mu_i) \in {\bf X}_h$ and $p_i = p(\mu_i)\in \mathring{S}_h$, $i=1,2$, with ${\bf u}$ and $p$ defined in \eqref{symm-u} and \eqref{symm-p} above. Then, with $\alpha,\beta \in \left\{1,2\right\}$, \begin{subequations} \begin{align} \lambda \,\aiprd{{\bf u}_\alpha}{{\bf u}_\beta} + \left(\eta+\frac{1}{\tau}\right)\iprd{{\bf u}_\alpha}{ {\bf u}_\beta} -\cform{{\bf u}_\beta}{p_\alpha} - \gamma \,\bform{\varphi_h^{m-1}}{{\bf u}_\beta}{\mu_\alpha} = \,& 0 , \\ \cform{{\bf u}_\beta}{p_\alpha} = \,& 0 , \end{align} \end{subequations} and setting $\alpha = 2$, $\beta = 1$ in the last two equations, we have \begin{align} \ell_h^m(\mu_1,\mu_2) &= \varepsilon \,\aiprd{\mu_1}{\mu_2} + \bform{\varphi_h^{m-1}}{{\bf u}_1}{\mu_2} \nonumber \\ &= \varepsilon \,\aiprd{\mu_1}{\mu_2} + \frac{\lambda}{\gamma} \,\aiprd{{\bf u}_2}{{\bf u}_1} + \frac{\eta+\frac{1}{\tau}}{\gamma}\iprd{{\bf u}_2}{ {\bf u}_1}. \end{align} It is now clear that $\ell_h^m(\, \cdot \, , \, \cdot \, )$ is a coercive, symmetric bilinear form on $\mathring{S}_h$. \end{proof} Owing to the last result, we can define an invertible linear operator ${\mathcal L}_{h,m} : \mathring{S}_h \rightarrow \mathring{S}_h$ via the following problem: given $\zeta\in \mathring{S}_h$, find $\mu\in \mathring{S}_h$ such that \begin{equation} \ell_h^m\left(\mu,\nu\right) = -\left(\zeta,\nu\right)_{L^2} \qquad \forall \, \nu\in \mathring{S}_h. \end{equation} This clearly has a unique solution because $\ell_h^m(\, \cdot \, , \, \cdot \, )$ is an inner product on $\mathring{S}_h$. We write ${\mathcal L}_{h,m}(\mu) = -\zeta$, or, equivalently, $\mu = -{\mathcal L}_{h,m}^{-1}(\zeta)$. We now wish to define another discrete negative norm. Again, we omit the details for the sake of brevity. \begin{lem} \label{lem-bilinear-negative-norm} Let $\zeta,\, \xi \in \mathring{S}_h$ and suppose $\mu_\zeta,\, \mu_\xi\in\mathring{S}_h$ are the unique weak solutions to ${\mathcal L}_{h,m}\left(\mu_\zeta\right) = -\zeta$ and ${\mathcal L}_{h,m}\left(\mu_\xi\right) = -\xi$. Define \begin{equation} \left(\zeta,\xi\right)_{{\mathcal L}_{h,m}^{-1}} :=\ell_h^m\left(\mu_\zeta,\mu_\xi\right) =-\left(\zeta,\mu_\xi\right)_{L^2} =-\left(\mu_\zeta,\xi\right)_{L^2}.
\label{crazy-inner-product-h-ell} \end{equation} $\left(\, \cdot\, ,\, \cdot\, \right)_{{\mathcal L}_{h,m}^{-1}}$ defines an inner product on $\mathring{S}_h$. The induced norm is \begin{equation} \norm{\zeta}{\mathcal{L}_{h,m}^{-1}} = \sqrt{\left(\zeta,\zeta\right)_{{\mathcal L}_{h,m}^{-1}}}, \quad \forall \, \zeta\in \mathring{S}_h. \label{crazy-norm-h-ell} \end{equation} \end{lem} Using our discrete negative norm, we can define a variational problem closely related to our fully discrete scheme. To keep the discussion brief, we omit the proof of the following result that guarantees the unique solvability of \eqref{scheme-a-mean-zero} -- \eqref{scheme-e-mean-zero}. See, for example,~\cite{feng12} for the details of a similar argument. \begin{lem} \label{lem-minimizer} Let $\varphi_h^{m-1}\in\mathring{S}_h$ be given. Take $\varphi_{h,\star}^{m-1}$ as in \eqref{phi-m-1-star}. For all $\varphi_h\in\mathring{S}_h$, define the nonlinear functional \begin{eqnarray} G_h(\varphi_h) &:=& \frac{\tau}{2}\norm{\frac{\varphi_h-\varphi_{h,\star}^{m-1}}{\tau}}{\mathcal{L}_{h,m}^{-1}}^2 +\frac{1}{4\varepsilon}\norm{\varphi_h+\overline{\phi}_0}{L^4}^4 +\frac{\varepsilon}{2}\norm{\nabla\varphi_h}{L^2}^2 \nonumber \\ && -\frac{1}{\varepsilon}\iprd{\varphi_h^{m-1}+\overline{\phi}_0}{\varphi_h} + \frac{\theta}{2}\norm{\varphi_h}{-1,h}^2. \end{eqnarray} $G_h$ is strictly convex and coercive on the linear subspace $\mathring{S}_h$. Consequently, $G_h$ has a unique minimizer, call it $\varphi_h^m\in\mathring{S}_h$. Moreover, $\varphi_h^m\in\mathring{S}_h$ is the unique minimizer of $G_h$ if and only if it is the unique solution to \begin{equation} \varepsilon^{-1}\iprd{\left(\varphi_h^m+\overline{\phi}_0\right)^3}{\psi} +\varepsilon \,\aiprd{\varphi_h^m}{\psi} - \iprd{\mu_{h,\star}^m}{\psi} +\iprd{\xi_h^m}{\psi} = \varepsilon^{-1}\iprd{\varphi_h^{m-1}+\overline{\phi}_0}{\psi} \label{nonlinear-1} \end{equation} for all $\psi\in\mathring{S}_h$, where $\mu_{h,\star}^m,\xi_h^m \in \mathring{S}_h$ are the unique solutions to \begin{align} \ell_h^m\left(\mu_{h,\star}^m,\nu\right) = -\iprd{\frac{\varphi_h^m-\varphi_{h,\star}^{m-1}}{\tau}}{\nu} & \qquad \forall \, \nu\in\mathring{S}_h, \label{nonlinear-2} \\ \aiprd{\xi_h^m}{\zeta} = \theta \,\iprd{\varphi_h^m}{\zeta} & \qquad \forall \, \zeta\in\mathring{S}_h. \label{nonlinear-3} \end{align} \end{lem} Finally, we are in a position to prove the unconditional unique solvability of our scheme. \begin{thm} \label{thm-existence-uniqueness} The scheme \eqref{scheme-a} -- \eqref{scheme-e}, or, equivalently, the scheme \eqref{scheme-a-mean-zero} -- \eqref{scheme-e-mean-zero}, is uniquely solvable for any mesh parameters $\tau$ and $h$ and for any of the model parameters. \end{thm} \begin{proof} Suppose $\iprd{\varphi_h^{m-1}}{1}=0$. It is clear that a necessary condition for solvability of \eqref{scheme-a-mean-zero} -- \eqref{scheme-b-mean-zero} is that \begin{equation} \left(\varphi_h^m,1\right) = \bigl(\varphi_h^{m-1},1\bigr) = 0, \end{equation} as can be found by taking $\nu\equiv 1$ in \eqref{scheme-a-mean-zero}. Now, let $ \varphi_h^m,\mu_{h,\star}^m \in\mathring{S}_h\times\mathring{S}_h$ be a solution of \eqref{nonlinear-1} -- \eqref{nonlinear-3}. (The other variables may be regarded as auxiliary.)
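Before proceeding, we observe how the mean value of the chemical potential is recovered: taking $\psi \equiv 1$ in \eqref{scheme-b-mean-zero}, and using $\iprd{\varphi_h^{m-1}}{1} = 0$ and $\iprd{\xi_h^m}{1} = 0$, forces the mean of $\mu_h^m$ to equal the quantity $\overline{\mu_h^m}$ defined next.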
Set \begin{equation} \overline{\mu_h^m}:=\frac{1}{\varepsilon|\Omega|}\iprd{(\varphi_h^m+\overline{\phi}_0)^3 - \left(\varphi_h^m+\overline{\phi}_0\right)}{1} =\frac{1}{\varepsilon|\Omega|}\iprd{(\varphi_h^m+\overline{\phi}_0)^3 }{1} -\frac{\overline{\phi}_0}{\varepsilon} , \label{chem-pot-average} \end{equation} and define $\mu_h^m:=\mu_{h,\star}^m+\overline{\mu_h^m}$. There is a one-to-one correspondence of the respective solution sets: $\varphi_h^m,\mu_{h,\star}^m \in\mathring{S}_h\times\mathring{S}_h$ is a solution to \eqref{nonlinear-1} -- \eqref{nonlinear-3}, if and only if $\varphi_h^m,\mu_h^m \in\mathring{S}_h\times S_h$ is a solution to \eqref{scheme-a-mean-zero} -- \eqref{scheme-e-mean-zero}, if and only if $ \phi_h^m,\mu_h^m \in S_h\times S_h$ is a solution to \eqref{scheme-a} -- \eqref{scheme-e}, where \begin{equation} \phi_h^m = \varphi_h^m+\overline{\phi}_0,\quad \mu_h^m = \mu_{h,\star}^m+\overline{\mu_h^m}. \end{equation} But \eqref{nonlinear-1} -- \eqref{nonlinear-3} admits a unique solution, which proves that \eqref{scheme-a} -- \eqref{scheme-e} and \eqref{scheme-a-mean-zero} -- \eqref{scheme-e-mean-zero} are uniquely solvable. \end{proof} \subsection{Unconditional Energy Stability} \label{subsec-energy-stability} We now show that the solutions to our scheme enjoy stability properties that are similar to those of the PDE solutions, and, moreover, these properties hold regardless of the sizes of $h$ and $\tau$. The first property, the unconditional energy stability, is a direct result of the convex decomposition represented in the scheme. \begin{lem} \label{lem-energy-law} Let $(\phi_h^m, \mu_h^m, {\bf u}_h^m) \in S_h\times S_h\times {\bf X}_h$ be the unique solution of \eqref{scheme-a}--\eqref{scheme-e}, with the other variables regarded as auxiliary. Then the following energy law holds for any $h,\, \tau >0$: \begin{align} E\left({\bf u}_h^{\ell},\phi_h^\ell\right) &+\tau \varepsilon \sum_{m=1}^\ell \norm{\nabla\mu_h^m}{L^2}^2 + \tau \frac{\lambda}{\gamma} \sum_{m=1}^\ell \norm{\nabla{\bf u}_h^m}{L^2}^2 + \tau \frac{\eta}{\gamma} \sum_{m=1}^\ell \norm{{\bf u}_h^m}{L^2}^2 \nonumber \\ &+ \tau^2 \sum_{m=1}^\ell \Biggl\{ \, \frac{\varepsilon}{2} \norm{\nabla\left(\delta_\tau \phi_h^m\right)}{L^2}^2+ \frac{1}{2\gamma} \norm{\delta_\tau{\bf u}_h^m}{L^2}^2 + \frac{1}{4\varepsilon}\norm{\delta_\tau( \phi_h^m)^2}{L^2}^2 \nonumber \\ &\quad + \frac{1}{2\varepsilon}\norm{ \phi_h^m \delta_\tau \phi_h^m}{L^2}^2 +\frac{1}{2\varepsilon}\norm{\delta_\tau \phi_h^m}{L^2}^2 +\frac{\theta}{2}\norm{\delta_\tau\phi_h^m}{-1,h}^2\, \Biggr\} = E\left({\bf u}_h^0,\phi_h^0\right), \label{ConvSplitEnLaw} \end{align} for all $0\leq \ell \leq M$. 
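Here, as the telescoping identities in the proof below show, the discrete energy is given, up to an additive constant, by \begin{displaymath} E\left({\bf u},\phi\right) = \frac{1}{4\varepsilon}\norm{\phi^2-1}{L^2}^2 +\frac{\varepsilon}{2}\norm{\nabla\phi}{L^2}^2 +\frac{\theta}{2}\norm{\phi-\overline{\phi}_0}{-1,h}^2 +\frac{1}{2\gamma}\norm{{\bf u}}{L^2}^2 . \end{displaymath}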
\end{lem} \begin{proof} We first set $\nu= \mu_h^m$ in \eqref{scheme-a}, $\psi = \delta_\tau \phi_h^m$ in \eqref{scheme-b}, $\zeta = -\mathsf{T}_h\left(\delta_\tau \phi_h^m\right)$ in \eqref{scheme-c}, ${\bf v} = \frac{1}{\gamma}{\bf u}_h^m$ in \eqref{scheme-d}, $q = \frac{1}{\gamma}p_h^m$ in \eqref{scheme-e}, to obtain \begin{eqnarray} \iprd{\delta_\tau \phi_h^m}{\mu_h^m} + \varepsilon \norm{\nabla\mu_h^m}{L^2}^2 + \bform{\phi_h^{m-1}}{{\bf u}_h^m}{\mu_h^m} &= & 0, \label{tested-energy-1} \\ \frac{1}{\varepsilon}\iprd{\left(\phi_h^m\right)^3 -\phi_h^{m-1}}{\delta_\tau\phi_h^m} + \varepsilon \,\aiprd{\phi_h^m}{\delta_\tau\phi_h^m} -\iprd{\mu_h^m}{\delta_\tau\phi_h^m} + \iprd{\xi_h^m}{\delta_\tau\phi_h^m} &=& 0, \label{tested-energy-2} \\ -\aiprd{\xi_h^m}{\mathsf{T}_h\left(\delta_\tau \phi_h^m\right)} + \theta \,\iprd{\phi_h^m-\overline{\phi}_0}{\mathsf{T}_h\left(\delta_\tau \phi_h^m\right)} & = & 0 , \label{tested-energy-3} \\ \frac{1}{\gamma}\iprd{\delta_\tau{\bf u}^m_h}{{\bf u}_h^m}+ \frac{\lambda}{\gamma} \norm{\nabla{\bf u}_h^m}{L^2}^2 + \frac{\eta}{\gamma} \norm{{\bf u}_h^m}{L^2}^2 -\frac{1}{\gamma}\cform{{\bf u}_h^m}{p^m_h} -\bform{\phi_h^{m-1}}{{\bf u}_h^m}{\mu_h^m} & = & 0 , \label{tested-energy-4} \\ \frac{1}{\gamma}\cform{{\bf u}_h^m}{p^m_h} & = & 0 . \label{tested-energy-5} \end{eqnarray} Combining \eqref{tested-energy-1} -- \eqref{tested-energy-5}, using the identities \begin{align*} \left(\delta_\tau {\bf u}_h^m, {\bf u}_h^m \right) = &\frac12\,\left[\,\delta_\tau\norm{{\bf u}_h^m}{L^2}^2 + \tau \norm{\delta_\tau {\bf u}_h^m}{L^2}^2 \,\right], \\ \aiprd{\phi_h^m}{\delta_\tau\phi_h^m} = &\frac12\,\left[\,\delta_\tau\norm{\nabla\phi_h^m}{L^2}^2 +\tau \norm{\nabla\delta_\tau \phi_h^m}{L^2}^2 \,\right], \\ \iprd{\left(\phi_h^m\right)^3-\phi_h^{m-1}}{\delta_\tau \phi_h^m} = & \frac14\, \delta_\tau \norm{\left( \phi_h^m\right)^2-1}{L^2}^2 +\frac{\tau}4 \Bigl[\norm{\delta_\tau( \phi_h^m)^2}{L^2}^2 \\ & +2\norm{\phi_h^m \delta_\tau \phi_h^m}{L^2}^2 +2\norm{\delta_\tau \phi_h^m}{L^2}^2 \, \Bigr], \\ \iprdmone{\phi_h^m-\overline{\phi}_0}{\delta_\tau\phi_h^m} =& \frac{1}{2}\left[\delta_\tau\norm{\phi_h^m-\overline{\phi}_0}{-1,h}^2 + \tau\norm{\delta_\tau\phi_h^m}{-1,h}^2\right], \end{align*} and applying the operator $\tau\sum_{m=1}^\ell$ to the combined equation, the result is obtained. \end{proof} The discrete energy law immediately implies the following uniform (in $h$ and $\tau$) \emph{a priori} estimates for $\phi_h^m$, $\mu_h^m$, and ${\bf u}_h^m$. Note that, from this point, we will not track the dependence of the estimates on the interface parameter $\varepsilon>0$, though this may be of importance, especially if $\varepsilon$ is made smaller. \begin{lem} \label{lem-a-priori-stability-trivial} Let $(\phi_h^m, \mu_h^m, {\bf u}_h^m) \in S_h\times S_h\times {\bf X}_h$ be the unique solution of \eqref{scheme-a}--\eqref{scheme-e}. Suppose that $E\left({\bf u}_h^0,\phi_h^0\right)<C$, independent of $h$.
Then the following estimates hold for any $h,\, \tau>0$: \begin{align} \max_{0\leq m\leq M} \left[ \norm{{\bf u}_h^m}{L^2}^2+ \norm{\nabla\phi_h^m}{L^2}^2 + \norm{\left( \phi_h^m\right)^2-1}{L^2}^2 + \norm{\phi_h^m-\overline{\phi}_0}{-1,h}^2\right] &\leq C, \label{Linf-u-phi-discrete} \\ \max_{0\leq m\leq M}\left[\norm{\phi_h^m}{L^4}^4 +\norm{\phi_h^m}{L^2}^2 +\norm{\phi_h^m}{H^1}^2 \right] &\le C, \label{LinfH1phi-discrete} \\ \tau \sum_{m=1}^M\bigg[ \norm{\nabla\mu_h^m}{L^2}^2 + \norm{\nabla{\bf u}_h^m}{L^2}^2 + \norm{{\bf u}_h^m}{L^2}^2 \bigg] &\leq C , \label{sum-mu-u-discrete} \\ \sum_{m=1}^M \bigg[ \norm{\nabla\left(\phi_h^m-\phi_h^{m-1}\right)}{L^2}^2 + \norm{ \phi_h^m-\phi_h^{m-1}}{L^2}^2 + \norm{ \phi_h^m( \phi_h^m-\phi_h^{m-1})}{L^2}^2 \nonumber \\ + \norm{( \phi_h^m)^2-(\phi_h^{m-1})^2}{L^2}^2 + \norm{\phi_h^m-\phi_h^{m-1}}{-1,h}^2 + \norm{{\bf u}_h^m - {\bf u}_h^{m-1}}{L^2}^2 \bigg] &\leq C , \label{sum-phi-u-discrete} \end{align} for some constant $C>0$ that is independent of $h$, $\tau$, and $T$. \end{lem} We are able to prove the next set of \emph{a priori} stability estimates without any restrictions on $h$ and $\tau$. Before we begin, we will need the discrete Laplacian, $\Delta_h: S_h \to \mathring{S}_h$, which is defined as follows: for any $v_h\in S_h$, $\Delta_h v_h\in \mathring{S}_h$ denotes the unique solution to the problem \begin{equation} \iprd{\Delta_h v_h}{\chi} = -\aiprd{v_h}{\chi}, \quad\forall \, \,\chi\in S_h. \label{discrete-laplacian} \end{equation} In particular, setting $\chi = \Delta_h v_h$ in \eqref{discrete-laplacian}, we obtain \begin{displaymath} \norm{\Delta_h v_h}{L^2}^2 = -\aiprd{v_h}{ \Delta_hv_h} . \end{displaymath} \begin{lem} \label{lem-improved-a-priori-stabilities} Let $(\phi_h^m, \mu_h^m, {\bf u}_h^m) \in S_h\times S_h\times {\bf X}_h$ be the unique solution of \eqref{scheme-a}--\eqref{scheme-e}, with the other variables regarded as auxiliary. Suppose that $E\left({\bf u}_h^0,\phi_h^0\right) < C$, independent of $h$, and that $T\ge 1$ (for simplicity). The following estimates hold for any $h,\, \tau >0$: \begin{equation} \tau \sum_{m=1}^M \bigg[ \norm{\delta_\tau \phi_h^m}{H^{-1}}^2 +\norm{\delta_\tau \phi_h^m}{-1,h}^2+\norm{ \Delta_h \phi_h^m}{L^2}^2 + \norm{\mu_h^m}{L^2}^2 + \norm{\phi_h^m}{L^\infty}^{\frac{4(6-d)}{d}} \bigg] \le CT, \label{sum-3D-good} \end{equation} for some constant $C>0$ that is independent of $h$, $\tau$, and $T$. \end{lem} \begin{proof} Recall that $\mathcal{Q}_h: L^2(\Omega) \to S_h$ is the $L^2$ projection, \emph{i.e.}, $\iprd{\mathcal{Q}_h \nu-\nu}{\chi}=0$, for all $\chi\in S_h$. Suppose $\nu\in\mathring{H}^1(\Omega)$. Then, using \eqref{Linf-u-phi-discrete} and Sobolev embeddings, \begin{align} \iprd{\delta_\tau\phi_h^m}{\nu}&= \iprd{\delta_\tau\phi_h^m}{\mathcal{Q}_h\nu} \\ &=-\varepsilon \ \bigl(\nabla \mu_h^m,\nabla \mathcal{Q}_h\nu \bigr) - \bform{\phi_h^{m-1}}{{\bf u}_h^m}{\mathcal{Q}_h\nu} \\ &\leq \varepsilon\norm{\nabla\mu_h^m}{L^2} \norm{\nabla \mathcal{Q}_h\nu}{L^2} + \norm{\nabla\phi_h^{m-1}}{L^2}\norm{{\bf u}_h^m}{L^4} \norm{ \mathcal{Q}_h\nu}{L^4} \\ &\leq C\left[\varepsilon\norm{\nabla\mu_h^m}{L^2} + \norm{{\bf u}_h^m}{H^1} \right]\, \norm{\nabla \mathcal{Q}_h \nu}{L^2} \\ &\leq C\left[\varepsilon\norm{\nabla\mu_h^m}{L^2} + \norm{{\bf u}_h^m}{H^1} \right]\, \norm{\nabla \nu}{L^2}, \end{align} where we used the $H^1$ stability of the $L^2$ projection in the last step.
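Consequently, taking the supremum over $0 \ne \nu \in \mathring{H}^1(\Omega)$ in the last chain of estimates yields \begin{displaymath} \norm{\delta_\tau\phi_h^m}{H^{-1}} \le C\left[\varepsilon\norm{\nabla\mu_h^m}{L^2} + \norm{{\bf u}_h^m}{H^1}\right] . \end{displaymath}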
Applying $\tau\sum_{m=1}^M$ gives (\ref{sum-3D-good}.1) -- which, in our notation, is the bound on the first term of the left side of \eqref{sum-3D-good}. The estimate (\ref{sum-3D-good}.2) follows from the inequality $\norm{\nu}{-1,h} \le \norm{\nu}{H^{-1}}$, which holds for all $\nu\in L^2(\Omega)$. Setting $\psi = \Delta_h \phi_h^m$ in \eqref{scheme-b} and using the definition of $\Delta_h\phi_h^m$, we get \begin{align*} \varepsilon \norm{ \Delta_h \phi_h^m}{L^2}^2 &= - \varepsilon \,\aiprd{\phi_h^m}{ \Delta_h \phi_h^m} \\ &=-\iprd{\mu_h^m}{\Delta_h \phi_h^m} + \varepsilon^{-1}\iprd{\left(\phi_h^m\right)^3-\phi_h^{m-1}}{\Delta_h \phi_h^m} +\iprd{\xi_h^m}{\Delta_h \phi_h^m} \\ & \le\aiprd{ \mu_h^m}{\phi_h^m } + \varepsilon^{-1}\left(\frac{ \varepsilon^2}{2} \norm{ \Delta_h \phi_h^m}{L^2}^2 + \frac{1}{2 \varepsilon^2} \norm{\left(\phi_h^m\right)^3-\phi_h^{m-1}}{L^2}^2 \right)-\aiprd{\xi_h^m}{\phi_h^m} \\ &\leq \frac12\norm{\nabla\mu_h^m}{L^2}^2 +\frac12 \norm{\nabla\phi_h^m}{L^2}^2 +\frac{ \varepsilon}2 \norm{ \Delta_h \phi_h^m}{L^2}^2 +C\norm{\left(\phi_h^m\right)^3-\phi_h^{m-1}}{L^2}^2 - \theta \,\iprd{\phi_h^m-\overline{\phi}_0}{\phi_h^m} \\ &\leq \frac12\norm{\nabla\mu_h^m}{L^2}^2 + C \norm{\nabla \phi_h^m}{L^2}^2 +\frac{ \varepsilon}2 \norm{ \Delta_h \phi_h^m}{L^2}^2 +C\norm{\left(\phi_h^m\right)^3-\phi_h^{m-1}}{L^2}^2 + C \norm{\phi_h^m - \overline{\phi}_0}{-1,h}^2. \end{align*} Hence, \begin{equation} \varepsilon \norm{ \Delta_h \phi_h^m}{L^2}^2 \leq \norm{\nabla \mu_h^m}{L^2}^2 + C \norm{\nabla \phi_h^m}{L^2}^2 + C\norm{\left(\phi_h^m\right)^3-\phi_h^{m-1}}{L^2}^2 + C \norm{\phi_h^m - \overline{\phi}_0}{-1,h}^2. \label{discrete-laplacian-phi-middle} \end{equation} Now using \eqref{LinfH1phi-discrete}, we have \begin{align} \norm{(\phi_h^m)^3-\phi_h^{m-1}}{L^2}^2 &\le 2\left(\norm{\phi_h^m}{L^6}^6 + \norm{\phi_h^{m-1}}{L^2}^2\right) \nonumber \\ & \le C \norm{\phi_h^m}{H^1}^6 +C \nonumber \\ & \le C, \label{nonlinear-control} \end{align} where we used the embedding $H^1(\Omega) \hookrightarrow L^6(\Omega)$, for $d = 2, 3$. Putting the last two inequalities together, we have \begin{equation} \varepsilon \norm{ \Delta_h \phi_h^m}{L^2}^2 \leq \norm{\nabla \mu_h^m}{L^2}^2 +C. \end{equation} Applying $\tau\sum_{m=1}^M$, estimate (\ref{sum-3D-good}.3) now follows from (\ref{sum-mu-u-discrete}.1). Now, take $\psi = \mu_h^m$ in \eqref{scheme-b}. Then, using \eqref{Linf-u-phi-discrete} and \eqref{nonlinear-control}, we have \begin{eqnarray} \norm{\mu_h^m}{L^2}^2 &\le& \varepsilon^{-1} \norm{\left(\phi_h^m\right)^3 - \phi_h^{m-1}}{L^2} \norm{\mu_h^m}{L^2} +\varepsilon\norm{\nabla\phi_h^m}{L^2}\norm{\nabla\mu_h^m}{L^2} + \norm{\xi_h^m}{L^2}\norm{\mu_h^m}{L^2} \nonumber \\ &\le& \frac{1}{\varepsilon^2}\norm{\left(\phi_h^m\right)^3 - \phi_h^{m-1}}{L^2}^2 +\frac{1}{4}\norm{\mu_h^m}{L^2}^2 +\frac{\varepsilon}{2}\norm{\nabla\phi_h^m}{L^2}^2 + \frac{\varepsilon}{2}\norm{\nabla\mu_h^m}{L^2}^2 \nonumber \\ &&+ C\norm{\nabla\xi_h^m}{L^2}^2 + \frac{1}{4} \norm{\mu_h^m}{L^2}^2 \nonumber \\ &\le& C +\frac{1}{2}\norm{\mu_h^m}{L^2}^2 + \frac{\varepsilon}{2} \norm{\nabla\mu_h^m}{L^2}^2 + C\norm{\nabla\xi_h^m}{L^2}^2 \nonumber \\ &\le& C +\frac{1}{2}\norm{\mu_h^m}{L^2}^2 + \frac{\varepsilon}{2} \norm{\nabla\mu_h^m}{L^2}^2 + C \norm{\phi_h^m-\overline{\phi}_0}{-1,h}^2 \nonumber \\ &\le& C +\frac{1}{2}\norm{\mu_h^m}{L^2}^2 + \frac{\varepsilon}{2} \norm{\nabla\mu_h^m}{L^2}^2 . \nonumber \end{eqnarray} Hence \begin{equation} \norm{\mu_h^m}{L^2}^2 \le C + \varepsilon\norm{\nabla\mu_h^m}{L^2}^2.
\end{equation} Applying $\tau\sum_{m=1}^M$, estimate (\ref{sum-3D-good}.4) now follows from (\ref{sum-mu-u-discrete}.1). To prove estimate (\ref{sum-3D-good}.5), we use the discrete Gagliardo-Nirenberg inequality: \begin{equation} \|\phi_h^m\|_{L^\infty} \leq C\|\Delta_h \phi_h^m\|_{L^2}^{\frac{d}{2(6-d)}} \,\|\phi_h^m\|_{L^6}^{\frac{3(4-d)}{2(6-d)}} + C\|\phi_h^m\|_{L^6} \qquad (d=2,3) . \label{infinity-bound} \end{equation} Applying $\tau\sum_{m=1}^M$ and using $H^1(\Omega) \hookrightarrow L^6(\Omega)$, (\ref{LinfH1phi-discrete}.3), and (\ref{sum-3D-good}.3), estimate (\ref{sum-3D-good}.5) follows. \end{proof} \begin{lem} \label{lem-a-priori-stability} Let $(\phi_h^m, \mu_h^m, {\bf u}_h^m) \in S_h\times S_h\times {\bf X}_h$ be the unique solution of \eqref{scheme-a}--\eqref{scheme-e}, with the other variables regarded as auxiliary. Suppose that $E\left({\bf u}_h^0,\phi_h^0\right), \norm{\mu_h^0}{L^2}^2 < C$, independent of $h$, where $\mu_h^0$ is defined below in \eqref{initial-chem-pot}; assume that $d=2,3$ and that $T\ge 1$ (for simplicity). The following estimates hold for any $h,\, \tau >0$: \begin{align} \tau \sum_{m=1}^M \norm{\delta_\tau\phi_h^m}{L^2}^2 & \le CT , \label{sum-dtau-phi} \\ \max_{1\le m\le M} \bigg[ \norm{\mu_h^m}{L^2}^2 + \norm{\Delta_h \phi_h^m}{L^2}^2 + \norm{\phi_h^m}{L^\infty}^{\frac{4(6-d)}{d}} \bigg] &\le CT, \label{Linf-mu-phi} \end{align} for some constant $C>0$ that is independent of $h$, $\tau$, and $T$. \end{lem} \begin{proof} We prove (\ref{sum-dtau-phi}) and (\ref{Linf-mu-phi}.1) together. To do so, we first define $\mu_h^0$ via \begin{equation} \label{initial-chem-pot} \iprd{\mu_h^0}{\psi} := \varepsilon \,\aiprd{\phi_h^0}{\psi} + \varepsilon^{-1}\iprd{\left(\phi_h^0\right)^3 -\phi_h^0 }{\psi} + \theta \,\iprd{\mathsf{T}_h\left(\phi_h^0-\overline{\phi}_0\right)}{\psi}, \end{equation} for all $\psi \in S_h$, and \begin{equation} \delta_\tau \phi_h^0 :\equiv 0\in S_h. \end{equation} Now, we subtract \eqref{scheme-b} from itself at consecutive time steps to obtain \begin{eqnarray} \tau\iprd{\delta_\tau\mu_h^m}{\psi} &=& \tau\varepsilon \,\aiprd{\delta_\tau\phi_h^m}{\psi} + \varepsilon^{-1}\iprd{\left(\phi_h^m\right)^3-\left(\phi_h^{m-1}\right)^3}{\psi} \nonumber \\ && -\tau\varepsilon^{-1}\iprd{\delta_\tau \phi_h^{m-1}}{\psi} + \theta\tau \,\iprd{\mathsf{T}_h\left(\delta_\tau\phi_h^m\right)}{\psi} , \label{scheme-b-staggered} \end{eqnarray} for all $\psi \in S_h$, which is well-defined for all $1\le m \le M$.
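For the estimate below, it is useful to record that the cubic difference in \eqref{scheme-b-staggered} factors as \begin{displaymath} \left(\phi_h^m\right)^3-\left(\phi_h^{m-1}\right)^3 = \tau\, \delta_\tau\phi_h^m \left\{ \left(\phi_h^m\right)^2 +\phi_h^m\phi_h^{m-1} +\left(\phi_h^{m-1}\right)^2 \right\} , \end{displaymath} which accounts for the first term on the right-hand side of the next inequality.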
Taking $\psi = \mu_h^m$ in \eqref{scheme-b-staggered} and $\nu = -\tau\delta_\tau\phi_h^m$ in \eqref{scheme-a} and adding the results yields \begin{eqnarray} \tau \,\iprd{\delta_\tau\mu_h^m}{\mu_h^m} +\tau \norm{\delta_\tau\phi_h^m}{L^2}^2 &=& \tau \varepsilon^{-1}\iprd{\delta_\tau\phi_h^m\left\{ \left(\phi_h^m\right)^2 +\phi_h^m\phi_h^{m-1} + \left(\phi_h^{m-1}\right)^2\right\}}{\mu_h^m} \nonumber \\ &&- \tau \varepsilon^{-1} \iprd{\delta_\tau \phi_h^{m-1}}{\mu_h^m} + \theta\tau \,\iprd{\mathsf{T}_h\left(\delta_\tau\phi_h^m\right)}{\mu_h^m - \overline{\mu_h^m}} \nonumber \\ &&- \tau \,\bform{\phi_h^{m-1}}{{\bf u}_h^m}{\delta_\tau\phi_h^m} \nonumber \\ &\le& \tau\varepsilon^{-1} \norm{\left(\phi_h^m\right)^2 + \phi_h^m\phi_h^{m-1} + \left(\phi_h^{m-1}\right)^2 }{L^3}\norm{\mu_h^m}{L^6} \norm{\delta_\tau\phi_h^m}{L^2} \nonumber \\ &&+ \tau \varepsilon^{-1} \norm{\nabla\mu_h^m}{L^2} \norm{\delta_\tau\phi_h^{m-1}}{-1,h} \nonumber \\ && + \theta\tau \norm{\nabla{\mathsf{T}_h\left(\delta_\tau\phi_h^m\right)}}{L^2} \norm{\mu_h^m - \overline{\mu_h^m}}{-1,h} \nonumber \\ &&- \tau \,\bform{\phi_h^{m-1}}{{\bf u}_h^m}{\delta_\tau\phi_h^m} \nonumber \\ &\le& C \tau \norm{\left(\phi_h^m\right)^2 +\phi_h^m\phi_h^{m-1} + \left(\phi_h^{m-1}\right)^2 }{L^3}^2\norm{\mu_h^m}{H^1}^2 +\frac{\tau}{4} \norm{\delta_\tau\phi_h^m}{L^2}^2 \nonumber \\ & & + C\tau \norm{\nabla\mu_h^m}{L^2}^2 + C\tau \norm{\delta_\tau\phi_h^{m-1}}{-1,h}^2 \nonumber \\ && + C\tau \norm{\nabla{\mathsf{T}_h\left(\delta_\tau\phi_h^m\right)}}{L^2}^2 + C\tau\norm{\mu_h^m - \overline{\mu_h^m}}{-1,h}^2 \nonumber \\ &&- \tau \,\bform{\phi_h^{m-1}}{{\bf u}_h^m}{\delta_\tau\phi_h^m} \nonumber \\ &\le& C \tau \left( \norm{\phi_h^m}{L^6}^4 +\norm{\phi_h^{m-1}}{L^6}^4\right)\norm{\mu_h^m}{H^1}^2 +\frac{\tau}{4} \norm{\delta_\tau\phi_h^m}{L^2}^2 \nonumber \\ & & + C\tau \norm{\nabla\mu_h^m}{L^2}^2 + C\tau \norm{\delta_\tau\phi_h^{m-1}}{-1,h}^2 \nonumber \\ && + C\tau \norm{\delta_\tau\phi_h^m}{-1,h}^2 + C\tau\norm{\nabla\mu_h^m}{L^2}^2 - \tau \,\bform{\phi_h^{m-1}}{{\bf u}_h^m}{\delta_\tau\phi_h^m} \nonumber \\ &\le& C \tau \norm{\mu_h^m}{H^1}^2 +\frac{\tau}{4} \norm{\delta_\tau\phi_h^m}{L^2}^2 + C\tau \norm{\delta_\tau\phi_h^{m-1}}{-1,h}^2 \nonumber \\ & & + C\tau \norm{\delta_\tau\phi_h^m}{-1,h}^2 - \tau \,\bform{\phi_h^{m-1}}{{\bf u}_h^m}{\delta_\tau\phi_h^m} \label{estimate-2.56} \end{eqnarray} where we have used $H^1(\Omega) \hookrightarrow L^6(\Omega)$, Lemma~\ref{lem-negative-norm-discrete}, and \eqref{LinfH1phi-discrete}. Now we bound the trilinear form $b(\, \cdot \, , \, \cdot \, , \, \cdot \, )$. To do so, we note the discrete estimate \begin{equation} \norm{\nabla \nu_h}{L^4} \leq C\left( \norm{\nabla \nu_h}{L^2}+\norm{\Delta_h \nu_h}{L^2}\right)^{\frac{d}{4}} \norm{\nabla \nu_h}{L^2}^{\frac{4-d}{4}} \quad \forall \, \nu_h \in S_h,\quad d= 2,3, \label{grad-v-L4-bound} \end{equation} and Ladyzhenskaya inequality \begin{equation} \norm{{\bf u}}{L^4}\le C\norm{{\bf u}}{L^2}^{\frac{4-d}{4}}\norm{\nabla{\bf u}}{L^2}^{\frac{d}{4}}\qquad \forall \, \, {\bf u} \in {\bf H}_0^1(\Omega) ,\quad d= 2,3. 
\label{Lady} \end{equation} Using H\"{o}lder's inequality, \eqref{grad-v-L4-bound}, \eqref{Lady}, (\ref{Linf-u-phi-discrete}.1), and (\ref{Linf-u-phi-discrete}.2), \begin{eqnarray} \left| \bform{\phi_h^{m-1}}{{\bf u}_h^m}{\delta_\tau\phi_h^m}\right| &\le & \norm{\nabla\phi_h^{m-1}}{L^4}\norm{{\bf u}_h^m}{L^4}\norm{\delta_\tau\phi_h^m}{L^2} \nonumber \\ &\le & C \norm{\delta_\tau\phi_h^m}{L^2} \norm{\nabla{\bf u}_h^m}{L^2}\left(\norm{\nabla\phi_h^{m-1}}{L^2}+\norm{\Delta_h\phi_h^{m-1}}{L^2}\right) \nonumber \\ &\le & \frac{1}{4} \norm{\delta_\tau\phi_h^m}{L^2}^2 + C \norm{\nabla{\bf u}_h^m}{L^2}^2 + C \norm{\nabla{\bf u}_h^m}{L^2}^2 \norm{\Delta_h\phi_h^{m-1}}{L^2}^2 . \label{b-form-stability-estimate-a} \end{eqnarray} Setting $\psi = \Delta_h \phi_h^m$ in \eqref{scheme-b} and \eqref{initial-chem-pot} and using the definition of $\Delta_h\phi_h^m$, it follows that \begin{equation} \norm{ \Delta_h \phi_h^m}{L^2}^2 \leq C\norm{\mu_h^m}{L^2}^2 +C , \quad 0 \le m \le M, \label{lap-phi-bounded-by-mu} \end{equation} so that, for $1\le m\le M$, \begin{equation} \left| \bform{\phi_h^{m-1}}{{\bf u}_h^m}{\delta_\tau\phi_h^m}\right| \le \frac{1}{4} \norm{\delta_\tau\phi_h^m}{L^2}^2 + C \norm{\nabla{\bf u}_h^m}{L^2}^2 + C \norm{\nabla{\bf u}_h^m}{L^2}^2 \norm{\mu_h^{m-1}}{L^2}^2 . \end{equation} Thus, \begin{eqnarray} \tau\iprd{\delta_\tau\mu_h^m}{\mu_h^m} +\frac{\tau}{2} \norm{\delta_\tau\phi_h^m}{L^2}^2 &\le& C\tau \norm{\mu_h^m}{H^1}^2 + C\tau \norm{\delta_\tau\phi_h^{m-1}}{-1,h}^2 + C\tau \norm{\delta_\tau\phi_h^m}{-1,h}^2 \nonumber \\ && + C\tau\norm{\nabla{\bf u}_h^m}{L^2}^2 \norm{\mu_h^{m-1}}{L^2}^2+ C\tau \norm{\nabla{\bf u}_h^m}{L^2}^2. \end{eqnarray} Applying $\sum_{m=1}^\ell$, and using \eqref{sum-mu-u-discrete}, (\ref{sum-3D-good}), $\delta_\tau\phi_h^0 \equiv 0$, and the identity \begin{equation} \tau\iprd{\delta_\tau\mu_h^m}{\mu_h^m} = \iprd{\mu_h^m-\mu_h^{m-1}}{\mu_h^m} = \frac{1}{2}\norm{\mu_h^m}{L^2}^2 +\frac{1}{2}\norm{\mu_h^m-\mu_h^{m-1}}{L^2}^2-\frac{1}{2}\norm{\mu_h^{m-1}}{L^2}^2 , \end{equation} we conclude \begin{equation} \frac{1}{2}\norm{\mu_h^\ell}{L^2}^2 - \frac{1}{2}\norm{\mu_h^0}{L^2}^2 +\frac{\tau}{2}\sum_{m=1}^\ell\norm{\delta_\tau\phi_h^m}{L^2}^2 \le CT + C\tau\sum_{m=0}^{\ell-1} \norm{\nabla{\bf u}_h^{m+1}}{L^2}^2 \norm{\mu_h^m}{L^2}^2 . \end{equation} Since the estimate is explicit with respect to $\left\{\norm{\mu_h^m}{L^2}^2\right\}$ and $\tau \sum_{m=1}^M\norm{\nabla{\bf u}_h^m}{L^2}^2 \le C$, we may appeal directly to the discrete Gronwall inequality in Lemma~\ref{lem-discrete-gronwall}. Estimates \eqref{sum-dtau-phi} and (\ref{Linf-mu-phi}.1) follow immediately. Estimate (\ref{Linf-mu-phi}.2) follows from (\ref{Linf-mu-phi}.1) and \eqref{lap-phi-bounded-by-mu}. Estimate (\ref{Linf-mu-phi}.3) follows from \eqref{infinity-bound}, the embedding $H^1(\Omega) \hookrightarrow L^6(\Omega)$, (\ref{LinfH1phi-discrete}.3), and (\ref{Linf-mu-phi}.2). \end{proof} \begin{rem} The idea for controlling the time-lagged $\norm{\Delta_h\phi_h^{m-1}}{L^2}^2$ term in \eqref{b-form-stability-estimate-a} using the discrete Gronwall inequality was inspired by a similar technique from a recent paper by G.~Gr\"{u}n~\cite{grun13}, which deals with a different PDE system (as well as a different numerical method) from that examined here and is not concerned with error estimates.
\end{rem} \section{Error Estimates for the Fully Discrete Convex Splitting Scheme} \label{sec-error-estimates} For the error estimates that we pursue in this section, we shall assume that weak solutions have the additional regularities \begin{align} \phi &\in H^2\bigl(0,T; L^2(\Omega)\bigr) \cap L^\infty\left(0,T;W^{1,6}(\Omega)\right)\cap H^1\bigl(0,T;H^{q+1}(\Omega)\bigr), \nonumber \\ \xi &\in L^2 \bigl(0,T; H^{q+1}(\Omega) \bigr), \nonumber \\ \mu &\in L^\infty\bigl(0,T;H^1(\Omega)\bigr)\cap L^2\bigl(0,T;H^{q+1}(\Omega)\bigr), \label{regularities} \\ {\bf u} &\in H^2\left(0,T;{\bf L}^2(\Omega)\right) \cap L^{\infty}\bigl(0,T; {\bf L}^4(\Omega)\bigr)\cap H^1 \bigl(0,T;{\bf H}^{q+1}(\Omega)\bigr), \nonumber \\ p &\in L^2 \bigl(0,T;H^q(\Omega)\cap L_0^2(\Omega)\bigr), \nonumber \end{align} where $q\ge 1$. Of course, some of these regularities are redundant because of embeddings. For the Darcy-Stokes projection we have \begin{equation} \norm{{\bf P}_h {\bf u}-{\bf u}}{H^1} + \norm{P_h p - p}{L^2} \le C h^q\left(\left|{\bf u}\right|_{H^{q+1}} + \left|p\right|_{H^q}\right) . \end{equation} Weak solutions $(\phi,\mu,{\bf u})$ to \eqref{weak-mch-a} -- \eqref{weak-mch-e} with the higher regularities \eqref{regularities} solve the following variational problem: \begin{subequations} \begin{align} \iprd{\partial_t \phi}{\nu} + \varepsilon \,\aiprd{\mu}{\nu} + \bform{\phi}{{\bf u}}{\nu} &= 0 &&\qquad \forall \, \nu \in H^1(\Omega), \label{weak-mch-error-a} \\ \iprd{\mu}{\psi}-\varepsilon \,\aiprd{\phi}{\psi} - \varepsilon^{-1}\iprd{\phi^3-\phi}{\psi} - \iprd{\xi}{\psi} &= 0 &&\qquad \forall \, \psi\in H^1(\Omega), \label{weak-mch-error-b} \\ \aiprd{\xi}{\zeta}-\theta \,\iprd{\phi-\overline{\phi}_0}{\zeta}&= 0 &&\qquad \forall \, \zeta\in H^1(\Omega), \label{weak-mch-error-c} \\ \iprd{\partial_t{\bf u}}{{\bf v}}+ \lambda\,\aiprd{{\bf u}}{{\bf v}} + \eta \,\iprd{ {\bf u}}{ {\bf v}} -\cform{{\bf v}}{p} - \gamma \,\bform{\phi}{{\bf v}}{\mu}&= 0 &&\qquad \forall \, {\bf v}\in {\bf H}_0^1(\Omega), \label{weak-mch-error-d} \\ \cform{{\bf u}}{q} &= 0 &&\qquad \forall \, q\in L_0^2(\Omega). \label{weak-mch-error-e} \end{align} \end{subequations} Define $L_{\tau} \phi(t) := \phi(t-\tau)$, $\delta_\tau \phi(t) :=\frac{\phi(t)-L_{\tau}\phi(t)}{\tau}$, the backward difference operator, and \begin{eqnarray} \label{eq:five} \mathcal{E}_a^{\phi} := \phi-R_h \phi , \quad \mathcal{E}_a^{\mu} := \mu -R_h \mu, \quad \mathcal{E}_a^{\bf u} := {\bf u} - {\bf P}_h {\bf u} , \quad \mathcal{E}_a^{\xi} := \xi - R_h \xi , \\ \sigma^\phi := \delta_\tau R_h \phi - \partial_t\phi, \quad \sigma^{\bf u} := \delta_\tau {\bf P}_h {\bf u} - \partial_t {\bf u}.
\end{eqnarray} Then, for all $\nu,\psi, \zeta \in S_h, {\bf v} \in {\bf X}_h,$ and $q \in \mathring{S}_h$, \begin{subequations} \begin{align} \iprd{\delta_\tau R_h\phi}{\nu} + \varepsilon \, \aiprd{R_h\mu}{\nu} = & \iprd{\sigma^\phi}{\nu} - \bform{\phi}{{\bf u}}{\nu}, \label{weak-mch-error3-a} \\ \varepsilon \, \aiprd{R_h\phi}{\psi} - \iprd{R_h\mu}{\psi} + \iprd{R_h\xi}{\psi} = & \iprd{\mathcal{E}_a^{\mu}}{\psi} - \varepsilon^{-1}\iprd{\phi^3-L_{\tau}\phi}{\psi} \nonumber \\ & + \frac{\tau}{\varepsilon}\iprd{\delta_\tau \phi}{\psi} - \iprd{\mathcal{E}_a^{\xi}}{\psi}, \label{weak-mch-error3-b} \\ \aiprd{R_h \xi}{\zeta}-\theta \,\iprd{R_h \phi-\overline{\phi}_0}{\zeta} = & \, \theta \, \iprd{\mathcal{E}_a^{\phi}}{\zeta}, \label{weak-mch-error3-c} \\ \iprd{\delta_\tau {\bf P}_h{\bf u}}{{\bf v}} + \lambda \,\aiprd{{\bf P}_h{\bf u}}{{\bf v}} + \eta \,\iprd{{\bf P}_h{\bf u}}{ {\bf v}} -\cform{{\bf v}}{P_h p} = & \iprd{\sigma^{\bf u}}{{\bf v}} + \gamma \,\bform{\phi}{{\bf v}}{\mu}, \label{weak-mch-error3-d} \\ \cform{{\bf P}_h{\bf u}}{q} = & 0 . \label{weak-mch-error3-e} \end{align} \end{subequations} Define the piecewise constant (in time) functions, for $m=1,\dots M$ and for $t\in(t_{m-1},t_m]$, \begin{equation*} \hat{\phi}(t):=\phi_h^m, \quad \hat{\mu}(t):=\mu_h^m, \quad \hat{\bf u}(t):={\bf u}_h^m , \quad \hat{p}(t):=p_h^m , \quad \hat{\xi}(t):=\xi_h^m, \end{equation*} where $\phi_h^m$, $\mu_h^m$, $\xi_h^m$, ${\bf u}_h^m$, and $p_h^m$ are the solutions of the fully discrete convex-splitting scheme \eqref{scheme-a} -- \eqref{scheme-e}. Thus, for $0\le t\le T$, and all $\nu,\psi, \zeta \in S_h, {\bf v} \in {\bf X}_h,$ and $q \in \mathring{S}_h$, \begin{subequations} \begin{align} \iprd{\delta_\tau \hat{\phi}}{\nu} + \varepsilon \,\aiprd{\hat{\mu}}{\nu} = \,& - \bform{L_{\tau}\hat{\phi}}{\hat{\bf u}}{\nu}, \label{scheme-error-a} \\ \varepsilon \,\aiprd{\hat{\phi}}{\psi} - \iprd{\hat{\mu}}{\psi} + \iprd{\hat{\xi}}{\psi} = \,& - \varepsilon^{-1}\iprd{\hat{\phi}^3- L_{\tau}\hat{\phi}}{\psi}, \label{scheme-error-b} \\ \aiprd{\hat{\xi}}{\zeta}-\theta\iprd{\hat{\phi}-\overline{\phi}_0}{\zeta} = \,& 0 , \label{scheme-error-c} \\ \iprd{\delta_\tau \hat{\bf u}}{{\bf v}}+ \lambda \,\aiprd{\hat{\bf u}}{{\bf v}} + \eta \,\iprd{ \hat{\bf u}}{ {\bf v}} -\cform{{\bf v}}{\hat{p}} = \,& \gamma \,\bform{L_{\tau}\hat{\phi}}{{\bf v}}{\hat{\mu}}, \label{scheme-error-d} \\ \cform{\hat{\bf u}}{q} = \,& 0 . \label{scheme-error-e} \end{align} \end{subequations} Now, let us define \begin{eqnarray} \mathcal{E}_h^{\phi} := R_h \phi - \hat{\phi}, \ \mathcal{E}^{\phi} := \phi - \hat{\phi}, \ \mathcal{E}_h^{\mu} := R_h \mu - \hat{\mu} , \ \mathcal{E}_h^{\xi} := R_h \xi - \hat{\xi} , \\ \mathcal{E}_h^{\bf u} := {\bf P}_h {\bf u} - \hat{\bf u}, \ \mathcal{E}_h^p := P_h p - \hat{p} . 
\end{eqnarray} Subtracting \eqref{scheme-error-a} - \eqref{scheme-error-e} from \eqref{weak-mch-error3-a} - \eqref{weak-mch-error3-e}, we have, for all $\nu,\psi, \zeta \in S_h, {\bf v} \in {\bf X}_h,$ and $q \in \mathring{S}_h$, \begin{subequations} \begin{align} \iprd{\delta_\tau \mathcal{E}_h^{\phi}}{\nu} + \varepsilon \,\aiprd{\mathcal{E}_h^{\mu}}{\nu} = & \iprd{\sigma^\phi}{\nu} - \bform{\phi}{{\bf u}}{\nu} + \bform{L_{\tau} \hat{\phi}}{\hat{\bf u}}{\nu}, \label{error-a} \\ \varepsilon \, \aiprd{\mathcal{E}_h^{\phi}}{\psi} - \iprd{\mathcal{E}_h^{\mu}}{\psi} + \iprd{\mathcal{E}_h^{\xi}}{\psi} = & \iprd{\mathcal{E}_a^{\mu}}{\psi} + \frac{\tau}{\varepsilon} \iprd{ \delta_\tau \phi}{\psi} + \varepsilon^{-1}\iprd{ L_{\tau} \mathcal{E}^{\phi}}{\psi} \nonumber \\ &- \iprd{\mathcal{E}_a^{\xi}}{\psi} - \varepsilon^{-1}\iprd{\phi^3 - \hat{\phi}^3}{\psi}, \label{error-b} \\ \aiprd{\mathcal{E}_h^{\xi}}{\zeta}-\theta \,\iprd{\mathcal{E}_h^{\phi}}{\zeta} = & \,\theta \, \iprd{\mathcal{E}_a^{\phi}}{\zeta}, \label{error-c} \\ \iprd{\delta_\tau \mathcal{E}_h^{\bf u}}{{\bf v}} + \lambda \,\aiprd{\mathcal{E}_h^{\bf u}}{{\bf v}} + \eta \,\iprd{\mathcal{E}_h^{\bf u}}{ {\bf v}} -\cform{{\bf v}}{\mathcal{E}_h^p} = & \iprd{\sigma^{\bf u}}{{\bf v}} + \gamma \,\bform{\phi}{{\bf v}}{\mu} - \gamma \,\bform{L_{\tau} \hat{\phi}}{{\bf v}}{\hat{\mu}}, \label{error-d} \\ \cform{\mathcal{E}_h^{\bf u}}{q} &= 0 . \label{error-e} \end{align} \end{subequations} Setting $\nu = \mathcal{E}_h^{\mu}$ in (\ref{error-a}), $\psi = \delta_\tau \mathcal{E}_h^{\phi}$ in (\ref{error-b}), $\zeta = - \mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)$ in (\ref{error-c}), ${\bf v} = \frac{1}{\gamma} \mathcal{E}_h^{\bf u}$ in (\ref{error-d}), and $q = \frac{1}{\gamma} \mathcal{E}_h^p$ in (\ref{error-e}) and adding the resulting equations we obtain \begin{align} \varepsilon \,\aiprd{\mathcal{E}_h^{\phi}}{\delta_\tau\mathcal{E}_h^{\phi}} &+ \theta \, \iprdmone{\mathcal{E}_h^{\phi}}{\delta_\tau \mathcal{E}_h^{\phi}} + \frac{1}{\gamma} \iprd{\delta_\tau \mathcal{E}_h^{\bf u}}{\mathcal{E}_h^{\bf u}} +\varepsilon \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 + \frac{\lambda}{\gamma} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2 + \frac{\eta}{\gamma} \norm{\mathcal{E}_h^{\bf u}}{L^2}^2 \nonumber \\ & = \iprd{\sigma^\phi}{\mathcal{E}_h^{\mu}} + \frac{1}{\gamma} \iprd{\sigma^{\bf u}}{\mathcal{E}_h^{\bf u}} - \varepsilon^{-1}\iprd{\phi^3-\hat{\phi}^3 - L_\tau\mathcal{E}^{\phi}}{\delta_\tau\mathcal{E}_h^{\phi}} \nonumber \\ & + \iprd{\mathcal{E}_a^{\mu}}{\delta_\tau \mathcal{E}_h^{\phi}} + \frac{\tau}{\varepsilon}\iprd{\delta_\tau \phi}{\delta_\tau \mathcal{E}_h^{\phi}} - \iprd{\mathcal{E}_a^{\xi}}{\delta_\tau \mathcal{E}_h^{\phi}} - \theta \, \iprd{\mathcal{E}_a^{\phi}}{\mathsf{T}_h \left(\delta_\tau \mathcal{E}_h^{\phi} \right)} \nonumber \\ & - \bform{\phi}{{\bf u}}{\mathcal{E}_h^{\mu}} + \bform{L_{\tau} \hat{\phi}}{\hat{\bf u}}{\mathcal{E}_h^{\mu}} + \bform{\phi}{\mathcal{E}_h^{\bf u}}{\mu} - \bform{L_{\tau} \hat{\phi}}{\mathcal{E}_h^{\bf u}}{\hat{\mu}}. \label{error-eq-3} \end{align} The last expression is the key error equation. We now proceed to estimate the terms on the right hand side of \eqref{error-eq-3}. \begin{lem} \label{truncation-errors} Suppose that $(\phi, \mu,{\bf u})$ is a weak solution to \eqref{weak-mch-error-a} -- \eqref{weak-mch-error-e}, with the additional regularities \eqref{regularities}. 
Then, for any $h$, $\tau >0$, there exists $C>0$, independent of $h$ and $\tau$, such that \begin{align} &\norm{\sigma^\phi(t)}{L^2}^2 \le C\frac{h^{2q+2}}{\tau} \int_{t-\tau}^{t} \norm{\partial_s\phi(s)}{H^{q+1}}^2 ds + \frac{\tau}{3} \int_{t-\tau}^{t}\norm{\partial_{ss}\phi(s)}{L^2}^2 ds, \nonumber \\ &\norm{\sigma^{\bf u}(t)}{L^2}^2 \le C\frac{h^{2q+2}}{\tau} \int_{t-\tau}^{t} \norm{\partial_s{\bf u}(s)}{H^{q+1}}^2 ds + \frac{\tau}{3} \int_{t-\tau}^{t}\norm{\partial_{ss}{\bf u}(s)}{L^2}^2 ds , \end{align} for all $t\in(\tau,T]$. \end{lem} \begin{proof} We write $\sigma^\phi = \sigma^\phi_1+ \sigma^\phi_2$, $\sigma^\phi_1 := \delta_\tau R_h\phi - \delta_\tau\phi$, $\sigma^\phi_2 := \delta_\tau\phi - \partial_t \phi$. Then \begin{align} \norm{\sigma^\phi_1(t)}{L^2}^2 &= \norm{\frac{1}{\tau}\int_{t-\tau}^{t} \partial_s \left(R_h \phi(s)- \phi(s) \right) ds}{L^2}^2 \nonumber \\ &= \frac{1}{\tau^2} \norm{\int_{t-\tau}^{t} \left(R_h \partial_s\phi(s)- \partial_s\phi(s) \right) ds}{L^2}^2 \nonumber \\ & \le \frac{1}{\tau^2} \int_\Omega\int_{t-\tau}^{t} 1^2 ds \int_{t - \tau}^{t} \left(R_h \partial_s \phi(s)- \partial_s \phi(s) \right)^2 ds \, d{\bf x} \nonumber \\ & = \frac{1}{\tau} \int_{t-\tau}^{t} \norm{ R_h \partial_s \phi(s)- \partial_s \phi(s)}{L^2}^2 ds \nonumber \\ & \le C\frac{h^{2q+2}}{\tau} \int_{t-\tau}^{t} \norm{\partial_s\phi(s)}{H^{q+1}}^2 \, ds. \end{align} By Taylor's theorem, \begin{align} \norm{\sigma^\phi_2(t)}{L^2}^2 &= \norm{\frac{1}{\tau}\int_{t-\tau}^{t}\partial_{ss}\phi(s)(t-s)ds}{L^2}^2 \nonumber \\ &\le \frac{1}{\tau^2}\int_\Omega \left[\int_{t-\tau}^{t}(t-s)^2\, ds \int_{t-\tau}^{t}\left(\partial_{ss}\phi(s)\right)^2\, ds \right] d{\bf x} \nonumber \\ &= \frac{1}{\tau^2}\int_{t-\tau}^{t}(t-s)^2\, ds \int_{t-\tau}^{t}\norm{\partial_{ss}\phi(s)}{L^2}^2 ds \nonumber \\ &= \frac{\tau^3}{3\tau^2} \int_{t-\tau}^{t}\norm{\partial_{ss}\phi(s)}{L^2}^2 ds. \end{align} Using the triangle inequality, the result for $\norm{\sigma^\phi(t)}{L^2}^2$ follows. A similar proof can be constructed for $\norm{\sigma^{\bf u}(t)}{L^2}^2$. \end{proof} We need a technical lemma that will be used a number of times. \begin{lem} \label{lem:technical} Suppose $g \in H^1(\Omega)$ and $v \in \mathring{S}_h$. Then \begin{equation} \left|\iprd{g}{v}\right| \le C \norm{\nabla g}{L^2} \, \norm{v}{-1,h} , \end{equation} for some $C>0$ that is independent of $h$. \end{lem} \begin{proof} If $g\in S_h$, we can apply Lemma~\ref{lem-negative-norm-discrete} directly. Otherwise, using the triangle inequality, the Cauchy-Schwarz inequality, and Lemma~\ref{lem-negative-norm-discrete}, \begin{equation} \left|\iprd{g}{v}\right| \le \left|\iprd{g - R_h g}{v}\right| +\left|\iprd{ R_h g}{v}\right| \le \norm{g-R_h g}{L^2} \norm{v}{L^2} + \norm{\nabla R_h g}{L^2} \norm{v}{-1,h}. \end{equation} Using the standard elliptic projection estimate \begin{equation} \norm{g-R_h g}{L^2} \le C h \norm{\nabla g}{L^2}, \end{equation} we have \begin{equation} \left|\iprd{g}{v}\right| \le C h \norm{\nabla g}{L^2} \norm{v}{L^2} + \norm{\nabla R_h g}{L^2} \norm{v}{-1,h}. \end{equation} Finally, using the (uniform) inverse estimate $h\norm{v}{L^2} \le C \norm{v}{-1,h}$ from Lemma~\ref{lem-negative-norm-discrete}, and the stability of the elliptic projection, $\norm{\nabla R_h g}{L^2} \le C \norm{\nabla g}{L^2}$, we have the result.
\end{proof} \begin{lem} \label{nonlinear-estimate} Suppose that $(\phi, \mu,{\bf u})$ is a weak solution to \eqref{weak-mch-error-a} -- \eqref{weak-mch-error-e}, with the additional regularities \eqref{regularities}. Then, for any $h$, $\tau >0$, \begin{equation} \norm{\nabla \left(\phi^3 - \hat{\phi}^3\right)}{L^2} \le C \norm{\nabla \mathcal{E}^{\phi}}{L^2} , \end{equation} where $\mathcal{E}^{\phi} := \phi - \hat{\phi}$. \end{lem} \begin{proof} For $t\in[0,T]$, \begin{align} \norm{\nabla\left(\phi^3-\hat{\phi}^3 \right)}{L^2} &\le 3\norm{\hat{\phi}^2\nabla\mathcal{E}^{\phi}}{L^2} + 3\norm{\nabla\phi \left(\phi+\hat{\phi}\right)\mathcal{E}^{\phi}}{L^2} \nonumber \\ &\le 3\norm{\hat{\phi}}{L^\infty}^2\norm{\nabla\mathcal{E}^{\phi}}{L^2}+ 3\norm{\nabla\phi}{L^6}\norm{\phi+\hat{\phi}}{L^6}\norm{\mathcal{E}^{\phi}}{L^6} \nonumber \\ &\le 3\left(\norm{\hat{\phi}}{L^\infty}^2 + C\norm{\nabla\phi}{L^6}\norm{\phi+\hat{\phi}}{H^1}\right)\norm{\nabla\mathcal{E}^{\phi}}{L^2} \le C \norm{\nabla\mathcal{E}^{\phi}}{L^2}, \end{align} where $C>0$ is independent of $t\in[0,T]$. Then, using the unconditional \emph{a priori} estimates in Lemmas~\ref{lem-improved-a-priori-stabilities} and \ref{lem-a-priori-stability} and the assumption that $\phi\in L^\infty\left(0,T;W^{1,6}(\Omega)\right)$, the result follows. \end{proof} \begin{lem} \label{lem-error-1} Suppose that $(\phi, \mu,{\bf u})$ is a weak solution to \eqref{weak-mch-error-a} -- \eqref{weak-mch-error-e}, with the additional regularities \eqref{regularities}. Then, for any $h$, $\tau >0$, and any $\alpha > 0$, there exists a constant $C=C(\alpha)>0$, independent of $h$ and $\tau$, such that \begin{align} \frac{\varepsilon}{2} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 + \varepsilon \,\aiprd{\mathcal{E}_h^{\phi}}{\delta_\tau\mathcal{E}_h^{\phi}} &+ \theta \, \iprdmone{\mathcal{E}_h^{\phi}}{\delta_\tau \mathcal{E}_h^{\phi}} + \frac{1}{\gamma} \iprd{\delta_\tau \mathcal{E}_h^{\bf u}}{\mathcal{E}_h^{\bf u}} + \frac{\lambda}{2 \gamma} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2 + \frac{\eta}{2 \gamma} \norm{\mathcal{E}_h^{\bf u}}{L^2}^2 \nonumber \\ &\le C \norm{\nabla \mathcal{E}_h^{\phi}}{L^2}^2 + C \norm{\nabla L_\tau \mathcal{E}_h^{\phi}}{L^2}^2 + \alpha \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 + C \mathcal{R}, \label{alpha-estimate} \end{align} for any $t\in(\tau ,T]$, where $\mathcal{R}$ is the consistency term \begin{align} \mathcal{R}(t) =& \frac{h^{2q+2}}{\tau} \int_{t-\tau}^{t} \norm{\partial_s\phi(s)}{H^{q+1}}^2 ds + \frac{\tau}{3} \int_{t-\tau}^{t}\norm{\partial_{ss}\phi(s)}{L^2}^2 ds + \tau \int_{t - \tau}^{t} \norm{\nabla\partial_s\phi(s)}{L^2}^2 ds \nonumber \\ &+h^{2q}\left| \mu \right|_{H^{q+1}}^2+h^{2q}\left| \phi \right|_{H^{q+1}}^2 +h^{2q+2}\left| \phi \right|_{H^{q+1}}^2 + h^{2q}\left| L_\tau\phi \right|_{H^{q+1}}^2 + h^{2q}\left| \xi \right|_{H^{q+1}}^2 \nonumber \\ & + \frac{h^{2q+2}}{\tau} \int_{t-\tau}^{t} \norm{\partial_s{\bf u}(s)}{H^{q+1}}^2 ds + \frac{\tau}{3} \int_{t-\tau}^{t}\norm{\partial_{ss}{\bf u}(s)}{L^2}^2 ds + h^{2q}\left(\left|{\bf u}\right|_{H^{q+1}}^2 + \left|p\right|_{H^q}^2\right).
\label{consistency} \end{align} \end{lem} \begin{proof} Using Lemmas~\ref{truncation-errors} and \ref{lem:technical}, the Cauchy-Schwarz inequality, the definition \eqref{crazy-norm-h} of the discrete negative norm, and the fact that $\iprd{\sigma^\phi}{1} = 0$, we get the following estimates: \begin{align} \left|\iprd{\sigma^\phi}{\mathcal{E}_h^{\mu}}\right| &\le \norm{\sigma^\phi}{-1,h} \norm{\nabla\mathcal{E}_h^{\mu}}{L^2} \nonumber \\ &\le C \norm{\sigma^\phi}{L^2} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2} \nonumber \\ &\le C\norm{\sigma^\phi}{L^2}^2 +\frac{\varepsilon}{10} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 \nonumber \\ &\le C \left( \frac{h^{2q+2}}{\tau} \int_{t-\tau}^{t} \norm{\partial_s\phi(s)}{H^{q+1}}^2 ds + \frac{\tau}{3} \int_{t-\tau}^{t}\norm{\partial_{ss}\phi(s)}{L^2}^2 ds \right) + \frac{\varepsilon}{10} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 \label{error-estimate-1} \end{align} and, similarly, \begin{equation} \left|\iprd{\sigma^{\bf u}}{\mathcal{E}_h^{\bf u}}\right| \le C \left( \frac{h^{2q+2}}{\tau} \int_{t-\tau}^{t} \norm{\partial_s{\bf u}(s)}{H^{q+1}}^2 ds + \frac{\tau}{3} \int_{t-\tau}^{t}\norm{\partial_{ss}{\bf u}(s)}{L^2}^2 ds \right)+ \frac{\eta}{2 \gamma} \norm{\mathcal{E}_h^{\bf u}}{L^2}^2. \label{error-estimate-2} \end{equation} Now, from the standard finite element approximation theory, \begin{equation*} \norm{\nabla \mathcal{E}_a^{\mu}}{L^2} = \norm{\nabla (R_h \mu - \mu)}{L^2} \leq C h^q\left| \mu \right|_{H^{q+1}} . \end{equation*} Applying Lemma~\ref{lem:technical} and the last estimate, \begin{eqnarray} \left|\iprd{\mathcal{E}_a^{\mu}}{\delta_\tau \mathcal{E}_h^{\phi}}\right| &\le& C \norm{\nabla\mathcal{E}_a^{\mu}}{L^2} \, \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h} \nonumber \\ &\le& C h^q\left| \mu \right|_{H^{q+1}} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h} \nonumber \\ &\le& Ch^{2q}\left| \mu \right|_{H^{q+1}}^2 + \frac{\alpha}{6} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 \label{error-estimate-3} \end{eqnarray} and, similarly, \begin{equation} \left|\iprd{\mathcal{E}_a^{\xi}}{\delta_\tau \mathcal{E}_h^{\phi}}\right| \le Ch^{2q}\left| \xi \right|_{H^{q+1}}^2 + \frac{\alpha}{6} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2. \end{equation} Now, it follows that \begin{equation} \norm{ \tau \nabla \delta_\tau \phi(t)}{L^2}^2 \le \tau \int_{t-\tau}^{t} \norm{\nabla\partial_s\phi(s)}{L^2}^2 ds \label{lag-estimate} \end{equation} and, therefore, \begin{eqnarray} \frac{\tau}{\varepsilon}\left|\iprd{\delta_\tau \phi}{\delta_\tau\mathcal{E}_h^{\phi}}\right| &\le& \frac{1}{\varepsilon} \norm{\tau\nabla \delta_\tau \phi}{L^2} \, \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h} \nonumber \\ &\le& C\tau \int_{t - \tau}^{t} \norm{\nabla\partial_s\phi(s)}{L^2}^2 ds + \frac{\alpha}{6} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 .
\label{error-estimate-4} \end{eqnarray} Using Lemmas~\ref{lem:technical} and \ref{nonlinear-estimate}, as well as $\mathcal{E}^{\phi} =\mathcal{E}_a^{\phi}+\mathcal{E}_h^{\phi}$ and a standard error estimate, \begin{eqnarray} \frac{1}{\varepsilon}\left|\iprd{\phi^3-\hat{\phi}^3}{\delta_\tau \mathcal{E}_h^{\phi}}\right| &\le& C \norm{\nabla \left(\phi^3-\hat{\phi}^3 \right)}{L^2} \, \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h} \nonumber \\ &\le& C \norm{\nabla \left(\phi^3-\hat{\phi}^3 \right)}{L^2}^2 + \frac{\alpha}{6} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 \nonumber \\ &\le& C \norm{\nabla \mathcal{E}^{\phi}}{L^2}^2 + \frac{\alpha}{6} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 \nonumber \\ &\le& C \norm{\nabla \mathcal{E}_a^{\phi}}{L^2}^2 +C \norm{\nabla \mathcal{E}_h^{\phi}}{L^2}^2 + \frac{\alpha}{6} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 \nonumber \\ &\le& C h^{2q}\left| \phi \right|_{H^{q+1}}^2 + C \norm{\nabla \mathcal{E}_h^{\phi}}{L^2}^2 + \frac{\alpha}{6} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 . \label{error-estimate-5} \end{eqnarray} With similar steps as in the last estimate, \begin{eqnarray} \frac{1}{\varepsilon}\left|\iprd{L_\tau\mathcal{E}^{\phi} }{\delta_\tau \mathcal{E}_h^{\phi}}\right| &\le& C \norm{\nabla L_\tau \mathcal{E}^{\phi}}{L^2} \, \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h} \nonumber \\ &\le& C h^{2q}\left| L_\tau\phi \right|_{H^{q+1}}^2 +C \norm{\nabla L_\tau \mathcal{E}_h^{\phi}}{L^2}^2+ \frac{\alpha}{6}\norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 . \label{error-estimate-6} \end{eqnarray} Using the estimate \begin{equation} \norm{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)}{L^2}^2 \le C \norm{\nabla \mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)}{L^2}^2 = C \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 , \nonumber \end{equation} we obtain \begin{align} \left| \theta \, \iprd{\mathcal{E}_a^{\phi}}{\mathsf{T}_h \left(\delta_\tau \mathcal{E}_h^{\phi} \right)} \right| &\le \theta \norm{\mathcal{E}_a^{\phi}}{L^2} \,\norm{\mathsf{T}_h \left(\delta_\tau \mathcal{E}_h^{\phi} \right)}{L^2} \nonumber \\ & \le C h^{q+1}\left| \phi \right|_{H^{q+1}} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h} \nonumber \\ & \le C h^{2q+2}\left| \phi \right|_{H^{q+1}}^2 + \frac{\alpha}{6} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2. \label{error-estimate-7} \end{align} Now we consider the trilinear terms. Adding and subtracting the appropriate terms and using the triangle inequality gives \begin{align} \Bigg| - \bform{\phi}{{\bf u}}{\mathcal{E}_h^{\mu}} &+ \bform{L_{\tau} \hat{\phi}}{\hat{\bf u}}{\mathcal{E}_h^{\mu}} + \bform{\phi}{\mathcal{E}_h^{\bf u}}{\mu} - \bform{L_{\tau} \hat{\phi}}{\mathcal{E}_h^{\bf u}}{\hat{\mu}} \Bigg| \nonumber \\ & \le \left|\bform{\mathcal{E}_a^{\phi}}{{\bf u}}{\mathcal{E}_h^{\mu}} \right| + \left| \bform{L_{\tau} \mathcal{E}_h^{\phi}}{{\bf u}}{\mathcal{E}_h^{\mu}} \right| + \left| \bform{\tau \delta_\tau R_h \phi}{{\bf u}}{\mathcal{E}_h^{\mu}} \right| + \left| \bform{L_{\tau} \hat{\phi}}{\mathcal{E}_a^{\bf u}}{\mathcal{E}_h^{\mu}} \right| \nonumber \\ & + \left| \bform{\mathcal{E}_a^{\phi}}{\mathcal{E}_h^{\bf u}}{\mu} \right| + \left| \bform{L_{\tau} \mathcal{E}_h^{\phi}}{\mathcal{E}_h^{\bf u}}{\mu} \right| + \left| \bform{\tau\delta_\tau R_h \phi}{\mathcal{E}_h^{\bf u}}{\mu} \right| + \left| \bform{L_{\tau} \hat{\phi}}{\mathcal{E}_h^{\bf u}}{\mathcal{E}_a^{\mu}} \right|.
\end{align} With the assumption ${\bf u} \in L^\infty\left(0,T; {\bf L}^4(\Omega)\right)$, we have \begin{eqnarray} \left|\bform{\mathcal{E}_a^{\phi}}{{\bf u}}{\mathcal{E}_h^{\mu}}\right| &\le& \norm{\nabla \mathcal{E}_a^{\phi}}{L^2} \norm{{\bf u}}{L^4} \norm{\mathcal{E}_h^{\mu}}{L^4} \nonumber \\ &\le& C \norm{\nabla \mathcal{E}_a^{\phi}}{L^2}^2 + \frac{\varepsilon}{10} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 \nonumber \\ &\le& C h^{2q}\left| \phi \right|_{H^{q+1}}^2 + \frac{\varepsilon}{10} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 , \label{error-estimate-8} \end{eqnarray} as well as \begin{eqnarray} \left|\bform{L_{\tau} \mathcal{E}_h^{\phi}}{{\bf u}}{\mathcal{E}_h^{\mu}}\right| &\le& \norm{\nabla L_{\tau} \mathcal{E}_h^{\phi}}{L^2} \norm{{\bf u}}{L^4} \norm{\mathcal{E}_h^{\mu}}{L^4} \nonumber \\ &\le& C \norm{\nabla L_{\tau} \mathcal{E}_h^{\phi}}{L^2}^2 + \frac{\varepsilon}{10} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2. \label{error-estimate-9} \end{eqnarray} Using the stability of the elliptic projection, estimate \eqref{lag-estimate}, and ${\bf u} \in L^\infty\left(0,T; {\bf L}^4(\Omega)\right)$, \begin{eqnarray} \left| \bform{\tau\delta_\tau R_h \phi}{{\bf u}}{\mathcal{E}_h^{\mu}}\right| &\le& \norm{\nabla \tau\delta_\tau R_h \phi}{L^2} \norm{{\bf u}}{L^4} \norm{\mathcal{E}_h^{\mu}}{L^4} \nonumber \\ &\le& C \norm{\tau\nabla\delta_\tau R_h\phi}{L^2} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2} \nonumber \\ &\le& C \norm{\tau\nabla\delta_\tau \phi}{L^2} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2} \nonumber \\ &\le& C \norm{\tau\nabla \delta_\tau \phi}{L^2}^2 + \frac{\varepsilon}{10} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 \nonumber \\ &\le& C \tau \int_{t-\tau}^{t} \norm{\nabla\partial_s\phi(s)}{L^2}^2 ds + \frac{\varepsilon}{10} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2. \label{error-estimate-10} \end{eqnarray} Using (\ref{Linf-mu-phi}.3), \begin{eqnarray} \left|\bform{L_{\tau} \hat{\phi}}{\mathcal{E}_a^{\bf u}}{\mathcal{E}_h^{\mu}} \right| &\le& \norm{\nabla L_{\tau} \hat{\phi}}{L^2} \norm{\mathcal{E}_a^{\bf u}}{L^4} \norm{\mathcal{E}_h^{\mu}}{L^4} \nonumber \\ &\le& C \norm{\mathcal{E}_a^{\bf u}}{H^1}^2 + \frac{\varepsilon}{10} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 \nonumber \\ &\le& C h^{2q}\left(\left|{\bf u}\right|_{H^{q+1}}^2 + \left|p\right|_{H^q}^2\right) + \frac{\varepsilon}{10} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 \label{error-estimate-11}.
\end{eqnarray} Since we assume $\mu\in L^\infty\left(0,T;H^1(\Omega) \right)$, \begin{eqnarray} \left|\bform{\mathcal{E}_a^{\phi}}{\mathcal{E}_h^{\bf u}}{\mu} \right| &\le& \norm{\nabla \mathcal{E}_a^{\phi}}{L^2} \norm{\mathcal{E}_h^{\bf u}}{L^4} \norm{\mu}{L^4} \nonumber \\ &\le& C \norm{\nabla \mathcal{E}_a^{\phi}}{L^2} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2} \norm{\mu}{H^1} \nonumber \\ &\le& C \norm{\nabla \mathcal{E}_a^{\phi}}{L^2}^2 + \frac{\lambda}{8 \gamma} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2 \nonumber \\ &\le& C h^{2q}\left|\phi\right|_{H^{q+1}}^2 + \frac{\lambda}{8 \gamma} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2, \label{error-estimate-12} \end{eqnarray} and \begin{eqnarray} \left|\bform{L_{\tau} \mathcal{E}_h^{\phi}}{\mathcal{E}_h^{\bf u}}{\mu} \right| &\le& \norm{\nabla L_{\tau} \mathcal{E}_h^{\phi}}{L^2} \norm{\mathcal{E}_h^{\bf u}}{L^4} \norm{\mu}{L^4} \nonumber \\ &\le& C \norm{\nabla L_{\tau} \mathcal{E}_h^{\phi}}{L^2} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2} \norm{\mu}{H^1} \nonumber \\ &\le& C \norm{\nabla L_{\tau} \mathcal{E}_h^{\phi}}{L^2}^2 + \frac{\lambda}{8 \gamma} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2. \label{error-estimate-13} \end{eqnarray} Again, using $\mu \in L^\infty\left(0,T; H^1(\Omega)\right)$, the stability of the elliptic projection, and estimate \eqref{lag-estimate}, we have \begin{eqnarray} \left| \,\bform{\tau\delta_\tau R_h \phi}{\mathcal{E}_h^{\bf u}}{\mu}\right| &\le& \norm{\nabla \tau\delta_\tau R_h \phi}{L^2} \norm{\mathcal{E}_h^{\bf u}}{L^4} \norm{\mu}{H^1} \nonumber \\ &\le& C \norm{\tau\nabla \delta_\tau R_h\phi}{L^2} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2} \nonumber \\ &\le& C \norm{\tau\nabla \delta_\tau \phi}{L^2} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2} \nonumber \\ &\le& C \tau \int_{t-\tau}^{t} \norm{\nabla\partial_s\phi(s)}{L^2}^2 ds + \frac{\lambda}{8 \gamma} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2. \label{error-estimate-14} \end{eqnarray} Finally, \begin{eqnarray} \left|\bform{L_{\tau} \hat{\phi}}{\mathcal{E}_h^{\bf u}}{\mathcal{E}_a^{\mu}} \right| &\le& \norm{\nabla L_{\tau} \hat{\phi}}{L^2} \norm{\mathcal{E}_h^{\bf u}}{L^4} \norm{\mathcal{E}_a^{\mu}}{L^4} \nonumber \\ &\le& C \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2} \norm{\nabla \mathcal{E}_a^{\mu}}{L^2} \nonumber \\ &\le& \frac{\lambda}{8 \gamma} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2 + C \norm{\nabla \mathcal{E}_a^{\mu}}{L^2}^2 \nonumber \\ &\le& \frac{\lambda}{8 \gamma} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2 + C h^{2q}\left|\mu\right|_{H^{q+1}}^2. \label{error-estimate-15} \end{eqnarray} Combining the estimates \eqref{error-estimate-1} -- \eqref{error-estimate-15} with the error equation \eqref{error-eq-3} and using the triangle inequality, the result follows. \end{proof} \begin{lem} \label{lem-1,h-error-estimate} Suppose that $(\phi, \mu,{\bf u})$ is a weak solution to \eqref{weak-mch-error-a} -- \eqref{weak-mch-error-e}, with the additional regularities \eqref{regularities}.
Then, for any $h$, $\tau >0$, there exists a constant $C>0$, independent of $h$ and $\tau$, such that \begin{equation} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 \le 7\varepsilon^2 \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 + C \norm{\nabla L_{\tau} \mathcal{E}_h^{\phi}}{L^2}^2+ 7C_2^2 \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2 + C\mathcal{R}, \label{-1,h-error-estimate} \end{equation} for any $t\in (\tau, T]$, where $C_2 = C_0^2 C_1$, $C_0$ is the $H^1(\Omega) \hookrightarrow L^4(\Omega)$ Sobolev embedding constant, $C_1$ is a bound for $\max_{0\le t\le T}\norm{\nabla \hat\phi}{L^2}^2$, and $\mathcal{R}$ is the consistency term given in \eqref{consistency}. \end{lem} \begin{proof} Setting $\nu = \mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)$ in \eqref{error-a}, we have \begin{eqnarray} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 &=& - \varepsilon\, \aiprd{\mathcal{E}_h^{\mu}}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} + \iprd{\sigma^\phi}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} \nonumber \\ && - \,\bform{\phi}{{\bf u}}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} + \bform{L_{\tau} \hat{\phi}}{\hat{\bf u}}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} \nonumber \\ &=& - \,\varepsilon \iprd{\mathcal{E}_h^{\mu}}{\delta_\tau \mathcal{E}_h^{\phi}} + \iprd{\sigma^\phi}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} \nonumber \\ && - \,\bform{\mathcal{E}_a^{\phi}}{{\bf u}}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} - \bform{L_{\tau} \mathcal{E}_h^{\phi}}{{\bf u}}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} \nonumber \\ && - \bform{\tau\delta_\tau R_h \phi}{{\bf u}}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} - \bform{L_{\tau} \hat{\phi}}{\mathcal{E}_a^{\bf u}}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} - \bform{L_{\tau} \hat{\phi}}{\mathcal{E}_h^{\bf u}}{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)} \nonumber \\ &\le& \varepsilon \norm{\nabla \mathcal{E}_h^{\mu}}{L^2} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h} + \norm{\sigma^\phi}{L^2} \norm{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)}{L^2} \nonumber \\ && + \,\norm{\nabla \mathcal{E}_a^{\phi}}{L^2} \norm{{\bf u}}{L^4} \norm{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)}{L^4} + \norm{\nabla L_{\tau} \mathcal{E}_h^{\phi}}{L^2} \norm{{\bf u}}{L^4} \norm{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)}{L^4} \nonumber \\ && + \norm{\tau \nabla \delta_\tau R_h \phi}{L^2} \norm{{\bf u}}{L^4} \norm{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)}{L^4} + \norm{\nabla L_{\tau} \hat{\phi}}{L^2} \norm{\mathcal{E}_a^{\bf u}}{L^4} \norm{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)}{L^4} \nonumber \\ && + \norm{\nabla L_{\tau} \hat{\phi}}{L^2} \norm{\mathcal{E}_h^{\bf u}}{L^4} \norm{\mathsf{T}_h\left(\delta_\tau \mathcal{E}_h^{\phi} \right)}{L^4} \nonumber \\ &\le& \frac{7\varepsilon^2}{2} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 + \frac{1}{14} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 + C \norm{\sigma^\phi}{L^2}^2 + \frac{1}{14} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 \nonumber \\ && + \,C \norm{\nabla \mathcal{E}_a^{\phi}}{L^2}^2 + \frac{1}{14} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 + C \norm{\nabla L_{\tau} \mathcal{E}_h^{\phi}}{L^2}^2 + \frac{1}{14} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 \nonumber \\ && + \,C \tau^2 \,\norm{\nabla \delta_\tau R_h \phi}{L^2}^2 + \frac{1}{14} 
\norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 + C \norm{\mathcal{E}_a^{\bf u}}{L^2}^2 + \frac{1}{14} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 \nonumber \\ &&+ \frac{7C_2^2}{2} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2 + \frac{1}{14} \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 \nonumber \\ &\le& \frac12 \norm{\delta_\tau \mathcal{E}_h^{\phi}}{-1,h}^2 + \frac{7\varepsilon^2}{2} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 + \frac{7C_2^2}{2} \norm{\nabla \mathcal{E}_h^{\bf u}}{L^2}^2 +C \norm{\nabla L_{\tau} \mathcal{E}_h^{\phi}}{L^2}^2 + C\mathcal{R} , \end{eqnarray} where we have used Lemmas~\ref{lem-negative-norm-discrete} and \ref{truncation-errors}. The result now follows. \end{proof} \begin{lem} \label{lem-error-3} Suppose that $(\phi, \mu,{\bf u})$ is a weak solution to \eqref{weak-mch-error-a} -- \eqref{weak-mch-error-e}, with the additional regularities \eqref{regularities}. Then, for any $h$, $\tau >0$, there exists a constant $C>0$, independent of $h$ and $\tau$, such that \begin{align} \norm{\nabla \mathcal{E}_h^{\mu}}{L^2}^2 + \norm{\mathcal{E}_h^{\bf u}}{H^1}^2 + \aiprd{\mathcal{E}_h^{\phi}}{\delta_\tau\mathcal{E}_h^{\phi}} &+ \iprdmone{\mathcal{E}_h^{\phi}}{\delta_\tau \mathcal{E}_h^{\phi}} + \iprd{\delta_\tau \mathcal{E}_h^{\bf u}}{\mathcal{E}_h^{\bf u}} \nonumber \\ &\le C \norm{\nabla \mathcal{E}_h^{\phi}}{L^2}^2 + C \norm{\nabla L_\tau \mathcal{E}_h^{\phi}}{L^2}^2 + C\mathcal{R} . \end{align} \end{lem} \begin{proof} This follows upon combining the last two lemmas and choosing $\alpha$ in \eqref{alpha-estimate} appropriately. \end{proof} Using the last lemma, we are ready to show the main convergence result for our convex-splitting scheme. \begin{thm} \label{thm-error-estimate} Suppose $(\phi, \mu,{\bf u})$ is a weak solution to \eqref{weak-mch-error-a} -- \eqref{weak-mch-error-e}, with the additional regularities \eqref{regularities}. Then, provided $0<\tau <\tau_0$, for some $\tau_0$ sufficiently small, \begin{align} \max_{1\le m \le M} \left[\norm{\nabla\mathcal{E}_h^{\phi}(t_m)}{L^2}^2 + \norm{\mathcal{E}_h^{\phi}(t_m)}{-1,h}^2 + \norm{\mathcal{E}_h^{\bf u}(t_m)}{L^2}^2\right] & \nonumber \\ + \tau\sum_{m=1}^M\left[\norm{\nabla\mathcal{E}_h^{\mu}(t_m)}{L^2}^2 + \norm{\mathcal{E}_h^{\bf u}(t_m)}{H^1}^2 + \norm{\delta_\tau\mathcal{E}_h^{\phi}(t_m)}{-1,h}^2 \right] &\le C(T)(\tau^2+h^{2q}) \end{align} for some $C(T)>0$ that is independent of $\tau$ and $h$. \end{thm} \begin{proof} Setting $t=t_m$ and using Lemma~\ref{lem-error-3} and the arithmetic-geometric mean inequality, we have \begin{align} \delta_\tau\norm{\nabla\mathcal{E}_h^{\phi}(t_m)}{L^2}^2 + \delta_\tau\norm{\mathcal{E}_h^{\phi}(t_m)}{-1,h}^2 + \delta_\tau\norm{\mathcal{E}_h^{\bf u}(t_m)}{L^2}^2 & \nonumber \\ + \norm{\nabla\mathcal{E}_h^{\mu}(t_m)}{L^2}^2 +\norm{\mathcal{E}_h^{\bf u}(t_m)}{H^1}^2 & \le \,C\norm{\nabla\mathcal{E}_h^{\phi}(t_m)}{L^2}^2 + C\norm{\nabla\mathcal{E}_h^{\phi}(t_{m-1})}{L^2}^2 \nonumber \\ & \hspace{0.2in} + C\mathcal{R}(t_m). \end{align} Let $1 < \ell \le M$.
Applying $\tau\sum_{m=1}^\ell$ and using $\mathcal{E}_h^{\phi}(t_0) \equiv 0$, $\mathcal{E}_h^{\bf u}(t_0) \equiv {\bf 0}$, \begin{align} \norm{\nabla\mathcal{E}_h^{\phi}(t_\ell)}{L^2}^2 + \norm{\mathcal{E}_h^{\phi}(t_\ell)}{-1,h}^2 + \norm{\mathcal{E}_h^{\bf u}(t_\ell)}{L^2}^2 & \nonumber \\ + \tau \sum_{m=1}^\ell \left[\norm{\nabla\mathcal{E}_h^{\mu}(t_m)}{L^2}^2 + \norm{\mathcal{E}_h^{\bf u}(t_m)}{H^1}^2 \right] &\le C\tau\sum_{m=1}^\ell \mathcal{R}(t_m) + C_1\tau\sum_{m=1}^\ell\norm{\nabla\mathcal{E}_h^{\phi}(t_m)}{L^2}^2 . \label{estimate-before-gronwall} \end{align} If $0< \tau \le \tau_0:= \frac{1}{2C_1} < \frac{1}{C_1}$, it follows from the last estimate that \begin{eqnarray} \norm{\nabla\mathcal{E}_h^{\phi}(t_\ell)}{L^2}^2 &\le& C\tau\sum_{m=1}^\ell \mathcal{R}(t_m) + \frac{C_1\tau}{1-C_1\tau} \sum_{m=1}^{\ell-1}\norm{\nabla\mathcal{E}_h^{\phi} (t_m)}{L^2}^2 \nonumber \\ &\le& C(\tau^2+h^{2q}) + C\tau \sum_{m=1}^{\ell-1}\norm{\nabla\mathcal{E}_h^{\phi}(t_m)}{L^2}^2 , \label{pre-gronwall} \end{eqnarray} where we have used the regularity assumptions to conclude $\tau\sum_{m=1}^M \mathcal{R}(t_m)\le C(\tau^2+h^{2q})$. Appealing to the discrete Gronwall inequality \eqref{gronwall-conclusion}, it follows that, for any $1 < \ell \le M$, \begin{equation} \norm{\nabla\mathcal{E}_h^{\phi}(t_\ell)}{L^2}^2 \le C(T)(\tau^2+h^{2q}). \label{post-gronwall} \end{equation} Considering estimates \eqref{estimate-before-gronwall} and \eqref{-1,h-error-estimate}, we get the desired result. \end{proof} \begin{rem} From here it is straightforward to establish an optimal error estimate of the form \begin{align} \max_{1\le m \le M}\left[ \norm{\nabla\mathcal{E}^{\phi}(t_m)}{L^2}^2 +\norm{\mathcal{E}^{\bf u}(t_m)}{L^2}^2\right] + \tau\sum_{m=1}^M \left[\norm{\nabla\mathcal{E}^{\mu}(t_m)}{L^2}^2 + \norm{\nabla\mathcal{E}^{\bf u}(t_m)}{L^2}^2\right] \le C(T)(\tau^2+h^{2q}) \end{align} using $\mathcal{E}^{\phi} = \mathcal{E}_a^{\phi} + \mathcal{E}_h^{\phi}$, \emph{et cetera}, the triangle inequality, and the standard spatial approximations. We omit the details for the sake of brevity. \end{rem} \section{Numerical Experiments} \label{sec:numerincal-experiments} In this section, we provide some numerical experiments to gauge the accuracy and reliability of the fully discrete finite element method developed in the previous sections. We use a square domain $\Omega = (0,1)^2\subset \mathbb{R}^2$ and take ${\mathcal T}_h$ to be a regular triangulation of $\Omega$ consisting of right isosceles triangles. To refine the mesh, we assume that ${\mathcal T}_{\ell}, \ {\ell} = 0, 1, ..., L$, is a hierarchy of nested triangulations of $\Omega$, where ${\mathcal T}_{\ell}$ is obtained by subdividing the triangles of ${\mathcal T}_{\ell -1}$ into four congruent sub-triangles. Note that $h_{\ell -1} = 2h_{\ell}, \ {\ell} = 1, ..., L$ and that $\{{\mathcal T}_{\ell}\}$ is a quasi-uniform family. For the flow problem, we use the inf-sup-stable Taylor-Hood element where the ${\mathcal P}_1$ finite element space is used for the pressure and the $\left[{\mathcal P}_2\right]^2$ finite element space is used for the velocity. (We use a family of meshes ${\mathcal T}_h$ such that no triangle in the mesh has more than one edge on the boundary, as is usually required for the stability of the Taylor-Hood element~\cite{brenner08}.) We use the ${\mathcal P}_1$ finite element space for the phase field and chemical potential. In short, we take $q=1$.
We solve the scheme \eqref{scheme-a} -- \eqref{scheme-e} with the following parameters: $\lambda =1$, $\eta =1$, $\theta = 0$, and $\varepsilon = 6.25 \times 10^{-2}$. The initial data for the phase field are taken to be \begin{equation} \phi_{h}^0 = \mathcal{I}_h\left\{ \frac{1}{2}\Big(1.0-\cos(4.0\pi x)\Big)\cdot \Big(1.0-\cos(2.0\pi y)\Big)-1.0\right\} , \end{equation} where $\mathcal{I}_h : H^2\left(\Omega\right) \to S_h$ is the standard nodal interpolation operator. Recall that our analysis does not specifically cover the use of the operator $\mathcal{I}_h$ in the initialization step. However, since the error introduced by its use is optimal, a slight modification of the analysis shows that this will lead to optimal rates of convergence overall. (See Remark~\ref{rem:initial-projection}.) The initial data for the velocity are taken as ${\bf u}_h^0 = {\bf 0}$. Values of the remaining parameters are given in the caption of Table~\ref{tab1}. To solve the system of equations above numerically, we use the finite element libraries from the FEniCS Project~\cite{fenics12}. We solve the fully coupled system by a Picard-type iteration. Namely, at a given time step we fix the velocity and pressure, then solve for $\phi_h$, $\mu_h$, and $\xi_h$. With these updated, we then solve for the velocity and pressure. This is repeated until convergence. \begin{table}[h!] \centering \begin{tabular}{cccccccc} $h_c$ & $h_f$ & $\norm{\delta_\phi}{H^1}$ & rate & $\norm{\delta_\mu}{H^1}$ & rate & $\norm{\delta_p}{H^1}$ & rate \\ \hline $\nicefrac{\sqrt{2}}{8}$ & $\nicefrac{\sqrt{2}}{16}$ & $1.988\times 10^0$ & -- & $2.705\times 10^0$ & -- & $3.732\times 10^0$ & -- \\ $\nicefrac{\sqrt{2}}{16}$ & $\nicefrac{\sqrt{2}}{32}$ & $9.149\times 10^{-1}$ & 1.09 & $1.309\times 10^0$ & 1.03 & $9.73\times 10^{-1}$ & 1.92 \\ $\nicefrac{\sqrt{2}}{32}$ & $\nicefrac{\sqrt{2}}{64}$ & $4.483\times 10^{-1}$ & 1.02 & $6.216\times 10^{-1}$ & 1.05 & $9.417\times 10^{-1}$ & 1.02 \\ $\nicefrac{\sqrt{2}}{64}$ & $\nicefrac{\sqrt{2}}{128}$ & $2.231\times 10^{-1}$ & 1.00 & $3.056\times 10^{-1}$ & 1.02 & $4.688\times 10^{-1}$ & 1.00 \\ \hline \end{tabular} \caption{$H^1$ Cauchy convergence test. The final time is $T = 4.0\times 10^{-1}$, and the refinement path is taken to be $\tau = 0.001\sqrt{2}h$. The other parameters are $\varepsilon =6.25\times 10^{-2}$; $\Omega = (0,1)^2$. The Cauchy difference is defined via $\delta_\phi := \phi_{h_f}-\phi_{h_c}$, where the approximations are evaluated at time $t=T$, and analogously for $\delta_\mu$, and $\delta_p$. (See the discussion in the text.) Since $q=1$, \emph{i.e.}, we use ${\mathcal P}_1$ elements for these variables, the norm of the Cauchy difference at $T$ is expected to be $\mathcal{O}(\tau_f)+\mathcal{O}\left(h_f\right) = \mathcal{O}\left(h_f\right)$.} \label{tab1} \end{table} Note that source terms are not naturally present in the system of equations \eqref{eq:CH-mixed-a-alt} -- \eqref{eq:CH-mixed-f-alt}. Therefore, it is somewhat artificial to add them to the equations in an attempt to manufacture exact solutions. To get around the fact that we do not have exact solutions, we measure the error by different means. Specifically, we compute the rate at which the Cauchy difference, $\delta_\zeta := \zeta^{M_f}_{h_f} - \zeta^{M_c}_{h_c}$, converges to zero, where $h_c=2h_f$, $\tau_c = 2\tau_f$, and $\tau_fM_f = \tau_cM_c=T$.
Then, using a linear refinement path, \emph{i.e.}, $\tau = Ch$, and assuming $q = 1$, we have \begin{equation} \norm{\delta_\zeta}{H^1} = \norm{\zeta^{M_f}_{h_f} - \zeta^{M_c}_{h_c}}{H^1} \le \norm{\zeta^{M_f}_{h_f}-\zeta(T)}{H^1}+ \norm{\zeta^{M_c}_{h_c}-\zeta(T)}{H^1} = \mathcal{O}(h_f^q+\tau_f) = \mathcal{O}(h_f). \end{equation} The results of the $H^1$ Cauchy error analysis are found in Table~\ref{tab1} and confirm first-order convergence in this case. Additionally, we have proved that (at the theoretical level) the energy is non-increasing at each time step. This is observed in our computations, but, for the sake of brevity, we will suppress an extensive discussion of numerical energy dissipation.
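For reference, the rate column of Table~\ref{tab1} follows from the standard formula $\text{rate} = \log\big(\norm{\delta_\zeta^{(\ell-1)}}{H^1}/\norm{\delta_\zeta^{(\ell)}}{H^1}\big)/\log 2$, since $h$ is halved at each refinement level. The following is a minimal Python sketch; the hard-coded values are the $\norm{\delta_\phi}{H^1}$ column of Table~\ref{tab1}, and small deviations from the printed rates can arise from rounding of the stored errors:

\begin{verbatim}
import math

# H^1 norms of the Cauchy differences at T (delta_phi column of Table 1).
errors = [1.988e0, 9.149e-1, 4.483e-1, 2.231e-1]

# With tau = C*h and h halved at every level, the observed rate between
# consecutive levels is log2 of the ratio of successive Cauchy errors.
for e_coarse, e_fine in zip(errors, errors[1:]):
    print(f"rate = {math.log2(e_coarse / e_fine):.2f}")
\end{verbatim}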
\section{Introduction} Customer feedback analysis is the task of classifying short text messages into a set of predefined labels (e.g., bug, request). It is an important step towards effective customer support. However, a real bottleneck for successful classification of customer feedback in a multilingual environment is the limited transferability of such models, i.e., typically each time a new language is encountered a new model is built from scratch. This is clearly impractical, as maintaining separate models is cumbersome, besides the fact that existing annotations are simply not leveraged. In this paper we present our submission to the IJCNLP 2017 shared task on customer feedback analysis, in which data from four languages was available (English, French, Japanese and Spanish). Our goal was to build a single system for all four languages, and compare it to the traditional approach of creating separate systems for each language. We hypothesize that a single system is beneficial, as it can provide positive transfer, particularly for the languages for which less data is available. The contributions of this paper are: \begin{itemize} \setlength\itemsep{-1pt} \item We propose a very simple multilingual model for four languages that overall ranks first (out of 12 teams) in the IJCNLP 2017 shared task on Customer Feedback Analysis. \item We show that a traditional model outperforms neural approaches in this low-data scenario. \item We show the effectiveness of a very simple approach to induce multilingual embeddings that does not require any parallel data. \item Our \textsc{All-In-1} model is particularly effective on languages for which little data is available. \item Finally, we compare our approach to automatic translation, showing that translation negatively impacts classification accuracy. \item To support reproducibility and follow-up work, all code is available at: {\url{https://github.com/bplank/ijcnlp2017-customer-feedback}} \end{itemize} \section{\textsc{All-In-1}: One Model for All} Motivated by the goal of evaluating how well a single model for multiple languages fares, we decided to build a very simple model that can handle any of the four languages. We aimed at an approach that does \textit{not} require any language-specific processing (beyond tokenization) nor any parallel data. We set out to build a simple baseline, which turned out to be surprisingly effective. Our model is depicted in Figure~\ref{fig:model}. \begin{figure}[h!]\center \includegraphics[width=0.85\columnwidth]{all-in-1} \caption{Overview of our \textsc{All-In-1} model.} \label{fig:model} \end{figure} Our key motivation is to provide a simple, general system as opposed to the usual ad-hoc setups one can expect in a multilingual shared task. We therefore rely on character n-grams, word embeddings, and a traditional classifier, motivated as follows. First, character n-grams and traditional machine learning algorithms have proven successful for a variety of classification tasks, e.g., native language identification and language detection. In recent shared tasks simple traditional models outperformed deep neural approaches like CNNs or RNNs, e.g.,~\cite{medvedeva-kroon-plank:2017:VarDial,zampieri-EtAl:2017:VarDial,malmasi-EtAl:2017:BEA,kulmizev-EtAl:2017:BEA}. This motivated our choice of using a traditional model with character n-gram features. Second, we build upon the recent success of multilingual embeddings.
These are embedding spaces in which word types of different languages are embedded into the same high-dimensional space. Early work focused mainly on bilingual setups, while recent research aims at mapping several languages into a single space. The body of literature is huge, but an excellent recent overview is given in~\newcite{xlingsurvey}. We chose a very simple and recently proposed method that does not rely on any parallel data~\cite{smith2017offline} and extend it to the multilingual case. In particular, the method falls under the broad umbrella of \textit{monolingual mappings}. These approaches first train monolingual embeddings on large unlabeled corpora for the single languages. They then learn linear mappings between the monolingual embeddings to map them to the same space. The approach we apply here is particularly interesting as it does not require parallel data (parallel sentences/documents or dictionaries) and is readily applicable to off-the-shelf embeddings. In brief, the approach aims at learning a transformation in which word vector spaces are orthogonal (by applying SVD) and it leverages so-called ``pseudo-dictionaries''. That is, the method first finds the common word types in two embedding spaces, and uses those as pivots to learn to align the two spaces (cf.\ further details in~\newcite{smith2017offline}). \section{Experimental Setup} In this section we first describe the IJCNLP 2017 shared task 4\footnote{\url{https://sites.google.com/view/customer-feedback-analysis/}} including the data, the features, the model, and the evaluation metrics. \subsection{Task Description} The customer feedback analysis task~\cite{liuetal:2017:IJCNLP} is a short-text classification problem. Given a customer feedback message, the goal is to detect the type of customer feedback. For each message, the organizers provided one or more labels. To give a more concrete idea of the data, the following are examples of the English dataset: \begin{itemize} \setlength\itemsep{1pt} \item ``Still calls keep dropping with the new update'' (\textit{bug}) \item ``Room was grubby, mold on windows frames.'' (\textit{complaint}) \item ``The new update is amazing.'' (\textit{comment}) \item ``Needs more control s and tricks..'' (\textit{request}) \item ``Enjoy the sunshine!!'' (\textit{meaningless}) \end{itemize} \subsection{Data}\label{sec:data} The data stems from a joint ADAPT-Microsoft project. An overview of the provided dataset is given in Table~\ref{tbl:stats}. Notice that the available amount of data differs per language. \begin{table}\centering \begin{tabular}{lrrrr} \toprule & \textsc{En} & \textsc{Es} & \textsc{Fr} & \textsc{Jp}\\ \midrule \textsc{Train} & 3066 & 1632 & 1951 & 1527\\ \textsc{Dev} & 501 & 302 & 401 & 251\\ \textsc{Test} & 501 & 300 & 401 & 301 \\ \bottomrule \end{tabular} \caption{Overview of the dataset (instances).} \label{tbl:stats} \end{table} \begin{figure}[h!] \includegraphics[width=\columnwidth]{distrlabels} \caption{Distribution of the labels per language.} \label{fig:distrlabels} \end{figure} We treat the customer feedback analysis problem as a single-class classification task and actually ignore multi-label instances, as motivated next. The final label distribution for the data is given in Figure~\ref{fig:distrlabels}. In initial investigations of the data we noticed that very few instances had multiple labels, e.g., ``\textit{comment},\textit{complaint}''. In the English training data this amounted to $\sim$4\% of the data.
We decided to ignore those additional labels (we just picked the first in case of multiple labels) and treat the problem as a single-class classification problem. This was motivated by the fact that some labels were expected to be easily confused. Finally, there were some labels in the data that did not map to any of the labels in the task description (i.e., `\textit{undetermined}', `\textit{undefined}', `\textit{nonsense}' and `\textit{noneless}'; presumably typos), so we mapped them all to the `\textit{meaningless}' label. This frames the task as a 5-class classification problem with the following classes: \begin{itemize} \setlength\itemsep{0pt} \item \textit{bug}, \item \textit{comment}, \item \textit{complaint}, \item \textit{meaningless} and \item \textit{request}. \end{itemize} At test time the organizers additionally provided us with \textit{translations} of the three language-specific test datasets back to English. These translations were obtained by Google Translate. This allowed us to evaluate our English model on the translations, to gauge whether translation is a viable alternative to training a multilingual model. \subsection{Pre-processing}\label{sec:preproc} We perform two simple preprocessing steps. First of all, we tokenize all data using off-the-shelf tokenizers. We use \texttt{tinysegmenter}\footnote{\url{https://pypi.python.org/pypi/tinysegmenter}} for Japanese and the NLTK \texttt{TweetTokenizer} for all other languages. The Japanese segmenter was crucial for getting sufficient coverage from the word embeddings later. No additional preprocessing is performed. \subsection{Multilingual Embeddings} Word embeddings for single languages are readily available, for example the Polyglot\footnote{Despite their name the Polyglot embeddings are actually monolingual embeddings, but available for many languages.} or Facebook embeddings~\cite{bojanowski2016enriching}, which were recently released. In this work we start from the monolingual embeddings provided by the Polyglot project~\cite{polyglot:2013}. We use the recently proposed approach based on SVD decomposition and a ``pseudo-dictionary''~\cite{smith2017offline} obtained from the monolingual embeddings to project embedding spaces. To extend their method from the bilingual to the multilingual case, we apply pair-wise projections by using English as the pivot, similar in spirit to~\newcite{ammar2016massively}. We took English as our development language. We also experimented with using larger embeddings (Facebook embeddings; larger in the sense of both trained on more data and having higher dimensionality); however, results were comparable while training time increased; therefore, we decided to stick to the smaller 64-dimensional Polyglot embeddings. \subsection{Model and Features} As classifier we use a traditional model, a Support Vector Machine (SVM) with a linear kernel implemented in \texttt{scikit-learn}~\cite{scikit-learn}. We tune the regularization parameter $C$ on the English development set and keep the parameter fixed for the remaining experiments and all languages ($C=10$). We compared the SVM to \texttt{fastText}~\cite{joulin2016bag}. As we had expected, \texttt{fastText} gave consistently lower performance, presumably because of the small amounts of training data. Therefore we did not further explore neural approaches. Our features are character n-grams (3-10 grams, with binary tf-idf) and word embeddings.
For the latter we use a simple continuous bag-of-words representation~\cite{collobert2011natural} based on averaging and min-max scaling. Additionally, we experimented with adding Part-Of-Speech (POS) tags to our model. However, to keep in line with our goal to build a \textit{single system for all languages}, we trained a single multilingual POS tagger by exploiting the projected multilingual embeddings. In particular, we trained a state-of-the-art bidirectional LSTM tagger~\cite{plank:ea:2016}\footnote{\url{https://github.com/bplank/bilstm-aux}} that uses both word and character representations on the concatenation of language-specific data provided by the Universal Dependencies data (version 1.2 for En, Fr and Es and version 2.0 data for Japanese, as the latter was not available in free-form in the earlier version). The word embeddings module of the tagger is initialized with the multilingual embeddings. We investigated POS n-grams (1 to 3 grams) as additional features. \subsection{Evaluation} We decided to evaluate our model using weighted F1-score, i.e., the per-class F1 score is calculated and averaged by weighting each label by its support. Notice that, since our setup deviates from the shared task setup (single-label versus multi-label classification), the final evaluation metric is different. We will report on weighted F1-score for the development and test set (with simple macro averaging), but use Exact-Accuracy and Micro F1 over all labels when presenting official results on the test sets. The latter two metrics were part of the official evaluation metrics. For details we refer the reader to the shared task overview paper~\cite{liuetal:2017:IJCNLP}. \section{Results} We first present results on the provided development set, then on the official evaluation test set. \subsection{Results on Development} \begin{table} \resizebox{\columnwidth}{!}{ \begin{tabular}{lcccc|c} \toprule & \textsc{En} & \textsc{Es}& \textsc{Fr} & \textsc{Jp} & \textsc{Avg}\\ \midrule \multicolumn{6}{c}{ \textsc{Monolingual Models} }\\ Embeds & 50.6 & 82.0 & 66.5 & 65.1 & 66.05\\ Words (W) & 66.1 & 86.9 & 73.2 & 73.6 & 74.95\\ Chars (C) & 68.2 & 87.1 & 76.1 & 74.0 & 76.35\\ W+C & 65.9 & 87.7 & 75.7 & 74.0 & 75.82\\ C+Embeds$\ddagger$ & 66.1 & 86.6 & 76.5 & 77.1 & 76.58\\ W+C+Embeds & 65.9 & 87.8 & 75.6 & 76.8 & 76.52\\ \midrule \multicolumn{6}{c}{\textsc{Bilingual Model}}\\ En+Es & 67.6 & 86.6 & -- & -- & --\\ En+Fr & 66.6 & -- & 77.8 & -- & --\\ En+Jp & 66.7 & -- & -- & 77.9 & --\\ \multicolumn{6}{c}{\textsc{Multilingual Model}}\\ En+Es+Fr & 68.3 & 87.0 & 77.9 & -- & --\\ \textsc{All-in-1}$\ddagger$ & 68.8 & 87.7 & 76.4 & 77.2 & 77.5\\ \midrule \textsc{All-in-1}+POS & 68.4 & 86.0 & 74.4 & 74.5 & 75.8\\ \bottomrule \end{tabular} } \caption{Results on the development data, weighted F1. \textsc{Monolingual}: per-language model; \textsc{Multilingual}: \textsc{All-In-1} (with C+Embeds features trained on En+Es+Fr+Jp). $\ddagger$ indicates submitted systems.} \label{tbl:resultsdev} \end{table} First of all, we evaluated different feature representations. As shown in Table~\ref{tbl:resultsdev}, character n-grams alone prove very effective, outperforming word n-grams and word embeddings alone. Overall, simple character n-grams (C) in isolation are often more beneficial than word and character n-grams together, albeit for some languages results are close. The best representation is character n-grams combined with word embeddings.
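To make this feature set concrete, the following is a minimal Python sketch of the pipeline described above (character 3--10 grams with binary tf-idf, plus averaged word embeddings with min-max scaling, fed to a linear SVM with $C=10$). It is an illustrative re-implementation, not our released code: \texttt{emb} stands for any token-to-vector map in the shared 64-dimensional space, the toy texts and labels are placeholders, and we use \texttt{LinearSVC} although any linear-kernel SVM would do.

\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

def avg_embedding(tokens, emb, dim=64):
    # Continuous bag of words: average the vectors of in-vocabulary tokens.
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Toy stand-ins; in practice texts/tokens come from the tokenized feedback.
texts = ["The new update is amazing.", "Still calls keep dropping."]
labels = ["comment", "bug"]
tokens = [t.lower().split() for t in texts]
emb = {w: np.random.rand(64) for ws in tokens for w in ws}

# Character 3-10 grams, binarized counts with tf-idf weighting.
vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 10), binary=True)
X_char = vec.fit_transform(texts)
# Averaged embeddings, min-max scaled, concatenated with the n-grams.
E = MinMaxScaler().fit_transform(
    np.stack([avg_embedding(ws, emb) for ws in tokens]))
X = hstack([X_char, csr_matrix(E)])

clf = LinearSVC(C=10).fit(X, labels)
print(clf.predict(X))
\end{verbatim}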
This representation provides the basis for our multilingual model which relies on multilingual embeddings. The two officially submitted models both use character n-grams (3-10) and word embeddings. Our first official submission, \textsc{Monolingual} is the per-language trained model using this representation. Next we investigated adding more languages to the model, by relying on the multilingual embeddings as bridge. For instance in Table~\ref{tbl:resultsdev}, the model indicated as En+Es is a character and word embedding-based SVM trained using bilingual embeddings created by mapping the two monolingual embeddings onto the same space and using both the English and Spanish training material. As the results show, using multiple languages can improve over the in-language development performance of the character+embedding model. However, the bilingual models are still only able to handle pairs of languages. We therefore mapped all embeddings to a common space and train a single multilingual \textsc{All-in-1} model on the union of all training data. This is the second model that we submitted to the shared task. As we can see from the development data, on average the multilingual model shows promising, overall (macro average) outperforming the single language-specific models. However, the multilingual model does not consistently fare better than single models, for example on French a monolingual model would be more beneficial. Adding POS tags did not help (cf.\ Table~\ref{tbl:resultsdev}), actually dropped performance. We disregard this feature for the final official runs. \subsection{Test Performance} \begin{table}\resizebox{\columnwidth}{!}{ \begin{tabular}{lcccc|c} \toprule & \textsc{En} & \textsc{Es}& \textsc{Fr} & \textsc{Jp} & \textsc{Avg}\\ \midrule \textsc{Monoling} & \textbf{68.6} & 88.2 & \textbf{76.1} & 74.3 & 76.8\\ \textsc{Multiling} & 68.1 & \textbf{88.7} & 73.9 & \textbf{76.7} & \textbf{76.9}\\ \textsc{Translate} & -- & 83.4 & 69.5 & 61.6 & --\\ \bottomrule \end{tabular} } \caption{Results on the test data, weighted F1. \textsc{Monoling}: monolingual models. \textsc{Multiling}: the multilingual \textsc{All-In-1} model. \textsc{Trans}: translated targets to English and classified with \textsc{En} model.} \label{tbl:resultstestwF1} \end{table} We trained the final models on the concatenation of \textsc{Train} and \textsc{Dev} data. The results on the test set (using our internally used weighted F1 metric) are given in Table~\ref{tbl:resultstestwF1}. There are two take-away points from the main results: First, we see a positive transfer for languages with little data, i.e., the single multilingual model outperforms the language-specific models on the two languages (Spanish and Japanese) which have the least amount of training data. Overall results between the monolingual and multilingual model are close, but the advantage of our multilingual \textsc{All-in-1} approach is that it is a single model that can be applied to all four languages. Second, automatic translation harms, the performance of the \textsc{EN} model on the translated data is substantially lower than the respective in-language model. We could investigate this as the organizers provided us with translations of French, Spanish and Japanese back to English. \begin{table}[h!] 
\resizebox{\columnwidth}{!}{ \begin{tabular}{lccccc} \toprule & \textsc{En} & \textsc{Es} & \textsc{Fr} & \textsc{Jp} & \textsc{Avg} \\ \midrule Ours (\textsc{Multiling}) & 68.60 & \textbf{88.63} & 71.50 & \textbf{75.00} & \textbf{76.04}\\ Ours (\textsc{Monoling}) & 68.80 & 88.29 & \textbf{73.75} & 73.33 & 75.93\\ YNU-HPP-glove$\dagger$ & \textbf{71.00} & -- & -- & -- & --\\ FYZU-bilstmcnn & 70.80 & -- & -- & -- & -- \\ IITP-CNN/RNN & 70.00 & 85.62 & 69.00 & 63.00 & 71.90\\ TJ-single-CNN$\dagger$ & 67.40 & -- & -- & -- & --\\ Baseline & 48.80 & 77.26 & 54.75 & 56.67 & 59.37\\ \bottomrule \end{tabular} } \caption{Final test set results (Exact accuracy) for the top 5 teams (ranked by macro average accuracy). Rankings for micro F1 are similar; we refer to the shared task paper for details. Winning system per language in bold. $\dagger$: no system description available at the time of writing this description paper.} \label{tbl:resultstop} \end{table} Averaged over all languages, our system ranked first, cf.\ Table~\ref{tbl:resultstop} for the results of the top 5 submissions. The multilingual model reaches the overall best exact accuracy; for two languages, training an in-language model would be slightly more beneficial at the cost of maintaining a separate model. The similarity-based baseline provided by the organizers\footnote{``Using n-grams (n=1,2,3) to compute sentence similarity (which is normalized by the length of sentence). Use the tag(s) of the most similar sentence in training set as predicted tag(s) of a sentence in the test set.''} is considerably lower. Our system was outperformed on English by three teams, most of which focused only on English. Unfortunately, at the time of writing there is no system description available for most other top systems, so we cannot say whether they used more English-specific features. From the system names of other teams we may infer that most teams used neural approaches, and they scored worse than our SVM-based system. The per-label breakdown of our systems on the official test data (using micro F1 as calculated by the organizers) is given in Table~\ref{tbl:resultstest}. Unsurprisingly, less frequent labels are more difficult to predict.
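For completeness, the per-class and support-weighted scores used throughout can be computed directly with \texttt{scikit-learn}; a minimal sketch with hypothetical toy label lists:

\begin{verbatim}
from sklearn.metrics import f1_score

classes = ["comment", "complaint", "request", "meaningless", "bug"]
y_true = ["comment", "complaint", "comment", "request", "bug"]
y_pred = ["comment", "comment", "comment", "request", "bug"]

# Per-class F1, in the order given by `classes`.
print(f1_score(y_true, y_pred, labels=classes, average=None))
# Weighted F1: per-class scores averaged with the class supports.
print(f1_score(y_true, y_pred, labels=classes, average="weighted"))
\end{verbatim}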
\begin{table} \resizebox{\columnwidth}{!}{ \begin{tabular}{lrrrrr} \toprule & \textit{comm} & \textit{compl} & \textit{req} & \textit{ml} & \textit{bug} \\ \midrule \textsc{En} (\textsc{Monoling}) & \textbf{82.3} & 64.4 & \textbf{60.0} & 27.5 & 0 \\ \textsc{En} (\textsc{Multiling}) & 82.0 & \textbf{65.0} & 42.1 & \textbf{28.6} & 0 \\ \midrule \textsc{Es} (\textsc{Monoling}) & 93.3 & 75.2 & \textbf{72.7} & 0 & 0 \\ \textsc{Es} (\textsc{Multiling}) & \textbf{93.5} & \textbf{76.2} & 66.6 & 0 & \textbf{66.6} \\ \textsc{Es} (\textsc{Translate}) & 92.6 & 67.2 & 11.8 & 0 & 0\\ \midrule \textsc{Fr} (\textsc{Monoling}) & \textbf{86.4} & \textbf{65.6} & 14.3 & \textbf{47.6} & \textbf{54.5} \\ \textsc{Fr} (\textsc{Multiling}) & 85.5 & 61.5 & \textbf{16.6} & 41.2 & 50.0 \\ \textsc{Fr} (\textsc{Translate}) & 82.9 & 58.9 & 16.6 & 34.5 & 0\\ \midrule \textsc{Jp} (\textsc{Monoling}) & 85.7 & \textbf{67.8} & 55.8 & 0 & \textbf{50.0} \\ \textsc{Jp} (\textsc{Multiling}) & \textbf{87.0} & \textbf{67.8} & \textbf{65.2} & 0 & \textbf{50.0} \\ \textsc{Jp} (\textsc{Translate}) & 76.5 & 61.3 & 7.2 & 0 & 0 \\ \bottomrule \end{tabular} } \caption{Test set results (F1) per category (\textit{comment (comm)}, \textit{complaint (compl)}, \textit{request (req)}, \textit{meaningless (ml)} and \textit{bug}), official evaluation.} \label{tbl:resultstest} \end{table} \section{Conclusions} We presented a simple model that can effectively handle multiple languages in a single system. The model is based on a traditional SVM, character n-grams and multilingual embeddings. The model ranked first in the shared task of customer feedback analysis, outperforming other approaches that mostly relied on deep neural networks. There are two take-away messages from this work: 1) multilingual embeddings are very promising\footnote{Our study is limited to using a single multilingual embedding method and calls for evaluating alternatives!} to build single multilingual models; and 2) it is important to compare deep learning methods to simple traditional baselines; while deep approaches are undoubtedly very attractive (and fun!), traditional models often turn out to be surprisingly effective. Doing so will add to the literature and help to shed more light on understanding why and when this is the case. \section*{Acknowledgments} I would like to thank the organizers, in particular Chao-Hong Liu, for his quick replies. I also thank Rob van der Goot, H\'{e}ctor Mart\'{i}nez Alonso and Malvina Nissim for valuable comments on earlier drafts of this paper.
\section{Acknowledgement} We thank the anonymous reviewers for their constructive comments. We are grateful to Felice Frankel for her professional advice on aesthetics, and Angjoo Kanazawa for her help in running~\cite{zuffi20163d} on the \emph{Horse} sequence. We thank Kevin Burg for allowing us to use the ballet clips from~\cite{ballet}. We thank Katie Bouman, Vickie Ye, and Zhoutong Zhang for their help with the supplementary video. This work is partially supported by Shell Research, DARPA MediFor, and a Facebook Fellowship. \section{Discussion \& Conclusion} We presented MoSculp\xspace, a system that automates the creation of \summarynames, and allows users to interactively explore the visualization and customize various rendering settings. Our system makes motion sculpting accessible to novice users, and requires only a video as input. \input{figText/limit-fig} As for limitations, our \summaryname may look cluttered when the motion is repetitive and spans only a small region (\fig{fig:limit}a). In addition, we rely on high-quality pose estimates, which are sometimes unattainable due to the inherent ambiguity of the 2D-to-3D inverse problem. \fig{fig:limit}b shows such an example\change{: when the person is captured in side profile throughout the video (\fig{fig:limit}c), there are multiple plausible arm poses that satisfy the 2D projection equally well. The red-circled region in \fig{fig:limit}b shows one plausible, but wrong arm pose.} Nevertheless, when our algorithm renders the imperfect sculpture back into the video from its original viewpoint, these errors are no longer noticeable (\fig{fig:limit}c). We demonstrated our motion sculpting system on diverse videos, revealing complex human motions in sports and dancing. We also demonstrated through user studies that our visualizations facilitate users' understanding of 3D motion. We see two directions opened by this work. The first is in developing artistic tools that allow users to more extensively customize the aesthetics of their renderings, while preserving the interpretability. The second is in rendering \summarynames in other media. In \fig{fig:teaser}d, we showed one example of this---a 3D printed sculpture, and future work could move towards customizing and automating this process. \section{Implementation Details} \label{sec:impl_details} We rendered our scenes using Cycles in Blender. \change{It took a Stratasys J750 printer around 10 hours to 3D print the sculpture shown in \fig{fig:teaser}d ($\sim$30cm long).} To render realistic floor reflections in synthetic scenes, we coarsely textured the 3D human with simple ray casting\change{: we cast a ray from each vertex on the human mesh to the estimated camera, and colored that vertex with the RGB value of the intersected pixel. Intuitively, this approach mirrors the texture of the visible parts to obtain texture for the occluded parts. The original texture for sculptures (such as the sculpture texture in \fig{fig:walking}) was computed similarly, except that when the ray intersection fell outside the (eroded) human mask, we took the color of the intersection's nearest neighbor inside the mask to avoid colors being taken from the background. As an optional post-processing step, we smoothed the vertex colors over each vertex's neighbors.} Other sculpture texture maps (such as wood) were downloaded from \url{poliigon.com}.
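As a rough illustration of this vertex-coloring step, the following Python sketch projects mesh vertices into the frame with a pinhole camera and snaps out-of-mask projections to the nearest in-mask pixel. All array names and the camera convention here are illustrative assumptions, not the exact Blender ray-casting setup we used:

\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def color_vertices(verts, K, R, t, image, mask):
    # Project world-space vertices into the image (pinhole model).
    cam = R @ verts.T + t[:, None]          # (3, N) camera coordinates
    uv = (K @ cam).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    h, w = mask.shape
    px = np.clip(np.round(uv).astype(int), 0, [w - 1, h - 1])
    x, y = px[:, 0], px[:, 1]
    # For projections outside the (eroded) mask, copy the color of the
    # nearest pixel inside the mask instead of a background pixel.
    _, (iy, ix) = distance_transform_edt(~mask, return_indices=True)
    inside = mask[y, x]
    yy = np.where(inside, y, iy[y, x])
    xx = np.where(inside, x, ix[y, x])
    return image[yy, xx]                    # (N, 3) vertex colors
\end{verbatim}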
To render a \summaryname together with the human figures, we first rendered the 3D sculpture's RGB and depth images as well as the human's depth maps using the recovered camera. We then composited together all the RGB images by selecting, for each pixel, the value that is the closest to the camera, as mentioned before. Due to the noisy nature of the human's depth maps, we used a simple Markov Random Field (MRF) with Potts potentials to enforce smoothness during this composition. For comparisons with shape-time photography~\cite{freeman:shapetime}, which requires RGB and depth image pairs as input, we fed our refined depth maps to the algorithm in addition to the original video. Furthermore, shape-time photography was not originally designed to work on high-frame-rate videos; directly applying it to such videos leads to a considerable number of artifacts. We therefore adapted the algorithm to normal videos by augmenting it with the texture smoothness prior in~\cite{pritch:shiftmap} and Potts smoothness terms. \section{Introduction} \label{sec:intro} Complicated actions, such as swinging a tennis racket or dancing ballet, can be difficult to convey to a viewer through a static photo. To address this problem, researchers and artists have developed a number of motion visualization techniques, such as chronophotography, stroboscopic photography, and multi-exposure photography~\cite{muybridge1985horses,braun1992picturing}. However, since such methods operate entirely in 2D, they are unable to convey the motion's underlying 3D structure. Consequently, they tend to generate cluttered results when parts of the object are occluded (\fig{fig:comparison}). Moreover, they often require special capturing procedures, environments (such as a clean, black background), or lighting equipment. \input{figText/teaser.tex} In this paper, we present \emph{MoSculp\xspace}, an end-to-end system that takes a video as input and produces a \emph{\summaryname}: a visualization of the spatiotemporal structure carved by a body as it moves through space. {Motion sculptures}\xspace aid in visualizing the trajectory of the human body, and reveal how its 3D shape evolves over time. Once computed, \summarynames can be inserted back into the source video (\fig{fig:teaser}b), rendered in a synthesized scene (\fig{fig:teaser}c), or physically 3D printed (\fig{fig:teaser}d). We develop an interactive interface that allows users to: (i) explore \summarynames in 3D, \emph{i.e.}, navigate around them and view them from alternative viewpoints, thus revealing information about the motion that is inaccessible from the original viewpoint, and (ii) customize various rendering settings, including lighting, sculpture material, body parts to render, scene background, \emph{etc.}\footnote{\change{Demo available at \url{http://mosculp.csail.mit.edu}}} These tools provide flexibility for users to express their artistic designs, and further facilitate their understanding of human shape and motion. Our main contribution is devising the first end-to-end system for creating \summarynames from videos, thus making them accessible for novice users. A core component of our system is a method for estimating the human's pose and body shape over time.
Our 3D estimation algorithm, built upon the state of the art, has been designed to recover the 3D information required for constructing \summarynames (\emph{e.g.}, by modeling clothing), and to support simple user corrections. The \summaryname is then inferred from the union of the 3D shape estimations over time. To insert the sculpture back into the original video, we develop a 3D-aware, image-based rendering approach that preserves depth ordering. Our system achieves high-quality, artifact-free composites for a variety of human actions, such as ballet dancing, fencing, and other athletic actions. \input{figText/comparison-fig.tex} \section{Algorithm for Generating Motion Sculptures} The algorithm behind MoSculp\xspace consists of several steps illustrated in \fig{fig:pipeline}. In short, our algorithm (a) first detects the human body and its 2D pose (represented by a set of keypoints) in each frame, (b) recovers a 3D body model that represents the person's overall shape and its 3D poses across the frames, in a temporally coherent manner, (c) extracts a 3D skeleton from the 3D model and sweeps it through the 3D space to create an initial \summaryname, and finally, (d-f) renders the sculpture in different styles, together with the human, while preserving the depth ordering. \subsection{2D Keypoint Detection} The 2D body pose in each frame, represented by a set of 2D keypoints, is estimated using OpenPose~\cite{cao2017realtime}. Each keypoint is associated with a joint label (\emph{e.g.}, left wrist, right elbow) and its 2D position in the frame. While keypoints detected in a single image are typically accurate, inherent ambiguity in the motion of a human body sometimes leads to temporal inconsistency, \emph{e.g.}, the left and right shoulders flipping between adjacent frames. We address this problem by imposing temporal coherency between detections in adjacent frames. Specifically, we use a Hidden Markov Model (HMM), where the per-frame detection results are the observations. We compute the maximum marginal likelihood estimate of each joint's location at a specific timestamp, while imposing temporal smoothness (see the supplementary material for more details). We develop a simple interface (\fig{fig:userinterface}a), where the user can browse through the detection results (overlaid on the video frames) and indicate whether the detected joints are all correct in a given frame. The frames labeled correct are then used as constraints in another HMM inference procedure. Three or four labels are usually sufficient to correct all the errors in a video of 100 frames. \subsection{From 2D Keypoints to 3D Body Over Time} \label{sec:3dpose} Given the detected 2D keypoints, our goal now is to fit a 3D model of the body in each frame. We want temporally consistent configurations of the 3D body model that best match its 2D poses (given by keypoints). That is, we opt to minimize the re-projection error, \emph{i.e.}, the distance between each 2D keypoint and the 3D-to-2D projection of the mesh vertices that correspond to the same body part. We use the SMPL~\cite{loper2015smpl} body model that consists of a canonical mesh and a set of parameters that control the body shape, pose, and position. Specifically, the moving body is represented by shape parameters $\beta$, per-frame pose $\theta^t$, and global translation ${T^t}$.
We estimate these parameters for each of the $N$ frames by minimizing the following objective function: \begin{align} \label{eqn:core} \mathcal{L}&\left(\{T^t\}, \{\theta^t\}, \beta\right) = \sum_{t=1}^N\left(\mathcal{L}_\text{data}\left(T^t, \theta^t, \beta\right) + \alpha_1\mathcal{L}_\text{prior}\left(\theta^t, \beta\right)\right)\nonumber\\ &+ \alpha_2\sum_{t=1}^{N-1}\mathcal{L}_\text{temporal}\left(T^t, T^{t+1}, \theta^t, \theta^{t+1}, \beta\right). \end{align} The data term $\mathcal{L}_\text{data}$ encourages the projected 3D keypoints in each frame to be close to the detected 2D keypoints. $\mathcal{L}_\text{prior}$ is a per-frame prior defined in~\cite{bogo2016keep}, which imposes priors on the human pose as well as joint bending, and additionally penalizes mesh interpenetration. Finally, $\mathcal{L}_\text{temporal}$ encourages the reconstruction to be smooth by penalizing change in the human's global translations and local vertex locations. $\alpha_i$ are hand-chosen constant weights that maintain the relative balance between the terms. This formulation can be seen as an extension of SMPLify~\cite{bogo2016keep}, a single-image 3D human pose and shape estimation algorithm, to videos. The optimization is solved using~\cite{loper2014opendr}. See the supplementary material for the exact term definitions and implementation details. \subsection{Generating the Sculpture} With a collection of 3D body shapes (\fig{fig:sculp-formation}a), we create a space-time sweep by extracting the reconstructed person's skeleton from the 3D model in each frame (marked red on the shapes in \fig{fig:sculp-formation}b) and connecting these skeletons across all frames (\fig{fig:sculp-formation}c). This space-time sweep forms our initial \summaryname. \input{figText/sculp-formation} \input{figText/warping.tex} \section{Refining and Rendering Motion Sculptures} \label{sec:embed-sculp} In order to achieve artifact-free and vivid renderings, we still have several remaining issues to resolve. First, a generic 3D body model (such as the one that we use) cannot accurately capture an individual's actual body shape. In other words, it lacks important structural details, such as fine facial structure, hair, and clothes. Second, our reconstruction only estimates the geometry, but not the texture. Texture mapping from 2D to 3D under occlusion itself is a challenging task, even more so when the 3D model does not cover certain parts of the body. \fig{fig:two-worlds}a illustrates these challenges: full 3D rendering lacks structural details and results in noticeable artifacts. Our approach is to insert the 3D \summaryname back into the original 2D video, rather than mapping the 2D contents from the video to the 3D scene. This allows us to preserve the richness of information readily available in the input video (\fig{fig:two-worlds}c) without modeling fine-scale (and possibly idiosyncratic) aspects of the 3D shape. \subsection{Depth-Aware Composite of 3D Sculpture and 2D Video} As can be seen in \fig{fig:two-worlds}b, naively superimposing the rendered 3D sculpture onto the video results in a cluttered visualization that completely disregards the 3D spatial relationships between the sculpture and the object. Here, the person's head is completely covered by the sculpture, making shape and motion very hard to interpret. We address this issue and produce depth-preserving composites such as the one in \fig{fig:two-worlds}c.
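At its core, the composite is a per-pixel z-buffer test between the rendered sculpture layer and the person layer. A minimal NumPy sketch of this selection step follows; the array names are illustrative, and the MRF smoothing mentioned in the implementation details is omitted:

\begin{verbatim}
import numpy as np

def composite(person_rgb, person_depth, sculpt_rgb, sculpt_depth):
    # Keep, at every pixel, whichever layer is closer to the camera.
    # person_*/sculpt_*: (H, W, 3) colors and (H, W) depths rendered
    # from the same camera; uncovered pixels should carry depth np.inf.
    person_nearer = person_depth < sculpt_depth
    return np.where(person_nearer[..., None], person_rgb, sculpt_rgb)
\end{verbatim}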
\input{figText/two-worlds} To accomplish this, we estimate a depth map of the person in each video frame. For each frame and each pixel, we then determine if the person is closer to or farther away from the camera than the sculpture by comparing the sculpture's and person's depth values at that pixel (the sculpture depth map is automatically given by its 3D model). We then render at each pixel what is closer to the camera, giving us the result shown in \fig{fig:two-worlds}c. \subsection{Refinement of Depth and Sculpture} While the estimated sculpture is automatically associated with a depth map, this depth map rarely aligns perfectly with the human silhouette. Furthermore, we still need to infer the human's depth map in each frame for depth ordering. As can be seen in \fig{fig:alignment}c, the estimated 3D body model provides only a rough and partial estimation of the human's depth due to misalignment and missing 3D contents (\emph{e.g.}, the skirt or hair). A rendering produced with these initial depth maps leads to visual artifacts, such as wrong depth ordering and gaps between the sculpture and the human (\fig{fig:alignment}a). To eliminate such artifacts, we extract foreground masks of the human across all frames (using Mask R-CNN~\cite{he2017mask} followed by $k$-NN matting~\cite{Chen:2012:KM}), and refine the human's initial depth maps as well as the sculpture as follows. For refining the object's depth, we compute dense matching, \emph{i.e.}, optical flow~\cite{liu:opticalflow}, between the 2D foreground mask and the projected 3D silhouette. We then propagate the initial depth values (provided by the estimated 3D body model) to the foreground mask via warping with optical flow. If a pixel has no depth after warping, we copy the depth of its nearest neighbor pixel that has depth. This approach allows us to approximate a complete depth map of the human. As shown in \fig{fig:alignment}c, the refined depth map has values for the ballerina's hair and skirt, allowing them to emerge from the sculpture (compared with the hair in \fig{fig:alignment}a). For refining the sculpture, recall that a \summaryname is formed by a collection of surface skeletons. We use the same flow field as above to warp the image coordinates of the surface skeleton in each frame. Now that we have determined the skeletons' new 2D locations, we edit the \summaryname in 3D accordingly\footnote{We back-project the 2D-warped surface skeletons to 3D, assuming the same depth as before editing. Essentially, we are modifying the 3D sculpture in only the $x$- and $y$-axes. To compensate for some minor jittering introduced, we then smooth each dimension with a Gaussian kernel.}. After this step, the boundary of the sculpture, when projected to 2D, aligns well with the 2D human mask. \section{Extensions} We extend our model to handle camera motion and generate non-human \summarynames. \subsection{Handling Camera Motion} As an additional feature, we extend our algorithm to also handle camera motion. One approach for doing so is to stabilize the background in a pre-processing step, \emph{e.g.}, by registering each frame to the panoramic background~\cite{brown2007automatic}, and then applying our system to the stabilized video. This works well when the background is mostly planar.
Example results obtained with this approach are shown for the \emph{Olympic} and \emph{Dunking} videos, in \fig{fig:teaser} and \fig{fig:movcam}a, respectively. However, for more complex scenes containing large variations in depth, this approach may result in artifacts due to motion parallax. Thus, for general cases, we use off-the-shelf Structure-from-Motion (SfM) software~\cite{schoenberger2016sfm} to estimate the camera position at each frame and then compensate for it. More specifically, we estimate the human's position relative to the moving camera, and then offset that position by the camera position given by SfM. An example of this approach is \emph{Run, Forrest, Run!}, shown in \fig{fig:movcam}b. As can be seen, our method works well on this challenging video, producing a \summaryname spanning a long distance (\fig{fig:movcam}b has been truncated due to space limits, so the actual sculpture is even longer; see the supplementary video). \subsection{Non-Human {Motion Sculptures}\xspace} \label{sec:numhuman} While we have focused on visualizing human motion, our system can also be applied to other objects, as long as they can be reliably represented by a parametric 3D model---an idea that we explore with two examples. \fig{fig:non_human}a shows the \summaryname generated for a running horse, where we visualize its two back legs. To do so, we first estimate the horse's poses across all frames with the per-frame method by Zuffi~\emph{et al.}~\cite{zuffi20163d}, smooth the estimated poses and translation parameters, and finally apply our method. \input{figText/non_human.tex} In~\fig{fig:non_human}b, we visualize how a basketball interacts in space and time with the person dribbling it. We track the ball in 2D (parameterized by its location and radius), and assign the hand's depth to the ball whenever they are in contact (depth values between two contact points are linearly interpolated). With these depth maps, camera parameters, and ball silhouettes, we insert a 3D ball into the scene. \section{Related Work} We briefly review related work in the areas of artistic rendering, motion effects in images, human pose estimation, video editing and summarization methods, and physical visualizations. \myparagraph{Automating Artistic Renderings.} A range of tools have been developed to aid users in creating artist-inspired motion visualizations~\cite{cutting2002representing, authoringhumanmovements, schmid2010programmable, bouvier2007motion}. DemoDraw~\cite{authoringhumanmovements} allows users to generate drawing animations by physically acting out an action, motion capturing them, and then applying different stylizing filters. Our work continues along this line and is inspired by artistic projects that visualize 3D shape and motion~\cite{jldesign2013,peterJ,gever2014,gremmler2016, chronofab}.
However, these renderings are produced by professional artists and require special recording procedures or advanced computer graphics skills. In this paper, we opt to lower the barrier to entry and make the production of \summarynames less costly and more accessible for novice users. The most closely related work to ours in this category is ChronoFab~\cite{chronofab}, a system for creating \summarynames from 3D animations. However, a key difference is that ChronoFab requires a full 3D model of the object and its motion as input, which limits its practical use, while our system directly takes a video as input and estimates the 3D shape and motion as part of the pipeline. \myparagraph{Motion Effects in Static Images.} Illustrating motion in a single image dates back to stroboscopic photography \cite{muybridge1985horses} and classical methods that design and add motion effects to an image (e.g., speedlines~\cite{masuch1999r}, motion tails~\cite{bennett2007computational, teramoto2010interactive}, and motion blur). Cutting~\cite{cutting2002representing} presented an interesting psychological perspective on, and evaluation of, the efficacy of different motion visualizations. In the context of non-photorealistic rendering, various motion effects have been designed for animations and cartoons~\cite{lake2000stylized, kawagishi2003cartoon}. Schmid et al.~\cite{schmid2010programmable} designed programmable motion effects as part of a rendering pipeline to produce stylized blurring and stroboscopic images. Similar effects have also been produced by Baudisch et al.~\cite{baudisch2006phosphor} for creating animated icon movements. In comparison, our system does not require a 3D model of the object, but rather estimates it from a set of 2D images. In addition, most of these motion effects do not explicitly model the 3D aspects of motion and shape, which are the essence of \summarynames. \myparagraph{Video Editing and Summarization.} {Motion sculptures}\xspace are related to video \change{editing techniques, such as MovieReshape~\cite{Jain:2010:MovieReshape}, which manipulates certain properties of the human body in a video,} and summarization techniques, such as image montage~\cite{agarwala2004interactive,sunkavalli2012video} that re-renders video contents in a more concise view, typically by stitching together foreground objects captured at different timestamps. As in stroboscopic photography, such methods do not preserve the actual depth ordering among objects, and thus cannot illustrate 3D information about shape and motion. Another related work is~\cite{blank2005actions}, which represents human actions as space-time shapes to improve action classification and clustering. However, their space-time shapes are 2D human silhouettes and thus do not convey 3D information. Video Summagator~\cite{nguyen2012video} visualizes a video as a space-time cube using volume rendering techniques. However, this approach does not model self-occlusions, which leads to clutter and visual artifacts. Depth-based summarization methods overcome some of these limitations using geometric information provided by depth sensors. Shape-time photography~\cite{freeman:shapetime}, for example, conveys occlusion relationships by showing, at each pixel, the color of the surface that is the closest to the camera over the entire video sequence.
More recently, Klose et al.\ introduced a video processing method that uses per-pixel depth layering to create action shot summaries~\cite{klose2015sampling}. While these methods are useful for presenting 3D relationships in a small number of sparsely sampled images, such as where the object is throughout the video, they are not well suited for visualizing continuous motion. Moreover, these methods are based on depth maps, and thus provide only a ``2.5D'' reconstruction that cannot be easily viewed from multiple viewpoints as in our case. \input{figText/userinterface.tex} \myparagraph{Human Pose Estimation.} \change{{Motion sculpture}\xspace creation involves estimating the 3D human pose and shape over time -- a fundamental problem that has been extensively studied. Various methods have been proposed to estimate 3D pose from a single image \cite{bogo2016keep,kanazawa2017end,pavlakos2017coarse,pavlakos2018learning,pavlakos2018ordinal,chen20173d,tome2017lifting}, or from a video \cite{howe2000bayesian,huang2017towards,zhou2016sparseness,mehta2017vnect,agarwal2006recovering}. However, unlike our approach, these methods are not designed for the specifics of motion visualization.} \myparagraph{Physical Visualizations.} Recent research has shown great progress in physical visualizations and demonstrated the benefit of allowing users to efficiently access information along all dimensions~\cite{physicalviz, tastybeats, proxyprint}. MakerVis~\cite{makervis} is a tool that allows users to quickly convert their digital information into physical visualizations. ChronoFab~\cite{chronofab}, in contrast, addresses some of the challenges in rendering digital data physical, e.g., connecting parts that would otherwise float in midair. Our \summarynames can be physically printed as well. However, our focus is on rendering and seamlessly compositing them into the source videos, rather than optimizing the procedure for physically printing them. \section{Technical Evaluation} \label{sec:result} \label{sec:eval-pipeline-comp} We conducted experiments to evaluate our two key technical components: (i) 3D body estimation over time, and (ii) flow-based refinement of depth and sculpture. \subsection{Estimating Geometry Over Time} In our first evaluation, we compared our approach, which estimates poses by considering changes across multiple frames, against SMPLify~\cite{bogo2016keep}, which estimates the 3D body model in each frame independently. \fig{fig:pose}a shows the output of SMPLify, and \fig{fig:pose}b shows our results. The errors in the per-frame estimates and the lack of temporal consistency in \fig{fig:pose}a resulted in a jittery, disjoint sculpture. In contrast, our approach solved for a single set of shape parameters and smoothly varying pose parameters for the entire sequence, and hence produced significantly better results. To quantitatively demonstrate the effects of our approach on the estimated poses, we applied Principal Component Analysis (PCA) to the 72-D pose vectors, and visualized the pose evolution in 2D in~\fig{fig:pose}. In SMPLify (\fig{fig:pose}a), there is a significant discrepancy between the poses in frames 25 and 26: the human body abruptly swings to the right side. In contrast, with our approach, we obtained a smooth evolution of poses (\fig{fig:pose}b).
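As a concrete sketch of this analysis (hypothetical code, not the code used in the paper; we assume the per-frame pose parameters are stacked in a NumPy array of shape $F\times 72$), the 2D pose trajectories and the frame-to-frame jumps can be computed as follows:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def pose_trajectory_2d(poses):
    """Project per-frame 72-D pose vectors, shape (F, 72),
    to 2-D with PCA for plotting the pose evolution."""
    return PCA(n_components=2).fit_transform(poses)

def largest_frame_jump(poses):
    """Largest L2 distance between consecutive pose vectors;
    large values flag jittery, temporally inconsistent fits."""
    return np.max(np.linalg.norm(np.diff(poses, axis=0), axis=1))
\end{verbatim}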
\subsection{Flow-Based Refinement} As discussed earlier, because the 3D shape and pose are encoded using low-dimensional basis vectors, perfect alignment between the projected shape and the 2D image is unattainable. These misalignments show up as visible gaps in the final renderings. However, our flow-based refinement scheme can significantly reduce such artifacts (\fig{fig:alignment}b). To quantify the contribution of the refinement step, we computed Intersection-over-Union (IoU) between the 2D human silhouette and the projected silhouette of the estimated 3D body. \tbl{tbl:iou} shows the average IoU for all our sequences, before and after flow refinement. As expected, the refinement step significantly improves the 3D-2D alignment, increasing the average IoU from 0.61 to 0.94. After hole filling with the nearest neighbor, the average IoU further increases to 0.96. \input{figText/iou} \section{User Studies} \label{sec:userstudy} We conducted several user studies to \change{compare how well motion and shape are perceived from different visualizations, and evaluate the stylistic settings provided by our interface.} \subsection{{Motion Sculpture}\xspace \emph{vs.\xspace} Stroboscopic \emph{vs.\xspace} Shape-Time} \input{figText/comparison-tbl.tex} We asked the participants to rate how well motion information is conveyed in \summarynames, stroboscopic photography, and shape-time photography~\cite{freeman:shapetime} for five clips. An example is shown in \fig{fig:comparison}, and the full set of images used in our user studies is included in the supplementary material. \change{In the first test, we presented the raters with two different visualizations (ours \vs a baseline), and asked ``which visualization provides the clearest information about motion?''. We collected responses from 51 participants with no conflicting interests for each pairwise comparison. 77\% of the responses preferred our method to shape-time photography, and 67\% preferred ours to stroboscopic photography. In the second study, we compared how easily users can perceive \emph{particular} information about shape and motion from different visualizations. To do so, we asked the following clip-dependent} questions: ``which visualization helps more in seeing: \begin{itemize} \vspace{-5pt}\item the arm moving in front of the body (\textit{Ballet-1}), \vspace{-5pt}\item the wavy and intersecting arm movement (\textit{Ballet-2}), \vspace{-5pt}\item the wavy arm movement (\textit{Jogging} and \textit{Olympics}), or \vspace{-5pt}\item the person walking in a U-shape (\textit{U-Walking}).'' \end{itemize} \change{We collected 36 responses for each sequence. As shown in \tbl{tbl:comparison}, on average, the users preferred our visualization over the alternatives 75\% of the time. The questions above are intended to focus on the salient 3D characteristics of motion in each clip, and the results support that our visualization conveys them better than the alternatives. For example, in \textit{Ballet-1} (\fig{fig:comparison} top), our \summaryname visualizes the out-of-plane sinusoidal curve swept out by the ballerina's arm, whereas both shape-time and stroboscopic photography show only the in-plane motion.
Furthermore, our \summaryname shows the interactions between the left and right arms.} \input{figText/human-studies} \input{figText/smoothness} \subsection{Effects of Lighting and Floor Reflections} To avoid exposing too many options to the user, we conducted a user study to decide (i) whether floor reflections are needed in our synthetic-background rendering, and (ii) whether localized or global lighting should be used. The raters were asked which rendering is more visually appealing: with \vs without reflections (Ours \vs A), and using localized \vs ambient lighting (Ours \vs B). \fig{fig:human-studies} shows the results collected from 20-35 responses for each sequence on Amazon Mechanical Turk, after filtering out workers who failed our consistency check. Most of the raters preferred our rendering with reflections plus shadows (82\%) and localized lighting (84\%) to the other options. We thus use these as the standard settings in our user interface. \section{System Walkthrough} To generate a \summaryname, the user starts by loading a video into the system, after which MoSculp\xspace detects the 2D keypoints and overlays them on the input frames (\fig{fig:userinterface}a). The user then browses the detection results and confirms, on a few ($\sim$3-4) randomly selected frames, that the keypoints are correct by clicking the ``Left/Right Correct'' button. After labeling, the user hits ``Done Annotating,'' which triggers MoSculp\xspace to correct temporally inconsistent detections, with these labeled frames serving as anchors. MoSculp\xspace then generates the \summaryname in an offline process that includes estimating the human's shape and pose in all the frames and rendering the sculpture. After processing, the generated sculpture is loaded into MoSculp\xspace, and the user can virtually explore it in 3D (\fig{fig:userinterface}b). This often reveals information about shape and motion that is not available from the original camera viewpoint, and facilitates the understanding of how different body parts interact over time. Finally, the rendered \summaryname is displayed in a new window (\fig{fig:userinterface}c), where the user can customize the design by controlling the following rendering settings. \begin{itemize} \vspace{-5pt}\item \myparagraph{Scene.} The user chooses to render the sculpture in a synthesized scene or embed it back into the original video by toggling the ``Artistic Background'' button in \fig{fig:userinterface}c. For synthetic scenes (i.e., ``Artistic Background'' on), we use a glossy floor and a simple wall lightly textured for realism. To help the viewer better perceive shape, we render shadows cast by the person and sculpture on the wall as well as their reflections on the floor (as can be seen in \fig{fig:teaser}c). \vspace{-5pt}\item \myparagraph{Lighting.} Our set of lights includes two area lights on the left and right sides of the scene as well as a point light on the top. The user may choose any combination of these lights (see the ``Lighting'' menu in \fig{fig:userinterface}c). \vspace{-5pt}\item \change{\myparagraph{Body Parts.} The user decides which parts of the body form the \summaryname. For instance, one may choose to render only the arms to clearly perceive the arm movement, as in \fig{fig:comparison}a.
The body parts that we consider are listed under the ``Body Parts'' menu in \fig{fig:userinterface}c.} \vspace{-5pt}\item \change{\myparagraph{Materials.} Users can control the texture of the sculpture by choosing one of the four different materials: leather, tarp, wood, and original texture (i.e., colors taken from the source video by simple ray casting). To better differentiate sculptures formed by different body parts, one can specify a different material for each body part (see the dynamically updating ``Part Material'' menu in \fig{fig:userinterface}c).} \vspace{-5pt}\item \myparagraph{Transparency.} A slider controls the transparency of the \summaryname, allowing the viewer to see through the sculpture and better comprehend the complex space-time occlusion. \vspace{-5pt}\item \myparagraph{Human Figures.} In addition to the \summaryname, MoSculp\xspace can also include a number of human images (similar to sparse stroboscopic photos), which allows the viewer to associate sculptures with the corresponding body parts that generated them. A density slider controls how many of these human images, sampled uniformly, get inserted. \end{itemize} These tools grant users the ability to customize their visualization and select the rendering settings that best convey the space-time information captured by the \summaryname at hand. \subsection{Example {Motion Sculptures}\xspace} We tested our system on a wide range of videos of complex actions including ballet, tennis, running, and fencing. We collected most of the videos from the Web (YouTube, Vimeo, and Adobe Stock), and captured two videos ourselves using a Canon 6D (\emph{Jumping} and \emph{U-Walking}). For each example, we embed the \summaryname back into the source video and into a synthetic background. We also render the sculpture from novel viewpoints, which often reveals information imperceptible from the captured viewpoint. In \emph{Jumping} (\fig{fig:jumping}), for example, the novel-view rendering (\fig{fig:jumping}b) shows the slide-like structure carved out by the arms during the jump. \input{figText/jumping.tex} An even more complex action, cartwheel, is presented in \fig{fig:cartwheel}. For this example, we make use of the ``Body Parts'' options in our user interface, and decide to visualize only the legs to avoid clutter. Viewing the sculpture from a top view (\fig{fig:cartwheel}b) reveals that the girl's legs cross and switch their depth ordering---a complex interaction that is hard to comprehend even by repeatedly playing the original video. \input{figText/cartwheel.tex} In \emph{U-Walking} (\fig{fig:walking}), the \summaryname depicts the person's motion in depth; this can also be perceived from the original viewpoint (\fig{fig:walking}a), thanks to the shading and lighting effects that we select from the different rendering options. \input{figText/walking.tex} In \emph{Tennis} (\fig{fig:comparison} bottom), the sculpture highlights the bending of the arm during the serve, which is not easily visible from 2D or 2.5D visualizations (also shown in \fig{fig:comparison} bottom). Similarly, in \emph{Ballet-2}~\cite{ballet} (\fig{fig:dancing}), a sinusoidal 3D surface emerges from the motion of the ballerina's right arm, again absent in the 2D or 2.5D visualizations. \input{figText/dancing.tex}
\section{Introduction} A holomorphic quadratic differential on a Riemann surface defines a conformal metric that is flat with conical singularities, together with a pair of \textit{horizontal} and \textit{vertical} measured foliations, and this singular-flat geometry is intimately related to quasiconformal mappings and the geometry of Teichm\"{u}ller space $\mathcal{T}_g$ in the Teichm\"{u}ller metric. In this paper we consider certain \textit{infinite area} singular flat surfaces corresponding to (non-integrable) meromorphic quadratic differentials that arise as geometric limits along Teichm\"{u}ller geodesic rays.\\ A \textit{half-plane surface} is a singular flat surface that is obtained by taking a finite partition of the boundaries of a collection of euclidean half-planes and gluing by an interval-exchange map (see \S4 for examples). We shall only exclude the possibility that the two infinite-length boundary intervals of the same half-plane are identified. Such a surface can be thought of as a Riemann surface with punctures $p_1,p_2,\ldots p_n$ corresponding to the ends of the surface, equipped with a quadratic differential (restricting to $dz^2$ on the half-planes) that is holomorphic away from the punctures. This \textit{half-plane differential} has a pole of order $n_j\geq 4$ at $p_j$ (for $1\leq j\leq n$) and one can associate with it the data of a non-negative real \textit{residue} $a_j$ (this is always zero when $n_j$ is odd) and, in a given choice of local coordinates, a positive real \textit{leading order term} $c_j$, which is the top coefficient in a local series expansion of the differential. In the singular flat metric, a neighborhood of $p_j$ is isometric to a ``planar end'' comprising $n_j-2$ half-planes glued cyclically along their boundaries (see Definition \ref{defn:pend}). The residue $a_j$ then corresponds to the metric holonomy around the puncture and $c_j$ gives the ``scale'' of this planar end relative to the others. We provide precise definitions in \S3.\\ In this article we show that one can get \textit{any} Riemann surface with $n$ punctures by the above construction, and furthermore can arbitrarily prescribe the data of orders, residues and leading order terms: \begin{center}\label{eq:data} $\mathcal{D} = \{ (n_j, a_j, c_j) \mid n_j\in \mathbb{N},\ n_j\geq 4,\ c_j\in \mathbb{R}^{+},\ a_j\in \mathbb{R}_{\geq 0},\ a_j=0 \text{ for } n_j \text{ odd}\}$ \end{center} associated with the set of punctures: \begin{thm}\label{thm:main} Let $\Sigma$ be a Riemann surface with a set $P$ of $n$ marked points and with a choice of local coordinates around each. Then for any data $\mathcal{D}$ as above there is a corresponding half-plane surface $\Sigma_\mathcal{D}$ and a conformal homeomorphism \begin{equation*} g:\Sigma\setminus P \to \Sigma_\mathcal{D} \end{equation*} that is homotopic to the identity map.\\ (The only exception is for the Riemann sphere with one marked point with a pole of order $4$, in which case the residue must equal zero.)
\end{thm} Note that it is not hard to show the existence of \textit{some} meromorphic quadratic differential which has local data given by $\mathcal{D}$ at the poles using the Riemann-Roch theorem, but half-plane differentials are a special subclass that satisfy the \textit{global} requirement of having a ``half-plane structure'' as described.\\ This result can be thought of as a generalization of the following theorem of Strebel (\cite{Streb}): \begin{jthm}[Strebel]\label{thm:strebel} Let $\Sigma$ be a Riemann surface of genus $g$, and $P = \{p_1,p_2,\ldots p_n\}$ be marked points on $\Sigma$ such that $2g-2+n>0$, and $(a_1,a_2,\ldots,a_n)$ a tuple of positive reals. Then there exists a meromorphic quadratic differential $q$ on $\Sigma$ with poles at $P$ of order $2$ and residues $a_1,\ldots a_n$, such that all horizontal leaves (except the critical trajectories) are closed and foliate punctured disks around $P$. \end{jthm} The corresponding singular flat metric for such a ``Strebel differential'' with poles of order two comprises a collection of half-infinite cylinders glued by an interval-exchange on their boundaries, and has a metric spine (sometimes called the ``ribbon graph'' or ``fat graph''). Strebel also showed that the quadratic differential $q$ above (and hence this metric spine) is unique, which yields useful combinatorial descriptions of Teichm\"{u}ller space (see, for example, \cite{HarZag}, \cite{Kont}). However, a corresponding uniqueness statement for Theorem \ref{thm:main} is not known, and conjecturally holds for poles of order $4$ (for a discussion, see \S13.2).\\ The proof of Theorem \ref{thm:main} uses the well-known result of Jenkins and Strebel that associates a holomorphic quadratic differential to a collection of curves on a \textit{compact} Riemann surface. The idea is to consider a compact exhaustion of $\Sigma\setminus P$ and produce a corresponding sequence of half-plane surfaces that has both $\Sigma\setminus P$ and $\Sigma_{\mathcal{D}}$ as its conformal limit. The main technical work is to obtain enough geometric control over this sequence, and to extract these limits by building appropriate sequences of quasiconformal maps. We give a more detailed outline in \S5, and carry out the proof in \S6-12. In \S13 we conclude with some applications and further questions. In particular, the connection of half-plane surfaces with conformal limits of grafting and Teichm\"{u}ller rays will appear in a subsequent paper (\cite{Gup25}).\\ \medskip \textbf{Acknowledgements.} This paper arose from work of the author as a graduate student at Yale, and he wishes to thank his thesis advisor Yair Minsky for his generous help. The author also wishes to thank Michael Wolf for numerous helpful discussions, Kingshook Biswas and Enrico Le Donne for useful conversations, and the support of the Danish National Research Foundation Center of Excellence, Center for Quantum Geometry of Moduli Spaces (QGM), where this work was completed. \section{Background} In this section we review some background on meromorphic quadratic differentials and the geometry they induce on a Riemann surface.\\ Most of this is standard (we refer to \cite{Streb}), but we point the reader to our notion of a ``planar end'' (Definition \ref{defn:pend}), which provides a metric model for an end of a half-plane surface, or more generally for the quadratic differential metric around higher-order poles.
\subsection{Quadratic differential space} A closed genus-$g$ Riemann surface $\Sigma_g$ admits no non-constant holomorphic functions, but carries a finite-dimensional vector space of holomorphic one-forms, or more general holomorphic differentials, which are holomorphic sections of powers of the canonical line-bundle $K_\Sigma$.\\ A \textit{quadratic differential} $q$ on $\Sigma_g$ is a section of $K_\Sigma \otimes K_\Sigma$, a differential of type $(2,0)$ locally of the form $q(z)dz^2$. It is said to be \textit{holomorphic} (or \textit{meromorphic}) when $q(z)$ is holomorphic (or meromorphic).\\ A \textit{zero} or a \textit{pole} of a holomorphic quadratic differential is a point $p$ where in a local chart sending $p$ to $0$ we have $q(z) =z^n\psi(z)$ or $q(z) = \frac{1}{z^n}\psi(z)$ respectively, where $\psi(0)\neq 0$ and the integer $n\geq 1$ is the \textit{order} of the zero or pole. The following is a well-known fact: \begin{lem}\label{lem:gb} If there are $M$ zeroes of orders $n_1,n_2,\ldots n_M$, and $N$ poles of orders $k_1,k_2,\ldots k_N$, then $\sum\limits_{i=1}^M n_i - \sum\limits_{i=1}^N k_i = 4g-4$. \end{lem} Let $Q_{g,k_1,k_2,\ldots k_N}$ be the space of meromorphic quadratic differentials on $\Sigma_g$ with poles of order less than or equal to $k_1,k_2,\ldots k_N$ at a collection of $N$ marked points, and $\widehat{Q}_{g,k_1,k_2,\ldots k_N}$ be the subset of such differentials with poles of orders \textit{exactly} $k_1,k_2,\ldots k_N$. The former is a finite-dimensional vector space, whose dimension can be computed using the Riemann-Roch theorem.\\ In fact the vector space $Q(X)$ of holomorphic quadratic differentials on a closed Riemann surface $X$ of genus $g\geq 2$ has complex dimension exactly $3g-3$ (see for example \cite{FarKra}). \subsection{Quadratic differential metrics} A holomorphic quadratic differential $q\in Q(X)$ defines a conformal metric (also called the $q$-metric) given in local coordinates by $\lvert q(z)\rvert \lvert dz\rvert^2$ which has Gaussian curvature zero wherever $q(z)\neq 0$.\\ At the points where $q(z)=0$ (finitely many by Lemma \ref{lem:gb}) there is a conical singularity of angle $(n+2)\pi$ where $n$ is the order of the zero, and locally the singular flat metric looks like a collection of $n$ rectangles glued around the singularity (see \S7 of \cite{Streb}).\\ \begin{figure}[h]\label{fig:zero} \centering \includegraphics[scale=0.85]{simp-zero.png}\\ \caption{A simple zero $p$ is a $3\pi$ cone-point. } \end{figure} One way to see this is to change to coordinates where $q = d\zeta^2$ by the conformal map \begin{equation}\label{eq:choc} z\mapsto \displaystyle\int\limits_p^z \sqrt{q(z)}dz \end{equation} which gives a branched covering of the $\zeta$-plane when $p$ is a zero of $q$. \subsubsection*{Lengths, area} The \textit{horizontal length} in the $q$-metric of an arc $\gamma$ is defined to be \begin{equation*} \lvert \gamma\rvert_h = Re \displaystyle\int\limits_{\gamma} \sqrt{q(z)}\, \lvert dz\rvert \end{equation*} and the \textit{vertical length} is the corresponding imaginary part.
The total length in the $q$-metric is $\left(\lvert \gamma\rvert_h^2 + \lvert \gamma\rvert_v^2\right)^{1/2}$.\\ The $L^1$-norm of a quadratic differential gives the \textit{area} in the $q$-metric: \begin{equation}\label{eq:area} Area_q (X) = \displaystyle\int\limits_X \lvert q(z)\rvert dzd\bar{z} \end{equation} \subsection{Measured foliations} A holomorphic quadratic differential $q\in Q(X)$ determines a \textit{horizontal foliation} on $X$ which we denote by $\mathcal{F}_h(q)$, obtained by integrating the line field of vectors $\pm v$ where the quadratic differential is real and positive, that is $q(v,v)\geq 0$. Similarly, there is a \textit{vertical foliation} $\mathcal{F}_v(q)$ consisting of integral curves of directions where $q$ is real and negative.\\ These foliations can be thought of as the pullback by the map (\ref{eq:choc}) of the horizontal and vertical lines in the $\zeta$-plane. These foliations are also \textit{measured}: the measure of an arc transverse to $\mathcal{F}_h$ is given by its \textit{vertical} length, and the transverse measure for $\mathcal{F}_v$ is given by horizontal lengths. Such a measure is invariant by isotopy of the arc if it remains transverse with endpoints on leaves.\\ Let $\mathcal{MF}$ be the space of singular foliations on a surface equipped with a transverse measure, up to isotopy and Whitehead equivalence. \begin{jthm}[Hubbard-Masur \cite{HM}]\label{thm:HM} Fix a closed Riemann surface $X$. Then any $\mathcal{F}\in \mathcal{MF}$ is the horizontal foliation of a unique holomorphic quadratic differential on $X$. \end{jthm} Theorem \ref{thm:main} of this paper can be thought of as a step towards a generalization to a \textit{non-compact} version of the above result. In this paper we shall use a special case independently proved in \cite{JS1} and \cite{JS2} (see \cite{Wolf1} for a proof using harmonic maps to metric graphs): \begin{thm}[Jenkins-Strebel]\label{thm:js} Let $\gamma_1,\gamma_2,\ldots \gamma_n$ be disjoint homotopy classes of curves on a Riemann surface $\Sigma$. Then there exists a holomorphic quadratic differential $q$ whose horizontal foliation $\mathcal{F}_h(q)$ consists of closed leaves foliating $n$ cylinders with core curves in those homotopy classes. Moreover, one can prescribe the heights, or equivalently the circumferences, of the $n$ metric cylinders to be any $n$-tuple $(c_1,c_2,\ldots,c_n)$ of positive reals. \end{thm} \begin{figure}[h] \centering \includegraphics[scale=0.55]{multicurve3.png}\\ \caption{In Theorem \ref{thm:js} the surface in the $q$-metric is composed of metric cylinders with core curves in the given homotopy classes of curves. } \end{figure} \subsection*{Critical segments} A \textit{critical segment} is a horizontal arc between two critical points (zeroes or poles) of the quadratic differential, and a critical arc between two zeroes is called a \textit{saddle-connection}. Their number is always finite since by the theorem of Gauss-Bonnet (see also Theorem 14.2.1 in \cite{Streb}) there cannot be more than one saddle-connection between a pair of zeroes. \subsection{Metric structure at poles} Most of the preceding discussion also holds when the quadratic differential $q$ is \textit{meromorphic}, with finitely many poles: namely, one has an associated singular flat $q$-metric, together with horizontal and vertical foliations. \\ A meromorphic quadratic differential with a finite $q$-area can have poles of order at most $1$ (see \cite{Streb}).
At such a pole, there is a conical singularity of angle $\pi$, and the singular flat metric has a ``fold'' (see figure, also \S7 of \cite{Streb}).\\ A pole of higher order $n>1$ is at an infinite distance in the $q$-metric. Any such pole $p$ has an associated \textit{analytic residue} which is defined to be \begin{equation}\label{eq:ares} Res_q(p) = \displaystyle\int\limits_\gamma \sqrt{q} \end{equation} where $\gamma$ is any simple closed curve homotopic into $p$ (this is independent of the choice of $\gamma$ since $q$ is holomorphic away from $p$).\\ \begin{figure}\label{fig:poles} \centering \includegraphics[scale=0.85]{poles.png}\\ \caption{Local picture with horizontal leaves at a pole of order $n$.} \end{figure} For the rest of this article, we shall consider only higher order poles which have a \textit{real} analytic residue (in \cite{Streb} this property is referred to as $q$ having a \textit{vanishing logarithmic term}). This rules out any ``spiralling'' behaviour of the horizontal foliation at the poles. We also have an explicit description of the singular flat metric around the pole $p$ (Theorems \ref{thm:order2} and \ref{thm:streb}) which can be culled from \cite{Streb} (see \S7 of the book for a discussion). \begin{thm}\label{thm:order2} Let $p$ on $\Sigma$ be a pole of order $2$ with a positive real analytic residue $C$. Then there is a neighborhood $U$ of $p$ such that in the $q$-metric $U\setminus p$ is isometric to a half-infinite euclidean cylinder with circumference $2\pi C$. \end{thm} \begin{proof} A neighborhood $U$ of $p$ has a conformal chart to $\mathbb{D}$ taking $p$ to $0$, in which the quadratic differential takes the form \begin{equation*} -\frac{C^2}{z^2} dz^2 \end{equation*} The change of coordinates (\ref{eq:choc}) is given by the logarithm map $\zeta = iC\ln z$, which pulls back the euclidean $\zeta$-plane to a half-infinite cylinder. We leave these verifications to the reader. \end{proof} \subsection{Planar ends} We now introduce the notion of a \textit{planar end} and its \textit{metric residue}. \\ In the following definition we identify the boundary of a euclidean half-plane with $\mathbb{R}$. \begin{defn}[Planar-end]\label{defn:pend} Let $\{H_i\}$ for $1\leq i\leq n$ be a cyclically ordered collection of half-planes with rectangular ``notches'', obtained by deleting, from each, a rectangle with horizontal and vertical sides adjoining the boundary, the deleted boundary segment having end-points $x_i$ and $y_i$, where $x_i<y_i$. A \textit{planar end} is obtained by gluing the interval $[y_i,\infty)$ on $\partial H_i$ with $(-\infty, x_{i+1}]$ on $H_{i+1}$ by an orientation-reversing isometry. Such a surface is homeomorphic to a punctured disk. \end{defn} For an example, see Figure 4. \begin{defn}[Metric residue]\label{defn:metres} Let the half-plane differential $q$ at a pole $p_j$ have a planar end as above (where the number of half-planes $n$ equals $ n_j -2$). Then the residue $a_j$ of $q$ at $p_j$ is defined to be the alternating sum $\sum\limits_{i=1}^n (-1)^{i+1}(y_i - x_i)$. Note that there is an ambiguity of sign because of the cyclic ordering in the alternating sum, and we resolve this by choosing a starting index that ensures a positive sum. \end{defn} \begin{figure}[h] \centering \includegraphics[scale=0.45]{planar-end2.png}\\ \caption{A planar-end obtained from $3$ ``notched'' half-planes (see Definition \ref{defn:pend}). } \end{figure} \begin{thm}\label{thm:streb} Let $p$ on $\Sigma$ be a pole of order $n>2$ with a real analytic residue $C$.
Then there is a neighborhood $U$ of $p$ such that in the $q$-metric $U\setminus p$ is isometric to a planar-end surface with $(n-2)$ half-planes and metric residue equal to $C$. \end{thm} \begin{proof} We provide only the proof for the statement regarding the metric residue, as the proof of the planar-end structure appears in \cite{Streb} (see \S10.4 of the book).\\ Let $P$ be a polygon made from the boundaries of the rectangular ``notches'' of each half-plane of the planar end (see Definition \ref{defn:pend}), oriented counter-clockwise. Then $P$ is a polygon enclosing $p$ contained in $U$, and consists of alternating horizontal and vertical segments, with the horizontal edges having lengths $y_i-x_i$ for $1\leq i\leq n$.\\ We first note that \begin{equation}\label{eq:asum} \pm \int\limits_{P} \sqrt q = \displaystyle\sum\limits_{i=1}^n (-1)^{i+1}(y_i-x_i) \end{equation} This follows from the fact that by a change of coordinates (see (\ref{eq:choc})) the integral over three successive edges (horizontal-vertical-horizontal) equals integrating the form $d\zeta$ on the complex ($\zeta$-)plane over a horizontal edge that goes from right to left in the upper half-plane, followed by one over a vertical edge, followed by one over a horizontal edge that goes from left to right in the lower half-plane. Hence the integral over the horizontal sides picks up the horizontal lengths, but the sign switches between two successive horizontal edges. Meanwhile the integrals over the vertical sides contribute to the imaginary part, but they cancel out since two successive vertical segments in the upper ($\zeta$-)half-plane are of equal length but of opposite orientation.\\ The left hand side of (\ref{eq:asum}) is equal to the real analytic residue $C$ (up to sign) by definition (\ref{eq:ares}), and the right hand side is equal to the metric residue of the planar end as defined in Definition \ref{defn:metres}, and the proof is complete. \end{proof} \begin{table}\label{table:tab} \begin{center} \begin{tabular}{|l|l|} \hline \textbf{Analytic notions} & \textbf{Metric notions} \\ \hline $L^1$-norm & Area \\ \hline At least one non-simple pole & Infinite area \\ \hline Zero of order $n$ & Cone point of angle $(n+2)\pi$ \\ \hline Simple (order $1$) pole & Cone point of angle $\pi$\\ \hline Order $2$ pole & Half-infinite cylinder \\ \hline Order $n>2$ pole & Planar end \\ \hline Analytic residue & Metric residue \\ \hline \end{tabular} \end{center} \caption{A glossary of the correspondence between analytic and metric properties of a meromorphic quadratic differential.} \end{table} \section{Preliminaries} In this section we introduce some of the terminology and observations used in this paper. \subsection{Half-plane differentials.} The following definition was already mentioned in \S1: \begin{defn}[Half-plane surface]\label{defn:hp} Let $\{H_i\}_{1\leq i\leq N}$ be a collection of $N\geq 2$ euclidean half-planes and let $\mathcal{I}$ be a finite partition into sub-intervals of the boundaries of these half-planes. A \textit{half-plane surface} $\Sigma$ is a complete singular flat surface obtained by gluings by (oriented) isometries amongst intervals from $\mathcal{I}$. \end{defn} Such a half-plane surface has a number of planar ends as in Definition \ref{defn:pend}, and is equipped with a meromorphic quadratic differential $q$ (the \textit{half-plane differential}) that restricts to $dz^2$ in the usual coordinates on each half-plane.
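As a toy illustration of the residue carried by such a planar end (Definition \ref{defn:metres}; the numbers here are ours, chosen only for concreteness): for a planar end built from $n=4$ notched half-planes whose deleted boundary segments have lengths $y_i-x_i = 3, 1, 2, 2$, the metric residue is
\begin{equation*}
a \;=\; \left| \, 3 - 1 + 2 - 2 \, \right| \;=\; 2,
\end{equation*}
where the starting index is chosen so that the alternating sum is positive. This planar end corresponds to a pole of order $n+2=6$.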
\subsection{Residue} The residue $a_j$ associated with a puncture (or end) of a half-plane surface has both a metric definition (Definition \ref{defn:metres}) and the following analytic definition (see (\ref{eq:ares})): \begin{defn}[Analytic residue] The residue $a_j$ of the half-plane differential $q$ at a pole $p_j$ is defined to be the absolute value of the integral \begin{equation*} a_j = \int\limits_{\gamma_j} \sqrt{q} \end{equation*} where $\gamma_j$ is a simple closed curve enclosing $p_j$ and contained in a chart where one can define $\pm\sqrt{q}$. This gives a positive real number (see Theorem \ref{thm:streb}) and in particular, equals the metric residue of the corresponding planar end. \end{defn} \subsection{Planar ends and truncations} A planar end (Definition \ref{defn:pend}) can be thought of as a neighborhood of $\infty$ on $\mathbb{C}$ in the metric induced by the restriction of a ``standard'' holomorphic quadratic differential $\phi$: \begin{equation}\label{eq:phi} \phi = \phi(z)dz^2 = \left(z^{n-2} + iaz^{n/2 -2}\right)dz^2 \end{equation} where $a>0$ is real and $n\geq 4$ is even (the residue at the pole, which is at $\infty$, equals $\pi a$; see \textit{Example 2} in \S4.1 for details). When $n$ is odd, the residue is necessarily zero, and the metric is induced by the differential $\phi = \phi(z)dz^2 = z^{n-2}dz^2$.\\ By inversion ($w= 1/z$), this can be thought of as the metric induced by the restriction of a meromorphic quadratic differential (which we also denote by $\phi$) to a neighborhood of $0\in\mathbb{C}$: \begin{equation}\label{eq:stan} \left(\frac{1}{w^{n+2}} + \frac{ia}{w^{n/2 +2}}\right)dw^2 \end{equation} when $n$ is even, and $\left(\frac{1}{w^{n+2}}\right) dw^2$ when $n$ is odd. \begin{defn}[$\mathcal{P}_H$]\label{defn:trunc} For a planar end with residue $a$, the \textit{truncation at height $H$}, denoted by $\mathcal{P}_H$, is the planar end in which the missing ``notches'' are rectangles of horizontal width $H$ and vertical height $H/2$, except one rectangle of horizontal width $H+a$. The boundary of the planar end $\mathcal{P}_H$ is then a polygon of alternating horizontal and vertical sides, each of length $H$, except one horizontal side of length $H+a$. (This is then compatible with the metric residue being $a$; see Definition \ref{defn:metres}.) \end{defn} \textit{Remark.} Any planar end has a truncation at height $H$, for sufficiently large $H$.\\ \begin{figure}[h] \centering \includegraphics[scale=0.6]{uh.png}\\ \caption{A truncation of a planar end can be conformally identified with a neighborhood $U_H$ of $0\in \mathbb{C}$ (shown shaded). Lemma \ref{lem:uh} gives estimates on its dimensions. } \end{figure} By the previous discussion we can identify $\mathcal{P}_H\cup \{\infty\}$ with a neighborhood $U_H$ of $0\in \mathbb{C}$, via a map taking $\infty$ to $0$, which is an isometry with respect to the singular flat metric on the planar end and the ${\phi}$-metric on $U_H$, and is hence conformal. The following estimates about this simply connected domain in $\mathbb{C}$ will be useful later: \begin{lem}\label{lem:uh} There exist universal constants $D_1, D_2>0$ such that for all sufficiently large $H$, we have: \begin{equation}\label{eq:uh} \frac{D_1}{H^{2/n}} \leq dist(0,\partial U_H) \leq \frac{D_2}{H^{2/n}} \end{equation} where $n$ is the number of half-planes for the planar end.\\ (In this paper, $dist(q, K) = \inf\limits_{p\in K} \left| p-q\right|$ where $q\in \mathbb{C}$ and $K\subset \mathbb{C}$ is a compact set.)
\end{lem} \begin{proof} Observe that given a planar end, one can circumscribe a circle of circumference $C_2H$ about $\partial \mathcal{P}_H$ and inscribe one of circumference $C_1H$, where $0<C_1<1<C_2$ are constants that depend only on $n$ (for sufficiently large $H$, the effect of the fixed $a$ is negligible).\\ \begin{figure}[h] \centering \includegraphics[scale=0.5]{circs.png}\\ \end{figure} Now the circumference of the boundary circle $\partial B_r$ in the metric induced by $q$ (see (\ref{eq:stan})) is: \begin{equation*} C(r) = \int\limits_{\partial B_r} \lvert \sqrt{q}\rvert = \int\limits_{\lvert w\rvert = r} \Big\lvert\frac{1}{w^{n/2+1}} + \frac{ia}{2w}\Big\rvert \lvert dw\rvert \approx \int\limits_{\lvert w\rvert = r} \Big\lvert\frac{1}{w^{n/2+1}} \Big\rvert \lvert dw\rvert = O\left(\frac{1}{r^{n/2}}\right) \end{equation*} for sufficiently small $r$.\\ The previous observation together with this calculation then implies that $\partial U_H$ lies between two circles of radii comparable to $H^{-2/n}$, which yields (\ref{eq:uh}). (Here $D_1,D_2$ depend on $C_1,C_2$.) \end{proof} \begin{cor}\label{cor:uhmap} The conformal map $\phi_H:U_H\to \mathbb{D}$ that takes $0$ to $0$ satisfies \begin{equation}\label{eq:uhbd} \frac{1}{4D_2}\leq \frac{\left|\phi_H^\prime(0)\right|}{H^{2/n}}\leq \frac{1}{D_1} \end{equation} for all sufficiently large $H$. \end{cor} \begin{proof} Note that such a conformal map is determined uniquely up to rotation (fixing $0$), but this does not change the magnitude of the derivative at $0$.\\ For any univalent conformal map $f:\mathbb{D}\to \mathbb{C}$ the following holds for any $z\in \mathbb{D}$ (see Corollary 1.4 of \cite{Pom}): \begin{equation*} \frac{1}{4} \left(1 - \left|z\right|^2\right)\left|f^\prime(z)\right| \leq dist (f(z), \partial f(\mathbb{D})) \leq \left(1 - \left|z\right|^2\right)\left|f^\prime(z)\right| \end{equation*} We apply this to the conformal map $f = \phi_H^{-1}: \mathbb{D} \to U_H\subset \mathbb{C}$ and $z=0$. We get by rearranging, and using that $f(0)=0$, that \begin{equation}\label{eq:shk2} dist(0, \partial f(\mathbb{D})) \leq \left|f^\prime(0)\right| \leq 4 dist(0, \partial f(\mathbb{D})) \end{equation} By the previous lemma, and the fact that $\phi_H^\prime(0) = 1/f^\prime(0)$, (\ref{eq:uhbd}) now follows. \end{proof} We also note the following monotonicity: \begin{lem}\label{lem:mon} Let $\phi_H:U_H\to \mathbb{D}$ be the conformal map preserving $0$. Then the derivative $\left|\phi^\prime_H(0)\right|$ is strictly increasing with $H$. \end{lem} \begin{proof} For $\hat{H}>H$ we have the strict inclusions $\mathcal{P}_{\hat{H}}\subset \mathcal{P}_{H}$ and $U_{\hat{H}} \subset U_H$. Hence $\phi_H\circ \phi_{\hat{H}}^{-1}:\mathbb{D}\to \mathbb{D}$ is well-defined, and the lemma follows from an application of the Schwarz lemma. \end{proof} \subsection{Leading order term} \begin{defn} In a choice of local coordinates $z$ around the pole $p$, any meromorphic quadratic differential $q$ has a local expression: \begin{equation*} q = \left(\frac{a_n}{z^n} + \frac{a_{n-1}}{z^{n-1}} +\cdots \frac{a_{1}}{z} + a_0 + \cdots\right)dz^2 \end{equation*} and we define the \textit{leading order term} at the pole to be the positive real number $c_j = \left| a_n\right|$. \end{defn} \textit{Remarks.} 1. A pole at $p$ of a half-plane differential $q$ has a neighborhood $U$ that is isometric to a planar end as in Definition \ref{defn:pend}. The leading order term of a half-plane differential at a pole determines, roughly speaking, the relative scale of the conformal disk $U$ on the Riemann surface.\\ 2.
We shall sometimes refer to the leading order term of $q$ at $p$ \textit{with respect to $U$}, where $U$ is a simply connected domain containing $p$. This means that the coordinate chart that we consider is the conformal (Riemann) map $\phi:U\to \mathbb{D}$ that takes $p$ to $0$. Note that by the above definition, the leading order term is independent of the choice of such a $\phi$ (rotation does not change the magnitude). \begin{lem}[Pullbacks and leading order terms]\label{lem:pullb} Let $f:\mathbb{D}\to \mathbb{C}$ be a univalent conformal map such that $f(0)=0$ and let $q$ be a meromorphic quadratic differential on $\mathbb{C}$ having the local expression \begin{equation*} q(z)dz^2 = \left(\frac{a_n}{z^n} + \frac{a_{n-1}}{z^{n-1}} +\cdots \frac{a_{1}}{z} + a_0 + \cdots\right)dz^2 \end{equation*} in the usual $z$-coordinates. Then the pullback quadratic differential $f^\ast q$ on $\mathbb{D}$ has leading order term equal to $\left| f^\prime(0)\right|^{2-n}\left|a_n\right|$ at the pole at $0$. \end{lem} \begin{proof} By the usual transformation law for a change of coordinates, the local expression for the pullback differential is $q\circ f(z)\, f^\prime(z)^2$. The lemma follows by a calculation involving a series expansion, using that \begin{center} $f(z) = f^\prime(0)z + b_2z^2 +O(z^3)$. \end{center} \end{proof} \section{Examples} \subsection{Explicit hpd's in $\hat{\mathbb{C}}$} A meromorphic quadratic differential on $\hat{\mathbb{C}} = \mathbb{C} \cup \{\infty\}$ can be expressed as $q(z)dz^2$ in the affine chart $\mathbb{C}$, where $q(z)$ is a meromorphic function. The following are examples of such functions which yield a half-plane differential (hpd).\\ \textit{Example 0.} The quadratic differential $dz^2$ has a pole of order $4$ at $\infty$, and it induces the usual euclidean metric on the plane.\\ \textit{Example 1.} The quadratic differential $z^ndz^2$ for $n\geq 1$ has a pole of order $n+4$ at infinity, with analytic residue (and metric residue) equal to zero. The singular flat metric has $n+2$ half-planes glued around the origin.\\ \textit{Example 2.} The quadratic differential $\phi$ (see (\ref{eq:phi})) \begin{equation*} \left(z^{n-2} + iaz^{n/2 -2}\right)dz^2 \end{equation*} on $\mathbb{C}$ for an even integer $n\geq 4$ and some real number $a>0$ has $n/2+1$ zeroes, a pole of order $n+2$ at infinity, and a connected critical graph, with metric residue $\pi a$ (see Figure 6, and \cite{HM} and \cite{Wan} for details).\\ \begin{figure}[h!] \centering \includegraphics[scale=0.7]{hubbmas.png}\\ \caption{Partial picture of the critical graph when $n=8$ in \textit{Example 2}. The finite-length saddle connections have length $\pi a/4$. } \end{figure} In local $w$-coordinates at infinity obtained by inversion, the quadratic differential has the standard form (\ref{eq:stan}) as in \S3. The analytic residue can thus be computed in these coordinates as follows: \begin{equation*} Res_q(0) = \displaystyle\int\limits_\gamma \sqrt{q} = \displaystyle\int\limits_\gamma \left(\frac{1}{w^{n/2+1}} + \frac{ia}{2w}\right)dw = \pi a \end{equation*} which is equal to the metric residue. \subsection{Other hpd's in $\hat{\mathbb{C}}$} Since any simply connected parabolic Riemann surface is conformally equivalent to $\mathbb{C}$, it is easy to construct half-plane differentials on the Riemann sphere: \begin{figure}[h] \centering \includegraphics[scale=0.75]{metree.png}\\ \caption{Attaching half-planes along this metric tree produces an hpd with a pole of order $8$ and residue $\left|2a_1-2a_3\right|$.
} \end{figure} \subsubsection*{Single-poled} Take any metric tree $T$ with $n$ edges of infinite length. Then there are $n$ resulting boundary lines (think of the boundary of an $\epsilon$-thickening of $T$ and let $\epsilon\to 0$) and one can attach $n$ euclidean half-planes to these boundary lines by isometries along their boundaries. The resulting Riemann surface is simply connected and parabolic, hence conformally $\mathbb{C}$, and is equipped with an hpd with a pole of order $n+2$ at infinity. The metric spine is $T$, and the metric residue at the pole can be read off from the lengths of the edges in $T$ (see Figure 7). \subsubsection*{Multiple-poled} Consider $\mathbb{C}$ obtained by gluing half-planes to a metric tree $T$ as above. The following local modification introduces another pole: Take a subinterval of one of the edges of $T$ and slit it open, introduce $n^\prime$ vertices on the resulting boundary circle, and attach $n^\prime$ semi-infinite edges from those vertices. Along each of the $n^\prime$ resulting new boundary lines, we attach half-planes as before. Topologically, one has just attached a punctured disk to $\mathbb{C}$ after the slit, so the resulting surface (after adding the puncture) is still $\hat{\mathbb{C}}$, but it now has a half-plane differential with a new pole of order $n^\prime +2$. One can also vary the residues at the poles by controlling edge lengths (see Figure 8). \begin{figure}[h] \centering \includegraphics[scale=0.65]{hpdc.png}\\ \caption{An hpd on $\hat{\mathbb{C}}$: The zeroes are the dark vertices, and the poles of order $4$ are at the lighter vertices and infinity. Varying the lengths of various edges changes the residues and leading order terms. } \end{figure} \subsection{Interval-exchange surfaces} Introduce a finite-length horizontal slit on $\mathbb{C}$ and glue the resulting two sides of the interval by an interval exchange (see Figure 9). The resulting surface (punctured at infinity) can be of any genus, by prescribing the combinatorics of the gluing appropriately. By adding the two semi-infinite horizontal intervals on either side of the slit, one can easily see that this is a half-plane surface with two half-planes (above and below the horizontal line). The half-plane differential has a single order-$4$ pole at infinity. \\ \begin{figure}[h!] \centering \includegraphics[scale=0.8]{hp-torus.png}\\ \caption{The half-plane surface obtained by the interval exchange shown on the left gives a punctured torus. This is not the generic case, as the resulting hpd has a zero of order two.} \end{figure} One consequence of Theorem \ref{thm:main} is that one gets \textit{all} once-punctured Riemann surfaces this way. The following is a simple dimension count that indicates that this is possible:\\ We place a vertex at infinity to get a cell decomposition of a \textit{closed} surface of genus $g$. We have \begin{equation}\label{eq:euler} v-e+f = 2-2g \end{equation} where $v$, $e$ and $f$ are the numbers of vertices, edges and faces of the resulting decomposition.\\ Note that there are two faces (the two half-planes), so $f=2$. Moreover, in the generic case, all vertices are trivalent except one (the vertex at infinity, which has valence $2$).
Hence \begin{equation*} 2e = 3(v-1) +2 \end{equation*} Using these facts in (\ref{eq:euler}) we get \begin{equation*} e= 6g+1 \end{equation*} To count the resulting number of parameters, note that two of these edges are the semi-infinite ones, which carry no length parameter, and the conformal structure of the half-plane surface does not change if one scales all finite lengths by a positive real. Hence the total dimension of the set of parameters is $(6g-2)$, which equals the dimension $(6g-4)$ of the moduli space $\mathcal{M}_{g,1}$ plus two, accounting for the two additional quantities prescribed in Theorem \ref{thm:main}, namely the residue and the leading order term at the pole. \subsection{Single-poled hpd's on surfaces} Take a single-poled hpd on $\hat{\mathbb{C}}$ and introduce a slit on one of the edges of the metric spine, and glue the resulting two sides by an interval exchange. By a suitable choice of the combinatorics of the gluing, this gives higher genus half-plane surfaces. (See Figure 10.) \begin{figure}[h!] \centering \includegraphics[scale=0.9]{metreeIX.png}\\ \caption{Slitting the metric spine of an hpd on $\hat{\mathbb{C}}$ along an edge and then gluing the resulting sides by an interval exchange produces a single-poled hpd on higher genus surfaces. } \end{figure} \subsection{A low complexity example:} The following lemma deals with the exceptional case in Theorem \ref{thm:main}. \begin{lem} Any meromorphic quadratic differential on $\hat{\mathbb{C}}$ with a single pole $p$ of order 4 has residue $0$ at $p$. \end{lem} \begin{proof} Let $q$ be such a meromorphic quadratic differential, so it has the form \begin{equation*} q = \left(\frac{c_4}{z^4} + \frac{c_3}{z^3} + \cdots\right)dz^2 \end{equation*} in the usual coordinates in an affine chart. On $\hat{\mathbb{C}}$ we also have the quadratic differential \begin{equation*} \psi = \frac{c_4}{z^4}dz^2 \end{equation*} The quadratic differential $q-\psi$ then has a single pole, of order less than or equal to $3$. There is no such non-zero quadratic differential on $\hat{\mathbb{C}}$, and hence $q =\psi$, which has residue $0$. \end{proof} \section{Outline of the proof} We illustrate the proof of Theorem \ref{thm:main} in the case of a single puncture. This easily generalizes to the case of multiple poles; a summary is provided in \S12. \\ Throughout, we shall fix a Riemann surface $\Sigma$ of genus $g$, with a marked point $p$ with a disk neighborhood $U$, an integer $n\geq 4$, and $a,c\in \mathbb{R}^{+}$. Our goal is to show there exists a conformal homeomorphism \begin{equation*} g:\Sigma\setminus p \to \Sigma_{n,a} \end{equation*} where $\Sigma_{n,a}$ is a half-plane surface such that the half-plane differential has one pole of order $n$ and residue $a$. Furthermore, via the uniformizing chart \begin{equation*} \phi:U\to \mathbb{D} \end{equation*} taking $p$ to $0$, the pullback quadratic differential on $\mathbb{D}$ has a pole at $0$ with leading order term $c$.\\ \medskip Briefly, the argument consists of producing a sequence of half-plane surfaces (\textit{Steps 1} and \textit{2}) that converges to a surface conformally equivalent to $\Sigma \setminus p$ (\textit{Step 3}) and metrically a half-plane surface (\textit{Steps 4} and \textit{5}). The bulk of the proof lies in proving the latter convergence, after first showing that one has sufficient geometric control on the surfaces along the sequence.\\ \textbf{Step 1.} \textit{(Quadrupling)} We define a suitable compact exhaustion $\{\Sigma_i\}$ of $\Sigma\setminus p$, and by a two-step conformal doubling procedure along boundary arcs we define a corresponding sequence of compact Riemann surfaces $\hat{\Sigma_i}$.
An application of the Jenkins-Strebel theorem then produces certain holomorphic quadratic differentials on these surfaces, which on passing back to $\Sigma_i$ by the involutions give singular flat structures with ``polygonal'' boundary.\\ \textbf{Step 2.} \textit{(Prescribing boundary lengths)} We complete each of these singular flat surfaces with polygonal boundary to a half-plane surface $\Sigma_i^\prime$ by gluing in an appropriate planar end. We first show that by choosing the arcs in the first doubling step appropriately, one can ensure that the sequence of planar ends one needs are truncations at heights $H_i\to \infty$ of a fixed planar end $\mathcal{P}$. Here $H_i$ is a sequence of real numbers diverging at a prescribed rate. This is the geometric control crucial for the convergence in \textit{Step 4}.\\ \textbf{Step 3.}\textit{ (Conformal limit)} Applying a quasiconformal extension result, we show that these half-plane surfaces $\Sigma_i^\prime$ have $\Sigma\setminus p$ as a conformal limit.\\ \textbf{Step 4.}\textit{ (The quadratic differentials converge)} We now show that the half-plane differentials corresponding to $\Sigma_i^\prime$ satisfy a convergence criterion (see Appendix A), and hence after passing to a subsequence they converge to a meromorphic quadratic differential with the right order and residue. \\ \textbf{Step 5.} \textit{(A limiting half-plane surface)} We show that the limiting quadratic differential in \textit{Step 4} is in fact a half-plane differential, that is, the sequence of half-plane surfaces limits to a half-plane surface $\Sigma_{n,a}$. By \textit{Step 3}, this surface is conformally $\Sigma \setminus p$, as required.\\ \textbf{Step 6.}\textit{ (The leading order coefficient)} With a final analytical lemma we show that an additional control on the sequence $H_i\to \infty$ in \textit{Step $2$} ensures that the limiting half-plane differential has leading order term $c$ on $U$, as required. \section{Step 1: A quadrupling procedure} \subsection*{The compact exhaustion} Consider the neighborhood $U$ of $p\in \Sigma$ with the conformal chart $\phi:U\to \mathbb{D}$ such that $\phi(p) = 0$. Let $B(r) \subset \mathbb{D}$ denote the open disk of radius $r$ centered at $0$, and let $U(r) \subset U$ denote the inverse image $\phi^{-1}(B(r))$.\\ Define $\Sigma_i$ to be the Riemann surface $\Sigma \setminus U(2^{-i})$. For convenience we shall denote $U(2^{-i})$ by $U_i$. Note that this is a compact Riemann surface with boundary $C_i = \partial \Sigma_i = \partial \overline{U(2^{-i})}$.\\ \begin{figure}[h!] \centering \includegraphics[scale=0.85]{exhaust.png}\\ \caption{$C_i$ is the inverse image of a circle of radius $2^{-i}$ under the conformal chart $\phi:U\to \mathbb{D}$. } \end{figure} The subsurfaces $\{\Sigma_i\}_{i=1}^{\infty}$ form a compact exhaustion. In particular, we note the following: \\ (1) $\Sigma_i\subset \Sigma_{i+1}$ for each $i\geq 1$.\\ (2) $\bigcup\limits_{i=1}^\infty \Sigma_i= \Sigma \setminus p$.\\ (3) $U_i = \Sigma \setminus \Sigma_i$ is a topological disk containing $p$, and\\ (4) $\Sigma_{i}\setminus \Sigma_{1}$ is a topological annulus of modulus $A\cdot (i-1)$, where $A = \frac{\log 2}{2\pi}$.
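The modulus in (4) can be verified directly (with the usual convention that the round annulus $\{r<\lvert z\rvert <R\}$ has modulus $\frac{1}{2\pi}\log\frac{R}{r}$): the chart $\phi$ maps $\Sigma_i\setminus \Sigma_{1} = U_1\setminus U_i$ conformally onto the round annulus between the circles of radii $2^{-i}$ and $2^{-1}$, so
\begin{equation*}
\mathrm{mod}\left(\Sigma_i\setminus\Sigma_1\right) = \frac{1}{2\pi}\log\frac{2^{-1}}{2^{-i}} = \frac{\log 2}{2\pi}\,(i-1).
\end{equation*}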
\subsection*{Conformal doubling} For each subsurface in the compact exhaustion constructed in the previous section, we shall now define a two-step doubling across a collection of $n$ arcs to get a sequence of compact surfaces $\widehat{\Sigma_i}$.\\ For each $i\geq 1$ consider the homeomorphism $\phi_i:\overline{U_i} \to \overline{B(2^{-i})}$ that is the restriction of the conformal chart $\phi$. Choose a collection of $n$ arcs on the boundary $\partial \overline{B(2^{-i})}$ and pull it back via $\phi_i^{-1}$ to a collection of arcs $a_1,a_2,\ldots, a_n$ on $C_i = \partial \Sigma_i$. Note that the complement of these arcs on $C_i$ is another collection of $n$ arcs which we denote by $b_1,b_2,\ldots, b_n$. In \textit{Step 2} (the following section) we shall specify more about the choice of these arcs. \\ Consider now two copies of the surface $\Sigma_i$ with the collections of $a$- and $b$-arcs on the boundary. Passing to the unit disc $\mathbb{D}$ via the conformal chart $\phi$, glue the $b$-arcs together via the anti-conformal map $z\mapsto 2^{-2i}/\bar{z}$ (this preserves the circle of radius $2^{-i}$). We get a doubled surface $\Sigma^d_i$ with a conformal structure, which has $n$ slits corresponding to the $a$-arcs on the boundary of each half (which remain unglued in our doubling). \\ \begin{figure}[h] \centering \includegraphics[scale=0.55]{double.png}\\ \caption{We first double $\Sigma_i$ (shown on the left) along the $b$-arcs on the boundary to get $\Sigma^d_i$ (shown on the right). } \end{figure} Next, we take two copies of this resulting surface $\Sigma^d_i$ and glue them along these slits to get a \textit{closed} Riemann surface $\widehat{\Sigma_i}$. This time the conformal gluing is via a suitable restriction of the hyperelliptic involution of a genus $n-1$ surface branched over $n$ equatorial slits on $\hat{\mathbb{C}}$ (the restriction is to a collar neighborhood on one side of the equator). \\ Note that the glued pairs of $a$-slits form a collection of $n$ nontrivial homotopy classes of curves $[\gamma_1],[\gamma_2],\ldots, [\gamma_n]$ on the surface $\widehat{\Sigma_i}$.\\ \begin{figure}[h] \centering \includegraphics[scale=0.5]{doubstep2.png}\\ \caption{In the second step one glues two copies of $\Sigma^d_i$ (see Figure 12) to get the closed, ``quadrupled" surface $\widehat{\Sigma_i}$. (Not all the handles are shown in the figure.) } \end{figure} We also have a pair of anticonformal involutions $j^1_i$ and $j^2_i$, where $j_i^1: \Sigma^d_i \to \Sigma^d_i$ is the deck transformation of the branched double covering $\pi^1_i: \Sigma^d_i \to \Sigma_i$, and $j^2_i:\widehat{\Sigma_i}\to \widehat{\Sigma_i}$ is similarly the deck transformation of the branched double covering $\pi^2_i: \widehat{\Sigma_i} \to \Sigma^d_i$. \subsection*{Rectangular surfaces} By the theorem of Jenkins-Strebel, on each surface $\widehat{\Sigma_i}$ we have a holomorphic quadratic differential $\hat{q_i}$ which induces a singular-flat metric comprising $n$ euclidean cylinders of circumferences $(2H_i, 2H_i,\ldots, 2H_i+2a)$ with core curves $[\gamma_1],[\gamma_2],\ldots, [\gamma_n]$, where \begin{equation}\label{eq:hi} H_i=\left(H_0\cdot 2^i\right)^{n/2} \end{equation} for a choice of $H_0>0$ that shall eventually be made in Proposition \ref{prop:deriv}.\\ (The reason for choosing $H_i$ to be of the above form shall be clarified by Lemma \ref{lem:uhi}.)\\ We now show that this passes down to a singular flat metric on $\Sigma_i$ with a ``rectangular" structure when we quotient back by the anticonformal involutions $j^1_i$ and $j^2_i$.
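The following model case may be useful to keep in mind before the next lemma (a standard example, included here only for orientation): on $\mathbb{C}$ with the quadratic differential $q = dz^2$, the anti-conformal involutions $z\mapsto \bar{z}$ and $z\mapsto -\bar{z}$ preserve $q$ and fix pointwise the real and the imaginary axes respectively, and these are precisely a horizontal and a vertical trajectory of $q$. The lemma below asserts that this dichotomy holds in general.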
\begin{lem}\label{lem:diam} Let $X$ be a Riemann surface with a holomorphic quadratic differential $q$, and let $j:X\to X$ be an anti-conformal involution that fixes a connected analytic arc $\gamma$ pointwise. Suppose that $j$ preserves $q$, in the sense that $j^\ast q = \overline{q}$. Then $\gamma$ is either completely horizontal or completely vertical in the quadratic differential metric. \end{lem} \begin{proof} Let $p\in \gamma$, and let $v\in T_pX$ be the tangent vector to $\gamma$ at $p$. Since $\gamma$ is fixed pointwise by $j$, the induced map $j_\ast:T_pX\to T_pX$ satisfies $j_\ast(v)=v$.\\ Since $j$ preserves $q$, the pullback quadratic differential $j^\ast q$ satisfies: \begin{equation}\label{eq:eq1} j^\ast q (v,v) = \overline{ q (v,v) } \end{equation} where $\bar{\alpha}$ denotes the complex-conjugate of a complex number $\alpha$.\\ On the other hand, by definition of the pullback we have: \begin{equation}\label{eq:eq2} j^\ast q (v,v) = q(j_\ast v,j_\ast v) = q(v,v) \end{equation} where we used the fact that $j_\ast$ fixes $v$.\\ By (\ref{eq:eq1}) and (\ref{eq:eq2}) we have that $\overline{q(v,v)}=q(v,v)$ and hence $q(v,v)\in \mathbb{R}$.\\ Consider the local coordinate around any regular point $p\in \gamma$ (that is, away from the zeros of $q$) in which the quadratic differential $q$ is $dz^2$. By the previous observation and the fact that $\gamma$ is analytic, $\gamma$ is locally either a horizontal or vertical segment around the image of $p$. Since $\gamma$ is connected and the zeros of $q$ are isolated, this is true of the entire arc. \end{proof} We apply this lemma to the involutions $j^1_i$ and $j^2_i$; in each case the relevant differential is preserved by the involution, by the uniqueness assertion in the Jenkins-Strebel theorem. First, the anticonformal involution $j^2_i:\widehat{\Sigma_i}\to \widehat{\Sigma_i}$ fixes the $n$ $a$-slits and hence they are either completely horizontal or vertical. Since they are homotopic to the core curves of the cylinders in the $\hat{q_i}$-metric, they must in fact be completely horizontal. Next, the anticonformal involution $j_i^1: \Sigma^d_i \to \Sigma^d_i$ fixes the $b$-arcs on $\Sigma^d_i$. Since these arcs embed in $\widehat{\Sigma_i}$ as transverse arcs across the cylinders in the $\hat{q_i}$-metric, they must be completely vertical.\\ Hence the holomorphic quadratic differential $\hat{q_i}$ passes down to a holomorphic quadratic differential $q_i$ on the bordered surface $\Sigma_i$. The $n$ $a$-arcs on $\partial \Sigma_i$ become horizontal segments, and the remaining $n$ $b$-arcs on $\partial \Sigma_i$ are vertical segments. These form a polygonal boundary. The $n$ euclidean cylinders on $\widehat{\Sigma_i}$ descend to a cyclically-ordered collection of $n$ euclidean rectangles on $\Sigma_i$ glued along the critical graph $\mathcal{G}_i$. In the $q_i$-metric on $\Sigma_i$, the horizontal widths of these rectangles are $(H_i, H_i,\ldots, H_i+a)$ in a cyclic order (see Figure 14). \begin{figure}[h] \centering \includegraphics[scale=0.55]{rectsurf1.png}\\ \caption{The rectangular surface at the end of \textit{Step 1}.} \end{figure} \section{Step 2: Prescribing lengths} In \textit{Step 1}, the Jenkins-Strebel theorem allows us to prescribe the circumferences of the resulting metric cylinders on the ``quadrupled" surface $\widehat{\Sigma_i}$, but not their lengths. On quotienting back by the two involutions, this results in control on the lengths of the ``horizontal" sides of the polygonal boundary of the resulting ``rectangular" surface (see figure above).
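Schematically, and in the notation of the two lemmas proved below, \textit{Step 2} rests on the surjectivity of a composition \begin{equation*} (r_1,r_2,\ldots, r_n) \;\overset{\phi}{\longmapsto}\; (\lambda_1,\lambda_2,\ldots, \lambda_n) \;\overset{\psi}{\longmapsto}\; (1/l_1,1/l_2,\ldots, 1/l_n), \end{equation*} where the $r_i$ are the ratios in which the boundary intervals are divided into arcs, the $\lambda_i$ are the extremal lengths of the curves enclosing the slits, and the $l_i$ are the cylinder lengths: Lemma \ref{lem:arcs} provides the surjectivity of $\phi$, and Lemma \ref{lem:cyllen} that of $\psi$.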
We show here, using a continuity method, that choosing the arcs carefully in the conformal doubling step ensures that the extremal lengths of the curves obtained from the doubled arcs on $\widehat{\Sigma_i}$ are appropriate values (Lemma \ref{lem:arcs}) such that the vertical edge-lengths are also prescribed (Lemma \ref{lem:cyllen}).\\ We shall use the following: \begin{lem}[A topological lemma]\label{lem:deglem} Let $\phi:\mathbb{R}^n_{>0} \to \mathbb{R}^n_{>0}$ be a proper, continuous map sending \begin{equation*} (x_1,x_2,\ldots ,x_n) \mapsto (y_1,y_2,\ldots, y_n) \end{equation*} Suppose there exist functions $\eta_1,\eta_2$ such that:\\ (1) $x_i>A \implies y_i>\eta_1(A)$, and\\ (2) $x_i<\epsilon \implies y_i< \eta_2(\epsilon)$,\\ for each $1\leq i\leq n$, where $\eta_1(A) \to \infty$ as $A\to \infty$, and $\eta_2(\epsilon)\to 0$ as $\epsilon\to 0$.\\ Then $\phi$ is surjective.\\ (Note: here $\mathbb{R}_{>0} = \mathbb{R}^{+}$ denotes the positive real numbers.) \end{lem} \begin{proof}[Sketch of a proof] The proof is a standard topological degree argument. Note that (1) and (2) are equivalent to the requirement that $\phi$, in addition to being proper, has a coordinate-wise control: for a sequence of points $(x_1^j,x_2^j,\ldots, x_n^j)$ and $\phi$-images $(y_1^j,y_2^j,\ldots, y_n^j)$ (where $j\in\mathbb{N}$) we have that, fixing $1\leq i\leq n$, $x_i^j\to\infty \implies y_i^j\to \infty$ and $x_i^j\to 0 \implies y_i^j\to 0$ \textit{uniformly} (independently of the rest of the coordinates). This implies that $\phi$ has degree $1$ at infinity, and hence $\phi$ is surjective. \end{proof} \subsection*{Arcs and extremal lengths} Consider a Riemann surface $\Sigma$ with one boundary component which we identify with $S^1$, and with $n$ sub-intervals $I_1,I_2,\ldots, I_n$ of equal length. Each $I_i$ is further divided into two sub-arcs $\{a_i,b_i\}$ in clockwise order (see figure). As in the construction in \textit{Step 1}, consider the two-step doubling that leads to a closed Riemann surface: first double along the $b$-arcs on $\partial \Sigma$ to get a Riemann surface $\Sigma_d$ with slits corresponding to the $a$-arcs, and next, glue two copies of $\Sigma_d$ along these slits to get a closed surface $\widehat{\Sigma}$.\\ We shall work with the doubled surface $\Sigma_d$, and homotopy classes of curves $\gamma_i$ that enclose each of the slits $a_i$, for $1\leq i\leq n$. Let their extremal lengths be $\lambda_1,\lambda_2,\ldots, \lambda_n$; note that these values are exactly twice the extremal lengths of the corresponding closed curves one gets on $\widehat{\Sigma}$. We shall first show that by prescribing the $2$-arc decomposition of each interval $I_i$ appropriately, we can obtain any $n$-tuple of extremal lengths.\\ Consider the subinterval $I_i = a_i\cup b_i$. We denote the (angular) length of a subarc $\tau$ on $\partial \Sigma \equiv S^1$ by $l(\tau)$. Note that $l(I_i) = \frac{2\pi}{n}$ for each $i$. For convenience, we shall fix a conformal metric $\rho$ on $\Sigma$ that gives a length $2\pi$ to the boundary circle $\partial \Sigma$, and the above lengths of arcs shall be those induced by this metric. Denote the ratio of lengths $r_i = \frac{l(a_i)}{l(b_i)}$. Also, notice that each $a_i$ arc has two adjacent arcs $b_{i-1}$ and $b_i$ on either side (where $i-1$ is taken to be $n$ if $i=1$). \begin{figure}[h] \centering \includegraphics[scale=0.6]{abarcs.png}\\ \caption{The $a$- and $b$-arcs on the boundary component $\partial \Sigma$.
The closed curve $\gamma_i$ goes around the $a_i$ slit on the surface doubled across the $b$-arcs, and has extremal length $\lambda_i$. } \end{figure} \begin{lem}\label{lem:arcs} The map $\phi:\mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$ that assigns to a tuple $(r_1,r_2,\ldots, r_n)$ of ratios of interval lengths, the corresponding extremal lengths $(\lambda_1,\lambda_2,\ldots,\lambda_n)$ is surjective. \end{lem} \begin{proof} The map $\phi$ is continuous since the moduli of the doubled surfaces depend continuously on the lengths of the slits (even if some slits degenerate to punctures). The lemma shall follow once we show that the following properties hold.\\ \textit{Notation.} In what follows we shall consider an arbitrary $(r_i)_{1\leq i\leq n}$ in $\mathbb{R}^n_{\geq 0}$ and its $\phi$-image $(\lambda_i)_{1\leq i\leq n}$. In (2) and (3) below, we consider a sequence $\{(r_i)^j\}$ of such $n$-tuples and their $\phi$-images $\{(\lambda_i)^j\}$, where the index $j$ runs from $1\leq j<\infty$ and shall be appended to any of the geometric quantities varying with $j$.\\ (1) $\lambda_i=0 \iff r_i=0$.\\ (2) $r_i^j\to \infty \implies \lambda_i^j\to \infty$. The divergence is uniform, that is, $r_i>c \implies \lambda_i>\eta_1(c)$.\\ (3) $r_i<c \implies \lambda_i<\eta_2(c)$.\\ (Here $\eta_1,\eta_2:\mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ are increasing functions.)\\ Note that (1) and (2) imply that the map $\phi$ is proper, and from the uniform estimates in (2) and (3), the surjectivity of $\phi$ follows from Lemma \ref{lem:deglem}. \\ \textit{Property (1)}: The backward implication holds since $r_i=0\iff l(a_i)=0$, in which case the corresponding $a$-slit has degenerated to a puncture on $\Sigma_d$, and the extremal length $\lambda_i$ of a loop enclosing a puncture is $0$. For the other implication, observe that if $l(a_i) \neq 0$ then for our choice of conformal metric $\rho$ we shall have $l_\rho(a_i) >\eta >0$, and hence any closed curve homotopic to $\gamma_i$ has $\rho$-length at least $2\eta$. The analytic definition of extremal length, \begin{equation*} \lambda_i = \sup\limits_\rho \frac{\left(\inf\limits_{\gamma\sim \gamma_i} l_\rho(\gamma)\right)^{2}}{A(\rho)}, \end{equation*} then shows that there is a positive lower bound on the extremal length $\lambda_i$.\\ \textit{Property (2)}: Note that by definition, $r_i^j\to \infty \implies l(b_i^j)\to 0$. By the geometric definition of extremal length, \begin{equation}\label{eq:mod} \lambda_i^j =\inf \frac{1}{mod(\mathcal{A})} \end{equation} where $\mathcal{A} \subset \Sigma_d$ is an embedded annulus with core-curve enclosing the $a_i^j$-slit.\\ It is well-known that there is a bound $B$ on the largest modulus of an annulus $\hat{\mathcal{A}}$ that can be embedded in $\mathbb{C}$ such that the bounded complementary component contains the interval $[0, l(a_i^j)]$ and the other component contains the point $ l(a_i^j) + l(b_i^j) \in \mathbb{R}$. Moreover, $B\to 0$ as $l(b_i^j)\to 0$. In our case, the $a_i^j$-slit on $\Sigma_d$ is such an interval in an appropriate conformal chart, and the endpoint of the adjacent $b$-arc is the other real point. However, since the annulus $\mathcal{A}$ is now constrained to be embedded in the surface $\Sigma_d$, its modulus is at most that of the extremal planar annulus $\hat{\mathcal{A}}$. One can also see this by considering the annular cover associated with the closed curve $\gamma_i^j$. From the above discussion, this proves that $\lambda_i^j\to \infty$ by (\ref{eq:mod}).
\\ \textit{Property (2) continued}: The uniform divergence follows by quantifying the estimates in the argument above: the bound $B$ on the largest modulus of an annulus in $\mathbb{C}$ separating the interval $[0,a]$ from $a+b\in \mathbb{R}$ is in fact a strictly increasing function of $b/a$. That is, $B = \eta(b/a)$ where $\eta:\mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ is a continuous function such that $\eta(0) = 0$. The above argument then shows that \begin{equation*} r_i>c \implies \frac{l(b_i)}{l(a_i)} < 1/c \implies B < \eta(1/c) \implies \lambda_i > \eta(1/c)^{-1} \end{equation*} where we have used (\ref{eq:mod}) for the last inequality. Hence the uniform estimate holds with the function $\eta_1(x) := \eta(1/x)^{-1}$.\\ \textit{Property (3)}: This follows from closely examining the argument of the backward implication in \textit{Property (1)}: if $r_i<c$ then by definition $l(a_i)< \frac{2\pi c}{n(1+c)}$, and hence for sufficiently small $c$ one can embed an annulus of inner radius $\frac{2\pi c}{n(1+c)}$ and outer radius $R$ (that depends only on $\rho$ and $\Sigma$) on the doubled surface $\Sigma_d$. The modulus of this annulus is $M(c)$, which tends to $\infty$ as $c\to 0$. By the geometric definition of extremal length (\ref{eq:mod}), $\lambda_i$ is less than $1/M(c)$, so one may take $\eta_2(c) = 1/M(c)$. \end{proof} \textit{Remark.} A similar setup involving slits along subintervals of the real line in $\hat{\mathbb{C}}$ was considered in \cite{Penner}. \subsection*{Cylinder lengths} As in \S6, an application of the Jenkins-Strebel theorem to the homotopy classes of curves (corresponding to the doubled $a$-slits) on the ``quadrupled" surface $\widehat{\Sigma}$ produces a quadratic differential metric with a decomposition into $n$ metric cylinders $C_1,C_2,\ldots, C_n$ with these as the core curves. By Lemma \ref{lem:arcs}, choosing the $a$-arcs on $\partial \Sigma$ appropriately, these curves can be made to assume any $n$-tuple of extremal lengths. We now show that by prescribing these extremal lengths correctly, one can achieve any $n$-tuple of cylinder lengths $\{l_i\}_{1\leq i\leq n}$. Note that the Jenkins-Strebel theorem already allows one to prescribe arbitrary circumferences. \begin{lem}\label{lem:cyllen} Suppose one fixes each cylinder circumference to be $H$. Then the map $\psi:\mathbb{R}^n_{>0}\to \mathbb{R}^n_{> 0}$ that assigns to a tuple $(\lambda_1,\lambda_2,\ldots, \lambda_n)$ of extremal lengths of the core curves, the reciprocals of the cylinder lengths $(1/l_1,1/l_2,\ldots,1/l_n)$, is surjective. \end{lem} \begin{proof} The surjectivity of $\psi$ shall follow from Lemma \ref{lem:deglem} once we establish the following properties:\\ (1) $\lambda_i> C \implies 1/l_i > \eta_1(C)$.\\ (2) For sufficiently small $c$, $\lambda_i < c \implies 1/l_i < \eta_2(c)$,\\ for some increasing functions $\eta_1, \eta_2:\mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$.\\ \textit{Property (1)}: By (\ref{eq:mod}) we have: \begin{equation*} \lambda_i\geq C \implies mod(\mathcal{A}) \leq 1/C \end{equation*} where $\mathcal{A}$ is any embedded annulus in $\Sigma_d$ with core curve $\gamma_i$. This implies that $l_i \leq H/C$, since otherwise one can embed a flat cylinder of modulus greater than $1/C$ with core curve $\gamma_i$. Hence $\eta_1(x) = x/H$ works.\\ \textit{Property (2)}: We shall prove the contrapositive statement: if $l_i<B$, then $\lambda_i>C>0$ for some $C$ that depends on $B$.
For this, we shall construct a curve $\beta_i$ that intersects $\gamma_i$ at most twice, such that the extremal length \begin{equation}\label{eq:bi} Ext(\beta_i) < D \end{equation} for some $D$ (depending on $B$). The lower bound $C$ for $\lambda_i =Ext(\gamma_i)$ now follows from the well-known inequality (see \cite{Min1}): \begin{equation*} Ext(\gamma_i)Ext(\beta_i)\geq i(\gamma_i, \beta_i)^2 \end{equation*} To show the bound (\ref{eq:bi}) we shall use the geometric definition of extremal length (\ref{eq:mod}): namely, we shall construct an annulus of definite modulus with core curve $\beta_i$.\\ The construction of $\beta_i$ is geometric, and falls into two cases. Consider the cylinder $C_i$ corresponding to $\gamma_i$ on the quadrupled surface $\widehat{\Sigma}$, of circumference $H$ and length $l_i<B$ (by our current assumption). Recall that $C_i$ has a bilateral symmetry that comes from the doubling. Each of its two boundary components is adjacent to the other cylinders, and at least one of these, say $C_k$, shares a boundary segment with $C_i$ of definite length, that is: \begin{equation*} l(\partial C_k \cap \partial C_i)\geq \frac{H}{n} \end{equation*} Let the length of the cylinder $C_k$ be $l_k$. The two cases are:\\ (I) $l_k \leq B$: In this case we construct the curve $\beta_i$ intersecting $\gamma_i$ once, as shown in Figure 16.\\ \begin{figure}[h] \centering \includegraphics[scale=0.65]{bicase1.png}\\ \caption{In Case I $\beta_i$ consists of an arc across $C_i$ and an arc across $C_k$. This figure shows half of the surface. By the two-fold symmetry from the ``doubling", these arcs join up to form a closed curve. } \end{figure} (II) $l_k>B$: In this case we construct $\beta_i$ intersecting $\gamma_i$ twice, as shown in Figure 17.\\ \begin{figure}[h] \centering \includegraphics[scale=0.65]{bicase2.png}\\ \caption{In Case II $\beta_i$ comprises two arcs across $C_i$, separated by a definite distance, that extend a bit into $C_k$, together with arcs around $C_k$ connecting the two pairs of endpoints.} \end{figure} In both cases, the curves are of bounded length and admit an embedded ``collar" neighborhood of definite width (these dimensions depend only on $B$ and $H$) and hence satisfy (\ref{eq:bi}) for some $D$, by the geometric definition of extremal length. The function $\eta_2$ is implicit from the construction. \end{proof} \textit{Remark.} The above argument also works if the cylinder circumferences are a fixed $n$-tuple $(c_1,c_2,\ldots, c_n)$, by replacing $H$ in the proof by the maximum or the minimum of the $c_i$, as appropriate. \subsection*{Half-plane surfaces $\Sigma_i^\prime$} By Lemmas \ref{lem:cyllen} and \ref{lem:arcs} one can now choose the $a$- and $b$-arcs on $\partial \Sigma_i$ such that the cylinder lengths of the Jenkins-Strebel differential on the quadrupled surface $\widehat{\Sigma_i}$ are all $2H_i$. (Recall $H_i = \left(H_0\cdot 2^i\right)^{n/2}$ as in (\ref{eq:hi}).
See also the remark following Lemma \ref{lem:cyllen}.)\\ Hence on each singular flat surface $(\Sigma_i,q_i)$, one now has $n$ euclidean rectangles $R_1,\ldots, R_n$ glued along the metric spine $\mathcal{G}_i$ in that cyclic order, such that the resulting polygonal boundary has all side-lengths $H_i$, except one horizontal side of length $H_i+a$ (see Figure 18; here $a$ is the desired residue at the pole).\\ We construct a half-plane surface $\Sigma_i^\prime$ by gluing in a planar end $\mathcal{P}_{H_i}$ (see also Definition \ref{defn:trunc}) which has a polygonal boundary isometric to $\partial \Sigma_i$. From our choice of lengths of the polygonal boundary, the metric residue of $\Sigma_i^\prime$ is equal to $a$. For each $i$ these planar ends are truncations at height $H_i\to \infty$ of a fixed planar end $\mathcal{P}$ of residue $a$. \subsection*{Choice of $H_i$} Recall from \S3.2 that there is a conformal map from $\mathcal{P}_{H_i} \subset \Sigma^\prime_i$ to a neighborhood $U_{H_i}$ of $0\in\mathbb{C}$ that is an isometry in the $\phi$-metric as in (\ref{eq:stan}).\\ We now observe that by Lemma \ref{lem:uh} the choice of $H_i$ in (\ref{eq:hi}) yields the following: \begin{lem}\label{lem:uhi} $D_1^\prime \cdot 2^{-i}\leq dist(0,\partial U_{H_i}) \leq D_2^\prime \cdot 2^{-i}$, where $D_1^\prime, D_2^\prime>0$ are constants independent of $i$ (they depend only on the choice of $H_0$). \end{lem} (Note that the exponent $n/2$ in (\ref{eq:hi}) is chosen precisely so that $H_i^{-2/n} = H_0^{-1}\cdot 2^{-i}$.)\\ \textit{Remark.} This choice implies the modulus of the annulus $U\setminus U_i$ in the compact exhaustion is comparable (up to a bounded multiplicative factor) to that of the annulus $\mathcal{P}_{H_0}\setminus \mathcal{P}_{H_i}$ on the half-plane surface $\Sigma_i^\prime$. This is the geometric control crucial for extracting a convergent subsequence in \textit{Step 4}. \begin{figure}[h] \centering \includegraphics[scale=0.7]{rect-surf2.png}\\ \caption{\textit{Step 2} ensures that one gets a ``rectangular" surface $\Sigma_i$ with horizontal and vertical edges as shown, that can be extended to a half-plane surface $\Sigma_i^\prime$ by gluing in a planar end. } \end{figure} \section{Step 3: A conformal limit} Here we show that the sequence $\{\Sigma_i^\prime\}_{i\geq 1}$ of half-plane surfaces constructed in the previous section has $\Sigma\setminus p$ as a conformal limit: \begin{lem}[Conformal limit]\label{lem:approx1} For all sufficiently large $i$ there exist $(1+\epsilon_i)$-quasiconformal homeomorphisms \begin{equation}\label{eq:fi} f_i:\Sigma_i^\prime \to \Sigma\setminus p \end{equation} where $\epsilon_i\to 0$ as $i\to \infty$. \end{lem} The intuition behind the proof is that since $\Sigma_i^\prime$ is obtained by excising-and-regluing disks from $\Sigma$ that get smaller as $i\to\infty$, for large enough $i$ the conformal structure is not too different, and one can construct an almost-conformal map as above. \subsection*{Quasiconformal extensions} The following lemma is a slight strengthening of the quasiconformal extension lemma proved in \cite{Gup1}.\\ Throughout, $\mathbb{D}$ shall denote the unit disk and $B(r)$ shall denote an open ball of radius $r$, centered at $0\in \mathbb{C}$.
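To orient the reader: the case where $f$ is a $(1+\epsilon)$-quasiconformal homeomorphism of the \textit{entire} disk, with the distortion control holding only off $B(r)$, is the lemma of \cite{Gup1} just referred to; the point of the version below is that $f$ need only be defined (and an embedding) on the annulus $\mathbb{D}\setminus B(r)$, provided $r\leq \epsilon$.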
\begin{lem}\label{lem:qclemnew} For any $\epsilon>0$ sufficiently small, and $0\leq r\leq \epsilon$, a map \begin{center} $f:\mathbb{D}\setminus B(r)\to \mathbb{D}$ \end{center} that\\ (1) preserves the boundary and is a homeomorphism onto its image,\\ (2) is $(1+\epsilon)$-quasiconformal on $\mathbb{D}\setminus B(r)$, \\ extends to a $(1 + C\epsilon)$-quasisymmetric map on the boundary, where $C>0$ is a universal constant. \end{lem} \begin{proof}[Sketch of the proof] In \cite{Gup1} (see Appendix A of that paper) we proved this when the map $f$ was a quasiconformal homeomorphism of the \textit{entire} disk, though it had the control on distortion only on the annulus $A = \mathbb{D}\setminus B(r)$ as in $(2)$ above. However, all that was required was the following estimate on the image of the ball $B(r)$: \begin{equation*} diam(f(B(r))) < C_1\epsilon \end{equation*} for some universal constant $C_1>0$.\\ Here, this can be replaced by the following fact: \begin{equation}\label{eq:pf1} d = diam(\mathbb{D}\setminus f(A)) < C_1\epsilon \end{equation} which follows from the modulus-estimates \begin{equation}\label{eq:pf2} \frac{1}{1+\epsilon} \leq \frac{mod(f(A))}{mod(A)} \end{equation} \begin{equation}\label{eq:pf3} mod(A) = \frac{1}{2\pi}\ln{\frac{1}{r}} \geq \frac{1}{2\pi}\ln{\frac{1}{\epsilon}} \end{equation} \begin{equation}\label{eq:pf4} mod(f(A)) < \frac{1}{2\pi} \ln{\frac{16}{d}} \end{equation} where (\ref{eq:pf2}) follows from the hypothesis $(2)$ above, (\ref{eq:pf3}) follows from the fact that $A$ is a circular annulus and $r\leq \epsilon$, and (\ref{eq:pf4}) is well-known (see III.A of \cite{Ahl}). Indeed, combining (\ref{eq:pf2}), (\ref{eq:pf3}) and (\ref{eq:pf4}) gives $\ln\frac{16}{d} > \frac{1}{1+\epsilon}\ln\frac{1}{\epsilon}$, that is, $d < 16\,\epsilon^{1/(1+\epsilon)} < C_1\epsilon$ for all sufficiently small $\epsilon$.\\ \begin{figure} \centering \includegraphics[scale=0.55]{qclem2.PNG}\\ \caption{The map $f$ in Lemma \ref{lem:qclemnew} is almost-conformal off a small sub-disk.} \end{figure} The rest of the proof is exactly the same as in \cite{Gup1}:\\ Let $\Gamma$ be the family of curves between two arcs on the boundary of $\mathbb{D}$ that avoids the set $\mathbb{D}\setminus f(A)$, which by (\ref{eq:pf1}) is of diameter $O(\epsilon)$. Then by a length-area inequality, we have the following estimate on the extremal lengths: \begin{equation} 1\leq \frac{\lambda_{f(A)}(\Gamma)}{\lambda_\mathbb{D}(\Gamma)} \leq 1 +C_2\epsilon \end{equation} where $C_2>0$ depends only on $C_1$.\\ Since this holds for any pair of arcs on the boundary $\partial \mathbb{D}$, it translates to a condition on the cross-ratios of four boundary points, and this is enough to prove that the extension of $f$ to the boundary is $(1 + C\epsilon)$-quasisymmetric, as claimed (see \cite{AB}, and \cite{Gup1} for details). \end{proof} \begin{cor}\label{cor:cor1qclem} Let $r>0$ be sufficiently small. Suppose $g:\mathbb{D}\setminus B(r) \to \mathbb{D}$ is a conformal embedding that extends to a homeomorphism of $\partial \mathbb{D}$ to $\partial \mathbb{D}$. Then there exists a $(1+\epsilon)$-quasiconformal map $f:\mathbb{D} \to \mathbb{D}$ such that the extension of $f$ to $\partial \mathbb{D}$ agrees with that of $g$, and \begin{equation}\label{eq:epr} \epsilon < 2C^\prime r \end{equation} for some universal constant $C^\prime>0$. \end{cor} \begin{proof} Since $g$ is conformal, it is also $(1+ r)$-quasiconformal. By the previous lemma, $g$ extends to a $(1+Cr)$-quasisymmetric map of the boundary, which by the Ahlfors-Beurling extension (see \cite{AB}) extends to a $(1+C^\prime r)$-quasiconformal map of the entire disk, which is our required map $f$.
\end{proof} \begin{cor}\label{cor:corqclem} Let $\epsilon>0$ be sufficiently small, and let $U_0,U$ and $U^\prime$ be conformal disks such that $U_0 \subset U$ and the annulus $A = U\setminus U_0$ has modulus larger than $\frac{1}{2\pi}\ln\frac{1}{\epsilon}$. Then for any conformal embedding $g:A\to U^\prime$ that takes $\partial U$ to $\partial U^\prime$ there is a $(1+C^\prime\epsilon)$-quasiconformal map $f:U\to U^\prime$ such that $f$ and $g$ are identical on $\partial U$. \end{cor} \begin{proof} By uniformizing, one can assume that $U=U^\prime = \mathbb{D}$ and $U_0 \subset B(r)$ where $r\leq \epsilon$ by the condition on modulus. Hence this reduces to the previous corollary. \end{proof} \subsection*{Proof of Lemma \ref{lem:approx1}} \begin{proof} Consider the rectangular subsurface $\Sigma_i\subset \Sigma_i^\prime$ and the conformal embedding \begin{equation*} g_i:\Sigma_i \to \Sigma\setminus p \end{equation*} which exists as the subsurface $\Sigma_i$ is also part of a compact exhaustion of $\Sigma\setminus p$ (see \S6).\\ By construction of $\Sigma_i^\prime$, the complement $\Sigma_i^\prime \setminus \Sigma_i$ is a planar end that is conformally a punctured disk, and by property (3) of the compact exhaustion (see the first section of \S6), so is the complement of $g_i(\Sigma_i)$ in $\Sigma\setminus p$.\\ The conformal embedding $g_i$ restricts to a conformal map on the annulus $\Sigma_i \setminus \Sigma_1$. By property (4) of the compact exhaustion (see \S6) this annulus has a modulus $M = A\cdot i$, and hence $M\to \infty$ as $i\to \infty$.\\ By an application of the quasiconformal extension in Corollary \ref{cor:corqclem} (applied with $\epsilon = e^{-2\pi M}$), one can get, for sufficiently large $i$, a $(1 + \epsilon_i)$-quasiconformal map $g_i^\prime$ from the punctured disk $\Sigma_i^\prime \setminus \Sigma_1$ to $U\setminus p$, that has the same boundary values as $g_i$ on $\partial \Sigma_1$. Here $\epsilon_i\to 0$ as $i\to \infty$. In fact, by (\ref{eq:epr}) and the fact that $M = A\cdot i$, we can derive the better estimate \begin{equation}\label{eq:epr2} \epsilon_i < B 2^{-i} \end{equation} where $B>1$ is a universal constant. This will be useful in the next section.\\ Together with the conformal map $g_i$ on $\Sigma_1$, this defines the $(1 + \epsilon_i)$-quasiconformal homeomorphism $f_i:\Sigma_i^\prime \to \Sigma\setminus p $ for all sufficiently large $i$, as required in (\ref{eq:fi}). \end{proof} \section{Step 4: A limiting quadratic differential} In this section and the next we show that after passing to a subsequence the sequence of half-plane surfaces $\Sigma_i^\prime$ converges to a half-plane surface $\Sigma_{n,a}$; by \textit{Step 3} this would be conformally equivalent to $\Sigma \setminus p$, as required. In this section we complete a preliminary step, namely we show that after passing to a subsequence the corresponding half-plane differentials converge in $\widehat{\mathcal{Q}}_m$, the subset of the bundle of meromorphic quadratic differentials that have a single pole of order exactly $n$ (see Appendix A for a discussion).\\ Recall (from \S6) that the half-plane surfaces $\Sigma_i^\prime$ are constructed by excising a disk and gluing in a planar end (a planar domain with a fixed quadratic differential). The idea is that though the glued-in disk gets smaller on the conformal surface, one can still extract some global control on the corresponding half-plane differentials (it is useful to remember that two holomorphic quadratic differentials on a closed surface are identical if they agree on any open set).
In particular, since the planar ends glued in are truncated at heights increasing at a prescribed rate that matches the rate of shrinking of the excised disks (see the final part of \S7), the restrictions of the resulting half-plane differentials to a \textit{fixed} conformal disk $U$ converge. We make this precise in the rest of this section.\\ Let $q_i^\prime$ be the half-plane differential corresponding to the half-plane structure on $\Sigma_i^\prime$ and let $U_i = f_i^{-1} (U)$ where $f_i$ is the $(1 + \epsilon_i)$-quasiconformal map in Lemma \ref{lem:approx1}. Let $\phi_i:U_i\to \mathbb{D}$ be the conformal chart mapping $p_i = f_i^{-1}(p)$ to $0$.\\ By the gluing-in construction (see \S6 and the final part of \S7), there is an open set $V_i \subset U_i \subset \Sigma_i^\prime$ containing $p_i$ which, in the metric induced by $q_i^\prime$, is isometric to a planar end $\mathcal{P}_{H_i}$. Moreover, if $\phi:(U,p)\to (\mathbb{D},0)$ is the conformal coordinate map, then $V_i = f_i^{-1}(V)$ where $V = \phi^{-1}(B(2^{-i}))$. \\ Let us recall that the planar end of $\Sigma_i^\prime$ is isometric (and hence conformally equivalent) to a neighborhood of $0\in \mathbb{C}$ equipped with the following meromorphic quadratic differential (see \S3.3): \begin{equation}\label{eq:fixqd} \left(\frac{1}{z^{n+2}} + \frac{ia}{z^{n/2 +2}}\right)dz^2 \end{equation} Hence the quadratic differential $q_i^\prime$ on $U_i$ (a subset of the planar end) is the pullback, by some conformal map, of the above fixed meromorphic quadratic differential on $\mathbb{C}$. The fact that $V_i\subset U_i$ is isometric to the truncation at height $H_i$ of a planar end moreover implies that this conformal map takes $V_i$ to the neighborhood $U_{H_i}$ of $0\in\mathbb{C}$.\\ We shall use the following criterion for convergence of meromorphic quadratic differentials (see the Appendix for a discussion of the proof, and \textit{Criterion $4^\prime$} there): \begin{lem}\label{lem:convg} Let $(\Sigma_i, U_i, p_i)$ be a sequence of marked, pointed Riemann surfaces converging to $(\Sigma, U, p)$ in the sense that there exist $(1+\epsilon_i)$-quasiconformal maps $f_i:\Sigma_i\to \Sigma$ that take $(U_i,p_i)$ to $(U,p)$, such that $\epsilon_i\to 0$ as $i\to \infty$. Assume that $\Sigma_i$ is equipped with a quadratic differential $q_i$ whose restriction to $U_i$ is the pullback by a univalent conformal map $g_i$ of a fixed meromorphic quadratic differential on $\mathbb{C}$. If the $g_i$ form a normal family, then after passing to a subsequence, $q_i\to q\in \widehat{\mathcal{Q}}_m(\Sigma)$.\\ This normality condition is satisfied if for each $i$, $g_i$ maps the subdomain $V_i\subset U_i$ to $U_{H_i}\subset \mathbb{C}$, where via the conformal identification $\phi_i:(U_i,p_i) \to (\mathbb{D},0)$, we have: \begin{equation}\label{eq:vib0} {d}2^{-i} < dist(0, \partial U_{H_i}) < D2^{-i} \end{equation} and \begin{equation}\label{eq:vib} {d}2^{-i} < dist(0, \partial \phi_i(V_i)) < D2^{-i} \end{equation} for constants $d,D>0$. \end{lem} Note that (\ref{eq:vib0}) holds by Lemma \ref{lem:uhi}. Hence to show that in our case the maps $f_i:\Sigma^\prime_i \to \Sigma\setminus p$ satisfy the conditions of the above lemma (in the notation already introduced) we only need to prove the bounds (\ref{eq:vib}). Recall here that $V_i$ is the image of a round disk via a quasiconformal map ($f_i^{-1}\circ \phi^{-1}$) of small dilatation.
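The following model case may help motivate the analytical lemma that follows, and its hypothesis (\ref{eq:rbd}) (a standard example, included only for illustration): the radial stretch $f(z) = z\lvert z\rvert^{\epsilon}$ is a $(1+\epsilon)$-quasiconformal self-map of $\mathbb{D}$ fixing $0$, and it maps $B_r$ onto $B_{r^{1+\epsilon}}$, so that \begin{equation*} dist(0, \partial f(B_r)) = r^{1+\epsilon} = r^{\epsilon}\cdot r. \end{equation*} When $r\geq \epsilon/C$, the factor $r^{\epsilon}\geq (\epsilon/C)^{\epsilon}$ is bounded below (and tends to $1$ as $\epsilon\to 0$), so this distance is comparable to $r$; on the other hand, for $r$ extremely small compared to $\epsilon$ (say $r = e^{-1/\epsilon^{2}}$) the factor $r^{\epsilon} = e^{-1/\epsilon}$ destroys any such comparison, in line with Remark 1 following the lemma.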
We start with the following more general analytical lemma: \begin{lem}\label{lem:jyva} Let $\epsilon>0$ be sufficiently small, and let $r$ satisfy \begin{equation}\label{eq:rbd} r\geq \frac{\epsilon}{C} \end{equation} for some constant $C>1$. Let $f:\mathbb{D}\to\mathbb{D}$ be a $(1+ \epsilon)$-quasiconformal map such that $f(0)=0$. Let $V = f(B_r)$ be the image of the subdisk of radius $r$ centered at $0$. Then we have \begin{equation}\label{eq:contain} \frac{r}{D} \leq dist(0,\partial V) \leq 32r \end{equation} for some universal constant $D>0$. \end{lem} \begin{proof} A $K$-quasiconformal self-map of the disk is H\"{o}lder-continuous, with coefficient 16 and exponent $1/K$ (see \cite{Ahl1}). So for any $z\in \partial B_r$ we have: \begin{equation}\label{eq:hold} \left| f(z)\right| \leq 16 r^{1/(1+\epsilon)} < 16 r^{1-\epsilon} =16r^{-\epsilon} \cdot r \leq 16\, C^{\epsilon}\epsilon^{-\epsilon} \cdot r < 32r \end{equation} for sufficiently small $\epsilon$ (since $C^\epsilon\to 1$ as $\epsilon\to 0$ and $\epsilon^{-\epsilon} \leq e^{1/e} < 1.45$). \\ This gives the inequality on the right in (\ref{eq:contain}).\\ For the left inequality of (\ref{eq:contain}) let $z\in \partial B_r$, that is, $\left| z\right| =r$. Then the H\"{o}lder continuity of $f^{-1}:\mathbb{D}\to \mathbb{D}$ yields: \begin{equation*} r = \left|z\right| \leq 16\left| f(z) - f(0) \right|^{1/(1+\epsilon)} = 16\left| f(z)\right|^{1/(1+\epsilon)} \end{equation*} and hence we have: \begin{equation*} \left| f(z)\right| \geq \left(\frac{r}{16}\right)^{(1+\epsilon)} \geq r \cdot \frac{\epsilon^\epsilon}{C^\epsilon 16^{1+\epsilon}} \end{equation*} where we have used (\ref{eq:rbd}) for the last inequality.\\ It is easy to verify that for sufficiently small $\epsilon$, we have: \begin{equation*} \frac{\epsilon^\epsilon}{C^\epsilon16^{1+\epsilon}} \geq \frac{1/2}{2 \cdot 16^{3/2}} \end{equation*} and hence we can take $D = 256$ in (\ref{eq:contain}). \end{proof} \textit{Remarks.} 1. The condition (\ref{eq:rbd}) is necessary, as in general quasiconformal maps are H\"{o}lder continuous with exponent less than $1$ and no better, and (\ref{eq:contain}) then fails for small $r$.\\ 2. The left inequality can be thought of as a quasiconformal version of the Koebe one-quarter theorem, and it would be interesting to give a better estimate of the constant $D$. \begin{prop}[Step 4]\label{prop:step4} There is a subsequence of $\{q_i^\prime\}_{i\geq 1}$ that converges to a meromorphic quadratic differential $q$ on $\Sigma$ with a pole of order $n$ and residue $a$ at $p$. \end{prop} \begin{proof} We recapitulate some of the previous discussion in this section:\\ In the construction of the half-plane surface $\Sigma_i^\prime$, one excises a subdisk $V_i\subset U$ and glues it back by a different quasisymmetric map $w$ of a circle (the boundary extension of the conformal map uniformizing $\Sigma\setminus U$ to the rectangular surface $\Sigma_i$) to form a new conformal surface. The conformal structure on the resulting disk $U_i = (U\setminus V_i) \cup_w V_i$ now admits a uniformizing map $\phi_i:(U_i,p_i)\to (\mathbb{D}, 0)$, and in the quadratic differential metric the disk $\phi_i(U_i)$ is isometric to a subset of a planar end of residue $a$. It follows that the quadratic differential on $\phi_i(U_i)$ is a pullback of the fixed differential (\ref{eq:fixqd}) on $\mathbb{C}$ (with a pole at $0$) via a univalent conformal map.
By construction, $U_{H_i}$ corresponds to the subdomain $\phi_i(V_i)$ via this map.\\ We shall verify the convergence criterion of Lemma \ref{lem:convg}. Recall that $\phi: (U,p)\to (\mathbb{D}, 0)$ was the uniformizing map for the (fixed) pointed disk $(U, p)$ on $\Sigma$, and $f_i:(\Sigma_i,U_i)\to (\Sigma,U)$ was a $(1+\epsilon_i)$-quasiconformal map (Lemma \ref{lem:approx1}). Hence the map $f=\phi_i\circ f_i^{-1}\circ \phi^{-1}:\mathbb{D}\to \mathbb{D}$ is a $(1+ \epsilon_i)$-quasiconformal map, and since the disk excised at the beginning of the construction was $\phi^{-1}(B(2^{-i}))$, the image of the disk $B(2^{-i})$ under $f$ is $\phi_i(V_i)$. \\ From (\ref{eq:epr2}) in the proof of Lemma \ref{lem:approx1}, we have that: \begin{equation} \epsilon_i < B 2^{-i} \end{equation} for some constant $B>1$, and hence the condition (\ref{eq:rbd}) of Lemma \ref{lem:jyva} is satisfied (here $r=r_i = 2^{-i}$ and $\epsilon=\epsilon_i$). Applying Lemma \ref{lem:jyva} to the map $f$, we then get the distance bounds (\ref{eq:vib}). The bounds (\ref{eq:vib0}) also hold by Lemma \ref{lem:uhi}, as noted previously. \\ Applying Lemma \ref{lem:convg} to the sequence of half-plane surfaces $\Sigma_i^\prime$ with their corresponding half-plane differentials $q_i^\prime$, we conclude that there is a limiting meromorphic quadratic differential $q\in \widehat{\mathcal{Q}}_m(\Sigma)$. \end{proof} \section{Step 5: A limiting half-plane surface} The set $\mathcal{Q}_{\mathcal{D}^\prime}$ of half-plane differentials associated with the local data \begin{center}\label{eq:data} $\mathcal{D}^\prime = \{ (n_j, a_j)\ |\ n_j\in \mathbb{N},\ n_j\geq 4,\ a_j\in \mathbb{R}_{\geq 0},\ \text{where } a_j=0 \text{ for } n_j \text{ odd}\}$ \end{center} of orders of poles and residues at the marked points, is a subset of $\widehat{\mathcal{Q}}_m$. In this section we shall prove that this subset is closed. To simplify the discussion, we shall continue to consider the case of a \textit{single} pole of order $n$ and residue $a$. The proof of the general case follows by an easy extension of the arguments. \begin{thm}\label{thm:cpct} Let $\Sigma_i$ be a sequence of half-plane surfaces whose corresponding half-plane differentials $q_i \in \mathcal{Q}_{\mathcal{D}^\prime}$ converge to $q \in \widehat{\mathcal{Q}}_m$. Then $q$ is a half-plane differential in $\mathcal{Q}_{\mathcal{D}^\prime}$. \end{thm} This together with Proposition \ref{prop:step4} shall complete the proof of the following: \begin{prop}\label{prop:step5} The convergent subsequence of $\{q_i^\prime\}_{i\geq 1}$ converges to a half-plane differential $q$ on $\Sigma$ with a pole of order $n$ and residue $a$ at $p$. \end{prop} \subsection*{Proof of Theorem \ref{thm:cpct}} Consider a family of half-plane differentials $q_i \in \mathcal{Q}_{\mathcal{D}^\prime}$ that converge in $\widehat{\mathcal{Q}}_m$. The goal is to show that after passing to a subsequence the corresponding sequence of half-plane surfaces converges geometrically to a half-plane surface. This shall be done in this section by considering the \textit{metric spines} (see Definition \ref{defn:mspine}) of the sequence and constructing the limiting half-plane surface from the limiting metric spine (Definition \ref{defn:limit}). The geometric convergence is not quite a metric or biLipschitz one, as edges of the spines might collapse.
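As a toy illustration of what must be ruled out (a hypothetical example, not part of the construction): consider metric structures on a fixed ``theta" graph in which one of the three edges has length $1/i$ and the other two have length $1$. Collapsing the short edge in the limit identifies its two endpoints, but the limiting graph is still homotopy equivalent to the theta graph, since the collapsed edge was a tree. If instead an entire cycle had length tending to $0$, the rank of the fundamental group would drop in the limit, and the limiting graph could not be a spine of the same surface; ruling this out is the content of Lemma \ref{lem:forest} below.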
A crucial observation is that, by the assumption of convergence in $\widehat{\mathcal{Q}}_m$, the embeddings of the metric spines in the surface have uniformly bounded twisting (Lemma \ref{lem:notwist}) and cycles do not collapse (Lemma \ref{lem:forest}), and hence the limiting metric spine yields the same marked topological surface. The proof is completed by showing this limiting half-plane surface is indeed a \textit{conformal} limit of the sequence (Lemma \ref{lem:approx2}), by building quasiconformal maps whose dilatation tends to $1$. \subsubsection*{Spines} \begin{defn}\label{defn:tspine} A \textit{topological spine} on a surface $S$ with a set of punctures $P$ is an embedded graph that $S\setminus P$ deform-retracts onto. Moreover, we assume each vertex other than the punctures has valence at least three, so that there are no unnecessary vertices. \end{defn} \begin{defn}\label{defn:mspine} Associated to the half-plane differential $q$ on $\Sigma$ is its \textit{metric spine} $\mathcal{G}(q)$, which is the metric graph on the half-plane surface obtained from the boundaries of the half-planes after identifications. This is a topological spine as in the previous definition; the retraction can be defined by collapsing along the vertical rays on each half-plane. \end{defn} The metric spines $\mathcal{G}_i = \mathcal{G}(q_i)$ are all spines of $\Sigma \setminus P$, and there are only finitely many isomorphism types of such graphs (each vertex having valence at least three). After passing to a subsequence, one can therefore assume that they are isomorphic. Let $\mathcal{G}_{top}$ denote this fixed finite graph, so that each $\mathcal{G}_i$ is just an assignment of lengths to its set of edges $\mathcal{E}$. By passing to a further subsequence we can assume that the edge-lengths $l_i(e)$ for $e\in \mathcal{E}$ converge in $[0,\infty]$ to limits $\{l(e)\}_{e\in \mathcal{E}}$. \begin{defn}\label{defn:collocus} The \textit{collapsing locus} $\mathcal{C}$ of the sequence is the set of edges of $\mathcal{G}_{top}$ whose edge-lengths tend to zero, and the \textit{diverging locus} $\mathcal{D}$ of the sequence is the set of edges whose lengths tend to infinity. \end{defn} The goal of the next sections is to show that in fact, after passing to a subsequence, the \textit{embeddings} of these spines can also be assumed to be the same up to isotopy, that is, the metric spines are identical as \textit{marked} graphs on the surface. From this it will follow that the collapsing locus $\mathcal{C}$ has no cycles and $\mathcal{D}$ is empty (Lemma \ref{lem:forest}). \subsubsection*{Bounded twisting and no collapsing cycles} \begin{defn}[Twist] Let $S$ be a surface and $A\subset S$ an annulus with core a non-trivial simple closed curve $\gamma$. For an embedded arc $\tau$ between the boundary components of $A$, the \textit{twist} of $\tau$ around $\gamma$ relative to $A$ is an integer denoting the number of times $\tau$ goes around $\gamma$ in $A$, up to isotopy fixing the endpoints $\tau \cap \partial A$. (We ignore signs by taking an absolute value.)
\end{defn} \textit{Remark.} The above twist can be thought of as the distance in the curve complex of the annulus $A$ (see also \S3 in \cite{Min2}).\\ The following notion is to ensure that \textit{all} non-trivial twists about the core curve are captured in the annulus in the above definition: \begin{defn}[Maximal annulus] Given a non-trivial simple closed curve $c$ on a surface, its associated \textit{maximal annulus} $A(c)$ is an embedded open annulus with core curve $c$ such that the complement of its closure is either empty, or has components which are either disks or once-punctured disks. \end{defn} \textit{Remark.} An example is the maximal embedded annulus realizing the extremal length of $c$; the complement of its interior is a graph on the surface. In our case, we shall embed the annulus away from disks around the punctures, which gives the once-punctured disks in its complement. \begin{lem}\label{lem:dtwist} Let $D_c:S\to S$ denote the Dehn twist around a simple closed curve $c$, and let $M$ be a positive integer. Then for any maximal annulus $A(c)$ and any simple closed curve $\gamma$ that intersects $c$, the twist of some component arc of $D^n_c(\gamma) \cap A(c)$ about $c$ is greater than $M$ for all $n$ sufficiently large. \end{lem} \begin{proof} It suffices to show that the ``twists" of the arcs $D^n_c(\gamma) \cap A(c)$ about $c$ are supported in the interior of $A(c)$, that is, cannot be isotoped away from the annulus. This holds because $A(c)$ is maximal, that is, the complement of its interior comprises closed disks connected by arcs. The image curve $D^n_c(\gamma)$ being embedded cannot run along these arcs more than once, and any twisting in the interior of the disks can be isotoped to be trivial.\end{proof} The following finiteness result is well-known. In the statement, ``sufficiently large" can be taken to mean a finite set of curves consisting of a complete marking (a maximal set of pants curves together with curves intersecting each), or alternatively, the Humphries generators for the mapping class group of $S$. (We denote the collection by $\mathcal{X}$, to avoid confusion with the collapsing locus $\mathcal{C}$.) \begin{lem}\label{lem:toplem1} Let $\mathcal{X}$ be a sufficiently large collection of simple closed curves on a surface $S$. For each $N>0$, the set \begin{center} $\mathcal{S} = \{ \gamma$ is a simple closed curve $|$ Each component of $\gamma \cap A(c)$ has twist less than $N$ around $c$, for each $c\in \mathcal{X}\}$ \end{center} is a finite set. \end{lem} \begin{proof} There are finitely many curves $\gamma_1, \gamma_2,\ldots, \gamma_k$ on $S$ up to the action of the mapping class group $MCG(S)$, which is virtually generated by Dehn twists around $\mathcal{X}$. By Lemma \ref{lem:dtwist}, powers of a Dehn twist around $c \in \mathcal{X}$ increase the twist of some component of $\gamma_j \cap A(c)$ around $c$, and hence by the condition that twists are bounded there are only finitely many mapping classes $g_1,\ldots, g_s$ such that $g_i\cdot \gamma_j \in \mathcal{S}$ (where $1\leq i\leq s$, and $1\leq j\leq k$). Hence $\mathcal{S}$ is finite. \end{proof} As a consequence we have the finiteness of spines with a similar ``bounded twisting" condition: \begin{lem}\label{lem:toplem2} Let $\mathcal{X}$ be a sufficiently large collection of simple closed curves on a surface $S$. For each $N>0$, the set \begin{center} $\mathcal{M} = \{ m$ is a topological spine for $S\setminus P$ $|$ Each component of $e\cap A(c)$ has twist less than $N$ around $c$, for each edge $e\in m$ and $c\in \mathcal{X}\}$ \end{center} is a finite set.
\end{lem} \begin{proof} It suffices to show that each cycle in a spine in $\mathcal{M}$ corresponds to finitely many possible homotopy classes of curves on the surface. Since there is a uniformly bounded number of edges in the spine (depending only on the topology of the surface), and each edge of the cycle has bounded twisting around each $c\in\mathcal{X}$, so does the embedded cycle on the surface, and the finiteness follows from the previous lemma. \end{proof} Consider now the sequence of half-plane surfaces $\Sigma_i$ and the metric spines $\mathcal{G}_i$. \begin{lem}\label{lem:apriorib} There exists a choice of disk neighborhoods $U_i\subset \Sigma_i$ around the pole of $q_i$, for each $i\geq 1$, such that for the sequence of singular flat surfaces $S_i = \Sigma_i \setminus U_i$ we have:\\ (1) The sequence of areas $Area(S_i) = \int\limits_{S_i} \lvert q_i\rvert$ is uniformly bounded from above.\\ (2) For any simple closed curve $\gamma$ on $S$, the length in the $q_i$-metric of any curve in $S_i$ homotopic to $\gamma$ is uniformly bounded from below. \end{lem} \begin{proof} Since the meromorphic quadratic differentials $q_i$ converge in $\widehat{\mathcal{Q}}_m$, the underlying conformal structures converge. Choose a disk $U$ around the pole in $\Sigma$, and for a choice of $\epsilon>0$, fix a sequence of $(1+ \epsilon)$-quasiconformal maps $f_i:\Sigma_i\to \Sigma$ preserving the puncture. Set $U_i = f_i^{-1}(U)$. By the convergence of $q_i$, the area or $L^1$-norm of $q_i$ on $S_i$ converges to $Area(\Sigma\setminus U)$, and hence we have statement (1) above.\\ Also, by the convergence, the singular flat $q_i$-metrics lie in a compact set, and hence so do the lengths of the geodesic representatives of a fixed simple closed curve $\gamma$ on $\Sigma_i$. (By properties of non-positive curvature, such a geodesic representative is unique except in the case when the representatives sweep out a flat annulus, in which case their lengths are all the same.) This implies that the length of any curve in $S_i$ homotopic to $\gamma$ is uniformly bounded from below, which is statement (2). \end{proof} \begin{lem}\label{lem:notwist} For any simple closed curve $\gamma$, consider a maximal embedded annulus $A_i(\gamma)$ on $S_i$. Then for any edge $e$ of the spine $\mathcal{G}_i$, each component of $e\cap A_i(\gamma)$ has uniformly bounded length as $i\to \infty$. Moreover, $A_i(\gamma)$ is a maximal annulus on $\Sigma_i$ and $e\cap A_i(\gamma)$ has uniformly bounded twist about $\gamma$. \end{lem} \begin{proof} The horizontal edge $e$ from the spine twists across the annular region $A_i(\gamma)$. Suppose that the number of twists is large. For each point in $e\cap A_i(\gamma)$ sufficiently in the middle, there is a vertical segment into an adjacent half-plane which, because of the twisting, has length at least the circumference of $A_i(\gamma)$ before it can escape the annulus (see Figure 20).\\ \begin{figure}[h] \centering \includegraphics[scale=0.8]{notwist1.png}\\ \caption{A collar about the metric spine eating into an adjacent half-plane (shown on the left) embeds in $A_i(\gamma)$, contributing to area.} \end{figure} This circumference is bounded below by (2) of Lemma \ref{lem:apriorib}, and hence this sweeps out a definite metric collar in $A_i(\gamma)$ around the spine, which contributes area proportional to the length of $e\cap A_i(\gamma)$.
On the other hand, by (1) of Lemma \ref{lem:apriorib} the areas of $A_i(\gamma)$ (which are less than $Area(S_i)$) remain uniformly bounded, and hence so do the lengths of $e\cap A_i(\gamma)$. Again by the uniform lower bound on the circumferences of $A_i(\gamma)$, this implies that the twisting of $e$ around $\gamma$ cannot tend to infinity (each twist adds a length of at least half the circumference). Since the complement of $S_i$ is a punctured disk $U_i$, an annulus $A_i(\gamma)$ that is maximal on $S_i$ is also maximal on $\Sigma_i$. \end{proof} The following is now immediate from Lemma \ref{lem:toplem2}: \begin{cor}\label{cor:marking} After passing to a further subsequence, we can assume that the metric graphs $\mathcal{G}_i$ are isomorphic as marked spines on the surface. In particular, a cycle in the graph corresponds to the same homotopy class of a closed curve throughout the sequence. \end{cor} \begin{lem}[No collapsing cycles]\label{lem:forest} $\mathcal{C}$ is a forest, that is, it contains no cycle, and $\mathcal{D}$ is empty. \end{lem} \begin{proof} The meromorphic quadratic differentials $q_i$ lie in a compact set $K$ of $\widehat{\mathcal{Q}}_m$ since they form a convergent sequence. By Corollary \ref{cor:marking}, after passing to a subsequence a cycle in the metric graph $\mathcal{G}_i$ corresponds to a (fixed) non-trivial curve in $\Sigma$ whose lengths in the singular flat $q_i$-metric must have a uniform lower bound by the compactness of $K$. Hence no cycle can consist entirely of collapsing edges, that is, $\mathcal{C}$ is a forest. By the uniform (upper) length bound of Lemma \ref{lem:notwist} there cannot be an edge whose lengths tend to infinity, so $\mathcal{D}$ is empty.\end{proof} Recall from the discussion preceding Definition \ref{defn:collocus} that the finite edge-lengths of the metric spines converge after passing to a subsequence. Let $\mathcal{G}$ be the metric graph obtained by assigning this length $l(e)$ to every edge $e\in \mathcal{G}_{top}$, where it is understood that each component tree of the collapsing locus $\mathcal{C}$ is collapsed to a single vertex. The previous lemma ensures that $\mathcal{G}$ has the same topological type, that is, remains homotopy equivalent to $\mathcal{G}_{top}$. \begin{figure}[h] \centering \includegraphics[scale=0.5]{collapse.png}\\ \caption{The edges in $\mathcal{C}$ (shown in bold on the left) collapse along the sequence of graphs $\mathcal{G}_i$. } \end{figure} \begin{defn}[Limiting half-plane surface]\label{defn:limit} The half-plane surface $\Sigma_{\mathcal{G}}$ is defined to be the one obtained by gluing in $n$ half-planes along the metric graph $\mathcal{G}$ in the combinatorial order identical to the gluing of the half-planes along the metric spine of each $(\Sigma_i, q_i)$. (Here we assume we have passed to a subsequence where the metric spines are identical as marked graphs.) \end{defn} \subsubsection*{Proving $\Sigma_\mathcal{G}$ is the conformal limit} \begin{lem}\label{lem:approx2} For all sufficiently large $i$ there exist $(1+\epsilon_i)$-quasiconformal maps \begin{equation*} \overline{h_i}:\Sigma_i \to \Sigma_\mathcal{G} \end{equation*} where $\epsilon_i\to 0$ as $i\to \infty$.
\end{lem} We start by observing that a map that collapses a short segment of the boundary of a half-plane $\mathbb{H}$ can be extended to a map of the half-plane that is almost-isometric away from a suitable neighborhood: \begin{lem}\label{lem:extend} For any $\epsilon>0$ there exists a $\delta >0$ such that a map $h:\partial \mathbb{H}\to \partial \mathbb{H}$ that collapses an interval $I$ of length $\epsilon$ and is an isometry on its complement, extends to a homeomorphism $\overline{h}:\mathbb{H}\to \mathbb{H}$ that is an isometry away from a $\delta$-neighborhood of $I$. Moreover, one can choose $\delta$ such that $\delta\to 0$ as $\epsilon\to 0$. \end{lem} \begin{proof} One can in fact choose $\delta = \epsilon$, and the extension to be height-preserving, as follows. Let $\mathbb{H} = \{(x,y)|$ $y>0\}$ and $I = [-\epsilon/2,\epsilon/2]\times\{0\}$. Choose a ``bump" function $\phi(x,y)$ that vanishes on $I$, equals $1$ outside the rectangle $R= [-\epsilon,\epsilon]\times[0,\epsilon]$, and is positive in the complement of $I$. This function $\phi(x,y)$ is the ``dilatation factor" of a horizontal stretch map \begin{equation*} h(x,y) = \left(\phi(x,y)x, y\right) \end{equation*} that is the required extension. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:approx2}] For any $\epsilon>0$, we shall construct a $(1+\epsilon)$-quasiconformal map from $\Sigma_i$ to $ \Sigma_\mathcal{G}$ for all sufficiently large $i$. The construction is in two steps: in the first step we map to an intermediate half-plane surface $\Sigma_{\mathcal{G}^{\prime}_i}$.\\ \textit{Step I.} Let $\mathcal{E}$ be the set of edges in $\mathcal{G}_{top}$. The cardinality $\vert \mathcal{E}\vert$ is finite, a number depending only on the genus of $\Sigma$. Recall also (see Definition \ref{defn:collocus}) that $\mathcal{C} \subset \mathcal{E}$ is the sub-graph consisting of edges whose lengths along the sequence $\mathcal{G}_i$ tend to zero.\\ The lengths of the edges of $\mathcal{E}\setminus \mathcal{C}$ in $\mathcal{G}_i$, however, converge to the \textit{positive} lengths of the corresponding edges in $\mathcal{G}$. Hence for all $e\in \mathcal{E}\setminus \mathcal{C}$ we have: \begin{equation}\label{eqn:ratio} r_i(e)= \frac{l_i(e)}{l(e)} \to 1 \end{equation} as $i\to \infty$.\\ Consider the metric graph $\mathcal{G}^{\prime}_i$ obtained by assigning the following lengths to the edges of $\mathcal{G}_{top}$: $l(e)$ to all edges in $\mathcal{E}\setminus \mathcal{C}$ and $l_i(e)$ to all edges in $\mathcal{C}$. We can construct a $K^1_i$-biLipschitz map \begin{equation}\label{eq:h1i} h^1_i:\mathcal{G}_i\to \mathcal{G}^{\prime}_i \end{equation} that preserves vertices, is a linear stretch map on the edges in $\mathcal{E}\setminus \mathcal{C}$, and is an isometry on every other edge. By (\ref{eqn:ratio}), the stretch factors, and therefore the biLipschitz constants $K^1_i$, tend to $1$ as $i\to \infty$.\\ Note that any $K$-biLipschitz map \begin{equation*} b:\mathbb{R}\to \mathbb{R} \end{equation*} can be extended to a $K$-biLipschitz map $\bar{b}:\mathbb{H}\to\mathbb{H}$ of the upper half-plane (here $\mathbb{R} = \partial \mathbb{H}$) by mapping \begin{equation*} (x,y) \mapsto (b(x), y).
\end{equation*} Applying this to the map (\ref{eq:h1i}) above, we get, for all sufficiently large $i$, a $(1+\epsilon)$-biLipschitz extension \begin{equation}\label{eq:ext1} \overline{h^1_i}:\Sigma_i\to \Sigma_{\mathcal{G}^{\prime}_i} \end{equation} where $ \Sigma_{\mathcal{G}^{\prime}_i}$ is the half-plane surface obtained by gluing half-planes along $\mathcal{G}^{\prime}_i$.\\ \textit{Step II.} Note that the graph $\mathcal{G}$ is obtained by collapsing all the edges of $\mathcal{C}$ in $\mathcal{G}^{\prime}_i$. Let $E$ be the set of finite-length edges of $\mathcal{E}\setminus \mathcal{C} $ and consider the minimum length \begin{equation*} c = \min\limits_{e\in E} l(e) \end{equation*} if $E$ is non-empty, and set $c=2$ otherwise.\\ \begin{figure}[h] \centering \includegraphics[scale=0.75]{collapse2.png}\\ \caption{In \textit{Step II} the map between half-plane surfaces that is an isometry away from a neighborhood of $\mathcal{C}$ is finally adjusted to an almost-conformal map. } \end{figure} If $\mathcal{C}_i$ denotes the metric subgraph corresponding to $\mathcal{C}$ in $\mathcal{G}^{\prime}_i$, recall we have \begin{equation}\label{eqn:seq} diam(\mathcal{C}_i) \to 0 \end{equation} as $i\to\infty$.\\ Since $\mathcal{C}_i$ is a forest by Lemma \ref{lem:forest}, for sufficiently large $i$ we have:\\ (1) For each half-plane $\mathbb{H}$ of the half-plane surface $ \Sigma_{\mathcal{G}^{\prime}_i}$ the intersection $\partial \mathbb{H} \cap \mathcal{C}_i$ is a union of segments, each of length less than $\epsilon$ and separated by a distance at least $c$.\\ (2) The $c/2$-neighborhood $N_{c/2}$ of $\mathcal{C}_i$ is topologically a union of disks, one for each component tree.\\ Moreover, we may assume that $\epsilon$ was chosen small enough that the corresponding $\delta <c/2$, where $\delta$ is as in Lemma \ref{lem:extend}. By an application of that lemma on each half-plane, one can build a homeomorphism \begin{equation}\label{eq:h1i0} \overline{h^\prime_i}:\Sigma_{\mathcal{G}^{\prime}_i}\to \Sigma_\mathcal{G} \end{equation} that is an isometry (and hence conformal) away from a $\delta$-neighborhood $N_\delta$ of $\mathcal{C}_i$.\\ By observation (2) above each component of $N_{c/2}\setminus N_\delta$ is topologically an annulus, and has modulus that tends to infinity as $\delta \to 0$. Hence for sufficiently large $i$, one can apply the quasiconformal extension of Corollary \ref{cor:corqclem} to adjust the map $\overline{h^\prime_i}$ in the component disks of $N_\delta$ to obtain a $(1 + C^\prime\epsilon)$-quasiconformal homeomorphism \begin{equation}\label{eq:h1i1} \overline{h^2_i}:\Sigma_{\mathcal{G}^{\prime}_i}\to \Sigma_\mathcal{G} \end{equation} Recall the map (\ref{eq:ext1}) from \textit{Step I}. The composition \begin{equation}\label{eq:h1i2} \overline{h_i} =\overline{h^2_i}\circ \overline{h^1_i}:\Sigma_i \to \Sigma_\mathcal{G} \end{equation} is then $(1+ C^{\prime\prime}\epsilon)$-quasiconformal, for some universal constant $C^{\prime\prime}>0$, as required.
\end{proof} \section{Step 6: Prescribing the leading order term} From Lemmas \ref{lem:approx1} and \ref{lem:approx2} we now have: \begin{prop}\label{prop:qcseq} The quasiconformal maps $g_i\circ f_i^{-1}: \Sigma \setminus p \to \Sigma_\mathcal{G}$ have quasiconformal dilatation that tends to $1$ as $i\to \infty$, and hence, after passing to a subsequence, limit to a conformal homeomorphism \begin{center} $g:\Sigma \setminus p \to \Sigma_\mathcal{G}$ \end{center} in the sense of uniform convergence on compact sets. Here, $\Sigma_\mathcal{G} = \Sigma_{n,a}$, that is, it is a half-plane surface with a pole of order $n$ and residue $a$. Moreover, $g$ is homotopic to the identity map. \end{prop} \begin{proof} The quasiconformal maps extend to maps between the closed surfaces, mapping $p$ to $\infty$. The uniform convergence is a standard application of the compactness of a family of quasiconformal maps with fixed domain and target, and bounded dilatation. Since the quasiconformal dilatations of $f_i$ and $g_i$ tend to $1$ as $i\to \infty$, the limiting homeomorphism is $1$-quasiconformal, and hence conformal.\\ By construction (see Proposition \ref{prop:step4}), the limiting half-plane differential has a pole of order $n$ and residue $a$. Inspecting the construction of the quasiconformal homeomorphisms $f_i$ and $g_i$, we observe that both are homotopic to the identity ($f_i$ is a quasiconformal map on a disk together with the identity map on its complement, and $g_i$ restricts to a homotopy equivalence of the metric spines). Hence so is each homeomorphism $g_i\circ f_i^{-1}$, and the limit $g$. \end{proof} To complete the proof of Theorem \ref{thm:main} (in the case of a single marked point $p$) it only remains to show that the half-plane differential has a leading order term $c$ with respect to the fixed choice of coordinate neighborhood $U$ around $p$. By Lemma \ref{lem:pullb} it will be enough to show that the above conformal map $g$ has a derivative of suitable magnitude with respect to this conformal neighborhood. This is where a suitable choice of the constant $H_0$ in (\ref{eq:hi}) will be made.\\ As before let $\phi:U\to \mathbb{D}$ be a conformal homeomorphism such that $\phi(p)=0$. The image $g(U)$ is a subset of a planar end, which is identified with the complement of a compact set in $\mathbb{C}$. As in \S3.3, by an inversion map, one has a conformal homeomorphism $\psi:g(U) \to V$ that takes $\infty$ to $0$, where $V$ is a simply connected domain in $\mathbb{C}$ containing $0$. In the rest of this section we shall show: \begin{prop}\label{prop:deriv} There is a choice of $H_0$ in (\ref{eq:hi}) for which the conformal map $G = \psi\circ g\circ\phi^{-1}:\mathbb{D}\to \mathbb{C}$, which takes $0$ to $0$, has derivative $\left| G^\prime(0)\right| = c^{-\frac{1}{n-2}}$. For this $H_0$, the leading term of the half-plane differential for $\Sigma_\mathcal{G}$ is $c$ with respect to the coordinate chart $U$.
\end{prop} The proof of this needs the following analytical lemma: \begin{lem}\label{lem:qclem2} Let $f_i:\mathbb{D}\to \mathbb{C}$ be a sequence of quasiconformal embeddings such that:\\ (1) $f_i(0)=0$ for all $i$,\\ (2) $f_i$ is $(1+\epsilon_i)$-quasiconformal, where $\epsilon_i\to 0$ as $i\to \infty$, and\\ (3) for some sequence $r_i\to 0$ we have that $f_i(B_{r_i}) = V_i$, where $V_i$ is an open simply connected domain containing $0\in \mathbb{C}$ having a uniformizing map $\phi_i:(V_i,0) \to (\mathbb{D},0)$, such that: \begin{equation}\label{eq:dl} \left|r_i \phi_i^\prime(0)\right| \to \alpha \end{equation} as $i\to \infty$.\\ Then after passing to a subsequence, $f_i$ converges uniformly to $f$, a univalent conformal map such that $\left|f^\prime(0)\right| = 1/\alpha$. \end{lem} \begin{proof} It is a standard fact that a sequence of $K$-quasiconformal self-maps of $\hat{\mathbb{C}}$, normalized by the additional requirement that they fix the two points $0$ and $\infty$, forms a sequentially compact family with respect to uniform convergence. This is satisfied by the family $\{f_i\}$ for each $K>1$, and hence there is a limiting \textit{conformal} map $f$ as required, which is either univalent or constant. It only remains to show that $\left|f^\prime(0)\right| = 1/\alpha$; this will also rule out the latter possibility.\\ Consider the conformal dilation $\psi_i:\mathbb{D} \to B_{r_i}$ where $\psi_i(z) = r_iz$. Note that $\psi_i^\prime(0) = r_i$. Then the composition $F_i=\phi_i\circ f_i\circ \psi_i:\mathbb{D}\to \mathbb{D}$ is $(1 + \epsilon_i)$-quasiconformal, where $\epsilon_i\to 0$ as $i\to \infty$ by property (2) above. The compactness result mentioned above (see also Theorem 1 of \cite{Ahl}) implies that there is a subsequence that converges uniformly to a conformal map $F$. Moreover, since each map along the sequence preserves the point $0\in \mathbb{D}$, so does the limit, and by the Schwarz lemma, $F$ is a rotation; in particular \begin{equation}\label{eq:psip} \left| F^\prime(0) \right|=1. \end{equation} If each $f_i$ were differentiable at $0$, and the sequence of derivatives converged to the derivative of the limit, we would have by the chain rule: \begin{equation*} \left| F^\prime(0) \right| = \lim\limits_{i\to \infty} \left| (\phi_i\circ f_i\circ \psi_i)^\prime(0) \right| = \lim\limits_{i\to \infty} \left| \phi_i^\prime(0) \right| \cdot \left| {f_i}^\prime(0) \right| \cdot r_i= \alpha \left| f^\prime(0) \right| \end{equation*} where the last equality is from (\ref{eq:dl}). By (\ref{eq:psip}) the argument would be complete, namely $\left|f^\prime(0)\right| = 1/\alpha$ as desired. However, the maps $f_i$ are merely quasiconformal, and may not be differentiable at $0$. Nevertheless, they have total derivatives that are defined almost everywhere and locally integrable, and that converge in norm to the derivative of the limit. So we run the above argument with the \textit{averages} ($L^1$-norms) of the total derivatives in a sequence of shrinking disks around $0$:\\ Let $B_\delta$ be a disk around $0$ of radius $\delta>0$ (sufficiently small). For a conformal or quasiconformal map $F$ defined on $\mathbb{D}$ we have a nonnegative real number \begin{equation*} D_{avg}^\delta F = \frac{1}{Area(B_\delta)}\int\limits_{B_\delta} \lvert DF(z)\rvert dzd\bar{z}.
\end{equation*} In what follows we shall pass to a converging subsequence wherever needed.\\ The sequence of quasiconformal maps $F_i:\mathbb{D}\to \mathbb{D}$ converges uniformly to a rotation, and hence \begin{equation} \label{eq:davg1} D_{avg}^\delta F_i\to 1 \end{equation} as $i\to \infty$.\\ Also, the sequence of quasiconformal maps $f_i:\mathbb{D}\to\mathbb{C}$ converges uniformly to a conformal map $f$, and this implies that \begin{equation}\label{eq:davg2} D_{avg}^\delta f_i \to \left|f^\prime(0)\right| \end{equation} as $i\to \infty$ and $\delta\to 0$.\\ Further, for the univalent conformal map $\phi_i:V_i\to \mathbb{D}$, one has that \begin{equation}\label{eq:phiex} \phi_i^\prime (z) = \phi_i^\prime (0) + Az + O(z^2) \end{equation} where $\left|A\right|$ is bounded above by a universal constant (a standard fact in the theory of univalent mappings).\\ Since quasiconformal maps are differentiable almost everywhere, we have by the chain rule that \begin{equation}\label{eq:break} \left| DF_i(z) \right| = \left| D\phi_i(z)Df_i(z) D\psi_i(z) \right| = r_i \left| Df_i(z) \right| \left|\phi_i^\prime(z)\right| \end{equation} for $z\in B_\delta^\prime$, a full-measure subset of $B_\delta$.\\ By (\ref{eq:dl}), for any $\epsilon>0$ there is $i$ sufficiently large such that $\alpha -\epsilon < r_i\left|\phi_i^\prime(0)\right| < \alpha +\epsilon$ and $\left|A\right|r_i <\epsilon$, where $A$ is the constant in (\ref{eq:phiex}). Using (\ref{eq:break}) we then have: \begin{equation*} \int\limits_{B_\delta^\prime} \lvert DF_i(z)\rvert dzd\bar{z} = \int\limits_{B_\delta^\prime} \left| Df_i(z) \right| r_i\left|\phi_i^\prime(z)\right| dzd\bar{z} \leq \int\limits_{B_\delta^\prime} \left| Df_i(z) \right| \left(\alpha + 2\epsilon\right)dzd\bar{z} \end{equation*} where we have used (\ref{eq:phiex}) and the above observations for the last inequality.\\ We have a similar bound from below. By taking $i\to \infty$ we get that \begin{equation*} \left|\alpha D_{avg}^\delta f_i - D_{avg}^\delta F_i \right| \to 0 \end{equation*} and hence by (\ref{eq:davg1}) we have: \begin{equation*} D_{avg}^\delta f_i \to 1/\alpha \end{equation*} which by (\ref{eq:davg2}) is equal to $\left|f^\prime(0)\right|$ as $\delta \to 0$. This completes the proof. \end{proof} In our case, we shall apply Lemma \ref{lem:qclem2} to the sequence of quasiconformal maps $g_i\circ f_i^{-1}:\Sigma \to \Sigma_{\mathcal{G}}$ (see Proposition \ref{prop:qcseq}) after restricting to the open set $U$ that we conformally identify with $\mathbb{D}$ (see the discussion preceding Proposition \ref{prop:deriv}). Recall that by construction the above quasiconformal map takes the open set $V_i\subset U$ to a planar end $\mathcal{P}_{H_i}$ that can be identified with an open neighborhood $U_{H_i}$ of $0\in \mathbb{C}$ (see also Lemma \ref{lem:uh}).\\ To use Lemma \ref{lem:qclem2}, we need to verify condition (3). Recall from \S3.3 that $U_{H_i}$ is the image of the planar end truncated at height $H_i$. Consider a sequence of uniformizing conformal maps $\phi_i: U_{H_i}\to \mathbb{D}$ for $i\geq 1$, and recall that $r_i= 2^{-i}$ by construction (\S6). In this setup, we have: \begin{cor}\label{cor:dlim} After passing to a subsequence, $r_i\left| \phi_i^\prime(0)\right|$ has a limit $L$ as $i\to \infty$. Moreover, $L\to \infty$ as $H_0\to \infty$, and $L\to 0$ as $H_0\to 0$, where $H_0>0$ was the constant chosen arbitrarily in (\ref{eq:hi}), and $L$ varies continuously with $H_0$.
\end{cor} \begin{proof} By construction (see the first part of \S6) we have $r_i= 2^{-i}$ and $H_i^{2/n} = H_0\cdot 2^i$ (see also (\ref{eq:hi})). Corollary \ref{cor:uhmap} now shows that the sequence $\{r_i\left| \phi_i^\prime(0)\right|\}_{i\geq1}$ lies in the interval $[H_0/4D_2, H_0/D_1]$, and hence after passing to a subsequence converges to $L$ in that interval. As one varies $H_0$, all domains (and hence the conformal maps $\phi_i$ and their derivatives) vary continuously (see also Lemma \ref{lem:mon}), and the same subsequence converges to a value that varies continuously (and lies in an appropriately shifted interval). \end{proof} \begin{proof}[Proof of Proposition \ref{prop:deriv}] By the previous corollary, there exists a choice of $H_0$ in (\ref{eq:hi}) for which \begin{equation*} r_i\left| \phi_i^\prime(0)\right| \to c^\frac{1}{n-2}. \end{equation*} Applying Lemma \ref{lem:qclem2} to the sequence of quasiconformal maps \begin{equation*} \psi\circ g_i\circ f_i^{-1} \circ\phi^{-1}:\mathbb{D} \to \mathbb{C} \end{equation*} it follows that $G= \psi\circ g\circ \phi^{-1}$ has derivative of magnitude $c^{-\frac{1}{n-2}}$ at $0$.\\ Now, the half-plane differential on the disk $\mathbb{D}$ uniformizing $U$ is the pullback of the standard meromorphic differential on $\mathbb{C}$ (see (\ref{eq:stan})) by this map $G$. Since the leading order term at the pole for this standard differential is $1$, by Lemma \ref{lem:pullb}, the leading order term of the half-plane differential is $\left|G^\prime(0)\right|^{2-n}=c$, as required. \end{proof} \section{Summary of the proof} Collecting the results of the previous sections, we have: \begin{proof}[Proof of Theorem \ref{thm:main}] Propositions \ref{prop:qcseq} and \ref{prop:deriv} complete the proof of Theorem \ref{thm:main} in the case of a single marked point. This easily generalizes to the case of multiple poles, as we briefly summarize:\\ For a set $P = \{p_1,\ldots, p_n\}$ on $\Sigma$ consider fixed coordinate charts $U_1,\ldots U_n$ around each. As in \S6, we consider a compact exhaustion of the surface by excising subdisks of radii $r_i$ tending to zero, from each (more specifically, we choose $r_i = 2^{-i}$). For each compact subsurface $\Sigma_i$ we choose a number of arcs on each boundary component depending on the desired orders of poles, and by the quadrupling construction we construct a compact Riemann surface with a collection of distinct homotopy classes of curves (the assumption that $\Sigma \neq \widehat{\mathbb{C}}$ if $n=1$ ensures that the homotopy classes are distinct). We prescribe a Jenkins-Strebel differential with closed trajectories in these homotopy classes and given cylinder circumferences on this surface. Moreover, by Lemma \ref{lem:arcs} one can choose the arcs so that the extremal lengths of these curves are precisely what is needed for the cylinder lengths to be \begin{equation*} H_i^j=\left( H_0^j\cdot 2^i\right)^{n_j/2} \end{equation*} for the $j$-th marked point ($1\leq j\leq n$), where $n_j$ is the desired order of the pole at that marked point. (See (\ref{eq:hi}) in \S6.) By quotienting back, one gets a ``rectangular'' metric on $\Sigma_i$ with polygonal boundaries of prescribed dimensions, that can be completed to a half-plane surface $\Sigma_i^\prime$ by gluing in $n$ planar ends.
As in Lemma \ref{lem:uhi}, the above prescribed dimensions of the polygonal boundary ensure that the planar end glued in is conformally a disk of radius $O(r_i)$ around $0\in \mathbb{C}$.\\ The fact that the conformal structures on $\Sigma \setminus P$ and $\Sigma_i^\prime$ differ only in a union of disks of small radii ($r_i$) allows us to build an almost-conformal homeomorphism between them (\S8) that tends to a conformal homeomorphism as $i\to \infty$. The geometric control on the dimensions of the polygonal boundaries of the rectangular surfaces $\Sigma_i$ also shows that the corresponding half-plane differentials converge to a meromorphic quadratic differential on $\Sigma$ with poles at $P$ (as in \S9), and as in \S10 this limiting differential is in fact half-plane. Finally, as in \S11 one can show that an appropriate choice of ``scaling'' factors (the constants $H_0^j$ above) while constructing the sequence of rectangular surfaces ensures that the leading order terms at each pole of the resulting half-plane differential are the desired real numbers. \end{proof} \section{Applications and questions} \subsection{An asymptoticity result} We had previously shown (see \cite{Gup1} for a precise statement): \begin{jthm}[\cite{Gup1}] A grafting ray in a generic direction in Teichm\"{u}ller space is strongly asymptotic to some Teichm\"{u}ller ray. \end{jthm} The main result of \cite{Gup25} is a generalization of the above asymptoticity result to \textit{all} directions. The idea of the proof is to consider the conformal limit of the grafting ray, and find a conformally equivalent singular flat surface that shall be the conformal limit of the corresponding Teichm\"{u}ller ray. The strong asymptoticity is shown by adjusting this conformal map to almost-conformal maps between surfaces along the rays. \\ The result of this paper is used to find the conformally equivalent singular flat surface mentioned above: namely, Theorem \ref{thm:main} can be generalized easily to include poles of order $2$, which correspond to half-infinite cylinders in the quadratic differential metric. This is used to obtain a (generalized) half-plane surface $Y_\infty$ and a conformal map \begin{equation*} g:X_\infty \to Y_\infty \end{equation*} for each component $X_\infty$ of the conformal limit of the grafting ray.\\ Prescribing the leading order term, or equivalently the derivative of the above conformal map $g$, helps to construct the controlled quasiconformal gluings of truncations of these infinite-area surfaces. \subsection{The question of uniqueness} The construction of single-poled half-plane differentials on surfaces (see \S4.4) proceeds by introducing a slit in the metric spine of a single-poled hpd on $\hat{\mathbb{C}}$ as above, and gluing by an interval-exchange. (This does not affect the residue and leading order coefficient at the pole.)\\ Since there are non-unique choices of single-poled hpds (of order greater than $4$) on $\hat{\mathbb{C}}$ which have the same residue and leading order coefficient (see \S4.2), one can see that uniqueness does not hold in general in Theorem \ref{thm:main}.\\ However, we conjecture: \begin{conj} When all the orders of the poles are $4$, the half-plane differential with prescribed residues and leading order terms that exists by Theorem \ref{thm:main} is unique. \end{conj} and more generally one can ask: \begin{ques} Does uniqueness of the half-plane differential hold if one prescribes further local data, in addition to the order of the pole, residue and leading order term?
\end{ques} \subsection{Limits of Teichm\"{u}ller rays} One obtains half-plane surfaces as geometric limits of Teichm\"{u}ller rays (details in the forthcoming paper \cite{Gup25}). Roughly speaking, along a Teichm\"{u}ller geodesic there is a stretching of the quadratic-differential metric in the ``horizontal'' direction (after rescaling we can assume that distances in the vertical direction remain unchanged). This increases the area of the surface monotonically, and stretches a neighborhood of the critical graph of vertical saddle-connections to a half-plane surface.\\ Generically (for directions determined by \textit{arational} laminations) one obtains a collection of $2g-2$ half-plane differentials $(\mathbb{C}, zdz^2)$, and more interesting limits are obtained for directions determined by \textit{non-filling} laminations. \\ One of the questions to be addressed in forthcoming work is: \begin{ques} Given a collection of half-plane surfaces with pairings of poles with matching orders and residues, is it possible to construct a Teichm\"{u}ller ray with that (disconnected) half-plane surface as a limit? \end{ques}
\section{INTRODUCTION} Since the Convolutional Neural Network was applied to visual data analysis~\cite{kohonen1982self}, there has been great progress in deep learning, computer vision and medical image processing. There is great potential in applying deep-learning-based approaches to medical image analysis, such as the tumor detection of pancreatic cancer. Pancreatic cancer is a malignant tumor disease with a 5-year survival rate of around 7\%~\cite{ryan2014pancreatic}~\cite{bray2018global}. The pancreas is a small organ located deep in the human body, which significantly increases the difficulty of detection. Furthermore, missing the optimal time for radical surgery is the major cause of cancer death. CT imaging, a medical monitoring technology that collects information on tumor location, size and morphology, is more helpful for the diagnosis and staging of pancreatic cancer than ultrasound imaging and Magnetic Resonance Imaging (MRI)~\cite{chu2019application}. Nevertheless, manual diagnosis requires doctors with rich clinical experience, because the quality of CT images varies between different CT scanners and operators, and pathological texture features are hard to distinguish. Therefore, there is a growing need for a robust deep-learning-based algorithm for accurate pancreatic tumor detection. Kishor achieved detection of pancreatic cancer in 2015~\cite{reddy2015detection}. A K-means clustering approach was first utilized to group the regions of interest (ROIs). Then, a Haar wavelet transformation and thresholding were adopted to classify images. The algorithm could be readily deployed in a computer-aided system, but the performance of segmentation and classification might be seriously influenced by the pathological features of the cancer. Li utilized saliency maps and densely-connected convolutional networks for pancreatic ductal adenocarcinoma diagnosis in 2019~\cite{li2019differential}. High-level features were extracted and mapped to different types of pancreatic cysts; a larger training dataset might improve the performance. An approach for pancreatic tumor characterization inspired by radiologists' interpretations and label proportions was illustrated by Sarfaraz~\cite{hussein2019lung}. He designed a 3D CNN-based graph-regularized sparse multi-task framework with a proportion-SVM to cope with the limited labeled data. It achieved good sensitivity in diagnosing Intraductal Papillary Mucinous Neoplasms, but deep learning approaches such as Generative Adversarial Networks may show better performance~\cite{gutmann2012noise}. Following the above considerations, an advanced framework for detecting human pancreatic tumors in CT images is proposed. Feature Pyramid Networks (FPN) utilize a top-down path with lateral connections to propagate semantic features to low levels, but propagation along such a long path makes it difficult to exploit accurate localization information~\cite{lin2017feature}. Therefore, a bottom-up Augmented Feature Pyramid, aimed at shortening the information path and propagating low-level features, is created first. Secondly, because the size of the tumor is relatively small and nonuniform, Self-adaptive Feature Fusion is designed to adaptively encode and integrate context information at multiple scales based on the proposals.
Thirdly, inspired by Non-local Neural Networks~\cite{wang2018non}, we employ a Dependencies Computation Module to compute dependencies and acquire interaction information with surrounding tissues. The expressiveness of the features is enhanced by calculating the dependencies ranging from local to global. Subsequently, the evaluation is illustrated and applied, and the results achieve competitive performance compared with other deep-learning-based approaches. \begin{figure*}[t] \centering \includegraphics[scale=0.55]{framework} \caption{The architecture of the pancreatic tumor detection network.} \label{fig:framework} \end{figure*} \section{Methods} The novel and efficient tumor detection framework we propose is illustrated in Fig.~\ref{fig:framework}. The network utilizes FPN combined with Faster R-CNN~\cite{ren2015faster} as the backbone, and the contribution of the proposed method consists of three components: Augmented Feature Pyramid networks, Self-adaptive Feature Fusion and a Dependencies Computation Module. Firstly, we feed the preprocessed CT images into the pre-trained ResNet-101 for feature extraction~\cite{he2016deep}, and then build the feature pyramid via up-sampling and lateral connections. Secondly, in order to enhance the entire feature hierarchy and improve detection performance, a bottom-up path is established to make low-level localization information propagate more efficiently. Thirdly, we employ a Region Proposal Network (RPN) on each level to generate proposals~\cite{ren2015faster}, and then use Self-adaptive Feature Fusion to enlarge the corresponding ROIs and encode richer context information at multiple scales. Besides, we conduct the Dependencies Computation Module to capture dependencies with the surrounding tissues of each proposal. Finally, detection results are predicted via a Score Prediction layer and a Box Regression layer, respectively. \subsection{Augmented Feature Pyramid Networks} In the process of feature extraction, DCNNs can extract semantic information. Meanwhile, high-level feature maps respond strongly to global features, which is beneficial for detecting large objects~\cite{liu2018path}. The tumor, however, is relatively small in CT images, so the consecutive pooling layers may lose important spatial details of the feature maps. In addition, low-level accurate localization information is essential for tumor detection, but the information propagation path in FPN, which consists of more than 100 layers, weakens the transmission. To this end, we build a bottom-up Augmented Feature Pyramid. As shown in Fig.~\ref{fig:framework}, firstly, we generate $\left\{{P2, P3, P4, P5}\right\}$ based on FPN. Then, the augmented path is established from the level $P2$, and $P2$ is directly used as $S2$, without any processing. Next, a $3\times3$ convolutional operator with stride 2 is applied to a higher resolution feature map $S_i$ to reduce the map size. The down-sampled feature map is then merged with a coarser feature map $P_{i+1}$ by element-wise sum. In addition, we employ another $3\times3$ convolutional operator on each fused feature map to generate $S_{i+1}$ for the following feature map generation. This process is iterated until the level $P5$ is used. In this way, we acquire a new Augmented Feature Pyramid consisting of $\left\{{S2, S3, S4, S5}\right\}$.
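The bottom-up construction can be summarized by the following minimal sketch. This is our illustration only: the paper's implementation is in TensorFlow, whereas the sketch uses PyTorch for brevity, the module and variable names are our own assumptions, and it assumes each pyramid level halves the spatial size of the previous one.

\begin{verbatim}
# Minimal PyTorch sketch of the bottom-up augmented path (illustrative
# only; the actual implementation is in TensorFlow and may differ).
import torch
import torch.nn as nn

class AugmentedPath(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        # 3x3 stride-2 convolutions: down-sample S_i to the size of P_{i+1}
        self.down = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, stride=2, padding=1)
             for _ in range(3)])
        # 3x3 convolutions applied to each fused map to produce S_{i+1}
        self.smooth = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1)
             for _ in range(3)])

    def forward(self, p):            # p = [P2, P3, P4, P5], fine to coarse
        s = [p[0]]                   # S2 is P2, used without any processing
        for i in range(3):
            fused = self.down[i](s[-1]) + p[i + 1]   # element-wise sum
            s.append(self.smooth[i](fused))
        return s                     # [S2, S3, S4, S5]
\end{verbatim}

The stride-2 convolution aligns the spatial size of $S_i$ with that of $P_{i+1}$ before the element-wise sum, after which the smoothing convolution produces the next level of the augmented pyramid.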
\subsection{Self-adaptive Feature Fusion} After acquiring the proposed regions from the RPN, ROIs are assigned to one certain level according to their size, and the subsequent operations are performed on that same level, resulting in some useful information from other levels being discarded. In this case, instead of using a regression function to make predictions directly on the assigned proposals, we design a Self-adaptive Feature Fusion module, which aggregates hierarchical feature maps from multiple levels, to make full use of context information at multiple scales. Formally, the ROI with width $w$ and height $h$ is assigned to the level $S_k$ of the Augmented Feature Pyramid for each proposal by: \begin{equation} k=min(S_{max},max(\lfloor {k_0+\log_2{(\sqrt{wh}/C)}} \rfloor,S_{min}))\label{level} \end{equation} In Eq.~\ref{level}, $C$ is the ImageNet pre-training size 224~\cite{krizhevsky2012imagenet}, and $k_0$ is set to 5, representing the coarsest level $S_5$. $S_{max}$ is 4, representing the level $S_4$. $S_{min}$ is 3, representing the level $S_3$. \begin{figure}[b] \centering \includegraphics[scale=0.732]{large} \caption{An example CT image with the given ROI $B$ and its corresponding enlarged region $R$.} \label{fig:enlarge} \end{figure} As shown in Fig.~\ref{fig:enlarge}, given an input ROI $B$, the predicted bounding box in red fails to cover the entire area of the tumor, especially the edge response, which results in information loss. To tackle this problem, we enlarge the width and height of the ROI by the factors $S_w=1.2$ and $S_h=1.2$ to create a new region $R$ in blue. The new region $R$ contains richer context information, especially the responses around the edges, which are strong indicators of accurate localization. Furthermore, while high-level features have larger receptive fields and capture more semantic information, low-level features have higher resolution and contain accurate localization details that are complementary to the abstract features. Both can help improve the detection performance; therefore, the regions $B$ and $R$ are also mapped to the levels $S_{k-1}$ and $S_{k+1}$, so that $B$ and $R$ each obtain three feature maps from three different scales. We employ 14$\times$14 ROI pooling over these maps to unify their size. These descriptors are concatenated together, and their dimensions are reduced by 1$\times$1 convolutional operators. Finally, the $B$-based descriptor is used for score prediction, and the $R$-based descriptor is used for bounding box regression. \subsection{Dependencies Computation Module} In clinical practice, doctors pinpoint tumors in CT images by analyzing the global context information, local geometric structures, shape variations, and especially the spatial relations with surrounding tissues. In this case, we employ the Dependencies Computation Module to compute the response at a position as a weighted sum of the features at all positions on the enlarged region $R$. This operation enables the network to pay more attention to the interactions and dependencies ranging from local to global, which provides some of the most useful information for tumor detection.
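In implementation terms, this module is a non-local attention block with a residual connection. As a preview of the formal definition given next, a minimal PyTorch sketch follows (our illustration only; the tensor layout and names are assumptions, and the paper's implementation is in TensorFlow):

\begin{verbatim}
# Minimal PyTorch sketch of the Dependencies Computation Module
# formalized below (illustrative only; names and layout are ours).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DependenciesComputation(nn.Module):
    def __init__(self, in_ch=512, mid_ch=256):
        super().__init__()
        self.f = nn.Conv2d(in_ch, mid_ch, 1)   # W_f, 1x1 convolution
        self.g = nn.Conv2d(in_ch, mid_ch, 1)   # W_g
        self.h = nn.Conv2d(in_ch, mid_ch, 1)   # W_h
        self.z = nn.Conv2d(mid_ch, in_ch, 1)   # W_z restores the channels

    def forward(self, x):                      # x: (batch, 512, 14, 14)
        b, c, hh, ww = x.shape
        n = hh * ww
        f = self.f(x).view(b, -1, n)           # (batch, 256, N)
        g = self.g(x).view(b, -1, n)
        h = self.h(x).view(b, -1, n)
        # att[i, j] = softmax over j of the dependency phi(x_i, x_j)
        att = F.softmax(torch.bmm(f.transpose(1, 2), g), dim=-1)
        # y_i = weighted sum over all positions j of h(x_j)
        y = torch.bmm(h, att.transpose(1, 2)).view(b, -1, hh, ww)
        return self.z(y) + x                   # residual fusion z_i
\end{verbatim}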
Specifically, given an input $x$, the entire Dependencies Computation Module is defined as follows: \begin{equation} y_i=\sum_{j=1}^{N}softmax(\phi(x_i,x_j))h(x_j)\label{test} \end{equation} \begin{equation} softmax(\phi(x_i,x_j))=\frac{exp(\phi(x_i,x_j))}{\sum_{j=1}^Nexp(\phi(x_i,x_j))}\label{softmax} \end{equation} In Eq.~\ref{test} and Eq.~\ref{softmax}, $i$ is the index of the chosen position, $j$ ranges over all $N$ positions (here $N=14\times14$). The dependencies between any two positions are calculated via $\phi(x_i,x_j)=x_i^{T}W_{f}^{T}W_{g}x_j$, and $h(x_j)=W_hx_j$. $W_f$, $W_g$ and $W_h$ are matrices implemented by 1$\times$1 convolutional operators to reduce the number of channels. As the shape of the input feature $R$ is 14$\times$14$\times$512, the shapes of the three corresponding outputs are 14$\times$14$\times$256. At last, we use an addition operator to fuse the result with the original feature, which gives: \begin{equation} z_{i}=W_{z}y_{i}+x_i\label{eq:fuse} \end{equation} where $y_{i}$ is calculated in Eq.~\ref{test} and $x_i$ is the original input. $W_z$ is a 1$\times$1 convolution layer used to restore the shape back to 14$\times$14$\times$512. \begin{figure}[t] \centering \includegraphics[scale=0.542]{size14} \caption{Histogram of tumor diameters in the dataset.} \label{fig:diameter} \end{figure} \section{EXPERIMENTS AND RESULTS} \subsection{Dataset} The model is trained on a dataset of pancreatic CT images provided by The Affiliated Hospital of Qingdao University. The dataset contains 2890 CT images, of which 2650 images are for training and 240 images are for testing. There is no overlap between the training set and the test set, and all the images are labeled by three experienced doctors with accurate bounding boxes. The diameter distribution of the tumors in the dataset is illustrated in Fig.~\ref{fig:diameter}. The diameter ranges from 15 to 104 pixels, and most are between 20 and 80 pixels. We preprocess these images and conduct data augmentation, including horizontal flip, vertical flip and diagonal flip, before training. \subsection{Experiment Setup} The proposed method is implemented in Python using TensorFlow. During the training process, we set the batch size to 1, the momentum to 0.9 and the weight decay to 0.0001; the learning rate is 0.001 for the first 30K iterations, 0.0001 for the next 20K and 0.00001 for the last 10K. For each mini-batch, we sample 512 ROIs with a positive-to-negative ratio of 1:1. For anchors, according to the tumor diameter distribution illustrated in Fig.~\ref{fig:diameter}, we choose 5 scales with box areas of $16^2$, $32^2$, $64^2$, $128^2$, $256^2$, and 5 anchor ratios of 1:1, 1:1.5, 1.5:1, 1:2 and 2:1. The hardware settings are an Intel(R) Core i7-9800X CPU, an Nvidia GeForce RTX2080 Ti GPU and 32GB memory on an Ubuntu 64-bit Linux desktop. \begin{figure}[b] \centering \includegraphics[scale=0.59]{result} \caption{Example results of tumor detection. The first row shows the ground truth; the second row shows the corresponding detection results of the proposed method.} \label{fig:result} \end{figure} \subsection{Results and Discussion} Example results of tumor detection are shown in Fig.~\ref{fig:result}; the localization is relatively accurate, and the corresponding probability scores are high as well.
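The quantitative evaluation that follows judges each detection by the standard Intersection Over Union criterion, formalized below. As a minimal illustration (ours, not the authors' code; boxes are assumed to be $(x_1, y_1, x_2, y_2)$ corner tuples):

\begin{verbatim}
# Minimal sketch of the IOU criterion used in the evaluation below
# (illustrative only; boxes assumed to be (x1, y1, x2, y2) corners).
def iou(box_p, box_gt):
    x1, y1 = max(box_p[0], box_gt[0]), max(box_p[1], box_gt[1])
    x2, y2 = min(box_p[2], box_gt[2]), min(box_p[3], box_gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_gt = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    return inter / (area_p + area_gt - inter)

# Example: iou((10, 10, 50, 50), (20, 20, 60, 60)) is about 0.39,
# below the 0.5 validity threshold used in the evaluation below.
\end{verbatim}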
In order to evaluate the detection performance, the proposed method is compared with classical object detection algorithms, including DetNet~\cite{Li2018DetNet}, Cascade R-CNN~\cite{cai2018cascade}, Mask R-CNN~\cite{he2017mask}, FPN~\cite{lin2017feature}, Faster R-CNN~\cite{ren2015faster}, RetinaNet~\cite{lin2017focal}, SSD512~\cite{liu2016ssd} and YOLO-v3~\cite{redmon2018yolov3}. These algorithms are trained and tested using the same pancreatic CT dataset without additional modifications. The Intersection Over Union (IOU) between a predicted bounding box $B_p$ and the corresponding ground-truth bounding box $B_{gt}$ is calculated for each result, and is defined as follows: \begin{equation} IOU=\frac{B_{gt}\cap{B_p}}{B_{gt}\cup{B_p}}\label{eq:iou} \end{equation} Furthermore, detection results whose IOU is higher than 0.5 are regarded as valid results. As shown in Table~\ref{table:performance}, the proposed method achieves the best values of 0.8376, 0.9179 and 0.9018 in Sensitivity, Specificity and Accuracy, respectively, outperforming the other methods by a notable margin. The corresponding Receiver Operating Characteristic (ROC) curves in Fig.~\ref{fig:roc} show that our proposed method is superior to the other methods, with an Area Under the Curve (AUC) of 0.9455. \begin{table}[t] \caption{Detection performance comparison among different algorithms on the test set} \def\arraystretch{1.21} \begin{center} \begin{tabular}{|p{3.0cm}<{\centering}|p{1.2cm}<{\centering}|p{1.2cm}<{\centering}|p{1.2cm}<{\centering}|} \hline \textbf{Methods} & \textbf{Sensitivity}& \textbf{Specificity}& \textbf{Accuracy} \\ \hline SSD512~\cite{liu2016ssd}& 0.4238& 0.9088& 0.6411 \\ FPN + Faster R-CNN~\cite{lin2017feature}& 0.6984& 0.8584& 0.7416 \\ YOLO-v3~\cite{redmon2018yolov3}& 0.7697& 0.5849& 0.7423 \\ Mask R-CNN~\cite{he2017mask}& 0.7244& 0.8247& 0.7500 \\ Faster R-CNN~\cite{ren2015faster}& 0.4877& 0.9131& 0.7538 \\ DetNet~\cite{Li2018DetNet}& 0.6932& 0.9032& 0.7695 \\ RetinaNet~\cite{lin2017focal}& 0.8245& 0.5238& 0.7726 \\ Cascade R-CNN~\cite{cai2018cascade}& 0.6309& 0.9113& 0.7981 \\ \hline \textbf{Our Method} & \textbf{0.8376}& \textbf{0.9179}& \textbf{0.9018} \\ \hline \end{tabular} \label{table:performance} \end{center} \end{table} \begin{figure}[h] \centering \includegraphics[scale=0.5293]{roc} \caption{The ROC curves of different methods for pancreatic tumor detection.} \label{fig:roc} \end{figure} In addition, in order to evaluate the proposed method more accurately, Free-Response Receiver Operating Characteristics (FROC) is used to compute the Sensitivity at 7 FPs/scan rates. Our proposed method achieves an average score of 0.901; the corresponding results are documented in Table~\ref{table:froc}.
\begin{table}[t] \tiny \caption{Detection performance in terms of Sensitivity at different FPs/scan rates on the test set} \def\arraystretch{1.8} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{FPs/scan} & \textbf{0.125}& \textbf{0.25}& \textbf{0.5}& \textbf{1}& \textbf{2}& \textbf{4}& \textbf{8}& \textbf{Average}\\ \hline \textbf{Sensitivity} & 0.671& 0.804& 0.907& 0.963& 0.977& 0.986& 0.998& \textbf{0.901}\\ \hline \end{tabular} \label{table:froc} \end{center} \end{table} \begin{table}[h] \caption{Ablation studies on the effects of the proposed components and their combinations} \def\arraystretch{1.24} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Augmented}&\textbf{Self-adaptive}& & \\ \textbf{Feature}&\textbf{Feature}&\textbf{DC Module}&\textbf{Accuracy} \\ \textbf{Pyramid}&\textbf{Fusion}& & \\ \hline & & & \textbf{0.7416} \\ $\checkmark$& & & 0.8132 \\ & $\checkmark$& & 0.8359 \\ & $\checkmark$& $\checkmark$& 0.8697 \\ $\checkmark$& $\checkmark$& & 0.8541 \\ $\checkmark$& $\checkmark$& $\checkmark$ & \textbf{0.9018} \\ \hline \end{tabular} \label{table:contribution} \end{center} \end{table} Extensive ablation experiments are conducted to analyze the effects of the proposed components and their combinations in our method. The results are documented in Table~\ref{table:contribution}; the Augmented Feature Pyramid networks and Self-adaptive Feature Fusion each significantly improve the accuracy on their own. Finally, the detection accuracy is significantly improved from 0.7416 to 0.9018 by the combination of the three proposed components. \section{CONCLUSION} In this paper, we study how to accurately detect the tumors of pancreatic cancer, which is of great significance for diagnosis in clinical practice. We establish an Augmented Feature Pyramid to propagate low-level accurate localization information. We also design Self-adaptive Feature Fusion to capture richer context information at multiple scales. Finally, we compute the relation information of the features via the Dependencies Computation Module. Comprehensive evaluations and comparisons are completed, and our proposed method achieves promising performance. In the future, we will continue studying the staging of pancreatic cancer to assist doctors' clinical diagnosis. \addtolength{\textheight}{-12cm} \section*{ACKNOWLEDGMENT} This research is supported in part by the Foundation of Shandong Provincial Key Laboratory of Digital Medicine and Computer Assisted Surgery (SDKL-DMCAS-2018-01), the National Natural Science Foundation of China (NO. 61672077 and 61532002), the Applied Basic Research Program of Qingdao (NO. 161013xx), and the Beijing Natural Science Foundation-Haidian Primitive Innovation Joint Fund (L182016). \bibliographystyle{ieeeconf}
\section{Introduction}\label{introduction} The past several years have witnessed the progress and success of self-play. The combination of classical MCTS \cite{mcts_survey} algorithms with newly developed deep learning techniques gives stunning performance on complex board games like Go and Chess \cite{silver2018general,alpha0,alpha_go}. One common but outstanding feature of such an algorithm is the tabula-rasa style of learning. In other words, it learns to play the game with zero knowledge (except the game rules). Such tabula-rasa learning is regarded as an approach to general artificial intelligence. Given such an achievement, it is interesting to see whether the algorithm's superhuman capability can be used to solve problems in other domains. Specifically, we apply neural MCTS \cite{silver2018general,expertit} to solve the QSAT problem through self-play on the QSAT game. Our experiment shows that, even though the QSAT game is fundamentally different from traditional board games (see section \ref{arch}), the algorithm is still able to determine the truthfulness of the corresponding QSAT problem through the dominant player. Furthermore, the trained algorithm can be used to approximate the solution (or show the non-existence of a solution) of the original problem through competitions against an enumerator. However, our objective is not necessarily to improve the state-of-the-art of hand-crafted problem solvers in specific areas but to illustrate that there is a generic algorithm (neural MCTS) that can solve well-known problems tabula-rasa. In this work, we make two main contributions: 1. we propose a way to turn QBFs into graphs so that they can be embedded with a graph neural network; 2. we implement a variant of the neural MCTS algorithm, which has two independent neural networks (designed explicitly for the QSAT games), one for each player. Our results show that the algorithm can determine the truthfulness of the QSAT problem correctly. The remainder of this paper is organized as follows. Section \ref{relatedworks} presents related work that inspired ours. Section \ref{preliminaries} presents essential preliminaries on neural MCTS, the QSAT problem, and graph neural networks. Section \ref{implement} introduces our approach to encoding QBFs as graphs and the architecture of our implementation. Section \ref{experiment} gives our correctness measurements and presents experimental results. Sections \ref{discuss} and \ref{conclusion} present a discussion and our conclusions. \section{Related Work}\label{relatedworks} In terms of combining a QSAT solver with machine learning, Janota built a competitive QSAT solver, QFUN \cite{qfun}, based on counterexample guided refinement and machine learning. Although, as in our work, the QSAT problem is treated as a game, their learning does not depend on the game state (i.e., the QBF) but focuses on the assignment pairs from the two players in two consecutive moves (i.e., a move by the existential player and a countermove by the universal player). By supervised learning of a decision-tree classifier, the learning algorithm categorizes the move-countermove pairs into two classes: feasible countermoves and infeasible countermoves. QFUN progressively solves a QBF by learning moves for the existential player so that there are no feasible countermoves for the universal player.
While the performance is compelling, their solver is largely based on a counterexample guided abstraction refinement algorithm \cite{CEGAR}, whose design requires human insight; hence it cannot be regarded as tabula-rasa. As an alternative methodology, NeuroSAT \cite{selsam2018learning} provides another insight into applying machine learning to such problems. By leveraging graph neural networks \cite{battaglia2018relational} and the message passing process \cite{gilmer2017neural}, they developed a single-bit supervised SAT solver. The algorithm depends on zero knowledge and learns purely from the input formula. In NeuroSAT, Boolean formulas are encoded as graphs so that a specially designed graph neural network can be applied to those graphs. The target value of the graph neural network is a single bit, representing the solvability of the input SAT problem. It has been shown that NeuroSAT performs adequately on SAT problems within a reasonable size. When it comes to applying neural MCTS to solve problems in other domains, Xu et al. use a technique called Zermelo Gamification to turn specific combinatorial optimization problems into games so that they can be solved through AlphaZero-like algorithms \cite{ruiyang2019}. They applied their method to a particular combinatorial optimization problem called HSR. Their result shows that the algorithm can accurately solve such a problem within a given size. Although they only applied their method to one specific problem, their experimental results endorse the idea that there is a generic algorithm (neural MCTS) that can solve well-known problems tabula-rasa. To this extent, our work can be seen as an extension of theirs. \section{Preliminaries}\label{preliminaries} \subsection{Neural Monte Carlo Tree Search}\label{mcts} The PUCT (Predictor + UCT) algorithm implemented in AlphaZero \cite{alpha0,gochessshogi} is essentially a neural MCTS algorithm which uses PUCB Predictor + UCB \cite{Rosin2011} as its upper confidence bound \cite{uct} and uses the neural prediction $P_{\phi}(a|s)$ as the predictor. The algorithm runs through multiple search iterations to decide the optimal action for the current state. During each iteration, there are 4 phases: \begin{enumerate} \item{SELECT: } At the beginning of each iteration, the algorithm selects a path from the root (current game state) to a leaf (either a terminal state or an unvisited state) in the tree according to the PUCB (see \cite{alpha_go} for a detailed explanation of the terms used in the formula). Specifically, suppose the root is $s_0$; we have \footnote{Theoretically, the exploratory term should be $\sqrt{\frac{\sum_{a'}N(s_{i-1},a')}{N(s_{i-1},a)+1}}$; however, AlphaZero used the variant $\frac{\sqrt{\sum_{a'} N(s_{i-1},a')}}{N(s_{i-1},a)+1}$ without any explanation. We tried both in our implementation, and it turns out that the AlphaZero variant performs much better.}: $$a_{i}=\argmax_a\left[Q(s_{i},a)+cP_\phi(a|s_{i})\frac{\sqrt{\sum_{a'} N(s_{i},a')}}{N(s_{i},a)+1}\right]$$ $$Q(s_{i},a) = \frac{W(s_{i},a)}{N(s_{i},a)+1}$$ $$s_{i+1}=move(s_{i},a_{i})$$ \item{EXPAND: } Once the select phase ends at a non-terminal leaf, the leaf will be fully expanded and marked as an internal node of the current tree. All its children nodes will be considered as leaf nodes during the next iteration of selection. \item{ROLL-OUT: } Normally, starting from the expanded leaf node chosen in the previous phases, the MCTS algorithm uses a random policy to roll out the rest of the game \cite{mcts_survey}.
The algorithm simulates the actions of each player randomly until it arrives at a terminal state, which means the game has ended. The algorithm then uses the outcome of the game as the result evaluation for the expanded leaf node. However, a random roll-out usually becomes time-consuming when the tree is deep. A neural MCTS algorithm, instead, uses a neural network $V_{\phi}$ to predict the result evaluation, so that the algorithm saves the time of rolling out. \item{BACKUP: } This is the last phase of an iteration, where the algorithm recursively backs up the result evaluation along the tree edges. Specifically, suppose the path found in the Select phase is $\{(s_0,a_0),(s_1,a_1),...(s_{l-1},a_{l-1}),(s_l,\_)\}$. Then for each edge $(s_i,a_i)$ in the path, we update the statistics as: $$W^{new}(s_i,a_i)=W^{old}(s_i,a_i)+V_{\phi}(s_l)$$ $$N^{new}(s_i,a_i)=N^{old}(s_i,a_i)+1$$ However, in practice, considering a Laplace smoothing in the expression of $Q$, the following updates are actually applied: $$Q^{new}(s_i,a_i)=\frac{Q^{old}(s_i,a_i)\times N^{old}(s_i,a_i)+V_{\phi}(s_l)}{N^{old}(s_i,a_i)+1}$$ $$N^{new}(s_i,a_i)=N^{old}(s_i,a_i)+1$$ Once the given number of iterations has been reached, the algorithm returns a vector of action probabilities for the current state (root $s_0$), where each action probability is computed as $\pi(a|s_0)=\frac{N(s_0,a)}{\sum_{a'}N(s_0,a')}$. The real action played by the neural MCTS is then sampled from the action probability vector $\pi$. In this way, neural MCTS simulates the actions of the two players alternately until the game ends. This process is called neural MCTS simulation, which is the core of self-play. \end{enumerate} \subsection{QSAT Problems and QSAT games}\label{qsat} A quantified Boolean formula (QBF) is a formula of the following form: $$\exists x_1 \forall x_2...\exists x_n.\Phi(x_1,...,x_n)$$ where $x_i$ are distinct Boolean variables. The sequence of quantifiers and variables is called the prefix of a QBF. The propositional formula $\Phi$ is called the matrix of a QBF, and uses only the variables in $\{x_i\}$. A QBF evaluates to either true or false since there are no free variables; it is solvable if it evaluates to true, and unsolvable otherwise. The problem of determining the truthfulness of a QBF is called the QSAT problem, which is known to be PSPACE-complete. A QSAT problem can be seen as a game between two players: the existential player (the Proponent (P)), who assigns values to the existentially quantified variables, and the universal player (the Opponent (OP)), who assigns values to the universally quantified variables. The two players make moves by assigning values to the variables alternately, following the sequence of quantifiers in the prefix. The existential player (P) wins if the formula evaluates to true, and the universal player (OP) wins if it evaluates to false. \subsection{Gated Graph Neural Networks}\label{gnn} In this work, QBFs are encoded as graphs, and a Gated Graph Neural Network (GGNN) \cite{ggnn,gilmer2017neural} is applied to embed the QBFs into the neural MCTS framework. Notice that the GGNN is not the only option and there are alternatives \cite{gilmer2017neural,battaglia2018relational}; we choose GGNN for the sake of its easy implementation.
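To fix ideas before stating the update equations, the following minimal sketch previews the computation. This is our illustration only: it uses PyTorch with dense per-edge-type adjacency matrices, whereas our actual implementation may differ, and all names are our own assumptions.

\begin{verbatim}
# Minimal PyTorch sketch of one GGNN forward pass (illustrative only).
import torch
import torch.nn as nn

class GGNNSketch(nn.Module):
    def __init__(self, hidden=128, num_edge_types=4, out_dim=1, T=10):
        super().__init__()
        self.T = T
        # one learned weight matrix A_e per edge type
        self.edge_fc = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False)
             for _ in range(num_edge_types)])
        self.gru = nn.GRUCell(hidden, hidden)
        self.f = nn.Linear(2 * hidden, out_dim)  # gate on (h^T, h^0)
        self.g = nn.Linear(hidden, out_dim)

    def forward(self, h0, adj):
        # h0:  (V, hidden) initial node features
        # adj: (num_edge_types, V, V), one adjacency slice per edge type
        h = h0
        for _ in range(self.T):
            # message: sum over edge types and neighbors of A_e h_w
            m = sum(a @ self.edge_fc[e](h) for e, a in enumerate(adj))
            h = self.gru(m, h)        # GRU update of the node states
        # gated read-out summed over all nodes
        gate = torch.sigmoid(self.f(torch.cat([h, h0], dim=-1)))
        return (gate * self.g(h)).sum(dim=0)
\end{verbatim}

In our setting there are two such read-outs, one for the policy $P_\phi$ and one for the value $V_\phi$, as described in section \ref{arch}.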
The forward pass of the GGNN can be described as follows: $$m_v^{t+1}=\sum_e\sum_{w\in N(v)}A_{e_{wv}}h_w^t,\quad t=0..T$$ $$h_v^{t+1}=GRU(h_v^t,m_v^{t+1}),\quad t=0..T$$ $$R=\sum_{v\in V}\sigma(f(h_v^T,h_v^0))\odot g(h_v^T)$$ where $e$ is the edge type in a multigraph, $A_e$ is the edge-weight matrix to be learned during training, $h_v^t$ is the hidden representation of node $v$ at message passing iteration $t$, and $m_v^t$ is the message aggregated at node $v$ at iteration $t$. $R$ is called the read-out, which aggregates information from each node to generate a global feature target (notice that $\sigma$ denotes the sigmoid activation function, $f$ and $g$ are MLPs, and $\odot$ denotes the element-wise product). The message passing process iterates $T$ times; during each iteration, each node $v$ computes its message using the hidden representations of its neighbor nodes $N(v)$. After that, a Gated Recurrent Unit (GRU) is used to update the hidden representation of the node $v$. The message passing process allows each node's hidden representation to capture the global structure information of the entire input graph. Finally, the read-out process $R$ is applied to all the nodes to compute the global target of the input graph. GGNN is invariant under graph isomorphism, which makes it well-suited to capture the symmetry properties among the QBFs. \section{Implementation}\label{implement} \subsection{QBF Graphs} Although the QSAT problem has a simple syntactic structure, symmetries induced by the semantics of propositional logic should not be ignored \cite{qbfsym}. The fact that symmetric QBFs are equivalent can improve learning efficiency. In this work, we specially designed a graph encoding of the QBFs, which helps us catch those symmetries through graph isomorphism. After using the Tseytin transformation to re-write $\Phi$ in conjunctive normal form (CNF), a QBF is represented as an undirected multigraph (Fig. \ref{qbfg}) with two nodes for every variable (one for the literal and one for its negation), and one node for every clause. There are four types of edges in this multigraph: 1. E2A edge, an edge between every consecutive existential literal and universal literal; 2. A2E edge, an edge between every consecutive universal literal and existential literal; 3. L2C edge, an edge between every literal and every clause it appears in; 4. reflexive edge, an edge between each pair of a literal and its negation. The reasons behind such a design are threefold: 1. The sequential information of the prefix is essential for identifying the solution of a QBF. Even if two QBFs have the same matrix $\Phi$, a different variable sequence in the prefix might lead to a massive difference in the solution. Therefore, we use the E2A edges and A2E edges to track such sequential information. 2. In a QBF, variables appear only as positive literals in the prefix; however, they can be both positive and negative in the matrix $\Phi$. Hence we naturally represent every variable as two nodes, corresponding to a pair of complementary literals. 3. Since any literal and its complement are coupled, we use a reflexive edge to capture such entanglement. \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth]{qbfg.png} \caption{An example of graph encoding for the QBF: $\exists x_1\forall x_2\exists x_3 (x_1\vee x_2\vee \neg x_3)\wedge(x_2\vee x_3)\wedge(x_1\vee x_3)$.
Notice that there are four types of edges and two types of nodes.} \label{qbfg} \end{figure} \subsection{Architecture}\label{arch} In our design, the policy-evaluation neural network of the neural MCTS becomes two GGNNs (see section \ref{gnn}), one for each player. The reason why we use two independent neural networks instead of one is that the QSAT game is asymmetric in terms of the winning condition. As we introduced in section \ref{qsat}, P wins the game if and only if the QBF evaluates to true, while OP wins the game if and only if the QBF evaluates to false. On the other hand, when it comes to the outcome of the GGNN for two consecutive moves by different players, we noticed that the hidden representations sometimes show no significant difference between the two players; hence a single GGNN becomes confused by the input graphs. This issue can be resolved only by separating the neural networks, so that both players can learn and progress mutually and consistently. Another fact to notice is that we treat every single QSAT problem as an independent game. During the self-play phase, the neural MCTS algorithm (section \ref{mcts}) simulates the move for each player based on that player's GGNN. The neural MCTS takes in the current game state (the QBF graph) and uses the current player's GGNN to do the selection and rollout. After a certain number (25 in our case) of iterations, the neural MCTS returns the action probability distribution for the current state. The player then samples her next move from this distribution. The simulation alternates between the two players until the game ends, at which point the game outcome is evaluated and stored for the training phase. To call the neural network, the hidden representation $h_v^0$ of each node $v$ is initialized with the type of the node. Specifically, for an existential literal node, the hidden representation is $[1,0,0,...,0]$; for a universal literal node, the hidden representation is $[0,1,0,...,0]$; and for a CNF clause node, the hidden representation is $[0,0,1,...,0]$. Notice that we use $0$'s to pad the vector to a given length. Another fact to notice is that there are two read-out tasks ($P_{\phi}$ and $V_{\phi}$); hence we use a different set of aggregation MLPs for each task: $$R_i=\sum_{v\in V}\sigma(f_i(h_v^T,h_v^0))\odot g_i(h_v^T)$$ $$P_{\phi}=R_1,V_{\phi}=R_2$$ After each self-play simulation, we store the game trace of each player separately as a set of tuples of the form ($s$, $\pi$, $v$), where $s$ is the game state (the QBF graph), $\pi$ is the action probability distribution generated by the neural MCTS based on the current state, and $v$ is the game result from the perspective of the current player based on the game outcome. We run such a simulation several times (in our case, ten times) to retrieve enough training data. After that, we train the GGNN independently for each of the players using the training data collected during self-play. After training, we use the newly trained GGNNs to play against each other for 20 rounds and collect the performance data for evaluation and analysis; this is called the arena phase. \section{Experiment}\label{experiment} \subsection{Experiment Setup} The hyperparameters are set as follows: the number of search iterations for neural MCTS is set to 25, and the number of simulations is set to 100; the message passing time $T$ is set to 10 for the GGNN; the size of the hidden representation of the GGNN is set to 128.
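For reference, these settings can be collected in a plain configuration (the identifier names here are ours, for illustration only, and are not those of the actual code base):

\begin{verbatim}
# Experiment settings gathered in one place (illustrative names only).
CONFIG = {
    "mcts_search_iterations": 25,  # search iterations per move
    "num_simulations": 100,        # number of neural MCTS simulations
    "message_passing_T": 10,       # GGNN message passing iterations
    "hidden_size": 128,            # GGNN node representation size
    "arena_rounds": 20,            # evaluation games per arena phase
}
\end{verbatim}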
Considering the capacity and computation power of our machine, we generate 20 random QBFs (10 solvable and 10 unsolvable) which have 51 nodes after encoding as graphs (the prefix has 21 quantified variables and the matrix has 9 clauses, so there are 42 literal nodes and 9 clause nodes). Each QBF is regarded as a single game to be played and learned by the neural MCTS. We run the learning iteration (i.e., self-play, training, and arena) for 32 epochs, and collect the performance data in the arena phase during each iteration. \subsection{Performance Measurement} To measure the performance of the algorithm, we use two metrics: the local correctness ratio and the global correctness ratio. We compute the local correctness ratio of the two players during the arena phase, where the two players compete with each other for 20 rounds. An action is locally correct if it preserves a winning position. It is straightforward to check the local correctness of actions using a QSAT solver: GhostQ \cite{ghostq}. We collect the local correctness ratio of the two players after each round of competing in the arena phase. Then we take the average value of their local correctness ratios as the performance measurement for the current training iteration. \theoremstyle{definition} \newtheorem{definition}{Definition}[section] \begin{definition}{Local Correctness for P} \\Given a QBF $\exists x_1 \forall x_2...\exists x_n.\Phi$, an action $x^*$ is locally correct if and only if $\forall x_2...\exists x_n.\Phi[x_1\setminus x^*]$ evaluates to true. \end{definition} \begin{definition}{Local Correctness for OP} \\Given a QBF $\forall x_1 \exists x_2...\exists x_n.\Phi$, an action $x^*$ is locally correct if and only if $\exists x_2...\exists x_n.\Phi[x_1\setminus x^*]$ evaluates to false. \end{definition} Since the two neural networks might be inductively biased toward each other, a solution that looks locally correct could still be incorrect. To see whether the neural MCTS learns the correct solution, we measure the global correctness ratio by testing the algorithm with an enumerator. To be specific, if a QBF is solvable, we enumerate all possible moves for the OP (the universal player) and use the enumerator to play against P's neural network, and vice versa for an unsolvable QBF. Theoretically, OP's neural network fails to solve the QBF if there is any chance that P's enumerator can win the game. We count the number of winning games for each player and use it to measure the global correctness ratio. A 100\% global correctness not only means the neural MCTS has found the correct solution, but also that it fully supports that solution (with a winning strategy encoded in the neural network). On the other hand, a non-100\% global correctness can be treated as a measure of approximation of the algorithm. \subsection{General Result} Our experiment shows that the algorithm can correctly determine the truthfulness of all 20 test QBFs. We notice that, for a solvable QBF, the existential player can quickly dominate the game and win most of the time, and vice versa for the universal player in an unsolvable case. The result indicates that for a solvable/unsolvable QSAT problem, the existential/universal player has a higher chance of winning the corresponding QSAT game against the universal/existential player. We also measured the algorithm's global correctness ratio for all test cases, and we noticed an asymmetry between the solvable and unsolvable cases.
To quantify this asymmetry, we computed the average global correctness ratio (AGC) for all solvable and unsolvable QBFs respectively, and it turns out that the AGC for solvable cases is 87\%, while the AGC for unsolvable cases is 85\%. This fact indicates that neural MCTS can still be an adequate approximator to the QSAT problem, even if it cannot derive a 100\% correct strategy. \subsection{Two Examples}\label{examples} In this section, for illustration purposes, we show the experiment results for a solvable QSAT and an unsolvable QSAT (described in Fig. \ref{qbfsat} and Fig. \ref{qbfunsat} where, due to limited space, we only show the matrix of the QBF). One can see, in Fig. \ref{qbfsat}, that the local correctness ratio of the existential player (P) soars after the first epoch, while in Fig. \ref{qbfunsat}, the local correctness ratio of the universal player (OP) increases rapidly. Even though there are fluctuations, one of the players always dominates the game; this phenomenon can be treated as an indicator of the truthfulness of the QSAT. Also, notice that the curves in the unsolvable case oscillate more violently than the ones in the solvable case. This fact suggests that, even though one player can dominate the game, dominating an unsolvable QSAT game might be harder than dominating a solvable one. In terms of the global correctness ratio, both of them reach 100\% correctness, which means that the neural MCTS not only makes the correct decision but also constructively supports it. \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth]{sat.png} \caption{Local correctness ratio measured for a solvable QBF. The matrix of the QBF is listed on the right side in QDIMACS format.} \label{qbfsat} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth]{unsat.png} \caption{Local correctness ratio measured for an unsolvable QBF. The matrix of the QBF is listed on the right side in QDIMACS format.} \label{qbfunsat} \end{figure} \section{Discussion}\label{discuss} \subsection{Exploration v.s. Exploitation} One of the known issues of self-play is that the two players will always mutually bias their strategies to fit the other's through exploiting their experiences. This mutual inductive bias facilitates the learning process of the players when they are at the same pace. However, once the learning speeds are unbalanced, the mutual inductive bias foils the improvement of the players' performance by stagnating their strategies in a local optimum. To understand this issue, one can think about a game between an expert and a newbie. Since the expert can easily find a strategy to win against the newbie, the newbie will always lose the game. Because there is no positive feedback at all, the newbie will build a biased belief that there is no way to win the game. Such a belief can be strengthened during self-play, and finally, it leads to some fixed losing strategy. On the other side, since the opponent is not challenging, the expert will also stay with the current strategy without any attempt to improve it. Nevertheless, we notice that neural MCTS is resilient to mutual inductive bias. Whenever the learning paces are unbalanced, the weaker player's decisions become indifferent (i.e., no matter what moves she takes, she will always lose the game). On the other hand, neural MCTS pushes those indifferent actions towards a uniform distribution, thus encouraging exploration through random moves.
Consequently, neural MCTS adaptively balances exploration and exploitation, thus jumping out of the local optimum. \subsection{State Space Coverage} Neural MCTS is capable of handling a large state space \cite{alpha0}. Such an algorithm must search only a small portion of the state space and make its decisions from those limited observations. To measure the state space coverage, we recorded the number of states accessed during the experiment: in each QSAT game, we count the total number of states accessed during each self-play game, and we compute the 10-game moving average of states accessed across all self-plays (Fig. \ref{coverage}). This result indicates an implicit adaptive pruning mechanism behind the neural MCTS algorithm, which can be regarded as a justification for its capability of handling large state spaces. \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth]{coverage.png} \caption{Average states accessed during self-play for the QSAT problem described in Fig. \ref{qbfsat}. As a comparison, there are 226599 states in total.} \label{coverage} \end{figure} \subsection{Limitation} Our test cases are restricted to a limited size. Because QSAT is known to be PSPACE-complete, verifying the correctness of the algorithm is time-consuming. In our experiment, there are 10 to 11 moves for each player. Hence to verify the correctness of the algorithm, it roughly takes $2^{10}$ to $2^{11}$ tests, and the verification time increases exponentially with the number of variables in the QBF. On the other hand, the strategy learned by the neural MCTS algorithm is implicitly encoded inside the neural network, and there is no way to extract such a strategy so that it can be explicitly verified by more efficient approaches from formal methods. Therefore, using an enumerator to verify the correctness is inevitable for the time being. As a result, even though neural MCTS can handle a deep game tree, and hence a large number of variables, it is still hard or even impossible to verify the learning outcome. \section{Conclusion}\label{conclusion} In this work, intrigued by the astonishing achievements of AlphaZero, we attempt to leverage the computational power of the neural MCTS algorithm to solve a practical problem: QSAT. We make two main contributions. First, we propose a way to encode QBFs as undirected multigraphs, which bridges the logic formula representation of QBFs with the graph neural network input. Second, we use two separate graph neural networks to build our neural MCTS variant. Such a design can significantly reduce the learning confusion caused by the asymmetry between the two players. Our evaluation is based on two metrics: the local and the global correctness ratio. The local metric, computed with an off-the-shelf QSAT solver, only focuses on correctness within a single game, yet it imposes no constraints on the number of variables in the formula; the global metric, relying on an enumerator, can determine the exact correctness of the learned neural MCTS, but it is sensitive to the number of variables in the formula. Our experimental results are positive on the given limited-size test cases, which justifies the feasibility of our idea to some extent. For future work, it may be worthwhile to figure out how to explain the learned neural MCTS or how to extract the generated strategy from the neural network. It would also be useful to study how to optimize the current algorithm so that it can handle larger cases.
Our objective is not necessarily to improve the state-of-the-art of hand-crafted problem solvers in specific areas but to illustrate that there is a generic algorithm (neural MCTS) that can solve well-known problems tabula-rasa. The hope is that neural MCTS will help solve future algorithmic problems that have not yet been solved by humans. We view neural MCTS as a future aid to humans in solving algorithmic problems. We also hope our research sheds some light on the remarkable but mysterious learning ability of the neural MCTS algorithm from AlphaZero. \bibliographystyle{aaai}
\section{Introduction} In this paper we want to discuss the notion of viscosity solution for geometric equations, describing weak front propagation in step two Carnot groups, of the form \begin{equation}\label{eqflow} u_t(x,t)+F(x,t,Xu(x,t),X^2u(x,t))=0,\quad(x,t)\in\mathbb R^n\times(0,+\infty). \end{equation} Here the operator $F=F(x,t,q,A)$, $F:\mathbb R^n\times(0,+\infty)\times\mathbb R^m\backslash\{0\}\times{\mathcal S}^m\to\mathbb R$ is elliptic and {\it geometric}, meaning that it is positively one homogeneous in the pair $(q,A)\in\mathbb R^m\backslash\{0\}\times{\mathcal S}^m$ and invariant in the last argument with respect to matrices of the form $\mu\; q\otimes q$, $\mu\in\mathbb R$, as we make it more precise later. The notation ${\mathcal S}^m$ indicates the set of symmetric $m\times m$ matrices, $n,\;m\geq2$. Therefore it is possible that $F$ has a singularity at $q=0$ and we assume that it behaves nicely, namely $$F_*(x,t,0,\mathbb{O})=F^*(x,t,0,\mathbb{O})=0,$$ where the stars above indicate the lower and upper semicontinuous envelopes, respectively. The notation $Xu$ indicates the horizontal gradient with respect to a family of vector fields $\{X_1,\dots,X_m\}$, seen as differential operators, \begin{equation}\label{eqfields} X_j=\sum_{i=1}^n\sigma_{i,j}(x)\partial_i,\quad j\in\{1,\dots,m\}, \end{equation} generators of a step two Carnot group. In particular, for a smooth function $u$, $Xu=\nabla u\;\sigma(x)$, $\sigma=(\sigma_{i,j})_{i,j}$ and if $m<n$ the equation (\ref{eqflow}) has singularities when $Xu=0$, i.e. at characteristic points of the level set of $u$, therefore on a subspace of positive dimension. Notation $X^2u$ indicates instead the horizontal hessian, namely $X^2u=\left(X_iX_ju\right)^*_{i,j=1,\dots,m}$, the symmetrised matrix of second derivatives. This compares to the usual euclidian case when $\sigma\equiv I_{n}$ is the identity matrix, where $Xu\equiv \nabla u$ is the standard gradient, and the singularity is just at the origin. In the special case when the operator $F:\mathbb{R}^m\backslash\{0\}\times\mathcal{S}^m\to\mathbb{R}$ is defined as \begin{equation}\label{mean curvature hamiltonian dim m} F(q,A)=-\tr[\big(I-\frac{q}{\abs{q}}\otimes\frac{q}{\abs{q}}\big)A], \end{equation} and moreover $m=n$ and $\sigma\equiv I_n$, (\ref{eqflow}) reads as the well known mean curvature flow equation \begin{equation}\label{eqmeanflow} u_t(x,t)-\tr[\big(I-\frac{\nabla u}{\abs{\nabla u}}\otimes\frac{\nabla u}{\abs{\nabla u}}\big)D^2u]=0. \end{equation} In a group setting instead, (\ref{eqmeanflow}) becomes \begin{equation}\label{eqlevelset(vector fieldX)} u_t(x,t)-\sum_{i,j=1}^m\Big(\delta_{ij}-\frac{X_iu(x,t)X_ju(x,t)}{\sum_{k=1}^m(X_ku(x,t))^2}\Big)X_iX_ju(x,t)=0, \end{equation} which is the horizontal mean curvature flow equation in the Carnot group. Due to the presence of singularities and the fact that we do not expect classical solutions in general in (\ref{eqflow}), we will use as usual the notion of viscosity solution, as in Crandall, Ishii, Lions \cite{cil}, Chen, Giga, Goto \cite{cgg}. In our main result, we prove an equivalent notion of solution where we use a restricted class of test functions at singular points, with the property that if the horizontal gradient vanishes, then the horizontal hessian vanishes as well.
This equivalent notion of solution simplifies dealing with singularities and was first proved in the euclidian setting for the mean curvature flow equation by Barles, Georgelin \cite{bage} to study the convergence of numerical schemes. We also use this approach to extend to our setting the notion of generalised flow, introduced as a general and flexible method to study singular limits in pdes giving rise to propagating fronts by Barles and Souganidis \cite{basou} and applied in several situations in the euclidian setting, see also Barles and Da Lio \cite{badl}. As a matter of fact, we will use this notion of solution in a forthcoming paper, when we discuss the singular limit of reaction diffusion equations for anisotropic and degenerate diffusions \cite{deso5}, while we develop here the preliminary needed tools on weak front propagation. This simplified approach, which is particularly helpful when studying approximations of (\ref{eqflow}) of different nature, therefore extends to the Carnot group setting with similar properties. Hopefully it could also prove useful to tackle the comparison principle for viscosity solutions of (\ref{eqflow}), which is still missing in the literature in full generality. To achieve our goal we need to modify the usual approach with the doubling of variables in viscosity solutions, by changing the test function, since the euclidian norm does not work for singular anisotropic equations such as (\ref{eqflow}), and replace it instead with an homogeneous norm, adapted to the Carnot group structure. As an application, we show how one can more easily check that functions are super or subsolutions of (\ref{eqflow}), especially at singular points, by providing explicit examples of super or subsolutions to be used as barriers. If in particular we consider the recent notion of v-convex functions with respect to the family of vector fields, we can prove, coupling our result with a comparison principle, that their level sets become extinct in finite time under the horizontal mean curvature flow equation, by constructing suitable supersolutions of (\ref{eqflow}). Equation (\ref{eqflow}) appears in the level set approach to the weak propagation of hypersurfaces, where we want to discuss the propagation of interfaces, boundaries of open sets, with prescribed normal velocity. In the euclidian space the velocity is usually $V=V(x,\mathbf n,D\mathbf n)$, where $\mathbf n$ is the exterior normal. Indeed, if $\Omega_t\subset\mathbb R^n$ is a family of open sets, $\Gamma_t=\partial\Omega_t$ is the propagating front, and there exists a smooth function $u:\mathbb{R}^n\times[0,+\infty)\to\mathbb{R}$ such that $$\Gamma_t=\{x\in\mathbb{R}^n:u(x,t)=0\},\quad\Omega_t=\{x\in\mathbb{R}^n:u(x,t)>0\},\quad \nabla u\neq0\mbox{ on }\Gamma_t$$ then one computes $$V=\frac{u_t}{|\nabla u|},\quad\mathbf n=-\frac{\nabla u}{|\nabla u|}\quad\mbox{and}\quad D\mathbf n=-\frac{1}{|\nabla u|}\Big(I-\frac{\nabla u\otimes \nabla u}{|\nabla u|^2}\Big)D^2u$$ and so $u$ formally satisfies \begin{equation}\label{level set equation}u_t=G(x,t,\nabla u,D^2u),\end{equation} where $G$ is related to $V$ by $$G(x,t,p,A)=|p|V\Big(x,t,-\frac p{|p|},-\frac1{|p|}(I-\frac{p\otimes p}{|p|^2})A\Big),\qquad(x,p,A)\in\mathbb{R}^n\times\mathbb R^n\backslash\{0\}\times\mathcal S^n.$$ In our case the anisotropy of the velocity will be expressed, for instance, by the fact that $$G(x,t,p,A)=F(p\sigma(x),\;^t\sigma(x)A\sigma(x)),$$ so that as an operator $G(x,t,\nabla u,D^2u)=F(Xu,X^2u)$.
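The identity for $D\mathbf n$ above can be checked symbolically; the following short script (an illustration of ours, using the computer algebra package sympy, and no part of the arguments below) verifies it in $\mathbb R^2$ for a sample function.
\begin{verbatim}
# Symbolic check (sympy) of the level-set identity
#   Dn = -(1/|Du|) (I - Du (x) Du / |Du|^2) D^2u,   n = -Du/|Du|.
# The sample u is an arbitrary smooth choice with Du != 0 a.e.
import sympy as sp

x, y = sp.symbols('x y')
u = x**3 + x*y + y**2                       # sample level-set function
g = sp.Matrix([u.diff(x), u.diff(y)])       # Du, as a column vector
norm = sp.sqrt((g.T * g)[0, 0])             # |Du|
n = -g / norm                               # exterior normal
Dn = n.jacobian([x, y])                     # matrix of derivatives of n
H = sp.hessian(u, (x, y))                   # D^2 u
rhs = -(sp.eye(2) - g * g.T / norm**2) * H / norm
print(sp.simplify(Dn - rhs))                # prints the zero matrix
\end{verbatim}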
The novelty here with respect to the classical cases is that while in the euclidian case $\sigma=I$, and its square is a non degenerate matrix, here the diffusion matrix $\sigma(x)\,^t\sigma(x)$ is not only anisotropic but also degenerate. When the family of vector fields does not span the whole $\mathbb R^n$ at each point, this fact adds metric singularities to the usual ones of geometric equations. The geometric property of the level set approach is based on the fact that if $u$ solves (\ref{eqflow}) and $\psi:\mathbb R\to\mathbb R$ is smooth and increasing, then also $\psi(u)$ solves the same equation. As a consequence, when a comparison principle holds true, it is easy to see that if $u^1_o$ and $u^2_o$ are two initial conditions such that $$\Gamma_o=\{x:u^1_o(x)=0\}=\{x:u^2_o(x)=0\},$$ and $u^1, u^2$ are the corresponding solutions of (\ref{eqflow}), then one has \begin{equation*} \{x:u^1(x,t)=0\}=\Gamma_t=\{x:u^2(x,t)=0\},\quad \mbox{for all }t>0. \end{equation*} One can therefore define the family of closed sets $(\Gamma_t)_t$ to be the geometric flow of the front or interface $\Gamma_o$ with the prescribed normal velocity. The notion of horizontal normal and horizontal mean curvature is due to Danielli, Garofalo, Nhieu \cite{dagani}. Recently equation (\ref{eqflow}) has been studied by several authors. Existence results are available in the work of Capogna, Citti \cite{caci}, who proved existence in Carnot groups by vanishing viscosity riemannian approximations. Dirr, Dragoni, Von Renesse \cite{didr} used stochastic approximations to show existence for more general H\"ormander structures. Capogna, Citti, Manfredini \cite{cacima} prove uniform regularity estimates on the riemannian vanishing viscosity approximations for the flow of graphs, that also apply to prove existence for (\ref{eqflow}) in that case. On uniqueness results the literature is far less complete. Capogna, Citti \cite{caci} proved a comparison principle if either one of the functions to compare is uniformly continuous or their initial condition does not depend on the vertical coordinate, thus avoiding characteristic points in the initial front. A very recent paper by Baspinar, Citti \cite{baci} finds a comparison principle in Carnot groups of step two as a consequence of the fact that all solutions are limits of suitable families of riemannian regularisations. We remark that in \cite{caci}, \cite{didr} the authors use a notion of solution that differs from standard viscosity solutions at singular points. However their notion of solution turns out to be equivalent to viscosity solutions as a consequence of our result. One of the referees pointed out to us the work of Ferrari, Liu and Manfredi \cite{flm} where the authors use an approach similar to ours in the case of the horizontal mean curvature flow equation in the Heisenberg group, and they show a comparison principle for axisymmetric viscosity solutions. We recall that the level set method for geometric flows was proposed by Osher-Sethian \cite{os} for numerical computations of geometric flows. The rigorous theory of weak front evolution started with the work by Evans-Spruck \cite{es} for the mean curvature flow and by Chen-Giga-Goto \cite{cgg} for more general geometric flows. For the mathematical analysis of the level set method via viscosity solutions, the reader is referred to the book by Giga \cite{gi}, where the approach is discussed in detail, see also Souganidis \cite{soug} and the references therein for the main applications of the theory.
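Before turning to the group setting, we record a quick symbolic sanity check (again an illustration of ours, not used in the sequel) of the geometric invariance just recalled: for the euclidian mean curvature operator in $\mathbb R^2$, the operator applied to $\psi(u)$ equals $\psi'(u)$ times the operator applied to $u$, so that $\psi(u)$ solves the same level set equation.
\begin{verbatim}
# Symbolic check (sympy) of the geometric property for the euclidian
# mean curvature operator in R^2:  mc(psi(u)) == psi'(u) * mc(u).
# The sample u and psi are our own illustrative choices.
import sympy as sp

x, y = sp.symbols('x y')
u = x**2 + x*y + y**3                       # sample smooth function

def mc(w):
    """-tr[(I - Dw (x) Dw/|Dw|^2) D^2w], the mean curvature operator."""
    g = sp.Matrix([w.diff(x), w.diff(y)])
    H = sp.hessian(w, (x, y))
    n2 = (g.T * g)[0, 0]
    return -(H.trace() - (g.T * H * g)[0, 0] / n2)

psi = lambda s: s**3 + s                    # psi'(s) = 3 s^2 + 1 > 0
lhs = mc(psi(u))
rhs = (3 * u**2 + 1) * mc(u)                # psi'(u) * mc(u)
print(sp.simplify(lhs - rhs))               # prints 0
\end{verbatim}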
\section{Step two Carnot groups and level set equations on the group} In this paper we consider in $\mathbb R^n$ a family of vector fields ${\mathcal X}=\{X_1,\dots,X_m\}$ written as differential operators as in (\ref{eqfields}) and consider $\sigma:\mathbb R^n\to\mathbb R^{n\times m}$, the matrix valued map of the coefficients. We will indicate by $\sigma_j$ the $j$-th column of $\sigma$ so that $$X_j\;I(x)=\sigma_j(x),\quad j\in\{1,\dots,m\}$$ where $I(x)$ is the identity map in $\mathbb R^n$ and in general $X_j$ applied to a vector valued smooth function $\varphi$ means the vector whose entries are given by $X_j$ applied to the components of $\varphi$. The vector fields of the family are throughout the paper assumed to be generators of a step two Carnot group. To be more precise we rely on the following definition, see the book by Bonfiglioli, Lanconelli, Uguzzoni \cite{bolaug}, to which we refer the reader for an introduction to the subject. \begin{defin} We say that $G=(\mathbb R^n,\circ)$ is a Lie group if $\circ$ is a group operation on $\mathbb R^n$ and the map $(x,y)\mapsto x^{-1}\circ y$ is smooth. \noindent We then say that $(G,\circ,\delta_\lambda)$ is a step two Carnot group if we can split $\mathbb R^n=\mathbb R^m\times\mathbb R^{n-m}$, $x=(x_h,x_v)$, $m<n$, and for all $\lambda>0$ the family of dilations $\delta_\lambda(x)=(\lambda x_h,\lambda^2 x_v)$ are automorphisms of the group (the group is homogeneous). Moreover the family of vector fields $\mathcal X$ are left invariant on $G$ with respect to the group operation, that is for all $\varphi\in C^{\infty}(\mathbb R^n)$ and all $\alpha\in\mathbb R^n$ we have that $$X_j(\varphi(\alpha\circ x))=(X_j\varphi)(\tau_\alpha(x)),\quad j\in\{1,\dots,m\},$$ where $\tau_\alpha(x)=\alpha\circ x$ is the left translation, and the following H\"ormander property is satisfied $$\hbox{span}\{X_i(x),[X_j,X_k](x):i,j,k\in\{1,\dots,m\}\}=\mathbb R^n,\quad\hbox{for all }x\in\mathbb R^n,$$ so that the family of vector fields $\mathcal X$, together with their first order Lie brackets, generates $\mathbb R^n$ at every point (the Carnot group is step two). \noindent The vector fields of the family $\mathcal X$ are said to be generators of the Carnot group. \end{defin} Following \cite{bolaug}, it is then well known that if $\mathcal X$ generate a step two Carnot group, then, by a suitable change of variables, we can suppose that \begin{equation}\label{eqcarnotstr} \sigma(x)=\left(\begin{array}{c} I_m\\^t(Bx_h)\end{array}\right), \end{equation} where $I_m$ is the $m\times m$ identity matrix, $Bx_h=(B^{(1)}x_h,\dots,B^{(n-m)}x_h)$, and $B^{(j)}$, $j\in\{1,\dots,n-m\}$ are skew symmetric, linearly independent, $m\times m$ matrices. In addition, $\mathbb R^n$ has the group structure with the operation $$x\circ y=(x_h+y_h,x_v+y_v+<Bx_h,y_h>),$$ with the notation $<Bx_h,y_h>=(B^{(1)}x_h\cdot y_h,\dots,B^{(n-m)}x_h\cdot y_h)$. With this group operation it is clear that $x^{-1}=-x$ and $0$ is the identity element of the group. Moreover we notice that the jacobian of the left translation $\tau_x$ has the following structure $$D\tau_x=\left(\begin{array}{cc} I_m\quad &{\mathbb O}_{m\times (n-m)}\\^t(Bx_h)&I_{n-m} \end{array}\right),$$ so the first $m$ columns of the jacobian give the matrix $\sigma(x)$.
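As a quick numerical sanity check (no part of the arguments relies on it), the following script of ours verifies the group axioms for this operation and the statement on the jacobian, in the simplest case $m=2$, $n=3$ with a single skew symmetric matrix $B$, i.e. the Heisenberg group recalled in the example below.
\begin{verbatim}
# Numerical check (numpy) of the step two group law
#   x o y = (x_h + y_h, x_v + y_v + <B x_h, y_h>)
# and of the fact that the first m columns of the jacobian of the left
# translation give sigma(x). Here m = 2, n = 3 (Heisenberg group).
import numpy as np

B = np.array([[0., 1.], [-1., 0.]])

def op(x, y):                        # group operation x o y
    return np.concatenate([x[:2] + y[:2],
                           [x[2] + y[2] + B @ x[:2] @ y[:2]]])

def sigma(x):                        # columns are X_1, X_2 at the point x
    return np.vstack([np.eye(2), B @ x[:2]])

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=(3, 3))
assert np.allclose(op(op(x, y), z), op(x, op(y, z)))   # associativity
assert np.allclose(op(x, -x), 0)                       # x^{-1} = -x
assert np.allclose(op(np.zeros(3), x), x)              # 0 is the identity

def dtau(x):                         # jacobian of the left translation tau_x
    J = np.eye(3)
    J[2, :2] = B @ x[:2]
    return J

assert np.allclose(dtau(x)[:, :2], sigma(x))           # first m columns
\end{verbatim}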
It is also good to remember that for $\lambda>0$ the family $\mathcal X$ is homogeneous of degree one with respect to the dilations, namely $$X_j(\varphi(\delta_\lambda(x)))=\lambda(X_j\varphi)(\delta_\lambda(x)),\quad j\in\{1,\dots,m\},$$ for all $\varphi\in C^{\infty}(\mathbb R^n)$. \begin{exa} The well known example of the Heisenberg group comes from $\mathbb R^3=\mathbb R^2\times\mathbb R$ and the single matrix $$B=\left(\begin{array}{cc}0\quad&1\\-1&0\end{array}\right).$$ \end{exa} For our purposes, given a smooth function $u\in C^2(\mathbb R^n)$ we indicate the {\it horizontal gradient} (here gradients are row vectors) as $$Xu(x)=\nabla u(x)\;\sigma(x),$$ and the {\it horizontal hessian} as $$X^2u(x)=\left(X_jX_k\;u(x)\right)^*_{j,k=1,\dots,m}=\;^t\sigma(x)D^2u(x)\sigma(x).$$ We just observe that $A^*=(A+\;^tA)/2$ indicates the symmetrisation and that the first order terms in the second derivatives of $X^2$ cancel out by direct computation, using that $\sigma$ only depends on the first $m$ variables and that the matrices $B^{(j)}$ are skew symmetric. In $\mathbb R^n$, taking advantage of the group structure of the family of vector fields, we want to study the problem of weak front propagation by extending the now classical level set idea. Let $F:\mathbb R^n\times(0,+\infty)\times\mathbb R^m\backslash\{0\}\times\mathcal S^m\to\mathbb{R}$ be a continuous function, locally bounded at points of the form $(x,t,0,A)$, where $\mathcal S^m$ denotes the space of the $m\times m$ symmetric matrices. We assume on $F$ the following structure conditions. \begin{description} \item[(F1)] $F$ satisfies \begin{equation}\label{0 condition} F^*(x,t,0,\mathbb{O})=F_*(x,t,0,\mathbb{O}),\qquad\mbox{for all }(x,t)\in\mathbb R^n\times(0,+\infty); \end{equation} \item[(F2)] $F$ is elliptic, i.e. for any $(x,t)\in\mathbb R^n\times(0,+\infty),\;p\in\mathbb R^m\backslash\{0\}$ and $A,B\in\mathcal S^m$ \begin{equation}\label{ellipticity condition} F(x,t,p,A)\leq F(x,t,p,B),\quad\mbox{if }A\geq B; \end{equation} \item[(F3)] $F$ is \emph{geometric}, i.e., \begin{equation}\label{geometric condition} F(x,t,\lambda p,\lambda A+\mu(p\otimes p))=\lambda F(x,t,p,A)\quad\mbox{for all }\lambda>0\mbox{ and }\mu\in\mathbb{R} \end{equation} for every $(x,t)\in\mathbb R^n\times(0,+\infty),\;p\in\mathbb R^m\backslash\{0\}$ and $A\in\mathcal S^m$. \end{description} In the above, we are using the following notation for the lower semicontinuous extension of $F$ at the singular points. $$F_*(x,t,0,A)=\lim_{r\to0+}\inf\{F(y,t,q,B):q\neq0,\;|(y,q,B)-(x,0,A)|\leq r\},$$ and similarly for the upper semicontinuous extension $F^*$. Notice in particular that the geometric property of $F$ implies $F_*(x,t,0,\mathbb{O})=0$ for all $(x,t)\in\mathbb R^n\times(0,+\infty)$. We want to discuss the notion of solution for the equation \begin{equation}\label{eqlevelset} u_t(x,t)+F(x,t,Xu,X^2u)=0,\quad(x,t)\in\mathbb R^n\times(0,+\infty),\end{equation} where now only the horizontal first and second derivatives of the unknown function appear in the equation.
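The cancellation of the first order terms in $X^2u$ can also be verified symbolically; the following sketch of ours does so in the Heisenberg group for a sample function.
\begin{verbatim}
# Symbolic check (sympy, Heisenberg group) that the symmetrised matrix
# (X_i X_j u)* equals ^t sigma D^2u sigma, i.e. that the first order
# terms cancel out. The sample u is an arbitrary choice.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)
sigma = sp.Matrix([[1, 0], [0, 1], [x2, -x1]])   # columns: X_1, X_2

def X(j, w):                     # the vector field X_j applied to w
    return sum(sigma[i, j] * w.diff(xs[i]) for i in range(3))

u = x1**2 * x3 + sp.sin(x2) * x1                 # sample smooth function

lhs = sp.Matrix(2, 2, lambda i, j:
                (X(i, X(j, u)) + X(j, X(i, u))) / 2)
rhs = sigma.T * sp.hessian(u, xs) * sigma
print(sp.simplify(lhs - rhs))                    # prints the zero matrix
\end{verbatim}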
Notice that in our group setting, the operator $F$ in (\ref{eqlevelset}), written in the usual coordinates of $\mathbb R^n$ becomes \begin{equation}\label{eqopg} G(x,t,p,A)=F(x,t,p\sigma(x),\;^t\sigma(x)A\sigma(x)), \end{equation} $G:((\mathbb R^n\times(0,+\infty)\times\mathbb R^n)\backslash\{(x,t,p):p\sigma(x)=0\})\times {\mathcal S}^n\to\mathbb R.$ \begin{rem} We easily show in a moment that $G$ preserves the assumptions (F1), (F2), (F3), however the singularities of $G$ are not just at the origin but in the whole of the subset $$S=\{(x,t,p,A)\in\mathbb R^n\times(0,+\infty)\times\mathbb R^n\times{\mathcal S}^n:p\sigma(x)=0\},$$ where now for all $(x,t,A)\in\mathbb R^n\times(0,+\infty)\times{\mathcal S}^n$, the set $\{p:(x,t,p,A)\in S\}$ is a varying subspace, not necessarily trivial if the family of vector fields $\mathcal X$ does not span $\mathbb R^n$ at $x$. In this sense the operator $G$ is not covered by the standard theory of the anisotropic operators.\\ Operator $G$ is elliptic since if $A\geq B$, then $^t\sigma(x)A\sigma(x)\geq \;^t\sigma(x)B\sigma(x)$ and thus $G(x,t,p,A)\leq G(x,t,p,B)$.\\ Operator $G$ is also geometric since $$\begin{array}{l} G(x,t,\lambda p,\lambda A+\mu(p\otimes p))=F(x,t,\lambda p\sigma(x),\lambda\;^t\sigma(x) A\sigma(x)+\mu(p\sigma(x)\otimes p\sigma(x)))\\ =\lambda F(x,t, p\sigma(x),\;^t\sigma(x) A\sigma(x))=\lambda G(x,t,p,A). \end{array}$$ Thus (F2), (F3) hold true. \end{rem} We now recall the usual definition of viscosity solution for the level set equation (\ref{eqlevelset}). \begin{defin}\label{defviscsol} An upper (respectively lower) semicontinuous function $u:\mathbb{R}^n\times(0,+\infty)\to\mathbb R$ is a viscosity subsolution (respectively supersolution) of (\ref{eqlevelset}) if and only if for any $\phi\in C^2(\mathbb{R}^n\times(0,+\infty))$, if $(x,t)\in\mathbb{R}^n\times(0,+\infty)$ is a local maximum (respectively minimum) point for $u-\phi$, we have \begin{equation}\label{eqsubsol}\phi_t(x,t)+G_*\big(x,t,\nabla \phi(x,t),D^2\phi(x,t)\big) \leq0 \end{equation} (respectively $\phi_t(x,t)+G^*\big(x,t,\nabla \phi(x,t),D^2\phi(x,t)\big) \geq0$), where $G$ is given in (\ref{eqopg}). A viscosity solution of (\ref{eqlevelset}) is a continuous function $u:\mathbb{R}^n\times(0,+\infty)\to\mathbb R$ which is both a subsolution and a supersolution. \end{defin} \begin{rem}\label{remtech} In the previous definition the lower semicontinuous extension of $G$ at the singular points where $p\sigma(x)=0$ is $$\begin{array}{ll} G_*(x,t,p,A)&\\ =\lim_{r\to0+}\inf\{G(y,s,q,B):(y,s,q,B)\in\hbox{dom }(G),\;|(x-y,t-s,p-q,A-B)|\leq r\}\\ =\lim_{r\to0+}\inf\{F(y,s,q\sigma(y),\;^t\sigma(y)B\sigma(y)):q\sigma(y)\neq0, \quad|(x-y,t-s,p-q,A-B)|\leq r\}. \end{array}$$ In particular from $p\sigma(x)=0$ we have $$\begin{array}{ll} F_*(x,t,0,\;^t\sigma(x)A\sigma(x))\leq G_*(x,t,p,A) \leq G^*(x,t,p,A)\leq F^*(x,t,0,\;^t\sigma(x)A\sigma(x)). \end{array}$$ Thus if $p\sigma(x)=0$ and $^t\sigma(x)A\sigma(x)={\mathbb O}$, then $G_*(x,t,p,A)=G^*(x,t,p,A)=0$, so a counterpart of (F1) holds for $G$.\\ In Definition \ref{defviscsol}, if $X\phi(x,t)\neq0$, then (\ref{eqsubsol}) is equivalently written as \begin{equation} \phi_t(x,t)+F\big(x,t,X\phi(x,t),X^2\phi(x,t)\big)\leq0, \end{equation} and the extended operator $G_*$ only appears when $X\phi(x,t)=0$. Therefore at singular points the notion of viscosity subsolution is stronger than one would get requiring \begin{equation}\label{eqfakesubsol} \phi_t(x,t)+F_*\big(x,t,X\phi(x,t),X^2\phi(x,t)\big) \leq0 \end{equation} instead of (\ref{eqsubsol}).
Notice that in the special case (\ref{mean curvature hamiltonian dim m}), if $p\sigma(x)=0$, $$F_*(x,t,0,A)=\min_{|p|=1}\{-\hbox{tr }(I-p\otimes p)A\} $$ and this is used in \cite{caci} or in \cite{didr} to define (weak-)subsolutions of the horizontal mean curvature flow equation, by requiring (\ref{eqfakesubsol}) instead of (\ref{eqsubsol}). \end{rem} \section{Viscosity solutions} In this section we consider equation (\ref{eqlevelset}) and prove an equivalent definition of viscosity solution. This result extends \cite{bage} to our setting and simplifies the treatment of singularities of equation (\ref{eqlevelset}) by restricting the family of test functions at characteristic points. When it is necessary to emphasise the variable $x$ in which we are computing the vector fields $X_i$ (and with respect to which we are computing the derivatives), we will denote the horizontal gradient and the horizontal Hessian matrix as $X_x$ and $X_x^2$. For example if $H(x,y)$ is a $C^2$ function defined in $\mathbb R^n\times\mathbb R^n$ and $(x_o,y_o)$ is a generic point of $\mathbb R^n\times\mathbb R^n$ we will denote with $X_xH(x_o,y_o)$ the horizontal gradient of $H$ with respect to the variable $x$ and with $X_yH(x_o,y_o)$ the horizontal gradient of $H$ with respect to $y$, both computed at the point $(x_o,y_o)$. Analogous definitions hold for $X^2_{x}H(x_o,y_o)$ and $X^2_{y}H(x_o,y_o)$. We consider an homogeneous (with respect to any dilatation $\delta_\lambda$, $\lambda>0$) norm on $\mathbb R^n$, \begin{equation}\label{norm}\norm{x}_{G}=[|x_h|^4+|x_v|^2]^{1/4},\end{equation} and we define a left invariant metric $d_{G}:\mathbb R^n\times\mathbb R^n\to[0,+\infty)$ as \begin{equation}\label{distance} d_{G}(x,y)=\norm{x^{-1}\circ y}_G =[\abs{y_h-x_h}^4+\abs{y_v-x_v-\langle Bx_h,y_h\rangle}^2]^{1/4}. \end{equation} \begin{rem} Here we make some comments on the definitions (\ref{norm}) and (\ref{distance}). Dealing with fully nonlinear partial differential equations with singularities poses a number of additional difficulties. Viscosity solutions theory can cope with these difficulties since the work of Evans-Spruck \cite{es} and Chen-Giga-Goto \cite{cgg}. The horizontal mean curvature flow equation adds further difficulties since the singularity does not just appear when the gradient of the solution vanishes, but rather when the horizontal gradient vanishes, so when the gradient takes its values in a nontrivial subspace. In some key steps of the proofs, the standard euclidian distance does not work and one has to think of something different. One natural choice would be to exchange the euclidian distance with the Carnot-Caratheodory distance. This distance however is not smooth, being only locally H\"older continuous. Therefore, due to the nature of Carnot groups, one thinks of distance functions related to homogeneous norms, which are equivalent to the euclidian distance but smooth. One well known example is the norm in (\ref{norm}). This one works well in step two groups, at least for the results we prove, but not for the comparison principle, one reason being that the group operation is not commutative and this makes the distance not symmetric. In groups of higher step, one has a natural homogeneous distance with more terms, making the computations in this section more complex. Moreover we are often using the structure (\ref{eqcarnotstr}), which is valid specifically in step two groups.
There might be additional difficulties due to the fact that Carnot groups with step higher than two differ in some important geometric properties. Nonetheless step two groups already have important applications that make their study quite interesting, as for instance in models of the visual cortex, see \cite{baci} and the references therein for details. \end{rem} We start proving a nice property of the homogeneous metric $d_G$ defined in (\ref{distance}). \begin{lem}\label{lemma:property norm in Carnot group2} Put $N(x)=\norm{x}_{G}^4$ for any $x\in\mathbb{R}^{n}$. Then \begin{enumerate}[(i)] \item $\{x\in \mathbb R^n:|XN(x)|=0\}=\{x\in \mathbb R^n:X^2N(x)=\mathbb{O}\}=\{x\in \mathbb R^n:x_h=0\}$. \item $|X_xd_{G}^4(x,y)|=|X_yd_{G}^4(x,y)|$ and $X_x^2d_{G}^4(x,y)=X_y^2d_{G}^4(x,y)$ for any $x,y\in \mathbb R^n$; moreover they all have as zero-set the set $\{(x,y)\in \mathbb R^n\times\mathbb R^n:x_h=y_h\}$. \end{enumerate} \end{lem} \begin{proof}(i) The proof of the first point follows by some simple computations. In fact since $$XN(x)=4\abs{x_h}^2x_h+2\sum_{k=1}^{n-m}{(x_v)}_kB^{(k)}x_h,$$ we have (notice that, since the matrices $B^{(k)}$ are all skew symmetric, the mixed products are all null) $$|XN(x)|^2=16|x_h|^6+4\left|\sum_{k=1}^{n-m}{(x_v)}_kB^{(k)}x_h\right|^2=16|x_h|^6+4\sum_{k,l=1}^{n-m}(x_v)_k(x_v)_l\langle B^{(k)}x_h,B^{(l)}x_h\rangle.$$ Thus $XN(x)=0$ if and only if $x_h=0$. Moreover $$X^2N(x)=4|x_h|^2I_m+8x_h\otimes x_h+2\sum_{k=1}^{n-m}B^{(k)} x_h\otimes B^{(k)}x_h,$$ which is null at $x_h=0$. \noindent (ii) First of all we observe that, since the vector fields $X_i$ are left invariant with respect to the group operation, we have \begin{align}\label{eqdistnorm} X_yd_{G}^4(x,y)&=X_yN(x^{-1}\circ y)=(XN)(x^{-1}\circ y)\\ X_y^2d_{G}^4(x,y)&=(X^2N)(x^{-1}\circ y) \end{align} and so by point (i) $X_yd_{G}^4(x,y)$ and $X^2_yd_{G}^4(x,y)$ are null if and only if $(x^{-1}\circ y)_h=0$, i.e. $y_h=x_h$. To compute the horizontal gradient and the horizontal Hessian matrix with respect to the $x$ variable we observe that, since $N(x^{-1})=N(-x)=N(x)$, it holds $d_{G}^4(x,y)=N(x^{-1}\circ y)=N(y^{-1}\circ x)$ and, by left invariance of the vector fields, $$X_xd_{G}^4(x,y)=(XN)(y^{-1}\circ x),\quad X_x^2d_{G}^4(x,y)=(X^2N)(y^{-1}\circ x).$$ Again $X_xd_{G}^4(x,y)$ and $X^2_xd_{G}^4(x,y)$ are null exactly when $y_h=x_h$. Finally we observe that $\abs{X_yd_{G}^4(x,y)}^2=\abs{X_xd_{G}^4(x,y)}^2$ and $X^2_yd_{G}^4(x,y)=X^2_xd_{G}^4(x,y)$. \end{proof} We use the previous Lemma to prove an equivalent definition of solution other than Definition \ref{defviscsol}, which is the usual definition of viscosity solution for the equation (\ref{eqlevelset}). The definition will only change at singular points of the differential operator.
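Before stating it, we note that the computations in the proof of Lemma \ref{lemma:property norm in Carnot group2} are easy to check symbolically; the following sketch of ours does so in the Heisenberg group.
\begin{verbatim}
# Symbolic check (sympy, Heisenberg group) of the formulas for XN and
# X^2N in the proof of the lemma, with N(x) = |x_h|^4 + |x_v|^2.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)
sigma = sp.Matrix([[1, 0], [0, 1], [x2, -x1]])      # B = [[0,1],[-1,0]]
N = (x1**2 + x2**2)**2 + x3**2

XN = sigma.T * sp.Matrix([N.diff(v) for v in xs])   # horizontal gradient
X2N = sigma.T * sp.hessian(N, xs) * sigma           # horizontal hessian

xh = sp.Matrix([x1, x2])
Bxh = sp.Matrix([x2, -x1])
claimed_X = 4 * (x1**2 + x2**2) * xh + 2 * x3 * Bxh
claimed_X2 = (4 * (x1**2 + x2**2) * sp.eye(2)
              + 8 * xh * xh.T + 2 * Bxh * Bxh.T)
print(sp.simplify(XN - claimed_X))     # zero vector: both vanish iff x_h = 0
print(sp.simplify(X2N - claimed_X2))   # zero matrix
\end{verbatim}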
\begin{thm}\label{thmbg} An upper (respectively lower) semicontinuous function $u$ is a viscosity subsolution (respectively supersolution) of (\ref{eqlevelset}) if and only if for any $\phi\in C^2(\mathbb{R}^n\times(0,+\infty))$, if $(x,t)\in\mathbb{R}^n\times(0,+\infty)$ is a local maximum (respectively minimum) point for $u-\phi$, one has \begin{equation}\label{condition1}\frac{\partial\phi(x,t)}{\partial t}+F(x,t,X\phi(x,t),X^2\phi(x,t)) \leq0\quad\mbox{if }X\phi(x,t)\neq0 \end{equation} and \begin{equation}\label{condition2}\frac{\partial\phi(x,t)}{\partial t}\leq0\quad\mbox{if}\quad X\phi(x,t)=0\;\mbox{and}\;X^2\phi(x,t)=0,\end{equation} (respectively $$\frac{\partial\phi(x,t)}{\partial t}+F(x,t,X\phi(x,t),X^2\phi(x,t)) \geq0\quad\mbox{if }X\phi(x,t)\neq0$$ and \begin{equation}\label{condition2 supersol} \frac{\partial\phi(x,t)}{\partial t}\geq0\quad\mbox{if}\quad X\phi(x,t)=0\;\mbox{and}\;X^2\phi(x,t)=0). \end{equation} \end{thm} \begin{proof} We only show the result for subsolutions, the other part being similar. It is clear that a viscosity subsolution will satisfy (\ref{condition2}) since $G_*(x,t,p,\mathbb{O})=0$ if $p\sigma(x)=0$ by Remark \ref{remtech} and (F1). Let $u$ be an upper semicontinuous function which satisfies (\ref{condition1}) and (\ref{condition2}). Consider $\phi\in C^2(\mathbb{R}^n\times(0,+\infty))$ and $(\hat x,\hat t)\in\mathbb{R}^n\times(0,+\infty)$ a local maximum point for $u-\phi$ such that $X\phi(\hat x,\hat t)=0$ and $X^2\phi(\hat x,\hat t)\neq0$. Without loss of generality we can assume that $(\hat x,\hat t)$ is a strict local maximum point for $u-\phi$. We need to prove that \begin{equation}\label{thesis} \frac{\partial\phi(\hat x,\hat t)}{\partial t}+G_*(\hat x,\hat t,\nabla\phi(\hat x,\hat t),D^2\phi(\hat x,\hat t))\leq0. \end{equation} For any $\varepsilon>0$ we consider the function $$\psi_\varepsilon(x,y,t)=u(x,t)-\frac{d_{G}^4(x,y)}{\varepsilon}-\phi(y,t).$$ By standard arguments one proves that for $\varepsilon$ sufficiently small there is a family of local maxima $(x_\varepsilon,y_\varepsilon,t_\varepsilon)$ of $\psi_\varepsilon$ such that $(x_\varepsilon,y_\varepsilon,t_\varepsilon)$ converges to $(\hat x,\hat x,\hat t)$. Indeed, if $(x_\varepsilon,y_\varepsilon,t_\varepsilon)$ are the maximum points of $\psi_\varepsilon$ in a small compact neighborhood of $(\hat x,\hat x,\hat t)$, $(x_\varepsilon,y_\varepsilon,t_\varepsilon)$ will converge to some $(\bar x,\bar y,\bar t)$ (passing to a subsequence if necessary). One first uses $\psi_\varepsilon(x_\varepsilon,y_\varepsilon,t_\varepsilon)\geq\psi_\varepsilon(\hat x,\hat x, \hat t)$ to show that $\bar x=\bar y$ and next by taking the limit that $(\bar x,\bar t)$ is a maximum of $u-\phi$ in the neighborhood so that $(\bar x,\bar t)=(\hat x,\hat t)$. Moreover since the function $y\mapsto\psi_\varepsilon(x_\varepsilon,y,t_\varepsilon)$ has a local maximum at $y_\varepsilon$ we have $$\nabla \phi(y_\varepsilon,t_\varepsilon)=-\frac{D_yd_{G}^4(x_\varepsilon,y_\varepsilon)}{\varepsilon},\quad D^2\phi(y_\varepsilon,t_\varepsilon)\geq-\frac{D^2_yd_{G}^4(x_\varepsilon,y_\varepsilon)}{\varepsilon}.$$ Thus \begin{equation}\label{max consequence1} X\phi(y_\varepsilon,t_\varepsilon)=-\frac{X_yd_{G}^4(x_\varepsilon,y_\varepsilon)}{\varepsilon},\quad X^2\phi(y_\varepsilon,t_\varepsilon)\geq-\frac{X^2_yd_{G}^4(x_\varepsilon,y_\varepsilon)}{\varepsilon}. \end{equation} Two cases may now occur. 1. $X\phi(y_\varepsilon,t_\varepsilon)=0$ along a subsequence.
This means that $X_yd_{G}^4(x_\varepsilon,y_\varepsilon)=0$ and by Lemma \ref{lemma:property norm in Carnot group2}, $(x_\varepsilon)_h=(y_\varepsilon)_h$. Since the map $(x,t)\mapsto u(x,t)- \varphi(x,t)$, with $\varphi(x,t)=\frac{d^4_{G}(x,y_\varepsilon)}{\varepsilon}+\phi(y_\varepsilon,t)$ attains a maximum at $(x_\varepsilon,t_\varepsilon)$ and $$X\varphi(x,t)=0\Leftrightarrow x_h=(y_\varepsilon)_h\Leftrightarrow X^2\varphi(x,t)=0,$$ by (\ref{condition2}) we get $$\frac{\partial\varphi}{\partial t}(x_\varepsilon,t_\varepsilon)=\partial_t\phi(y_\varepsilon,t_\varepsilon)\leq0.$$ For future reference we remark that the test function $\varphi$ satisfies in a neighborhood of $(\hat x,\hat t)$: $X\varphi=0$ implies $X^2\varphi=0$. We proceed, and by (\ref{max consequence1}) and $(x_\varepsilon)_h=(y_\varepsilon)_h$, we get that $X^2\phi(y_\varepsilon,t_\varepsilon)\geq\mathbb{O}$. Using the ellipticity of $F$ and Remark \ref{remtech}, it holds $$\begin{array}{l} \partial_t\phi(y_\varepsilon,t_\varepsilon)+G_*(y_\varepsilon,t_\varepsilon,{\nabla\phi(y_\varepsilon,t_\varepsilon)},D^2\phi(y_\varepsilon,t_\varepsilon))\leq\partial_t\phi(y_\varepsilon,t_\varepsilon)+F^*(y_\varepsilon,t_\varepsilon,X\phi(y_\varepsilon,t_\varepsilon),X^2\phi(y_\varepsilon,t_\varepsilon))\\ \leq\partial_t\phi(y_\varepsilon,t_\varepsilon)+F^*(y_\varepsilon,t_\varepsilon,0,\mathbb{O}_{m\times m}) =\partial_t\phi(y_\varepsilon,t_\varepsilon)\leq0 \end{array}$$ and we conclude by letting $\varepsilon$ go to 0. 2. $X\phi(y_\varepsilon,t_\varepsilon)\neq0$ for all $\varepsilon$ sufficiently small. Using (\ref{max consequence1}) and the previous Lemma this means $(y_\varepsilon)_h\neq (x_\varepsilon)_h$. Moreover the point $(x_\varepsilon,t_\varepsilon)$ is a maximum for $$\begin{array}{ll} (x,t)\mapsto\psi_\varepsilon(x,x\circ x_\varepsilon^{-1}\circ y_\varepsilon,t)&\displaystyle=u(x,t)-\frac{d^4_{G}(x_\varepsilon,y_\varepsilon)}{\varepsilon}-\phi(x\circ x_\varepsilon^{-1}\circ y_\varepsilon,t)\\ &\displaystyle=:u(x,t)-\varphi(x,t), \end{array}$$ since $d_G^4(x,x\circ x^{-1}_\varepsilon\circ y_\varepsilon)=N(x^{-1}_\varepsilon\circ y_\varepsilon)=d^4_G(x_\varepsilon,y_\varepsilon)$. Let $\tilde \tau_\alpha(x)=x\circ\alpha$ be the right translation by $\alpha$ and $D{\tilde\tau_\alpha}(x)\equiv D{\tilde\tau_\alpha}$ its Jacobian matrix.
A simple computation shows that $D{\tilde\tau_\alpha}$ has the form \begin{align*}D{\tilde\tau_\alpha}&= \left(\begin{array}{c|c} \mathbb{I}_m&\mathbb{O}_{m\times (n-m)}\\ \hline \\ \;^t((\;^tB^{(1)})\alpha_h)&\\ \vdots&\mathbb{I}_{n-m}\\ \;^t((\;^tB^{(n-m)})\alpha_h)&\end{array}\right)=\left(\begin{array}{c|c} \mathbb{I}_m&\mathbb{O}_{m\times (n-m)}\\ \hline \\ \;^t(-B^{(1)}\alpha_h)&\\ \vdots&\mathbb{I}_{n-m}\\ \;^t(-B^{(n-m)}\alpha_h)&\end{array}\right)\\\\ &=\left(\begin{array}{c|c} \mathbb{I}_m&\mathbb{O}_{m\times (n-m)}\\ \hline \\ -\;^t(B\alpha_h)&\mathbb{I}_{n-m} \end{array}\right).\end{align*} By the chain rule we get $$\begin{array}{ll} X\varphi(x_\varepsilon,t_\varepsilon)&=\;^t\sigma(x_\varepsilon)\;^tD{\tilde\tau_{x_\varepsilon^{-1}\circ y_\varepsilon}}\nabla \phi(\tilde\tau_{x_\varepsilon^{-1}\circ y_\varepsilon}(x_\varepsilon),t_\varepsilon) =\;^t\big(D{\tilde\tau_{x_\varepsilon^{-1}\circ y_\varepsilon}}\sigma(x_\varepsilon)\big)\nabla \phi(y_\varepsilon,t_\varepsilon)\\ &=\;^t\sigma(2x_\varepsilon-y_\varepsilon)\nabla \phi(y_\varepsilon,t_\varepsilon) \longrightarrow X\phi(\hat x,\hat t)=0,\quad\mbox{as }\varepsilon\to0,\end{array}$$ since $(x_\varepsilon)_h-(x_\varepsilon^{-1}\circ y_\varepsilon)_h=(x_\varepsilon\circ y_\varepsilon^{-1}\circ x_\varepsilon)_h=(2x_\varepsilon-y_\varepsilon)_h$, and $$\begin{array}{ll} X^2\varphi(x_\varepsilon,t_\varepsilon)&=\;^t\sigma(x_\varepsilon)\;^tD{\tilde\tau_{x_\varepsilon^{-1}\circ y_\varepsilon}}D^2\phi(\tilde\tau_{x_\varepsilon^{-1}\circ y_\varepsilon}(x_\varepsilon),t_\varepsilon)D{\tilde\tau_{x_\varepsilon^{-1}\circ y_\varepsilon}}\sigma(x_\varepsilon)\\ &=\;^t\sigma(2x_\varepsilon-y_\varepsilon)D^2\phi(y_\varepsilon,t_\varepsilon)\sigma(2x_\varepsilon-y_\varepsilon) \longrightarrow X^2\phi(\hat x,\hat t)\neq0,\quad\mbox{as }\varepsilon\to0.\end{array}$$ Moreover we show that $X\varphi(x_\varepsilon,t_\varepsilon)\neq0$. In fact, as $u(x_\varepsilon,t)-\frac1\varepsilon d_G^4(x_\varepsilon,y)-\phi(y,t)$ has a maximum at $(y,t)=(y_\varepsilon,t_\varepsilon)$, $$\begin{array}{ll} X\varphi(x_\varepsilon,t_\varepsilon)&\displaystyle=\;^t\sigma(2x_\varepsilon-y_\varepsilon)\nabla \phi(y_\varepsilon,t_\varepsilon)=-\varepsilon^{-1}\;^t\sigma(2x_\varepsilon-y_\varepsilon)\nabla_yd_{G}^4(x_\varepsilon,y_\varepsilon)\\ &\displaystyle=-\varepsilon^{-1}\;^t\sigma(2x_\varepsilon-y_\varepsilon)\;^tD{\tau_{x_\varepsilon^{-1}}}\nabla N(x_\varepsilon^{-1}\circ y_\varepsilon) =-\varepsilon^{-1}\;^t\sigma(x_\varepsilon-y_\varepsilon)\nabla N(x_\varepsilon^{-1}\circ y_\varepsilon)\\ &\displaystyle=\varepsilon^{-1}\;^t\sigma(x_\varepsilon-y_\varepsilon)\nabla N(y_\varepsilon^{-1}\circ x_\varepsilon) =\varepsilon^{-1}XN(y_\varepsilon^{-1}\circ x_\varepsilon). \end{array}$$ By the previous Lemma \ref{lemma:property norm in Carnot group2} this is null if and only if $(y_\varepsilon)_h=(x_\varepsilon)_h$ and we already know that this cannot be true.
Thus by (\ref{eqsubsol}) it holds $$\frac{\partial\varphi}{\partial t}(x_\varepsilon,t_\varepsilon)+G(x_\varepsilon,t_\varepsilon,\nabla \varphi(x_\varepsilon,t_\varepsilon),D^2\varphi(x_\varepsilon,t_\varepsilon))\leq0$$ and we conclude by letting $\varepsilon\to0$, $$\begin{array}{ll} 0&\displaystyle\geq\liminf_{\varepsilon\to0}\Big(\frac{\partial\varphi}{\partial t}(x_\varepsilon,t_\varepsilon)+G(x_\varepsilon,t_\varepsilon,\nabla\varphi(x_\varepsilon,t_\varepsilon),D^2\varphi(x_\varepsilon,t_\varepsilon))\Big)\\ &\displaystyle\geq\partial_t\phi(\hat x,\hat t)+G_*(\hat x,\hat t,\nabla\phi(\hat x,\hat t),D^2\phi(\hat x,\hat t)).\end{array}$$ \end{proof} \begin{rem}\label{corol:BG in Carnot group2} By a remark during the previous proof, it is not restrictive to assume in Definition \ref{defviscsol} that, if $u$ (respectively $v$) is an upper semicontinuous subsolution (respectively a lower semicontinuous supersolution) of equation (\ref{eqlevelset}) and $\varphi\in C^2(\mathbb{R}^n\times(0,+\infty))$ is a test function for $u$ (resp. for $v$) at the point $(x,t)$, then at any point $(y,s)$ in a neighborhood of $(x,t)$ such that $$X\varphi(y,s)=X\varphi(x,t)=0,$$ it holds $$X^2\varphi(y,s)=0.$$ \end{rem} Complementing Theorem \ref{thmbg} and Remark \ref{remtech}, we obtain the following consequence. It shows, in particular, that the notion of solution for the horizontal mean curvature flow equation used in \cite{caci} or in \cite{didr}, which is different from viscosity solutions at characteristic points, is in fact equivalent to standard viscosity solutions and ours. \begin{cor}\label{corequiv} Let $u:\mathbb R^n\times(0,+\infty)\to\mathbb R$ be an upper (respectively lower) semicontinuous function. Function $u$ is a viscosity subsolution (resp. supersolution) of (\ref{eqlevelset}) if and only if for any $\phi\in C^2(\mathbb{R}^n\times(0,+\infty))$, if $(x,t)\in\mathbb{R}^n\times(0,+\infty)$ is a local maximum (respectively minimum) point for $u-\phi$, one has \begin{equation}\label{condcaci} {\partial_t\phi(x,t)}+F_*(x,t,X\phi(x,t),X^2\phi(x,t)) \leq0\end{equation} (resp. $${\partial_t\phi(x,t)}+F^*(x,t,X\phi(x,t),X^2\phi(x,t)) \geq0.)$$ \end{cor} \begin{proof} Suppose that $(x,t)\in\mathbb{R}^n\times(0,+\infty)$ is a local maximum point for $u-\phi$. If $X\phi(x,t)\neq0$ then $G(x,t,\nabla \phi(x,t),D^2\phi(x,t))=F(x,t,X\phi(x,t),X^2\phi(x,t))$, so there is nothing to prove. We therefore limit ourselves to discuss the case $X\phi(x,t)=0$. If $u$ is a viscosity subsolution, by Remark \ref{remtech} we know that $F_*\leq G_*$, therefore (\ref{condcaci}) is satisfied. If instead we suppose that (\ref{condcaci}) holds true, then by Theorem \ref{thmbg} we limit ourselves to test functions $\phi$ that satisfy: $X\phi(x,t)=0$ implies $X^2\phi(x,t)=\mathbb O$. In this case $F_*(x,t,X\phi(x,t),X^2\phi(x,t))=0$ and then $\partial_t \phi(x,t)\leq0$. Thus by Theorem \ref{thmbg} we know that $u$ is a viscosity subsolution. \end{proof} \begin{rem} In \cite{baci} the authors require a subsolution $u$ of the horizontal mean curvature flow equation to satisfy $$\partial_t\phi(x,t)-\mbox{Tr}X^2\phi(x,t)\leq0,$$ if $u-\phi$ has a maximum at $(x,t)$ and $X\phi(x,t)=0$. If in particular $\phi$ is in the class of test functions such that $X\phi(x,t)=0$ implies $X^2\phi(x,t)=0$, then $\partial_t\phi(x,t)\leq0$. Therefore $u$ is a subsolution in the sense of Theorem \ref{thmbg} and then it is a viscosity subsolution of (\ref{eqlevelset}).
\end{rem} \section{Examples of explicit super or subsolutions} In this section we present examples of super and subsolutions of the geometric equation in the case of the horizontal mean curvature flow equation (mcfe) when $F$ is given in (\ref{mean curvature hamiltonian dim m}). From Theorem \ref{thmbg} we see that when we deal with functions with separated variables like $u(x,t)=\phi(t)+U(x)$ it is easy to check the (mcfe) at singular points of the operator. If $u-\varphi$ has a maximum/minimum at $(x_o,t_{o})$ and $X\varphi(x_o,t_{o})=0$, then we only need to look at the sign of $\varphi_t(x_o,t_{o})$ provided suitable test functions exist, i.e. $X^2\varphi(x_o,t_o)=\mathbb{O}$, otherwise we have nothing to check. We start with a general result in step two Carnot groups, based on the definition of convex functions in the group. The definition of $v$-convex function (as in viscosity-convex) is given in Bardi-Dragoni \cite{badr}, where it is discussed and characterised, and the reader can find explicit examples. \begin{defin} A continuous function $U:\mathbb R^n\to\mathbb R$ is $v$-convex in the Carnot group if there is $\alpha\geq0$ such that, for all test functions $\phi\in C^2$ such that $U-\phi$ has a maximum at $x_o$, one has $X^2\phi(x_o)\geq\alpha I$. If $\alpha>0$ we say that $U$ is strictly v-convex. \end{defin} The idea is to build supersolutions of the (mcfe) from a $v$-convex function. \begin{prop}\label{propsupsol} In a Carnot group of step 2, let $U\in C(\mathbb R^n)$ be a strictly $v$-convex function, with constant $\alpha>0$. Then, for all $c\geq-(m-1)\alpha$ and $r\in\mathbb R$, the function $u(x,t)=ct-U(x)+r$ is a supersolution of (mcfe). Suppose moreover that $U$ is nonnegative. Then if $c=-(m-1)\alpha$ and $r>0$, the initial front $\{x:u(x,0)=0\}=\{x:U(x)=r\}$ becomes extinct before time $\bar t= r/({(m-1)\alpha})$. \end{prop} \begin{proof} In order to check the supersolution condition, we use the alternative definition as in Theorem \ref{thmbg}. Let $\varphi\in C^2(\mathbb R^n\times(0,+\infty))$ be such that $u-\varphi$ has a minimum at $(x_o,t_o)$. Since $U-(-\varphi(\cdot,t_o))$ has a maximum at $x_o$ and $U$ is strictly $v$-convex, then $-X^2\varphi(x_o,t_o)\geq \alpha I$ for some $\alpha>0$. Therefore it cannot be $X\varphi(x_o,t_o)=0$ if $\varphi$ is an appropriate test function, and then $$\partial_t\varphi(x_o,t_o)-\hbox{tr }\left(\left(I-\frac{X\varphi(x_o,t_o)}{|X\varphi(x_o,t_o)|}\otimes\frac{X\varphi(x_o,t_o)}{|X\varphi(x_o,t_o)|}\right)X^2\varphi(x_o,t_o)\right)\geq c+(m-1)\alpha\geq0,$$ provided $c\geq-(m-1)\alpha$. The zero superlevel set of the supersolution $u$ becomes a barrier if a comparison principle holds. At time $t$, if $c=-(m-1)\alpha$, it is given by $\{x:u(x,t)\geq 0\}=\{x:U(x)\leq-(m-1)\alpha t+r\}$ and becomes empty if $t> r/({(m-1)\alpha})$. \end{proof} In the previous proposition, the front may have characteristic points, as we see in some more explicit examples below. To simplify, we now specialise to Heisenberg like groups. Building supersolutions seems to be easier than building subsolutions, in particular if characteristic points are present. Below we consider as reference space $\mathbb R^n=\mathbb R^{m}\times\mathbb R\ni x=(x_h,x_v)$, $m\geq2$ and suppose that $\sigma(x_h,x_v)=\;^t(I_{m},Bx_h)$, where $^tB=-B=B^{-1}$ is an $m\times m$ matrix. Notice that then $Bx_h\cdot x_h=0$, $B^2=-I_{m}$ and $|Bx_h|=|x_h|$. \begin{exa} In the first example we avoid characteristic points.
For $c, r\in\mathbb R$, consider the family of functions $w(x,t)=ct-|x_h|^2+r$. We easily get that $$Xw(x,t)=-2x_h,\quad X^2w(x,t)=-2I_{m\times m}. $$ In particular $|x_h|^2$ is strictly $v$-convex, and we can compute the operator exactly: $$\begin{array}{cc} w_t(x,t)-\hbox{tr}\left(X^2w(x,t)-\frac{X^2wXw\otimes Xw(x,t)}{|Xw(x,t)|^2}\right)= c+2(m-1) \end{array}$$ and thus by Theorem \ref{thmbg} and Proposition \ref{propsupsol}, $w$ is a supersolution for $c\geq-2(m-1)$ and a subsolution for $c\leq -2(m-1)$ in $\mathbb R^n\times(0,+\infty)$, so $w$ is a viscosity solution for $c=-2(m-1)$. Notice that for $r>0$ the zero level set of $w$ is a cylinder with axis $\{x:x_h=0\}$ and it goes extinct at time $t=r/(2(m-1))$. \end{exa} In general it is not as easy to find explicit solutions. \begin{exa} We consider a function built on the gauge function of the Heisenberg group, namely a variation of the homogeneous norm $$u(x,t)=ct-G(x_h,x_v)+r,\quad \hbox{where }G(x_h,x_v)=|x_h|^4+4|x_v|^2,$$ and $c,r$ are constants to be decided later. Notice that the zero level set of $u$ is (we will always take $r>0$ for convenience) $$\{(x,t):u(x,t)=0\}=\{(x,t):G(x)=r+ct\},$$ therefore it is the boundary of a ball for the distance $G^{1/4}$ centred at the origin. It has characteristic points, namely points where $XG(x)=0$, precisely in its intersection with the axis $x_h=0$, as we readily see below. We can easily compute (here we will do complete calculations and not only the sign of $X^2G$ because we also want to check the subsolution condition) $$XG(x)=\nabla G(x)\;\sigma(x)=4|x_h|^2x_h+8x_v \,Bx_h,\quad |XG(x)|^2=16|x_h|^2G(x),$$ $$\begin{array}{cc} X^2G(x)=\;^t\sigma\; D^2G\;\sigma(x)=(I_{m\times m},Bx_h)\left(\begin{array}{cc}8x_h\otimes x_h+4|x_h|^2I_{m\times m}\quad&0\\0&8 \end{array}\right)\;^t(I_{m\times m},\;Bx_h)\\ =8x_h\otimes x_h+4|x_h|^2I_{m\times m}+8Bx_h\otimes Bx_h\geq0. \end{array}$$ Therefore $G$ is v-convex but not strictly v-convex. Finally $$XG\cdot X^2G(x)XG(x)=(48|x_h|^4x_h+96x_v|x_h|^2\;Bx_h)\cdot XG(x)=192|x_h|^4G(x)$$ and since $u$ is smooth, we conclude that, for $x_h\neq0$, $$\begin{array}{cc} u_t(x,t)-\hbox{tr}\left(X^2u(x,t)-\frac{X^2uXu\otimes Xu(x,t)}{|Xu(x,t)|^2}\right)= c+\hbox{tr}\left(X^2G(x)-\frac{X^2GXG\otimes XG(x)}{|XG(x)|^2}\right)\\ =c+(8+4m+8)|x_h|^2 -12|x_h|^2=c+4n|x_h|^2. \end{array}$$ Now we can use our alternative definition to obtain that the viscosity super/subsolution condition is satisfied also at points where the horizontal gradient vanishes. We conclude that: \begin{itemize} \item[(i)]{for $c\geq0$, $u$ is a global supersolution in $\mathbb R^n\times\mathbb R_+$, since $u_t\geq0$; } \item[(ii)]{for $c<0$, $u$ is a subsolution but only in the cylinder $\{x:|x_h|< \sqrt{\frac{-c}{4n}}\}$ around the axis $x_h=0$. } \end{itemize} Compared to the previous example, now $u$ may itself be a test function and then we cannot fulfil a super or subsolution condition just by the lack of test functions. Notice that the level sets of $u$ in the supersolution case, which are propagating (super)fronts, have radius nondecreasing in time and it may even be stationary for $c=0$. Instead the radius decreases in time in the subsolution case where however the diameter of the section of the domain of the subsolution vanishes with $c$. The zero level set at time $t=0$ is contained in the cylinder in (ii) provided $-c>4n\sqrt{r}$ and it goes extinct at time $t=-r/c$.
\end{exa} \begin{exa} Calculations similar to those of the previous example can be made for $w(x,t)=ct-|x|^2+r$: we get $$Xw(x,t)=-(2x_h+2x_vBx_h),\quad X^2w(x,t)=-2(I_{m}+Bx_h\otimes Bx_h),\quad |Xw(x,t)|^2=4|x_h|^2(1+x_v^2) $$ and therefore $$\begin{array}{cc} w_t(x,t)-\hbox{tr}\left(X^2w(x,t)-\frac{X^2wXw\otimes Xw(x,t)}{|Xw(x,t)|^2}\right)= c+2(m-1)+2\frac{|x_h|^2}{1+x_v^2}. \end{array}$$ Again by Theorem \ref{thmbg}, Proposition \ref{propsupsol} and since $|x|^2$ is strictly v-convex, $w$ is a supersolution in $\mathbb R^n\times(0,+\infty)$, for $c\geq-2(m-1)$, and a subsolution for $c<-2(m-1)$ in the open sets $\{x\in\mathbb R^n:|x_h|^2<\varepsilon(1+x_v^2)\}$ if $\varepsilon$ is sufficiently small. \end{exa} We now construct a modification of the second example to build a global subsolution of the mean curvature flow equation whose level sets have characteristic points. We first prove a lemma on change of variables for the horizontal mean curvature operator. \begin{lem} Let $U\in C^2(\mathbb R^n)$ and $\psi:\mathbb R\to\mathbb R$ be smooth with $\psi'>0$. Then for $W=\psi(U)$, if $XU\neq0$, we have $$-\hbox{tr }\left(X^2W-\frac{X^2W\;XW\otimes XW(x)}{|XW(x)|^2} \right)=-\psi'(U)\hbox{tr }\left(X^2U-\frac{X^2U\;XU\otimes XU(x)}{|XU(x)|^2} \right).$$ \end{lem} \begin{proof} It is just a matter of computing terms. We obtain $$\nabla W=\psi'(U)\nabla U,\quad XW(x)=\psi'(U)\;XU(x),\quad |XW(x)|^2=(\psi'(U))^2\;|XU(x)|^2$$ $$\begin{array}{ll}D^2W(x)&=\psi''(U)\;\nabla U\otimes \nabla U(x)+\psi'(U)\;D^2U(x), \\ X^2W(x)&= \psi''(U)\;X U\otimes X U(x)+\psi'(U)\;X^2U(x) \end{array}$$ $$\begin{array}{ll} \hbox{tr }X^2W(x)&=\psi''(U)\;|X U|^2+\psi'(U)\;\hbox{tr }X^2U(x),\\ X^2W\;XW\cdot XW(x)&=(\psi'(U))^2(\psi''(U)\;|XU(x)|^4+\psi'(U)\;X^2U\;XU\cdot XU(x)). \end{array}$$ Finally, putting things together, the terms containing $\psi''$ cancel out. \end{proof} \begin{exa} In this example we consider the function $$v(x,t)=ct-G(x_h,x_v)^{1/2}+r$$ which now is not differentiable at points $(0,t)\in\mathbb R^n\times(0,+\infty)$. However $v$ is locally Lipschitz continuous and is differentiable in the horizontal variables $x_h$. Moreover there is no smooth test function such that $v-\phi$ has a local minimum at $(0,t)$, and if $v-\phi$ has a local maximum at $(0,t)$, then $\phi_t(0,t)=c$ and $\nabla_{x_h}\phi(0,t)=0$ so that $X\phi(0,t)=0$ since $\sigma(0)=(I_{m\times m},0)$. Therefore to check the mean curvature flow equation at such points we only need to look at the sign of $c$ by Theorem \ref{thmbg}. We now proceed at points such that $x_h\neq0$. We use the lemma with $\psi(s)=s^{1/2}$, so that $\psi'(s)=1/(2\psi(s))$, and the calculations of the second example. Again the zero level sets of $v$ are $$\{(x,t):v(x,t)=0\}=\{(x,t):G(x)=(r+ct)^2\},$$ and we check the equation at non characteristic points. We obtain, by the lemma, $$\begin{array}{cc} v_t(x,t)-\hbox{tr}\left(X^2v(x,t)-\frac{X^2vXv\otimes Xv(x,t)}{|Xv(x,t)|^2}\right)= c+\frac1{2G(x)^{1/2}}4n|x_h|^2. \end{array}$$ We conclude that $v$ is a supersolution for $c\geq0$ as before, but now, since ${|x_h|^2}\leq{G(x)^{1/2}}$, $v$ becomes a global subsolution for $c\leq-2n$. If $c=-2n$, the extinction time of the zero level set of the subsolution is $t=r/(2n)$. Finally notice that all functions of the family share the same initial condition at time $t=0$ independently of $c$.
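The computation above can again be checked symbolically; the following sketch of ours verifies, in the Heisenberg group ($m=2$, $n=3$), the expression obtained for the operator away from $x_h=0$.
\begin{verbatim}
# Symbolic check (sympy, Heisenberg group, m = 2, n = 3) that, away from
# x_h = 0, for v = c t - sqrt(G) + r one has
#   v_t - tr[(I - Xv (x) Xv/|Xv|^2) X^2v] = c + 4 n |x_h|^2 / (2 sqrt(G)).
import sympy as sp

x1, x2, x3, t, c, r = sp.symbols('x1 x2 x3 t c r', real=True)
xs = (x1, x2, x3)
sigma = sp.Matrix([[1, 0], [0, 1], [x2, -x1]])
G = (x1**2 + x2**2)**2 + 4 * x3**2
v = c * t - sp.sqrt(G) + r

Xv = sigma.T * sp.Matrix([v.diff(w) for w in xs])
X2v = sigma.T * sp.hessian(v, xs) * sigma
n2 = (Xv.T * Xv)[0, 0]                      # |Xv|^2
op = v.diff(t) - (X2v.trace() - (Xv.T * X2v * Xv)[0, 0] / n2)
target = c + 12 * (x1**2 + x2**2) / (2 * sp.sqrt(G))   # 4n = 12 here
print(sp.simplify(op - target))             # prints 0
\end{verbatim}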
\end{exa} \section{A geometric definition of generalised flow in Carnot groups} In this section we extend the definition of generalised super and subflows introduced by Barles-Souganidis \cite{basou}, later revisited by Barles and Da Lio \cite{badl}, to the setting of level set equations in Carnot groups, also in view of the ideas described in Section 3. This more geometric definition turns out to determine uniquely the geometric flow of a hypersurface if the usual level set equation determines a unique evolution with empty interior (no fattening). This definition has been proven to be much more efficient when dealing with singularly perturbed problems that give rise to geometric flows and we will use it in \cite{deso5} to extend to the Carnot group setting the classical Allen-Cahn approach. In the following definition we follow \cite{badl} with one modification, see Remark \ref{remflow}. \begin{defin}\label{generalized flow in Carnot group def} Let $F:\mathbb R^n\times(0,+\infty)\times\mathbb R^m\backslash\{0\}\times\mathcal S^m\to\mathbb R$ be locally bounded and satisfying (F1-2-3), and let $G$ be defined as in (\ref{eqopg}). A family $(\Omega_t)_{t\in(0,T)}$ (resp. $(\mathcal{F}_t)_{t\in(0,T)}$) of open (resp. closed) subsets of $\mathbb{R}^n$ is called a \emph{generalized superflow} (resp. \emph{subflow}) with normal velocity $-F$ if, for any $x_0\in\mathbb{R}^n$, $t\in(0,T)$, $r>0$, $h>0$ so that $t+h<T$ and for any smooth function $\phi:B(x_0,r]\times[t,t+h]\rightarrow\mathbb{R}$ such that: \begin{description} \item[(i)] $\partial_t\phi(x,s)+G^*(x,s,\nabla \phi(x,s),D^2\phi(x,s))<0$ in $B(x_0,r]\times[t,t+h]$\\ (resp. $\partial_t\phi(x,s)+G_*(x,s,\nabla \phi(x,s),D^2\phi(x,s))>0$ in $B(x_0,r]\times[t,t+h]$), \item[(ii)] for any $s\in[t,t+h]$, $\{x\in B(x_0,r]:\phi(x,s)=0\}\neq\emptyset$ and $$|\nabla \phi(x,s)|\neq0\mbox{ on }\{(x,s)\in B(x_0,r]\times[t,t+h]:\phi(x,s)=0\},$$ \item[(iii)] if there exists a pair $(x,s)\in B(x_0,r]\times[t,t+h]$ so that $\abs{X\phi(x,s)}=0$, then it holds also $\abs{X^2\phi(x,s)}=0$, \item[(iv)] $\{x\in B(x_0,r]:\phi(x,t)\geq0\}\subset\Omega_t$ (resp. $\{x\in B(x_0,r]:\phi(x,t)\leq0\}\subset\mathcal{F}_t^c$), \item[(v)] for all $s\in[t,t+h]$, $\{x\in \partial B(x_0,r]:\phi(x,s)\geq0\}\subset\Omega_s$ (resp. $\{x\in \partial B(x_0,r]:\phi(x,s)\leq0\}\subset\mathcal{F}_s^c$), \end{description} then we have $$\{x\in B(x_0,r]:\phi(x,s)>0\}\subset\Omega_s,\quad (\mbox{resp. }\{x\in B(x_0,r]:\phi(x,s)<0\}\subset\mathcal{F}_s^c,)$$ for every $s\in(t,t+h)$. A family $(\Omega_t)_{t\in(0,T)}$ of open subsets of $\mathbb{R}^n$ is called a \emph{generalized flow} with normal velocity $-F$ if $(\Omega_t)_{t\in(0,T)}$ is a superflow and $(\overline{\Omega}_t)_{t\in(0,T)}$ is a subflow. \end{defin} \begin{rem}\label{remflow} The previous definition focuses on evolution of sets directly instead of looking at the level sets of the solutions of a differential equation. It does this by assuming local comparison with smooth evolutions. Indeed when checking if a collection of open sets provides a superflow, (i) requires the smooth function $\phi$ to be a local strict subsolution, (ii) assumes that the zero level set of $\phi$ is smooth, (iv)-(v) require compatible initial and boundary conditions in the local cylinder between the family of sets and the smooth evolution. The condition (iii) is new and we add it to restrict the family of test functions in view of what we did in Section 3.
As we will see from the proof of the characterisation Theorem \ref{flow-viscositysol Carnot group} below, in view of our Theorem \ref{thmbg}, the condition (iii) can be present or not: the corresponding definitions are equivalent. It follows immediately from Definition \ref{generalized flow in Carnot group def} that a family $(\Omega_t)_{t\in(0,T)}$ of open subsets of $\mathbb{R}^n$ is a generalised superflow with normal velocity $-F$ if and only if $(\Omega_t^c)_{t\in(0,T)}$ is a generalised subflow with normal velocity $F$. \end{rem} We now state and prove the following result which describes the connection between generalised flows and solutions of (\ref{eqlevelset}). \begin{thm} \label{flow-viscositysol Carnot group} \begin{description} \item[(i)]Let $(\Omega_t)_{t\in(0,T)}$ be a family of open subsets of $\mathbb{R}^n$ such that the set $\Omega:=\bigcup_{t\in(0,T)}\Omega_t\times\{t\}$ is open in $\mathbb{R}^n\times[0,T]$. Then $(\Omega_t)_{t\in(0,T)}$ is a generalised superflow with normal velocity $-F$ if and only if the function $\chi=\mathds{1}_{\Omega}-\mathds{1}_{\Omega^c}$ is a viscosity supersolution of (\ref{eqlevelset}). \item[(ii)]Let $(\mathcal{F}_t)_{t\in(0,T)}$ be a family of closed subsets of $\mathbb{R}^n$ such that the set $\mathcal{F}:=\bigcup_{t\in(0,T)}\mathcal{F}_t\times\{t\}$ is closed in $\mathbb{R}^n\times[0,T]$. Then $(\mathcal{F}_t)_{t\in(0,T)}$ is a generalised subflow with normal velocity $-F$ if and only if the function $\overline\chi=\mathds{1}_{\mathcal{F}}-\mathds{1}_{\mathcal{F}^c}$ is a viscosity subsolution of (\ref{eqlevelset}). \end{description} \end{thm} \begin{proof} We adapt to our situation some of the ideas in \cite{badl} and only consider (i) as the other case is similar. We first assume that $\chi=\mathds{1}_{\Omega}-\mathds{1}_{\Omega^c}$ is a supersolution of (\ref{eqlevelset}) and we show that $(\Omega_t)_{t\in(0,T)}$ is a generalised superflow. To do this we consider a smooth function $\phi$, a point $(x_0,t)\in\mathbb{R}^n\times(0,T)$ and $r,h>0$ satisfying conditions (i--v) in Definition \ref{generalized flow in Carnot group def}. We assume that $\phi\leq1$ in $B(x_0,r]\times[t,t+h]$ (otherwise we replace $\phi$ by $\eta\phi$ for $\eta>0$ small enough and use the homogeneity of $F$). We consider $$m:=\min\{\chi(x,s)-\phi(x,s):(x,s)\in B(x_0,r]\times[t,t+h]\}.$$ Since $\phi$ satisfies condition (i) and $\chi$ is a supersolution of equation (\ref{eqlevelset}) in $B(x_0,r)\times(t,t+h)$ (hence, as is well known, see e.g. \cite{bcd}, also in $B(x_0,r)\times(t,t+h]$), the minimum $m$ cannot be attained at an interior point: there $\phi$, up to an additive constant, would be a test function touching $\chi$ from below, and the supersolution property would give $\partial_t\phi+G^*(x,s,\nabla\phi,D^2\phi)\geq0$, contradicting (i). We deduce that the minimum $m$ has to be attained either on $\partial B(x_0,r)$ or at time $t$. Let $(x,s)\in(\partial B(x_0,r)\times[t,t+h])\cup (B(x_0,r]\times\{t\})$. If $x\in\Omega_s$, then $\chi(x,s)=1$ and $(\chi-\phi)(x,s)\geq0$ because $\phi\leq1$ in $B(x_0,r]\times[t,t+h]$. If instead $x\not\in\Omega_s$, then $\chi(x,s)=-1$ and, by (iv) and (v), $\phi(x,s)<0$ at such points; by compactness and the continuity of $\phi$, $(\chi-\phi)(x,s)\geq-1+\delta$ for some $\delta>0$ independent of $(x,s)$. In any case we can conclude that $$\chi(y,s)-\phi(y,s)\geq-1+\delta,\quad (y,s)\in B(x_0,r]\times[t,t+h],$$ in particular $\phi(y,s)\leq-\delta$ if $y\notin\Omega_s$. This means that for every $s\in[t,t+h]$, $$\{y\in B(x_0,r]:\phi(y,s)\geq0\}\cap\Omega_{s}^c=\emptyset,$$ which implies that $(\Omega_t)_{t\in(0,T)}$ is a generalised superflow with normal velocity $-F$.
Conversely, we assume that $(\Omega_t)_{t\in(0,T)}$ is a generalised superflow and we show that $\chi$ is a supersolution of the equation (\ref{eqlevelset}) in $\mathbb{R}^n\times (0,T)$. We consider a point $(x,t)\in\mathbb{R}^n\times(0,T)$ and a function $\phi\in C^\infty(\mathbb{R}^n\times[0,T])$ so that $(x,t)$ is a strict local minimum point of $\chi-\phi$; by adding a constant to $\phi$ if necessary we may assume $\phi(x,t)=0$. We want to show that \begin{equation}\label{thesi flow-viscositysol Carnot group} \partial_t\phi(x,t)+G^*(x,t,\nabla \phi(x,t),D^2\phi(x,t))\geq0. \end{equation} By using the equivalent definition of viscosity solution with a restricted family of test functions, we will suppose that $X^2\phi(y,s)=0$ whenever $|X\phi(y,s)|=0$. When $(x,t)$ is in the interior of either $\{\chi=1\}$ or $\{\chi=-1\}$, then $\chi$ is constant in a neighborhood of $(x,t)$ and therefore $\partial_t\phi(x,t)=0$, $\nabla \phi(x,t)=0$ and $D^2\phi(x,t)\leq0$. Since $F$ satisfies (F1-2), the inequality in (\ref{thesi flow-viscositysol Carnot group}) holds. Assume instead that $(x,t)\in\partial\{\chi=1\}\cap\partial\{\chi=-1\}$. Thus, by the lower semicontinuity of $\chi$, $\chi(x,t)=-1$. We suppose by contradiction that there exists an $\alpha>0$ so that we have $$\partial_t\phi(x,t)+G^*(x,t,\nabla \phi(x,t),D^2\phi(x,t))<-\alpha.$$ We can find $r,h>0$ such that for all $(y,s)\in B(x,r]\times[t-h,t+h]$, \begin{equation}\label{c1 Carnotgroup} \partial_t\phi(y,s)+G^*(y,s,\nabla \phi(y,s),D^2\phi(y,s))<-\frac{\alpha}{2}, \end{equation} and \begin{equation}\label{c2 Carnotgroup} \chi(x,t)-\phi(x,t)=-1<\chi(y,s)-\phi(y,s),\quad(y,s)\neq(x,t).\end{equation} We consider first the case $|\nabla \phi(x,t)|\neq0$; by choosing $r$, $h$ smaller, we assume that $|\nabla \phi|\neq0$ in $B(x,r]\times[t-h,t+h]$. We introduce the test function $\phi_\delta(y,s):=\phi(y,s)+\delta(s-(t-h))$, for $0<\delta\ll1$. Since $\phi(x,t)=0$ and $\nabla \phi(x,t)\neq0$, it is easy to see that if $h$ and $\delta$ are small enough then, for any $t-h\leq s\leq t+h$, the set $\{y\in B(x,r):\phi_\delta(y,s)=0\}$ is not empty. We observe that, for $\delta>0$ small enough, by (\ref{c1 Carnotgroup}) and (\ref{c2 Carnotgroup}), we have \begin{equation}\label{c3}\phi_\delta(y,s)-1<\chi(y,s),\end{equation} for all $(y,s)\in (B(x,r)\times\{t-h\})\cup(\partial B(x,r)\times[t-h,t+h])$ and $$\partial_t\phi_\delta(y,s)+G^*(y,s,\nabla \phi_\delta(y,s),D^2\phi_\delta(y,s))<-\frac{\alpha}{4}$$ for all $(y,s)\in B(x,r]\times[t-h,t+h]$. The inequality (\ref{c3}) implies that $$\{y\in B(x,r]:\phi_\delta(y,t-h)\geq0\}\subset\Omega_{t-h},\mbox{ and }\{y\in\partial B(x,r):\phi_\delta(y,s)\geq0\}\subset\Omega_{s},$$ for all $s\in[t-h,t+h]$. Therefore $\phi_\delta$ satisfies (i-ii-iv-v) in Definition \ref{generalized flow in Carnot group def}. Assumption (iii) holds as well by the assumptions on $\phi$. The definition of superflow then yields $$\{y\in B(x,r]:\phi_\delta(y,s)>0\}\subset\Omega_s,$$ for every $s\in(t-h,t+h)$. Since $\phi_\delta(x,t)=\delta h>0$, we deduce that $x\in\Omega_t$, which contradicts $\chi(x,t)=-1$. Now we turn to the case ${\nabla \phi(x,t)}=0$.
In particular $X\phi(x,t)=0$, $X^2\phi(x,t)=\mathbb O$ by our assumption, and therefore to prove (\ref{thesi flow-viscositysol Carnot group}) it is enough to show that $$\partial_t\phi(x,t)\geq0.$$ We further observe that by the result in \cite{bage} corresponding to our Theorem \ref{thmbg}, we could have restricted $\phi$ to the class of functions such that $\nabla \phi(x,t)=0$ implies $$\frac{\partial^2\phi}{\partial x_i\partial x_j}(x,t)=\frac{\partial^3\phi}{\partial x_i\partial x_j\partial x_k}(x,t)= \frac{\partial^4\phi}{\partial x_i\partial x_j\partial x_k\partial x_l}(x,t)=0$$ for any $i,j,k,l\in\{1,\dots,n\}$, as we do now. Suppose by contradiction that $a:=\partial_t\phi(x,t)<0$. Therefore, by Taylor's formula, $$\phi(y,s)=\partial_t\phi(x,t)(s-t)+o(\abs{s-t}+\abs{y-x}^4)\quad\mbox{as }s\to t,\;\abs{y-x}\to0.$$ Thus, for all $\varepsilon>0$, there exist $r=r_\varepsilon,h=h_\varepsilon,h'=h'_\varepsilon>0$ such that $$h'\leq h,\quad h<-\frac{\varepsilon r^4}{a}$$ and, for any $(y,s)\in B(x,r]\times[t-h,t+h']$, $$\begin{array}{rl} \phi(y,s)&\geq a(s-t)+\frac{a}{2}\abs{s-t}-\varepsilon\abs{y-x}^4\\ &=\frac{a}{2}(s-t)+a(s-t)^+-\varepsilon\abs{y-x}^4 \geq\frac{a}{2}(s-t)-\varepsilon\abs{y-x}^4+ah'. \end{array}$$ Let $d_G(x,y)=\|x^{-1}\circ y\|_G$ be the distance function defined in (\ref{distance}). For any compact set $K\subset\mathbb{R}^n$, by known results, see e.g. Proposition 5.15.1 in \cite{bolaug}, there exists a positive constant $C_K>0$ so that $$\frac{\abs{x-y}}{C_K}\leq d_G(x,y)\leq C_K\abs{x-y}^{1/2},$$ for any $x,y\in K$. Thus, if we put $C_r=(C_{B(x,r]})^4$, we get $$\frac{\abs{x-y}^4}{C_r}\leq N(x^{-1}\circ y)\leq C_r\abs{x-y}^2$$ and by definition of $N$ $$\phi(y,s)\geq\frac{a}{2}(s-t)-\varepsilon C_r N(x^{-1}\circ y)+ah'$$ for any $(y,s)\in B(x,r]\times[t-h,t+h']$. By (\ref{c2 Carnotgroup}) we can take $\beta>0$ such that \begin{equation*}2\beta+\phi(y,s)-1<\chi(y,s)\end{equation*} for all $(y,s)\in (B(x,r]\times\{t-h\})\cup(\partial B(x,r)\times(t-h,t+h'))$. By taking $\beta$ smaller we may also suppose $\beta<\varepsilon r^4/2$. We now proceed similarly to before and consider the function $\psi_\beta(y,s)=(a/2)(s-t)-\varepsilon C_rN(x^{-1}\circ y)+\beta$. Since we can take $h'$ smaller we assume from now on that $h'\leq-\beta/a$. Combining the last two displayed inequalities and the assumptions on $\beta,h,h'$ and $r$ we get \begin{equation}\label{c4 Carnotgroup} \psi_\beta(y,s)-1<\chi(y,s) \end{equation} for all $(y,s)\in (B(x,r]\times\{t-h\})\cup(\partial B(x,r)\times[t-h,t+h'])$. Thus, with a reasoning similar to the one that we used in the previous case, it is possible to prove that $\psi_\beta$ satisfies conditions (iv) and (v) in Definition \ref{generalized flow in Carnot group def}. Furthermore, fix $s\in[t-h,t+h']$. We have $\psi_\beta(x,s)=a(s-t)/2+\beta\geq ah'/2+\beta>0$ while for $\abs{y-x}=r$ $$\begin{array}{ll} \psi_\beta(y,s)&=\frac{a}{2}(s-t)-\varepsilon C_r d_G(x,y)^4+\beta\leq\frac{a}{2}(s-t)-\varepsilon \abs{y-x}^4+\beta\\ &\leq-\frac{ah}{2}-\varepsilon r^4+\beta\leq-\frac{ah+\varepsilon r^4}{2}\leq0. \end{array}$$ Thus the set $\{y\in B(x,r]:\psi_\beta(y,s)=0\}$ is not empty. For $y\in B(x,r]$ we compute $$\nabla \psi_\beta(y,s)=-\varepsilon C_r \left(\begin{array}{c} 4\abs{y_h-x_h}^2(y_h-x_h)-2\sum_{i=1}^{n-m}(y_{m+i}-x_{m+i}-\langle B^{(i)}x_h,y_h\rangle)B^{(i)}x_h\\ 2(y_v-x_v-\langle Bx_h,y_h\rangle)
\end{array}\right).$$ Thus, since the matrices $B^{(i)}$ are skew-symmetric, $\nabla \psi_\beta(y,s)=0$ if and only if $y=x$, and therefore $\abs{\nabla \psi_\beta(y,s)}\neq0$ for every $(y,s)\in\{(y,s)\in B(x,r]\times[t-h,t+h']:\psi_\beta(y,s)=0\}$. This proves that $\psi_\beta$ satisfies (ii) in Definition \ref{generalized flow in Carnot group def}. Moreover it satisfies also (iii) since, by Lemma \ref{lemma:property norm in Carnot group2}, $$\abs{X\psi_\beta(y,s)}=0\Leftrightarrow y_h=x_h\Leftrightarrow\abs{X^2\psi_\beta(y,s)}=0.$$ It remains to prove that (i) holds. Since $G^*$ is upper semicontinuous, $G^*(y,s,0,\mathbb O)=0$ and $G$ is geometric, we have that $$\begin{array}{ll} {\partial_t\psi_\beta}(y,s)&+G^*(y,s,\nabla \psi_\beta(y,s),D^2\psi_\beta(y,s))\\ &=\frac{a}{2}+G^*(y,s,-\varepsilon C_r\nabla _yN(x^{-1}\circ y),-\varepsilon C_rD^2_{yy}N(x^{-1}\circ y))<0, \end{array}$$ for $(y,s)\in B(x,r]\times[t-h,t+h']$ and $\varepsilon$ small enough. Thus, since $(\Omega_t)_{t\in(0,T)}$ is a generalised superflow, we have $$\{y\in B(x,r]:\psi_\beta(y,s)>0\}\subset\Omega_s$$ for any $s\in(t-h,t+h')$. But again $\psi_\beta(x,t)=\beta>0$, and this means $x\in\Omega_t$, which is a contradiction. \end{proof} \begin{rem} When we define generalised flows as in Definition \ref{generalized flow in Carnot group def}, then Theorem \ref{flow-viscositysol Carnot group} provides a discontinuous solution of (\ref{eqlevelset}). The discontinuous solution $\chi$ bears a natural initial condition at $t=0$ in the following way. Since $\chi$ is lower semicontinuous, we can extend it at $t=0$ by lower semicontinuity and then define a lower semicontinuous initial condition as $$\chi_o(x)=\chi_*(x,0).$$ \end{rem} In order to better understand the nature of Definition \ref{generalized flow in Carnot group def}, we comment briefly on the previous result by recalling the connection between the discontinuous solution of (\ref{eqlevelset}) that appears in Theorem \ref{flow-viscositysol Carnot group} and usual viscosity solutions of (\ref{eqlevelset}), see e.g. Souganidis \cite{soug} and the references therein. Suppose that $u\in C(\mathbb R^n\times[0,+\infty))$ is a viscosity solution of (\ref{eqlevelset}). Then we can define the following family of sets, for $t\geq 0$, \begin{equation}\label{nointerior} \Gamma_t=\{x\in\mathbb R^n:u(x,t)=0\},\quad D_t^+=\{x\in\mathbb R^n:u(x,t)>0\},\quad D_t^-=\{x\in\mathbb R^n:u(x,t)<0\}. \end{equation} The following result is well known in the theory and, as a consequence of Theorem \ref{flow-viscositysol Carnot group}, contains an existence result for geometric flows. The second part of the statement is based on the validity of a comparison principle, which at the moment is valid for equation (\ref{eqlevelset}) only under some restrictions, as we discussed in the introduction. \begin{thm}\label{chi viscosol} Suppose that $u\in C(\mathbb R^n\times[0,+\infty))$ is a viscosity solution of (\ref{eqlevelset}). With the notation in (\ref{nointerior}), the two functions $\overline\chi(x,t)=\mathds1_{D_t^+\cup\Gamma_t}(x)-\mathds1_{D_t^-}(x),$ $\underline\chi(x,t)=\mathds1_{D_t^+}(x)-\mathds1_{D_t^-\cup\Gamma_t}(x)$ are viscosity solutions of (\ref{eqlevelset}) associated respectively with the discontinuous initial data $$\bar w_o=\mathds1_{D_o^+\cup\Gamma_o}-\mathds1_{D_o^-},\quad \underline w_o=\mathds1_{D_o^+}-\mathds1_{D_o^-\cup\Gamma_o}.$$
In particular the families of sets $(D_t^+)_{t>0}$ and $((D_t^+\cup\Gamma_t)^o)_{t>0}$ are generalised flows, and they coincide if and only if the no-interior condition holds: $\Gamma_t=\partial D^+_t=\partial D^-_t$ for all $t\geq 0$. If moreover $\Gamma_o$ has an empty interior and a comparison principle holds for the equation (\ref{eqlevelset}), then $\bar \chi,\;\underline\chi$ are respectively the maximal subsolution and the minimal supersolution of the Cauchy problem coupling (\ref{eqlevelset}) with the initial condition $w_o=\mathds1_{D_o^+}-\mathds1_{D_o^-}$, and this Cauchy problem has a unique discontinuous solution if and only if the no-interior condition holds. The unique solution is given by the function \begin{equation}\label{eqchi} \chi(x,t)=\mathds1_{D_t^+}(x)-\mathds1_{D_t^-}(x).\end{equation} \end{thm}
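\begin{rem} A one-line computation clarifies the role of the no-interior condition in Theorem \ref{chi viscosol}: since $D_t^+$, $D_t^-$ and $\Gamma_t$ are pairwise disjoint, $$\bar\chi(x,t)-\underline\chi(x,t)=2\cdot\mathds1_{\Gamma_t}(x),$$ so the maximal and the minimal solution differ exactly on the zero level set. The two generalised flows $(D_t^+)_{t>0}$ and $((D_t^+\cup\Gamma_t)^o)_{t>0}$ may nevertheless coincide when $\Gamma_t$ is nonempty but has empty interior, which is why uniqueness of the evolution is governed by the no-interior condition rather than by $\Gamma_t=\emptyset$. \end{rem}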
\section{Introduction} According to the $AdS/CFT$ correspondence the eleven dimensional supergravity on $AdS_4 \times S^7$ is dual to a superconformal field theory describing multiple M2-branes. This superconformal field theory has to have $\mathcal{N} = 8$ supersymmetry. This is because apart from a constant closed 7-form on $S^7$, $AdS_4 \times S^7 \sim [SO(2,3)/ SO (1, 3)]\times [SO(8)/ SO(7)] \subset OSp(8|4)/[SO(1,3) \times SO(7)]$. The $OSp(8|4)$ gets realized as the $\mathcal{N} = 8$ supersymmetry of the dual superconformal field theory. The transverse coordinates of the M2-branes give rise to eight gauge valued scalar fields. Apart from these eight gauge valued scalar fields, this theory also has sixteen physical fermions. The gauge fields of this theory do not have any on-shell degrees of freedom. A theory called the BLG theory satisfies these properties \cite{1b, 2b, 3b, 4b, 5b}. However, the gauge symmetry of the BLG theory is based on a Lie 3-algebra and the only known example of a Lie 3-algebra is $SO(4) \sim SU(2) \times SU(2)$. So, the BLG theory can only describe two M2-branes. It has been possible to generalize the BLG theory to a superconformal field theory describing any number of M2-branes on $AdS_4 \times S^7/ Z_k$ \cite{abjm, ab, ab1, ab0}. This theory, called the ABJM theory, only has $\mathcal{N} =6$ supersymmetry and $SO(6)$ $R$-symmetry. However, as it coincides with the BLG theory for two M2-branes, it is expected that its supersymmetry would get enhanced to the full $\mathcal{N} = 8$ supersymmetry. In fact, the supersymmetry of the ABJM theory can get enhanced to $\mathcal{N} =8$ supersymmetry for Chern-Simons levels $k = 1, 2$ by the use of monopole operators \cite{mp, mp1, mp2, mp0}. In the ABJM theory the matter fields are in the bi-fundamental representation of the gauge group $U(N) \times U(N)$ and the two gauge fields are in the adjoint representation. A further generalization of the ABJM theory to a theory describing fractional M2-branes has been made \cite{5bc, 5ba, 5a1, 5a2, 5a}. This theory is called the ABJ theory, and in it the gauge fields are described by the gauge group $U(M) \times U(N)$ with $M \neq N$ \cite{1, 2}. The matter fields are again in the bi-fundamental representation of this gauge group and the two gauge fields are in the adjoint representation. Wilson loops for the ABJ theory have been studied, and they are given by semi-classical string surfaces in the dual string theory picture \cite{3, 4}. The most symmetric string of this kind preserves half the supersymmetry. The dual field theory operator to it has also been constructed using a superconnection \cite{5}. In this superconnection, the scalar fields occur in bi-linear combinations and the fermions appear linearly. Thus, the fermions transform in the bi-fundamental representation and appear in the off-diagonal blocks. The bi-linear products of the scalars transform in the adjoint representations. So, the scalars appear in the diagonal blocks along with the gauge fields. The fermions couple to Grassmann even quantities and thus the off-diagonal blocks contain Grassmann odd quantities. It may be noted that Wilson loops which preserve $1/6$ of the total supersymmetry have also been studied \cite{6, 7}. In fact, a matrix model corresponding to the vacuum expectation value of the $1/6$ BPS Wilson loop has been constructed \cite{7a}. In this paper we introduce Polyakov loops as the variables to be used.
In mathematical language these are the holonomies of closed loops in space-time, and they are sometimes also called Dirac phase factors in the physics literature. Although they are defined via parametrized loops in space-time, they are independent of the parametrization chosen. They are therefore gauge group-valued functions of the infinite-dimensional loop space. The main difference between a Polyakov loop and a Wilson loop is that in the Wilson loop a trace is taken, while no such trace is taken in the Polyakov loop \cite{p1}. In this paper we will study the Polyakov loops for the ABJ theory. Polyakov loops have been used for deriving a duality in non-abelian gauge theories \cite{pq1, pq2}. This duality has been used for analysing 't Hooft's order-disorder parameters \cite{1p}. A Dualized Standard Model has also been constructed using this duality \cite{pq4, p9}. In this model three generations of fermions are produced by the breaking of a dual color $SU(3)$ symmetry \cite{pq0, q01}. The resulting scheme gives a method for calculating the fermion mass hierarchy along with the mixing parameters of the Standard Model fermions \cite{qp1, pq01}. Dual Feynman rules for Yang-Mills theories with a monopole have also been analysed using Polyakov loops \cite{pq5}. Polyakov loops for supersymmetric gauge theories in $\mathcal{N} =1$ superspace have also been discussed \cite{pq}. It is possible to define a Polyakov connection on loop space which measures the change in phase as one moves from one point in the loop space to a neighboring point. It is also possible to construct a curvature tensor using this connection \cite{p2,p3}. This curvature is proportional to the Bianchi identities and thus vanishes when the Bianchi identities are satisfied \cite{p4}. As the Bianchi identities are not satisfied in the presence of a monopole, this curvature only acquires a non-zero value when a monopole is present. Furthermore, it is possible to define a loop in the loop space which covers a surface in spacetime. This loop in the loop space can be used as a measure for the non-abelian monopole charge. These results are known to hold for ordinary Yang-Mills theories. We shall derive them for the ABJ theory. We shall also generalize some of the previously known results. So, we shall obtain a curvature and connection in the space of loop of loops and use them for analysing topological defects in the loop space. \section{Polyakov Loops} As the ABJ theory is a Chern-Simons-Matter theory with the gauge group $U(N) \times U(M)$, we will denote the gauge fields corresponding to $U(N)$ by $A_\mu$ and the gauge fields corresponding to $U(M)$ by $A' _\mu$. These gauge fields are coupled to complex scalar fields $C_I$ and their complex conjugates $\bar C^I$, where $I = 1, \dots, 4$ is an $SU(4)_R$ index. They are also coupled to fermions $\Psi^a_I$ and $\bar \Psi^I_a$, where $a = \pm$ is a spinor index. It may be noted that the matter fields $C_I, \bar\Psi^I_a$ transform under the $(N, \bar M)$ representation and the matter fields $\bar C^I, \Psi^a_I$ transform under the $(\bar N, M)$ representation of the gauge group $U(N) \times U(M)$. We choose a notation such that $C_I \bar C^J$ and $\bar\Psi^I_a \Psi^a_J$ are in the adjoint representation of $U(N)$, while $\bar C^I C_J$ and $\Psi^a_I \bar\Psi^J_a$ are in the adjoint representation of $U(M)$.
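As a quick consistency check, and viewing the bi-fundamental fields simply as rectangular matrices (this matrix bookkeeping is only a convenient restatement of the representation assignments above), we have \begin{equation} C_I:\; \mathbb{C}^M\to\mathbb{C}^N, \qquad \bar C^I:\; \mathbb{C}^N\to\mathbb{C}^M, \qquad \bar \Psi^I_a:\; \mathbb{C}^M\to\mathbb{C}^N, \qquad \Psi^a_I:\; \mathbb{C}^N\to\mathbb{C}^M, \end{equation} so that products such as $C_I \bar C^J$ are $N\times N$ matrices and products such as $\bar C^I C_J$ are $M\times M$ matrices. This is what allows the scalar bilinears to sit in the diagonal blocks of a $U(N|M)$ supermatrix, with the fermions filling the off-diagonal rectangular blocks, as in the superconnection constructed below.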
These fields for the ABJ theory transform under superconformal transformations as follows \begin{eqnarray} \delta A_\mu&=&\frac{4\pi i}{k}\bar{\Theta}^{IJ\alpha}(\gamma_\mu)_\alpha^{\ \beta}\left(C_I\Psi_{J\beta} +\frac{1}{2}\epsilon_{IJKL}\bar{\Psi}_\beta^K\bar{C}^L\right), \nonumber \\ \delta A'_\mu&=& \frac{4\pi i}{k}\bar{\Theta}^{IJ\alpha}(\gamma_\mu)_\alpha^{\ \beta}\left(\Psi_{J\beta}C_I +\frac{1}{2}\epsilon_{IJKL}\bar{C}^L\bar{\Psi}_\beta^K\right), \nonumber \\ \delta C_K&=&\bar{\Theta}^{IJ\alpha}\epsilon_{IJKL}\bar{\Psi}^L_\alpha, \nonumber \\ \delta \bar{C}^K&=&2\bar{\Theta}^{KL\alpha}\Psi_{L\alpha}, \nonumber \\ \delta\Psi^\beta_K&=&-i\bar{\epsilon}^{IL\beta}\epsilon_{ILKJ}\bar{C}^J -i\bar{\Theta}^{IJ\alpha}\epsilon_{IJKL}(\gamma^\mu)_\alpha^{\ \beta}D_\mu \bar{C}^{L} \nonumber \\ && +\frac{2\pi i}{k}\bar{\Theta}^{IJ\beta}\epsilon_{IJKL}(\bar{C}^LC_P\bar{C}^P-\bar{C}^PC_P\bar{C}^L) \nonumber \\ &&+\frac{4\pi i}{k}\bar{\Theta}^{IJ\beta}\epsilon_{IJML}\bar{C}^MC_K\bar{C}^L, \nonumber\\ \delta\bar{\Psi}_\beta^K&=&-2i\bar{\Theta}^{KL\alpha}(\gamma^\mu)_{\alpha\beta}D_\mu C_{L}-2i\bar{\epsilon}^{KL}_\beta C_L\nonumber \\ && -\frac{4\pi i}{k}\bar{\Theta}^{KL}_\beta(C_L\bar{C}^MC_M-C_M\bar{C}^MC_L)\nonumber \\ && -\frac{8\pi i}{k}\bar{\Theta}^{IJ}_\beta C_I\bar{C}^KC_J. \end{eqnarray} As we want to study Polyakov loops for the ABJ theory, we consider all the loops passing through some fixed point in spacetime, \begin{equation} C : \{ \xi^\mu (s): s = 0 \to 2\pi, \, \, \xi^\mu (0) = \xi^\mu (2\pi)\}, \end{equation} where $\xi^\mu (s) $ represents the spacetime coordinates of all points on the loop. We also define $\dot{\xi}^\mu = {d \xi^\mu}/{ ds}$, and $|\dot{\xi}|= \sqrt{\eta_{\mu\nu} \dot{\xi}^\mu \dot{\xi}^\nu}$. Even though the gauge group of the ABJ theory is $U(N) \times U(M)$, we will embed it into a superconnection $\mathcal{A}$ belonging to $U(N|M)$ \cite{5}. Scalar fields occur as bi-linears because in three dimensions the dimension of a scalar field is $1/2$. As the bi-linear combinations of the scalar fields are in the adjoint representation, they occur with the gauge fields in the diagonal blocks. Furthermore, as the dimension of the fermions in three dimensions is $1$, they appear linearly. As the fermions transform under the bi-fundamental representation, they are placed off-diagonally. We define $M^I_J, {M'}^I_J, \eta^a_I, \bar\eta_a^I$ as the parameters in the theory which parameterize the local couplings. Even though $\eta^a_I, \bar\eta_a^I$ transform under a spinor representation of the Lorentz group, they are taken to be Grassmann even quantities. This is because the off-diagonal entries then become Grassmann odd quantities. So, the superconnection for $U(N|M)$ can be written as, \begin{eqnarray} \mathcal{A} [\xi] = \left( \begin{array}{cc} \mathcal{A}_{11} [\xi] & \mathcal{A}_{12} [\xi] \\ \mathcal{A}_{21} [\xi] & \mathcal{A}_{22} [\xi] \\ \end{array} \right), \end{eqnarray} where \begin{eqnarray} \mathcal{A}_{11} [\xi] &=& A_\mu \dot{\xi}^\mu + \frac{2\pi}{k}|\dot{\xi}| M^I_J C_I \bar C^J, \nonumber \\ \mathcal{A}_{12} [\xi] &=& \sqrt{\frac{2\pi}{k}}|\dot{\xi}|\eta^a_I \bar \Psi_a^I, \nonumber \\ \mathcal{A}_{21} [\xi] &=& \sqrt{\frac{2\pi}{k}}|\dot{\xi}| \Psi^a_I \bar\eta_a^I, \nonumber \\ \mathcal{A}_{22} [\xi] &=& A'_\mu \dot{\xi}^\mu + \frac{2\pi}{k} |\dot{\xi}|{M'}^I_J \bar C^J C_I.
\end{eqnarray} So, we can write the field strength for this theory as $\mathcal{F}[\xi] =d \mathcal{A} [\xi] + \mathcal{A} [\xi]\wedge \mathcal{A} [\xi]$. The Bianchi identity can now be written as $(d + \mathcal{A} [\xi]\wedge )\mathcal{F}[\xi] =0$. In the dual string picture, the operators describing semi-classical string surfaces have a local $U(1)\times SU(3)$ $R$-symmetry. So, the $R$-symmetry of the couplings can be described by a vector $n_I$ and its complex conjugate $\bar n^{I}$ \cite{abcd}. These specify the local embedding of the $SU(3)$ subgroup into $SU(4)$. They satisfy $n_{I}\bar n^{I}=1$. Now we have, $\eta_{I}^{\alpha} =n_{I} \eta^{\alpha},\, \bar\eta^{I}_{\alpha} =\bar n^{I} \bar\eta_{\alpha},\, M_{J}^{I} =p_{1} \delta^{I}_{ J}-2 p_{2} n_{J} \bar n^{I },\, {M'}_{J}^{ I}=q_{1} \delta^{I}_{J}-2 q_{2} n_{J} \bar n^{I} $. The eigenvalues of $M_{J}^{I}$ and $ {M'}_{J}^{ I}$ are controlled by the functions $p_{i}$ and $q_{i}$. The condition that the supersymmetric variation of the superconnection vanishes is too strong and it does not yield any solution for the couplings. So, it is replaced by the requirement that the supersymmetry variation of the superconnection is equal to the covariant derivative generated from it. The spinor couplings are given by $\delta^{\beta}_{\alpha}= (\eta^{\beta} \bar\eta_{\alpha}-\eta_{\alpha} \bar\eta^{\beta})/{2 i}$ and ${(\dot{\xi}^{\mu}\gamma_{\mu})_{\alpha}^{ \beta}}=\ell |\dot \xi|(\eta^{\beta} \bar\eta_{\alpha}+ \eta_{\alpha} \bar\eta^{\beta})/{2i}$. Furthermore, we have $M_{J}^{I} = {M'}_{J}^{ I} = \ell(\delta^{I}_{J}-2 n_{J}\bar n^{I})$. Here $\ell = \pm1$ specifies the eigenvalues of these matrices. Now $\epsilon_{IJKL} (\eta\bar{\Theta}^{IJ})\bar n^{K}=0$ and $n_{I}(\bar\eta\bar\Theta^{IJ})=0$ are the constraints on $\bar\Theta^{IJ}$. Apart from these constraints, it also satisfies $\bar{\Theta}^{IJ}(d/ds) {\bar{\eta}}^K\epsilon_{IJKL}=0$ and $\bar\Theta^{IJ}(d/ds) {\eta}_{I}=0$. These conditions are local and a conformal Killing spinor which satisfies these constraints has to be constructed for obtaining a supersymmetric Polyakov loop. If $\bar \theta^{IJ}$ and $\bar\epsilon^{IJ}$ are constant spinors, then we can write, $\bar\Theta^{IJ}=\bar\theta^{IJ}-( \gamma^\mu \xi_\mu)\bar\epsilon^{IJ}$. Recall that the Polyakov loop \cite{p1} by the very definition is an element of the gauge group. Now the Polyakov loop variables for the ABJ theory are given by \begin{eqnarray} \phi [\xi] &=& \left( \begin{array}{cc} \phi _{11} [\xi ] & \phi _{12} [\xi ] \\ \phi _{21} [\xi ] & \phi _{22} [\xi ] \\ \end{array} \right) \nonumber \\&=& P_s \exp \int ds \left( \begin{array}{cc} \mathcal{A}_{11} [\xi] & \mathcal{A}_{12} [\xi] \\ \mathcal{A}_{21} [\xi] & \mathcal{A}_{22} [\xi] \\ \end{array} \right). \end{eqnarray} Here the ordering from right to left in $s$ is denoted by $P_s$. It may be noted that $\phi[\xi]$ depends only on the loop $C$ in spacetime and not on the manner in which it is parametrized. If we introduce a new parameter, say $s' = f (s)$, this only changes the variable of integration and not the value of the integral. So, at first sight it might appear better to define the loops as equivalence classes of the function $\xi(s)$, equivalent under reparametrization. But then it would be very difficult to define differentiation and integration in this quotient space of equivalence classes, and hence we will retain the original definition of the parametrized loops.
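To make the reparametrization argument explicit (writing $\phi_s$ for the path-ordered exponential truncated at parameter $s$, a notation used only in this paragraph), the loop variable is characterized by the ordered differential equation \begin{equation} \frac{d\phi_s}{ds}=\mathcal{A}[\xi(s)]\,\phi_s,\qquad \phi_0=I,\qquad \phi[\xi]=\phi_{2\pi}. \end{equation} Every entry of $\mathcal{A}[\xi]$ carries exactly one factor of $\dot\xi^\mu$ or $|\dot\xi|$, so it is homogeneous of degree one in $\dot\xi$. Under an orientation preserving reparametrization $s'=f(s)$ with $f'>0$ we then have $\mathcal{A}[\xi]\,ds=\mathcal{A}[\xi]\,ds'$, the differential equation keeps its form, and the endpoint value $\phi[\xi]$ is unchanged.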
\section{Connection and Curvature} In this section we will construct a connection and a curvature for the loop space. Strictly speaking, these do not have the exact geometric meanings of the corresponding concepts in fiber bundles \cite{zois}, but the formulae obtained below make sense in the context of loop space variables and we shall continue to use these terms with this understanding. We will first obtain a connection in the loop space and relate it to the field strength in spacetime. Then, we will construct a covariant derivative using this connection. Finally, we will construct the curvature in the loop space from the commutator of these covariant derivatives. Now we first construct the connection in the loop space from $\phi [\xi]$, by taking its logarithmic derivative. As $\phi [\xi]$ is an element of the gauge group, its logarithmic derivative will be an element of the Lie algebra corresponding to that gauge group. So, we define the connection generated from $\phi [\xi]$ as follows, \begin{eqnarray} F_\mu[ \xi|s] &=& \left( \begin{array}{cc} F_\mu[ \xi|s] _{11} & F_\mu[ \xi|s] _{12} \\ F_\mu[ \xi|s] _{21} & F_\mu[ \xi|s] _{22} \\ \end{array} \right), \end{eqnarray} where \begin{equation} F_\mu[ \xi|s] = i \phi^{-1} [\xi] \frac{\delta}{\delta \xi^\mu (s)}\phi [\xi]. \end{equation} As $ F_\mu[ \xi|s]$ represents the change in $\phi[\xi]$ as one moves from one point in the loop space to its neighboring point, it can be regarded as a connection in parametrized loop space. In calculations it is sometimes useful to define further $\phi [\xi(s_1,s_2)]$ as a parallel transport from a point $\xi (s_1)$ to a point $\xi (s_2)$ along the curve $C$, \begin{eqnarray} \phi [\xi(s_1, s_2)] &=& \left( \begin{array}{cc} \phi _{11} [\xi (s_1, s_2)] & \phi _{12} [\xi (s_1, s_2)] \\ \phi _{21} [\xi (s_1, s_2)] & \phi _{22} [\xi (s_1, s_2)] \\ \end{array} \right) \nonumber \\ \nonumber \\ &=& P_s \exp \int_{s_1}^{s_2} ds \left( \begin{array}{cc} \mathcal{A}_{11} [\xi (s)] & \mathcal{A}_{12} [\xi (s)] \\ \mathcal{A}_{21} [\xi (s)] & \mathcal{A}_{22} [\xi (s)] \\ \end{array} \right). \end{eqnarray} Now using $\phi [\xi (s_1, s_2)]$, we can move from a fixed point to another point, say $\xi(s)$, and then take a detour and travel backwards along the same path to the original point. In doing this the phase factor generated in going from the original point to $\xi(s)$ exactly cancels the phase factor generated in going back from $\xi(s)$ to the original point. However, the phase factor generated while transporting around the infinitesimal circuit at $\xi(s)$ does have a finite contribution that does not cancel. In fact, this contribution is proportional to the field strength $\mathcal{F}$. Thus, $ F_\mu[\xi |s]$ is proportional to $ \phi^{-1} [\xi(s, 0)] \mathcal{F}[\xi (s)] \phi [\xi( s, 0)] $. In fact, it is already known that in Yang-Mills theories the connection in loop space is proportional to the field strength in spacetime \cite{p2}. We have observed here that this also holds for the superconnection of the ABJ theory. In the loop space $ F_\mu[\xi|s]$ acts like a connection. The natural quantity to construct from this connection is the curvature of the loop space. Now we can define a covariant derivative in the loop space as follows, \begin{equation} \nabla_\mu [\xi(s)] = \frac{\delta}{\delta \xi^\mu (s) } + i F_\mu [\xi|s]. \end{equation} The curvature $ -i G_{\mu\nu}[\xi(s_1, s_2)]$ of the loop space can be defined as the commutator of two covariant derivatives, $ [\nabla_\mu [\xi(s_1)], \nabla_\nu [\xi(s_2)]] $.
Thus, we can write \begin{eqnarray} G_{\mu\nu}[\xi ( s_1, s_2)] &=& \frac{\delta}{\delta \xi^\mu (s_1) }F_\nu [\xi|s_2] - \frac{\delta}{\delta \xi^\nu (s_2) }F_\mu [\xi|s_1] \nonumber \\&& +i [F_\mu [\xi|s_1], F_\nu [\xi|s_2]]. \end{eqnarray} The gauge transformations in loop space can be denoted by $u = \exp i \Lambda[\xi] $. The connection $F_\mu [\xi|s]$ transforms under these gauge transformations as $F_\mu[\xi|s] \to i u \nabla_\mu [\xi(s)]u^{-1} $ and $G_{\mu\nu}[\xi( s_1, s_2 )]$ transforms under these gauge transformations as $u G_{\mu\nu}[\xi ( s_1, s_2 )] u^{-1}$. Now consider a small closed circuit in loop space: we first deform the loop in one direction, then in a second direction, then undo the first deformation and finally undo the second, returning to the original loop. In doing so we complete a full circuit, and the total change in phase generated in the process is proportional to $\phi^{-1} [\xi(s_1, 0)] \nabla^*\mathcal{F}[\xi(s_1)] \phi [\xi( s_1, 0 )]\delta (s_1-s_2)$ \cite{p4}. We have observed here that this also holds for the superconnection of the ABJ theory. Now this is also the value of $-i G_{\mu\nu} [\xi (s_1, s_2)]\, \delta \xi^\mu (s_1)\, \delta\xi^\nu (s_2)$. Hence, the curvature is proportional to $\phi^{-1} [\xi(s_1, 0)] \nabla^*\mathcal{F}[\xi (s_1)] \phi [\xi( s_1, 0)] \delta (s_1-s_2)$. Thus, if the Bianchi identity $ \nabla^*\mathcal{F}[\xi (s_1)] =0$ is satisfied, this curvature vanishes, $G_{\mu\nu} [\xi(s_1, s_2)] =0$. However, in the presence of a monopole, the Bianchi identity is not satisfied and thus this curvature does not vanish. It may be noted that $ G_{\mu\nu}[\xi( s_1, s_2)]$ satisfies a functional Bianchi identity even in the presence of a monopole. To derive this functional Bianchi identity, we first define $\nabla_\mu [\xi(s_1)]^* [\nabla_\nu [\xi(s_2)], \nabla_\rho [\xi(s_3)]]$ as follows, \begin{eqnarray} && \nabla_\mu [\xi(s_1)]^* [\nabla_\nu [\xi(s_2)], \nabla_\rho [\xi(s_3)]]\nonumber \\ &=& \nabla_\rho [\xi(s_3)][\nabla_\mu [\xi(s_1)], \nabla_\nu [\xi(s_2)]] \nonumber \\ && + \nabla_\nu [\xi(s_2)] [\nabla_\rho [\xi(s_3)], \nabla_\mu [\xi(s_1)]]\nonumber \\ && + \nabla_\mu [\xi(s_1)] [\nabla_\nu [\xi(s_2)], \nabla_\rho [\xi(s_3)]].
\end{eqnarray} Now, expanding this expression for $\nabla_\mu [\xi(s_1)]^* [\nabla_\nu [\xi(s_2)], \nabla_\rho [\xi(s_3)]]$, we get \begin{eqnarray} &&\nabla_\mu [\xi(s_1)]^* [\nabla_\nu [\xi(s_2)], \nabla_\rho [\xi(s_3)]]\nonumber \\ &=& \left( \frac{\delta}{\delta \xi^\rho (s_3) } + i F_\rho [\xi|s_3]\right) \left( \frac{\delta}{\delta \xi^\mu (s_1) } + i F_\mu [\xi|s_1]\right) \left( \frac{\delta}{\delta \xi^\nu (s_2) } + i F_\nu [\xi|s_2]\right) \nonumber \\ && - \left( \frac{\delta}{\delta \xi^\rho (s_3) } + i F_\rho [\xi|s_3]\right) \left(\frac{\delta}{\delta \xi^\nu (s_2) } + i F_\nu [\xi|s_2]\right) \left( \frac{\delta}{\delta \xi^\mu (s_1) } + i F_\mu [\xi|s_1]\right) \nonumber \\ && + \left( \frac{\delta}{\delta \xi^\nu (s_2) } + i F_\nu [\xi|s_2]\right)\left( \frac{\delta}{\delta \xi^\rho (s_3) } + i F_\rho [\xi|s_3]\right) \left( \frac{\delta}{\delta \xi^\mu (s_1) } + i F_\mu [\xi|s_1]\right) \nonumber \\ && - \left( \frac{\delta}{\delta \xi^\nu (s_2) } + i F_\nu [\xi|s_2]\right)\left( \frac{\delta}{\delta \xi^\mu (s_1) } + i F_\mu [\xi|s_1]\right) \left( \frac{\delta}{\delta \xi^\rho (s_3) } + i F_\rho [\xi|s_3]\right) \nonumber \\ && + \left( \frac{\delta}{\delta \xi^\mu (s_1) } + i F_\mu [\xi|s_1]\right) \left( \frac{\delta}{\delta \xi^\nu (s_2) } + i F_\nu [\xi|s_2]\right) \left(\frac{\delta}{\delta \xi^\rho (s_3) } + i F_\rho [\xi|s_3]\right) \nonumber \\ &&- \left( \frac{\delta}{\delta \xi^\mu (s_1) } + i F_\mu [\xi|s_1]\right) \left(\frac{\delta}{\delta \xi^\rho (s_3) } + i F_\rho [\xi|s_3]\right) \left( \frac{\delta}{\delta \xi^\nu (s_2) } + i F_\nu [\xi|s_2]\right) \nonumber \\ &=& 0, \end{eqnarray} where \begin{eqnarray} && \frac{\delta}{\delta \xi^\mu (s_1) } + i F_\mu [\xi|s_1] \nonumber \\&=& \begin{pmatrix} \frac{\delta}{\delta \xi^\mu (s_1) }+ i F_\mu[ \xi|s_1] _{11} & i F_\mu[ \xi|s_1] _{12} \\ i F_\mu[ \xi|s_1] _{21} & \frac{\delta}{\delta \xi^\mu (s_1) }+ i F_\mu[ \xi|s_1] _{22} \end{pmatrix}, \\ && \frac{\delta}{\delta \xi^\nu (s_2) } + i F_\nu [\xi|s_2] \nonumber \\&=& \begin{pmatrix} \frac{\delta}{\delta \xi^\nu (s_2) }+ i F_\nu[ \xi|s_2] _{11} & i F_\nu[ \xi|s_2] _{12} \\ i F_\nu[ \xi|s_2] _{21} & \frac{\delta}{\delta \xi^\nu (s_2) }+ i F_\nu[ \xi|s_2] _{22} \end{pmatrix}, \\ && \frac{\delta}{\delta \xi^\rho (s_3) } + i F_\rho [\xi|s_3] \nonumber \\&=& \begin{pmatrix} \frac{\delta}{\delta \xi^\rho (s_3) }+ i F_\rho[ \xi|s_3] _{11} & i F_\rho[ \xi|s_3] _{12} \\ i F_\rho[ \xi|s_3] _{21} & \frac{\delta}{\delta \xi^\rho (s_3) }+ i F_\rho[ \xi|s_3] _{22} \end{pmatrix}. \end{eqnarray} Thus, we get $\nabla_\mu [\xi(s_1)]^* [\nabla_\nu [\xi(s_2)], \nabla_\rho [\xi(s_3)]] =0$. Since the curvature of the loop space is generated by the commutator of the functional covariant derivatives, this is precisely the statement that the functional Bianchi identity is satisfied in the loop space. So, we can write $ \nabla_\rho [\xi(s_3)]G_{\mu\nu}[\xi( s_1, s_2)] +\nabla_\mu [\xi(s_1)]G_{\nu\rho}[\xi ( s_2, s_3)] +\nabla_\nu [\xi(s_2)]G_{\rho\mu}[\xi ( s_3, s_1)] =0 $. We emphasize again that in the presence of a monopole, the space-time Bianchi identity is not satisfied, and this translates into the non-vanishing of the loop space curvature $G_{\mu\nu} [\xi(s_1, s_2)]$. On the other hand, the loop space curvature itself does satisfy the Bianchi identity even in the presence of a monopole. \section{Loop of Loops} In the previous section we analysed Polyakov loops for the ABJ theory. It may be noted that Polyakov loops have been generalized to loop of loops for Yang-Mills theories \cite{p4}.
Here we will apply this formalism of loop of loops to the ABJ theory. We will also extend this formalism to include the concept of a connection and curvature for loop of loops. A loop in the loop space can be defined using $ F_\mu[ \xi|s]$ as the connection. Now we can parameterize a loop in the loop space as follows \cite{p4}, \begin{equation} \Sigma : \{ \xi^\mu (t:s), \, s = 0 \to 2 \pi, \, t = 0 \to 2 \pi\}, \end{equation} where $ \xi^\mu (t:0) = \xi^\mu (t:2\pi)$ for every $t$. At each value of $t$, a closed loop $C(t)$ is traced in spacetime, passing through a fixed point. For $t=0$ and $t = 2\pi$ the loop shrinks to this fixed point, and as $t$ varies from $0 \to 2\pi$, $C(t)$ traces out a closed loop in the loop space. This loop starts and ends at the fixed point to which $C(t)$ shrinks for $t=0$ and $t = 2\pi$. Thus, we can define a loop variable for this space as, \begin{eqnarray} \Theta [\xi] &=& \left( \begin{array}{cc} \Theta _{11} [\xi ] & \Theta _{12} [\xi ] \\ \Theta _{21} [\xi ] & \Theta _{22} [\xi ] \\ \end{array} \right) \nonumber \\ &=& P_t \exp i \int^{2\pi}_{0} dt \int^{2\pi}_0 ds \left( \begin{array}{cc} F^\mu[ \xi|t:s] _{11} & F^\mu[ \xi|t:s] _{12} \\ F^\mu[ \xi|t:s] _{21} & F^\mu[ \xi|t:s] _{22} \\ \end{array} \right) \nonumber \\ && \times \frac{\partial \xi_\mu (t:s)}{ \partial t}, \end{eqnarray} where $P_t$ denotes ordering in $t$ increasing from right to left and the derivative is taken from below. The connection in the loop space, $F^\mu[\xi|s]$, plays the role of the space-time gauge field ${\cal A}[\xi]$ in the loop space, so that this definition is the analogue in loop space of (5). However, the connection in the loop space is infinite dimensional and so, apart from the sum over $\mu$, we also have to integrate over $s$. In ordinary spacetime this parametrized loop in loop space is represented by a two dimensional surface which encloses a three dimensional volume. In analogy with the previous case we can define a connection in this space using $\Theta [\xi]$. In fact, we will define the connection in this space to be the logarithmic derivative of $\Theta [\xi]$. So, we write \begin{eqnarray} B_\mu[ \xi|t: s]&=& \left( \begin{array}{cc} B_\mu[ \xi|t: s] _{11} & B_\mu[ \xi|t: s] _{12} \\ B_\mu[ \xi|t: s] _{21} & B_\mu[ \xi|t: s] _{22} \\ \end{array} \right), \end{eqnarray} where \begin{equation} B_\mu[ \xi|t: s] = i \Theta^{-1} [\xi] \frac{\delta}{\delta \xi^\mu (t:s)}\Theta [\xi]. \end{equation} Geometrically, $ B_\mu[ \xi|t : s]$ can be regarded as a connection in the space of loop of loops, as it represents the change in $\Theta[\xi]$ as one moves from one point in this space to its neighboring point. Now we can define a quantity which will act as a parallel transport in this space \begin{eqnarray} \Theta [ \xi(t_1, t_2) ] &=& \left( \begin{array}{cc} \Theta _{11} [\xi (t_1, t_2) ] & \Theta _{12} [\xi (t_1, t_2) ] \\ \Theta _{21} [\xi (t_1, t_2) ] & \Theta _{22} [\xi (t_1, t_2) ] \\ \end{array} \right) \nonumber \\ &=& P_t \exp i \int^{t_2}_{t_1} dt \int^{2\pi}_0 ds \left( \begin{array}{cc} F^\mu[ \xi|t:s] _{11} & F^\mu[ \xi|t:s] _{12} \\ F^\mu[ \xi|t:s] _{21} & F^\mu[ \xi|t:s] _{22} \\ \end{array} \right) \nonumber \\ && \times \frac{\partial \xi_\mu (t:s)}{ \partial t}. \end{eqnarray} Now using $\Theta[\xi (t_1, t_2)]$, we can move from a fixed point to another point and then take a detour and travel backwards along the same path to the original point.
In doing this the phase factor generated in going from the original point to the final point exactly cancels the phase factor generated in going back from the final point to the original point. However, the phase factor generated while transporting around the infinitesimal circuit at the final point does have a finite contribution that does not cancel. This contribution is proportional to the curvature of the loop space. In fact, by repeating the previous calculations, we observe that \begin{eqnarray} B_\mu [\xi(t_1: s_1)] &=& \int ds_2 \Theta^{-1}[\xi (t_1, 0)]G_{\mu\nu}[\xi ( t_1: s_1, s_2 )] \Theta[\xi (t_1, 0)] \nonumber \\ && \times \frac{\partial\xi^\nu(t_1:s_2)}{\partial t_1}. \end{eqnarray} So, $B_\mu [\xi(t_1: s_1)]$ is proportional to the curvature of the loop space. Now for $s_1 \neq s_2$, $G_{\mu\nu}[\xi ( t_1: s_1, s_2 )]$ corresponds to a parameterized surface enclosing no volume, and $B_\mu [\xi(t_1: s_1)]$ in this case is zero. The same value is obtained for $s_1 = s_2$ if the volume enclosed by $\Sigma$ does not contain a monopole. As $B_\mu [\xi(t: s)]$ acts like a connection in the space of loop of loops, we can construct a covariant derivative using it, \begin{equation} \bar \nabla_\mu [\xi(t: s)] = \frac{\delta}{\delta \xi^\mu (t: s) } + i B_\mu [\xi|t:s]. \end{equation} We can now define a curvature $ -i E_{\mu\nu}[\xi( t_1, t_2 : s_1, s_2)]$ of this space as the commutator $ [\bar \nabla_\mu [\xi(t_1: s_1)], \bar \nabla_\nu [\xi(t_2: s_2)]] $. Thus, we can write \begin{eqnarray} E_{\mu\nu}[\xi ( t_1, t_2: s_1, s_2)] &=& \frac{\delta}{\delta \xi^\mu (t_1: s_1) }B_\nu [\xi|t_2: s_2] - \frac{\delta}{\delta \xi^\nu (t_2: s_2) }B_\mu [\xi|t_1: s_1] \nonumber \\&& +i [B_\mu [\xi|t_1:s_1], B_\nu [\xi|t_2: s_2]]. \end{eqnarray} The gauge transformations in loop space can be denoted by $v = \exp i \Lambda[\xi] $. The connection $B_\mu [\xi|t: s]$ transforms under these gauge transformations as $B_\mu[\xi|t: s] \to i v \bar \nabla_\mu [\xi(t:s)]v^{-1} $ and $E_{\mu\nu}[\xi( t_1, t_2: s_1, s_2 )]$ transforms under these gauge transformations as $v E_{\mu\nu}[\xi (t_1, t_2: s_1, s_2 )] v^{-1}$. It may be noted that $ E_{\mu\nu}[\xi(t_1, t_2: s_1, s_2)]$ again satisfies a functional Bianchi identity. Now we define $\bar \nabla _\mu [\xi(t_1: s_1)]^* [\bar \nabla _\nu [\xi(t_2: s_2)], \bar \nabla _\rho [\xi(t_3: s_3)]]$ as follows, \begin{eqnarray}&& \bar \nabla _\mu [\xi(t_1: s_1)]^* [\bar \nabla _\nu [\xi(t_2: s_2)], \bar \nabla _\rho [\xi(t_3: s_3)]]\nonumber \\ &=& \bar \nabla _\rho [\xi(t_3: s_3)][\bar \nabla _\mu [\xi(t_1: s_1)], \bar \nabla _\nu [\xi(t_2: s_2)]]\nonumber \\ && + \bar \nabla _\nu [\xi(t_2:s_2)] [\bar \nabla _\rho [\xi(t_3: s_3)], \bar \nabla _\mu [\xi(t_1: s_1)]]\nonumber \\ && + \bar \nabla _\mu [\xi(t_1: s_1)] [\bar \nabla _\nu [\xi(t_2: s_2)], \bar \nabla _\rho [\xi(t_3:s_3)]].
\end{eqnarray} In order to prove the functional Bianchi identity for $ E_{\mu\nu}[\xi(t_1, t_2: s_1, s_2)]$, we expand $\bar \nabla _\mu [\xi(t_1: s_1)]^* [\bar \nabla _\nu [\xi(t_2:s_2)], \bar \nabla _\rho [\xi(t_3: s_3)]]$, as follows \begin{eqnarray} &&\bar \nabla _\mu [\xi(t_1: s_1)]^* [\bar \nabla _\nu [\xi(t_2:s_2)], \bar \nabla _\rho [\xi(t_3: s_3)]]\nonumber \\ &=& \left( \frac{\delta}{\delta \xi^\rho (t_3: s_3) } + i B _\rho [\xi|t_3: s_3]\right) \left( \frac{\delta}{\delta \xi^\mu (t_1: s_1) } + i B _\mu [\xi|t_1: s_1]\right) \nonumber \\ &&\times \left( \frac{\delta}{\delta \xi^\nu (t_2: s_2) } + i B _\nu [\xi|t_2: s_2]\right) \nonumber \\ && - \left( \frac{\delta}{\delta \xi^\rho (t_3: s_3) } + i B _\rho [\xi|t_3: s_3]\right) \left(\frac{\delta}{\delta \xi^\nu (t_2: s_2) } + i B _\nu [\xi|t_2: s_2]\right) \nonumber \\ &&\times \left( \frac{\delta}{\delta \xi^\mu (t_1: s_1) } + i B _\mu [\xi|t_1: s_1]\right) \nonumber \\ && + \left( \frac{\delta}{\delta \xi^\nu (t_2: s_2) } + i B _\nu [\xi|t_2: s_2]\right)\left( \frac{\delta}{\delta \xi^\rho (t_3: s_3) } + i B _\rho [\xi|t_3: s_3]\right) \nonumber \\ &&\times \left( \frac{\delta}{\delta \xi^\mu (t_1: s_1) } + i B _\mu [\xi|t_1: s_1]\right) \nonumber \\ && - \left( \frac{\delta}{\delta \xi^\nu (t_2: s_2) } + i B _\nu [\xi|t_2: s_2]\right)\left( \frac{\delta}{\delta \xi^\mu (t_1: s_1) } + i B _\mu [\xi|t_1: s_1]\right) \nonumber \\ &&\times \left( \frac{\delta}{\delta \xi^\rho (t_3: s_3) } + i B _\rho [\xi|t_3: s_3]\right) \nonumber \\ && + \left( \frac{\delta}{\delta \xi^\mu (t_1: s_1) } + i B _\mu [\xi|t_1: s_1]\right) \left( \frac{\delta}{\delta \xi^\nu (t_2: s_2) } + i B _\nu [\xi|t_2: s_2]\right) \nonumber \\ &&\times \left(\frac{\delta}{\delta \xi^\rho (t_3: s_3) } + i B _\rho [\xi|t_3: s_3]\right) \nonumber \\ &&- \left( \frac{\delta}{\delta \xi^\mu (t_1: s_1) } + i B _\mu [\xi|t_1: s_1]\right) \left(\frac{\delta}{\delta \xi^\rho (t_3: s_3) } + i B _\rho [\xi|t_3: s_3]\right)\nonumber \\ &&\times \left( \frac{\delta}{\delta \xi^\nu (t_2: s_2) } + i B _\nu [\xi|t_2: s_2]\right) \nonumber \\ &=& 0, \end{eqnarray} where \begin{eqnarray} && \frac{\delta}{\delta \xi^\mu (t_1 : s_1) } + i B_\mu [\xi|t_1 : s_1] \nonumber \\&=& \begin{pmatrix} \frac{\delta}{\delta \xi^\mu (t_1 : s_1) }+ i B_\mu[ \xi|t_1 : s_1] _{11} & i B_\mu[ \xi|t_1 : s_1] _{12} \\ i B_\mu[ \xi|t_1 : s_1] _{21} & \frac{\delta}{\delta \xi^\mu (t_1 : s_1) }+ i B_\mu[ \xi|t_1 : s_1] _{22} \end{pmatrix}, \\ && \frac{\delta}{\delta \xi^\nu (t_2 : s_2) } + i B_\nu [\xi|t_2 : s_2] \nonumber \\&=& \begin{pmatrix} \frac{\delta}{\delta \xi^\nu (t_2 : s_2) }+ i B_\nu[ \xi|t_2 : s_2] _{11} & i B_\nu[ \xi|t_2 : s_2] _{12} \\ i B_\nu[ \xi|t_2 : s_2] _{21} & \frac{\delta}{\delta \xi^\nu (t_2 : s_2) }+ i B_\nu[ \xi|t_2 : s_2] _{22} \end{pmatrix}, \\ && \frac{\delta}{\delta \xi^\rho (t_3 : s_3 ) } + i B_\rho [\xi|t_3 : s_3 ] \nonumber \\&=& \begin{pmatrix} \frac{\delta}{\delta \xi^\rho (t_3 : s_3 ) }+ i B_\rho[ \xi|t_3 : s_3 ] _{11} & i B_\rho[ \xi|t_3 : s_3 ] _{12} \\ i B_\rho[ \xi|t_3 : s_3 ] _{21} & \frac{\delta}{\delta \xi^\rho (t_3 : s_3 ) }+ i B_\rho[ \xi|t_3 : s_3 ] _{22} \end{pmatrix}. \end{eqnarray} Thus, we get $\bar \nabla _\mu [\xi(t_1: s_1)]^* [\bar \nabla _\nu [\xi(t_2: s_2)], \bar \nabla _\rho [\xi(t_3: s_3)]] =0$. Since the curvature of this space is generated by the commutator of the functional covariant derivatives, this is precisely the statement that the functional Bianchi identity is satisfied for this space.
So, we can write $ \bar \nabla _\rho [\xi(t_3: s_3)]E_{\mu\nu}[\xi( t_1, t_2: s_1, s_2)] +\bar \nabla _\mu [\xi(t_1: s_1)]E_{\nu\rho}[\xi ( t_2, t_3: s_2, s_3)] +\bar \nabla _\nu [\xi(t_2: s_2)]E_{\rho\mu}[\xi ( t_3, t_1: s_3, s_1)] =0 $. \section{Topological Defects} Both the ABJM theory and the ABJ theory have $\mathcal{N} =6$ supersymmetry. However, it is expected that for the ABJM theory at Chern-Simons levels $k =1, 2$ this supersymmetry will get enhanced to $\mathcal{N} =8$ supersymmetry \cite{mp, mp1, mp2, mp0}. In this supersymmetry enhancement an important role is played by monopole operators. Thus, it is important to understand the role of monopoles in the ABJM theory. In this section we will analyse monopoles in the ABJ theory. We will also study a topological defect in the loop space, which is similar to a monopole in spacetime. To begin, we first note that whenever $\mathcal{F}$ is derivable from the superconnection $\mathcal{A}$, the Bianchi identity for $\mathcal{F}$ will be satisfied, $\nabla ^* \mathcal{F}=0$. As the curvature of the loop space is proportional to the Bianchi identity, it will vanish whenever $\mathcal{F}$ is derivable from the superconnection $\mathcal{A}$. However, at a point where the loop intersects the world-line of a monopole, $\mathcal{F}$ will not be derivable from the superconnection $\mathcal{A}$ and the Bianchi identity will not hold. Thus, the above argument will not hold and the curvature can acquire a non-zero value. In other words, if $ G_{\mu\nu}[\xi( s_1, s_2)] \neq 0$ then $\nabla ^* \mathcal{F}\neq 0$ and the loop intersects the world-line of a monopole. As $\Theta $ measures the total change in the loop as $t = 0 \to 2 \pi$, if a monopole is present it will not wind fully around the gauge group. However, in the absence of a monopole, it will wind fully around the gauge group. Thus, we can write \cite{p4} $\Theta = \zeta I$, where $\zeta$ is the monopole charge of the ABJ theory enclosed by the surface $\Sigma$. Thus, the monopole charge corresponds to a loop in the loop space for the ABJ theory. If the loop passes through a monopole, then at the value of $s_1$ where the loop $\xi(s_1)$ intersects the monopole world-line $Y(s_3)$, the curvature will not vanish. In fact, it will be given by \cite{p4} \begin{eqnarray} G_{\mu\nu}[\xi ( s_1, s_2 )] &=& - \pi \int ds_3 \kappa [\xi|s_1]\epsilon_{\mu\nu \rho \tau} \frac{d \xi^\rho (s_1)}{ds_1} \frac{d\xi^\tau (s_3)}{ds_3} \nonumber \\ && \times \delta^3 (\xi (s_1) - Y(s_3)) \delta(s_1-s_2). \end{eqnarray} Here $\kappa [\xi |s_1]$ satisfies $\exp i \pi \kappa = \zeta$, where $\zeta$ is the charge carried by the monopole moving along the world-line $Y(s_3)$. We have observed that even when monopoles are present, a functional Bianchi identity for the curvature in the loop space is satisfied. Furthermore, for the space of loop of loops, consider a small closed circuit: we first deform in one direction, then in a second direction, then undo the first deformation and finally undo the second, returning to the starting point. In doing so we complete a full circuit, and the total change in phase generated in the process is proportional to the curvature $-i E_{\mu\nu} [\xi (t_1, t_2: s_1, s_2)]\, \delta \xi^\mu (t_1: s_1)\, \delta\xi^\nu (t_2: s_2)$.
Now we calculate the quantity given by $\Theta^{-1}[ \xi_2] \Theta [\xi_3] - \Theta [\xi] \Theta [\xi_1] $, where $\xi_1 ^{\mu}[t:s] = \xi^{\mu}[t:s] + \delta \xi^{\mu}[t:s], \xi_2^{\mu}[t:s] = \xi^{\mu}[t:s] + \delta' \xi^{\mu}[t:s], \xi_3^{\mu}[t:s] = \xi_1^{\mu}[t:s] + \delta'\xi^{\mu}[t:s]$. Now we can write \begin{eqnarray} \Theta [\xi_1] &=& \Theta[\xi] - ig \int dt \int ds_1 ds_2 \Theta[ \xi (2\pi, t)] G_{\mu\nu } [\xi( t: s_1, s_2)] \nonumber \\&& \times \frac{\partial \xi^\nu (t: s_2)}{\partial t} \delta \xi^\mu (t:s_1) \Theta[\xi(t, 0)]. \end{eqnarray} We also have \begin{eqnarray} \Theta [\xi_2] &=& \Theta[\xi] - ig \int dt \int ds_1 ds_2 \Theta[ \xi (2\pi, t)] G_{\mu\nu } [\xi( t: s_1, s_2)] \nonumber \\&& \times \frac{\partial \xi^\nu (t: s_2)}{\partial t} \delta' \xi^\mu (t:s_1) \Theta[\xi(t, 0)]. \end{eqnarray} Finally, we have \begin{eqnarray} \Theta [\xi_3] &=& \Theta[\xi_1] - ig \int dt \int ds_1 ds_2 \Theta[ \xi_1 (2\pi, t)] G_{\mu\nu } [\xi_1( t: s_1, s_2)] \nonumber \\&& \times \frac{\partial \xi^\nu_1 (t: s_2)}{\partial t} \delta' \xi^\mu_1 (t:s_1) \Theta[\xi_1(t, 0)]. \end{eqnarray} We also note that \begin{eqnarray} G_{\mu\nu}[\xi_1(t: s_1, s_2)] &=& \int ds_3 \frac{\delta }{\delta \xi^{\rho} (t: s_3)} G_{\mu\nu}[\xi(t: s_1, s_2)] \delta \xi^{\rho} (t: s_3) \nonumber \\ && + G_{\mu\nu}[\xi(t: s_1, s_2)]. \end{eqnarray} Now, collecting all the terms, $E_{\mu\nu}[\xi( t_1, t_2: s_1, s_2)]$ is given by \begin{eqnarray} E_{\mu\nu}[\xi( t_1, t_2: s_1, s_2)] &=&\int ds_3 \Theta^{-1}[\xi(t_1, 0) ] [ \nabla_\rho [\xi(t_1: s_3)]G_{\mu\nu}[\xi( t_1: s_1, s_2)] \nonumber \\ && +\nabla_\mu [\xi(t_1: s_1)]G_{\nu\rho}[\xi ( t_1: s_2, s_3)] \nonumber \\ && +\nabla_\nu [\xi(t_1: s_2)] G_{\rho\mu}[\xi ( t_1: s_3, s_1)]] \nonumber \\ && \times \Theta[\xi(t_1, 0) ] \frac{\partial \xi^{\rho}(t_1: s_3)}{\partial t_1}\delta(t_1 -t_2). \end{eqnarray} So, the curvature of the space of loop of loops is proportional to the functional Bianchi identity, $\nabla_\rho [\xi(t_1: s_3)]G_{\mu\nu}[\xi( t_1: s_1, s_2)] +\nabla_\mu [\xi(t_1: s_1)]G_{\nu\rho}[\xi ( t_1: s_2, s_3)] +\nabla_\nu [\xi(t_1: s_2)] G_{\rho\mu}[\xi ( t_1: s_3, s_1)]$. If this functional Bianchi identity is satisfied, then this curvature vanishes. However, one can envisage a singularity in loop space which is similar to the singularity in space-time giving a monopole. If such a monopole-like defect occurs, such that the functional Bianchi identity is not satisfied, then the curvature for loop of loops will not be zero. It would be interesting to see what the possible implications of such a topological defect would be. It may be noted that even in the presence of such a defect, the functional Bianchi identity for $E_{\mu\nu}[\xi( t_1, t_2: s_1, s_2)]$ holds, $ \bar\nabla_\rho [\xi(t_3: s_3)]E_{\mu\nu}[\xi( t_1, t_2: s_1, s_2)] +\bar\nabla_\mu [\xi(t_1:s_1)]E_{\nu\rho}[\xi ( t_2, t_3: s_2, s_3)] +\bar \nabla_\nu [\xi(t_2: s_2)]E_{\rho\mu}[\xi ( t_3, t_1: s_3, s_1)] =0 $. \section{Conclusion} In this paper we analysed Polyakov loops for the ABJ theory. This was done by first constructing the loop variables in terms of a superconnection. The fermions coupled to the loop in the bi-fundamental representation. Then a connection for this loop was constructed, and a curvature tensor was constructed from this connection. This curvature tensor was found to be proportional to the Bianchi identities and thus vanished when they were satisfied.
As the Bianchi identities are not satisfied in the presence of a monopole, this curvature tensor has a non-vanishing value in the presence of a monopole. A space of loop of loops was also constructed and used as a measure for the monopole charge of the ABJ theory. We also constructed the curvature and connection for this space of loop of loops. Furthermore, certain topological defects in the loop space were analysed using the curvature of the space of loop of loops. It would be interesting to investigate further the physical implications of topological defects in the loop space. If a loop in the loop space is really the quantity which captures real physics, then it seems likely that a topological defect in this space of loop of loops can have real physical meaning. Furthermore, it would be interesting to see how far we can go with such constructions. We have already constructed the functional Bianchi identity for the space of loop of loops. The next natural question to analyse is the existence of topological defects in this space which violate this functional Bianchi identity. Then we can perform a similar analysis for those defects too. If we consider the ABJM theory and give a vacuum expectation value to one of the scalar fields, then we arrive at the action for multiple D2-branes \cite{d2,sxsw, xswd, d21}. In this mechanism the gauge group $U(N) \times U(N)$ is broken down to its diagonal subgroup. The theory thus obtained is a Yang-Mills theory coupled to matter fields. It would be interesting to analyse the Polyakov loops for the ABJM theory, after a vacuum expectation value is given to one of the scalar fields. It would be expected that the Polyakov loops for the ABJM theory in this case will reduce to the Polyakov loops for the D2-branes. Furthermore, it will be interesting to analyse what happens to the monopole charge of the ABJM theory, after a vacuum expectation value is given to one of the scalar fields. Wilson loops in generic tensor representations in IIB string theory are dual to D3-branes \cite{dual}. In fact, they are also dual to D5-branes \cite{2dual}. It is thus expected that string like objects in $AdS_4 \times CP^3$ will be dual to Wilson loops in different representations of $SU(N|M)$ \cite{dual2}. It would be interesting to construct such operators for the ABJM theory and use them for perturbative calculations. It will also be interesting to analyse what happens to this duality when the Wilson loops are replaced by Polyakov loops. A system of M2-branes ending on other objects in M-theory has been studied by analysing the ABJM theory and the BLG theory on a manifold with boundaries \cite{mf, mf2, mf1}. Boundary conditions for the M2-branes ending on M5-branes, M9-branes and gravitational waves have been studied \cite{BCIntMem}. Furthermore, a background flux can exist in M-theory. Boundary conditions for M2-branes in the presence of a background flux have also been discussed \cite{ChuSehmbi}. It is possible to learn about the physics of M5-branes by studying a system of M2-branes ending on them. A novel quantum geometry on M5-branes has been studied by analysing a system of M2-branes ending on an M5-brane with constant $C$-field \cite{d12}. The BLG theory was used to study this novel geometry. In fact, the BLG theory with Nambu-Poisson $3$-bracket has been identified with the M5-brane action with a large worldvolume C-field \cite{M5BLG}.
A non-commutative string theory on the M5-brane worldvolume has been derived using the action of a single M2-brane \cite{NCS1, NCS2, NCS3}. It would be interesting to analyse the boundary effects for fractional M2-branes using the ABJ theory. It would also be interesting to analyse topological defects in this theory using Polyakov loops.
\subsection*{Acknowledgements} The authors are indebted to Ninoslav Bralic and Olivier Espinosa for enlightening discussions. The work of (CAD) has been supported in part by the Foundation for Research Development and Fundaci\'{o}n Andes, and that of (ML) by FONDECYT 0751/92. This work has been performed in the framework of the FRD-CONICYT Scientific Cooperation Program. (CAD) wishes to thank the Facultad de Fisica, Pontificia Universidad Cat\'{o}lica de Chile for their kind hospitality.
\section{Introduction}\label{sec:intro} We consider the incompressible Navier--Stokes equations (NSE) in a three-dimensional domain $\Omega=[0,L]^3$, equipped with the space-periodic boundary condition. The NSE, which are the governing equations of motion of a viscous, incompressible, Newtonian fluid, are given by \begin{subequations} \begin{align*} &\frac{\partial {u}}{\partial t}-\nu \Delta {u}+\left({u} \cdot \nabla\right) {u}+\frac{1}{\rho}\nabla p=0,\\ &\nabla \cdot {u}=0,\\ &{u}(x, 0)={u}^0 (x), \end{align*} \end{subequations} where $x=(x_1, x_2, x_3) \in \Omega$, ${u}(x,t)=(u_1, u_2, u_3)$ is the unknown velocity of the fluid, $u^0=(u^0_1, u^0_2, u^0_3)$ is the initial velocity, $\nu>0$ is the kinematic viscosity of the fluid, $\rho$ is the density, and $p$ the unknown pressure. The incompressibility constraint is manifested in the divergence-free condition $\nabla \cdot u=0$. Recently, several authors \cite{benameur2010blow, cheskidov2016lower, CM2018, cortissoz2014lower, mccormick2016lower, robinson2012lower} have obtained ``optimal'' existence times, and the associated blow-up rates, assuming they exist, for solutions of the 3D NSE in Sobolev spaces $H^s, s>\frac{1}{2}$. In particular, in \cite{robinson2012lower}, by employing a scaling argument, Robinson, Sadowski and Silva established that the optimal existence time of a (strong) solution of the NSE in the whole space $\mathbb{R}^3$, for initial data in $H^s, s>\frac{1}{2}$, is necessarily given by \begin{equation} \label{optimaltime} T(u_0) \gtrsim \frac{1}{\|u_0\|_{H^s}^{\frac{4}{2s-1}}}. \end{equation} The optimality refers to the fact that if one establishes an existence time which depends solely on $\|u_0\|_{H^s}$ and which is better than \eqref{optimaltime}, i.e., has the form $T \gtrsim \frac{1}{\|u_0\|_{H^s}^{\gamma}}$ with $\gamma < \frac{4}{2s-1}$, then the NSE is globally well-posed in $H^s$. Observe that an existence time of the form \eqref{optimaltime} immediately yields the blow-up rate \[ \|u(t)\|_{H^s} \gtrsim \frac{1}{(T_*-t)^{\frac{2s-1}{4}}}\, , \] where $T_*<\infty$ is the putative blow-up time of $\|u(t)\|_{H^s}$. It follows from the optimality of the existence time that this blow-up rate is also optimal \cite{robinson2012lower}. In the same work \cite{robinson2012lower}, the authors obtained the following existence/persistence times in the space $H^s$, namely, \begin{equation} \label{robinsontime} T(u_0) \gtrsim \ \left\{ \begin{array}{l} \frac{1}{\|u_0\|_{H^s}^{\frac{4}{2s-1}}}, \quad \frac{1}{2} < s < \frac{5}{2},\quad s \neq \frac{3}{2},\\ \frac{1}{\|u_0\|_{H^s}^{\frac{5}{2s}}}, \quad s > \frac{5}{2}. \end{array} \right. \end{equation} Evidently, the existence time is optimal for $\frac{1}{2} < s < \frac{5}{2}, s \neq \frac{3}{2}$, while the existence time for $s > \frac{5}{2}$, though not optimal, is the best known to date.
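For orientation, we sketch the scaling heuristic behind the exponent $\frac{4}{2s-1}$ (a compressed version of the argument of \cite{robinson2012lower}, recorded here only for the reader's convenience). If $u$ solves the NSE on $\mathbb{R}^3$, then so does the rescaled field $u_\lambda(x,t)=\lambda u(\lambda x, \lambda^2 t)$, whose existence time satisfies $T(u_\lambda(\cdot,0))=\lambda^{-2}T(u(\cdot,0))$, while a change of variables in Fourier space gives \begin{align*} \|u_\lambda(\cdot,0)\|_{\dot{H}^s}=\lambda^{s-\frac{1}{2}}\|u(\cdot,0)\|_{\dot{H}^s}. \end{align*} An existence time of the form $T \gtrsim \|u_0\|_{\dot{H}^s}^{-\gamma}$ is compatible with this scaling only if $\lambda^{-2}=\lambda^{-\gamma(s-\frac{1}{2})}$ for all $\lambda>0$, which forces $\gamma=\frac{4}{2s-1}$.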
The borderline cases, namely $s=\frac{3}{2}$ and $s = \frac{5}{2}$, were subsequently considered by varying methods in \cite{cheskidov2016lower, CM2018, cortissoz2014lower, mccormick2016lower}, including Littlewood-Paley decomposition and other harmonic analysis tools, the upshot being that the optimal existence time $T \sim \frac{1}{\|u_0\|_{H^s}^2}$ also holds for $s=\frac{3}{2}$, while the optimal existence time in $H^{5/2}$ is still open. The purpose of our present work is to investigate to what extent the above-mentioned existence/persistence times (and the associated blow-up rates) hold if one considers the evolution of the NSE in an \emph{analytic Gevrey class}, equipped with the much stronger Gevrey norm which characterizes space analyticity, with the goal of obtaining sharper lower bounds on the space-analyticity radius of the solutions. In fluid dynamics, the space analyticity radius has an important physical interpretation: at this length scale, the viscous effects and the (nonlinear) inertial effects are roughly comparable, and below this length scale, the Fourier spectrum decays exponentially \cite{BJMT, dt, foias, hkr, hkr1, kuk}. In other words, the space analyticity radius yields a Kolmogorov-type \emph{dissipation length scale} encountered in conventional turbulence theory. The exponential decay property of high frequencies can be used to show that the finite-dimensional Galerkin approximations converge exponentially fast; see the estimate recorded after this paragraph. For instance, in the case of the complex Ginzburg-Landau equation, analyticity estimates are used in \cite{doel-t} to rigorously explain numerical observations that the solutions to this equation can be accurately represented by a very low-dimensional Galerkin approximation, and that the ``linear'' Galerkin approximation performs just as well as the nonlinear one. Furthermore, a surprising connection between a possible abrupt change in analyticity radius (which is necessarily shown to be intermittent in \cite{BiF} if it occurs) and (inverse) energy cascade in 3D turbulence was found in \cite{BiF}. Other applications of the analyticity radius occur in establishing sharp temporal decay rates of solutions in higher Sobolev norms \cite{biswas2012, oliver2000remark}, in establishing geometric regularity criteria for the Navier-Stokes and related equations, in measuring the spatial complexity of fluid flow \cite{bradshaw, g, kuk1}, and in the nodal parameterization of the attractor \cite{fr,fkr}. In a seminal work, Foias and Temam \cite{foias1989gevrey} pioneered the use of Gevrey norms for estimating the space analyticity radius for the Navier-Stokes equations, which was subsequently used by many authors (see \cite{biswas2012, biswas2010navier, biswas2007existence, cao1999navier, ferrari1998gevrey}, and the references therein); closely related approaches can be found in \cite{BGK, gk, gk2}. In this work, Foias and Temam showed that starting with initial data in $H^1$, one can control the much stronger Gevrey norm of the solution up to a time which is comparable to the optimal existence time of the strong solution in $H^1$. The Gevrey class approach enables one to avoid cumbersome recursive estimation of higher-order derivatives and is known to yield optimal estimates of the analyticity radius \cite{optimaltiti}.
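To make the Galerkin statement above concrete, we record an elementary and standard estimate (stated with notation that is formally introduced in Section~\ref{sec:results}): if the Gevrey norm $\|e^{\alpha A^{\frac{1}{2}}}u\|_{H^s}$ (defined precisely below) is finite, $\hat{u}(k)$ denote the Fourier coefficients of $u$, and $P_N$ denotes the projection onto the Fourier modes with $|k|\le N$, then \begin{align*} \|(I-P_N)u\|_{H^s}^2=\sum_{|k|>N}|k|^{2s}|\hat{u}(k)|^2 \le e^{-2\alpha N}\sum_{|k|>N}|k|^{2s}e^{2\alpha |k|}|\hat{u}(k)|^2 \le e^{-2\alpha N}\|e^{\alpha A^{\frac{1}{2}}}u\|_{H^s}^2, \end{align*} so the Galerkin truncation error decays exponentially in $N$, at a rate dictated by the analyticity radius $\alpha$.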
Other approaches to analyticity can be found in \cite{pav, masuda, miura2006} for the 3D NSE, \cite{Kalantarov} for the Navier-Stokes-Voight equation, \cite{dong1, dong} for the surface quasi-geostrophic equation, \cite{Ly} for the porous medium equation, and \cite{ang} for certain nonlinear analytic semi-flows. The (analytic) Gevrey norm of $u$ in the Sobolev space $H^s$, which we refer to as the \emph{Sobolev-Gevrey norm} here, is defined by $\|e^{\alpha A^{\frac{1}{2}}}u\|_{H^s}$, where $A$ is the Stokes operator. We recall that the norms $\|u\|_{H^s}$ and $\|A^{s/2}u\|_{L^2}$ are equivalent for mean-zero, divergence-free vector fields \cite{cf}. If $\|e^{\alpha A^{\frac{1}{2}}}u\|_{H^s} < \infty $, then $u$ is space-analytic and the uniform space analyticity radius of $u$ is bounded below by $\alpha$. We provide below a brief summary of, and comments on, our results. \begin{enumerate} \item Assume that the initial data satisfies $\|e^{\beta_0 A^{\frac{1}{2}}}u_0\|_{H^s} < \infty$ with $\beta_0 \ge 0$; $\beta_0=0$ corresponds to $u_0 \in H^s$. In this case, $\sup_{t \in [0,T]} \|e^{(\beta_0 +\beta t)A^{\frac{1}{2}}}u\|_{H^s} < \infty $ with $0\leq\beta \leq \frac{1}{2}$ for $T \sim \frac{1}{\|e^{\beta_0 A^{\frac{1}{2}}}u_0\|_{H^s}^{\frac{4}{2s-1}}}, \frac{1}{2} < s < \frac{3}{2}$ and $T \sim \frac{1}{\|e^{\beta_0 A^{\frac{1}{2}}}u_0\|_{H^s}^{2}}, s > \frac{3}{2}$ (see Theorem \ref{mainthem213}). The quantity $\beta t$ captures the gain in analyticity due to the dissipation. If we set $\beta=0$, then this gives a persistence time in the Gevrey class corresponding to $\beta_0$. Note that the time of persistence of the solution in the Gevrey class in this result coincides with the optimal time of existence \eqref{optimaltime} in the range $\frac{1}{2} < s < \frac{3}{2}$, but is far from optimal in the range $\frac{3}{2} < s < \frac{5}{2}$ and is also smaller than the best known existence time in Sobolev classes in case $s>\frac{5}{2}$ obtained in \cite{robinson2012lower}. The case $s=1$ is precisely the classical result of Foias and Temam \cite{foias1989gevrey}, while this result for $ \frac{1}{2} \le s < \frac{3}{2}$ was obtained using semigroup methods in \cite{BiSw, biswas2010navier}. We provide a proof of this result using an energy technique, mainly for completeness, but also to illustrate that one can, as a consequence, adapt a technique from \cite{dt} to obtain an improved estimate of the analyticity radius, which is possible by considering the evolution of the Gevrey norm in $H^s$ with $s>1$; see Theorem \ref{analyticthm} and Remark \ref{rmk:improvedanalyticity}. This provides one of our motivations for considering the evolution of the Gevrey norm in higher-order Sobolev spaces. \item Subsequently, in Theorem \ref{mainthem1} and Theorem \ref{mainthem2}, we improve the existence times in the Gevrey classes given in Theorem \ref{mainthem213} for $s$ in the range $s \ge \frac{3}{2},\ s \neq \frac{5}{2}$. The existence time in Gevrey classes obtained in Theorem \ref{mainthem2} for $\frac{3}{2} \le s <\frac{5}{2}$ is optimal, i.e., coincides with \eqref{optimaltime}, while the existence time obtained in Theorem \ref{mainthem1} for $s > \frac{5}{2}$ coincides with the best known existence time in Sobolev classes $H^s$ obtained in \cite{robinson2012lower}. In order to prove these results, we first obtain refined commutator estimates of the nonlinear term in Lemma \ref{lem1}, Lemma \ref{inequ3w} and Lemma \ref{inequ4w} which exploit their respective orthogonality properties.
These estimates are new to the best of our knowledge and are motivated by those in \cite{BiSQG, BMSSQG} obtained for the surface quasi-geostrophic equations. Using these estimates, for initial data in $H^s, s \ge \frac{3}{2}, s \neq \frac{5}{2}$, we show that $\sup_{t \in [0,T]}\|e^{\beta tA^{\frac{1}{2}}}u\|_{H^s} < \infty $, where $T$ is given as in \eqref{optimaltime} in the said range of $s$ (for large data). It is worth mentioning that the differential inequalities for the evolution of the Gevrey norms that one obtains in these cases are non-autonomous; the estimates of the existence times for these, given in Lemma \ref{lemmaonzeta} and Lemma \ref{lemmaonX}, though elementary, may be new as well. Moreover, in Corollary \ref{corollary2}, we give an alternate proof for the persistence in the Sobolev class $H^s$ for the entire range $\frac{1}{2} < s < \frac{5}{2}$, thus unifying the results in \cite{robinson2012lower} and \cite{cheskidov2016lower, cortissoz2014lower, mccormick2016lower} and showing that the case $s=\frac{3}{2}$ is not a borderline case in our approach. Furthermore, unlike in \cite{cheskidov2016lower, mccormick2016lower}, our method is elementary and avoids any harmonic analysis machinery such as paraproducts and Littlewood-Paley decomposition. \item The study of blow-up in Gevrey classes is of importance for the NSE, as it was shown in \cite{BiF} that in certain situations, an abrupt change in analyticity radius (which in turn is measured by a Gevrey norm) is indicative of a strong inverse energy cascade. The persistence time in Theorem \ref{mainthem213} (set $\beta=0, \beta_0>0$) readily yields a blow-up rate provided there exists a time $T_*$ at which the analyticity radius possibly decreases from $\beta_0$ (and consequently $\|e^{\beta_0 A^{\frac{1}{2}}}u(t)\|_{H^s}$ blows up as $t$ approaches $T_*$). This is substantially different from the blow-up of a sub-analytic Gevrey norm studied in \cite{benameur2014exponential,benameur2016blow}. As we show in Corollary \ref{corollary1}, a blow-up of a sub-analytic Gevrey norm can only occur if the solution itself loses regularity; whether or not a solution loses regularity is precisely one of the millennium problems. In other words, for a globally regular solution, persistence in a sub-analytic Gevrey class is guaranteed for all times. However, this is not necessarily the case for analytic Gevrey norms. For instance, it is not difficult to show that for the forced NSE, there exist a body force and an initial datum $u_0$ in a Gevrey class such that the solution exists globally in $H^s$ while a Gevrey norm of the form $\|e^{(\beta_0 +\beta t)A^{\frac{1}{2}}}u\|_{H^s}$ blows up in finite time. This is due to the restriction posed on the solution by the analyticity radius of the driving force. To the best of our knowledge, however, an example of such a phenomenon in the unforced case is unknown. Therefore it is of interest to determine the blow-up rate in Gevrey classes even for solutions that are globally regular. Although our Theorem \ref{mainthem213} provides a blow-up rate, this may not be optimal for $s > \frac{3}{2}$. At the very least, the blow-up rate provided in \eqref{norm13} does not correspond to the best known rate in Sobolev classes, e.g., in \cite{robinson2012lower}. We leave it as an \emph{open problem} to determine whether these rates can be matched.
Although we obtain existence times for Gevrey classes that match the existence times in \cite{robinson2012lower, cheskidov2016lower, mccormick2016lower} in Theorem \ref{mainthem1} and Theorem \ref{mainthem2}, they are for time-varying Gevrey classes defined by $\|e^{(\beta t)A^{\frac{1}{2}}}u\|_{H^s}$, i.e., $\beta_0=0$, and therefore $u_0 \in H^s$. A similar result on the existence time for $\beta_0>0$ would yield an improvement of the blow-up rate in Gevrey classes. This is an open problem as well. \end{enumerate} \section{Main results}\label{sec:results} Before describing our main results, we first establish some notation, concepts, and settings. Using the notation $\displaystyle \kappa_0=\frac{2\pi}{L}$, define the dimensionless length, time, velocity, and pressure variables \begin{align*} \tilde{x}=\kappa_0 x, \quad \tilde{t}=\nu \kappa_0^2 t, \quad \tilde{u}=\frac{u}{\nu \kappa_0}, \quad \tilde{p}=\frac{p}{\rho \nu^2 \kappa_0^2}.
\end{align*} Under this transformation, the NSE become \begin{subequations} \begin{align*} &\frac{\partial {\tilde{u}}}{\partial \tilde{t}}-\tilde{\Delta} {\tilde{u}}+\left({\tilde{u}} \cdot \tilde{\nabla}\right) {\tilde{u}}+\tilde{\nabla} \tilde{p}=0, \\ &\tilde{\nabla} \cdot {\tilde{u}}=0, \\ &\tilde{u}(x, 0)=\tilde{u}^0 (x), \end{align*} \end{subequations} where $\tilde{\Delta}$ and $\tilde{\nabla}$ denote the Laplacian and gradient operators with respect to the tilde variables. Henceforth, for simplicity, we assume that $\nu=1$, $L=2\pi$, $\rho=1$, and $\kappa_0=\frac{2\pi}{L}=1$. Dropping the tildes, we have the dimensionless version of the NSE, \begin{subequations}\label{nse} \begin{align} &\frac{\partial {{u}}}{\partial {t}}-{\Delta} {{u}}+\left({{u}} \cdot {\nabla}\right) {{u}}+{\nabla} p=0,\label{nse1} \\ &{\nabla} \cdot {{u}}=0,\label{nse2} \\ &{u}(x, 0)={u}^0 (x).\label{nse3} \end{align} \end{subequations} Moreover, we will focus on $\Omega=[0, 2\pi]^3$ and, employing the Galilean invariance of the NSE, take $u$ to be mean-free, i.e., $\displaystyle \int_{\Omega}u=0.$ In this paper, we are interested in investigating the existence times of strong solutions of the three-dimensional Navier-Stokes equations in time-varying analytic Gevrey classes based on the Sobolev spaces $H^s, s> \frac{1}{2}$. The results vary as the value of $s$ changes. \subsection{Functional analytic framework} With $\Omega=[0, 2\pi]^3$, we denote by $\dot{L}^2(\Omega)$ the Hilbert space of all $L$-periodic functions from $\mathbb{R}^3$ to $\mathbb{R}^3$ that are square integrable on $\Omega$ with respect to the Lebesgue measure and mean-free. The scalar product is taken to be the usual $L^2(\Omega)$ inner product \begin{align*} (u,v)=\int_{\Omega} u(x)\cdot v(x) dx, \end{align*} and we denote \begin{align*} \|u\|_{L^2}=(u,u)^{1/2}. \end{align*} The real separable Hilbert space $\mathit{H}$ is formed by the set of all $\mathbb{R}^3$-valued functions $u(x), x\in \mathbb{R}^3$, which have the Fourier expansion \begin{align*} u(x)=\sum_{k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}}\hat{u}(k)e^{ik\cdot x} \quad \mbox{(with}\, \hat{u}(0)=0\,\mbox{),} \end{align*} where the Fourier coefficients $\hat{u}(k)\in \mathbb{C}^3$, for all $k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}$, satisfy \begin{equation*} \hat{u}(k)= \overline{\hat{u}(-k)},\ k \cdot \hat{u}(k)=0,\ \mbox{for all}\ k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \},\ \mbox{and}\ \|u\|_{L^2}^2= \sum_{k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}}|\hat{u}(k)|^2 < \infty. \end{equation*} For $s\geq 0$, the space $ \dot{H}^s (\Omega)$ is defined by \begin{align*} \dot{H}^s (\Omega)= \left \{u\in H: u=\sum_{k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}}\hat{u}(k)e^{ik\cdot x},\ \|u\|_{H^s(\Omega)}^2=\sum |k|^{2s} |\hat{u}(k)|^2<\infty \right \}. \end{align*} For simplicity, we denote $\|\cdot\|_{\dot{H}^s(\Omega)}$ as $\|\cdot\|_{s}$.
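We record here an elementary observation that is used repeatedly in the sequel (it is the Poincar\'{e}-type inequality invoked in the proofs below): since the mean-free condition removes the $k=0$ mode and $\kappa_0=1$, every active wavenumber satisfies $|k|\geq 1$, and hence, for $s_1\leq s_2$, \begin{align*} \|u\|_{s_1}^2=\sum_{k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}}|k|^{2s_1}|\hat{u}(k)|^2\leq \sum_{k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}}|k|^{2s_2}|\hat{u}(k)|^2=\|u\|_{s_2}^2. \end{align*} The same monotonicity holds for the Gevrey norms introduced below, as the exponential weight $e^{2\alpha |k|^{\theta}}$ is common to both sides.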
For $s< 0$, the space $ \dot{H}^s (\Omega)$ is defined to be the dual of $\dot{H}^{|s|} (\Omega)$. The $l^1$-type norm of the Fourier coefficients is given by \begin{align*} \|u\|_{F^s(\Omega)}=\sum_{k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}} |k|^{s}|\hat{{u}}(k)|. \end{align*} We write $\|u\|_{F}$ for $\|u\|_{F^0}$. It is easy to see that the spaces $F^s(\Omega)$ form an algebra under multiplication; $F^0(\Omega)$ is referred to as the Wiener algebra \cite{BJMT}. \subsubsection{Gevrey class of functions} We say that a function $u \in C^\infty(\Omega)$ is in the Gevrey class $Gev(\alpha;\theta)$ if \begin{equation} \label{def:gevrey} |\partial^\mbf{m} u(x)|\le M \left(\frac{\mbf m!}{\alpha^{|\mbf m|}}\right)^\theta\ \forall\ x \in \Omega, \end{equation} where $\mbf m=(m_1,\cdots,m_n) \in {\mathbb N}^n$ is a multi-index, $\mbf m !=m_1!\cdots m_n!$ and $|\mbf m |=\sum_{i=1}^nm_i$. The analytic Gevrey class corresponds to $\theta=1$, in which case the function $u$ is real analytic with \emph{uniform analyticity radius} $\alpha$ for all $x \in \Omega$. In case $0<\theta <1$, the functions are called sub-analytic. For a function $u \in H$, its Gevrey norm is defined by \begin{align*} \|u\|_{s, \alpha;\theta}=\|A^{\frac{s}{2}}e^{\alpha A^{\frac{\theta}{2}}}u\|_{L^2}=\|e^{\alpha A^{\frac{\theta}{2}}}u\|_s=\left (\sum_{k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}} |k|^{2s} e^{2\alpha|k|^\theta}|\hat{u}(k)|^2\right )^{1/2}, \end{align*} where $\alpha>0$. The connection between the Gevrey class and the Gevrey norm is given by the fact that \eqref{def:gevrey} holds for all $x \in \Omega$ if and only if $\|u\|_{s,\alpha;\theta} < \infty$ \cite{oliver2000remark,optimaltiti}. In case $\theta=1$, this is equivalent to the fact that $u$ is real analytic with uniform radius of real analyticity $\alpha$. We will denote \[ Gv(s,\alpha;\theta)=\left\{u \in H: \|u\|_{s,\alpha;\theta} < \infty \right\}, \] and in case $\theta=1$, for simplicity, we will write $Gv(s,\alpha)$ instead of $Gv(s,\alpha;1)$ and will denote $\|u\|_{s, \alpha;1}$ as $\|u\|_{s, \alpha}.$ Clearly, \[ Gv(s,\alpha)\subsetneq Gv(s,\alpha;\theta) \subsetneq \dot{H}^m(\Omega)\ \mbox{for all}\ 0<\theta<1, s \in \mathbb{R}, m \in \mathbb{R}_+. \] If $u \in Gv(s,\alpha)$, then clearly \[ |\hat{u}(k)| \le e^{-\alpha |k|}\|u\|_{s,\alpha}, \] and therefore the uniform analyticity radius $\alpha$ establishes a length scale below which the Fourier power spectrum decays exponentially, which in turn relates it to the Kolmogorov decay length scale in turbulence theory \cite{BJMT, dt}. The \emph{maximal analyticity radius} of a function $u \in H^s$ is defined by \[ \lambda_{max}(u)= \sup \{ \alpha \ge 0: \|u\|_{s,\alpha} < \infty\}. \] One can easily check that $\lambda_{max}(u)$ is independent of $s$. \subsection{The functional differential equation} Let $\Pi$ be the orthogonal (Leray) projection from $L^2$ onto the subspace of $L^2$ consisting of those vector fields that are divergence-free in the weak sense, and let $A$ be the Stokes operator, defined as \begin{align} \label{defA} A=-\Pi \Delta.
\end{align} $B$ is the bilinear form defined by \begin{align} \label{defB} B(u,u)=\Pi \left [({u} \cdot \nabla) {u}\right ]. \end{align} Then the functional form of the NSE can be written as \begin{align} \label{functional_form} \frac{du}{dt}+\mathit{A}u+\mathit{B}(u,u)=0. \end{align} \subsection{Main results} We will now present our main results. Throughout, $c$ denotes a dimensionless constant independent of $s$, while dimensionless constants which depend on $s$ are denoted by $c_s$. \begin{theorem} \label{mainthem213} Let $u$ be a strong solution of \eqref{nse} with initial condition $u^0\in Gv(s, \beta_0)(\Omega)$, for some $s>\frac{1}{2}$, $\beta_0\geq 0$, and $0\leq\beta \leq \frac{1}{2}$. If $\|u^0\|_{s, \beta_0}\leq c_s$, then $\sup_{t\in[0,\infty)} \|u\|_{s, \beta_0+\beta t}<\infty$. If $\|u^0\|_{s, \beta_0}> c_s$, define \begin{align*} T^{\ast}=\sup \left\{T>0 \:|\: \sup_{t\in[0,T]}\|e^{(\beta_0+\beta t) A^{\frac{1}{2}}}u(t)\|_{s} < \infty \right\}. \end{align*} We have \begin{align}\label{t13} T^{\ast} \gtrsim \ \left\{ \begin{array}{l} \frac{1}{\|u^0\|^{\frac{4}{2s-1}}_{s, \beta_0}}, \quad \frac{1}{2} < s < \frac{3}{2},\\ \frac{1}{\|u^0\|^{2}_{s, \beta_0}}, \quad s > \frac{3}{2}. \end{array} \right. \end{align} Moreover, if $T^{\ast} < \infty$, then $\|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u(t)\|_{s}$ blows up at the following rate: \begin{align}\label{norm13} \|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u(t)\|_{{s}} \gtrsim \ \left\{ \begin{array}{l} \frac{1}{ (T^{\ast}-t)^{\frac{2{s}-1}{4}}}, \quad \frac{1}{2} < s < \frac{3}{2},\\ \frac{1 }{ (T^{\ast}-t)^{\frac{1}{2}}}, \quad s > \frac{3}{2}. \end{array} \right. \end{align} \end{theorem} Proceeding as in \cite{BiF, dt}, we can optimize over the choice of $\beta$ to obtain a better lower estimate of the analyticity radius. \begin{theorem} \label{analyticthm} Let $u$ be a strong solution of \eqref{nse} with initial condition $u^0\in Gv(s, \beta_0)(\Omega)$, for some $\frac{1}{2}<s<\frac{3}{2}$ and $\beta_0,\ \beta\geq 0$. For $t\in [0, t^{\ast})$, \begin{align*} \|u\|_{s, \beta_0+\beta t} \leq \frac{e^{\frac{\beta^2}{2}t}\|u(0)\|_{s, \beta_0}}{\left(1-\frac{2c_s}{\beta^2}\|u(0)\|_{s, \beta_0}^{\frac{4}{2s-1}}\left(e^{\frac{2\beta^2}{2s-1}t}-1\right)\right)^{\frac{2s-1}{4}}}, \end{align*} where \begin{align*} t^{\ast}=\frac{2s-1}{2\beta^2}\log\left(1+\frac{\beta^2}{2c_s \|u(0)\|_{s, \beta_0}^{\frac{4}{2s-1}}}\right). \end{align*} Moreover, for the optimal choice of $\beta=\sqrt{2c_s}\|u(0)\|_{s, \beta_0}^{\frac{2}{2s-1}}\varsigma,$ with $\varsigma$ being the positive solution of $-\frac{1}{2\varsigma^2}\log(1+\varsigma^2)+\frac{1}{1+\varsigma^2}=0$, a lower estimate of the analyticity radius is given by $$\lambda_{max}(u(t^*))\ge \beta_0+c_s\frac{1}{\|u(0)\|_{s, \beta_0}^{\frac{2}{2s-1}}}.$$ \end{theorem} \begin{remark} \label{rmk:improvedanalyticity} \emph{ Let $u_0=\sum_{N \le |k|\le cN}\hat{u}(k)e^{i k \cdot x}$ for some $c \ge 1$, with $\sum_k |\hat{u}(k)|^2 =1$, and observe that $\|u_0\|_s \sim N^s$.
Then, by Theorem \ref{analyticthm}, the lower estimate of the (gain in) analyticity radius is given by ${\displaystyle \frac{c_s}{N^{\frac{2s}{2s-1}}}}$. The lower estimate in \cite{dt} in this case is ${\displaystyle \frac{c_1}{N^2}}$, which corresponds to $s=1$. Clearly, this lower estimate improves in our case if one considers $1<s<\frac{3}{2}$. However, one cannot take the limit as $s \nearrow \frac{3}{2}$ in this estimate, as $c_s \rightarrow 0$. } \end{remark} \begin{corollary}\label{corollary1} Let $u$ be a strong solution of \eqref{nse} with initial condition $u^0\in Gv(s,r_0;\theta)$, for some $s>\frac{1}{2}, r_0>0,$ and $0<\theta <1$. Let \begin{align*} T^{\ddagger}=\sup \left\{T>0 \:|\: \sup_{t\in[0,T]} \|e^{r_0 A^{\frac{\theta}{2}}}u(t)\|_{s} < \infty \right\}. \end{align*} If $T^{\ddagger} < \infty$, then $\lim_{t \nearrow T^{\ddagger}}\|u(t)\|_{s'}= \infty $ for any $s' > \frac{1}{2}$. Moreover, $\|u(t)\|_{Gv(s,r_0;\theta)}$ blows up at an exponential rate at $T^\ddagger$. \end{corollary} \begin{theorem} \label{mainthem1} Let $u$ be a strong solution of the Navier--Stokes equations \eqref{nse} with initial condition $u^0\in \dot{H}^{s}(\Omega)$, for some $s>\frac{5}{2}$. Let $0<\beta \leq \frac{1}{2}$, and define \begin{align*} T^{\ast}=\sup \big\{T>0 \:|\: \sup_{t\in[0,T]}\|e^{\beta t A^{\frac{1}{2}}}u(t)\|_{s} < \infty \big\}. \end{align*} (i) If $$\frac{\|u^0\|_s}{\|u^0\|_{L^2}}\geq c_s \beta^{-\frac{4s}{5}} \min \left\{1, \|u^0\|_{L^2}^{-\frac{2s}{5}}\right \},$$ then \begin{align*} T^{\ast}>c_s \min \left\{1, \ \|u^0\|_{L^2}^{-1}\right \}\left(\frac{\|u^0\|_s}{\|u^0\|_{L^2}}\right)^{-\frac{5}{2s}}. \end{align*} (ii) If $$\frac{\|u^0\|_s}{\|u^0\|_{L^2}}< c_s \beta^{-\frac{4s}{5}} \min \left\{1, \|u^0\|_{L^2}^{-\frac{2s}{5}}\right \},$$ then \begin{align*} T^{\ast}>\min\left\{\tilde{Z}, \tilde{Z}^{2/5}\right\}, \end{align*} where $\tilde{Z}=c_s \min \left\{1, \ \|u^0\|_{L^2}^{-1}\right \}\left(\frac{\|u^0\|_s}{\|u^0\|_{L^2}}\right)^{-\frac{5}{2s}}.$ \end{theorem} \begin{theorem} \label{mainthem2} Let $u$ be a strong solution of \eqref{nse} with initial condition $u^0\in \dot{H}^{s}(\Omega)$, for some $\frac{3}{2}\leq s<\frac{5}{2}$. Let $0<\beta \leq \frac{1}{2}$, and define \begin{align*} T^{\ast}=\sup \left\{T>0 \:|\: \sup_{t\in[0,T]}\|e^{\beta t A^{\frac{1}{2}}}u(t)\|_{s} < \infty \right\}. \end{align*} (i) If $$\|{u}^0\|_{s}\geq \frac{c_{s} }{\beta^{\frac{2s-1}{2}}},$$ then $$T^{\ast}>\frac{c_{{s}}}{\|u^0\|^{\frac{4}{2s-1}}_{{s}}}.$$ (ii) If $$\|{u}^0\|_{s}< \frac{c_{s} }{\beta^{\frac{2s-1}{2}}},$$ then $$T^{\ast}>\min\left\{{\cal N}, {\cal N}^{1/2}\right\},$$ where $\displaystyle {\cal N}=\frac{c_{{s}}}{\|u^0\|^{\frac{4}{2s-1}}_{{s}}}.$ \end{theorem} \begin{remark} \emph{The differential inequalities for the evolution of the Gevrey norms leading up to the proofs of Theorem \ref{mainthem1} and Theorem \ref{mainthem2} are non-autonomous and much more complicated than that of Theorem \ref{mainthem213}.
Consequently, finding an optimal $\beta$ leading to an improved estimate of the analyticity radius, as was done in Theorem \ref{analyticthm}, is difficult. Thus, it would be of interest to find an improved estimate of the analyticity radius for $s>\frac{3}{2}$ by optimizing over the choice of $\beta$.} \end{remark} \begin{remark} \emph{Following the technique presented in Theorem \ref{mainthem2}, we present in the corollary below an alternate proof (i.e., different from the ones in \cite{cheskidov2016lower, CM2018, cortissoz2014lower, mccormick2016lower, robinson2012lower}) of the existence time/blow-up rate in the spaces $H^s$ for the entire range $ \frac{1}{2} < s < \frac{5}{2}$, which in particular shows that the case $s=\frac{3}{2}$, which appears as a borderline case in \cite{cheskidov2016lower, CM2018, cortissoz2014lower, mccormick2016lower, robinson2012lower}, is not really a borderline in our approach.} \end{remark} \begin{corollary}\label{corollary2} Let $u$ be a strong solution of \eqref{nse} with initial condition $u^0\in \dot{H}^{s}(\Omega)$, for some $s\in(\frac{1}{2},\frac{5}{2})$. Define \begin{align*} T^{\ddagger}=\sup \big\{T>0 \:|\: \sup_{t\in[0,T]} \|u(t)\|_{s} < \infty \big\}. \end{align*} Then \begin{align*} T^{\ddagger}>\frac{c_s}{\|u^0\|^{\frac{4}{2s-1}}_s}. \end{align*} Moreover, if $T^{\ddagger} < \infty$, then \begin{align} \label{coro2} \|u(t)\|_{{s}}>\frac{c_{s} }{ (T^{\ddagger}-t)^{\frac{2{s}-1}{4}}}. \end{align} \end{corollary} The rest of the paper is organized as follows. Section~\ref{sec:prelim} provides the background and setting for our analysis. In Section~\ref{sec:velocity}, working on the velocity equation, we obtain new commutator estimates of the nonlinear term in Gevrey spaces. Using these estimates, in subsection~\ref{sec:general case}, the existence times and blow-up rates are obtained for $\|u\|_{Gv(s, \beta_0+\beta t)}$ when $s>\frac{1}{2},\ s\neq \frac{3}{2}$. We also obtain an improved estimate of the analyticity radius for $\|u\|_{Gv(s, \beta_0+\beta t)}$ when $ \frac{1}{2}<s<\frac{3}{2}$. In subsection~\ref{sec:s>5/2}, we improve the existence times in the Gevrey classes when $s>\frac{5}{2}$. In Section~\ref{sec:1/2<s<5/2}, working on the vorticity equation, we improve the existence times in the Gevrey classes when $\frac{3}{2}\leq s<\frac{5}{2}$. Section~\ref{sec:appe} is the Appendix, which includes the proofs of several requisite lemmas and propositions.
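Before proceeding, we record how the two assertions of Corollary \ref{corollary2} are related (an elementary observation, used implicitly throughout the proofs below). Evaluating the blow-up estimate \eqref{coro2} at $t=0$ gives \begin{align*} \|u^0\|_{s}>\frac{c_{s}}{(T^{\ddagger})^{\frac{2s-1}{4}}}, \qquad \mbox{equivalently (with a renamed constant $c_s$),} \qquad T^{\ddagger}>\frac{c_s}{\|u^0\|_{s}^{\frac{4}{2s-1}}}, \end{align*} so a blow-up rate with the optimal exponent immediately yields the optimal lower bound on the existence time; conversely, applying the existence-time bound with the initial time shifted to $t$ recovers \eqref{coro2}.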
\section{Preliminaries}\label{sec:prelim} We recall the definition of strong solutions from \cite{temam1995navier}.\\ Let $\displaystyle V=\left \{u\in H^1_{loc}(\Omega) : \text{$u$ is periodic and}\ \nabla \cdot {u}=0\ \text{in}\ \Omega\right \}$. For $u_0\in V$, $u$ is a strong solution of the NSE with initial datum $u_0$ if it solves the variational formulation of (\ref{nse1})-(\ref{nse3}) as in \cite{cf, temam1995navier}, and $$u\in L^2(0, T; D(A))\cap L^{\infty}(0, T; V),$$ for $T>0$. The following lemmas will be used in this paper. \begin{lemma} \cite{swanson2011gevrey} \label{lem-gev1-n} Let $1<p<\infty$. If $s_1,\ s_2 <\frac{n}{p'},\ s_1+s_2 \geq 0$, and $s_1+s_2 > \frac{n}{p'}-\frac{n}{p}$, then \begin{align} \label{gevsplit-n} \|u \ast v\|_{s_1+s_2-\frac{n}{p'},p}\leq C_{s_1, s_2, n, p}\|u\|_{s_1, p} \|v\|_{s_2, p}, \end{align} for all $u\in V_{s_1, p}$ and $v\in V_{s_2, p}$. \end{lemma} In our current setting, we have $n=3,\ p'=2,\ p=2.$ Since we mainly work in the Gevrey spaces, we will need another version of the above lemma. \begin{lemma} \label{lem-gev1} In three space dimensions, for $s_1,\ s_2<\frac{3}{2}$ and $s_1+s_2> 0$, $u=e^{\alpha A^{\frac{1}{2}}}u_1\in \dot{H}^{s_1}$ and $v=e^{\alpha A^{\frac{1}{2}}}v_1\in \dot{H}^{s_2}$, we have \begin{align} \label{gevsplit} \|u_1 \ast v_1\|_{s_1+s_2-\frac{3}{2},\alpha}\leq \|u \ast v\|_{s_1+s_2-\frac{3}{2}}\leq C_{s_1,s_2}\|u_1\|_{s_1,\alpha} \|v_1\|_{s_2,\alpha}. \end{align} \end{lemma} \begin{lemma} \cite{mccormick2016lower} \label{lem4n} If $\displaystyle \dot{X}\leq c X^{1+\gamma}$ and $X(t)\rightarrow \infty$ as $t\rightarrow T$, then \begin{align*} X(t)\geq \left(\frac{1}{\gamma c(T-t)}\right)^{1/\gamma}. \end{align*} \end{lemma} \begin{lemma} \cite{robinson2012lower} \label{lem3} If $0\leq s_1<3/2+r<s_2$ and $u\in \dot{H}^{s_1}\cap\dot{H}^{s_2}$, then $u\in F^r$ and \begin{align} \label{est4} \|u\|_{F^r}\leq c\|u\|_{s_1} ^{(s_2-r-3/2)/(s_2-s_1)}\|u\|_{s_2} ^{(3/2+r-s_1)/(s_2-s_1)}. \end{align} \end{lemma} \begin{lemma} \cite{robinson2012lower} \label{lemrobinson} Suppose that the local existence time in $\dot{H}^s (\mathbb{R}^3)$ depends on the norm in $\dot{H}^s (\mathbb{R}^3)$, with \begin{align*} T_{s} (u_0)\geq \frac{c'_s}{\|u_0\|_{{H}^s (\mathbb{R}^3)}}. \end{align*} Then \begin{align*} T_{s} (u_0)\geq {c_s}{\|u_0\|^{(5-2s)/2s}_{L^2 (\mathbb{R}^3)}}{\|u_0\|^{-5/2s}_{\dot{H}^s (\mathbb{R}^3)}}. \end{align*} In case the solution blows up at time $T<\infty$, then \begin{align*} \|u(T-t)\|_{\dot{H}^s (\mathbb{R}^3)} \geq c_s \|u(T-t)\|^{(5-2s)/5}_{L^2 (\mathbb{R}^3)} t^{-2s/5}. \end{align*} \end{lemma} We also need the following nonlinear generalization of the Gronwall inequality, which applies to a nonlinear vector field that is monotone in the state variable. For the proof, see Theorem~2.4 of \cite{online1}. \begin{lemma} \cite{online1} \label{lemma:nonlinear_Gronwall} Suppose that \(F(u,t)\) is Lipschitz continuous and monotonically increasing in $u$. Suppose that $u(t)$ is continuously differentiable, and $\displaystyle \frac{d}{dt} u(t) \leq F(u(t),t)$ for all \(t\in[0,T]\).
Let $v$ be the solution of $\displaystyle \frac{d}{dt} v(t) = F(v(t),t),$ $v(0) = u(0),$ and define \[ T^* = \sup\left\{t>0 \:|\: \sup_{[0,t]} v(t) < \infty\right\}. \] Then \( u(t) \leq v(t) \) for all \( t\in \left[0,\min\{T,T^*\}\right]\). \end{lemma} In addition to the previous lemmas, we will also make use of several standard inequalities, which we present here for convenience. Young's inequality for products says that for nonnegative real numbers $a$ and $b$ and positive real numbers $p$ and $q$ satisfying $\frac{1}{p}+\frac{1}{q}=1$, we have $\displaystyle ab\leq \frac{a^p}{p}+\frac{b^q}{q}. $ We will frequently use Young's inequality with $p=q=2$, $\displaystyle ab\leq \frac{a^2}{2}+\frac{b^2}{2}, $ as well as Young's inequality with $\epsilon > 0$: $\displaystyle ab\leq \frac{a^2}{2\epsilon}+\frac{\epsilon b^2}{2}. $ H\"older's inequality for sequences generalizes the Cauchy--Schwarz inequality. It states that for $p, q \in [1, \infty)$ satisfying $\frac{1}{p}+\frac{1}{q}\geq 1$, \begin{align*} \sum^{\infty}_{k=1}|x_k y_k|\leq \left (\sum^{\infty}_{k=1}|x_k|^p\right )^{\frac{1}{p}}\left (\sum^{\infty}_{k=1}|y_k|^q\right )^{\frac{1}{q}}. \end{align*} The following energy estimate for the incompressible NSE (due to Leray) is essential, and allows us to bound the $L^2$ norm of any solution of \eqref{nse} by that of its initial data: \begin{align} \label{energy1} \|u(t)\|^2 _{L^2}+2 \int_{0}^{t}\|\nabla u(s)\|^2 _{L^2}ds\leq \|u^0\|^2 _{L^2}. \end{align} \section{Estimates on the velocity equation}\label{sec:velocity} We start from the functional form (\ref{functional_form}) of the NSE, \begin{align*} u_t+Au+B(u,u)=0, \end{align*} for which we can obtain the following estimates on the nonlinear term. The proofs of the following two lemmas, which provide the main estimates of the nonlinear term, are in the Appendix. \begin{lemma} \label{lem1} (i) For all $s>0$ and all $u\in Gv(s+1, \alpha)\cap F^0$, we have \begin{align} \label{term329} \left|\left(B(u,u),A^{s}e^{2\alpha A^{\frac{1}{2}}}u\right)\right |\leq c_{s}\|e^{\alpha A^{\frac{1}{2}}}u\|_{F^0} \|u\|_{s, \alpha} \|u\|_{s+1, \alpha}. \end{align} (ii) For all $s\geq 1$ and all $u\in Gv(s+1, \alpha)\cap F^1$, we have \begin{align} \label{term32} \left|\left(B(u,u),A^{s}e^{2\alpha A^{\frac{1}{2}}}u\right)\right |\leq c_{s}\|e^{\alpha A^{\frac{1}{2}}}u\|_{F^1} \|u\|^2_{s, \alpha} +c_s \alpha \|e^{\alpha A^{\frac{1}{2}}}u\|_{F^1}\|u\|_{s+1, \alpha} \|u\|_{s, \alpha}, \end{align} and consequently, \begin{align} \label{term33} \left|\left(B(u,u),A^{s}e^{2\alpha A^{\frac{1}{2}}}u\right)\right |\leq c_{s}\|e^{\alpha A^{\frac{1}{2}}}u\|_{F^1} \|u\|^2_{s, \alpha} +{c_s \alpha^2} \|e^{\alpha A^{\frac{1}{2}}}u\|^2_{F^1}\|u\|^2_{s, \alpha}+\frac{1}{2}\|u\|^2_{s+1, \alpha}. \end{align} \end{lemma} We also have the following estimate on $\|e^{\alpha A^{\frac{1}{2}}}u\|_{L^2}$. \begin{lemma} \label{lem33} For all $s>0$ and for all $u\in Gv(s, \alpha )\cap L^2$, $$ \|e^{ \alpha A^{\frac{1}{2}}}u\|_{L^2}\leq \sqrt{e}\|u\|_{L^2}+(2 \alpha )^s \|u\|_{s, \alpha }.
$$ \end{lemma} \subsection{Existence time for $\|u\|_{Gv(s, \beta_0+\beta t)}$ when $s>\frac{1}{2},\ s\neq \frac{3}{2}$}\label{sec:general case} In the proofs below, we follow the customary practice of providing \emph{a priori} estimates which can be rigorously justified by first obtaining these estimates for the finite dimensional Galerkin system, the solutions to which exist for all times, and then passing to the limit. \begin{lemma} \label{inequ1a9} When $s>0$, $\beta_0,\ \beta \geq 0$, the solution, $u$, of \eqref{nse} with initial data $u^0\in Gv(s, \beta_0)$ satisfies the following differential inequality \begin{align} \label{velnew1} &\frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta_0+\beta t}-\beta \|A^{\frac{1}{4}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^2_s+ \|u\|^2_{s+1, \beta_0+\beta t}\\ &\leq c_{s}\|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{F^0} \|u\|_{s, \beta_0+\beta t} \|u\|_{s+1, \beta_0+\beta t} \nonumber. \end{align} \end{lemma} \begin{proof} Starting from the functional form of the NSE \begin{align*} u_t+Au+B(u,u)=0, \end{align*} and taking inner product with $A^{s}e^{2(\beta_0+\beta t)A^{\frac{1}{2}}}u$, we have \begin{align} \label{innerpro} \left (\frac{d u}{d t},A^{s}e^{2(\beta_0+\beta t)A^{\frac{1}{2}}}u\right )+\left (Au,A^{s}e^{2(\beta_0+\beta t)A^{\frac{1}{2}}}u\right )+\left (B(u,u),A^{s}e^{2(\beta_0+\beta t)A^{\frac{1}{2}}}u\right )=0. \end{align} We can explore (\ref{innerpro}) term by term. For the first term, \begin{align} \label{term1} \left (\frac{d u}{d t},A^{s}e^{2(\beta_0+\beta t)A^{\frac{1}{2}}}u \right )&=\frac{1}{2} \frac{d}{dt} \|A^{\frac{s}{2}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{L^2}^2-\beta (A^{s+\frac{1}{2}}e^{2(\beta_0+\beta t)A^{\frac{1}{2}}}u,u) \nonumber \\ &=\frac{1}{2} \frac{d}{dt} \|A^{\frac{s}{2}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{L^2}^2 - \beta \|A^{\frac{1}{4}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^2_{s}. \end{align} For the second term of (\ref{innerpro}), we can write it in terms of the Gevrey norm \begin{align} \label{term2} \left(Au,A^{s}e^{2(\beta_0+\beta t)A^{\frac{1}{2}}}u\right)=\left(A^{\frac{s}{2}}A^{\frac{1}{2}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u,A^{\frac{s}{2}}A^{\frac{1}{2}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\right)=\|u\|^2_{s+1, \beta_0+\beta t}. \end{align} For the third term of (\ref{innerpro}), applying (\ref{term329}) with $\alpha=\beta_0+\beta t$, we have \begin{align} \label{term3n} \left|\left(B(u,u),A^{s}e^{2(\beta_0+\beta t)A^{\frac{1}{2}}}u\right)\right | \leq c_{s}\|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{F^0} \|u\|_{s, \beta_0+\beta t} \|u\|_{s+1, \beta_0+\beta t}. \end{align} Substituting (\ref{term1}), (\ref{term2}), and (\ref{term3n}) into (\ref{innerpro}), we have (\ref{velnew1}). \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{mainthem213}}] With $0 \leq \beta \leq \frac{1}{2}$, we have \begin{align*} \beta \|A^{\frac{1}{4}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^2_s\leq\frac{1}{2} \|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^2_{s+1}.
\end{align*} When $s>\frac{1}{2}$, we have $$\|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{F^0}\leq c_s \|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{s+1}.$$ Therefore, (\ref{velnew1}) becomes \begin{align*} \frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta_0+\beta t}+\frac{1}{2} \|u\|^2_{s+1, \beta_0+\beta t} \leq c_{s}\|u\|_{s, \beta_0+\beta t} \|u\|^2_{s+1, \beta_0+\beta t}. \end{align*} If $\|u^0\|_{s, \beta_0}\leq \frac{1}{2c_s}$, then $\frac{d }{d t}\|u\|^2_{s, \beta_0+\beta t}\leq 0$, so $\|u\|_{s, \beta_0+\beta t}$ remains bounded for all time and $\|u\|_{s, \beta_0+\beta t} \le \|u^0\|_{s, \beta_0}$. Now suppose $\|u^0\|_{s, \beta_0}> \frac{1}{2c_s}$. Then we have the following cases. (1) $\frac{1}{2}<s<\frac{3}{2}$: Applying Lemma \ref{lem3} to $e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u$ with $r=0$, $s_1=s$, and $s_2=s+1$, we obtain \begin{align} \label{estoff0} \|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{F^0}\leq c \|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^{s-1/2}_{s} \|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^{3/2-s}_{s+1}. \end{align} Therefore, (\ref{velnew1}) becomes \begin{align*} &\frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta_0+\beta t}+\frac{1}{2}\|u\|^2_{s+1, \beta_0+\beta t}\leq c_{s}\|u\|^{s+1/2}_{s, \beta_0+\beta t} \|u\|^{5/2-s}_{s+1, \beta_0+\beta t}. \end{align*} Applying Young's inequality and simplifying, we have \begin{align*} \frac{d }{d t}\|u\|_{s, \beta_0+\beta t}\leq c_{s} \|u\|^{\frac{2s+3}{2s-1}}_{s, \beta_0+\beta t}. \end{align*} Considering the blow-up time $T^{\ast}$ of $\|u\|_{s, \beta_0+\beta t}$: if $T^{\ast} < \infty$, then, as $t\nearrow T^{\ast}$, applying Lemma \ref{lem4n}, we have \begin{align*} \|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u(t)\|_{{s}}>\frac{c_{s} }{ (T^{\ast}-t)^{\frac{2{s}-1}{4}}}. \end{align*} Evaluating this bound at $t=0$, this is equivalent to \begin{align*} T^{\ast}> \frac{c_s}{\|u^0\|^{\frac{4}{2s-1}}_{s, \beta_0}}. \end{align*} (2) $s>\frac{3}{2}$: We have $$\|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{F^0}\leq c_s \|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{s},$$ and therefore (\ref{velnew1}) becomes \begin{align*} &\frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta_0+\beta t}+\frac{1}{2}\|u\|^2_{s+1, \beta_0+\beta t}\leq c_{s}\|u\|^2_{s, \beta_0+\beta t} \|u\|_{s+1, \beta_0+\beta t}. \end{align*} Applying Young's inequality and simplifying, we have \begin{align*} \frac{d }{d t}\|u\|_{s, \beta_0+\beta t}\leq c_{s} \|u\|^{3}_{s, \beta_0+\beta t}. \end{align*} Considering the blow-up time $T^{\ast}$ of $\|u\|_{s, \beta_0+\beta t}$: if $T^{\ast} < \infty$, then, as $t\nearrow T^{\ast}$, applying Lemma \ref{lem4n}, we have \begin{align*} \|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u(t)\|_{{s}}>\frac{c_{s} }{ (T^{\ast}-t)^{\frac{1}{2}}}. \end{align*} Evaluating this bound at $t=0$, this is equivalent to \begin{align*} T^{\ast}> \frac{c_s}{\|u^0\|^{2}_{s, \beta_0}}.
\end{align*} \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{analyticthm}}] We start from \begin{align*} &\frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta_0+\beta t}-\beta \|A^{\frac{1}{4}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^2_s+ \|u\|^2_{s+1, \beta_0+\beta t}\\ &\leq c_{s}\|e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|_{F^0} \|u\|_{s, \beta_0+\beta t} \|u\|_{s+1, \beta_0+\beta t}. \end{align*} Applying (\ref{estoff0}), we have \begin{align} \label{equofu1} &\frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta_0+\beta t}-\beta \|A^{\frac{1}{4}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^2_s+ \|u\|^2_{s+1, \beta_0+\beta t}\\ &\leq c_{s}\|u\|^{s+1/2}_{s, \beta_0+\beta t} \|u\|^{5/2-s}_{s+1, \beta_0+\beta t} \nonumber. \end{align} Since $\displaystyle \|A^{\frac{1}{4}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^2_s\leq \|u\|_{s, \beta_0+\beta t}\|u\|_{s+1, \beta_0+\beta t}$, applying Young's inequality, we have \begin{align*} \beta \|A^{\frac{1}{4}}e^{(\beta_0+\beta t)A^{\frac{1}{2}}}u\|^2_s\leq \frac{\beta^2}{2}\|u\|^2_{s, \beta_0+\beta t}+\frac{1}{2}\|u\|^2_{s+1, \beta_0+\beta t}. \end{align*} Moreover, \begin{align*} \|u\|^{s+1/2}_{s, \beta_0+\beta t} \|u\|^{5/2-s}_{s+1, \beta_0+\beta t}\leq c_s \|u\|^{\frac{2(2s+1)}{2s-1}}_{s, \beta_0+\beta t}+\frac{1}{2}\|u\|^2_{s+1, \beta_0+\beta t}. \end{align*} Therefore, (\ref{equofu1}) becomes \begin{align*} \frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta_0+\beta t}\leq c_{s}\|u\|^{\frac{2(2s+1)}{2s-1}}_{s, \beta_0+\beta t}+\frac{\beta^2}{2}\|u\|^2_{s, \beta_0+\beta t}, \end{align*} or equivalently, since $\|u\|_{s, \beta_0+\beta t}\neq 0 $ for all $t>0$, we have \begin{align*} \frac{d }{d t}\|u\|_{s, \beta_0+\beta t}\leq c_{s}\|u\|^{1+\frac{4}{2s-1}}_{s, \beta_0+\beta t}+\frac{\beta^2}{2}\|u\|_{s, \beta_0+\beta t}. \end{align*} Multiplying both sides by $e^{- \frac{\beta^2}{2}t}$, we have \begin{align*} \frac{d }{d t}(e^{-\frac{\beta^2}{2}t}\|u\|_{s, \beta_0+\beta t})\leq c_{s}e^{\frac{2\beta^2}{2s-1}t}(e^{-\frac{\beta^2}{2}t}\|u\|_{s, \beta_0+\beta t})^{1+\frac{4}{2s-1}}. \end{align*} Integrating this Bernoulli-type differential inequality in time, we obtain \begin{align} \label{unorm1} \|u\|_{s, \beta_0+\beta t}\leq \frac{e^{\frac{\beta^2}{2}t}\|u(0)\|_{s, \beta_0}}{\left(1-\frac{2c_s}{\beta^2}\|u(0)\|_{s, \beta_0}^{\frac{4}{2s-1}}\left(e^{\frac{2\beta^2}{2s-1}t}-1\right)\right)^{\frac{2s-1}{4}}}. \end{align} This implies that $\|u\|_{s, \beta_0+\beta t}$ is finite on the interval $[0, t^{\ast})$, where \begin{align*} t^{\ast}=\frac{2s-1}{2\beta^2}\log\left(1+\frac{\beta^2}{2c_s \|u(0)\|_{s, \beta_0}^{\frac{4}{2s-1}}}\right). \end{align*} Choosing $t=\frac{t^{\ast}}{2}$, the associated analyticity radius $\lambda$ is $$\lambda=\beta_0+\frac{\beta t^{\ast}}{2}=\beta_0+\frac{2s-1}{4\beta}\log\left(1+\frac{\beta^2}{2c_s \|u(0)\|_{s, \beta_0}^{\frac{4}{2s-1}}}\right).$$ The value of $\beta$ that maximizes $\lambda$ is given by $$\beta=\sqrt{2c_s}\|u(0)\|_{s, \beta_0}^{\frac{2}{2s-1}}\varsigma,$$ where $\varsigma$ is the positive solution of the equation $$-\frac{1}{2\varsigma^2}\log(1+\varsigma^2)+\frac{1}{1+\varsigma^2}=0.$$ The corresponding analyticity radius at $t=\frac{t^{\ast}}{2}$ is $$\lambda=\beta_0+c_s (2s-1)\frac{1}{\|u(0)\|_{s, \beta_0}^{\frac{2}{2s-1}}}.$$ \end{proof} \begin{proof}[\textbf{Proof of Corollary \ref{corollary1}}] Assume that $T^\ddagger < \infty$. Then clearly \begin{equation} \label{gevblow} \limsup_{t \nearrow T^\ddagger} \|u\|_{s,r_0; \theta}=\infty. \end{equation} Suppose, to the contrary, that $\lim_{t \nearrow T^\ddagger}\|u(t)\|_{s'} \neq \infty $. Then $\liminf_{t \nearrow T^\ddagger} \|u\|_{s'} < \infty $, and there exists a sequence $\{t_j\}_{j=1}^\infty $ with $t_j \nearrow T^\ddagger$ and $\|u(t_j)\|_{s'} \le M < \infty $. From Theorem \ref{mainthem213}, it follows that there exists $T_M >0$ such that \begin{equation} \label{gevbd} \sup_{t \in (0,T_M]}\|u(t_j+t)\|_{s',\beta t}= K_M < \infty. \end{equation} Choose $t_{j_0}$ satisfying $t_{j_0} < T^\ddagger < t_{j_0}+T_M$, and let $2 \delta = T^\ddagger - t_{j_0}$. Then, due to \eqref{gevbd}, we have \begin{equation} \label{gevbd1} \sup_{t \in [t_{j_0}+\delta, T^\ddagger)}\|u(t)\|_{s',\alpha_0}\le K_M, \end{equation} where $\alpha_0=\beta \delta$. Observe now that for any $s,s',r_0,\alpha_0>0$ and $0<\theta<1$, every $v \in Gv(s,\alpha_0)$ also belongs to $Gv(s',r_0;\theta)$, with \begin{equation} \label{andom} \|v\|_{s',r_0;\theta} \le C_{s',s,r_0,\alpha_0}\|v\|_{s,\alpha_0}. \end{equation} From inequalities \eqref{gevbd1} and \eqref{andom}, we obtain a contradiction to \eqref{gevblow}. Therefore, $\lim_{t \nearrow T^\ddagger}\|u(t)\|_{s'}=\infty.$ Consequently, due to \cite{benameur2016blow}, the subanalytic norm blows up exponentially. \end{proof} \subsection{Existence time for $\|u\|_{Gv(s, \beta t)}$ when $s>\frac{5}{2}$}\label{sec:s>5/2} We will need the following two lemmas to proceed.
\begin{lemma}\label{lemmaonzeta}
Consider the differential equation
\begin{align} \label{mainequ1}
\frac{d}{dt} \zeta = c_{s} \gamma \zeta^{1+\frac{5}{2s}} +c_{s} (\beta t)^{s-\frac{5}{2}}\zeta^{2}+{c_s (\beta t)^2} \gamma^{2}\zeta^{1+\frac{5}{s}} +{c_s (\beta t)^{2s-3}} \zeta^{3},
\end{align}
with initial condition $\zeta(0)$, for $s>\frac{5}{2}$ and $0<\beta\leq \frac{1}{2}$, and denote its (finite) local existence time by $T_\zeta$. \\
When $\displaystyle {\zeta}({0})\geq c_s \beta^{-\frac{4s}{5}} \min \left\{\gamma^{\frac{2s}{2s-5}}, \gamma^{-\frac{2s}{5}}\right \} $, it holds that
\begin{align} \label{equzeta1}
T_{\zeta}>\frac{c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}}{\zeta(0)^{\frac{5}{2s}}}.
\end{align}
When $\displaystyle {\zeta}({0})< c_s \beta^{-\frac{4s}{5}} \min \left\{\gamma^{\frac{2s}{2s-5}}, \gamma^{-\frac{2s}{5}}\right \} $, it holds that
\begin{align} \label{equzeta2n}
T_{\zeta}>\min\left\{Z, Z^{2/5}\right\},
\end{align}
where $\displaystyle Z=\frac{c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}}{\zeta(0)^{\frac{5}{2s}}}.$
\end{lemma}
The proof of the above lemma is provided in the appendix. In the next lemma, we establish the crucial differential inequality associated to the evolution of the Gevrey norm.
\begin{lemma} \label{inequ1a}
When $s>\frac{5}{2}$ and $0\leq\beta \leq \frac{1}{2}$, the solution, $u$, of \eqref{nse} with initial data $u^0\in \dot{H}^s$ satisfies the following differential inequality
\begin{align} \label{newest4}
\frac{d }{d t}\|u\|_{s, \beta t}&\leq c_{s}\|u\|^{1+\frac{5}{2s}}_{s,\beta t} \|u\|^{1-\frac{5}{2s}}_{L^2}+c_{s} (\beta t)^{s-\frac{5}{2}}\|u\|^{2}_{s,\beta t} \nonumber \\
&+{c_s (\beta t)^2} \|u\|^{1+\frac{5}{s}}_{s, \beta t} \|u\|^{2-\frac{5}{s}}_{L^2}+{c_s (\beta t)^{2s-3}} \|u\|^{3}_{s, \beta t}.
\end{align}
\end{lemma}
\begin{proof}
Taking the inner product of the NSE with $A^{s}e^{2\beta tA^{\frac{1}{2}}}u$ and applying (\ref{term33}) with $\alpha=\beta t$, we get
\begin{align} \label{newest}
&\frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta t}-\beta \|A^{\frac{1}{4}}e^{\beta tA^{\frac{1}{2}}}u\|^2_s+ \|u\|^2_{s+1, \beta t}\\ \nonumber
&\leq c_{s}\|e^{\beta tA^{\frac{1}{2}}}u\|_{F^1} \|u\|^2_{s, \beta t}+{c_s \beta^2}t^2 \|e^{\beta tA^{\frac{1}{2}}}u\|^2_{F^1}\|u\|^2_{s, \beta t}+\frac{1}{2}\|u\|^2_{s+1, \beta t}.
\end{align}
When $\beta \leq \frac{1}{2}$, applying the Poincar\'{e} inequality, we have $\displaystyle \beta \|A^{\frac{1}{4}}e^{\beta tA^{\frac{1}{2}}}u\|^2_s\leq\frac{1}{2} \|e^{\beta tA^{\frac{1}{2}}}u\|^2_{s+1}. $ Therefore, (\ref{newest}) yields
\begin{align} \label{newest2}
\frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta t}\leq c_{s}\|e^{\beta tA^{\frac{1}{2}}}u\|_{F^1} \|u\|^2_{s, \beta t}+ {c_s \beta^2}t^2 \|e^{\beta tA^{\frac{1}{2}}}u\|^2_{F^1}\|u\|^2_{s, \beta t}.
\end{align}
Applying Lemma \ref{lem3} with $r=1$, $s_1=0$, and $s_2=s$ in (\ref{est4}), for $s>\frac{5}{2}$ and $u\in L^2 \cap \dot{H}^s$, we obtain
\begin{align*}
\|u\|_{F^1}\leq c \|u\|^{\frac{s-\frac{5}{2}}{s}}_{L^2} \|u\|^{\frac{5}{2s}}_{s}.
\end{align*}
Replacing $u$ by $e^{\beta tA^{\frac{1}{2}}}u$, it follows that
\begin{align} \label{f1}
\|e^{\beta tA^{\frac{1}{2}}}u\|_{F^1}\leq c\|e^{\beta tA^{\frac{1}{2}}}u\|^{1-\frac{5}{2s}}_{L^2} \|e^{\beta tA^{\frac{1}{2}}}u\|^{\frac{5}{2s}}_{s}.
\end{align}
Squaring both sides of (\ref{f1}), we have
\begin{align} \label{f1sq}
\|e^{\beta tA^{\frac{1}{2}}}u\|^2_{F^1}\leq c \|e^{\beta tA^{\frac{1}{2}}}u\|^{2-\frac{5}{s}}_{L^2} \|e^{\beta tA^{\frac{1}{2}}}u\|^{\frac{5}{s}}_{s}.
\end{align}
Substituting (\ref{f1}) and (\ref{f1sq}) into (\ref{newest2}), we get
\begin{align} \label{newest3}
\frac{1}{2}\frac{d }{d t}\|u\|^2_{s, \beta t}\leq c_{s} \|e^{\beta tA^{\frac{1}{2}}}u\|^{1-\frac{5}{2s}}_{L^2} \|u\|^{2+\frac{5}{2s}}_{s, \beta t}+{c_s \beta^2}t^2 \|e^{\beta tA^{\frac{1}{2}}}u\|^{2-\frac{5}{s}}_{L^2}\|u\|^{2+\frac{5}{s}}_{s, \beta t}.
\end{align}
When $s>\frac{5}{2}$, so that $1-\frac{5}{2s}>0$, we have $(a+b)^{1-\frac{5}{2s}}\leq c_s (a^{1-\frac{5}{2s}}+b^{1-\frac{5}{2s}})$ for $a, b>0$. Therefore, applying Lemma \ref{lem33}, we have
\begin{align} \label{el2}
\|e^{\beta tA^{\frac{1}{2}}}u\|^{1-\frac{5}{2s}}_{L^2}\leq c_s \|u\|^{1-\frac{5}{2s}}_{L^2}+c_s (\beta t)^{s-\frac{5}{2}}\|e^{\beta tA^{\frac{1}{2}}}u\|^{1-\frac{5}{2s}}_{s}.
\end{align}
Similarly, since $2-\frac{5}{s}>0$ (again because $s>\frac{5}{2}$), we obtain
\begin{align} \label{el2sq}
\|e^{\beta tA^{\frac{1}{2}}}u\|^{2-\frac{5}{s}}_{L^2}\leq c_s \|u\|^{2-\frac{5}{s}}_{L^2}+c_s (\beta t)^{2s-5}\|e^{\beta tA^{\frac{1}{2}}}u\|^{2-\frac{5}{s}}_{s}.
\end{align}
Substituting (\ref{el2}) and (\ref{el2sq}) into (\ref{newest3}), and after simplification, we have
\begin{align*}
\frac{1}{2}\frac{d }{d t}\|u\|^{2}_{s, \beta t}&\leq c_{s} \|u\|^{2+\frac{5}{2s}}_{s, \beta t} \|u\|^{1-\frac{5}{2s}}_{L^2}+c_{s} (\beta t)^{s-\frac{5}{2}}\|u\|^{3}_{s, \beta t} \nonumber \\
&+{c_s (\beta t)^2} \|u\|^{2+\frac{5}{s}}_{s, \beta t} \|u\|^{2-\frac{5}{s}}_{L^2}+{c_s (\beta t)^{2s-3}} \|u\|^{4}_{s, \beta t},
\end{align*}
which leads to (\ref{newest4}).
\end{proof}

\begin{proof}[\textbf{Proof of Theorem \ref{mainthem1}}]
From Lemma \ref{inequ1a}, we have
\begin{align*}
\frac{d }{d t}\|u\|_{s, \beta t}&\leq c_{s}\|u\|^{1+\frac{5}{2s}}_{s,\beta t} \|u\|^{1-\frac{5}{2s}}_{L^2}+c_{s} (\beta t)^{s-\frac{5}{2}}\|u\|^{2}_{s,\beta t} \nonumber \\
&+{c_s (\beta t)^2} \|u\|^{1+\frac{5}{s}}_{s, \beta t} \|u\|^{2-\frac{5}{s}}_{L^2}+{c_s (\beta t)^{2s-3}} \|u\|^{3}_{s, \beta t}.
\end{align*}
Let $\gamma = \|u^0\|_{L^2}^{1-\frac5{2s}}$. Using the energy estimate (\ref{energy1}), i.e., $\|u(t)\|_{L^2} \leq \|u^0\|_{L^2}$, we have
\begin{align*}
\frac{d }{d t}\|u\|_{s, \beta t} & \leq c_{s} \|u\|^{1+\frac{5}{2s}}_{s, \beta t} \gamma + c_{s} (\beta t)^{s-\frac{5}{2}}\|u\|^{2}_{s, \beta t} \nonumber \\
&+ {c_s (\beta t)^2} \|u\|^{1+\frac{5}{s}}_{s, \beta t} \gamma^{2} + {c_s (\beta t)^{2s-3}} \|u\|^{3}_{s, \beta t}.
\end{align*}
We will complete the proof using Lemma~\ref{lemma:nonlinear_Gronwall}.
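(In the form in which it is used throughout this section, Lemma~\ref{lemma:nonlinear_Gronwall} is, in essence, the standard comparison principle for scalar ODEs; we recall it informally here for the reader's convenience, the precise hypotheses being those stated in Lemma~\ref{lemma:nonlinear_Gronwall}: if $F(t,y)$ is continuous and nondecreasing in $y$, and
\begin{align*}
\frac{d}{dt}y\leq F(t,y),\qquad \frac{d}{dt}\zeta=F(t,\zeta),\qquad y(0)\leq \zeta(0),
\end{align*}
then $y(t)\leq \zeta(t)$ on the common interval of existence of $y$ and $\zeta$.)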
Let \(\zeta(t)\) solve the differential equation
\begin{align*}
\frac{d}{dt} \zeta = c_{s} \gamma \zeta^{1+\frac{5}{2s}} +c_{s} (\beta t)^{s-\frac{5}{2}}\zeta^{2}+{c_s (\beta t)^2} \gamma^{2}\zeta^{1+\frac{5}{s}} +{c_s (\beta t)^{2s-3}} \zeta^{3},
\end{align*}
with \(\zeta(0) = \zeta_0 = \|u^0\|_s\). Define the local existence time of $\|u\|_{s, \beta t}$ to be
\[ T_u = \sup\left\{ t>0 \:|\: \sup_{r\in[0,t]}\|u(r)\|_{s, \beta r} < \infty \right\}, \]
and the local existence time of $\zeta$ to be
\[ T_\zeta = \sup\left\{ t>0 \:|\: \sup_{r\in[0,t]}|\zeta(r)| < \infty \right\}. \]
Then, by Lemma~\ref{lemma:nonlinear_Gronwall}, $\zeta(t)\geq\|u(t)\|_{s, \beta t}$ for all $t\in\left[0,\min\{T_\zeta,T_{u}\}\right]$, and hence \(T_{u}\geq T_\zeta\). Moreover, we may assume $T_{u}<\infty$, so that $T_{\zeta}\leq T_u<\infty$ as well (this is also immediate from the differential equation for $\zeta$). To obtain a lower bound on $T_u$, we will now analyze $T_\zeta$.
From Lemma \ref{lemmaonzeta}, when $0<\beta\leq \frac{1}{2}$, we have the following.\\
Case (i): In case
$$\|u^0\|_s={\zeta}({0})\geq c_s \beta^{-\frac{4s}{5}} \min \left\{\gamma^{\frac{2s}{2s-5}}, \gamma^{-\frac{2s}{5}}\right \}= c_s \beta^{-\frac{4s}{5}} \min \left\{\|u^0\|_{L^2}, \|u^0\|_{L^2}^{-\frac{2s-5}{5}}\right \},$$
i.e., if
$$\frac{\|u^0\|_s}{\|u^0\|_{L^2}}\geq c_s \beta^{-\frac{4s}{5}} \min \left\{1, \|u^0\|_{L^2}^{-\frac{2s}{5}}\right \},$$
it holds that
\begin{align*}
T_{u}\geq T_{\zeta}>\frac{c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}}{\zeta(0)^{\frac{5}{2s}}}=\frac{c_s \min \left\{\|u^0\|_{L^2}^{\frac{5}{2s}}, \ \|u^0\|_{L^2}^{\frac5{2s}-1}\right \}}{\|u^0\|_s^{\frac{5}{2s}}}=c_s \min \left\{1, \ \|u^0\|_{L^2}^{-1}\right \} \left(\frac{\|u^0\|_s}{\|u^0\|_{L^2}}\right)^{-\frac{5}{2s}}.
\end{align*}
Denoting the maximal time of existence of $\|e^{\beta tA^{\frac{1}{2}}}u\|_{s}$ by $T^{\ast}$, we have
\begin{align*}
T^{\ast}>c_s \min \left\{1, \ \|u^0\|_{L^2}^{-1}\right \}\left(\frac{\|u^0\|_s}{\|u^0\|_{L^2}}\right)^{-\frac{5}{2s}}.
\end{align*}
Case (ii): In case
$$\|u^0\|_s={\zeta}({0})< c_s \beta^{-\frac{4s}{5}} \min \left\{\gamma^{\frac{2s}{2s-5}}, \gamma^{-\frac{2s}{5}}\right \}= c_s \beta^{-\frac{4s}{5}} \min \left\{\|u^0\|_{L^2}, \|u^0\|_{L^2}^{-\frac{2s-5}{5}}\right \},$$
i.e., if
$$\frac{\|u^0\|_s}{\|u^0\|_{L^2}}< c_s \beta^{-\frac{4s}{5}} \min \left\{1, \|u^0\|_{L^2}^{-\frac{2s}{5}}\right \},$$
it holds that
\begin{align} \label{equzeta2}
T^{\ast}>\min\left\{\tilde{Z}, \tilde{Z}^{2/5}\right\},
\end{align}
where $\tilde{Z}=c_s \min \left\{1, \ \|u^0\|_{L^2}^{-1}\right \}\left(\frac{\|u^0\|_s}{\|u^0\|_{L^2}}\right)^{-\frac{5}{2s}}.$
\end{proof}

\comments{
\begin{proof}[\textbf{Proof of the Corollary \ref{corollary1}}]
To prove this corollary, we need to show that if
\begin{align} \label{blowup2}
\lim_{t\to T^{\ddagger}} \|e^{r_0 A^{\frac{\theta}{2}}}u(t)\|_{s}=\infty,
\end{align}
for $s>\frac{5}{2}, r_0>0$, and $0<\theta<1$, then
\begin{align*}
\lim_{t\to T^{\ddagger}} \|u(t)\|_{s}=\infty.
\end{align*}
Once we prove it, (\ref{coro1}) will follow from Lemma \ref{lemrobinson}.\\
We argue by contradiction: assume that
\begin{align*}
\lim_{t\to T^{\ddagger}} \|e^{r_0 A^{\frac{\theta}{2}}}u(t)\|_{s}=\infty,
\end{align*}
while
\begin{align*}
\lim_{t\to T^{\ddagger}} \|u(t)\|_{s}=M\neq \infty.
\end{align*}
Then, for every $\epsilon>0$, there exists $\delta>0$ such that
\begin{align} \label{u_delta1}
\|u(T^{\ddagger}-\delta)\|_{s}\leq M-\epsilon<2M.
\end{align}
Actually, since we consider the blow-up solution, (\ref{u_delta1}) holds for all $0<\delta<T^{\ddagger}$.\\
From the proof of Theorem \ref{mainthem1}, we can take $\tilde{t}=t-(T^{\ddagger}-\delta)$, and assume there exists a minimal $0<T^{\ast}<\infty$ such that
\begin{align*}
\lim_{\tilde{t}\to T^{\ast}} \|e^{\beta \tilde{t}A^{\frac{1}{2}}}u(\tilde{t})\|_{s}=\infty,
\end{align*}
and
\begin{align*}
T^{\ast}>\min\left\{\frac{c_s \|u^0\|_{L^2}^{\frac5{2s}-1}}{\|u^0\|^{\frac{5}{2s}}_{s}}, \frac{c_s \|u^0\|_{L^2}^{\frac{1}{s}-\frac{2}{5}}}{\|u^0\|^{\frac{1}{s}}_{s}}\right\}=\min \left\{\frac{c_s\|u(T^{\ddagger}-\delta)\|_{L^2}^{\frac5{2s}-1}}{\|u(T^{\ddagger}-\delta)\|^{\frac{5}{2s}}_{s}},\ \frac{c_s\|u(T^{\ddagger}-\delta)\|_{L^2}^{\frac{1}{s}-\frac{2}{5}}}{\|u(T^{\ddagger}-\delta)\|_s^{\frac{1}{s}}}\right\}.
\end{align*}
Then (\ref{u_delta1}) yields that
\begin{align*}
T^{\ast}>\min \left\{\frac{c_s\|u(T^{\ddagger}-\delta)\|_{L^2}^{\frac5{2s}-1}}{(2M)^{\frac{5}{2s}}},\ \frac{c_s\|u(T^{\ddagger}-\delta)\|_{L^2}^{\frac{1}{s}-\frac{2}{5}}}{(2M)^{\frac{1}{s}}}\right\}.
\end{align*}
Taking $\displaystyle \delta<\min \left \{T^{\ddagger}, \frac{c_s\|u(T^{\ddagger}-\delta)\|_{L^2}^{\frac5{2s}-1}}{(2M)^{\frac{5}{2s}}},\ \frac{c_s\|u(T^{\ddagger}-\delta)\|_{L^2}^{\frac{1}{s}-\frac{2}{5}}}{(2M)^{\frac{1}{s}}} \right\}$, we have $T^{\ast}>\delta$ and
\begin{align*}
\lim_{t\to T^{\ddagger}} \|e^{\beta tA^{\frac{1}{2}}}u(t)\|_{s}=\lim_{\tilde{t}\to \delta} \|e^{\beta \tilde{t}A^{\frac{1}{2}}}u(\tilde{t})\|_{s}<\lim_{\tilde{t}\to T^{\ast}} \|e^{\beta \tilde{t}A^{\frac{1}{2}}}u(\tilde{t})\|_{s}=\infty.
\end{align*}
Since
\begin{align*}
\|e^{\beta t A^{\frac{1}{2}}}u(t)\|_{s}>c \|e^{r_0 A^{\frac{\theta}{2}}}u(t)\|_{s},
\end{align*}
this implies
\begin{align*}
\lim_{t\to T^{\ddagger}} \|e^{r_0 A^{\frac{\theta}{2}}}u(t)\|_{s}<\infty.
\end{align*}
This contradicts the assumption (\ref{blowup2}). Therefore,
\begin{align*}
\lim_{t\to T^{\ddagger}} \|u(t)\|_{s}=\infty.
\end{align*}
\end{proof}

{\color{red}
\begin{proof}[\textbf{Proof of Corollary \ref{corollary1}}]
To prove this corollary, we need to show that if
\begin{align} \label{nblowup2}
\limsup_{t\to T^{\ddagger}} \|e^{r_0 A^{\frac{\theta}{2}}}u(t)\|_{s}=\infty,
\end{align}
for $s>\frac{5}{2}, r_0>0$, and $0<\theta<1$, then
\begin{align*}
\limsup_{t\to T^{\ddagger}} \|u(t)\|_{s}=\infty.
\end{align*}
Once we prove it, (\ref{coro1}) will follow from Lemma \ref{lemrobinson}.\\
We will prove the result by contradiction.
Assume that
\begin{align*}
\limsup_{t\to T^{\ddagger}} \|e^{r_0 A^{\frac{\theta}{2}}}u(t)\|_{s}=\infty,
\end{align*}
and
\begin{align*}
\limsup_{t\to T^{\ddagger}} \|u(t)\|_{s}=M < \infty.
\end{align*}
Then for all $\epsilon > 0$, there is an $n_\epsilon \in \mathbb{N}$ such that for all $\delta \in(0,\frac{1}{n_\epsilon})$,
\[ M \leq \sup_{t\in[T^\ddagger - \delta, T^\ddagger)} \|u(t)\|_{s} \leq M + \epsilon, \]
in particular,
\[ \|u(T^\ddagger - \delta)\|_{s} \leq M + \epsilon. \]
Set $\epsilon = 1$ and let \( 0 < \delta < \min\left\{\frac{1}{n_\epsilon}, c_s(M+1)^{-\frac{5}{2s}} \|u^0\|_{L^2}^{\frac5{2s} - 1}\right\}. \) Applying Theorem~\ref{mainthem1} with initial data $u(T^{\ddagger}-\delta)$, we have
\begin{align}\label{cor2:impossible-bound}
\sup_{t\in[0,T]} \|e^{\beta t A^{\frac{1}{2}}}u(t + T^{\ddagger} - \delta)\|_{s} < \infty,
\end{align}
with
\[ T:=c_s \|u(T^{\ddagger}-\delta)\|_{L^2}^{\frac5{2s}-1} / \|u(T^{\ddagger}-\delta)\|^{\frac{5}{2s}}_{s} \geq \frac{c_s}{(M+1)^{\frac{5}{2s}} \|u^0\|_{L^2}^{1-\frac5{2s}}} > \delta.\]
Because $T > \delta$, we can conclude from \eqref{cor2:impossible-bound} that
\begin{align*}
\|e^{\beta \delta A^{\frac{1}{2}}}u(T^{\ddagger})\|_{s} \leq \sup_{t\in[0,T]} \|e^{\beta t A^{\frac{1}{2}}}u(t + T^{\ddagger} - \delta)\|_{s} < \infty,
\end{align*}
and using \eqref{ineq:subanalytic-analytic}, we arrive at a contradiction:
\[ \|e^{\beta \delta A^{\frac{1}{2}}}u(T^{\ddagger})\|_{s} = \lim_{t\to\delta} \|e^{\beta t A^{\frac{1}{2}}}u(t + T^{\ddagger} - \delta)\|_{s} > \lim_{t\to\delta} c_{\beta,r_0,\theta}\|e^{r_0 A^{\frac{\theta}{2}}}u(t + T^{\ddagger} - \delta)\|_{s} \to\infty. \]
\end{proof}
}
}

\section{Existence time for $\|u\|_{Gv(s, \beta t)}$ when $\frac{3}{2}\leq s<\frac{5}{2}$}\label{sec:1/2<s<5/2}
It will be more convenient here to study the evolution in Gevrey classes using the vorticity equation instead of the velocity equation. As we will see below, this will enable us to avoid the borderline case of the Sobolev embedding encountered in \cite{cheskidov2016lower, CM2018, cortissoz2014lower, mccormick2016lower, robinson2012lower}. The equation for the evolution of the vorticity ${\omega}=\nabla \times u$ is given by
\begin{align} \label{nseW}
&{{\omega}}_t+ A{\omega}+B(u,{\omega})-B({\omega},u)=0, \\
&{{\omega}}^0 (x)=\mathbf{{\omega}}(x, 0)=\nabla \times u^0 (x).
\end{align}
Here, the operators $A$ and $B$ are defined in (\ref{defA}) and (\ref{defB}), respectively. Recall
$$\|{\omega}\|_{\tilde{s}, \alpha}=\|e^{\alpha A^{\frac{1}{2}}}{\omega}\|_{\tilde{s}}.$$
Since $\|{\omega}\|_{\tilde{s}, \alpha}=\|u\|_{\tilde{s}+1, \alpha}$, we take $s=\tilde{s}+1.$ We have the following estimates, whose proofs can be found in the Appendix.
\begin{lemma} \label{inequ3w}
For $-\frac{1}{2}<\tilde{s}<\frac{3}{2}$ and ${\omega}\in Gv(\tilde{s}+1, \alpha)$, we have
\begin{align} \label{term3w}
\left| \left (B({\omega}, u), A^{\tilde{s}}e^{2\alpha A^{\frac{1}{2}}}{\omega} \right )\right|\leq c_{\tilde{s}} \|{\omega}\|_{\tilde{s}, \alpha}^{\tilde{s}+\frac{3}{2}} \|{\omega}\|_{\tilde{s}+1, \alpha}^{\frac{3}{2}-\tilde{s}}.
\end{align}
\end{lemma}

\begin{lemma} \label{inequ4w}
For $-\frac{1}{2}< \tilde{s}<\frac{3}{2}$ and ${\omega}\in Gv(\tilde{s}+1, \alpha )$, we have
\begin{align} \label{term4w}
\left| \left (B(u, {\omega}), A^{\tilde{s}}e^{2\alpha A^{\frac{1}{2}}}{\omega} \right ) \right|\leq c_s \|{\omega}\|^{\tilde{s}+\frac{3}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3}{2}-\tilde{s}}_{\tilde{s}+1, \alpha }+c_s \alpha \|{\omega}\|^{\tilde{s}+\frac{1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5}{2}-\tilde{s}}_{\tilde{s}+1, \alpha }.
\end{align}
\end{lemma}
To proceed, we will also need the following lemma concerning the existence time for a non-autonomous differential equation; its proof is provided in the appendix.
\begin{lemma}\label{lemmaonX}
Let $X(t)$ satisfy
\begin{align} \label{mainequ1X}
\frac{d}{dt} X(t) =c_{\tilde{s}}X^{1+ \frac{4}{1+2\tilde{s}}}+c_{\tilde{s}}(\beta t)^{\frac{4}{2\tilde{s}-1}}X^{1+ \frac{4}{2\tilde{s}-1}},
\end{align}
with initial condition $X(0)$, for $\frac{1}{2}<\tilde{s}<\frac{3}{2}$ and $0<\beta\leq \frac{1}{2}$, and denote its (finite) local existence time by $T_X$. \\
When $\displaystyle X({0})\geq \frac{c_{\tilde{s}} }{(\beta)^{\frac{2\tilde{s}+1}{2}}} $, we have
\begin{align} \label{equzeta1X3}
T_{X}>\frac{c_{\tilde{s}}}{X(0)^{\frac{4}{1+2\tilde{s}}}}.
\end{align}
When $\displaystyle X({0})<\frac{c_{\tilde{s}} }{(\beta)^{\frac{2\tilde{s}+1}{2}}} $, we have
\begin{align} \label{equzeta2X}
T_{X}>\min\left\{Q, Q^{1/2}\right\},
\end{align}
where $\displaystyle Q=\frac{c_{\tilde{s}}}{X(0)^{\frac{4}{1+2\tilde{s}}}}$.
\end{lemma}
We can now study the existence time of the solutions of the NSE in the Gevrey spaces when $\frac{3}{2}\leq s<\frac{5}{2}$. First, we have the following lemma.
\begin{lemma} \label{nvorticity}
When $-\frac{1}{2}<\tilde{s}<\frac{3}{2}$ and $\beta\geq 0$, we have the following differential inequality
\begin{align} \label{innerpro2}
\frac{1}{2} \frac{d}{dt} \|{\omega}\|_{\tilde{s}, \beta t}^2 - \beta \|{\omega}\|^2_{\tilde{s}+\frac{1}{2}, \beta t}+\|{\omega}\|_{\tilde{s}+1, \beta t}^2 \leq c_{\tilde{s}} \|{\omega}\|^{{\tilde{s}}+\frac{3}{2}}_{\tilde{s}, \beta t} \|{\omega}\|^{\frac{3}{2}-\tilde{s}}_{\tilde{s}+1, \beta t}+c_{\tilde{s}} \beta t \|{\omega}\|^{{\tilde{s}}+\frac{1}{2}}_{\tilde{s}, \beta t} \|{\omega}\|^{\frac{5}{2}-\tilde{s}}_{\tilde{s}+1, \beta t}.
\end{align}
\end{lemma}
\begin{proof}
Taking the inner product of \eqref{nseW} with $A^{{\tilde{s}}}e^{2\beta tA^{\frac{1}{2}}}{\omega}$, we have
\begin{align} \label{innerpro1}
\left ({\omega}_t,A^{{\tilde{s}}}e^{2\beta tA^{\frac{1}{2}}}{\omega} \right )+\left ( A {\omega},A^{{\tilde{s}}}e^{2\beta tA^{\frac{1}{2}}}{\omega} \right )+\left (B(u, {\omega}), A^{{\tilde{s}}}e^{2\beta tA^{\frac{1}{2}}}{\omega} \right )-\left (B({\omega}, u), A^{{\tilde{s}}}e^{2\beta tA^{\frac{1}{2}}}{\omega} \right )=0.
\end{align}
Similar to the calculation in Section 4, we have
\begin{align} \label{term1w}
\left ({\omega}_t,A^{{\tilde{s}}}e^{2\beta tA^{\frac{1}{2}}}{\omega} \right )=\frac{1}{2} \frac{d}{dt} \|{\omega}\|_{\tilde{s}, \beta t}^2 - \beta \|{\omega}\|^2_{\tilde{s}+\frac{1}{2}, \beta t},
\end{align}
and
\begin{align} \label{term2w}
\left ( A {\omega},A^{{\tilde{s}}}e^{2\beta tA^{\frac{1}{2}}}{\omega} \right )= \|{\omega}\|_{\tilde{s}+1, \beta t}^2.
\end{align}
Applying Lemma \ref{inequ4w} with $\alpha=\beta t$ and combining (\ref{term3w}), (\ref{term4w}), (\ref{term1w}), and (\ref{term2w}), the estimate of (\ref{innerpro1}) becomes
\begin{align*}
\frac{1}{2} \frac{d}{dt} \|{\omega}\|_{\tilde{s}, \beta t}^2 - \beta \|{\omega}\|^2_{\tilde{s}+\frac{1}{2}, \beta t}+\|{\omega}\|_{\tilde{s}+1, \beta t}^2 \leq c_{\tilde{s}} \|{\omega}\|^{{\tilde{s}}+\frac{3}{2}}_{\tilde{s}, \beta t} \|{\omega}\|^{\frac{3}{2}-\tilde{s}}_{\tilde{s}+1, \beta t}+c_{\tilde{s}} \beta t \|{\omega}\|^{{\tilde{s}}+\frac{1}{2}}_{\tilde{s}, \beta t} \|{\omega}\|^{\frac{5}{2}-\tilde{s}}_{\tilde{s}+1, \beta t}.
\end{align*}
\end{proof}

\begin{proof}[\textbf{Proof of Theorem \ref{mainthem2}}]
For $\frac{3}{2}\leq s<\frac{5}{2}$, i.e., $\frac{1}{2}\leq \tilde{s}<\frac{3}{2}$, we consider $\frac{1}{2}<\tilde{s}<\frac{3}{2}$ and $\tilde{s}=\frac{1}{2}$ separately.\\
Case (1), $\frac{1}{2}<\tilde{s}<\frac{3}{2}$: Using Young's inequality, we have
\begin{align*}
c_{\tilde{s}} \|{\omega}\|_{\tilde{s}, \beta t}^{\frac{3+2\tilde{s}}{2}} \|{\omega}\|_{\tilde{s}+1, \beta t}^{\frac{3-2\tilde{s}}{2}}\leq c_{\tilde{s}}\|{\omega}\|_{\tilde{s}, \beta t}^{2 \cdot \frac{3+2\tilde{s}}{1+2\tilde{s}}}+\frac{1}{4} \|{\omega}\|_{\tilde{s}+1, \beta t}^2,
\end{align*}
and
\begin{align*}
c_{\tilde{s}} \beta t \|{\omega}\|_{\tilde{s}, \beta t}^{\frac{1+2\tilde{s}}{2}} \|{\omega}\|_{\tilde{s}+1, \beta t}^{\frac{5-2\tilde{s}}{2}}\leq c_{\tilde{s}}(\beta t)^{\frac{4}{2\tilde{s}-1}}\|{\omega}\|_{\tilde{s}, \beta t}^{2 \cdot \frac{1+2\tilde{s}}{2\tilde{s}-1}}+\frac{1}{4} \|{\omega}\|_{\tilde{s}+1, \beta t}^2.
\end{align*}
Taking $\beta \leq \frac{1}{2}$ and applying the Poincar\'{e} inequality, we have
\begin{align*}
\beta \|{\omega}\|^2_{\tilde{s}+\frac{1}{2}, \beta t}\leq \frac{1}{2} \|{\omega}\|_{\tilde{s}+1, \beta t}^2.
\end{align*}
Therefore, from (\ref{innerpro2}) we deduce
\begin{align*}
\frac{d}{dt} \|{\omega}\|_{\tilde{s}, \beta t}^2 \leq c_{\tilde{s}}\|{\omega}\|_{\tilde{s}, \beta t}^{2 \cdot \frac{3+2\tilde{s}}{1+2\tilde{s}}}+c_{\tilde{s}}(\beta t)^{\frac{4}{2\tilde{s}-1}}\|{\omega}\|_{\tilde{s}, \beta t}^{2 \cdot \frac{1+2\tilde{s}}{2\tilde{s}-1}}.
\end{align*}
After simplification, we have
\begin{align} \label{innerpro3}
\frac{d}{dt} \|{\omega}\|_{\tilde{s}, \beta t} \leq c_{\tilde{s}}\|{\omega}\|_{\tilde{s}, \beta t}^{1+ \frac{4}{1+2\tilde{s}}}+c_{\tilde{s}}(\beta t)^{\frac{4}{2\tilde{s}-1}}\|{\omega}\|_{\tilde{s}, \beta t}^{1+ \frac{4}{2\tilde{s}-1}}.
\end{align}
Let $X(t)$ be the solution of the differential equation
\begin{align} \label{innerpro4}
\frac{d}{dt} X(t) =c_{\tilde{s}}X^{1+ \frac{4}{1+2\tilde{s}}}+c_{\tilde{s}}(\beta t)^{\frac{4}{2\tilde{s}-1}}X^{1+ \frac{4}{2\tilde{s}-1}},
\end{align}
with $X_0 = X(0) = \|{\omega}^0\|_{\tilde{s}}$. Then, using Lemma~\ref{lemma:nonlinear_Gronwall}, we have $X(t)\geq\|\omega(t)\|_{\tilde{s}, \beta t}$ for all $t\in\left[0,\min\{T_X,T_{\omega}\}\right]$. Here, $T_X$ and $T_{\omega}$ are the local existence times of $X$ and $\|{\omega}\|_{\tilde{s}, \beta t}$, respectively. In particular, \(T_{\omega}\geq T_X\); moreover, we may assume $T_{\omega}<\infty$, and $T_{X}\leq T_\omega<\infty$ as well. From Lemma \ref{lemmaonX}, when $0<\beta\leq \frac{1}{2}$, we get the following.\\
Case (1a): When
$$ \|{u}^0\|_{s}=\|{\omega}^0\|_{\tilde{s}}=X({0})\geq \frac{c_{\tilde{s}} }{(\beta)^{\frac{2\tilde{s}+1}{2}}}=\frac{c_{s} }{(\beta)^{\frac{2s-1}{2}}}, $$
it holds that
\begin{align} \label{equzeta1X1}
T_{\omega}\geq T_{X}>\frac{c_{\tilde{s}}}{X(0)^{\frac{4}{1+2\tilde{s}}}}=\frac{c_{\tilde{s}}}{\|{\omega}^0\|_{\tilde{s}}^{\frac{4}{1+2\tilde{s}}}}.
\end{align}
Considering the existence time $T^{\ast}$ of $\|{\omega}\|_{\tilde{s}, \beta t}$ (i.e., of $\|{u}\|_{s, \beta t}$), we have
\begin{align} \label{blow4-n}
T^{\ast}\geq T_{X}>\frac{c_{\tilde{s}}}{\|{\omega}^0\|^{\frac{4}{1+2\tilde{s}}}_{\tilde{s}}}=\frac{c_{{s}}}{\|u^0\|^{\frac{4}{2s-1}}_{{s}}}.
\end{align}
Case (1b): From Lemma \ref{lemmaonX}, when
$$ \|{u}^0\|_{s}=\|{\omega}^0\|_{\tilde{s}}=X({0})< \frac{c_{\tilde{s}} }{(\beta)^{\frac{2\tilde{s}+1}{2}}}=\frac{c_{s} }{(\beta)^{\frac{2s-1}{2}}}, $$
it follows that
\begin{align} \label{equzeta1X2}
T_{\omega}\geq T_{X}>\min\left\{\frac{c_{\tilde{s}}}{X(0)^{\frac{4}{1+2\tilde{s}}}}, \frac{c_{\tilde{s}}}{X(0)^{\frac{2}{1+2\tilde{s}}}}\right\}=\min\left\{\frac{c_{\tilde{s}}}{\|{\omega}^0\|_{\tilde{s}}^{\frac{4}{1+2\tilde{s}}}}, \frac{c_{\tilde{s}}}{\|{\omega}^0\|_{\tilde{s}}^{\frac{2}{1+2\tilde{s}}}}\right\}.
\end{align}
In conclusion, for Case (1b), we have
\begin{align*}
T^{\ast}>\min\left\{\frac{c_{{s}}}{\|u^0\|^{\frac{4}{2s-1}}_{{s}}}, \frac{c_{{s}}}{\|u^0\|^{\frac{2}{2s-1}}_{{s}}}\right\}.
\end{align*}
Case (2): When $\tilde{s}=\frac{1}{2}$, i.e., $s=\frac{3}{2}$, (\ref{innerpro2}) becomes
\begin{align} \label{innerpro8}
\frac{1}{2} \frac{d}{dt} \|{\omega}\|_{\frac{1}{2}, \beta t}^2 - \beta \|{\omega}\|^2_{1, \beta t}+ \|{\omega}\|_{\frac{3}{2}, \beta t}^2 \leq c_{\tilde{s}} \|{\omega}\|^{2}_{\frac{1}{2}, \beta t} \|{\omega}\|_{\frac{3}{2}, \beta t}+c_{\tilde{s}} \beta t \|{\omega}\|_{\frac{1}{2}, \beta t} \|{\omega}\|^{2}_{\frac{3}{2}, \beta t}.
\end{align}
Comparing the terms on the right-hand side of (\ref{innerpro8}), we can expect that there is a region (when $t$ and $\|{\omega}\|_{\frac{1}{2}, \beta t}$ are both small) in which the term $c_{\tilde{s}} \beta t \|{\omega}\|_{\frac{1}{2}, \beta t} \|{\omega}\|^{2}_{\frac{3}{2}, \beta t}$ can be absorbed by $ \|{\omega}\|_{\frac{3}{2}, \beta t}^2$.\\
Let $\displaystyle \breve{c}=\frac{1}{4c_{\tilde{s}}\beta }$ and define $t^{\lozenge}$ as the solution of $\displaystyle \|{\omega}\|_{\frac{1}{2}, \beta t}=\frac{\breve{c}}{t}.$ (If $\|{\omega}\|_{\frac{1}{2}, \beta t}$ does not blow up, then the theorem holds; assuming $\|{\omega}\|_{\frac{1}{2}, \beta t}$ blows up, such a $t^{\lozenge}$ exists.)
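(A word on why such a $t^{\lozenge}$ exists under the blow-up assumption: the map $t\mapsto\|{\omega}\|_{\frac{1}{2}, \beta t}$ is continuous and bounded near $t=0$, while $\frac{\breve{c}}{t}\to\infty$ as $t\to 0^{+}$, so $\|{\omega}\|_{\frac{1}{2}, \beta t}<\frac{\breve{c}}{t}$ for small $t$; on the other hand, $\|{\omega}\|_{\frac{1}{2}, \beta t}\to\infty$ at the blow-up time while $\frac{\breve{c}}{t}$ remains bounded there, so by the intermediate value theorem the two curves cross at a first time $t^{\lozenge}$.)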
When $0<t<t^{\lozenge}$, we have
\begin{align*}
\|{\omega}\|_{\frac{1}{2}, \beta t}<\frac{\breve{c}}{t}\Rightarrow \|{\omega}\|_{\frac{1}{2}, \beta t}<\frac{1}{4c_{\tilde{s}} \beta t},
\end{align*}
and consequently, from (\ref{innerpro8}), we obtain
\begin{align*}
\frac{1}{2} \frac{d}{dt} \|{\omega}\|_{\frac{1}{2}, \beta t}^2 - \beta \|{\omega}\|^2_{1, \beta t}+ \|{\omega}\|_{\frac{3}{2}, \beta t}^2 \leq c_{\tilde{s}} \|{\omega}\|^{2}_{\frac{1}{2}, \beta t} \|{\omega}\|_{\frac{3}{2}, \beta t}+\frac{1}{4} \|{\omega}\|^{2}_{\frac{3}{2}, \beta t}.
\end{align*}
When $\beta \leq \frac{1}{2}$, applying Young's inequality to the above inequality and simplifying, we have
\begin{align*}
\frac{d}{dt} \|{\omega}\|_{\frac{1}{2}, \beta t}^2 <c_{\tilde{s}}\|{\omega}\|_{\frac{1}{2}, \beta t}^{4}\ \Rightarrow \frac{d}{dt} \|{\omega}\|_{\frac{1}{2}, \beta t} <c_{\tilde{s}}\|{\omega}\|_{\frac{1}{2}, \beta t}^{3}.
\end{align*}
Denoting $Y(t)=\|{\omega}\|_{\frac{1}{2}, \beta t}$, we have
\begin{align} \label{innerpro9}
\frac{d}{dt} Y<c_{\tilde{s}}Y^{3}.
\end{align}
The local existence time of $Y$ is $\displaystyle T_Y = \sup\left\{ t>0 \:|\: \sup_{r\in[0,t]}|Y(r)| < \infty \right\}. $ We have $t^{\lozenge}<T_Y<\infty$, and when $0<t<t^{\lozenge}$, we compare $Y(t)$ with $\psi(t)$, where $\psi(t)$ is the solution of
\begin{align} \label{innerpro10}
\frac{d}{dt} \psi=c_{\tilde{s}}{\psi}^{3},
\end{align}
with $\psi(0)=Y(0)$ and (finite) local existence time $T_{\psi}$.\\
Applying Lemma \ref{lemma:nonlinear_Gronwall} to (\ref{innerpro9}) and (\ref{innerpro10}), we have
\begin{align*}
Y(t) \leq \psi(t),\ \text{for\ all}\ t\in\left [0, \min \left\{t^{\lozenge}, T_{Y}, T_{\psi}\right\}\right].
\end{align*}
Denoting the intersection point of $\psi(t)$ with $\displaystyle \frac{\breve{c}}{t}$ by $t_{\psi}$, we have $\displaystyle \psi(t_{\psi})=\frac{\breve{c}}{t_\psi}, $ and moreover $t_{\psi}\leq t^{\lozenge}<T_{Y}$. Solving (\ref{innerpro10}), we have
\begin{align} \label{tsol-nn}
\psi(t)=(\psi(0)^{-2}-c_{\tilde{s}} t)^{-1/2}.
\end{align}
Therefore,
\begin{align*}
(\psi(0)^{-2}-c_{\tilde{s}} t_{\psi})^{-1/2}=\frac{{\breve{c}}}{t_{\psi}}.
\end{align*}
After simplification, we obtain
\begin{align*}
\psi(0)^{-2}-c_{\tilde{s}} t_{\psi}={{\breve{c}}^{-2}}t_{\psi}^{2}\Rightarrow {{\breve{c}}^{-2}}t_{\psi}^{2}+c_{\tilde{s}} t_{\psi}=\psi(0)^{-2}.
\end{align*}
This is analogous to (\ref{equt-n}) with $\tilde{s}=\frac{1}{2}$. We follow a similar procedure as in Case (1) and obtain the corresponding results on the existence time, as sketched below.
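For completeness, here is a sketch of that last step, with the constants, as always, allowed to change from line to line. If $t_{\psi}\leq 1$, then $t_{\psi}^{2}\leq t_{\psi}$, and the last identity yields
\begin{align*}
\psi(0)^{-2}\leq \left({\breve{c}}^{-2}+c_{\tilde{s}}\right)t_{\psi},\quad \text{i.e.,}\quad t_{\psi}\geq\frac{c_{{s}}}{\|u^0\|_{\frac{3}{2}}^{2}},
\end{align*}
where we used $\psi(0)=\|{\omega}^0\|_{\frac{1}{2}}=\|u^0\|_{\frac{3}{2}}$ and the fact that, for $\beta\leq\frac{1}{2}$, ${\breve{c}}^{-2}=16c_{\tilde{s}}^{2}\beta^{2}$ is bounded by a constant depending only on $\tilde{s}$. If instead $t_{\psi}>1$, then $t_{\psi}^{2}>t_{\psi}$, and analogously $\displaystyle t_{\psi}\geq\frac{c_{{s}}}{\|u^0\|_{\frac{3}{2}}}$. Since $T^{\ast}=T_{Y}>t^{\lozenge}\geq t_{\psi}$, we obtain
\begin{align*}
T^{\ast}>\min\left\{\frac{c_{{s}}}{\|u^0\|^{2}_{\frac{3}{2}}},\ \frac{c_{{s}}}{\|u^0\|_{\frac{3}{2}}}\right\},
\end{align*}
in agreement with the exponents $\frac{4}{2s-1}=2$ and $\frac{2}{2s-1}=1$ of Case (1) at $s=\frac{3}{2}$.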
\end{proof}

\begin{proof}[\textbf{Proof of the Corollary \ref{corollary2}}]
From Lemma \ref{nvorticity}, when $-\frac{1}{2}<\tilde{s}<\frac{3}{2}$, we have the following inequality
\begin{align*}
\frac{1}{2} \frac{d}{dt} \|{\omega}\|_{\tilde{s}, \beta t}^2 - \beta \|{\omega}\|^2_{\tilde{s}+\frac{1}{2}, \beta t}+\|{\omega}\|_{\tilde{s}+1, \beta t}^2 \leq c_{\tilde{s}} \|{\omega}\|^{{\tilde{s}}+\frac{3}{2}}_{\tilde{s}, \beta t} \|{\omega}\|^{\frac{3}{2}-\tilde{s}}_{\tilde{s}+1, \beta t}+c_{\tilde{s}} \beta t \|{\omega}\|^{{\tilde{s}}+\frac{1}{2}}_{\tilde{s}, \beta t} \|{\omega}\|^{\frac{5}{2}-\tilde{s}}_{\tilde{s}+1, \beta t}.
\end{align*}
Since we now consider the Sobolev space, we have $\beta=0$. Applying Young's inequality to the above inequality and simplifying, we have
\begin{align*}
\frac{1}{2} \frac{d}{dt} \|{\omega}\|_{\tilde{s}}^2 \leq c_{\tilde{s}}\|{\omega}\|_{\tilde{s}}^{2 \cdot \frac{3+2\tilde{s}}{1+2\tilde{s}}}.
\end{align*}
Applying Lemma \ref{lem4n} and considering the existence time $T^{\ddagger}$ of $\|\mathbf{{\omega}}(t)\|_{\tilde{s}}$, we have
\begin{align*}
\|\mathbf{{\omega}}(T^{\ddagger}-t)\|_{\tilde{s}}\geq c_{\tilde{s}} t^{-\frac{1+2\tilde{s}}{4}}\Rightarrow \|\mathbf{{\omega}}(t)\|_{\tilde{s}}\geq c_{\tilde{s}} (T^{\ddagger}-t)^{-\frac{1+2\tilde{s}}{4}}.
\end{align*}
If we take $s=\tilde{s}+1$, so that $\frac{1}{2}<s<\frac{5}{2}$, it follows that
\begin{align*}
\|u(t)\|_{s}=\|\mathbf{{\omega}}(t)\|_{\tilde{s}}\geq c_{\tilde{s}} (T^{\ddagger}-t)^{-\frac{1+2\tilde{s}}{4}}\geq{c_{s} } (T^{\ddagger}-t)^{-\frac{2s-1}{4}}.
\end{align*}
This is equivalent to
\begin{align*}
T^{\ddagger}>\frac{c_s}{\|u^0\|^{\frac{4}{2s-1}}_s}.
\end{align*}
\end{proof}

\section{Appendix}\label{sec:appe}
\textbf{Proof of Lemma \ref{lem1}}.
\begin{proof}
(i) Let us start by observing
\begin{align*}
\left(B(u,u),A^{s}e^{2\alpha A^{\frac{1}{2}}}u\right)=\left(A^{s/2}e^{\alpha A^{\frac{1}{2}}}B(u,u),A^{s/2}e^{\alpha A^{\frac{1}{2}}}u\right).
\end{align*}
We just need to estimate the term $\displaystyle \|A^{s/2}e^{\alpha A^{\frac{1}{2}}}B(u,u)\|_{L^2}.$ So we consider $\displaystyle I=\left(A^{s/2}e^{\alpha A^{\frac{1}{2}}}B(u,u),w\right),$ for an arbitrary $w\in H$ with $\|w\|_{L^2}=1$. (In fact, we may take $w\in Gv(s, \alpha)$, and then pass to the limit in $H$. Accordingly, let $w\in Gv(s, \alpha)$ with $\|w\|_{L^2}=1$.)
\begin{align*}
\left(A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,u),w\right)&=\left(B(u,u),A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}w\right)\\
&=i \sum_{j,k} \left(j \cdot \hat{u}_{k-j}\right)(\hat{u}_{j} \cdot \hat{w}_{-k})|k |^s e^{\alpha|k |}\\
&=i \sum_{j,k} \left(k \cdot \hat{u}_{k-j}\right)(\hat{u}_{j} \cdot \hat{w}_{-k})|k |^s e^{\alpha|k |},
\end{align*}
since $\hat{u}_{k-j} \cdot (k-j)=0$. The rest of the proof follows from the proof of the first inequality in Lemma 3.1 in \cite{robinson2012lower}.
We also use the triangle inequality on the exponential function, namely,
$$e^{\alpha|k|}\leq e^{\alpha|k-j|}e^{\alpha|j|}.$$
(ii) Starting from the relation
\begin{align*}
\left(B(u,u),A^{s}e^{2\alpha A^{\frac{1}{2}}}u\right)=\left (A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,u),A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u\right),
\end{align*}
note that since $\left(B(u,A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u),A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u\right)=0$, we have
\begin{align} \label{estB}
\left(B(u,u),A^{s}e^{2\alpha A^{\frac{1}{2}}}u\right)=\left(A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,u)-B(u,A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u),A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u\right).
\end{align}
We need to estimate
$$\|A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,u)-B(u,A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u)\|_{L^2}.$$
Let us consider
$$I=\left(A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,u)-B(u,A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u),w\right),$$
for $\|w\|_{L^2}=1.$ (As before, we take $w\in D(Gv(s, \alpha))$ with $\|w\|_{L^2}=1$, and then pass to the limit.) The Fourier expansions of $u$ and $w$ are given by
$$u=\sum_{j\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}} \hat{u}_j e^{ij \cdot x},\ w=\sum_{k\in \mathbb{Z}^3\setminus \left \{(0,0,0)\right \}} \hat{w}_k e^{ik \cdot x}.$$
It follows that
\begin{align*}
\left(A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,u),w\right)&=\left(B(u,u),A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}w\right)\\
&=i \sum_{j,k} \left(j \cdot \hat{u}_{k-j}\right)(\hat{u}_{j} \cdot \hat{w}_{-k})|k |^s e^{\alpha|k |},
\end{align*}
and
\begin{align*}
\left(B(u,A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u),w\right)=i \sum_{j,k} (j \cdot \hat{u}_{k-j})(\hat{u}_{j} \cdot \hat{w}_{-k})|j |^s e^{\alpha|j |}.
\end{align*}
Combining the above two equations, we have
$$I=i \sum_{j,k} (j \cdot \hat{u}_{k-j})(\hat{u}_{j} \cdot \hat{w}_{-k})\left(|k |^s e^{\alpha|k |}-|j |^s e^{\alpha|j |}\right).$$
Using the reality condition $\hat{w}_{-k}=\overline{\hat{w}_{k}}$, we obtain an estimate for $I$ given by
\begin{align} \label{Iineq-n}
|I| \leq \sum_{j,k} |j | |\hat{u}_{k-j}| |\hat{u}_{j}| |\hat{w}_{k}| \left | |k |^s e^{\alpha|k |}-|j |^s e^{\alpha|j |}\right |.
\end{align}
Define $f$ by $f(x)=x^s e^{\alpha x}$. Then $f'(x)=s x^{s-1} e^{\alpha x}+x^{s} \alpha e^{\alpha x}$. Taking $\eta=a|j |+(1-a)|k |$, where $0\leq a \leq 1$, $\eta$ lies between $|j |$ and $|k |$. If $|k |\leq|j |$, then $|\eta|\leq|j |\leq|j |+|(k-j) |$; if $|j |<|k |$, then $|\eta|\leq|k |\leq|j |+|(k-j) |$. Therefore, we have $0<\eta\leq |j |+|(k-j) |$. Also, when $s\geq 1$, $s-1\geq 0$. Therefore, after applying the mean value theorem and the triangle inequality, it follows that
\begin{align*}
\left | |k |^s e^{\alpha|k |}-|j |^s e^{\alpha|j |}\right |&=|f'(\eta)| \left | |k |-|j | \right |\\
&\leq |f'(\eta)| |(k-j) |\\
&=\left | s {\eta}^{s-1}e^{\alpha \eta}+{\eta}^{s}\alpha e^{\alpha \eta} \right | \left |(k-j) \right|\\
&=\left | {\eta}^{s-1}e^{\alpha \eta}(s+\alpha\eta) \right | \left|(k-j) \right|.
\end{align*}
Replacing $\eta$ by $|j |+|l |$ with $l=k-j$, we have
\begin{align} \label{est1a}
& \left | |k |^s e^{\alpha|k |}-|j |^s e^{\alpha|j |}\right |\\ \nonumber
&\leq \left(|j |+|l |\right)^{s-1} e^{\alpha|j |}e^{\alpha|l |} \left(s+\alpha|j |+\alpha|l |\right) |l |.
\end{align}
Substituting (\ref{est1a}) into (\ref{Iineq-n}), we can refine our estimate for $I$:
\allowdisplaybreaks
\begin{align*}
|I| &\leq\sum_{l,j} |j | |\hat{u}_l| |\hat{u}_j| |\hat{w}_{l+j}| (|j |+|l |)^{s-1}e^{\alpha(|j |+|l |)} \left(s+\alpha(|j |+|l |)\right) |l |\\
&= s\sum_{l,j} |\hat{u}_l| |\hat{u}_j| |\hat{w}_{l+j}| |l ||j |(|j |+|l |)^{s-1}e^{\alpha|j |}e^{\alpha|l |} \\
&\quad +\alpha\sum_{l,j} |\hat{u}_l| |\hat{u}_j| |\hat{w}_{l+j}| |l ||j |(|j |+|l |)^{s}e^{\alpha|j |}e^{\alpha|l |} \\
&\leq c_s\sum_{l,j}|\hat{u}_l| |\hat{u}_j| |\hat{w}_{l+j}| |l ||j |(|j |^{s-1}+|l |^{s-1})e^{\alpha|j |}e^{\alpha|l |} \\
&\quad +c_s\alpha\sum_{l,j}|\hat{u}_l| |\hat{u}_j| |\hat{w}_{l+j}| |l ||j |(|j |^{s}+|l |^{s})e^{\alpha|j |}e^{\alpha|l |} \\
&\leq c_s\sum_{l,j}|\hat{u}_l| |\hat{u}_j| |\hat{w}_{l+j}| |l |^{s}|j | e^{\alpha|j |}e^{\alpha|l |} \\
&\quad +c_s\alpha\sum_{l,j}|\hat{u}_l| |\hat{u}_j| |\hat{w}_{l+j}| |l |^{s+1}|j |e^{\alpha|j |}e^{\alpha|l |} \\
&\leq c_s\sum_{j} |j |e^{\alpha|j |}|\hat{u}_j| \sum_{l}|l |^{s} e^{\alpha|l |}|\hat{u}_l| |\hat{w}_{l+j}| \\
&\quad +c_s\alpha \sum_{j} |j |e^{\alpha|j |}|\hat{u}_j| \sum_{l}|l |^{s+1}e^{\alpha|l |}|\hat{u}_l| |\hat{w}_{l+j}|\\
&\leq c_{s}\|u\|_{s, \alpha} \|w\|_{L^2} \sum_{j} |j |e^{\alpha|j |}|\hat{u}_j| +c_{s}\alpha \|u\|_{s+1,\alpha} \|w\|_{L^2} \sum_{j} |j |e^{\alpha|j |}|\hat{u}_j|\\
&\leq c_{s} \|u\|_{s, \alpha} \|w\|_{L^2} \|e^{\alpha A^{\frac{1}{2}}}u\|_{F^1} + c_{s}\alpha \|u\|_{s+1, \alpha} \|w\|_{L^2} \|e^{\alpha A^{\frac{1}{2}}}u\|_{F^1}.
\end{align*}
Therefore,
\begin{align*}
&\left|\left(B(u,u),A^{s}e^{2\alpha A^{\frac{1}{2}}}u\right)\right |\\
&\leq \|A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,u)-B(u,A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u)\|_{L^2} \cdot\|A^{\frac{s}{2}}e^{\alpha A^{\frac{1}{2}}}u\|_{L^2}\\
&\leq c_{s}\|e^{\alpha A^{\frac{1}{2}}}u\|_{F^1} \|u\|^2_{s, \alpha} +c_s \alpha \|e^{\alpha A^{\frac{1}{2}}}u\|_{F^1}\|u\|_{s+1, \alpha} \|u\|_{s, \alpha}.
\end{align*}
This establishes (\ref{term32}). Moreover, after applying Young's inequality, we obtain
\begin{align*}
c_s \alpha \|e^{\alpha A^{\frac{1}{2}}}u\|_{F^1}\|u\|_{s+1, \alpha} \|u\|_{s, \alpha}\leq {c_s \alpha^2} \|e^{\alpha A^{\frac{1}{2}}}u\|^2_{F^1}\|u\|^2_{s, \alpha}+\frac{1}{2}\|u\|^2_{s+1, \alpha}.
\end{align*}
Therefore,
\begin{align*}
\left|\left(B(u,u),A^{s}e^{2\alpha A^{\frac{1}{2}}}u\right)\right |\leq c_{s}\|e^{\alpha A^{\frac{1}{2}}}u\|_{F^1} \|u\|^2_{s, \alpha} +{c_s \alpha^2} \|e^{\alpha A^{\frac{1}{2}}}u\|^2_{F^1}\|u\|^2_{s, \alpha}+\frac{1}{2}\|u\|^2_{s+1, \alpha},
\end{align*}
which is precisely (\ref{term33}).
\end{proof}

\textbf{Proof of Lemma \ref{lem33}}.
\begin{proof}
For any $m>0$, if $0\leq \alpha |k | \leq 1$, then $e^{ \alpha |k |}\leq e$, and if $ \alpha |k | \geq 1$, we have $e^{ \alpha |k |}\leq ( \alpha |k |)^m e^{ \alpha |k |}$.
Therefore, for all $t>0$ and all $k$, we have $\displaystyle e^{ \alpha |k |}\leq e+( \alpha |k |)^m e^{ \alpha |k |}$ and $\displaystyle e^{2 \alpha |k |}\leq e+(2 \alpha |k |)^m e^{2 \alpha |k |}.$ Taking $m=2s$, it follows that
\begin{align*}
\|e^{ \alpha A^{\frac{1}{2}}}u\|^{2}_{L^2}=\sum_k e^{2 \alpha |k |} |\hat{u}_k|^2 &\leq \sum_k \left ( e+(2 \alpha |k |)^{2s} e^{2 \alpha |k |}\right ) |\hat{u}_k|^2\\
&=\sum_k e |\hat{u}_k|^2+\sum_k (2 \alpha |k |)^{2s} e^{2 \alpha |k |} |\hat{u}_k|^2.
\end{align*}
Since $\sqrt{a+b}\leq \sqrt{a}+\sqrt{b}$ for $a,b\geq0$, we have
\begin{align*}
\|e^{ \alpha A^{\frac{1}{2}}}u\|_{L^2}&\leq \sqrt{\sum_k e |\hat{u}_k|^2+\sum_k (2 \alpha |k |)^{2s} e^{2 \alpha |k |} |\hat{u}_k|^2}\\
&\leq \sqrt{\sum_k e |\hat{u}_k|^2}+\sqrt{\sum_k (2 \alpha |k |)^{2s} e^{2 \alpha |k |} |\hat{u}_k|^2}\\
&=\sqrt{e}\|u\|_{L^2}+\sqrt{ (2 \alpha )^{2s}\sum_k |k |^{2s} e^{2 \alpha |k |} |\hat{u}_k|^2}\\
&=\sqrt{e}\|u\|_{L^2}+(2 \alpha )^s \|A^{\frac{s}{2}}e^{ \alpha A^{\frac{1}{2}}}u\|_{L^2}\\
&=\sqrt{e}\|u\|_{L^2}+(2 \alpha )^s \|u\|_{s, \alpha }.
\end{align*}
\end{proof}

{\bf Proof of Lemma \ref{lemmaonzeta}.}
\begin{proof}
Comparing the terms on the right-hand side of (\ref{mainequ1}), we can expect that there is a region (when $t$ and $\zeta$ are both small) where $c_{s} \gamma \zeta^{1+\frac{5}{2s}}$ is the dominating term among the four terms on the right-hand side. In order to find this region, we compare $c_{s} \gamma \zeta^{1+\frac{5}{2s}}$ with the other three terms (note that $c_{s}$ is positive).
\begin{enumerate} \label{comparison}
\item Comparing $c_{s} \gamma \zeta^{1+\frac{5}{2s}} $ with $c_{s} (\beta t)^{s-\frac{5}{2}}\zeta^{2}$:\\
$\displaystyle c_{s} \gamma \zeta^{1+\frac{5}{2s}} \geq c_{s} (\beta t)^{s-\frac{5}{2}}\zeta^{2}$ holds precisely when $\displaystyle \zeta \leq \frac{c_s \gamma^{\frac{2s}{2s-5}}}{(\beta t)^{s}}$.
\item Comparing $c_{s} \gamma \zeta^{1+\frac{5}{2s}}$ with $\displaystyle {c_s (\beta t)^2} \gamma^{2}\zeta^{1+\frac{5}{s}} $:\\
$\displaystyle c_{s} \gamma \zeta^{1+\frac{5}{2s}}\geq {c_s (\beta t)^2} \gamma^{2}\zeta^{1+\frac{5}{s}}$ holds precisely when $\displaystyle \zeta \leq \frac{c_s }{\gamma^{\frac{2s}{5}}(\beta t)^{\frac{4 s}{5}}}$.
\item Comparing $c_{s} \gamma \zeta^{1+\frac{5}{2s}}$ with $\displaystyle {c_s (\beta t)^{2s-3}} \zeta^{3}$:\\
$\displaystyle c_{s} \gamma \zeta^{1+\frac{5}{2s}}\geq {c_s (\beta t)^{2s-3}} \zeta^{3}$ holds precisely when $ \displaystyle \zeta \leq \frac{c_s \gamma^{\frac{2s}{4s-5}}}{(\beta t)^{\frac{2s(2s-3)}{4s-5}}}$.
\end{enumerate}
Therefore, if $\displaystyle {\zeta} \leq c_s \min \left\{\beta^{-s}, \beta^{-\frac{4s}{5}}, \beta^{-\frac{2s(2s-3)}{4s-5}}\right \} \cdot \min \left\{\gamma^{\frac{2s}{2s-5}}, \gamma^{-\frac{2s}{5}}, \gamma^{\frac{2s}{4s-5}}\right \} \cdot \min \left\{\frac{1}{{t}^{s}}, \frac{1}{{t}^{\frac{4s}{5}}}, \frac{1}{{t}^{\frac{2s(2s-3)}{4s-5}}}\right \}$, then the first term $\displaystyle c_{s} \gamma \zeta^{1+\frac{5}{2s}}$ dominates the four terms on the right-hand side of (\ref{mainequ1}).\\
When $s>\frac{5}{2}$, we have $\frac{4s}{5}<\frac{2s \cdot (2s-3)}{4s-5}<s$.
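This elementary ordering can be checked directly; we record the computation for the reader's convenience. Since $2s>0$ and $4s-5>0$,
\begin{align*}
\frac{4s}{5}<\frac{2s(2s-3)}{4s-5}\ \Longleftrightarrow\ 2(4s-5)<5(2s-3)\ \Longleftrightarrow\ 8s-10<10s-15\ \Longleftrightarrow\ s>\frac{5}{2},
\end{align*}
and, similarly,
\begin{align*}
\frac{2s(2s-3)}{4s-5}<s\ \Longleftrightarrow\ 2(2s-3)<4s-5\ \Longleftrightarrow\ -6<-5,
\end{align*}
which always holds.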
Therefore, when $\beta\leq \frac{1}{2}$, $\displaystyle \beta^{-\frac{4s}{5}}=\min \left\{\beta^{-s}, \beta^{-\frac{4s}{5}}, \beta^{-\frac{2s(2s-3)}{4s-5}}\right \}.$ Denote
$$\tilde{c}=c_s \beta^{-\frac{4s}{5}} \min \left\{\gamma^{\frac{2s}{2s-5}}, \gamma^{-\frac{2s}{5}}, \gamma^{\frac{2s}{4s-5}}\right \}=c_s \beta^{-\frac{4s}{5}} \min \left\{\gamma^{\frac{2s}{2s-5}}, \gamma^{-\frac{2s}{5}}\right \}.$$
When $0<{t}<1$, $\displaystyle \frac{1}{{t}^{\frac{4s}{5}}}=\min \left\{\frac{1}{{t}^{s}}, \frac{1}{{t}^{\frac{4s}{5}}}, \frac{1}{{t}^{\frac{2s(2s-3)}{4s-5}}}\right \}, $ while when ${t}>1$, $\displaystyle \frac{1}{{t}^{s}}=\min \left\{\frac{1}{{t}^{s}}, \frac{1}{{t}^{\frac{4s}{5}}}, \frac{1}{{t}^{\frac{2s(2s-3)}{4s-5}}}\right \}. $ From (\ref{mainequ1}), we observe that $\zeta$ starts with positive initial data and is an increasing function. Moreover, since $\zeta \nearrow \infty$ as $t\nearrow T_{\zeta}$, it will first intersect either the curve $\displaystyle \frac{\tilde{c}}{{t}^{\frac{4s}{5}}}$ or the curve $\displaystyle \frac{\tilde{c}}{{t}^{s}}$ at some $t_{\zeta}\in (0, T_{\zeta})$. We have the following cases.
Case (i): When $\displaystyle {\zeta}({0})\geq \tilde{c} $, then $\displaystyle {\zeta}({1})> \tilde{c}$. In this case, $ \zeta(t)$ first intersects the curve $\displaystyle \frac{\tilde{c}}{{t}^{\frac{4s}{5}}}$. Denoting the intersection point by ${t}_{\zeta}$, we have $0<{t}_{\zeta}\leq1$. Therefore, when $0<t<t_{\zeta}$, we have $\displaystyle {\zeta}(t) <\frac{\tilde{c}}{{t}^{\frac{4s}{5}}}$, so that $\displaystyle c_{s} \gamma \zeta^{1+\frac{5}{2s}}$ dominates the four terms on the right-hand side of (\ref{mainequ1}). It follows that
\begin{align} \label{zetaa1}
\frac{d {\zeta}}{dt}<4c_s \gamma {\zeta}^{1+\frac{5}{2s}}.
\end{align}
Moreover, when $0<t<t_{\zeta}$, we compare $\zeta(t)$ with $\phi(t)$, where $\phi(t)$ is the solution of
\begin{align} \label{phi1}
\frac{d \phi}{dt}=4c_s \gamma {\phi}^{1+\frac{5}{2s}},
\end{align}
with $\phi(0)=\zeta(0)$. Applying Lemma \ref{lemma:nonlinear_Gronwall} to (\ref{zetaa1}) and (\ref{phi1}), we have $\displaystyle \zeta(t) < \phi(t)\ \text{for\ all}\ t\in \left[0, \min \left\{t_{\zeta}, T_{\zeta}, T_{\phi}\right\}\right]. $ It follows that there exists a $t_{\phi}$ such that
\begin{align} \label{tphi}
\phi(t_{\phi})=\frac{\tilde{c}}{t_{\phi}^\frac{4s}{5}}.
\end{align}
Since $\zeta(t)< \phi(t)$, we conclude $0<t_{\phi}< t_{\zeta}\leq 1$. Thus, the following relation holds: $\displaystyle t_{\phi}< t_{\zeta}<T_{\zeta}.$ Solving (\ref{phi1}) by separation of variables (recall that the value of $c_s$ is allowed to change from line to line), we have
\begin{align} \label{tsol}
\phi(t)=(\phi(0)^{-\frac{5}{2s}}-c_s \gamma t)^{-\frac{2s}{5}}.
\end{align}
Combining (\ref{tphi}) and (\ref{tsol}), it holds that $\displaystyle (\phi(0)^{-\frac{5}{2s}}-c_s \gamma t_{\phi})^{-\frac{2s}{5}}=\tilde{c} t_{\phi}^{-\frac{4s}{5}}. $ After simplification, we obtain $\displaystyle \phi(0)^{-\frac{5}{2s}}-c_s \gamma t_{\phi}=\tilde{c}^{\frac{-5}{2s}}t_{\phi}^{2}. $ Therefore,
\begin{align} \label{equt}
\tilde{c}^{\frac{-5}{2s}}t_{\phi}^2+c_s \gamma t_{\phi}=\phi(0)^{-\frac{5}{2s}}.
\end{align}
Since $t_{\phi}<1$, i.e., ${t_{\phi}}^2< t_{\phi}$, from (\ref{equt}) we have $\displaystyle \phi(0)^{-\frac{5}{2s}}< \left(\tilde{c}^{\frac{-5}{2s}}+c_s \gamma \right) t_{\phi}.
$ Therefore,
\begin{align} \label{equtphi}
\phi(0)^{-\frac{5}{2s}}<\frac{1}{\tilde{\tilde{c}}} t_{\phi},
\end{align}
where $\displaystyle \frac{1}{\tilde{\tilde{c}}}=2\max \left\{\tilde{c}^{\frac{-5}{2s}}, c_s \gamma \right \}$. Since $\tilde{c}=c_s \beta^{-\frac{4s}{5}} \min \left\{\gamma^{\frac{2s}{2s-5}}, \gamma^{-\frac{2s}{5}}\right \}$, we have
$$\tilde{c}^{\frac{-5}{2s}}=c_s \beta^2 \max \left\{\gamma^{-\frac{5}{2s-5}}, \ \gamma\right \}.$$
Since $\beta<1$, we have $\displaystyle \frac{1}{\tilde{\tilde{c}}}=c_s \max \left\{\gamma^{-\frac{5}{2s-5}}, \ \gamma\right \}$, i.e., $\displaystyle {\tilde{\tilde{c}}}=c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}.$ From (\ref{equtphi}), we have
\begin{align*}
t_{\phi}>\frac{\tilde{\tilde{c}}}{\phi(0)^{\frac{5}{2s}}}=\frac{c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}}{\zeta(0)^{\frac{5}{2s}}}.
\end{align*}
Therefore,
\begin{align*}
T_{\zeta}> t_{\phi}>\frac{c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}}{\zeta(0)^{\frac{5}{2s}}}.
\end{align*}
Case (ii): When $\displaystyle {\zeta}({0})< \tilde{c} $, if $\displaystyle {\zeta}({1})\geq \tilde{c}$, then, as in Case (i), we have $\displaystyle T_{\zeta}> t_{\phi}>\frac{c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}}{\zeta(0)^{\frac{5}{2s}}}. $ If $\displaystyle {\zeta}({1})< \tilde{c}$, then $ \zeta(t)$ first intersects the curve $\displaystyle \frac{\tilde{c}}{{t}^{s}}$. Denoting the intersection point by ${t}_{\zeta}$, we have ${t}_{\zeta}>1$. Similar to Case (i), when $0<t<t_{\zeta}$, we have $\displaystyle \frac{d {\zeta}}{dt}<4c_s \gamma {\zeta}^{1+\frac{5}{2s}}. $ Also, when we consider $\phi(t)$ as the solution of
\begin{align} \label{newphi}
\frac{d \phi}{dt}=4c_s \gamma {\phi}^{1+\frac{5}{2s}},
\end{align}
with $\phi(0)=\zeta(0)$, we have $\displaystyle \zeta(t) < \phi(t)\ \text{for\ all}\ t\in \left[0, \min \left\{t_{\zeta}, T_{\zeta}, T_{\phi}\right\}\right], $ and moreover $\displaystyle t_{\phi}< t_{\zeta}<T_{\zeta}.$ If $0<t_{\phi}\leq1$, then, as in Case (i), we have
\begin{align*}
T_{\zeta}> t_{\phi}>\frac{c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}}{\zeta(0)^{\frac{5}{2s}}}.
\end{align*}
If $t_{\phi}>1$, then
\begin{align} \label{tphi2}
\phi(t_{\phi})=\frac{\tilde{c}}{t_{\phi}^{s}}.
\end{align}
Solving (\ref{newphi}), we have
\begin{align} \label{tsol1}
\phi(t)=(\phi(0)^{-\frac{5}{2s}}-c_s \gamma t)^{-\frac{2s}{5}}.
\end{align}
Combining (\ref{tphi2}) and (\ref{tsol1}), we have $\displaystyle (\phi(0)^{-\frac{5}{2s}}-c_s \gamma t_{\phi})^{-\frac{2s}{5}}=\tilde{c} t_{\phi}^{-s}. $ After simplification, we obtain $\displaystyle \phi(0)^{-\frac{5}{2s}}-c_s \gamma t_{\phi}=\tilde{c}^{\frac{-5}{2s}}t_{\phi}^{5/2}. $ Therefore,
\begin{align} \label{equt1}
\tilde{c}^{\frac{-5}{2s}}t_{\phi}^{5/2}+c_s \gamma t_{\phi}=\phi(0)^{-\frac{5}{2s}}.
\end{align}
Since $t_{\phi}>1$ and hence ${t_{\phi}}^{5/2}> t_{\phi}$, from (\ref{equt1}) we have $\displaystyle \phi(0)^{-\frac{5}{2s}}< \left(\tilde{c}^{\frac{-5}{2s}}+c_s \gamma \right) {t_{\phi}}^{5/2}. $ Therefore,
\begin{align} \label{equtphi1}
\phi(0)^{-\frac{5}{2s}}<\frac{1}{\tilde{\tilde{c}}} {t_{\phi}}^{5/2}.
\end{align}
Following a similar analysis to Case (i), we have $\displaystyle {\tilde{\tilde{c}}}=c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}$ and
\begin{align*}
T_{\zeta}>t_{\phi}>\frac{\tilde{\tilde{c}}^{2/5}}{\phi(0)^{\frac{1}{s}}}=\frac{c_s \min \left\{\gamma^{\frac{2}{2s-5}}, \ \gamma^{-2/5}\right \}}{\zeta(0)^{\frac{1}{s}}}.
\end{align*}
Therefore, for Case (ii), we have
\begin{align*}
T_{\zeta}>t_{\phi}>\min\left\{Z, Z^{2/5}\right\},
\end{align*}
where $\displaystyle Z=\frac{c_s \min \left\{\gamma^{\frac{5}{2s-5}}, \ \gamma^{-1}\right \}}{\zeta(0)^{\frac{5}{2s}}}.$
\end{proof}

\textbf{Proof of Lemma \ref{inequ3w}}.
\begin{proof}
\begin{align} \label{bwu}
\left | \left (B({\omega}, u), A^{\tilde{s}}e^{2\alpha A^{\frac{1}{2}}}{\omega} \right ) \right |&=\left |\left(A^{\frac{\tilde{s}}{2}} e^{\alpha A^{\frac{1}{2}}}B({\omega}, u), A^{\frac{\tilde{s}}{2}} e^{\alpha A^{\frac{1}{2}}} {\omega} \right )\right |\\ \nonumber
&\leq \|{\omega} \cdot \nabla u\|_{\tilde{s}, \alpha } \|{\omega}\|_{\tilde{s}, \alpha }.
\end{align}
When $-\frac{1}{2}<\tilde{s}<\frac{3}{2}$, applying Lemma \ref{lem-gev1} with $s_1=\frac{3+2\tilde{s}}{4}$ and $s_2=\frac{3+2\tilde{s}}{4}$, we have $\displaystyle \|{\omega} \cdot \nabla u\|_{\tilde{s}, \alpha }\leq c_{\tilde{s}}\|{\omega}\|^2_{\frac{3+2\tilde{s}}{4}, \alpha }. $ Furthermore, $\displaystyle \|{\omega}\|^2_{\frac{3+2\tilde{s}}{4}, \alpha } \leq c_{\tilde{s}}\|{\omega}\|^{\frac{1+2\tilde{s}}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }. $ Therefore, (\ref{bwu}) becomes
$$\left| \left (B({\omega}, u), A^{\tilde{s}}e^{2\alpha A^{\frac{1}{2}}}{\omega} \right ) \right |\leq c_{\tilde{s}} \|{\omega}\|_{\tilde{s}, \alpha }^{\frac{3+2\tilde{s}}{2}} \|{\omega}\|_{\tilde{s}+1, \alpha }^{\frac{3-2\tilde{s}}{2}} .$$
\end{proof}

\textbf{Proof of Lemma \ref{inequ4w}}.
\begin{proof}
We start from
\begin{align*}
\left(B(u, {\omega}),A^{\tilde{s}}e^{2\alpha A^{\frac{1}{2}}}{\omega}\right)=\left(A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,{\omega}),A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}{\omega}\right).
\end{align*}
Since $\left(B(u,A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}{\omega}),A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}{\omega}\right)=0$, it follows that
\begin{align} \label{estB-n}
\left(B(u,{\omega}),A^{\tilde{s}}e^{2\alpha A^{\frac{1}{2}}}{\omega}\right)=\left(A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,{\omega})-B(u,A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}{\omega}),A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}{\omega}\right)= P.
\end{align}
Furthermore,
\begin{align*}
\left(A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}B(u,{\omega}),A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}{\omega}\right)&=\left(B(u,{\omega}),A^{\tilde{s}}e^{2\alpha A^{\frac{1}{2}}}\omega\right)\\
&=i \sum_{j,k} (j \cdot \hat{u}_{k-j})(\hat{\omega}_{j} \cdot \hat{\omega}_{-k})|k |^{2\tilde{s}} e^{2\alpha |k |},
\end{align*}
and
\begin{align*}
\left(B(u,A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}{\omega}),A^{\frac{\tilde{s}}{2}}e^{\alpha A^{\frac{1}{2}}}{\omega}\right)=i \sum_{j,k} (j \cdot \hat{u}_{k-j})(\hat{\omega}_{j} \cdot \hat{\omega}_{-k})|j |^{\tilde{s}} e^{\alpha |j |}|k |^{\tilde{s}} e^{\alpha |k |}.
\end{align*}
Combining the above two equations, we have
\begin{align*}
P=i \sum_{j,k} (j \cdot \hat{u}_{k-j})(\hat{\omega}_{j} \cdot \hat{\omega}_{-k})|k |^{\tilde{s}} e^{\alpha |k |}\left(|k |^{\tilde{s}} e^{\alpha |k |}-|j |^{\tilde{s}} e^{\alpha |j |}\right).
\end{align*}
Since $u$ is divergence free, we have $(k-j) \cdot \hat{u}_{k-j}=0$ and
\begin{align*}
P=i \sum_{j,k} (k \cdot \hat{u}_{k-j})(\hat{\omega}_{j} \cdot \hat{\omega}_{-k})|k |^{\tilde{s}} e^{\alpha |k |}\left(|k |^{\tilde{s}} e^{\alpha |k |}-|j |^{\tilde{s}} e^{\alpha |j |}\right).
\end{align*}
Since $\hat{\omega}_{-k}=\overline{\hat{\omega}_{k}}$, we obtain the estimate of $P$
\begin{align} \label{Iineq}
|P| \leq \sum_{j,k} |k | |\hat{u}_{k-j}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k |^{\tilde{s}} e^{\alpha |k |} \left | |k |^{\tilde{s}} e^{\alpha |k |}-|j |^{\tilde{s}} e^{\alpha |j |}\right |.
\end{align}
Defining $f(x)=x^{\tilde{s}} e^{\alpha x}$, we have $f'(x)=\tilde{s} x^{{\tilde{s}}-1} e^{\alpha x}+x^{{\tilde{s}}} \alpha e^{\alpha x}$. Taking $\eta=a|j |+(1-a)|k |$, where $0\leq a \leq 1$, $\eta$ lies between $|j |$ and $|k |$. If $|k |<|j |$, then $|\eta|<|j |<|j |+|(k-j) |$; if $|j |<|k |$, then $|\eta|<|k |\leq|j |+|(k-j) |$. Therefore, we have $0<\eta\leq |j |+|(k-j) |$. Applying the mean value theorem, it follows that
\begin{align*}
\left | |k |^{\tilde{s}} e^{\alpha |k |}-|j |^{\tilde{s}} e^{\alpha |j |}\right |=|f'(\eta)| \left | |k |-|j | \right | &\leq |f'(\eta)| |(k-j) |\\
&=\left | (\tilde{s} {\eta}^{{\tilde{s}}-1}e^{\alpha \eta}+{\eta}^{{\tilde{s}}}\alpha e^{\alpha \eta}) \right | \left |(k-j) \right |.
\end{align*}
Therefore, taking $l=k-j$, (\ref{Iineq}) becomes
\begin{align*}
|P|&\leq \sum_{l+j=k} |k | |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k |^{\tilde{s}} e^{\alpha {|k |}} \left | (\tilde{s} {\eta}^{{\tilde{s}}-1}e^{\alpha \eta}+{\eta}^{{\tilde{s}}}\alpha e^{\alpha \eta}) \right | |l |\\
&\leq |\tilde{s}| \sum_{l+j=k} |k | |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k |^{\tilde{s}} e^{\alpha {|k |}} \left | {\eta}\right |^{{\tilde{s}}-1}e^{\alpha \eta} |l |\\
&\quad +\alpha \sum_{l+j=k} |k | |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k |^{\tilde{s}} e^{\alpha {|k |}} \left | {\eta} \right |^{{\tilde{s}}}e^{\alpha \eta} |l |\\
&=P_1+P_2.
\end{align*}
We first analyze $\displaystyle P_1=|\tilde{s}| \sum_{l+j=k} |k | |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k |^{\tilde{s}} e^{\alpha {|k |}} \left | {\eta}\right |^{{\tilde{s}}-1}e^{\alpha \eta} |l |$.\\
Case (i): When $-\frac{1}{2}<\tilde{s}<1$, recalling that $\left | {\eta}\right |=a|j |+(1-a)|k |$ with $0\leq a \leq 1$, we have the following two subcases.\\
Case (ia): If $|j |\leq |k |$, then $\left | {\eta}\right |\geq |j |$, and hence $\displaystyle \left | {\eta}\right |^{{\tilde{s}}-1}\leq |j |^{{\tilde{s}}-1}. $ Moreover, since $0<\eta\leq |j |+|l |$, we have $\displaystyle e^{\alpha \eta}\leq e^{\alpha |j |}e^{\alpha |l |}. $ Taking $0<\delta<1$, it follows that
\begin{align*}
P_1&\leq |\tilde{s}| \sum_{l+j=k}|k | |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k |^{\tilde{s}} e^{\alpha |k |} |j |^{{\tilde{s}}-1} e^{\alpha |j |}e^{\alpha |l |} |l | \\
&\leq |\tilde{s}| \sum_{l+j=k} |k |^{1-\delta} (|l ||\hat{u}_{l}|e^{\alpha |l |}) \cdot (|j |^{{\tilde{s}}-1}|\hat{\omega}_{j}|e^{\alpha |j |})\cdot (|\hat{\omega}_{k}||k |^{{\tilde{s}}+\delta} e^{\alpha |k |} ) \\
&\leq |\tilde{s}| \|{\omega}_1\ast {\omega}_2\|_{\dot{H}^{1-\delta}} \|{\omega}\|_{\tilde{s}+\delta, \alpha },
\end{align*}
where
\begin{align*}
\|{\omega}_1\|^2_{L^2}=\sum_{l} |\hat{{\omega}}_{l}|^2e^{2\alpha |l |},\ \|{\omega}_2\|^2_{L^2}=\sum_{l} |l |^{2(\tilde{s}-1)}|\hat{{\omega}}_{l}|^2e^{2\alpha |l |}.
\end{align*}
When $-\frac{1}{2}< \tilde{s}<1$ and $\max\left\{\frac{1}{2}-\tilde{s},0\right\}<\delta<1$, from Lemma \ref{lem-gev1} with $s_1=\frac{3-2\delta+2\tilde{s}}{4}$ and $s_2=\frac{7-2\delta-2\tilde{s}}{4}$, we have
\begin{align*}
\|{\omega}_1\ast {\omega}_2\|_{\dot{H}^{1-\delta}}\leq c_{\tilde{s}}\|{\omega}_1\|_{\frac{3-2\delta+2\tilde{s}}{4}} \|{\omega}_2\|_{\frac{7-2\delta-2\tilde{s}}{4}} =c\|{\omega}\|^2_{\frac{3+2\tilde{s}-2\delta}{4}, \alpha }.
\end{align*}
Therefore,
\begin{align*}
P_1\leq c_{\tilde{s}} \|{\omega}\|^2_{\frac{3+2\tilde{s}-2\delta}{4}, \alpha }\|{\omega}\|_{\tilde{s}+\delta, \alpha }.
\end{align*}
When $-\frac{1}{2}< \tilde{s}<1$ with $\max\left\{\frac{1}{2}-\tilde{s}, 0\right\}<\delta<\min\left\{\frac{3}{2}-\tilde{s}, 1\right\}$, we have
\begin{align*}
\|{\omega}\|^2_{\frac{3+2\tilde{s}-2\delta}{4}, \alpha }\leq c_{\tilde{s}} \|{\omega}\|^{\frac{2\delta+2\tilde{s}+1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3-2\delta-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }\ \text{ and}\ \|{\omega}\|_{\tilde{s}+\delta, \alpha }\leq c_{\tilde{s}} \|{\omega}\|^{1-\delta}_{\tilde{s}, \alpha } \|{\omega}\|^{\delta}_{\tilde{s}+1, \alpha }.
\end{align*}
Therefore,
\begin{align*}
P_1&\leq c_{\tilde{s}} \|{\omega}\|^{\frac{2\delta+2\tilde{s}+1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3-2\delta-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }\|{\omega}\|^{1-\delta}_{\tilde{s}, \alpha } \|{\omega}\|^{\delta}_{\tilde{s}+1, \alpha }\\ \nonumber
&=c_{\tilde{s}} \|{\omega}\|^{\frac{3+2\tilde{s}}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }.
\end{align*}
Case (ib): If $|j |> |k |$, then $\left | {\eta}\right |\geq |k |$, and hence $\displaystyle \left | {\eta}\right |^{{\tilde{s}}-1}\leq |k |^{{\tilde{s}}-1}.
Therefore
\begin{align*}
P_1&\leq |\tilde{s}|\sum_{l+j=k}|k| |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k|^{\tilde{s}} e^{\alpha |k|} |k|^{{\tilde{s}}-1} e^{\alpha |j|}e^{\alpha |l|} |l| \\
&\leq |\tilde{s}|\sum_{l+j=k} |k|^{\tilde{s}} (|l||\hat{u}_{l}|e^{\alpha |l|}) \cdot (|\hat{\omega}_{j}|e^{\alpha |j|})\cdot (|\hat{\omega}_{k}||k|^{\tilde{s}} e^{\alpha |k|} ) \\
&\leq |\tilde{s}| \|{\omega}_1\ast {\omega}_1\|_{\dot{H}^{\tilde{s}}} \|{\omega}\|_{\tilde{s}, \alpha }.
\end{align*}
When $-\frac{1}{2}<\tilde{s}<1$, from Lemma \ref{lem-gev1} with $s_1=s_2=\frac{3+2\tilde{s}}{4}$, we have $\displaystyle \|{\omega}_1\ast {\omega}_1\|_{\dot{H}^{\tilde{s}}}\leq c_{\tilde{s}}\|{\omega}\|^2_{\frac{3+2\tilde{s}}{4}, \alpha }$, and therefore $\displaystyle P_1\leq c_{\tilde{s}} \|{\omega}\|^2_{\frac{3+2\tilde{s}}{4}, \alpha }\|{\omega}\|_{\tilde{s}, \alpha }$. Since
\begin{align} \label{inter2}
\|{\omega}\|^2_{\frac{3+2\tilde{s}}{4}, \alpha }\leq c_{\tilde{s}} \|{\omega}\|^{\frac{1+2\tilde{s}}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha },
\end{align}
we have $\displaystyle P_1\leq c_{\tilde{s}} \|{\omega}\|^{\frac{3+2\tilde{s}}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }$.

Case (ii): $1\leq \tilde{s}<\frac{3}{2}$. Since $\eta\leq |j|+|l|$, we have $\displaystyle \left| {\eta}\right|^{{\tilde{s}}-1}\leq (|j|+|l|)^{{\tilde{s}}-1}$. Therefore
\begin{align*}
P_1&\leq \tilde{s}\sum_{l+j=k}|k| |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k|^{{\tilde{s}}} e^{\alpha |k|} (|j|+|l|)^{{\tilde{s}}-1} e^{\alpha |j|}e^{\alpha |l|} |l| \\
&\leq c_{\tilde{s}}\tilde{s}\sum_{l+j=k} |k| (|l||\hat{u}_{l}|e^{\alpha |l|}) \cdot (|j|^{{\tilde{s}}-1}+|l|^{{\tilde{s}}-1})\cdot (|\hat{\omega}_{j}|e^{\alpha |j|})\cdot (|\hat{\omega}_{k}||k|^{\tilde{s}} e^{\alpha |k|} ) \\
&\leq c_{\tilde{s}}\tilde{s}\sum_{l+j=k} |k| (|l||\hat{u}_{l}|e^{\alpha |l|}) \cdot (|j|^{{\tilde{s}}-1}|\hat{\omega}_{j}|e^{\alpha |j|})\cdot (|\hat{\omega}_{k}||k|^{{\tilde{s}}} e^{\alpha |k|} )\\
&\leq c_{\tilde{s}} \tilde{s} \|{\omega}_1\ast {\omega}_2\|_{\dot{H}^{1}} \|{\omega}\|_{\tilde{s}, \alpha }.
\end{align*}
When $1\leq \tilde{s}<\frac{3}{2}$, from Lemma \ref{lem-gev1} with $s_1=\frac{3+2\tilde{s}}{4}$ and $s_2=\frac{7-2\tilde{s}}{4}$, we have
\begin{align*}
\|{\omega}_1\ast {\omega}_2\|_{\dot{H}^1}\leq c_{\tilde{s}}\|{\omega}_1\|_{\frac{3+2\tilde{s}}{4}} \|{\omega}_2\|_{\frac{7-2\tilde{s}}{4}} =c_{\tilde{s}}\|{\omega}\|^2_{\frac{3+2\tilde{s}}{4}, \alpha }.
\end{align*}
Therefore $\displaystyle P_1\leq c_{\tilde{s}} \|{\omega}\|^2_{\frac{3+2\tilde{s}}{4}, \alpha }\|{\omega}\|_{\tilde{s}, \alpha }$, and from (\ref{inter2}) we again obtain $\displaystyle P_1\leq c_{\tilde{s}} \|{\omega}\|^{\frac{3+2\tilde{s}}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }$. Combining cases (i) and (ii), when $-\frac{1}{2}<\tilde{s}<\frac{3}{2}$ we always have
\begin{align} \label{p1case3}
P_1\leq c_{\tilde{s}} \|{\omega}\|^{\frac{3+2\tilde{s}}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }.
\end{align}
Next, we analyze the estimate for
$$ P_2=\alpha \sum_{l+j=k} |k| |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k|^{\tilde{s}} e^{\alpha {|k|}} \left| {\eta} \right|^{{\tilde{s}}}e^{\alpha \eta} |l|.$$
Case (a): $0\leq \tilde{s}<\frac{3}{2}$. We have $\displaystyle \left| {\eta}\right|^{{\tilde{s}}}\leq (|j|+|l|)^{{\tilde{s}}}$, and therefore
\begin{align*}
P_2 & \leq \alpha \sum_{l+j=k} |k| |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k|^{\tilde{s}} e^{\alpha |k|} (|j|+|l|)^{{\tilde{s}}} e^{\alpha |j|}e^{\alpha |l|} |l| \\
&\leq c_{\tilde{s}}\alpha \sum_{l+j=k}|k| |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k|^{\tilde{s}} e^{\alpha |k|} (|j|^{{\tilde{s}}}+|l|^{{\tilde{s}}}) e^{\alpha |j|}e^{\alpha |l|} |l| \\
&\leq c_{\tilde{s}}\alpha \sum_{l+j=k}|k| |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k|^{\tilde{s}} e^{\alpha |k|} |l|^{{\tilde{s}}}e^{\alpha |j|}e^{\alpha |l|} |l|\\
&\leq c_{\tilde{s}}\alpha \sum_{l+j=k}|k|^{1-\delta}|l|^{{\tilde{s}}} (|l||\hat{u}_{l}|e^{\alpha |l|})\cdot (|\hat{\omega}_{j}|e^{\alpha |j|})\cdot (|\hat{\omega}_{k}||k|^{{\tilde{s}}+\delta} e^{\alpha |k|} ) \\
&\leq c_{\tilde{s}}\alpha \|{\omega}_1\ast {\omega}_3\|_{\dot{H}^{1-\delta}} \|{\omega}\|_{\tilde{s}+\delta, \alpha },
\end{align*}
where $\displaystyle \|{\omega}_3\|^2_{L^2}=\sum_{l} |l|^{2\tilde{s}}|\hat{{\omega}}_{l}|^2e^{2\alpha |l|}$. When $0\leq \tilde{s}< \frac{3}{2}$ with $\max\left\{\tilde{s}-\frac{1}{2},0\right\}<\delta<1$, from Lemma \ref{lem-gev1} with $s_1=\frac{5+2\tilde{s}-2\delta}{4}$ and $s_2=\frac{5-2\delta-2\tilde{s}}{4}$, we have
\begin{align*}
\|{\omega}_1\ast {\omega}_3\|_{\dot{H}^{1-\delta}}\leq c_{\tilde{s}}\|{\omega}_1\|_{\frac{5+2\tilde{s}-2\delta}{4}} \|{\omega}_3\|_{\frac{5-2\delta-2\tilde{s}}{4}} =c_{\tilde{s}}\|{\omega}\|^2_{\frac{5+2\tilde{s}-2\delta}{4}, \alpha }.
\end{align*}
Therefore, $\displaystyle P_2\leq c_{\tilde{s}} \alpha \|{\omega}\|^2_{\frac{5+2\tilde{s}-2\delta}{4}, \alpha } \|{\omega}\|_{\tilde{s}+\delta,\alpha }$. When $0\leq \tilde{s}< \frac{3}{2}$ with $\max\left\{\tilde{s}-\frac{1}{2}, \frac{1}{2}-\tilde{s}, 0\right\}<\delta<1$, we have
\begin{align*}
\|{\omega}\|^2_{\frac{5+2\tilde{s}-2\delta}{4}, \alpha }\leq c_{\tilde{s}} \|{\omega}\|^{\frac{2\delta+2\tilde{s}-1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5-2\delta-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha } \quad \mbox{and}\quad \|{\omega}\|_{\tilde{s}+\delta, \alpha }\leq c_{\tilde{s}} \|{\omega}\|^{1-\delta}_{\tilde{s}, \alpha } \|{\omega}\|^{\delta}_{\tilde{s}+1, \alpha }.
\end{align*}
Thus
\begin{align*}
P_2&\leq c_{\tilde{s}} \alpha \|{\omega}\|^{\frac{2\delta+2\tilde{s}-1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5-2\delta-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }\|{\omega}\|^{1-\delta}_{\tilde{s}, \alpha } \|{\omega}\|^{\delta}_{\tilde{s}+1, \alpha }\\
&=c_{\tilde{s}} \alpha \|{\omega}\|^{{\tilde{s}}+\frac{1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5}{2}-\tilde{s}}_{\tilde{s}+1, \alpha }.
\end{align*}
Case (b): $-\frac{1}{2}<\tilde{s}<0$.\\
Case (b1): if $|j|\leq |k|$, then $\eta\geq |j|$, and hence $\displaystyle \left| {\eta}\right|^{{\tilde{s}}}\leq |j|^{{\tilde{s}}}$. Therefore,
\begin{align*}
P_2 & \leq \alpha \sum_{l+j=k} |k| |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k|^{\tilde{s}} e^{\alpha |k|} |j|^{{\tilde{s}}} e^{\alpha |j|}e^{\alpha |l|} |l| \\
&\leq \alpha \sum_{l+j=k}|k|^{1-\delta}|j|^{{\tilde{s}}} (|l||\hat{u}_{l}|e^{\alpha |l|})\cdot (|\hat{\omega}_{j}|e^{\alpha |j|})\cdot (|\hat{\omega}_{k}||k|^{{\tilde{s}}+\delta} e^{\alpha |k|} ) \\
&\leq \alpha \|{\omega}_1\ast {\omega}_3\|_{\dot{H}^{1-\delta}} \|{\omega}\|_{\tilde{s}+\delta, \alpha }.
\end{align*}
When $- \frac{1}{2}< \tilde{s}< 0$ with $0<\delta<1$, from Lemma \ref{lem-gev1} with $s_1=\frac{5+2\tilde{s}-2\delta}{4}$ and $s_2=\frac{5-2\delta-2\tilde{s}}{4}$, we have
\begin{align*}
\|{\omega}_1\ast {\omega}_3\|_{\dot{H}^{1-\delta}}\leq c_{\tilde{s}}\|{\omega}_1\|_{\frac{5+2\tilde{s}-2\delta}{4}} \|{\omega}_3\|_{\frac{5-2\delta-2\tilde{s}}{4}} =c_{\tilde{s}}\|{\omega}\|^2_{\frac{5+2\tilde{s}-2\delta}{4}, \alpha }.
\end{align*}
Therefore, $\displaystyle P_2\leq c_{\tilde{s}} \alpha \|{\omega}\|^2_{\frac{5+2\tilde{s}-2\delta}{4}, \alpha } \|{\omega}\|_{\tilde{s}+\delta,\alpha }$. When $- \frac{1}{2}< \tilde{s}< 0$ with $\frac{1}{2}-\tilde{s}<\delta<1$, we have
\begin{align*}
\|{\omega}\|^2_{\frac{5+2\tilde{s}-2\delta}{4}, \alpha }\leq c_{\tilde{s}} \|{\omega}\|^{\frac{2\delta+2\tilde{s}-1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5-2\delta-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }\quad\text{and}\quad \|{\omega}\|_{\tilde{s}+\delta, \alpha }\leq c_{\tilde{s}} \|{\omega}\|^{1-\delta}_{\tilde{s}, \alpha } \|{\omega}\|^{\delta}_{\tilde{s}+1, \alpha },
\end{align*}
and hence
\begin{align*}
P_2&\leq c_{\tilde{s}} \alpha \|{\omega}\|^{\frac{2\delta+2\tilde{s}-1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5-2\delta-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }\|{\omega}\|^{1-\delta}_{\tilde{s}, \alpha } \|{\omega}\|^{\delta}_{\tilde{s}+1, \alpha }\\
&=c_{\tilde{s}} \alpha \|{\omega}\|^{{\tilde{s}}+\frac{1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5}{2}-\tilde{s}}_{\tilde{s}+1, \alpha }.
\end{align*}
Case (b2): if $|j|> |k|$, then $\eta\geq |k|$, and hence $\displaystyle \left| {\eta}\right|^{{\tilde{s}}}\leq |k|^{{\tilde{s}}}$. Therefore
\begin{align*}
P_2 & \leq \alpha \sum_{l+j=k} |k| |\hat{u}_{l}| |\hat{\omega}_{j}| |\hat{\omega}_{k}||k|^{\tilde{s}} e^{\alpha |k|} |k|^{{\tilde{s}}} e^{\alpha |j|}e^{\alpha |l|} |l| \\
&\leq \alpha \sum_{l+j=k}|k|^{{\tilde{s}}+1-\delta} (|l||\hat{u}_{l}|e^{\alpha |l|})\cdot (|\hat{\omega}_{j}|e^{\alpha |j|})\cdot (|\hat{\omega}_{k}||k|^{{\tilde{s}}+\delta} e^{\alpha |k|} ) \\
&\leq \alpha \|{\omega}_1\ast {\omega}_1\|_{\dot{H}^{{\tilde{s}}+1-\delta}} \|{\omega}\|_{\tilde{s}+\delta, \alpha }.
\end{align*}
When $- \frac{1}{2}< \tilde{s}< 0$ with $0<\delta<1$, from Lemma \ref{lem-gev1} with $s_1=s_2=\frac{5+2\tilde{s}-2\delta}{4}$, we have
\begin{align*}
\|{\omega}_1\ast {\omega}_1\|_{\dot{H}^{{\tilde{s}}+1-\delta}} \leq c_{\tilde{s}}\|{\omega}\|^2_{\frac{5+2\tilde{s}-2\delta}{4}, \alpha }.
\end{align*}
Therefore, $\displaystyle P_2\leq c_{\tilde{s}} \alpha \|{\omega}\|^2_{\frac{5+2\tilde{s}-2\delta}{4}, \alpha } \|{\omega}\|_{\tilde{s}+\delta,\alpha }$. When $- \frac{1}{2}< \tilde{s}< 0$ with $\frac{1}{2}-\tilde{s}<\delta<1$, we have
\begin{align*}
\|{\omega}\|^2_{\frac{5+2\tilde{s}-2\delta}{4}, \alpha }\leq c_{\tilde{s}} \|{\omega}\|^{\frac{2\delta+2\tilde{s}-1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5-2\delta-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }\quad\text{and}\quad \|{\omega}\|_{\tilde{s}+\delta, \alpha }\leq c_{\tilde{s}} \|{\omega}\|^{1-\delta}_{\tilde{s}, \alpha } \|{\omega}\|^{\delta}_{\tilde{s}+1, \alpha }.
\end{align*}
Therefore
\begin{align*}
P_2&\leq c_{\tilde{s}} \alpha \|{\omega}\|^{\frac{2\delta+2\tilde{s}-1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5-2\delta-2\tilde{s}}{2}}_{\tilde{s}+1, \alpha }\|{\omega}\|^{1-\delta}_{\tilde{s}, \alpha } \|{\omega}\|^{\delta}_{\tilde{s}+1, \alpha }\\
&=c_{\tilde{s}} \alpha \|{\omega}\|^{{\tilde{s}}+\frac{1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5}{2}-\tilde{s}}_{\tilde{s}+1, \alpha }.
\end{align*}
Combining Cases (a) and (b), we have
\begin{align} \label{P2}
P_2\leq c_{\tilde{s}} \alpha \|{\omega}\|^{{\tilde{s}}+\frac{1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5}{2}-\tilde{s}}_{\tilde{s}+1, \alpha }.
\end{align}
Combining (\ref{p1case3}) and (\ref{P2}), when $-\frac{1}{2}<\tilde{s}<\frac{3}{2}$ we obtain
\begin{align*}
\left| \left(B(u, {\omega}), A^{{\tilde{s}}}e^{2\alpha A^{\frac{1}{2}}}{\omega} \right) \right|=|P|\leq P_1+P_2\leq c_{\tilde{s}} \|{\omega}\|^{{\tilde{s}}+\frac{3}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{3}{2}-\tilde{s}}_{\tilde{s}+1, \alpha }+c_{\tilde{s}} \alpha \|{\omega}\|^{{\tilde{s}}+\frac{1}{2}}_{\tilde{s}, \alpha } \|{\omega}\|^{\frac{5}{2}-\tilde{s}}_{\tilde{s}+1, \alpha }.
\end{align*}
\end{proof}

{\bf Proof of Lemma \ref{lemmaonX}.}
\begin{proof}
Comparing the terms on the right-hand side of (\ref{mainequ1X}), we expect that there is a region (where $t$ and $X$ are both small) in which $c_{\tilde{s}}X^{1+ \frac{4}{1+2\tilde{s}}}$ is the dominant of the two terms on the right-hand side. To identify this region, we compare $c_{\tilde{s}}X^{1+ \frac{4}{1+2\tilde{s}}}$ with $c_{\tilde{s}}(\beta t)^{\frac{4}{2\tilde{s}-1}}X^{1+ \frac{4}{2\tilde{s}-1}}$. If $c_{\tilde{s}}X^{1+ \frac{4}{1+2\tilde{s}}}\geq c_{\tilde{s}}(\beta t)^{\frac{4}{2\tilde{s}-1}}X^{1+ \frac{4}{2\tilde{s}-1}}$, then $\displaystyle X \leq \frac{c_{\tilde{s}} }{(\beta t)^{\frac{2\tilde{s}+1}{2}}}$.
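As a quick symbolic sanity check of this exponent arithmetic (not part of the proof), one can verify with SymPy that the dominance condition collapses to the stated threshold curve. Here \texttt{s} stands for $\tilde{s}$ (taken with $2\tilde{s}-1>0$, so that the collected exponent of $X$ is negative), \texttt{u} stands for $\beta t$, and the generic constants are dropped:
\begin{verbatim}
import sympy as sp

s = sp.symbols('s', positive=True)       # s plays the role of tilde{s}
# X**(1+4/(1+2s)) >= u**(4/(2s-1)) * X**(1+4/(2s-1)) collects to
# X**e >= u**(4/(2s-1)) with
e = sp.simplify(4/(1 + 2*s) - 4/(2*s - 1))
print(e)                                 # expected: -8/(4*s**2 - 1)
# so the threshold curve is X = u**q, where
q = sp.simplify((4/(2*s - 1)) / e)
print(q)                                 # expected: -s - 1/2
\end{verbatim}
That is, the two terms balance exactly on $X=(\beta t)^{-\frac{2\tilde{s}+1}{2}}$, as used above.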
Considering the function
\begin{align*}
K(t)=X(t)-\frac{c_{\tilde{s}} }{(\beta t)^{\frac{2\tilde{s}+1}{2}}},
\end{align*}
from (\ref{mainequ1X}) we observe that $X$ starts with positive initial data and is an increasing function. Moreover, since $X \nearrow \infty$ as $t\nearrow T_{X}$, it must intersect the curve $\displaystyle \frac{c_{\tilde{s}} }{(\beta t)^{\frac{2\tilde{s}+1}{2}}}$. Therefore, there exists $t_{X}$ such that $K(t_{X})=0$ and $K(t)<0$ for $t<t_{X}$. Consequently, when $0<t<t_{X}$ we have
\begin{align} \label{innerpro5}
\frac{d X}{dt}<2c_{\tilde{s}}X^{1+ \frac{4}{1+2\tilde{s}}}=:c_{\tilde{s}}X^{1+ \frac{4}{1+2\tilde{s}}},
\end{align}
absorbing the factor $2$ into the generic constant $c_{\tilde{s}}$. When $0<t<t_{X}$, we compare $X(t)$ with $\varphi(t)$, where $\varphi(t)$ is the solution of
\begin{align} \label{innerpro6}
\frac{d \varphi}{dt}=c_{\tilde{s}}\varphi^{1+ \frac{4}{1+2\tilde{s}}}
\end{align}
with $\varphi(0)=X(0)$, and $T_{\varphi}$ is the local existence time of $\varphi$.\\
Applying Lemma \ref{lemma:nonlinear_Gronwall} to (\ref{innerpro5}) and (\ref{innerpro6}), we have
\begin{align*}
X(t) < \varphi(t) \quad \text{for all}\ t\in \left[0, \min \left\{t_{X}, T_{X}, T_{\varphi}\right\}\right].
\end{align*}
From (\ref{innerpro6}), $\varphi(t)$ also intersects the curve $\displaystyle \frac{c_{\tilde{s}} }{(\beta t)^{\frac{2\tilde{s}+1}{2}}}$. Denoting the intersection point by $t_{\varphi}$, we have $t_{\varphi}< t_{X}<T_{X}$. To calculate $t_{\varphi}$, we use
\begin{align} \label{tphi-n}
\varphi(t_{\varphi})=\frac{c_{\tilde{s}} }{(\beta t_{\varphi})^{\frac{2\tilde{s}+1}{2}}}.
\end{align}
Solving (\ref{innerpro6}), we have
\begin{align} \label{tsol-n}
\varphi(t)=\left(\varphi(0)^{-\frac{4}{1+2\tilde{s}}}-c_{\tilde{s}} t\right)^{-\frac{1+2\tilde{s}}{4}}.
\end{align}
Therefore $\displaystyle \left(\varphi(0)^{-\frac{4}{1+2\tilde{s}}}-c_{\tilde{s}} t_{\varphi}\right)^{-\frac{1+2\tilde{s}}{4}}=\frac{c_{\tilde{s}} }{(\beta t_{\varphi})^{\frac{2\tilde{s}+1}{2}}}$, and after simplification we obtain $\displaystyle \varphi(0)^{-\frac{4}{1+2\tilde{s}}}-c_{\tilde{s}} t_{\varphi}=c_{\tilde{s}} \beta^{2} t_{\varphi}^{2}$; that is,
\begin{align} \label{equt-n}
c_{\tilde{s}} \beta^{2} t_{\varphi}^{2}+c_{\tilde{s}} t_{\varphi}=\varphi(0)^{-\frac{4}{1+2\tilde{s}}}.
\end{align}
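As an aside, the explicit solution (\ref{tsol-n}) and its reduction to (\ref{equt-n}) are routine to verify symbolically. In the SymPy sketch below (not part of the proof) the generic constant of (\ref{tsol-n}) is written out explicitly as $4c/(1+2\tilde{s})$ so that the check is exact:
\begin{verbatim}
import sympy as sp

t, c, p0, s = sp.symbols('t c phi0 s', positive=True)
n = 1 + 4/(1 + 2*s)                      # nonlinearity exponent of (innerpro6)
phi = (p0**(-4/(1 + 2*s)) - 4*c*t/(1 + 2*s))**(-(1 + 2*s)/4)
res = sp.diff(phi, t) - c*phi**n
print(sp.simplify(sp.powsimp(res, force=True)))   # expected: 0
# Equating phi(t) with c*(beta*t)**(-(2s+1)/2) and raising both sides to
# the power -4/(1+2s) gives, up to generic constants, the quadratic
#     c*beta**2*t**2 + c*t = phi0**(-4/(1+2s)),
# which is (equt-n).
\end{verbatim}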
Case (i): when $\displaystyle X({0})\geq \frac{c_{\tilde{s}} }{\beta^{\frac{2\tilde{s}+1}{2}}}$, then $\displaystyle \varphi({0})\geq \frac{c_{\tilde{s}} }{\beta^{\frac{2\tilde{s}+1}{2}}}$, and since $\varphi$ is increasing, $\displaystyle \varphi({1})> \frac{c_{\tilde{s}} }{\beta^{\frac{2\tilde{s}+1}{2}}}$. This implies $t_{\varphi}<1$, so that ${t^2_{\varphi}}<t_{\varphi}$; since $\beta<\frac{1}{2}$, (\ref{equt-n}) then gives
\begin{align} \label{equtphi-n}
\varphi(0)^{-\frac{4}{1+2\tilde{s}}}\leq c_{\tilde{s}}t_{\varphi},
\end{align}
which implies $\displaystyle t_{\varphi}\geq \frac{c_{\tilde{s}}}{\varphi(0)^{\frac{4}{1+2\tilde{s}}}}$. Therefore
\begin{align} \label{TX1}
T_{X}>t_{\varphi}\geq \frac{c_{\tilde{s}}}{\varphi(0)^{\frac{4}{1+2\tilde{s}}}}=\frac{c_{\tilde{s}}}{X(0)^{\frac{4}{1+2\tilde{s}}}}.
\end{align}
Case (ii): when $\displaystyle X({0})< \frac{c_{\tilde{s}} }{\beta^{\frac{2\tilde{s}+1}{2}}}$, then $\displaystyle \varphi({0})< \frac{c_{\tilde{s}} }{\beta^{\frac{2\tilde{s}+1}{2}}}$. If $\displaystyle \varphi({1})> \frac{c_{\tilde{s}} }{\beta^{\frac{2\tilde{s}+1}{2}}}$, this implies $t_{\varphi}<1$, and exactly as in Case (i) we have $\displaystyle T_{X}>\frac{c_{\tilde{s}}}{X(0)^{\frac{4}{1+2\tilde{s}}}}$. If $\displaystyle \varphi({1})\leq \frac{c_{\tilde{s}} }{\beta^{\frac{2\tilde{s}+1}{2}}}$, this implies $t_{\varphi}\geq1$, so that ${t^2_{\varphi}}\geq t_{\varphi}$, and (\ref{equt-n}) becomes
\begin{align} \label{equtphi-n-n}
\varphi(0)^{-\frac{4}{1+2\tilde{s}}}\leq c_{\tilde{s}}t_{\varphi}^2,
\end{align}
which implies $\displaystyle t_{\varphi}\geq \frac{c_{\tilde{s}}}{\varphi(0)^{\frac{2}{1+2\tilde{s}}}}$. Therefore
\begin{align} \label{TX1n}
T_{X}>t_{\varphi}\geq \frac{c_{\tilde{s}}}{\varphi(0)^{\frac{2}{1+2\tilde{s}}}}=\frac{c_{\tilde{s}}}{X(0)^{\frac{2}{1+2\tilde{s}}}}.
\end{align}
Therefore, in Case (ii) we have
\begin{align*}
T_{X}>\min\left\{Q, Q^{1/2}\right\},
\end{align*}
where $\displaystyle Q=\frac{c_{\tilde{s}}}{X(0)^{\frac{4}{1+2\tilde{s}}}}$.
\end{proof}

\section*{Acknowledgement}
A. Biswas and J. Hudson are partially supported by NSF grant DMS-1517027. J. Tian is partially supported by an AMS Simons Travel Grant.

\section*{References}
\section{Introduction} \label{intro} Collective behaviors, such as the emergence of system-wide coordination in nature, the appearance of consensus of opinions in social systems, and other related phenomena, are of great interest to many researchers\cite{lampl,boyd,krapivsky,young,bikhchandani}. The local majority-rule (\textit{LMR}) has often been employed to study the emergence of such behaviors. The rule is simple, being based on the principle of majority vote without much consideration at the psychological level; it dictates that the time evolution of the state of a unit (individual or agent) is determined by the state favored by the majority of its neighbors\cite{hopfield}. The neighbors of a unit can be given by geographic, cultural, social, or organizational proximity; here we use artificially constructed networks to define the neighbors of a unit as the nodes directly connected to it. Because of the locality of the \textit{LMR}, one may expect that the distribution of cliques in a system can affect the occurrence probability of collective behavior, and this work is devoted to analyzing this effect. The global topology of a network can be characterized by two quantities, the degree distribution and the clustering coefficient\cite{dorogovtsev}. The total number of connections of a node is referred to as the degree of the node, $k$, and the probability that a randomly chosen node has $k$ connections is given by the degree distribution $P\left( k\right)$; the tightness of a clique formed by a site and its directly connected neighbors can be characterized globally by the clustering coefficient of a network, $C$. The question concerning the role of network topology in dynamical cooperative behavior was discussed by Sood and Redner\cite{sood} and by Suchecki et al.\cite{suchecki} in the voter model, which may be viewed as a statistical model of the \textit{LMR}\cite{liggett}. The mean time for reaching the state of collective behavior was shown to have different scaling behaviors with respect to the number of nodes for different decay exponents $\gamma$ of scale-free (\textit{SF}) networks, distinguishing $\gamma >3$, $\gamma =3$, $2<\gamma <3$, $\gamma =2$, and $\gamma <2$\cite{sood}. Moreover, network geometry was also shown to have an important effect on the dynamics, such as the average survival time of metastable states in finite networks, the linear size scaling law of the survival time, and the size of an ordered domain\cite{suchecki}. As the \textit{LMR} is the root of the voter model, these features may be traced to the properties of the equilibrium states of the \textit{LMR}. Hence a study within the \textit{LMR} may provide more insights into this question. Different $\gamma$ values of \textit{SF} networks characterize the difference in the appearance of hub-nodes. Here, hub-nodes refer to those possessing a large degree of connections. The existence of hub-nodes may strongly affect the efficiency of reaching an equilibrium state of the \textit{LMR}. In this respect, Zhou and Lipowsky showed that there exists a categorical difference between the \textit{SF} networks with $\gamma <5/2$ and those with $\gamma >5/2$ in the scaling behavior of the relaxation time from a strongly disordered state towards an ordered state\cite{zhou}. But there are other equilibrium states associated with the \textit{LMR} that differ characteristically from the state of collective behavior.
Moreover, as a system evolves from an initial state, the distribution of cliques in the system strongly affects the corresponding trajectory, and hence affects the type of equilibrium state reached by the trajectory. One of the questions we intend to address in this work is therefore the following: starting with strongly disordered states, what role does the clustering coefficient of the system play in the type of equilibrium state reached by the system? Another question that attracts our attention is the relation between the robustness of an equilibrium state and the clustering coefficient of a system. A study in this direction may not only reveal the stability of an equilibrium state but also provide an estimate of the external strength required to break the state. The robustness of an equilibrium state can be characterized by its escape rate after introducing fluctuations to perturb the system. In fact, fluctuation is an unavoidable component of real systems. An attempt was made by Moreira et al. to include noise in the dynamics by changing the transition probability of the \textit{LMR} from $1$ to $1-\eta$, where the parameter $\eta$ characterizes the average effect of the fluctuations\cite{moreira}. The authors showed that the presence of fluctuations may increase the probability of occurrence and the efficiency of reaching collective behavior for systems with small-world character. In this work, we take a microscopic approach by proposing a stochastic \textit{LMR} in which each node-state contains a component of white noise. As the equilibrium states of the \textit{LMR} become transient in the stochastic dynamics, this proposal yields the Arrhenius equation for the escape rates. We then determine the dependence of the prefactor and the activation energy of the Arrhenius equation on the clustering coefficient. This information allows us to show explicitly the effect of the clustering coefficient on the robustness of a state. As the state of collective behavior is of great interest, we also study the mean first-passage time from a strongly disordered state to the state of collective behavior. Such a study, in addition to the robustness of the state, may provide further understanding of the role of fluctuations in the process of reaching the state of collective behavior. This paper is organized as follows. In Sec. II we define the \textit{LMR}, classify its equilibrium states, and briefly describe the generating processes for the networks used in the numerical study. Based on the \textit{LMR}, we numerically calculate the occurrence probabilities of the different classes of equilibrium states for systems starting with strongly disordered states, and the results as functions of the clustering coefficient are shown in Sec. III, where the dependence of the occurrence probability on the system size is also discussed. In Sec. IV, we first introduce the stochastic \textit{LMR}; then the analysis of the escape rates of the different classes of equilibrium states based on the stochastic dynamics is given. Moreover, the results for the mean first-passage time to the state of system-wide coordination are also presented in that section. Finally, a summary of the results and some general conclusions are given in Sec. V. \section{Deterministic Dynamics and Networks} \label{s2} We first specify the \textit{LMR} and classify the corresponding equilibrium states. Consider a network system with the distribution of edges given by an $N\times N$ adjacency matrix $A$.
Here the matrix $A$ is symmetric, with elements $a_{ij}=1$ for connected sites $i$ and $j$, and $0$ otherwise. The dynamic variable associated with a site $i$ is denoted as $x_{i}$, which takes two possible values, either $1$ or $-1$. The system evolves from an initial to a new configuration in discrete time steps according to the \textit{LMR}, whose operation can be either synchronous or asynchronous. In this work, we consider the synchronous dynamics, for which the rule can be written as
\begin{equation}
x_{i}\left( t+1\right) =\mathbf{sgn}\left( \sum_{j=1}^{N}a_{ij}x_{j}\left( t\right) \right)  \label{eq001}
\end{equation}
for $i=1,...,N$, where the \textbf{sgn} function is a standard threshold function with $\mathbf{sgn}\left( x\right) =+1$ for $x>0$ and $-1$ for $x<0$, and we set $x_{i}\left( t+1\right) =x_{i}\left( t\right)$ for $\sum_{j=1}^{N}a_{ij}x_{j}\left( t\right) =0$. The dynamics of Eq. (\ref{eq001}) has been widely studied in discrete neural networks, and the existence of equilibrium states can be shown by employing the Lyapunov energy function\cite{hopfield,wilde},
\begin{equation}
E\left( t\right) =-\sum_{i=1}^{N}x_{i}\left( t\right) \left[ \sum_{j=1}^{N}a_{ij}x_{j}\left( t-1\right) \right] .  \label{eq002}
\end{equation}
Moreover, the period of an equilibrium state is either $1$ or $2$\cite{wilde}. The number of equilibrium states indicates the capacity of a neural network. Here, however, we are interested in the occurrence of the state of collective behavior, and in the global characters of the other equilibrium states in case collective behavior cannot be reached by the system. Thus, the equilibrium states of period-$1$ are divided into two classes, $S_{0}$ and $S_{1}$. Here the class $S_{0}$ contains the states of collective behavior, for which all node-states have the same value, either $1$ or $-1$; the class $S_{1}$ consists of all trapped states. For the equilibrium states of period-$2$, the system oscillates between a pair of states, and we refer to all such pairs of equilibrium states as the class $S_{2}$. With the order parameter defined as $M_{s}=\sum_{i=1}^{N}x_{i}/N$, the two states of $S_{0}$ have the values $1$ and $-1$, respectively, and the states of $S_{1}$ and $S_{2}$ have $\left\vert M_{s}\right\vert <1$.
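For concreteness, the synchronous rule of Eq. (\ref{eq001}) and the classification of equilibria into the classes $S_{0}$, $S_{1}$, and $S_{2}$ can be implemented in a few lines; the following is a minimal sketch in Python/NumPy (the function names and the cut-off \texttt{t\_max} are our own illustrative choices, not part of the model):
\begin{verbatim}
import numpy as np

def lmr_step(A, x):
    # One synchronous update of the LMR: each node adopts the state
    # favored by the majority of its neighbors; ties keep the old state.
    field = A @ x
    x_new = np.sign(field).astype(int)
    x_new[field == 0] = x[field == 0]
    return x_new

def classify(A, x, t_max=10000):
    # Iterate from the configuration x until a period-1 or period-2
    # equilibrium is reached, and report its class.
    for _ in range(t_max):
        x1 = lmr_step(A, x)
        if np.array_equal(x1, x):                  # period 1
            return 'S0' if abs(x1.sum()) == len(x1) else 'S1'
        if np.array_equal(lmr_step(A, x1), x):     # period 2
            return 'S2'
        x = x1
    return 'undecided'
\end{verbatim}
Starting \texttt{classify} from random $\pm 1$ configurations (strongly disordered states) and counting the returned classes gives the occurrence probabilities $P_{i}$ studied below.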
Two types of networks, Watts-Strogatz (\textit{WS}) and \textit{SF} networks, are used to define the adjacency matrix $A$ in the dynamics of Eq. (\ref{eq001}). The \textit{WS} networks are made from a regular lattice for which $N$ sites are placed around a circle and each site has degree, say $k_{0}$, connecting to the right and to the left symmetrically; a probability $p$ is then assigned to rewire the edges randomly\cite{watts}. As the $p$ value increases from $0$ to $1$, the resultant network changes from a regular lattice to a random graph, with the clustering coefficient $C$ decreasing from its highest value down to $k_{0}/N$. Here the $C$ value of a network is defined as the average of the clustering coefficients associated with all sites, and the clustering coefficient of a site, say $i$, is given as
\begin{equation}
C_{i}=\frac{2y_{i}}{k_{i}\left( k_{i}-1\right) },  \label{eq002-1}
\end{equation}
where $y_{i}$ is the number of existing edges between the $k_{i}$ neighbors of the site $i$. The \textit{SF} networks are a special category of networks for which the degree distribution takes the form of a power law, $P\left( k\right) \sim k^{-\gamma }$, with decay exponent $\gamma$. A conventional way of generating a \textit{SF} network is the scheme of preferential attachment proposed by Barabasi and Albert\cite{albert1}; it yields a network with $\gamma \approx 2.9$ and a small $C$ value, $C\approx N^{-0.75}$\cite{albert2}. The \textit{SF} networks with different $\gamma$ and $C$ values employed for the numerical study in this work were generated in a systematic way by using the modified schemes of preferential attachment proposed by Holme et al.\cite{holme} and by Leary et al.\cite{leary}. For tuning the $C$ value without altering the $\gamma$ value, a step called triad-formation is added to the process of preferential attachment with an assigned probability; the larger the assigned probability of performing triad-formation, the larger the $C$ value of the resultant network\cite{holme}. Alternatively, we alter the $\gamma$ value without affecting the $C$ value by switching the uniform distribution for the random numbers used in the process of preferential attachment to the distribution of a designed probability density function\cite{leary}. If the designed probability density function further enhances the probability of connecting a new edge to a site with large degree, we obtain a network with a larger $\gamma$ value; for the opposite tendency in the designed function, the resultant network has a smaller $\gamma$ value.
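Both network ensembles can be reproduced with standard tools. The sketch below uses the NetworkX library, whose \texttt{watts\_strogatz\_graph} implements the \textit{WS} construction and whose \texttt{powerlaw\_cluster\_graph} implements preferential attachment with the triad-formation step of Holme et al.; the $\gamma$-tuning scheme of Leary et al. has no stock implementation and would have to be coded separately:
\begin{verbatim}
import networkx as nx

N, k0 = 1000, 8
G_ws = nx.watts_strogatz_graph(N, k0, p=0.1)   # WS ring, rewiring prob. p
# Preferential attachment, m edges per new node, each followed by a
# triad-formation step with probability p (tunes C at nearly fixed gamma):
G_sf = nx.powerlaw_cluster_graph(N, m=2, p=0.5)
for G in (G_ws, G_sf):
    print(nx.average_clustering(G))            # the network C value
A = nx.to_numpy_array(G_ws, dtype=int)         # adjacency matrix for the LMR
\end{verbatim}
The parameter values shown are illustrative; the sampling over $1000$ network realizations described below is obtained by repeating the construction with different random seeds.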
\section{Occurrence Probabilities of Equilibrium States} \label{s3} We first calculate numerically the occurrence probabilities of the three different classes of equilibrium states of Eq. (\ref{eq001}) in order to analyze the clustering effect. The occurrence probability is denoted as $P_{i}$ for the class $S_{i}$ of equilibrium states, with $i=0$, $1$, or $2$, for systems modeled by the \textit{WS} and the \textit{SF} networks. To minimize statistical errors in the simulation results, we generate $1000$ samples of the \textit{WS} networks for a given value of the rewiring probability, and the corresponding clustering coefficient $\overline{C}$ is defined as the average of the coefficients of all samples. Then, the $P_{i}$ value associated with a $\overline{C}$ is given by the fraction of occurrences of equilibrium states belonging to the class $S_{i}$ over $10^{6}$ trajectories. Here the trajectories all start from strongly disordered states and are equally distributed among the $1000$ samples. The results of $P_{i}$ for the \textit{WS} networks are shown as plots of $P_{i}$ vs. $\overline{C}$ in Fig. 1(a) for different $k_{0}$ values with $N=1000$, and in Fig. 1(b) for different $N$ values with $k_{0}=8$. Here the $\overline{C}$ values are in the range $0<\overline{C}<0.5$, for which the corresponding rewiring probabilities range over $0.04<p<0.5$; this is also the range of $\overline{C}$ generated for the \textit{SF} networks. As shown in the plots, systems are more likely to be led to the states of collective behavior for small $\overline{C}$ values $\left( \overline{C}\sim 0.05\right)$, to the equilibrium states of $S_{1}$ for medium $\overline{C}$ values $\left( \overline{C}\sim 0.15\right)$, and to the equilibrium states of $S_{2}$ for high $\overline{C}$ values $\left( \overline{C}\sim 0.3\right)$. The results also indicate that the $P_{0}$ value is suppressed and the $P_{2}$ value is enhanced as the node number $N$ increases, while the tendency is the opposite when the average degree $k_{0}$ increases. Analytic expressions $P_{i}\left( N,\overline{C}\right)$, which fit the data shown in Fig. 1(b) properly, are found to provide a better understanding of the $N$-dependence of the occurrence probabilities $P_{i}$. The results are
\begin{equation}
P_{0}\left( N,\overline{C}\right) =\frac{1.11-\left( 0.06\right) N^{0.16}}{1+\exp \left\{ \left[ 91.53-\left( 407.28\right) /N^{0.35}\right] \overline{C}-6.70\right\} },  \label{eq002-2}
\end{equation}
\begin{equation}
P_{2}\left( N,\overline{C}\right) =1.03-\frac{0.95+\left( 1.04\times 10^{5}/N^{2.07}\right) z\left( N,\overline{C}\right) }{1+z\left( N,\overline{C}\right) },  \label{eq002-3}
\end{equation}
and
\begin{equation}
P_{1}\left( N,\overline{C}\right) =1-P_{0}\left( N,\overline{C}\right) -P_{2}\left( N,\overline{C}\right) ,  \label{eq002-4}
\end{equation}
where $z\left( N,\overline{C}\right)$ is
\begin{equation}
z\left( N,\overline{C}\right) =\left[ \frac{\overline{C}}{\left( 0.65/N^{0.07}-0.25\right) }\right] ^{5}.  \label{eq002-5}
\end{equation}
As shown by the solid lines in Fig. 1(b), the expressions agree with the numerical data very well. Eq. (\ref{eq002-2}) reveals an important feature of $P_{0}$: the $P_{0}$ value is suppressed as $N$ or $\overline{C}$ increases. Consequently, there exists a site number $N^{\ast }\left( k_{0}\right)$ such that $P_{0}\approx 0$ for $N>N^{\ast }\left( k_{0}\right)$, where $N^{\ast }\left( k_{0}\right)$ decreases as $k_{0}$ decreases. Moreover, Eq. (\ref{eq002-3}) indicates that the $P_{2}$ value increases as $N$ increases. Thus, we may have $P_{0}\approx 0$, $P_{1}\approx 0$, and $P_{2}\approx 1$ for $N>N^{\ast }\left( k_{0}\right)$. This is demonstrated by the numerical results shown in Fig. 2, where the $P_{i}$ values as functions of $\overline{C}$ for $N=6000$ and $k_{0}=4$ are given. The statistics for the numerical study of the \textit{SF} networks is similar to the case of the \textit{WS} networks. As the modified schemes of preferential attachment are applied, the $\gamma$ value may change slightly in tuning the $C$ value\cite{holme}, and the $C$ value may alter slightly in tuning the $\gamma$ value\cite{leary}. Thus, the samples are characterized by the set-values $\left( \overline{\gamma },\overline{C}\right)$, with $1000$ members belonging to each $\left( \overline{\gamma },\overline{C}\right)$. There are three distinct $\overline{\gamma }$ values, $\overline{\gamma }=2.82$, $2.55$, and $2.29$, and the possible $\overline{C}$ values for a given $\overline{\gamma }$ value lie in the range between $0$ and $0.5$. All samples have the site number $N=5000$ and the average degree of a site $\left\langle k\right\rangle =4$. The $P_{i}$ value is the result over $10^{6}$ trajectories, equally distributed among the $1000$ samples of the \textit{SF} networks belonging to the same $\left( \overline{\gamma },\overline{C}\right)$, with each trajectory starting from a strongly disordered state. The numerical results are shown as plots of $P_{i}$ vs. $\overline{C}$ in Fig. 3 for the three different $\overline{\gamma }$ values. The results obtained from the \textit{SF} networks indicate that although the $\overline{\gamma }$ value may significantly affect the speed of convergence of the system towards an equilibrium state, it has little effect on the $P_{i}$ value, for which the $\overline{C}$ value plays the major role.
Moreover, similar to the case of the \textit{WS} networks, as shown in Fig. 3 the clustering coefficient $\overline{C}$ drives the system from the state of collective behavior at low $\overline{C}$ $\left( \overline{C}\leq 0.1\right)$ to the phase of oscillation between two states at high $\overline{C}$ $\left( 0.5>\overline{C}>0.3\right)$, while the system is trapped in a state of $S_{1}$ at medium $\overline{C}$. One may further expect that the $N$-dependence of $P_{i}$ has qualitatively the same features as that for the \textit{WS} networks. \section{Stochastic Dynamics} \label{s4} Noise is a very natural component of real systems; its physical origins can be traced to incomplete information, processing errors, or other environmental perturbations. To add a component of noise to the \textit{LMR}, we propose a stochastic version based on the assumption that the effect of noise is localized and appears as a fluctuation in the recognition of the value of a node-state by its connected neighbors. The stochastic \textit{LMR} is then given as
\begin{equation}
x_{i}\left( t+1\right) =\mathbf{sgn}\left( \sum_{j=1}^{N}a_{ij}\mathbf{sgn}\left( x_{j}\left( t\right) +\sqrt{2D}\xi _{j}\left( t\right) \right) \right) ,  \label{eq05}
\end{equation}
where $\xi _{i}\left( t\right)$ is Gaussian white noise with zero mean and $\delta$-function correlation, i.e., $\left\langle \xi _{i}\left( t\right) \right\rangle =0$ and $\left\langle \xi _{i}\left( t\right) \xi _{j}\left( s\right) \right\rangle =\delta _{i,j}\delta \left( t-s\right)$, and $D$ is the diffusion constant, which characterizes the strength of the noise. The equilibrium states of Eq. (\ref{eq001}) become transient under the stochastic dynamics of Eq. (\ref{eq05}). Consequently, the escape times for the different classes of equilibrium states can be measured; the results are denoted as $\left\langle \tau ^{\left( i\right) }\right\rangle$ with $i=0$, $1$, and $2$ for the classes $S_{0}$, $S_{1}$, and $S_{2}$, respectively, and the inverse of $\left\langle \tau ^{\left( i\right) }\right\rangle$ yields the escape rate $\left\langle \kappa _{i}\right\rangle$, where the bracket on $\tau ^{\left( i\right) }$ or $\kappa _{i}$ represents the average of the results over the samples of different networks. As the stochastic dynamics of Eq. (\ref{eq05}) is applied to the equilibrium states of Eq. (\ref{eq001}), one can expect the escape rates to obey the Arrhenius equation\cite{kampen},
\begin{equation}
\left\langle \kappa _{i}\right\rangle =A_{i}\exp \left( -\frac{\Delta G_{i}}{D}\right) ,  \label{eq06}
\end{equation}
where $A_{i}$ is the prefactor and $\Delta G_{i}$ is the activation energy. Note that, based on the fluctuation-dissipation theorem, we may identify $D=k_{B}T$, with the Boltzmann constant $k_{B}$ and the absolute temperature $T$\cite{kubo}. The factor $A_{i}$, which is equivalent to the rate constant of a chemical reaction, signifies the entropy effect, and one can expect it to depend strongly on the clustering coefficient. The value of $\Delta G_{i}$ gives the maximum potential energy required to escape from the equilibrium state. Thus, the results for $A_{i}$ and $\Delta G_{i}$ provide insights into the robustness of the equilibrium states of the different classes as the geometric structure of a system varies.
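A minimal sketch of the noisy update of Eq. (\ref{eq05}) and of an escape-time measurement for a period-$1$ equilibrium is given below (Python/NumPy; for the class $S_{2}$ the comparison must be made against the pair of oscillating configurations, which we omit for brevity):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def stochastic_lmr_step(A, x, D):
    # Every neighbor reads node j's state through the same Gaussian
    # noise term sqrt(2D)*xi_j(t) before the majority vote is taken.
    noisy = np.sign(x + np.sqrt(2.0 * D) * rng.standard_normal(len(x)))
    field = A @ noisy
    return np.where(field != 0, np.sign(field), x).astype(int)

def escape_time(A, x_eq, D, t_max=20000):
    # Number of time steps until the dynamics first leaves x_eq.
    x = x_eq.copy()
    for t in range(1, t_max + 1):
        x = stochastic_lmr_step(A, x, D)
        if not np.array_equal(x, x_eq):
            return t
    return t_max
\end{verbatim}
Averaging the inverse of the measured escape times over noise realizations and network samples yields $\left\langle \kappa _{i}\right\rangle$, whose dependence on $1/D$ can then be fitted to Eq. (\ref{eq06}).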
We perform the numerical measurements of $\left\langle \kappa _{i}\right\rangle$ on the \textit{SF} networks; the statistics for the measurements is as follows. We first single out $m_{0}$ equilibrium states belonging to the class $S_{i}$ for a set-value $\left( \overline{\gamma },\overline{C}\right)$, with $m_{0}=25000$, where the $m_{0}$ states of the class $S_{0}$ are identical, with $x_{i}=+1$ for all nodes. Then, the number of time steps required to escape from each of the $m_{0}$ states is measured with a preassigned cut-off time step $t_{\max }$; each such measurement is repeated $20$ times to realize the Gaussian distribution of the noise $\xi _{i}\left( t\right)$, and the escape time $\left\langle \tau ^{\left( i\right) }\right\rangle$ for the class $S_{i}$ is given by the average over the $20$ simulations and over the $m_{0}$ states. Our results for the \textit{SF} networks with $\left( \overline{\gamma },\overline{C}\right) =\left( 2.82,0.3026\right)$ are shown in Fig. 4 as plots of $\log \left\langle \tau ^{\left( i\right) }\right\rangle$ vs. $1/D$ for $i=0$, $1$, and $2$, and those for different $\left( \overline{\gamma },\overline{C}\right)$ values also yield straight lines in the same plots. The results indicate that the escape rates $\left\langle \kappa _{i}\right\rangle$ for the equilibrium states belonging to the class $S_{i}$ agree with Eq. (\ref{eq06}); this agreement persists as the cut-off time step $t_{\max }$ increases from $10000$ to $20000$, as shown in Fig. 4. The activation energy $\Delta G_{i}$ of Eq. (\ref{eq06}) is then determined from the slope of $\log \left\langle \tau ^{\left( i\right) }\right\rangle$ vs. $1/D$; it yields $\Delta G_{0}=0.81$ and $\Delta G_{1}=\Delta G_{2}=1.0$, independent of the $\left( \overline{\gamma },\overline{C}\right)$ value. Moreover, the prefactor $A_{i}$ can be determined from the intersection of the straight line of $\log \left\langle \tau ^{\left( i\right) }\right\rangle$ vs. $1/D$ with the vertical line $1/D=0$. We have $A_{0}=97.08$, independent of the $\left( \overline{\gamma },\overline{C}\right)$ value, while the other two prefactors, $A_{1}$ and $A_{2}$, depend on the network geometry. A $\overline{\gamma }$-dependence may occur for $A_{1}$ when $\overline{C}$ is large; for example, we have $A_{1}=10.11$, $7.20$, and $5.88$ for $\overline{C}=0.3$ and $\overline{\gamma }=2.29$, $2.55$, and $2.82$, respectively; such a dependence is not found for $A_{2}$. On the other hand, both $A_{1}$ and $A_{2}$ are very sensitive to the $\overline{C}$ value, and their $\overline{C}$-dependence is shown explicitly in Fig. 5 for $\overline{\gamma }=2.82$. Some important features of Eq. (\ref{eq06}) are revealed by the results shown in Figs. 4 and 5. Firstly, the decay rate for the state of collective behavior is universal in the sense that it is independent of the network geometry, as are the $\Delta G_{i}$ values for the equilibrium states belonging to the classes $S_{1}$ and $S_{2}$. Moreover, the activation energy for the states of $S_{1}$ and $S_{2}$ is larger than that for the states of $S_{0}$. As for the entropy effect on the decay rate, the $\overline{C}$ value plays a significant role in determining the values of $A_{1}$ and $A_{2}$. As the results of Fig. 5 indicate, the $A_{2}$ value decreases with increasing $\overline{C}$ value, and the $A_{1}$ value behaves oppositely; moreover, the equilibrium states of $S_{2}$ possess a very large prefactor, which overcomes the higher activation energy and renders these states very fragile.
As the conceptual sketch of the potential barriers of the equilibrium states belonging to the different classes shown in Fig. 6 suggests, we may then conclude that the states of $S_{2}$ are very easy to break up, while the states of $S_{1}$ are the most robust among the equilibrium states of the three classes. As the state of collective behavior is of great interest, we then study its mean first-passage time for systems on the \textit{SF} networks in a noisy environment. In the numerical calculations, we first assign a strongly disordered configuration to the system, then follow the stochastic \textit{LMR} of Eq. (\ref{eq05}) to generate a trajectory and record the time step of the first appearance of a state of $S_{0}$. Here, the recorded time step is set to $t_{\max }=10^{4}$ in the absence of states of $S_{0}$ at $t=t_{\max }$. We generate $10^{6}$ trajectories in total for a given set-value $\left( \overline{\gamma },\overline{C}\right)$, and the mean first-passage time, denoted as $\left\langle \tau _{0}\right\rangle$, is given as the average of the recorded time steps over all trajectories. As the differences in $\left\langle \tau _{0}\right\rangle$ caused by different $\gamma$ values are insignificant, we show the results as plots of $\left\langle \tau _{0}\right\rangle$ vs. $D$ for $\overline{\gamma }=2.82$ and $\overline{C}=0.0066$, $0.2016$, and $0.3026$ in Fig. 7. It is interesting to observe that the $\left\langle \tau _{0}\right\rangle$ value as a function of $D$ is non-monotonic. As the $D$ value increases, the $\left\langle \tau _{0}\right\rangle$ value first decreases, reaches a minimum flat valley in the range $0.07\lesssim D\lesssim 0.22$ with $\left\langle \tau _{0}\right\rangle <10$, then increases abruptly at $D_{\max }\simeq 0.22$, and the state of collective behavior becomes unreachable for $D>D_{\max }$. Moreover, the role of the clustering coefficient in the mean first-passage time is not very significant; as the results shown in Fig. 7 indicate, a difference between clustering coefficients is noticed by the mean first-passage time only for $D\leq 0.10$. \section{Summary and Conclusion} \label{s5} In summary, we show explicitly how the clustering coefficient of a system affects the occurrence probabilities of the different types of equilibrium states associated with the \textit{LMR}. As far as the state of system-wide coordination is concerned, increasing the clustering coefficient suppresses its probability of occurrence. On the other hand, systems with large clustering coefficients are easily led to trapped states or to oscillations between pairs of states. We also propose a stochastic version of the \textit{LMR}, for which the decay rate of an equilibrium state obeys the Arrhenius equation. This allows us to quantify the robustness of the equilibrium states through the values of the prefactor and the activation energy in the Arrhenius equation. Our results indicate that the states of period-$2$ are very fragile, while the trapped states are the most robust among the three types. For systems in noisy environments, our results obtained from the stochastic \textit{LMR} indicate that there exists a range of noise strengths for which the mean first-passage time from strongly disordered states to the states of system-wide coordination is shortest. Thus, the efficiency in reaching the state of collective behavior may be improved for systems with a certain amount of noise.
As the distribution of subgroups in a social system can be characterized by the clustering coefficient, these results may find wide application in the study of collective behaviors of social systems. \textbf{Acknowledgement}: We would like to thank Dr. Yutin Huang for fruitful discussions and for carefully reading and editing the manuscript. We also thank the National Center for High-performance Computing for providing the computing facilities. This work was partially supported by the National Science Council of the Republic of China (Taiwan) under Grants No. NSC 96-2112-M-033-006 (MCH) and No. 99-2112-M-150-002 (YPL).
\section{Introduction} Let $A = R/I$, where $R = k[x_1,\dots,x_n]$, $k$ is an algebraically closed field of characteristic zero and $R/I$ is artinian. We say that $A$ has the {\it Weak Lefschetz Property (WLP)} if, for a general linear form $L$, the homomorphism $\times L : [A]_t \rightarrow [A]_{t+1}$ has maximal rank for all $t$. When $I = \langle F_1, \dots, F_n \rangle$ is a complete intersection, we know that for a general choice of $F_1,\dots, F_n$, $R/I$ has the WLP thanks to a result of \cite{stanley}, \cite{watanabe} and \cite{RRR}. In \cite{HMNW}, it was shown that when $n=3$, {\it every} complete intersection $R/(F_1,F_2,F_3)$ has the WLP, and previously it was known for $n=2$. It has been asked and conjectured by many authors whether every complete intersection has the WLP (and also whether every complete intersection has the related Strong Lefschetz Property, which we do not consider here). The first occurrence that we could find of this conjecture is in \cite{RRR} from 1991. In four or more variables very little is known beyond the results mentioned above. In this paper we consider the case of four variables. For most of this paper we specialize to the situation where all generators have the same degree, $d$, although our last result is for arbitrary degrees. As an application, we look at the special case where $I$ is the Jacobian ideal of a smooth surface of degree $d+1$ in $\mathbb P^3$. We begin by translating the problem to a much more geometric setting. If $I = \langle F_1,F_2,F_3,F_4\rangle$, where the generators have degree $d$, we say that $I$ is a {\it complete intersection of type $(d,d,d,d)$}. Choose two {\it general} 2-dimensional subspaces of the 4-dimensional subspace spanned by these generators. These define two disjoint smooth complete intersection curves $C_1, C_2$ in $\mathbb P^3$, and we show that the Hartshorne-Rao module of $C = C_1 \cup C_2$ is exactly $A$, viewed now as an $R$-module. Most of the paper focuses on the intersection, $Z$, of $C$ with a general hyperplane $H$. In the coordinate ring $S$ of $H$ let $I_Z$ be the homogeneous ideal of $Z$ and let $B = S/I_Z$. Let $\overline B$ be a general artinian reduction of $B$. We find a Hilbert function for $Z$ that is equivalent to the possession of the WLP for $A$, and indeed the set of possible Hilbert functions of $Z$, if $A$ were to fail to have WLP, is at the heart of our approach. An important tool that we will use to analyze the Lefschetz behavior of $A$ is the beautiful Socle Lemma of Huneke and Ulrich \cite{HU}. In Section \ref{sect tools} we give the following translation of the Socle Lemma (following the approach of Huneke and Ulrich in their paper): the socle of $\overline B$ starts no later than the least degree of a form vanishing on $Z$ that does not lift to $C$. This is crucial to our work. Another important tool is the fact that the general hyperplane section of a reduced, irreducible curve (in our case $C_1$ and $C_2$ separately) has the Uniform Position Property (UPP) (\cite{harris2} and \cite{EH}). Even though $Z$ does not have UPP (it is just the union of two sets, $Z = Z_1 \cup Z_2$, of the same size such that each has UPP), we show that the Hilbert function of $Z$ is of decreasing type. Our first main result, Theorem \ref{d1=d2=d3=d4}, gives a new range where maximal rank must hold: multiplication by a general linear form is injective from degree $t-1$ to degree $t$ for all $t < \lfloor \frac{3d+1}{2} \rfloor $ (with a corresponding statement for surjectivity, thanks to duality). 
We note that when $I$ is the Jacobian ideal of a smooth surface in $\mathbb P^3$, Ilardi proved injectivity for $t \leq d$. Alzati and Re subsequently proved the same result without assuming that $I$ is a Jacobian ideal. On the other hand, the full WLP result is equivalent to injectivity for $t=2d-2$. Thus our result essentially cuts in half the range that was open. Our next results prove that WLP holds for $d = 3$ (Proposition \ref{d=3}), $d=4$ (Theorem \ref{d=4}) and $d=5$ (Corollary \ref{d=5}), all of which are new. It was already shown in \cite{MN-quadrics} that WLP holds when $d=2$, and beyond $d=5$ the number of possible cases grows too large to handle in a reasonable way. The case $d=3$ follows immediately from Theorem \ref{d1=d2=d3=d4}. The cases $d=4,5$, on the other hand, are proved using a new approach involving a careful series of links applied to the hyperplane section $Z$, which we view simultaneously as a series of links starting with $Z_1$ together with a series of links starting with $Z_2$. The links that we use are very balanced, treating $Z_1$ and $Z_2$ in exactly the same way. It follows from this that a certain Symmetry Principle must hold (see (\ref{symm princ})), and it is the key to our conclusions involving the links. In section \ref{strategy} we outline an approach to prove that every complete intersection of type $(d,d,d,d)$ has the WLP, for $d \geq 6$. It extends the arguments we used to prove the cases $d=4$ and $d=5$. There are two steps missing to prove the full result, which we highlight as Conjectures \ref{force exactly one} and \ref{get contra}. The idea is to make very careful calculations involving the minimal free resolutions and the $h$-vectors of all the sets in the series of links and show how it should be possible to reach a contradiction of the Symmetry Principle, thus showing that WLP holds. We have not been able to prove these conjectures for the general case, but we were able to get around them for the cases $d=4,5$. In subsection \ref{jacobian ideals}, we apply our results to the case of Jacobian ideals of smooth surfaces in $\mathbb P^3$. In addition to the observations made above, we observe that every smooth hypersurface in $\mathbb P^3$ of degree 3, 4, 5 or 6 has a Jacobian ideal that has the WLP, improving the known range. Finally, in subsection \ref{not equigenerated subsec}, we use completely different methods to prove an injectivity result for arbitrary complete intersections, removing the assumption that all generators have the same degree. More precisely, if the generator degrees of $I$ are $d_1,d_2,d_3,d_4$ then we set $d_1 + d_2 + d_3 + d_4 = 3 \lambda + r$, $0 \leq r \leq 3$, and prove that the multiplication by a general linear form is injective for $t< \lambda$. The restriction to the case $d_1 = \dots = d_4 = d$ is not as strong as the result in Theorem \ref{d1=d2=d3=d4}, but it is still stronger than the Alzati-Re result and in any case it omits the restriction that the ideal is equigenerated. \section{Some tools for height four complete intersections} \label{sect tools} Assume from now on that $R = k[x_1,x_2,x_3,x_4]$ where $k$ is an algebraically closed field of characteristic zero, and $I = \langle F_1,F_2,F_3,F_4 \rangle \subset R$ is a complete intersection, with $\deg F_i = d$ for $i = 1,2,3,4$ (except for Proposition \ref{d1>=d2>=d3>=d4}). If $L$ is a general linear form, it defines a hyperplane $H$. Let $S = R/\langle L \rangle \cong k[x,y,z]$ be the coordinate ring of $H$. 
If $L'$ is another general linear form and $Z$ is a zero-dimensional subscheme of $H$ then we set $T = S/\langle L' \rangle \cong k[x,y]$ and we recall that an artinian reduction of $S/I_Z$ has the same graded Betti numbers over $T$ as $S/I_Z$ has over $S$. Let $I_1 = \langle F_1,F_2 \rangle$ and $I_2 = \langle F_3,F_4 \rangle$ and let $C$ be the curve defined by the ideal $I_C = I_1 \cap I_2$. We make the following observations about $C$. \begin{lemma} \label{basic facts about C} \begin{itemize} \item[(a)] $I_C$ is saturated and $C = C_1 \cup C_2$ in $\mathbb P^3 = Proj(R)$, where $C_1$ and $C_2$ are the disjoint complete intersections defined by $I_1$ and $I_2$ respectively. \item[(b)] $I_C = I_1 \cdot I_2$. \item[(c)] The minimal free resolution of $I_C$ is obtained as the tensor product of the Koszul resolutions of $I_1$ and $I_2$. Hence in particular, the Hilbert function of $C$ is completely determined. \item[(d)] For the Hartshorne-Rao module $M(C) = \bigoplus_{t \in \mathbb Z} H^1(\mathcal I_C (t))$ we have \[ M(C) \cong A = R/\langle F_1,F_2,F_3,F_4 \rangle. \] In particular, the minimal free resolution of $M(C)$ is given by the Koszul resolution. \end{itemize} \end{lemma} \begin{proof} It is clear that $I_C$ is saturated. The curves are disjoint since $I$ is artinian. This proves (a). Part (b) follows from \cite{serre} Corollaire, page 143, since $C_1$ and $C_2$ are ACM curves in $\mathbb P^3$. Part (c) is \cite{MDP} Corollaire 7.6. For (d), from the exact sequence \[ 0 \rightarrow I_C \rightarrow I_1 \oplus I_2 \rightarrow I_1 + I_2 \rightarrow 0, \] sheafifying and taking cohomology, we obtain the Hartshorne-Rao module as claimed. Note that the sheafification of $I_1 + I_2$ is $\mathcal O_{\mathbb P^3}$ since $C_1$ and $C_2$ are disjoint, and that $I_1 + I_2 = I$. \end{proof} \begin{remark} \label{bertini} By successive use of Bertini's theorem (see for instance \cite{kleiman}) we can assume that all the $F_i$ are smooth, and that both $C_1$ and $C_2$ are smooth and irreducible. \end{remark} Let $L$ be a general linear form and let $H$ be the hyperplane defined by $L$. Let $Z$ be the zero-dimensional scheme cut out on $C$ by $H$. As a subscheme of $\mathbb P^3$, $Z$ has a homogeneous ideal that we will denote $I_Z$, and as a subscheme of $H$ it has a homogeneous ideal $I_{Z|H}$. Consider the exact sequence of sheaves \[ 0 \rightarrow \mathcal I_C(t-1) \stackrel{\times L}{\longrightarrow} \mathcal I_C (t) \rightarrow \mathcal I_{C|H}(t) \rightarrow 0 \] which yields the long exact sequence {\footnotesize \begin{equation} \label{std exact} 0 \rightarrow [I_C]_{t-1} \stackrel{\times L}{\longrightarrow} [I_C]_t \rightarrow [I_{Z|H}]_t \rightarrow [A]_{t-1} \stackrel{\times L}{\longrightarrow} [A]_t \rightarrow H^1(\mathcal I_{Z|H}(t)) \rightarrow H^2(\mathcal I_C(t-1)) \rightarrow H^2(\mathcal I_C(t)) \rightarrow 0. \end{equation}} Since the Hilbert function of $C$ is determined, the question of whether $A$ has the WLP depends completely on understanding the Hilbert function of $I_{Z|H}$. \begin{remark} We will make use of the following observation based on the above long exact sequence and Lemma \ref{basic facts about C} (b). \begin{quotation} {\it For $t < 2d$, the map $\times L : [A]_{t-1} \rightarrow [A]_{t}$ is injective if and only if $[I_{Z|H}]_t = 0$. } \end{quotation} In particular, since WLP follows from injectivity of $[A]_{2d-3} \rightarrow [A]_{2d-2}$, we have that $A$ has the WLP if and only if $[I_{Z|H}]_{2d-2} = 0$.
\end{remark} We now recall some notation and results from \cite{HU}, which we state in our setting. \begin{definition} Let $M$ be a finitely generated graded $R$-module. We set \[ a_- (M) = \min \{ i \ | \ [M]_i \neq 0 \}. \] \end{definition} \begin{lemma}[\cite{HU}, Socle Lemma] \label{socle lemma} Let $M$ be a nonzero finitely generated graded $R$-module. Let $L \in [R]_1$ be a general linear form and let \[ 0 \rightarrow K \rightarrow M(-1) \stackrel{\times L}{\longrightarrow} M \rightarrow D \rightarrow 0 \] be exact. If $K \neq 0$ then $a_- (K) > a_- (\Soc (D))$. \end{lemma} \begin{remark} \label{soc of h1} Assume the following: $S = k[x,y,z]$, $Z$ is a zero-dimensional subscheme of $\mathbb P^2 = H$ with homogeneous ideal $I_Z \subset S$, $B = S/I_Z$, $\bar B$ is an artinian reduction of $B$ by a linear form, and \[ 0 \rightarrow \bigoplus_{i=1}^{b_2} S(-n_{2,i}) \rightarrow \bigoplus_{i=1}^{b_1} S(-n_{1,i}) \rightarrow S \rightarrow B \rightarrow 0 \] is the minimal free resolution. In \cite{HU} it is also pointed out that then \[ \Soc (H^1_\ast(\mathcal I_Z)) = \bigoplus_{i=1}^{b_2} k(-n_{2,i} +3) = \Soc (\bar B)(1). \] \end{remark} Still following the work of \cite{HU}, let $M = A = R/I$ and let $C$ be as above. From (\ref{std exact}) we see that $\Soc (D) \subset \Soc (H^1_\ast(\mathcal I_{Z|H}))$. We thus obtain \[ a_- (K) > a_- (\Soc (D)) \geq a_- (\Soc (H^1_\ast (\mathcal I_{Z|H}))) \geq a_- (\Soc (\bar B)) -1; \] that is, \begin{equation} \label{hu ineq} a_-(K) \geq a_- (\Soc (\bar B)). \end{equation} \begin{remark} \label{trans socle lemma} Notice that $K$ represents the forms in $I_{Z|H}$ that do not lift to $I_C$. Thus one way of phrasing the result of the Socle Lemma, which we will use, is that \begin{quotation} {\it the socle of $\bar B$ starts no later than the least degree of a form vanishing on $Z$ that does not lift to $C$.} \end{quotation} \noindent Since the latter degree can sometimes be read from the Hilbert function of $\bar B$, this can be used to force socle elements in $\bar B$. \end{remark} For a finite (reduced) set of points $Z$ in projective space $\mathbb P^n$, the first difference of the Hilbert function is a finite sequence of positive integers, also known as the {\em $h$-vector} of $Z$. When $Z$ is the general hyperplane section of a reduced, irreducible curve $C$, it was shown by Harris (cf. \cite{harris2}, \cite{EH}) that $Z$ has the so-called {\em uniform position property} (UPP); \label{UPP def} that is, any two subsets of $Z$ of the same cardinality have the same Hilbert function. If the irreducible curve $C$ lies in $\mathbb P^3$, we may view its general hyperplane section $Z$ as lying in a plane $\mathbb P^2$. Harris notes that in this case UPP implies that the $h$-vector of $Z$ is of {\em decreasing type}, meaning that once the values experience a strict decrease, they are strictly decreasing until they reach zero. The following useful result of Davis \cite{davis} should be viewed as an extension of this result. Recall that the \emph{regularity}, or \emph{Castelnuovo-Mumford regularity}, of $Z$ agrees with the top degree of the $h$-vector of $Z$. \begin{theorem}[Davis \cite{davis}] \label{davis thm} Let $\{ h_i \ | \ 0 \leq i \leq k \}$ be the $h$-vector of $Z$. Suppose that $h_{t-1} = h_{t} > 0$ for some $t \leq k$. Then \begin{enumerate} \item The elements of the homogeneous components $[I_Z]_{t-1}$ and $[I_Z]_t$ all have a common factor, $F$, of degree equal to $h_t$, which thus is a common factor for all components in degree $< t-1$ as well. \item $F$ is reduced. \item Let $Z_1$ be the subset of $Z$ consisting of all the points lying on the curve $F$. Then $Z_1$ has $h$-vector $\{ g_i \ | \ 0 \leq i \leq k \}$ where \[ g_i = \min \{ h_i, h_t \}. \] In particular, the regularity of $Z_1$ is the same as that of $Z$. \item Let $Z_2$ be the subset of $Z$ consisting of all points of $Z$ that do not lie on $F$. Then the $h$-vector of $Z_2$ is $\{ f_i \ | \ 0 \leq i \leq m \}$ where \[ \begin{array}{rcl} m & = & (t-2) - h_t \\ f_i & = & h_{h_t +i} - g_{h_t+i} \end{array} \] \end{enumerate} \end{theorem} \begin{example} Suppose that $Z$ has $h$-vector $(1,2,3,4,5,6,7,8,5,3,3,2,1)$. Then $t = 10$, $h_t = 3$, and we compute the $h$-vectors of $Z_1$ and $Z_2$ as follows (note that the $h$-vector for $Z_2$ displayed below is shifted by 3). \[ \begin{array}{c|ccccccccccccc} Z & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 5 & 3 & 3 & 2 & 1 \\ Z_1 & 1 & 2 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 2 & 1 \\ \hline Z_2 & & & & 1 & 2 & 3 & 4 & 5 & 2 \end{array} \] \noindent In particular, $Z$ contains a subset, $Z_1$, of $33$ points on a reduced cubic curve and a subset, $Z_2$, of $17$ points that do not lie on the cubic. \end{example}
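The bookkeeping in parts (3) and (4) of Theorem \ref{davis thm} is easy to mechanize. The following short Python sketch (the helper name is our own) reproduces the splitting in the example above:
\begin{verbatim}
def davis_split(h, t):
    # Split the h-vector h (indexed from 0) at a flat spot h[t-1] == h[t] > 0,
    # following Theorem (Davis): return the h-vectors of Z1 and Z2.
    assert h[t - 1] == h[t] > 0
    c = h[t]                                 # c = h_t = deg F
    g = [min(hi, c) for hi in h]             # h-vector of Z1
    m = (t - 2) - c                          # top degree of the h-vector of Z2
    f = [h[c + i] - g[c + i] for i in range(m + 1)]
    return g, f

h = [1, 2, 3, 4, 5, 6, 7, 8, 5, 3, 3, 2, 1]  # the example above, with t = 10
g, f = davis_split(h, 10)
print(g)  # [1, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 1]
print(f)  # [1, 2, 3, 4, 5, 2]  (to be placed with a shift of deg F = 3)
\end{verbatim}
In particular $\sum g_i = 33$ and $\sum f_i = 17$, matching the point counts above.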
\item $F$ is reduced. \item Let $Z_1$ be the subset of $Z$ consisting of all the points lying on the curve $F$. Then $Z_1$ has $h$-vector $\{ g_i \ | \ 0 \leq i \leq k \}$ where \[ g_i = \min \{ h_i, h_t \}. \] In particular, the regularity of $Z_1$ is the same as that of $Z$. \item Let $Z_2$ be the subset of $Z$ consisting of all points of $Z$ that do not lie on $F$. Then the $h$-vector of $Z_2$ is $\{ f_i \ | \ 0 \leq i \leq m \}$ where \[ \begin{array}{rcl} m & = & (t-2) - h_t \\ f_i & = & h_{h_t +i} - g_{h_t+i} \end{array} \] \end{enumerate} \end{theorem} \begin{example} Suppose that $Z$ has $h$-vector $(1,2,3,4,5,6,7,8,5,3,3,2,1)$. Then $t = 10$, $h_t = 3$, and we compute the $h$-vectors of $Z_1$ and $Z_2$ as follows (note that the $h$-vector for $Z_2$ displayed below is shifted by 3). \[ \begin{array}{c|cccccccccccccccccc} Z & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 5 & 3 & 3 & 2 & 1 \\ Z_1 & 1 & 2 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 2 & 1 \\ \hline Z_2 & & & & 1 & 2 & 3 & 4 & 5 & 2 \end{array} \] \noindent In particular, $Z$ contains a subset, $Z_1$, of $33$ points on a reduced cubic curve and a subset, $Z_2$, of $17$ points that do not lie on the cubic. \end{example} \section{Measuring failure of WLP} \label{measuring failure} \begin{notation} \label{def CI(d,d,d,d)} Let $d \geq 2$ be an integer. We denote by $CI(d,d,d,d)$ the space of ideals in $R = k[x_1,x_2,x_3,x_4]$ generated by a regular sequence of four forms of degree $d$. We view $CI(d,d,d,d)$ as a dense open subset of the Grassmannian $Gr(4,N)$, where $N = \binom{d+3}{3}$. \end{notation} Let $I \in CI(d,d,d,d)$. Let $h_{R/I}$ be the Hilbert function of $R/I$. We note that the socle degree of $R/I$ (the last degree in which $h_{R/I}$ is non-zero) is $4d-4$, that $h_{R/I}$ is symmetric, and that the maximum value of $h_{R/I}$ occurs exactly in degree $2d-2$. Also, $R/I$ is self-dual (after a twist). \begin{remark} We will use Hilbert functions to measure the failure of $R/I$ to satisfy the WLP. Note first that all $I \in CI(d,d,d,d)$ give rise to algebras with the same Hilbert function, since their Betti diagrams come from the Koszul sequence and so are identical. Furthermore, since $4d-4$ is even and all the generators have the same degree $d$, the Hilbert function in degree $2d-2$ is strictly greater than the Hilbert function in any other degree. \end{remark} \begin{remark} \label{duality} Since $R/I$ is, in particular, a Gorenstein algebra, it is self-dual as a graded module, and as noted above its last non-zero component is in degree $4d-4$. Hence for any $L \in [R]_1$ and any $i \geq 0$, by duality the rank of the homomorphism $\times L : [R/I]_{2d-3-i} \rightarrow [R/I]_{2d-2-i}$ is the same as the rank of the corresponding homomorphism from degree $2d-2+i$ to degree $2d-1+i$. \end{remark} The next lemma shows that in order to check whether $R/I$ has the WLP, it is enough to check surjectivity in just one degree (and injectivity for the first half follows automatically). \begin{lemma} \label{one spot} $R/I$ has the WLP if and only if $\times L : [R/I]_{2d-2} \rightarrow [R/I]_{2d-1}$ is surjective. \end{lemma} \begin{proof} Since the peak of the Hilbert function occurs in degree $2d-2$, one direction is trivial. The reverse implication follows from the exact sequence \[ \rightarrow [R/I]_{t-1} \stackrel{\times L}{\longrightarrow} [R/I]_t \rightarrow [R/(I,L)]_t \rightarrow 0 \] and the fact that once the cokernel is zero in one degree, it is zero in all subsequent degrees. 
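Indeed, $R/(I,L)$ is a standard graded algebra, so
\[
[R/(I,L)]_{t+1} = [R]_1 \cdot [R/(I,L)]_t,
\]
and the vanishing of any one graded component forces the vanishing of all later ones.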
\end{proof} Next we show that if $R/I$ fails to be surjective by the smallest amount possible in the degree mentioned in Lemma \ref{one spot} then it must actually be surjective in all subsequent degrees (and by duality, injectivity only fails in one degree in the first half, and it is by the smallest possible amount). \begin{lemma} \label{others have max rk} If $\times L : [R/I]_{2d-2} \rightarrow [R/I]_{2d-1}$ has a one-dimensional cokernel then in all subsequent degrees, $\times L$ is surjective. Consequently, failure of injectivity in the first half is also by the smallest amount possible. \end{lemma} \begin{proof} Looking at the Hilbert function of the cokernel of $\times L : R/I (-1) \rightarrow R/I$, namely $A' = R/(I,L)$, we have assumed that $h_{A'}(2d-1) = 1$. Then Macaulay's theorem gives that $h_{A'}(t) \leq 1$ for all $t \geq 2d$. If $h_{A'}(2d) = 1$ then this is maximal growth from degree $2d-1$ to degree $2d$, so by the Gotzmann Persistence Theorem \cite{Go} (note that $(I,L)$ is generated in degrees $\leq d$) we get $h_{A'}(t) = 1$ for all $t \geq 2d-1$. But $A'$ is artinian since $I$ is, a contradiction. The second part follows by duality. \end{proof} This motivates the following definition. \begin{definition} Let $I \in CI(d,d,d,d)$ and let $L$ be a general linear form. \begin{itemize} \item We say that $R/I$ {\it fails WLP by one} if $\times L : [R/I]_{2d-2} \rightarrow [R/I]_{2d-1}$ has a one-dimensional cokernel. \item We say that $R/I$ {\it fails WLP by more than one} if $\times L : [R/I]_{2d-2} \rightarrow [R/I]_{2d-1}$ has a cokernel that is more than one-dimensional. \end{itemize} \end{definition} \begin{remark} These properties can be read immediately from the Hilbert function of $R/(I,L)$: \begin{itemize} \item If $h_{R/(I,L)} (2d-1) = 0$ then $R/I$ has WLP. \item If $h_{R/(I,L)} (2d-1) = 1$ then $R/I$ fails WLP by one. \item If $h_{R/(I,L)} (2d-1) \geq 2$ then $R/I$ fails by more than one. \end{itemize} \end{remark} We do not yet know that for all $I \in CI(d,d,d,d)$, $R/I$ has the WLP. Nevertheless, for fixed $I$ there is an open subset of linear forms for which the cokernel of $\times L$ has the smallest Hilbert function in degree $2d-1$ among all $L \in (\mathbb P^3)^*$. \begin{lemma} Fix $I \in CI(d,d,d,d)$. Let \[ m = \min_{L \in [R]_1} \{ h_{R/(I,L)}(2d-1) \}. \] (Note $m=0$ if and only if $R/I$ has the WLP.) Let \[ U_I = \{ L \in (\mathbb P^3)^* \ | \ h_{R/(I,L)} (2d-1) = m \} . \] Then $U_I$ is open in $(\mathbb P^3)^*$. \end{lemma} \begin{proof} Fix a basis for $[R/I]_{2d-2}$ and one for $[R/I]_{2d-1}$, and say the two dimensions are $s_1$ and $s_2$. Then for a linear form $L = a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_4$, the multiplication \[ \times L : [R/I]_{2d-2} \rightarrow [R/I]_{2d-1} \] can be represented by an $s_2 \times s_1$ matrix $M$ of linear forms in the dual variables $a_1, a_2, a_3, a_4$. The maximal minors of $M$ may or may not all be zero. Let $s$ be the largest integer for which there is an $s \times s$ minor of $M$ that is not identically zero. The vanishing locus of the ideal generated by the $s \times s$ minors gives a closed subscheme of $(\mathbb P^3)^*$, and the open complement represents the linear forms giving multiplication of maximum possible rank, i.e. $U_I$. \end{proof} \begin{remark} Given $I \in CI(d,d,d,d)$ and $L \in [R]_1$, there are two reasons why $\times L : [R/I]_{2d-2} \rightarrow [R/I]_{2d-1}$ can fail to have maximal rank. One is that $R/I$ might fail to have the WLP. The other is that $R/I$ does have the WLP but $L$ is not general enough.
\end{remark} \begin{definition} We will say that $L$ is {\it general for $I$} (or for $R/I$) if $L \in U_I$. \end{definition} \section{Computations on the hyperplane section of $C$} \label{curve section} We maintain the notation from Section \ref{sect tools}, and we first note some useful facts: \begin{itemize} \item $\dim [I_C]_t = 0$ for $t \leq 2d-1$. \item $\dim [I_C]_{2d} = 4$. \item $h^2 (\mathcal I_C (t)) = 0 $ for $t \geq 2d-3$ (from the Koszul resolution for $I_{C_i}$ and the fact that \[ H^2(\mathcal I_C(t)) \cong H^1(\mathcal O_C(t)) \cong H^1(\mathcal O_{C_1}(t)) \oplus H^1(\mathcal O_{C_2}(t)) \cong H^2(\mathcal I_{C_1}(t)) \oplus H^2(\mathcal I_{C_2}(t)) \] since $C$ is a disjoint union). \end{itemize} \begin{notation} Since $C$ is a curve in $\mathbb P^3$, and a hyperplane $H$ is a plane, the ideal of the hyperplane section $I_{Z|H}$ can be viewed as the homogeneous ideal of a finite set of points in $\mathbb P^2$. As already mentioned, we denote the coordinate ring of $H$ by $S \cong k[x,y,z]$, and since there is no chance of confusion, we will denote the ideal of $Z$ as $I_Z$ instead of $I_{Z|H}$. In the work so far we used the notation $\Delta h_B$ to be consistent with \cite{HU}, but now it is more convenient to refer directly to the set of points. In particular, we will write $\Delta h_Z$ instead of $\Delta h_B$, since $B$ is the coordinate ring of $Z$. \end{notation} We first describe the Hilbert function of $B = S/I_{Z}$ under the assumption that $A = R/\langle F_1,F_2,F_3,F_4\rangle$ has the WLP; we shall call this the {\em expected Hilbert function} for~$B$. \begin{lemma} \label{exp hf} The algebra $A$ has the WLP if and only if the (expected) $h$-vector of $B = S/I_{Z}$, which is the first difference of the expected Hilbert function of $B$, is generic, namely it is \[ \begin{array}{c|cccccccccccccccc} \text{degree} & 0 & 1 & 2 & \dots & 2d-3 & 2d-2 & 2d-1 & 2d \\ \hline \Delta h_Z & 1 & 2 & 3 & \dots & 2d-2 & 2d-1 & d & 0. \end{array} \] \end{lemma} \begin{proof} Notice that the sum of the entries of the claimed $h$-vector is $2d^2 = \deg C$, as required. Observe that the socle degree of $A$ is $4d-4$, so $h_A$ is strictly increasing until degree $\frac{4d-4}{2} = 2d-2$, and then it is strictly decreasing. Recall also that $I_C = I_1 \cdot I_2$. Now consider the long exact sequence (\ref{std exact}) and set $t = 2d-2$. We know that $A$ has the WLP if and only if $\times L : [A]_{2d-3} \rightarrow [A]_{2d-2}$ is injective. Since $[I_C]_{2d-2} = 0$, we get that $A$ has the WLP if and only if $[I_Z]_{2d-2} = 0$. Furthermore, from the Koszul resolution we see from an easy computation that \[ \dim [A]_{2d-2} - \dim [A]_{2d-1} = d. \] Hence (by duality and looking at surjectivity) $A$ has the WLP if and only if $\dim \ker (\times L) = d$ from degree $2d-2$ to degree $2d-1$. Then again from (\ref{std exact}) we get that $A$ has the WLP if and only if $\dim [I_Z]_{2d-1} = d$, which (after a trivial calculation) completes the proof. \end{proof} Whether or not $A$ has the WLP, the first difference of the Hilbert function of $Z$ has a nice ``see-saw'' behavior, in that if $\Delta h_Z$ falls below the predicted value by a fixed amount in a given degree before degree $2d-1$, then it is above the predicted value (0) by the same amount in the corresponding degree after $2d-1$. In particular, the value in degree $2d-1$ is $d$ regardless of the extent to which WLP fails to hold. \begin{lemma} \label{seesaw} Whether or not $A$ has the WLP, suppose that $\Delta h_Z (2d-1-m) = 2d-m-c$.
Then $\Delta h_Z(2d-1+m) = c$. That is, for $0 \leq m \leq 2d-1$, we have \[ \Delta h_Z (2d-1-m) + \Delta h_Z(2d-1+m) = 2d-m. \] In particular, $\Delta h_Z(2d-1) = d$. \end{lemma} \begin{proof} We have $I_C = I_{C_1} \cap I_{C_2} = \langle F_1, F_2 \rangle \cap \langle F_3, F_4 \rangle$. Since $\langle F_1 F_3, F_2 F_4 \rangle $ is a regular sequence, it links $C$ to the curve $D$ with $I_D = \langle F_1, F_4 \rangle \cap \langle F_2, F_3 \rangle$, which is again a union of two complete intersections of the same degree, with the same Hartshorne-Rao module $M(D) = R/\langle F_1, F_2, F_3, F_4 \rangle $ (note that this module is self-dual up to twist). For a general hyperplane $H$, the set of points $Z$ cut out on $C$ is linked on $H$ to the set of points $Y$ cut out on $D$ by a complete intersection of type $(2d, 2d)$, and clearly $Z$ and $Y$ have the same Hilbert function since $C$ and $D$ have the same Hilbert function and the same Hartshorne-Rao module. The assertion of the lemma then comes immediately from the formula for the behavior of the Hilbert function under linkage (see \cite{DGO}, Theorem 3). \end{proof} \begin{lemma} \label{hf for fail by one} The complete intersection $A$ fails WLP by one if and only if the first difference of the Hilbert function of $k[x,y,z]/I_Z$ is \[ \begin{array}{c|cccccccccccccccc} \text{degree} & 0 & 1 & 2 & \dots & 2d-3 & 2d-2 & 2d-1 & 2d \\ \hline \Delta h_Z & 1 & 2 & 3 & \dots & 2d-2 & 2d-2 & d & 1 \end{array} \] \end{lemma} \begin{proof} It follows immediately from the arguments in Lemma \ref{others have max rk}, Lemma \ref{exp hf}, together with the exact sequence (\ref{std exact}). \end{proof} Following Lemma \ref{socle lemma}, let $K$ be the kernel of the multiplication $R/I(-1) \stackrel{\times L}{\longrightarrow} R/I$. Recall that for a graded module $M$ we defined $a_- (M) $ to be the degree of the first non-zero component of $M$. \begin{corollary} \label{a-(K)} For $t < 2d$ we have \[ \dim [K]_t - \dim [K]_{t-1} = t+1 - \Delta h_Z (t). \] In particular, $\displaystyle a_{-} (K) = \min \{ t \ | \ \Delta h_Z (t) < t+1 \}$. \end{corollary} \begin{proof} For any fixed $t$, consider the exact sequence {\footnotesize \[ 0 \rightarrow [I_C]_{t-1} \stackrel{\times L}{\longrightarrow} [I_C]_t \rightarrow [I_{Z}]_t \rightarrow [A]_{t-1} \stackrel{\times L}{\longrightarrow} [A]_t \rightarrow H^1(\mathcal I_{Z}(t)) \rightarrow H^2(\mathcal I_C(t-1)) \rightarrow H^2(\mathcal I_C(t)) \rightarrow 0. \]} \noindent We know that $[I_C]_t = 0$ for $t < 2d$. On the other hand, Lemma \ref{seesaw} guarantees that $\Delta h_Z(2d-1) < 2d$, so in the range $0 \leq t \leq 2d-1$ we have $\dim [K]_t = \dim [I_{Z}]_t$. Hence in this range \[ \dim [K]_t - \dim [K]_{t-1} = \dim [I_{Z}]_t - \dim [I_{Z}]_{t-1} = t+1-\Delta h_Z(t) \] as claimed. \end{proof} \begin{example} \label{1st ex} Suppose $d=6$. Then the Hilbert function of $R/I = M(C)$ is given by the following table: \[ \arraycolsep=4pt \begin{array}{rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr} 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \hline 1 & 4 & 10 & 20 & 35 & 56 & 80 & 104 & 125 & 140 & 146 & 140 & 125 & 104 & 80 & 56 & 35 & 20 & 10 & 4 & 1 \end{array} \] For a general choice of $I$, we know from Lemma \ref{exp hf} that $B = k[x,y,z]/I_{Z}$ has $h$-vector (i.e. 
first difference of the Hilbert function) equal to \begin{equation} \begin{array}{c|rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline \Delta h_Z & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 6 & 0 \end{array} \end{equation} \noindent Suppose that the multiplication on $A$ from degree 7 to degree 8 fails to be injective. Then the maps from degree 8 to degree 9 and from degree 9 to degree 10 also fail to be injective, and the kernels are isomorphic to the components of $I_{Z}$ in degrees 8, 9 and 10 respectively. The preceding lemmas allow the following $h$-vector for $k[x,y,z]/I_{Z}$: \begin{equation} \label{bad hf} \begin{array}{c|rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\ \hline \Delta h_Z& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 7 & 7 & 6 & 6 & 5 & 3 & 2 \end{array} \end{equation} \noindent but not the following one: \[ \begin{array}{c|rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 &10 & 11 & 12 & 13 & 14 \\ \hline \Delta h_Z & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 7 & 7 & 6 & 6 & 5 & 4 & 1 \end{array} \] Now to illustrate the method used below in this simple example, suppose that (\ref{bad hf}) were to occur. By Davis' theorem (Theorem \ref{davis thm}), $Z$ contains a subset of 71 points on a curve $F$ of degree 7, and one point, $P$, not lying on $F$. But $C_1$ and $C_2$ are smooth curves of degree $36$, whose ideals were chosen as general 2-dimensional subspaces of the 4-dimensional space of sextics spanned by $F_1,F_2,F_3,F_4$. Let $Z_1$ and $Z_2$ be the hyperplane sections of $C_1$ and $C_2$, respectively, with the general plane $H$. Both $Z_1$ and $Z_2$ have the uniform position property (UPP). By symmetry (since $I_1$ and $I_2$ are chosen generally, so are indistinguishable), there cannot be one distinguished point of this sort, since $P$ would have to lie on either $Z_1$ or $Z_2$. Furthermore, to illustrate another of our tools, any curve of degree 7 containing all but one point of $Z_1$ (resp.\ $Z_2$) must contain all of $Z_1$ (resp.\ $Z_2$) by the Cayley-Bacharach property of complete intersections. \end{example} \begin{lemma} \label{rule out} An $h$-vector of the form \[ \begin{array}{c|ccccccccccccccccccccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & \dots & t-1 & t & t+1 & \dots \\ \hline \Delta h_Z & 1 & 2 & 3 & 4 & \dots & t & t & a & \dots \end{array} \] \noindent does not occur for the coordinate ring, $B$, of a general hyperplane section $Z$ of $C$, for $a=t$ or $t-1$. \end{lemma} \begin{proof} We maintain the notation of Lemma \ref{socle lemma}, Remark \ref{soc of h1} and Remark \ref{trans socle lemma}. We see from the $h$-vector that the kernel $K$ begins in degree $t$ (Corollary \ref{a-(K)}). We thus know from (\ref{hu ineq}) (or from Remark \ref{trans socle lemma}) that $\bar B$ has socle in degree $t-1$ or $t$. Since $\bar B$ coincides with a hypersurface ring in degrees $\leq t$, it must be $t$. Suppose that $a=t$. Then in fact $\bar B$ coincides with a hypersurface ring in degrees $\leq t+1$, so we have a contradiction with the socle. Suppose that $a=t-1$. Then $I_{Z}$ has one minimal generator $F$ in degree $t$ and one minimal generator $G$ in degree $t+1$. Call $\bar F$ and $\bar G$ the corresponding elements in the artinian reduction. We first claim that these two generators have a syzygy of the form $QF + LG = 0$ with $\deg Q = 2$ and $\deg L = 1$. 
Indeed, since there is a socle element in degree $t$, the minimal free resolution of $I_{Z}$ has a term $S(-t-2)$, which represents syzygies of all generators of degrees $\leq t+1$, of which there are only $F$ and $G$. Thus $F$ and $G$ have a common factor of degree $t-1=a$. By Davis' theorem, $Z$ has a subset $Z'$ with $h$-vector \[ \begin{array}{c|ccccccccccccccccccccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & \dots & t-2 & t-1 &t & t+1 & \dots \\ \hline \Delta h_{Z'} & 1 & 2 & 3 & 4 & \dots & t-1 & t-1 & t-1 & t-1 & \dots \end{array} \] \noindent where the rest of the $h$-vector agrees with that of $B$. Thus there are two points of $Z$ not lying on the curve defined by this common factor. As in Example \ref{1st ex}, by symmetry, one point comes from $Z_1$ and one comes from $Z_2$. By assumption and Lemma \ref{seesaw}, we must have $t+1 < 2d-1$, so $t -1 < 2d-3$. But then by the Cayley-Bacharach theorem, any curve of degree $t-1$ containing all but one of the points of the complete intersection $Z_1$ contains all of $Z_1$, giving a contradiction. \end{proof} Even though it is not true that $Z$ has the UPP (see page \pageref{UPP def}), it is still true that the Hilbert function of $Z$ has the decreasing type property: \begin{proposition} \label{decreasing type} The $h$-vector of the general hyperplane section of $C$ is of decreasing type. \end{proposition} \begin{proof} Suppose that the $h$-vector is not of decreasing type. By definition, this means that for some $s$ and $t$ we have \[ 0 < s = \Delta h_Z(t-1) = \Delta h_Z(t) < t. \] From Lemma \ref{seesaw}, a calculation gives that we must have $t \leq 2d-1$. Indeed, recall that $\Delta h_Z$ is itself an $O$-sequence, and if $t \geq 2d$ then Lemma \ref{seesaw} forces a growth of the Hilbert function $\Delta h_Z$ in degree $\leq 2d-2$ that exceeds Macaulay's bound. Hence $s \geq d$, again by Lemma \ref{seesaw}. By Theorem \ref{davis thm}, $Z$ contains a subset $Z'$ lying on a curve $D$ of degree $s$ and having $h$-vector \[ \begin{array}{c|ccccccccccccccccccccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & \dots & s-2 & s-1 & s & \dots & t-1 &t & t+1 & \dots \\ \hline \Delta h_{Z'} & 1 & 2 & 3 & 4 & \dots & s-1 & s & s & \dots & s & s & h_{t+1} & \dots \end{array} \] We will first bound above the number of points of $Z$ that lie off $D$, thus bounding below the number of points of $Z'$. By the assumption that the $h$-vector is not of decreasing type, the number of points off $D$ is not zero. At least half of these must be points of either $Z_1$ or $Z_2$ (and in fact by symmetry it must be exactly half). Without loss of generality, say it is $Z_1$. Since $C_1$ is a smooth irreducible curve, $Z_1$ satisfies the UPP. We will use this fact to get a contradiction. We will use the fact that the $h$-vector of $Z_1$ is \[ \begin{array}{c|ccccccccccccccccccccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & \dots & d-2 & d-1 & d & \dots & 2d-3 & 2d-2 & \dots \\ \hline \Delta h_{Z_1} & 1 & 2 & 3 & 4 & \dots & d-1 & d & d-1 & \dots & 2 & 1 \end{array} \] \bigskip \noindent \underline{Case 1}: If $t = 2d-1$ then $s=d$ by Lemma \ref{seesaw}. Then the number of points of $Z$ not lying on $D$ is at most \[ 1 + 2 + \cdots + (d-2) = \binom{d-1}{2} \] so the number of points of $Z$ on $D$ is at least $2d^2 - \binom{d-1}{2}$. Hence $Z_1$ contains at least $d^2 - \frac{1}{2} \binom{d-1}{2}$ points on $D$.
On the other hand, the sum of the entries of the $h$-vector of $Z_1$ through degree $s = d = \deg D$ is \[ 1 + 2 + 3 + \dots + d + (d-1) = \binom{d+1}{2} +(d-1) \leq d^2 - \frac{1}{2} \binom{d-1}{2} \] as long as $d \geq 2$. Since $Z_1$ has the UPP, this means that any curve of degree $d$ containing at least $d^2 - \frac{1}{2} \binom{d-1}{2}$ points of $Z_1$ must contain all of $Z_1$. This contradicts the fact that some points of $Z$, hence of $Z_1$, do not lie on $D$. \bigskip \noindent \underline{Case 2}: If $t \leq 2d-2$ then the number of points of $Z$ not lying on $D$ is at most \[ 1+2+3 + \dots + (t-1-s) = \binom{t-s}{2} \] (since the value of the $h$-vector of $Z$ is greater than $s$ at most up to degree $t-2$, and the value there is at most $t-1$). Hence the number of points of $Z$ that do lie on $D$ is at least $2d^2 - \binom{t-s}{2}$, so $Z_1$ contains at least $ d^2 - \frac{1}{2} \binom{t-s}{2}$ points on $D$. Note that we have $s \geq d$, and in fact by Case 1 we can assume $s > d$. As before, the sum of the entries of the $h$-vector of $Z_1$ through degree $s$ is \[ 1+2+3+ \dots + (d-1) + d + (d-1) + (d-2) + \dots + (2d-1-s) = \binom{d+1}{2} + \binom{d}{2} - \binom{2d-1-s}{2}. \] We claim that \begin{equation} \label{claim} d^2 - \frac{1}{2} \binom{t-s}{2} \geq \binom{d+1}{2} + \binom{d}{2} - \binom{2d-1-s}{2}. \end{equation} As in case 1, UPP for $Z_1$ will then mean that any curve of degree $s$ containing at least $d^2 - \frac{1}{2} \binom{t-s}{2}$ points of $Z_1$ contains all of $Z_1$, which gives the desired contradiction. But (\ref{claim}) is equivalent (using $\binom{d+1}{2} + \binom{d}{2} = d^2$) to \[ (2d-1-s)(2d-2-s) \geq \frac{(t-s)(t-s-1)}{2} \] which is clearly true, since $t \leq 2d-2$ gives $t-s \leq 2d-2-s$. \end{proof} Next we give a result for any $d > 1$, for any complete intersection in four variables (not just those coming as Jacobian ideals), which is less than full WLP but improves the range where we know maximal rank of the multiplication by a general linear form to hold. \begin{theorem} \label{d1=d2=d3=d4} Let $A = R/I = R/\langle F_1,F_2,F_3,F_4 \rangle$ where $I$ is a complete intersection and $\deg F_i = d$ for all $i$. Let $L$ be a general linear form. Then the multiplication maps $\times L \colon [A]_{t-1}\rightarrow[A]_t$ are injective for $t< \lfloor\frac{3d+1}{2}\rfloor$. \end{theorem} \begin{proof} We have to determine the $h$-vector for $Z$ allowing the smallest possible initial degree for $I_Z$, given the constraints in our lemmas. Suppose first that $d$ is even. Then the smallest possible initial degree comes when the $h$-vector has the form {\small \[ (1,2,3,\dots, b-1, b, b-1,\dots,d+1,d,\dots) \hbox{ \ or \ } (1,2,3,\dots,b-1,b,b,b-2,\dots, d+1,d,\dots), \]} \hspace{-.35cm} where $d$ occurs in degree $2d-1$, which in any case have the same initial degree. One checks that in both cases $b = \frac{3d}{2}$ and this initial degree is $\frac{3d}{2}$. When $d$ is odd, the smallest possible initial degree comes, for instance, when the $h$-vector has the form \[ (1,2,3,\dots, b-1, b, b-2,\dots,d+1,d,\dots) \] thanks to Lemma \ref{rule out}. One checks that here $b = \frac{3d+1}{2}$ and the initial degree is $\frac{3d+1}{2}$. \end{proof} \section{WLP for small $d$} \label{small d} In this section we prove the WLP for complete intersections of type $(d,d,d,d)$ for $d \leq 5$. The cases $3 \leq d \leq 5$ were previously open. In the next section we will outline an approach for all $d$ which is not yet a complete proof of WLP. We maintain the notation of Remark \ref{soc of h1}; in particular, $S = k[x,y,z]$ is (by slight abuse of notation) the coordinate ring for the plane $H$ containing the points $Z$.
Furthermore, $B = S/I_Z$ and $\bar B$ is an artinian reduction of $B$ by a linear form, say $L'$. We will denote by $\bar I_Z$ the ideal $\frac{I_Z + (L')}{(L')}$ in $T = S/(L')$, so $\bar B =T / \bar I_Z$, and recall again that the graded Betti numbers of $\bar I_Z$ over $T$ are the same as those of $I_Z$ over $S$. In a similar way, for other sets of points in $H$ we will denote by $\bar \ $ the ideal of the artinian reduction of those coordinate rings. \begin{example} \label{d=2} When $d=2$, it was shown in \cite{MN-quadrics} Corollary 4.4 that a complete intersection of four quadrics has the WLP. \end{example} \begin{proposition} \label{d=3} A complete intersection of four cubics has the WLP. \end{proposition} \begin{proof} We give two easy arguments based on our results. Let $I$ be a complete intersection of four cubics, and let $Z$ be the corresponding set of 18 points. Now $Z$ is the union of two complete intersections of type $(3,3)$. The Hilbert function of $R/I$ is \[ \begin{array}{c|cccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline h_{R/I} & 1 & 4 & 10 & 16 & 19 & 16 & 10 & 4 & 1 & 0 \end{array} \] and the $h$-vector of $Z$ has the form \[ \begin{array}{c|cccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \Delta h_Z & 1 & 2 & 3 & 4 & x & 3 & 5-x & 0 \end{array} \ . \] Then $x$ cannot be 4 by Lemma \ref{rule out}, $x$ cannot be 3 by Proposition \ref{decreasing type}, and it cannot be $< 3$ since then the $h$-vector would not be an $O$-sequence. Hence $x=5$ and $R/I$ has the WLP by Lemma \ref{exp hf}. Alternatively, one could simply apply Theorem \ref{d1=d2=d3=d4} for the case $d=3$ to get injectivity from degree 3 to 4, which is enough to prove WLP. \end{proof} \begin{theorem} \label{d=4} A complete intersection of four forms of degree 4 has the WLP. \end{theorem} \begin{proof} Let $A = R/I = R/\langle F_1,F_2,F_3,F_4 \rangle$ where $I$ is a complete intersection and $\deg F_i = 4$ for all $i$. Then we claim that $A$ has the WLP. The Hilbert function of $R/I$ is \[ \begin{array}{c|cccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 \\ \hline h_{R/I} & 1 & 4 & 10 & 20 & 31 & 40 & 44 & 40 & 31 & 20 & 10 & 4 & 1 & 0 \end{array} \] The midpoint of the Hilbert function of $R/I$ is in degree 6, so we expect injectivity for $\times L : [A]_{t-1} \rightarrow [A]_t$ for all $t \leq 6$. Note that $Z = Z_1 \cup Z_2$ is the general hyperplane section of $C = C_1 \cup C_2$, so both $Z_1$ and $Z_2$ are complete intersections of type $(4,4)$. By Theorem \ref{d1=d2=d3=d4}, $\times L : [A]_{t-1} \rightarrow [A]_t$ is injective for $t < 6$. Thus $[K]_t = 0$ for $t < 6$, so by Corollary \ref{a-(K)} we have $\Delta h_Z(5) = 6$. As a result, the $h$-vector of $Z$ has the form \[ \begin{array}{c|cccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \Delta h_Z & 1 & 2 & 3 & 4 & 5 & 6 & 7-c & 4 & c & 0 & 0 \end{array} \] (using Lemma \ref{seesaw}). We want to show $c=0$; our results so far give that otherwise $c$ can only be 1 or 2. That is, we have to rule out $(1,2,3,4,5,6,5,4,2)$ and $(1,2,3,4,5,6,6,4,1)$. Notice that there is a minimal generator for $I_Z$ in degree 6 if and only if $A$ fails to have WLP, so we want to show that such a generator cannot exist. We first rule out $c=2$.
Suppose $\Delta h_Z$ is given by \[ \begin{array}{c|ccccccccccccccccccccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline \Delta h_Z & 1 & 2 & 3 & 4 & 5 & 6 & 5 & 4 & 2 & 0 \end{array} \] In particular, $I_Z$ has two minimal generators in degree 6 and possibly one in degree 7 (among possibly others), and $\bar B$ has socle in degree $\leq 6$ (using (\ref{hu ineq}) and Corollary \ref{a-(K)}). Clearly it is not in degree $\leq 4$. \begin{itemize} \item If $\bar B$ has socle in degree 5 then $I_{Z}$ has a summand $S (-7)$ in the last free module in its minimal free resolution, so the two generators of degree 6 have a linear syzygy: $L_1 F_1 + L_2 F_2 = 0$. This means that they have a common factor of degree 5. By Davis's theorem, all the points of $Z$ but one lie on a curve of degree 5. But this violates the symmetry of the situation: from the 4-dimensional vector space $[I]_4$ we chose two general two-dimensional subspaces to define the two curves $C_1$ and $C_2$, and we are looking at a general hyperplane section of their union. It is impossible that one of these two curves is distinguished by this property of containing the single point not on the curve of degree 5. \item Now assume $\bar B$ has socle beginning in degree 6 and assume that $I_{Z}$ has two minimal generators, $F_1$ and $F_2$, of degree 6 but none of degree 7. Then we have a quadratic syzygy $Q_1 F_1 + Q_2 F_2 = 0$. This means $F_1$ and $F_2$ have a common factor, $G$, of degree 4. Since $I_{Z}$ has no minimal generator in degree 7, $G$ is also common to the entire component in degree 7. By Davis's theorem, since $\Delta h_Z(7)=4$, all but 4 of the points of $Z$ lie on a curve of degree 4. By symmetry, two have to come from $Z_1$ and two from $Z_2$. By liaison and UPP, any curve of degree 4 containing 14 of the 16 points of $Z_1$ or of $Z_2$ contains all 16 points. Thus we have a contradiction. \item Finally, assume that $\bar B$ has socle beginning in degree 6 and assume that $I_{Z}$ has two minimal generators, $F_1$ and $F_2$, of degree 6 and a minimal generator $G$ of degree 7. Then $S/\langle F_1, F_2 \rangle$ has Hilbert function with first difference $(1,2,3,4,5,6,5,5, \dots)$, so all but one of the points lies on a curve of degree 5. Again this is impossible. \end{itemize} We note that we have reduced the problem to checking that injectivity can only fail in one place, and that the failure can only be by 1. Now the only case to rule out for the $h$-vector of $Z$ is $(1,2,3,4,5,6,6,4,1)$. We first consider the socle of $\bar B$. Thanks to Remark \ref{trans socle lemma}, there is socle in degree $\leq 6$, and from the $h$-vector it can only be in degree exactly 6. Since this is the Hilbert function of an artinian algebra over $k[x,y]$, the canonical module must have exactly one generator in its initial degree and at least two generators in the second degree. This means the socle is exactly 1-dimensional in degree 8 and at least 2-dimensional in degree 7. Near the beginning of section \ref{strategy} we show that in degree $2d-1$ there must be exactly a $(d-2)$-dimensional socle, so in this case it must be exactly 2-dimensional (and not 3-dimensional, even though this is numerically possible); we will assume that here. Hence the last free module in the minimal free resolution of $I_{Z}$ must at least have free summands $S(-8), S(-9)^2, S(-10)$. The only ambiguity is the possibility of a second summand $S(-8)$, so we will indicate this with the exponent $1+\epsilon \ (\epsilon \geq 0)$.
What about generators? Certainly there is exactly one generator in degree 6 and exactly two in degree 7. We must have five or six minimal generators (depending on $\epsilon$), and considering degrees we see that the only possibility for the resolution is \begin{equation} \label{mfrZ} 0 \rightarrow \begin{array}{c} S(-8)^{1+\epsilon} \\ \oplus \\ S(-9)^2 \\ \oplus \\ S(-10) \end{array} \rightarrow \begin{array}{c} S(-6) \\ \oplus \\ S(-7)^2 \\ \oplus \\ S(-8)^{2+\epsilon} \end{array} \rightarrow I_{Z} \rightarrow 0. \end{equation} And remember that $Z$ has $h$-vector \[ (1,2,3,4,5,6,6,4,1). \] We will use an argument that analyzes how the Betti numbers change via a sequence of two links, \[ Z := Y_0 \sim Y_1 \sim Y_2 \] which we will now describe. A key idea is that each link will be viewed simultaneously as a single link and as a pair of two separate links. {\it The rest of the proof is going to rely on symmetry. We will perform a series of links in which we treat $Z_1$ and $Z_2$ equally, and come to a situation where one of them has different behavior than the other.} We start with $Z = Y_0 = Y_{0,1} \cup Y_{0,2}$, where $Y_{0,1}$ and $Y_{0,2}$ are disjoint complete intersections of type $(4,4)$ and $Z$ is the general hyperplane section of $C = C_1 \cup C_2$. We saw above that $I_Z$ has one minimal generator of degree 6, two of degree 7 and at least two of degree 8. Both $Y_{0,1}$ and $Y_{0,2}$ are general hyperplane sections of smooth curves, so they both have UPP. If the minimal generator of $I_{Y_0}$ of degree 6 were reducible, then either it consists of a cubic containing $Y_{0,1}$ and one containing $Y_{0,2}$ (which is clearly impossible since both are complete intersections of quartics) or else there are distinguished subsets of $Y_{0,1}$ and $Y_{0,2}$ consisting of those points lying on the various factors. But this violates uniformity. So without loss of generality we can assume that the sextic generator is irreducible. Furthermore, again by uniformity, no point of $Y_0$ is a singular point of the sextic. The first link will be by a complete intersection of type $(6,8)$, where the curve of degree 8 is the union of a general quartic containing $Y_{0,1}$ and a general quartic containing $Y_{0,2}$. Let $F$ be the sextic generator. $Z$ is a set of $32 = 16 + 16$ smooth points of $F$. The base locus of the linear system on $F$ of quartics containing $Y_{0,1}$ is just $Y_{0,1}$, so the residual to $Y_{0,1}$ in a general element of this linear system is a set of reduced points $Y_{1,1}$ on $F$. Similarly we get a set of reduced points $Y_{1,2}$ on $F$, and $Y_{1,1} \cap Y_{1,2} = \emptyset$ by generality. Since in both cases the quartic is a minimal generator of $I_{Y_{0,i}}$, one checks that both $Y_{1,1}$ and $Y_{1,2}$ are complete intersections of type $(2,4)$. Setting $Y_1 = Y_{1,1} \cup Y_{1,2}$, then $Y_1$ is reduced and is linked to $Y_0 = Z$. In summary, \[ Y_{0,1} \cup Y_{0,2} = Y_0 \sim Y_1 = Y_{1,1} \cup Y_{1,2} \hbox{ \ where \ } Y_{0,1} \sim Y_{1,1} \hbox{ and } Y_{0,2} \sim Y_{1,2}. \] A calculation gives that $Y_1$ has $h$-vector $(1,2,3,4,4,2)$ and free resolution (after splitting two terms in the mapping cone coming from minimal generators used in the link) \[ 0 \rightarrow \begin{array}{c} S(-6)^{1+\epsilon} \\ \oplus \\ S(-7)^2 \end{array} \rightarrow \begin{array}{c} S(-4) \\ \oplus \\ S(-5)^2 \\ \oplus \\ S(-6)^{1+\epsilon} \end{array} \rightarrow I_{Y_1} \rightarrow 0. 
\] Note that if $I_{Y_1}$ had two minimal generators of degree 6 then the ideal generated by the quartic and the two quintics would have a common factor of degree 2. By Davis's theorem, the subscheme of $Y_1$ not lying on this conic has $h$-vector $(1,2,2)$, and so $Y_{1,1}$ and $Y_{1,2}$ behave differently: one of them contributes 2 points to this residual and the other contributes 3 points. This violates symmetry, so we conclude that $\epsilon =0$. The second link will use curves in $I_{Y_1}$ of degrees $4,5$, where the quartic is the union of the conic containing $Y_{1,1}$ and the conic containing $Y_{1,2}$, and the residual, $Y_2$, has degree $20 - 16 = 4$ and is similarly the union of two complete intersections of type $(1,2)$, so \[ Y_{1,1} \cup Y_{1,2} = Y_1 \sim Y_2 = Y_{2,1} \cup Y_{2,2} \hbox{ \ where \ } Y_{1,1} \sim Y_{2,1} \hbox{ and } Y_{1,2} \sim Y_{2,2}. \] We first justify the existence of this link. We have seen that $I_{Y_1}$ has one minimal generator, say $G$, of degree 4 (which is the union of a conic containing $Y_{1,1}$ and a conic containing $Y_{1,2}$) and two minimal generators of degree 5. Suppose that a general choice, $Q$, of quintic has a component in common with $G$. This can only happen if $Q$ shares a line with each of the two conics, by symmetry (since neither $Y_{1,1}$ nor $Y_{1,2}$ can play a different role from the other). So $G$ is actually the union of four lines, with four points of $Y_1$ on each. Now we revisit the first link. We have an irreducible curve $F$ of degree 6 containing both $Y_{0,1}$ and $Y_{0,2}$ and we have two parallel links. Consider the pencils $|4H_F - Y_{0,1}|$ and $|4H_F - Y_{0,2}|$ of divisors on $F$, where $H_F$ is the divisor on $F$ cut out by a hyperplane (a line). $Y_1$ is the union of a general element of the first pencil and a general element of the second pencil, and as we have seen, these elements are both complete intersections of type $(2,4)$. By Bertini, it is impossible for every element of both of these pencils to lie on a reducible conic. Thus the second link also exists. What can we say about $Y_2$? It has degree 4 and is the union of a complete intersection $Y_{2,1}$ of type $(1,2)$ linked to $Y_{1,1}$ and a complete intersection $Y_{2,2}$ of type $(1,2)$ linked to $Y_{1,2}$ as above. Its $h$-vector is $(1,2,1)$. Furthermore, there is enough choice in the links so that $Y_2$ is reduced. Next one computes that the minimal free resolution of $Y_2$ is \[ 0 \rightarrow \begin{array}{c} S(-4) \\ \oplus \\ S(-3) \end{array} \rightarrow \begin{array}{c} S(-3) \\ \oplus \\ S(-2)^2 \end{array} \rightarrow I_{Y_2} \rightarrow 0. \] The key observation is that there can be no splitting of the $S(-3)$ in both free modules. But this minimal free resolution defines the union of a point and three additional collinear points. The former point is distinguished in this set of four points. But it must arise either from the series of links $Y_{0,1} \sim Y_{1,1} \sim Y_{2,1}$ or the series of links $Y_{0,2} \sim Y_{1,2} \sim Y_{2,2}$. By the symmetry of this construction, such a special, distinguished point is impossible. \end{proof} Since our main result is not quite to the point of proving WLP, we also give a complete proof for the case $d=5$ even though there are no new ideas beyond the $d=4$ case. \begin{corollary} \label{d=5} Let $A = R/I = R/\langle F_1,F_2,F_3,F_4 \rangle$ where $I$ is a complete intersection and $\deg F_i = 5$ for all $i$. Then $A$ has the WLP.
\end{corollary} \begin{proof} The argument is very similar to that of Theorem \ref{d=4}, with a few minor differences. Now the midpoint of the Hilbert function of $R/I$ is in degree 8, and we expect injectivity for $\times L : [A]_{t-1} \rightarrow [A]_t$ for all $t \leq 8$. In this case Theorem \ref{d1=d2=d3=d4} gives it to us for $t \leq 7$, so we only have to prove it for $t=8$. Now we get that $\Delta h_Z$ must have the form \[ \begin{array}{c|ccccccccccccccccccccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline \Delta h_Z & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9-c & 5 & c & 0 \end{array} \] and we must have $0 \leq c \leq 3$ to preserve decreasing type. Again we are trying to show that $c=0$, and to show this we are supposing $c >0$ in order to obtain a contradiction. And this means that there is an unexpected generator in degree 8, which we will use and which will lead to the contradiction. Note first that for such $c$ there is a $c$-dimensional vector space of forms of degree 8 containing $Z$ (which is the smallest such degree). As in the proof of Theorem~\ref{d=4}, the general such element is reduced and irreducible and smooth at the points of $Z$, since (a) both $Z_1$ and $Z_2$ have UPP and lie on no curve of degree $\leq 4$, and (b) by symmetry neither can behave in a way different from the other. Since we will again have to follow a sequence of parallel links, we maintain the notation that $Z = Y_0 = Y_{0,1} \cup Y_{0,2}$, a disjoint union of complete intersections of type $(5,5)$. We link using a general element of degree 8 (which is irreducible) and the union of two forms of degree 5, being general elements of $[I_{Y_{0,1}}]_5$ and $[I_{Y_{0,2}}]_5$. \vspace{.1in} \noindent \underline{Case 1}: $c=3$. In this case the $h$-vector of $Y_0$ is \[ \begin{array}{c|ccccccccccccccccccccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline \Delta h_{Y_0} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 6 & 5 & 3 \end{array} \] and the $h$-vector of the residual $Y_1$ is $(1,2,3,4,5,6,4,3,2)$. Because of irreducibility and the abundance of choices for the link, we can assume that $Y_1$ lies on an irreducible sextic. We note that $Y_1$ is the union of two complete intersections of type $(3,5)$. Now we link $Y_1$ using two sextics, one of which is irreducible and the other is the union of the cubic containing $Y_{1,1}$ and the cubic containing $Y_{1,2}$. The residual has $h$-vector $(1,2,1,1,1)$, meaning there are exactly five points on a line and one point not on the line. But this violates symmetry since the distinguished point must either come originally from $Y_{0,1}$ or from $Y_{0,2}$. Contradiction. \vspace{.1in} \noindent \underline{Case 2}: $c=2$. Now the $h$-vector of $Y_0$ is \[ \begin{array}{c|ccccccccccccccccccccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline \Delta h_{Y_0} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 7 & 5 & 2 \end{array} \] and the $h$-vector of the residual $Y_1$ is $(1,2,3,4,5,6,5,3,1)$. But note that $Y_1$ is still the union of two complete intersections of type $(3,5)$. Linking again using two sextics, one of which is irreducible and the other is the union of the cubic containing $Y_{1,1}$ and the cubic containing $Y_{1,2}$, we obtain a residual that is the union of two complete intersections of type $(1,3)$ and has $h$-vector $(1,2,2,1)$. Thanks to the socle lemma result, $\bar B$ has socle in degree $\leq 8$. It cannot be in degree $\leq 6$. 
If there were socle in degree 7 then the minimal free resolution would have a summand $S(-9)$ in the second free module, meaning that the two generators of degree 8 have a linear syzygy and hence a common factor. But we know they form a regular sequence, so this is impossible. Hence there is a summand $S(-10)^{1+\alpha}$ in the second free module in the resolution for $I_{Y_0}$. There is also clearly a 2-dimensional socle in degree 10. By thinking of the canonical module and its two generators in minimal degree we see that there is at least a 1-dimensional socle in degree 9. Turning to minimal generators, in addition to the ones mentioned above there may be generators in degrees 10 and 11. So the minimal free resolution has the shape \[ 0 \rightarrow \begin{array}{c} S(-10)^{1+\alpha} \\ \oplus \\ S(-11)^{1 + \beta} \\ \oplus \\ S(-12)^2 \end{array} \rightarrow \begin{array}{c} S(-8)^2 \\ \oplus \\ S(-9) \\ \oplus \\ S(-10)^\gamma \\ \oplus \\ S(-11)^\delta \end{array} \rightarrow I_{Y_0} \rightarrow 0. \] Since the twists have to add to the same thing in both free modules, we obtain the equation \[ 20 = 10(\gamma-\alpha) + 11(\delta-\beta). \] We conclude that $\beta = \delta$ and $\gamma = 2+ \alpha $. Our first link uses a generator of degree 8 and one of degree 10 to get a residual $Y_1$ with $h$-vector $(1,2,3,4,5,6,5,3,1)$. Computing the mapping cone and splitting the summands $S(-8)$ and $S(-10)$ we obtain for the residual $Y_1$ the minimal free resolution \[ 0 \rightarrow \begin{array}{c} S(-7)^\beta \\ \oplus \\ S(-8)^{1+\alpha} \\ \oplus \\ S(-9) \\ \oplus \\ S(-10) \end{array} \rightarrow \begin{array}{c} S(-6)^2 \\ \oplus \\ S(-7)^{1+\beta} \\ \oplus \\ S(-8)^{1+\alpha} \end{array} \rightarrow I_{Y_1} \rightarrow 0. \] (In particular there is a redundant $S(-8)$ that does not split off.) As in the case $c=3$, $Y_1 = Y_{1,1} \cup Y_{1,2}$, the union of two complete intersections of type $(3,5)$. Note that to obtain $Y_{1,1}$ and $Y_{1,2}$ we found the residual to $Y_{0,1}, Y_{0,2}$ respectively on the irreducible curve of degree 8 cut out by the two pencils of quintics, each of which has as base locus the sets $Y_{0,1}, Y_{0,2}$. So $Y_{1,1}$ and $Y_{1,2}$ have UPP and in particular $Y_1$ lies in a complete intersection of type $(6,6)$. The residual, $Y_2$, has $h$-vector $(1,2,2,1)$ and is the union of two complete intersections of type $(1,3)$. Its minimal free resolution has the form \[ 0 \rightarrow \begin{array}{c} S(-4)^{1+\alpha} \\ \oplus \\ S(-5)^{1+\beta} \\ \end{array} \rightarrow \begin{array}{c} S(-2) \\ \oplus \\ S(-3) \\ \oplus \\ S(-4)^{1+\alpha} \\ \oplus \\ S(-5)^{\beta} \end{array} \rightarrow I_{Y_2} \rightarrow 0. \] From this we see immediately that $\alpha = \beta = 0$ since the $h$-vector is $(1,2,2,1)$. But the fact that there is a generator in degree 4 forces four points on a line, which is impossible since if two points from each of $Y_{2,1}$ and $Y_{2,2}$ lie on this line then all of $Y_2$ lies on the line, and otherwise we have a violation of the symmetry principle. This concludes the case $c=2$. \vspace{.1in} \noindent \underline{Case 3}: $c=1$. We now show that $c=1$ is impossible. Now the $h$-vector $h_Z$ is \[ (1,2,3,4,5,6,7,8,8,5,1). \] There is at least one socle element in degree 8 (and none earlier), at least three in degree 9 (considering the canonical module of $\bar B$ as in Theorem \ref{d=4}), and exactly one in degree 10. In terms of minimal generators for $I_Z$, we have exactly one in degree 8 and exactly three in degree 9. 
There may be some in degree 10, but as before (using UPP of the two subsets) we can rule out any of degree 11. So far, to build the minimal free resolution, we have \[ 0 \rightarrow \begin{array}{c} S(-10)^{1+\epsilon_1} \\ \oplus \\ S(-11)^{3+\epsilon_2} \\ \oplus \\ S(-12) \end{array} \rightarrow \begin{array}{c} S(-8) \\ \oplus \\ S(-9)^3 \\ \oplus \\ S(-10)^{\epsilon_3} \end{array} \rightarrow I_Z \rightarrow 0. \] Since the sum of the twists of the first free module must equal the sum of the twists of the second one, we get \[ 55 + 10 \epsilon_1 + 11 \epsilon_2 = 35 + 10 \epsilon_3, \ \hbox{ i.e. } \ 20 = 10(\epsilon_3 - \epsilon_1) - 11 \epsilon_2. \] This forces $\epsilon_2 = 0$ and $\epsilon_3 - \epsilon_1 = 2$ (so in particular $\epsilon_3 \geq 2$), i.e. setting $\epsilon := \epsilon_1$, the minimal free resolution is \[ 0 \rightarrow \begin{array}{c} S(-10)^{1+\epsilon} \\ \oplus \\ S(-11)^3 \\ \oplus \\ S(-12) \end{array} \rightarrow \begin{array}{c} S(-8) \\ \oplus \\ S(-9)^3 \\ \oplus \\ S(-10)^{2+\epsilon} \end{array} \rightarrow I_Z \rightarrow 0. \] (To prove that $\epsilon_2=0$ we could also have invoked the argument at the beginning of section \ref{strategy}, as we did in Theorem \ref{d=4}, but it seemed simpler to use the numerical argument above.) Now we play the same game as above. We have $Z = Y_0 = Y_{0,1} \cup Y_{0,2}$ and we link $Y_0 \sim Y_1$ where the octic is irreducible and the 10-ic is the union of a general element of degree 5 in $[I_{Y_{0,1}}]_5$ and a general element of degree 5 in $[I_{Y_{0,2}}]_5$. This links $Y_0$ to a residual $Y_1$ that is the union of two complete intersections, $Y_{1,1}$ and $Y_{1,2}$, of type $(3,5)$ and has $h$-vector $ (1,2,3,4,5,6,6,3)$ and minimal free resolution \[ 0 \rightarrow \begin{array}{c} S(-9)^3 \\ \oplus \\ S(-8)^{1+\epsilon} \\ \end{array} \rightarrow \begin{array}{c} S(-8)^{1+\epsilon} \\ \oplus \\ S(-7)^3 \\ \oplus \\ S(-6) \end{array} \rightarrow I_{Y_1} \rightarrow 0 \] where the redundant $S(-8)$ does not split. We also know that the generator of degree 6 is the product of the cubics in $I_{Y_{1,1}}$ and $I_{Y_{1,2}}$. Now we link $Y_1$ to $Y_2$ using the above-mentioned sextic and a general element of degree 7 in $I_{Y_1}$ (which will be irreducible). The residual, $Y_2$, has $h$-vector $(1,2,3,4,2)$ and is the union of two complete intersections of type $(2,3)$. Its minimal free resolution is \[ 0 \rightarrow \begin{array}{c} S(-6)^2 \\ \oplus \\ S(-5)^{1+\epsilon} \\ \end{array} \rightarrow \begin{array}{c} S(-5)^{1+\epsilon} \\ \oplus \\ S(-4)^3 \end{array} \rightarrow I_{Y_2} \rightarrow 0 \] We note in passing that we must have $\epsilon=0$. Indeed, from the Hilbert function we see $\epsilon \leq 1$. If $\epsilon=1$ then there are two minimal generators in degree 5. Setting $J$ to be the ideal generated by the three quartics in the artinian reduction, the Hilbert function of $T/J$ is $(1,2,3,4,2,2,\dots)$, which forces the quartics to have a common factor of degree 2. By Davis's theorem, there are three points of $Y_2$ not on this conic, which violates symmetry. We would like to link $Y_2$ using two quartics, and we see now that this can only fail to be possible if the component in degree 4 has a common factor of degree 1. We again set $J$ to be the ideal generated by the three quartics in the artinian reduction. Then $T/J$ has Hilbert function $(1,2,3,4,2,1,1, \dots)$. Since the artinian reduction of $S/I_{Y_2}$ has Hilbert function $(1,2,3,4,2)$, we conclude that $Y_2$ has five points on the linear common factor.
Again this is impossible by symmetry. We conclude that we can link using two quartics, one of which is the obvious union of two conics, to get a residual $Y_3$ that is the union of two complete intersections of type $(1,2)$ and has $h$-vector $(1,2,1)$ and minimal free resolution \[ 0 \rightarrow \begin{array}{c} S(-4) \\ \oplus \\ S(-3) \\ \end{array} \rightarrow \begin{array}{c} S(-3) \\ \oplus \\ S(-2)^2 \\ \end{array} \rightarrow I_{Y_3} \rightarrow 0. \] This forces $Y_3$ to have a subscheme of degree 3 lying on a line, and again we have a violation of the symmetry idea. \end{proof} \section{A strategy for degree 6 and higher} \label{strategy} The results of sections \ref{measuring failure} and \ref{curve section} give strong restrictions on the possible Hilbert function of the general hyperplane section of $C$ and consequently on the possible behavior of $R/I$ from the point of view of the WLP. However, we also saw in section \ref{small d} that as $d$ increases, the cases that one has to check to prove WLP become overwhelming. Even for $d=4$ and 5 it was complicated, and we did not push beyond that point. Nevertheless, there is strong indication that this approach will work for arbitrary $d$. In this section we give a detailed outline of how a proof should proceed, refining the approach that we used for $d=4,5$. Unfortunately, we are forced to leave open two points on which the whole proof will ultimately rest, which we will label as formal conjectures. The first is a reduction step, and the second is a technical point that we have not yet resolved. The first conjecture is that it is enough to check the case where WLP fails by one. \begin{conjecture} \label{force exactly one} If there is a complete intersection $J \in CI(d,d,d,d)$ (see Definition \ref{def CI(d,d,d,d)}) that fails WLP by more than one then there also exists a complete intersection $I \in CI(d,d,d,d)$ that fails WLP by exactly one. More precisely, we conjecture that the locus $X \subset CI(d,d,d,d)$ of complete intersections that fail by more than one is contained in the closure of the locus $Y$ of complete intersections that fail by exactly one. \end{conjecture} Of course the idea is to show that neither $J$ nor $I$ actually exist, i.e. that $X$ and $Y$ are empty. Here is the difficulty in proving our conjecture. We start with the complete intersection $J$ and choose a linear form $L$ that is general for $J$. It is not too hard to show that then there is a complete intersection $I$ for which $\times L$ fails WLP by one. But it is not clear why $L$ is also general for $I$. \begin{quotation} {\it Assuming Conjecture \ref{force exactly one}, we can assume now without loss of generality that $R/I$ fails the WLP by exactly one, and seek a contradiction.} \end{quotation} \noindent It is important to note that the cases $d=4,5$ did not make such an assumption; those proofs are complete. As before we denote by $Z $ the general hyperplane section of the union $C$ of two complete intersection curves $C_1, C_2$ obtained by taking two general 2-dimensional subspaces of the 4-dimensional vector space spanned by $F_1, F_2, F_3, F_4$. Of course $Z$ is the union of two complete intersection sets of points in $\mathbb P^2$. By Lemma \ref{hf for fail by one}, the stated failure of injectivity translates to a Hilbert function $\Delta h_Z$ as follows.
\[ \begin{array}{c|ccccccccccccccccccccccccccccccccc} \text{degree} & 0 & 1 & 2 & 3 & \dots & 2d-3 & 2d-2 & 2d-1& 2d & 2d+1 \\ \hline \Delta h_Z & 1 & 2 & 3 & 4 & \dots & 2d-2 & 2d-2 & d & 1 & 0. \end{array} \] As in Corollary \ref{d=5}, we first describe the minimal free resolution of $I_Z$, and we start by considering the socle of the artinian reduction of $B = S/I_Z$. It is clear that there is a one-dimensional socle in degree $2d$. In degree $2d-1$ there is either a $(d-2)$-dimensional socle or a $(d-1)$-dimensional socle. We first claim that it must be $(d-2)$-dimensional. Suppose the socle in degree $2d-1$ were $(d-1)$-dimensional. Call $\bar B$ the artinian reduction of $B$; its Hilbert function is $\Delta h_B$. Quotienting out its $(d-1)$-dimensional socle in degree $2d-1$, we get an algebra $k[x,y]/J$ whose Hilbert function in degrees $2d-1$ and $2d$ is equal to 1. Thus $[J]_{2d}$ has a linear gcd, which lifts to a gcd of $I_Z$. A theorem of Davis \cite{davis} then gives that $Z$ contains $2d+1$ points on a line, which is absurd since $Z$ is the union of two sets of points with the uniform position property. This proves that the dimension of the socle in degree $2d-1$ is $d-2$, as claimed. Finally, as before, there has to be non-zero socle in degree $2d-2$ since there is a curve of degree $2d-2$ containing $Z$ that does not lift to $C$. It is not clear yet what the dimension of the socle in this degree is. For generators, we have exactly one in degree $2d-2$ and $d-2$ in degree $2d-1$. There is none in degree $2d+1$. We have some in degree $2d$ that we have to compute. After a computation with the twists as we did in Theorem \ref{d=4}, we get a minimal free resolution of the form \begin{equation} \label{Z-d} 0 \rightarrow \begin{array}{c} S(-2d-2) \\ \oplus \\ S(-2d-1)^{d-2} \\ \oplus \\ S(-2d)^{1+\epsilon} \end{array} \rightarrow \begin{array}{c} S(-2d+2) \\ \oplus \\ S(-2d+1)^{d-2} \\ \oplus \\ S(-2d)^{2+\epsilon} \end{array} \rightarrow I_{Z} \rightarrow 0 \end{equation} where $\epsilon \geq 0$. The idea of our approach will rest on the fact that the redundancy in the minimal free resolution (in this case $S(-2d)$) is preserved in the sequence of linked sets of points that we will produce. More precisely, the strategy will be to study a series of $d-2$ specific links, and obtain a contradiction after the last link (or earlier). We start with the set $Z$ which is the disjoint union of two complete intersections, $Z_1$ and $Z_2$ of type $(d,d)$. The result of each link will again be a scheme-theoretic union of two complete intersections of the same type (but we do not claim a priori that they are reduced or disjoint in general). Setting the notation, let $Z = Y_0 = Y_{0,1} \cup Y_{0,2}$; then for $i = 1, \dots, d-2$, the $i$-th link will send $Y_{i-1} = Y_{i-1,1} \cup Y_{i-1,2}$ to $Y_i = Y_{i,1} \cup Y_{i,2}$, where $Y_{i,1}$ and $Y_{i,2}$ are complete intersections of the same type. More precisely, the $i$-th link will start with $Y_{i-1} = Y_{i-1,1} \cup Y_{i-1,2}$ and will consist of a regular sequence $(F_i,G_i)$ where $F_i$ is a form of some degree, say $a_i$, in $I_{Y_{i-1}}$ and $G_i = G_{i,1} G_{i,2}$ is the product of two forms of the same degree, say $b_i$, one in $I_{Y_{i-1,1}}$ and one in $I_{Y_{i-1,2}}$. To represent such a link numerically, we will use the notation $(b_i + b_i, a_i)$, matching the order used in the table in Step (i) below.
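For instance, in this notation the first link (whose existence is justified in Step (iv) below) is
\[
Y_0 = Y_{0,1} \cup Y_{0,2} \ \overset{(d+d,\, 2d-2)}{\scalebox{3.5}[1]{$\sim$}} \ Y_1 = Y_{1,1} \cup Y_{1,2},
\]
where the two forms of degree $d$ are general elements of $[I_{Y_{0,1}}]_d$ and $[I_{Y_{0,2}}]_d$, and the form of degree $2d-2$ is the unique such form in $I_{Y_0}$.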
We consider such a sequence of links both as linking $Y_{i-1}$ to $Y_i$, and at the same time as representing two ``parallel'' sequences of links, where we view $(F_i,G_{i,1})$ as a link from $Y_{i-1,1}$ to $Y_{i,1}$ and $(F_i,G_{i,2})$ as a link from $Y_{i-1,2}$ to $Y_{i,2}$ separately. Thus we have \[ Y_{i-1} = Y_{i-1,1} \cup Y_{i-1,2} \ \overset{(b_i+b_i,\, a_i)}{\scalebox{3.5}[1]{$\sim$}} \ Y_i = Y_{i,1} \cup Y_{i,2}. \] It is crucial to note that we start with $Z_1$ and $Z_2$ that are indistinguishable both geometrically and numerically, and at each step the choices of the links $(F_i, G_{i,1}G_{i,2})$ do not ``favor'' either $Y_{i,1}$ over $Y_{i,2}$ or vice versa. Thus we have the \vspace{.1in} \begin{equation} \label{symm princ} \parbox{5.7in}{\noindent {\bf Symmetry Principle.} {\it For each choice of $i$, there can be no geometric or numerical property distinguishing $Y_{i,1}$ from $Y_{i,2}$.}} \end{equation} \vspace{.1in} For the sake of clarity, our argument will proceed as follows: \begin{itemize} \item[(i)] describe numerically the sequence of links that we will use, \item[(ii)] look at the $h$-vectors, assuming such links exist, \item[(iii)] look at the resolutions, again assuming that the links exist, \item[(iv)] justify the existence of the links, and finally \item[(v)] put it all together for the conclusion. \end{itemize} \noindent The goal is to obtain a contradiction. We stress that for steps (i), (ii) and (iii) we will focus on the numerical information of the desired links and residuals, and will discuss the existence of these links only in step (iv). Once the existence of these links is established, the fact that the residuals are unions of complete intersections of certain types is a routine calculation obtained by looking at the ``parallel'' links separately (e.g. for the first link, a complete intersection of type $(d,d)$ is linked by a complete intersection of type $(d,2d-2)$ to a complete intersection of type $(d,d-2)$). \vspace{.1in} \noindent \underline{Step (i)}: For each link the following describes the starting set $Y_{i-1} = Y_{i-1,1} \cup Y_{i-1,2}$ as a union of complete intersections, then gives the degrees of the generators of the complete intersection giving the link, and finally describes the residual as a union of complete intersections. There is a total of $d-2$ links. \begin{center} \scriptsize{ \begin{tabular}{c|c|c|ccccccccccccc} link & starting points & link & residual points \\ $\#$ & $Y_{i-1}$ in $\mathbb P^2$ & type & $Y_i$ in $\mathbb P^2$ \\ \hline 1 & $Y_0 = CI(d,d) \cup CI(d,d)$ & $(d+d, 2d-2)$ & $Y_1 = CI(d,d-2) \cup CI(d,d-2)$ \\ 2 & $Y_1 = CI(d,d-2) \cup CI(d,d-2)$ & $((d-2) + (d-2), 2d-3)$ & $Y_2 = CI(d-3,d-2) \cup CI(d-3,d-2)$ \\ 3 & $Y_2 = CI(d-3,d-2) \cup CI(d-3,d-2)$ & $((d-3) + (d-3),2d-6)$ & $Y_3 = CI(d-4,d-3) \cup CI(d-4,d-3)$ \\ 4 & $Y_3 = CI(d-4,d-3) \cup CI(d-4,d-3)$ & $((d-4) + (d-4), 2d-8)$ & $Y_4 = CI(d-5,d-4) \cup CI(d-5,d-4)$ \\ $ \vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\ $d-2$ & $Y_{d-3} = CI(2,3) \cup CI(2,3)$ & $(2+2,4)$ & $Y_{d-2} = CI(1,2) \cup CI(1,2)$. \end{tabular} } \end{center} It is certainly true, since we can view the parallel links separately, that the residuals are scheme-theoretic unions of complete intersections. Unfortunately after the first couple of links we do not know that these complete intersections are reduced or disjoint, although we believe this to be true.
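As a quick check on the numerics of the table, the degrees balance in each row. For the second link, for instance, the linking complete intersection has type $(2d-4, 2d-3)$, and
\[
(2d-4)(2d-3) - \deg Y_1 = (4d^2 - 14d + 12) - 2d(d-2) = 2(d-2)(d-3),
\]
which is indeed the degree of $Y_2 = CI(d-3,d-2) \cup CI(d-3,d-2)$.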
In any case, the last link results in a zero-dimensional scheme of degree 4 whose minimal free resolution retains the promised redundancy, and this will allow us to conclude just as in Theorem \ref{d=4}. \vspace{.1in} \noindent \underline{Step (ii)}: Next we look at the $h$-vectors. The $h$-vector for the starting set $Z$ is given at the start of the proof, and is repeated on the first line below. Assuming that there are links of the claimed types given in the table above, we can compute the sequence of $h$-vectors of the residuals, which are listed here. {\scriptsize \[ \begin{array}{c|cccccccccccccccccccc} & \hbox{deg:} \\ & 0 & 1 & 2 & 3 & 4 & \dots & 2d-10 & 2d-9 & 2d-8 & 2d-7 & 2d-6 & 2d-5 & 2d-4 & 2d-3 & 2d-2& 2d-1 & 2d \\ \hline Y_0 & 1 & 2 & 3 & 4 & 5 & \dots & 2d-9 & 2d-8 & 2d-7 & 2d-6 & 2d-5 & 2d-4 & 2d-3 & 2d-2 & 2d-2 & d & 1 \\ Y_1 & 1 & 2 & 3 & 4 & 5 & \dots & 2d-9 & 2d-8 & 2d-7 & 2d-6 & 2d-5 & 2d-4 & 2d-4 & d-2 \\ Y_2 & 1 & 2 & 3 & 4 & 5 & \dots & 2d-9 & 2d-8 & 2d-7 & 2d-6 & d-3 \\ Y_3 & 1 & 2 & 3 & 4 & 5 & \dots & 2d-9 & 2d-8 & d-4 \\ Y_4 & 1 & 2 & 3 & 4 & 5 & \dots & d-5 \\ &&& \vdots \\ Y_{d-3} & 1 & 2 & 3 & 4 & 2 \\ Y_{d-2} & 1 & 2 & 1 \end{array} \] } \vspace{.1in} \noindent \underline{Step (iii)}: Next we derive the minimal free resolutions, assuming the existence of the links. The main observation is that in each step, even if all conceivable summands split in the mapping cones, there remains a redundant term in the free resolution; if fewer splittings occur, there could only be more redundancy. The desired contradiction at the end is based on the existence of this redundancy. The first complete intersection in the first table links $Z = Y_0$ to a residual $Y_1 = Y_{1,1} \cup Y_{1,2}$, where $Y_{1,1}$ and $Y_{1,2}$ are complete intersections as described at the top of the fourth column of the table in Step (i). A mapping cone starting with (\ref{Z-d}) (and splitting the summand corresponding to the minimal generator of degree $2d-2$ used in the link) gives a minimal free resolution \[ 0 \rightarrow \begin{array}{c} S(-2d+2)^{1+\epsilon} \\ \oplus \\ S(-2d+1)^{d-2} \\ \end{array} \rightarrow \begin{array}{c} S(-2d+4) \\ \oplus \\ S(-2d+3)^{d-2} \\ \oplus \\ S(-2d+2)^{1+\epsilon} \\ \end{array} \rightarrow I_{Y_1} \rightarrow 0. \] (It is less obvious whether the form of degree $2d$ in the complete intersection is a minimal generator of $I_Z$, so we note that $\epsilon$ may have grown by one here, but by slight abuse of notation we continue to write $1+\epsilon$ with $\epsilon \geq 0$. This is the only time we will have this ambiguity.) The next link, using a complete intersection of type $((d-2) + (d-2),2d-3)$, uses two minimal generators and the residual, $Y_2$, has minimal free resolution \[ 0 \rightarrow \begin{array}{c} S(-2d+5)^{1+\epsilon} \\ \oplus \\ S(-2d+4)^{d-3} \\ \end{array} \rightarrow \begin{array}{c} S(-2d+6)^{d-2} \\ \oplus \\ S(-2d+5)^{1+\epsilon} \\ \end{array} \rightarrow I_{Y_2} \rightarrow 0. \] Notice the redundant $S(-2d+5)$ and the fact that the next link will involve two forms of degree $2d-6$; hence we cannot split off any redundant terms from this resolution in the next mapping cone. One checks that this redundancy persists in the same way until the end.
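Before proceeding, we note that the $h$-vectors in Step (ii) can be reproduced from the standard liaison formula for points in $\mathbb P^2$ (recorded here for the reader's convenience; it is not spelled out in the text): if $Y_{i-1}$ is linked to $Y_i$ by a complete intersection $X$ of type $(a,b)$, then \[ \Delta h_{Y_i}(t) \;=\; \Delta h_X(t) - \Delta h_{Y_{i-1}}(a+b-2-t). \] For instance, for the first link $(a,b) = (2d-2,2d)$, so $a+b-2 = 4d-4$, and taking $t = 2d-3$ gives $\Delta h_{Y_1}(2d-3) = (2d-2) - \Delta h_{Y_0}(2d-1) = (2d-2) - d = d-2$, in agreement with the table.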
\vspace{.1in} \noindent \underline{Step (iv)}: Notice that in the desired sequence of links described in Step (i) the first two are a bit different from the rest, in that the third and subsequent links are by forms of the same degree (one reducible), whereas the first two are not of this form. Now we justify the existence and important properties of the first two links. \begin{proposition} \label{link 1} The first link exists. That is, $Y_0$ is linked by a complete intersection $(G_{0,1}G_{0,2}, F_0)$ of type $(2d, 2d-2)$ to $Y_1$, where $G_{0,1}$ is a general element of $[I_{Y_{0,1}}]_d$ and $G_{0,2}$ is a general element of $[I_{Y_{0,2}}]_d$; both are smooth. The residual, $Y_1 = Y_{1,1} \cup Y_{1,2}$, is reduced. In particular $Y_{1,1}$ and $Y_{1,2}$ have no points in common, and both are complete intersections of type $(d,d-2)$. The unique form $F_0$ of degree $2d-2$ in $I_{Y_0}$ is irreducible. \end{proposition} \begin{proof} We have that $Z = Y_0 = Y_{0,1} \cup Y_{0,2}$ is the general hyperplane section of the curve $C = C_1 \cup C_2$ in $\mathbb P^3$, and each of the latter two curves is a smooth complete intersection curve in $\mathbb P^3$ of type $(d,d)$. Hence both $Y_{0,1}$ and $Y_{0,2}$ are reduced complete intersection sets of points with UPP. Furthermore, there are pencils $\mathcal P_{0,1}$ and $\mathcal P_{0,2}$ of forms of degree $d$ defining $Y_{0,1}$ and $Y_{0,2}$, respectively, so in particular the base locus of each of these pencils is the corresponding complete intersection set of points. Hence a general element in $\mathcal P_{0,1}$ and a general element in $\mathcal P_{0,2}$ will each be smooth. We know from the $h$-vector that there is a unique form $F_0$ of degree $2d-2$ containing $Y_0$. If this form were not reduced, then removing a repeated factor would yield a curve of lower degree containing all of the points, which is ruled out by the $h$-vector. Hence $F_0$ is reduced. We now claim that $F_0$ is smooth at the points of $Y_0$. By the monodromy principle, the alternative is that $F_0$ must be singular at each of the $d^2$ points of $Y_{0,1}$ and each of the $d^2$ points of $Y_{0,2}$. Let $G_{0,1}$ be a general element of $\mathcal P_{0,1}$ and let $G_{0,2}$ be a general element of $\mathcal P_{0,2}$. Since $\mathcal P_{0,1}$ and $\mathcal P_{0,2}$ each define a finite set of points, the product $G_0 = G_{0,1} G_{0,2}$ does not have any component in common with $F_0$. But then the complete intersection of $F_0$ and $G_0$ has degree at least $2 \cdot 2d^2 = 4d^2$, while the product of the degrees is only $(2d-2)(2d) < 4d^2$. This eliminates the possibility that $F_0$ is singular at each of the points, and we are done with the claim. It follows from what we have said that the first link exists. Furthermore, by choosing a general element of each pencil, $G_0$ will meet $F_0$ transversally at every point of intersection. Consequently for the linked set we have \begin{center} {\it $Y_{1,1}$ and $Y_{1,2}$ are reduced and disjoint from each other. } \end{center} \noindent The fact that $Y_{1,1}$ is a complete intersection of type $(d,d-2)$ follows immediately from the fact that $Y_{0,1}$ is a complete intersection of type $(d,d)$ and is linked to $Y_{1,1}$ by a complete intersection of type $(d,2d-2)$; similarly $Y_{1,2}$ is a complete intersection of type $(d,d-2)$. \vspace{.1in} \noindent \underline{Claim}: {\it The unique form $F_0$ of degree $2d-2$ in $I_{Y_0}$ is irreducible}. \vspace{.1in} Write $F_0 = H_1\cdots H_r$, where the $H_j$ are the irreducible factors of $F_0$.
The $d^2$ points of $Y_{0,1}$ have UPP (since $C_1$ is a smooth complete intersection curve) and the same is true for $Y_{0,2}$. It would not be true to say that $Y_0$ has UPP, but we will not need this. Let $W_1$ be the subset of $Y_{0,1}$ lying on $H_1$ and let $W_2$ be the subset of $Y_{0,2}$ lying on $H_1$. Let $P \in W_1$ and choose $Q \in Y_{0,1}$ such that $Q \notin W_1$. Harris showed that the monodromy groups for $C_1$ and for $C_2$ are both the full symmetric group, so there is some loop $\gamma$ in a suitable open set of $(\mathbb P^3)^*$ so that moving the hyperplane along $\gamma$ interchanges $P$ and $Q$ but leaves all other points of $Y_{0,1}$ fixed. We can further arrange it so that $\gamma$ does not move any points of $Y_{0,2}$ (it is enough that $\gamma$ avoid planes that are tangent to $C_2$). But $W_1 \cup W_2$ determines $H_1$, so by monodromy $(W_1 \backslash \{P\}) \cup \{Q \} \cup W_2$ determines $H_1$. (If it did not, $F_0$ would not be unique.) This is a contradiction since $Q$ does not lie on $H_1$. This proves the Claim. \end{proof} \begin{proposition} \label{link 2} The second link exists. That is, $Y_1$ is linked by a complete intersection $(G_{1,1}G_{1,2},F_1)$ of type $(2d-4, 2d-3)$ to $Y_2$, where $G_{1,1} \in [I_{Y_{1,1}}]_{d-2}$ and $G_{1,2} \in [I_{Y_{1,2}}]_{d-2}$. The residual, $Y_2 = Y_{2,1} \cup Y_{2,2}$, is reduced, and in particular $Y_{2,1}$ and $Y_{2,2}$ have no points in common. Note that $G_{1,1} G_{1,2}$ is the unique element of $I_{Y_1}$ of degree $2d-4$. \end{proposition} \begin{proof} Now $Y_1 = Y_{1,1} \cup Y_{1,2}$ is the residual in the first link; note again that without loss of generality, $Y_{1,1}$ and $Y_{1,2}$ are reduced and disjoint. We have seen that $Y_{0,1}$ is linked by a complete intersection of type $(d,2d-2)$ to the complete intersection $Y_{1,1}$ of type $(d,d-2)$, and similarly $Y_{0,2}$ is linked to $Y_{1,2}$ by a complete intersection of the same type. But above we computed the minimal free resolution of $Y_1$ and saw that there is a unique curve of degree $2d-4$ containing $Y_1$. Hence this curve is the union of the unique curve $G_{1,1}$ of degree $d-2$ containing $Y_{1,1}$ and the unique curve $G_{1,2}$ of degree $d-2$ containing $Y_{1,2}$; both $G_{1,1}$ and $G_{1,2}$ are reduced since $Y_1$ is. From the resolution we know not only that $I_{Y_1}$ has exactly one minimal generator of degree $2d-4$ (namely $G_{1,1} G_{1,2}$), but also exactly $d-2$ minimal generators of degree $2d-3$ and at least one minimal generator of degree $2d-2$. For the second link, we want to show that there is a regular sequence of type $(2d-4,2d-3)$ in $I_{Y_1}$. The alternative is that all the minimal generators of $I_{Y_1}$ of degree $\leq 2d-3$ have a common factor. For the sake of contradiction, suppose this were the case. Let $H$ be the common factor. Notice that $1 \leq \deg(H) < d-2$, since $H$ would have to be a factor of $G_{1,1}$ (and also of $G_{1,2}$). By slight abuse of notation, we will use $H$ to denote both the form and the corresponding curve. $G_{1,1}$ and $G_{1,2}$ are not necessarily irreducible, but they have to behave in the same way, by the Symmetry Principle. By assumption, all forms of degree $\leq 2d-3$ in $I_{Y_1}$ have $H$ as a factor, so in particular the product $G_{1,1} G_{1,2}$ has $H$ as a factor. If any polynomial divides both $G_{1,1}$ and $G_{1,2}$, then in fact we can divide $G_{1,1} G_{1,2}$ by this polynomial and get a polynomial of degree $< 2d-4$ containing $Y_1$, contradicting the Hilbert function of $Y_1$.
Thus $H$ must be reducible, with one (not necessarily irreducible) factor dividing $G_{1,1}$ and the other dividing $G_{1,2}$. By symmetry these factors must have the same degree, so $\deg(H)$ must be even, say $\deg(H) = 2 \ell$. Then there are reduced forms $P_1, P_2, M_1, M_2$ such that \begin{itemize} \item $\deg(P_1) = \deg(P_2) = \ell$, \item $H = P_1 P_2$, \item $G_{1,1} = M_1 P_1$, \item $G_{1,2} = M_2 P_2$, \item $G_{1,1}, G_{1,2}$ have no common factor, and \item all forms in $I_{Y_1}$ of degree $\leq 2d-3$ have $P_1 P_2$ as a factor. \end{itemize} \noindent Now, $Y_{1,1}$ is the reduced complete intersection of $M_1 P_1$ and a form of degree $d$, so it contains exactly $d \ell$ points of $H$. Similarly, $Y_{1,2}$ contains $d \ell$ points of $H$. Altogether, we conclude for the common factor $H$ that \begin{equation} \label{kd pts} \hbox{{\it $H$ contains $2 \ell d $ points of $Y_1$, an even number.}} \end{equation} Using methods introduced by Davis \cite{davis}, consider the exact sequence \begin{equation} \label{ses} 0 \rightarrow S/(I_{Y_1} : H) (-2\ell) \stackrel{\times H}{\longrightarrow} S/I_{Y_1} \rightarrow S/(I_{Y_1},H) \rightarrow 0. \end{equation} The ideal $I_{Y_1} : H$ is the saturated ideal of the set of points of $Y_1$ not on the curve defined by $H$. The saturation of the ideal $(I_{Y_1},H)$ defines the set of points of $Y_1$ on $H$. Let us denote by $A$ the points of $Y_1$ on $H$ and by $A'$ the points not on $H$, so $Y_1 = A \cup A'$. Notice that \begin{equation} \label{describe A} [(I_{Y_1},H)]_t = [(H)]_t \end{equation} for all $t \leq 2d-3$. Thus from (\ref{ses}) we also get for the $h$-vectors \[ \Delta h_{S/I_{A'}}(t-2\ell) = \Delta h_{S/I_{Y_1}}(t) - 2\ell \] for all $2 \ell \leq t \leq 2d-3$. We know that the $h$-vector of $S/I_{Y_1}$ is {\scriptsize \begin{equation} \label{hvtrY} \begin{array}{c|cccccccccccccccccccc} \hbox{deg:} & 0 & 1 & 2 & 3 & 4 & \dots & 2d-10 & 2d-9 & 2d-8 & 2d-7 & 2d-6 & 2d-5 & 2d-4 & 2d-3 & 2d-2 \\ \hline & 1 & 2 & 3 & 4 & 5 & \dots & 2d-9 & 2d-8 & 2d-7 & 2d-6 & 2d-5 & 2d-4 & 2d-4 & d-2 & 0\\ \end{array} \end{equation} } In particular, it is zero in degree $2d-2$. From (\ref{ses}) we thus get that $\Delta h_{S/I_{A'}}(2d-2-2\ell) = 0$. These facts imply that $h^1(\mathcal I_{Y_1}(2d-3)) = 0$ and $h^1(\mathcal I_{A'}(2d-3-2\ell)) = 0$. Now from the exact sequence \[ 0 \rightarrow (I_{Y_1} : H)(-2\ell) \stackrel{\times H}{\longrightarrow} I_{Y_1} \rightarrow (I_{Y_1},H) \rightarrow 0, \] sheafifying and taking cohomology in degree $2d-3$, we get that $(I_{Y_1},H)$ is saturated in degrees $\geq 2d-3$, i.e. it agrees with $I_A$ in degrees $\geq 2d-3$. By (\ref{describe A}), we see that in degree $2d-3$ we have $(I_{Y_1},H) = (H)$, which we now know agrees with $I_A$ in that degree. Then we can complete (\ref{describe A}) as follows: \[ [(I_{Y_1},H)]_t = [(H)]_t = [(I_{Y_1},H)^{sat}]_t = [I_A]_t \] for all $t \leq 2d-3$. It follows that $S/I_A$ has $h$-vector {\scriptsize \[ \begin{array}{c|cccccccccccccccccccc} \hbox{deg:} & 0 & 1 & 2 & 3 & \dots & 2\ell-2 & 2\ell-1 & 2\ell & 2\ell+1 & \dots & 2d-6 & 2d-5 & 2d-4 & 2d-3 & 2d-2 \\ \hline & 1 & 2 & 3 & 4 & \dots & 2\ell-1 & 2\ell &2\ell &2\ell & \dots & 2\ell & 2\ell & 2\ell &2\ell & 0 \\ \end{array} \] } \noindent In particular, $Y_1$ has exactly \[ 2\ell ( 2d-2\ell-1) + \binom{2\ell}{2} = 2\ell \left [ (2d-2\ell-1) + \frac{2\ell-1}{2} \right ] \] points on $H$.
But in (\ref{kd pts}) we computed that there are $2\ell d$ points of $Y_1$ on $H$, so \[ d = (2d-2\ell-1) + \frac{2\ell-1}{2} . \] But $2\ell$ is even, so the right-hand side is not even an integer, and we have a contradiction. Hence there is no common factor, and the second link also exists: $Y_1$ is linked to $Y_2$ by a complete intersection of type $(2d-4, 2d-3)$. It is clear from the uniqueness of the form of initial degree $2d-4$ that both forms participating in this link are minimal generators of $I_{Y_1}$. \end{proof} \vspace{.1in} The first two links were special (numerically), but all the remaining links follow the same pattern. For the $i$-th link, $i \geq 3$, we want to link $Y_{i-1} = Y_{i-1,1} \cup Y_{i-1,2}$ to $Y_i = Y_{i,1} \cup Y_{i,2}$ using a complete intersection of type $((d-i) + (d-i), 2d-2i)$. The ideal of $Y_{i-1}$ has $d-i+1$ minimal generators in degree $2d-2i$ (the initial degree), so certainly if the regular sequence exists, the link is done with two minimal generators (and so the residual has one fewer minimal generator). One of these minimal generators is the union of a curve $N_1$ of degree $d-i$ containing $Y_{i-1,1}$ and a curve $N_2$ of degree $d-i$ containing $Y_{i-1,2}$. Suppose first that all the desired links from Step (i) exist. We end with a scheme $Y_{d-2}$ of degree 4, whose minimal free resolution has a redundant term. This can only be a copy of $S(-3)$ in both free modules, i.e. the minimal free resolution of $Y_{d-2}$ is \[ 0 \rightarrow \begin{array}{c} S(-3) \\ \oplus \\ S(-4) \end{array} \rightarrow \begin{array}{c} S(-2)^2 \\ \oplus \\ S(-3) \end{array} \rightarrow I_{Y_{d-2}} \rightarrow 0. \] Even if all the links exist, we do not claim that the residuals continue to be reduced, even though we verified this in the first two links (Propositions \ref{link 1} and \ref{link 2}). In particular, $Y_{d-2}$ may be non-reduced. However, it must have degree 4, and it must contain a scheme $Y_{d-2,1}$ of degree 2 and a scheme $Y_{d-2,2}$ of degree 2, obtained via the parallel links. \vspace{.1in} \noindent { \underline{Claim}:} {\it $Y_{d-2}$ must have a subscheme of degree 3 lying on a line, which gives a contradiction}. Indeed, we have seen that $I_{Y_{d-2}}$ has two minimal generators of degree 2 and one of degree 3, and that it has degree 4. If there were a regular sequence of two forms of degree 2, $Y_{d-2}$ would be a complete intersection, contradicting what we know to be the minimal free resolution. Thus the forms of degree 2 have a degree 1 common divisor. By Davis's theorem $Y_{d-2}$ has a subscheme of degree 3 lying on this line. But then either $Y_{d-2,1}$ or $Y_{d-2,2}$, but not both, must lie on this line. This violates the Symmetry Principle, giving the desired contradiction. This would conclude not only the proof of the Claim, but in fact the proof that $R/I$ has the WLP (always assuming Conjecture \ref{force exactly one}). \vspace{.1in} The last issue is to deal with the existence of the remaining links. \begin{conjecture} \label{get contra} For any $i$, $3 \leq i \leq d-2$, if the $i$-th link does not exist then there is a common factor in the initial degree of the ideal of $Y_{i-1}$, and many points of $Y_{i-1}$ lie on this curve. Then a contradiction of the Symmetry Principle can be obtained in a way similar to that given in the proof of Proposition \ref{link 2}.
\end{conjecture} \section{Some additional results and consequences} \subsection{Jacobian ideals} \label{jacobian ideals} We now give a consequence of our results for Jacobian ideals, since the Jacobian ideal of a smooth surface in $\mathbb P^3$ of degree $d+1$ is a complete intersection of four forms of degree $d$. In \cite{ilardi} G. Ilardi posed the following question: \begin{quotation} {\it Does the Jacobian ideal of a smooth hypersurface have the Weak Lefschetz Property?} \end{quotation} Ilardi proved the following partial answer. We will slightly change her notation to agree with ours. \begin{proposition}[Ilardi \cite{ilardi}] Let $X : f = 0$ be a hypersurface in $\mathbb P^n$ of degree $d+1 > 2$, such that its singular locus $X_s$ has dimension at most $n -3$. Then the ideal $J(X)$ has the WLP in degree $d -1$, i.e. $\times \ell : [R/J(X)]_{d-1} \rightarrow [R/J(X)]_d$ is injective. \end{proposition} In \cite{AR}, A. Alzati and R. Re proved this same injectivity for any complete intersection generated by forms of degree $d$, not only for Jacobian ideals. We give some answers to Ilardi's question arising from our work. First we note that our results show that in some cases $R/J(X)$ has the full WLP rather than WLP in a certain degree. \begin{corollary} Let $F$ be a smooth hypersurface in $\mathbb P^3$ of degree 3, 4, 5 or 6. Then the Jacobian ideal $J = \langle \frac{\partial F}{\partial w}, \frac{\partial F}{\partial x}, \frac{\partial F}{\partial y}, \frac{\partial F}{\partial z}\rangle$ has the WLP. \end{corollary} Using Theorem \ref{d1=d2=d3=d4} we also get an improvement of Ilardi's result in the case of four variables: \begin{corollary} \label{jacobian result} Let $X : f = 0$ be a smooth hypersurface in $\mathbb P^3$ of degree $d+1 > 2$. Then the ideal $J(X)$ has the WLP in all degrees $\leq \lfloor\frac{3d+1}{2}\rfloor -2$, i.e. $\times \ell : [R/J(X)]_t \rightarrow [R/J(X)]_{t+1}$ is injective for all $t \leq \lfloor \frac{3d+1}{2} \rfloor - 2$. \end{corollary} We remark that WLP is equivalent to injectivity for all $t \leq 2d-3$, so Corollary \ref{jacobian result} covers approximately half the range left open by the Ilardi and Alzati-Re results. \subsection{A result for non-equigenerated complete intersections } \label{not equigenerated subsec} Even if we prove Conjectures \ref{force exactly one} and \ref{get contra}, it would remain to prove that {\it all} codimension four complete intersections (with generators of possibly different degrees) have the WLP. In the last proposition of this paper we deal with complete intersections of arbitrary degrees $d_1,d_2,d_3,d_4$, and from the stability of the associated syzygy bundle we deduce injectivity in a range that is unfortunately not as good as the one we get when $d_1 = \dots = d_4$ (see Theorem \ref{d1=d2=d3=d4}). However, it does not assume that the degrees are equal, and it introduces a different approach. For the sake of completeness we recall the following result on vector bundles that will be crucial in the proof of Proposition \ref{d1>=d2>=d3>=d4}. \begin{proposition}[\cite{EHV} Theorem 3.4 (due to Schneider), and \cite{EHV} Theorem 6.1] \label{EHV results} Let $\mathcal E$ be a normalized (i.e. $-2\le c_1(\mathcal E) \le 0$) rank 3 stable vector bundle on $\mathbb P^3$.
The restriction of $\mathcal E$ to a general plane is stable unless one of the following holds: \begin{enumerate} \item[(i)] $c_1(\mathcal E) = -2$ and $\mathcal E = T_{\PP^3}(-2) $, where $T_{\PP^3}$ is the tangent bundle on $\PP^3$; \item[(ii)] $ c_1(\mathcal E) = -1$ and $\mathcal E = \Omega ^1(1)$, where $\Omega = \Omega_{\PP^3}^1$ is the sheaf of K\"ahler differentials; \item [(iii)] $c_1 (\mathcal E) = 0$ and $c_2(\mathcal E) \le 3$; \item[(iv)] $c_1(\mathcal E) = 0$ and $\mathcal E = S^2(\mathcal N)$ where $\mathcal N$ is the null correlation bundle and $S^2(\mathcal N)$ is the second symmetric power; \item[(v)] $c_1(\mathcal E) = 0$ and $\mathcal E$ fits in the exact sequence \[ 0 \rightarrow \Omega (1) \rightarrow \mathcal E \rightarrow \mathcal O_{H_0}(-c_2(\mathcal E)+1) \rightarrow 0 \] for some plane $H_0$ in $\PP^3$. \end{enumerate} \end{proposition} \begin{proposition} \label{d1>=d2>=d3>=d4} Let $A = R/I = R/\langle F_1,F_2,F_3,F_4 \rangle$ where $I$ is a complete intersection and $\deg F_i = d_i$. Set $d_1+d_2+d_3+d_4=3\lambda +r$, $0\le r\le2$. Let $L$ be a general linear form. Then the multiplication maps $\times L \colon [A]_{t-1}\rightarrow[A]_t$ are injective for $t< \lambda$. \end{proposition} \begin{proof} We will assume that $d_1\le d_2\le d_3\le d_4$. We distinguish two cases. \begin{enumerate} \item If $\frac{d_1+d_2+d_3+d_4}{3}\le d_4$, then $d_1 + d_2+d_3+d_4 = 3 \lambda + r \leq 3d_4$, so $\lambda \leq d_4$. If $t < \lambda$ then $[A]_{t-1}$ and $[A]_t$ coincide with the corresponding components of the coordinate ring of a complete intersection of positive Krull dimension, so the result is obvious. \item Assume that $\frac{d_1+d_2+d_3+d_4}{3}> d_4$. Consider the syzygy bundle \[ {\mathcal E}:= \ker \left (\bigoplus _{i=1}^4{\mathcal O}_{\PP^3}(-d_i) \stackrel{(F_1,F_2,F_3,F_4)}{\longrightarrow} {\mathcal O}_{\PP^3} \right ) \] associated to $(F_1,F_2,F_3,F_4)$. ${\mathcal E}$ is a rank 3 vector bundle on $\PP^3$ with $c_1( {\mathcal E})=-( d_1+d_2+d_3+d_4)$. Note that $H^1_*(\mathcal E) = \bigoplus_{t \in \mathbb Z} H^1(\mathcal E(t)) \cong A$ (cf.\ \cite{BK} Proposition 2.1, although this was already used implicitly in \cite{HMNW} Theorem 2.3). Let us check that ${\mathcal E}$ is $\mu$-stable. To this end, we consider the exact sequence \[ 0 \longrightarrow {\mathcal O}_{\PP^3} \longrightarrow \bigoplus _{i=1}^4{\mathcal O}_{\PP^3}(d_i) \longrightarrow {\mathcal E}^\ast\longrightarrow 0. \] By hypothesis we have \[ \mu({\mathcal E}^\ast):=\frac{c_1({\mathcal E}^\ast)}{rk({\mathcal E}^\ast)}=\frac{\sum_{i=1}^4d_i}{3}> \max \{d_i \}=d_4. \] So we can apply \cite{bs} Corollary 2.7, and conclude that ${\mathcal E}^\ast$ is $\mu $-stable. Since $\mu$-stability is preserved under dualizing, we also have that ${\mathcal E}$ is $\mu $-stable. Now we want to consider the restriction to a general plane $H$. We claim that by Proposition \ref{EHV results}, the restriction ${\mathcal E}_{|H}$ of ${\mathcal E}$ to $H\subset \PP^3$ is also $\mu$-stable. This is because our rank 3 vector bundle is not one of the few exceptions listed in that result. Indeed, recall that $H^1_*(\mathcal E)$ is isomorphic to our artinian algebra $R/I$. The non-zero summands of $R/I$ go from the homogeneous part of degree zero to the homogeneous part of degree $d_1+d_2+d_3+d_4-4$ (assuming $d_i\ge 2$; if one is smaller than 2 we are dealing with a complete intersection in 3 variables and the result is known).
Moreover we know exactly the dimension of the homogeneous part of degree $i$, for $0\le i\le d_1+\dots+d_4-4$. They are $(1,4,h_2,\dots,h_2,4,1)$. We claim that our vector bundle $\mathcal E$ is not one of the exceptions listed in Proposition \ref{EHV results}. Exception (i) has $H^1_*(\mathcal E)=0$, while (ii) has $H^1_*(\mathcal E)$ concentrated in only {\it one} degree. So in neither case do we have $H^1_*(\mathcal E) \cong R/I$ (up to twist). For the remaining three exceptions, (v) corresponds again to a bundle $\mathcal E$ (or $\mathcal E^*$) with $H^1_*(\mathcal E)$ concentrated in only {\it one} degree and all the others zero, which is not our case -- indeed, for such a bundle only $H^1 (\mathcal E(-1))\ne 0$. Exception (iv) corresponds to the second symmetric power of the null correlation bundle $\mathcal N$. We claim that this again is not our case, because the cohomology satisfies $\dim H^1(S^2(\mathcal N)(-2))=1$, $\dim H^1(S^2(\mathcal N)(-1))= 4$ and $\dim H^1(S^2(\mathcal N))=5$, and this cannot be the start of the Hilbert function of a complete intersection. Indeed, the null correlation bundle is a rank 2 vector bundle $\mathcal N$ on $\mathbb P^3$ with $c_1(\mathcal N) = 0$. Therefore we have an exact sequence \[ 0 \rightarrow \bigwedge^2 \mathcal N \rightarrow \mathcal N \otimes \mathcal N \rightarrow S^2 (\mathcal N) \rightarrow 0, \] and $\bigwedge^2 \mathcal N \cong \mathcal O_{\mathbb P^3}(c_1 (\mathcal N)) \cong \mathcal O_{\mathbb P^3} $. We deduce that $H^1 ((\mathcal N \otimes \mathcal N) (t)) \cong H^1 (S^2 (\mathcal N)(t))$ for all $t \in \mathbb Z$, and the result follows by a calculation. Finally, we consider exception (iii). This corresponds to $c_1=0$ and $c_2\le 3$, and again we claim this is not our case. Indeed, assume without loss of generality that $2 \le d_1 \le d_2 \le d_3 \le d_4$. We know that $c_1 (\mathcal E) = -(d_1+d_2+d_3+d_4)$ and $c_2(\mathcal E) = d_1d_2+d_1d_3+\dots+d_3d_4$. We have $c_1(\mathcal E_{norm}) = 0$ if and only if $c_1(\mathcal E) \equiv 0 \pmod 3$. Thus we can write \[ -(d_1 + d_2 + d_3 + d_4) = -3p \] for some integer $p$. At the beginning of this proof we divided into two cases, and the current case is $\frac{d_1+d_2+d_3+d_4}{3}> d_4$. Thus $\frac{d_1+d_2+d_3+d_4}{3} \geq d_4+1$, or $d_1 + d_2 + d_3 \ge 2d_4 + 3$. Since $p = (d_1+d_2+d_3+d_4)/3 $ we have $c_1(\mathcal E_{norm}) = c_1 (\mathcal E(p)) = 0$ and \[ c_2(\mathcal E_{norm}) = c_2 (\mathcal E(p)) = c_2(\mathcal E) - 3p^2 = c_2(\mathcal E) - \frac{1}{3} \left (d_1^2 + d_2^2 + d_3^2 + d_4^2 + 2 \cdot \sum_{1 \leq i<j \leq 4} d_i d_j \right ). \] We will use the following facts. \begin{itemize} \item $c_2(\mathcal E) = \sum_{1 \leq i < j \leq 4} d_i d_j$; \item $d_1 + d_2 +d_3 \geq 2d_4 +3$; \item $d_1 d_2 \geq d_1^2$, $d_2 d_3 \geq d_2^2$, $d_4^2 \geq d_3^2$. \end{itemize} Thus \[ \begin{array}{lcl} c_2(\mathcal E_{norm}) & = & \displaystyle \frac{1}{3} \left [ \left (\sum_{1 \leq i < j \leq 4} d_i d_j \right ) - d_1^2 - d_2^2 - d_3^2 - d_4^2 \right ] \\ & = & \displaystyle \frac{1}{3} \left [ d_4 (d_1 + d_2 + d_3) + \left ( \sum_{1 \leq i < j \leq 3} d_i d_j \right ) - d_1^2 - d_2^2 - d_3^2 - d_4^2 \right ] \\ \vspace{.05in} & \geq & \frac{1}{3} \left [ d_4 (2d_4+3) + d_1d_2 + d_1 d_3 + d_2 d_3 - d_1^2 - d_2^2 -d_3^2 - d_4^2 \right ] \\ \vspace{.05in} & \geq & \frac{1}{3} (3d_4 + d_1 d_3) \\ & \geq & \frac{1}{3} (3 \cdot 2 + 2 \cdot 2) > 3.
\end{array} \] Thus our vector bundle $\mathcal E$ does not fall into any of the exceptions listed in Proposition \ref{EHV results}, and $\mathcal E_{|H}$ is stable for a general plane $H$. In particular, we have $H^0(H, {\mathcal E}_{|H}(t))=0$ for all $t<\lambda$. Looking at the long exact sequence in cohomology of the exact sequence \[ 0 \rightarrow \mathcal E(t-1) \stackrel{\times L}{\longrightarrow} \mathcal E(t) \rightarrow \mathcal E_{|H}(t) \rightarrow 0 \] we see that if $L$ is a linear form defining $H$ then $\times L : [A]_{t-1} \rightarrow [A]_t$ is injective for all $t<\lambda$. \end{enumerate} \end{proof} \section{Final comments and questions} \begin{enumerate} \item The most obvious open question is whether the WLP holds for arbitrary complete intersections in arbitrarily many variables. This is the Holy Grail of this line of investigation. So far it is known in two or three variables (\cite{HMNW}), for complete intersections of at most four quadrics (\cite{MN-quadrics}), and now we have results for complete intersections of forms of the same degree in four variables. It might be profitable to apply the methods of this paper to complete intersections generated by six forms of the same degree $d$ in six variables. One would then study the union of two complete intersection surfaces in $\mathbb P^5$, whose general hyperplane section is the union of two complete intersection curves in $\mathbb P^4$. Of course such a union in $\mathbb P^4$ is not ACM, unlike the hyperplane sections arising in this paper. \vspace{.1in} \item Once we show that all complete intersections have the WLP, it remains to show that they all have the SLP. This is open even in codimension 3, but it is known in codimension 2 \cite{HMNW}. \end{enumerate}
\section{Introduction} Dark matter dominates the outer galaxy but may not be absolutely stable. At the very least, this has to be demonstrated to within observational limits. Dark matter is generally considered to consist of weakly interacting particles that are hitherto undetected. The search for observational signatures is a major industry \citep{2010ARA&A..48..495F}, including deep underground direct searches \citep{2017NatPh..13..212L}, indirect searches in astronomical systems, including the Universe itself, and high energy particle accelerator searches at the LHC \citep{2017IJMPA..3230006K} and elsewhere. The most popular sought-after signals typically involve self-annihilation of heavy neutral particles into charged Standard Model particles \citep[see e.g.,][for the original idea]{Silk:1984zy}. However, many other avenues have been considered as early as the 1980s, including the decay of dark matter particles, e.g., \citet{Dicus:1977qy,Cabibbo:1981er} and \citet{Ellis:1984eq}. Decaying dark matter scenarios gained further traction in the past two decades after puzzling excesses in cosmic ray and X-ray observations emerged, see e.g., \citet{Chen:2008qs,Ibarra:2008jk,Yin:2008bs}, or for more modern references \citet{ 2020JCAP...08..035V,2022arXiv220306508C}, as well as \citet{Boyarsky:2014ska,Jeltema:2014qfa,Riemer-Sorensen:2014yda} and references therein. Since the injection of charged particles into dark matter halos and our cosmic neighbourhood could lead to excesses in the cosmic-ray, neutrino, X-ray, gamma-ray and radio spectra, these could be used to set strong limits on the dark matter characteristics and, in particular, to constrain its mass and interaction strength, and therefore its lifetime. More recently, however, it was suggested that dark matter could decay or annihilate into a dark (possibly secluded) sector. While such scenarios would be impossible to detect by traditional means, \cite{2008MNRAS.388.1869A,2010PhRvD.82l3521P,2014MNRAS.445..614W} showed that their impact on the number of satellite companions of the Milky Way would nonetheless provide a way to test their validity. More recently, by comparing cosmological simulations of decaying dark matter with the observed Milky Way satellite population, \citet{2022arXiv220111740M} were able to place constraints on the decay lifetime and the associated kick velocity. Here we go a step further and examine whether invisible dark matter decay would also affect galaxy dynamics and provide complementary limits to previous works, including \citet{2022arXiv220505636A}. These questions are more than academic. The Hubble tension (the discrepancy between the local measurement of the Hubble constant and that from the cosmic microwave background) has reinvigorated discussions about the possible instability of dark energy \citep{2019PhRvL.122v1301P,2021PhRvD.104l3550P} and dark matter \citep{2020JCAP...07..026P, 2021EPJC...81..954F}, but see also \citet{2022PhRvD.105j3512A}. Models based on either of these hypotheses have the potential of reducing the Hubble tension by modifying the early universe expansion rate relative to its current value. Another tension where decaying dark matter scenarios might help concerns the amplitude of matter fluctuations $S_8$ inferred from the cosmic microwave background (CMB) and from gravitational lensing, as the lensing value is smaller than the $\Lambda$CDM expectation based on the CMB \citep{2018PhRvD..98d3526A,2021PhRvD.104l3533A}. This scenario will be addressed by EUCLID studies of weak lensing \citep{2021JCAP...10..040H}.
Depending on the results, one might need to invoke new physics in the dark sector, and decaying dark matter in particular \cite{Poulin:2016nat}. Here we demonstrate that we can set constraints from galactic dynamics on both the lifetime and the characteristic kick velocity imparted by the decay. The kicks can significantly deplete the dark matter in low mass subhalos and alter the subhalo mass function of Milky Way like galaxies. We measure the radial motion component of halo stars in a specified shell of matter to constrain the change in Galactic mass and thereby the dark matter lifetime. Hierarchical structure formation within the cold dark matter paradigm is a noisy and complex process. A galaxy with a non-zero rate of change of mass will leave an imprint on the motions of stars within it. Bulk radial motion can be induced directly by the complex orbits of accreting or orbiting material, or by the existence of breathing modes excited by in-falling material \citep{2014MNRAS.440.1971W}. A population of stars that, to begin with, are in equilibrium with the Galaxy will drift radially outwards if the mass of the Galaxy decreases, or drift inwards if the mass increases. If the change of potential is slow, and the angular momentum is an adiabatic invariant during this change, then the net average radial motion in a spherical shell is proportional to the radius $r$ of the shell and to the fractional rate of change of mass $M$ enclosed by the shell \citep{2022RNAAS...6...26L}, \begin{eqnarray} V_{R}\equiv \frac{{\rm d} r}{{\rm d} t} & = & -\left(\frac{\dot{M}}{M}\right) r \label{equ:vr1} \\ &\approx& -\left(\frac{\dot{M}/M}{{\rm Gyr^{-1}}}\right) \left(\frac{r}{{\rm kpc}}\right) \>{\rm km}\,{\rm s}^{-1}. \label{equ:vr2} \end{eqnarray} Observationally, this can be detected by measuring the radial velocity of stars in the stellar halo. This offers the possibility to constrain the rate of change of mass in the Galaxy and the processes associated with it, such as the decay of dark matter. In addition to decaying dark matter, a galaxy can gain or lose mass within a given radius for various reasons, but these are either confined to the inner galaxy or are quite small. In the hierarchical structure formation paradigm, galaxies are formed by accretion and merger events that lead to the growth of their mass with cosmic time. Such sudden increases of mass can trigger inward radial motions of stars. However, the fractional rate of change of mass is high in the first $1-2$ billion years and decreases progressively with time. At late times, feedback from bursty star formation and supermassive black holes can generate an outflow of gas from the central regions of the Galaxy and trigger an outward radial motion of stars \citep{2012MNRAS.421.3464P}, but this change is mostly confined to the inner regions of the Galaxy. Galaxies lose mass over billions of years through baryonic radiative processes, but the implied radial motion is of order $\langle V_{R}\rangle \sim 0.03 \>{\rm km}\,{\rm s}^{-1}$ for a Milky Way-sized galaxy \citep{2022RNAAS...6...26L}. Stellar halo stars extending up to 100 kpc and beyond \citep{2008A&ARv..15..145H} are ideal targets for constraining the rate of change of mass in Milky Way sized galaxies. In order for this to work, the mean radial motion of stellar halo stars in the absence of change in galactic mass should be as close to zero as possible. However, it is not clear if that is true.
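As a simple numerical illustration of \autoref{equ:vr1} and \autoref{equ:vr2} (a minimal sketch of our own, not analysis code; the conversion rests on $1\>{\rm kpc}\,{\rm Gyr}^{-1} \approx 0.98 \>{\rm km}\,{\rm s}^{-1}$): \begin{verbatim} # Minimal check of equ:vr1 and equ:vr2 (illustrative only). # Units: kpc, Gyr, km/s. KPC_PER_GYR_IN_KM_S = 3.0857e16 / 3.156e16 # ~0.978 def v_radial(mdot_over_m_per_gyr, r_kpc): """Mean radial drift (km/s) of a shell at r_kpc for a fractional mass-change rate Mdot/M (per Gyr); outward for mass loss.""" return -mdot_over_m_per_gyr * r_kpc * KPC_PER_GYR_IN_KM_S # A 2% per Gyr mass loss (Mdot/M = -0.02/Gyr) at r = 30 kpc gives # an outward drift of ~0.6 km/s. print(v_radial(-0.02, 30.0)) # ~ +0.59 km/s \end{verbatim}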
In the current $\Lambda$CDM paradigm of structure formation, the stellar halo is formed by accretion of satellites onto the host galaxy \citep{2005ApJ...635..931B}. Over cosmic time, as the satellites disrupt and phase mix, the mean radial motion of the stellar halo is expected to be close to zero. However, not all satellites are fully phase-mixed, and significant substructure can be seen in the Galaxy both in position and velocity space \citep{2008ApJ...689..936J}; this can lead to a non-zero mean radial motion. The question is how large it is. To answer this, we make use of N-body simulations and investigate the mean radial motion of stars in simulated stellar halos. We compare this with the motion expected in the scenario where dark matter undergoes decay and discuss the physical implications of our results. Finally, we discuss the observational challenges of conducting such a study and whether current and future observational facilities are sufficiently equipped to do so. \section{Methods} In this paper we study the bulk radial motion of stars in stellar halos, and for this we make use of the N-body simulations described in \autoref{sec:stellar_halos}. If the stars in the stellar halo are in equilibrium then the mean radial velocity should be zero. However, some accretion events of the stellar halo are not well mixed in phase space and have not reached an equilibrium. These show up as substructures in phase space and are associated with significant non-zero bulk motion. Since we are interested in the equilibrium component of the stellar halo, we identify and remove the substructures using a clustering algorithm. This is described in \autoref{sec:clustering}. For certain halos, although the mean motion of stars in a shell is not zero, the distribution of radial velocities is quite symmetric about a central value. Hence, we devise an alternate scheme to measure the central velocity of stars in a shell, described in \autoref{sec:central_velocity}. \begin{table}[htb!] \caption{Simulated stellar halos. The halos starting with \texttt{bj\_} are from \citet{2005ApJ...635..931B} while those starting with \texttt{fire\_} are from \citet{2020ApJS..246....6S}.} \begin{tabular}{lll} Name & Accretion & Simulation \\ & history & type \\ \hline \texttt{bj\_2} & $\Lambda$CDM & Idealized \\ \texttt{bj\_5} & $\Lambda$CDM & Idealized \\ \texttt{bj\_7} & $\Lambda$CDM & Idealized \\ \texttt{bj\_9} & $\Lambda$CDM & Idealized \\ \texttt{bj\_10} & $\Lambda$CDM & Idealized \\ \texttt{bj\_12} & $\Lambda$CDM & Idealized \\ \texttt{bj\_14} & $\Lambda$CDM & Idealized \\ \texttt{bj\_15} & $\Lambda$CDM & Idealized \\ \texttt{bj\_17} & $\Lambda$CDM & Idealized \\ \texttt{bj\_20} & $\Lambda$CDM & Idealized \\ \texttt{bj\_lowl} & Artificial & Idealized \\ \texttt{bj\_highl} & Artificial & Idealized \\ \texttt{bj\_rad} & Artificial & Idealized \\ \texttt{bj\_circular} & Artificial & Idealized \\ \texttt{bj\_old} & Artificial & Idealized \\ \texttt{bj\_young} & Artificial & Idealized \\ \texttt{fire\_m12f} & $\Lambda$CDM & Cosmological \\ \texttt{fire\_m12i} & $\Lambda$CDM & Cosmological \\ \texttt{fire\_m12m} & $\Lambda$CDM & Cosmological \\ \hline \end{tabular} \label{tab:dataset} \end{table} \subsection{Simulated stellar halos} \label{sec:stellar_halos} To study the radial velocity of stars in the stellar halo, we make use of N-body simulations. We use three different types of simulations, and these are listed in \autoref{tab:dataset}.
First is a suite of 10 stellar halos simulated by \citet{2005ApJ...635..931B} (BJ05), named \texttt{bj\_X} with \texttt{X} $\in \{2,5,7,9,10,12,14,15,17,20\}$. These have accretion histories derived from a semi-analytical scheme in accordance with the $\Lambda$CDM cosmology. Here, a stellar halo is built up entirely by accretion of satellites. The satellites are modelled by N-body particles evolved individually in an analytical potential. Hence, they are called idealized simulations. Baryons are embedded deep in the inner regions. This is modelled by assigning a mass-to-light ratio to each N-body particle based on its energy, with more tightly bound particles having a lower mass-to-light ratio. Second is a suite of six stellar halos that were simulated by \citet{2008ApJ...689..936J} (JB08) but with artificial accretion histories: \texttt{lowl} (made up of predominantly low luminosity satellites), \texttt{highl} (made up of predominantly high luminosity satellites), \texttt{old} (made up of predominantly old accretion events), \texttt{young} (made up of predominantly young accretion events), \texttt{rad} (made up of accretion events predominantly on radial orbits), and \texttt{circular} (made up of accretion events predominantly on circular orbits). Except for the accretion history, the JB08 halos are otherwise simulated in the same way as the BJ05 halos. Third, we use 3 Milky Way-sized galaxies simulated by the FIRE team \citep{2020ApJS..246....6S,2018MNRAS.480..800H,2016ApJ...827L..23W}: \texttt{fire\_m12f}, \texttt{fire\_m12i}, \texttt{fire\_m12m}. These are state-of-the-art hydrodynamical cosmological simulations including physical processes such as cooling, star formation and feedback. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{fig1.pdf}\caption{Radial velocity distribution of N-body particles in the stellar halos simulated by \citet{2005ApJ...635..931B}, \citet{2008ApJ...689..936J} and \citet{2020ApJS..246....6S}. Each N-body particle has a star forming mass associated with it, and the distributions are weighted accordingly. First and third columns show the distribution of stars in the $(r,V_r)$ plane in spherical Galactocentric coordinates. Second and fourth columns show the mean radial velocity measured in spherical shells (width 5 kpc) as a function of radius (orange line). Shown alongside (blue line) is the velocity about which the distribution is symmetric. The total number of stars is denoted at the top, followed by the number of stars in the shell $15<r/{\rm kpc}<45$. For the same shell, the text at the bottom denotes the mean $V_r$, the error on the mean, and the central velocity based on symmetry. \label{fig:r_vr}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{fig2.pdf}\caption{Same as \autoref{fig:r_vr} but for equal stellar mass particles. The code {\it GALAXIA} was used to spawn equal stellar mass particles from N-body particles with a given star forming mass. \label{fig:r_vr_galaxia}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{fig3.pdf}\caption{Same as \autoref{fig:r_vr_galaxia} but with substructures filtered out using the clustering algorithm {\it ENLINK}. \label{fig:r_vr_galaxia_cluster}} \end{figure*} \subsection{Clustering} \label{sec:clustering} To identify and remove substructures in the stellar halo, we use the {\it ENLINK} clustering algorithm \citep{2009ApJ...703.1061S}, which will be made publicly available \footnote{\url{https://github.com/sanjibs/enlink}}.
For examples of its application to BJ05 halos see \citet{2010ApJ...722..750S} and \citet{2011ApJ...728..106S}. We apply it over the six-dimensional $(x,y,z,v_x,v_y,v_z)$ phase space specified in Cartesian coordinates. The main feature of {\it ENLINK} that is useful for our application is its ability to identify structures of arbitrary shape and size in any given multidimensional space. Unlike most other clustering algorithms that use a global metric, {\it ENLINK} makes use of a locally adaptive metric based on the idea of Shannon entropy and calculated using a binary space partitioning tree \citep{2006MNRAS.373.1293S}. As mentioned earlier, in the BJ05 and JB08 halos each N-body particle has a different stellar mass. It is difficult to do clustering analysis on particles with unequal weights, because an isolated particle of large weight will spuriously appear as a region of overdensity. Hence, for these halos we use the code GALAXIA \citep{2011ApJ...730....3S} to sample star particles from these simulations. The number of stars spawned by an N-body particle is equal to its total stellar mass divided by the mean mass of a star for a given stellar initial mass function (IMF). The main advantage of using GALAXIA is that it samples the stars in the six-dimensional phase space; hence the sampled stars have kinematics consistent with the original simulation. \subsection{The central radial velocity based on symmetry}\label{sec:central_velocity} For certain accretion events stars are not distributed over the full available phase space of the orbit. This means that at any given $r$ the mean motion is non-zero. However, the distribution of radial velocities is symmetric, and the center of symmetry is close to zero. To compute the central velocity, we divide the sample into two subsamples about a trial center of symmetry. Next, we minimize the two-sample Kolmogorov-Smirnov statistic \begin{equation} D_{n,m}=\frac{nm}{n+m}\,\sup_{v_r} |F_{1,n}(v_r)-F_{2,m}(v_r)| \end{equation} over the trial center to locate the center of symmetry. Here, $F_{1,n}$ and $F_{2,m}$ are the cumulative distribution functions of the first and the second sample, and $n$ and $m$ are the respective numbers of data points in each of the samples. \section{Mean radial velocity in simulated stellar halos} We begin by studying the radial velocity distribution of simulated stellar halos. \autoref{fig:r_vr} shows the distribution of stars in the Galactocentric $(r,V_r)$ space, where $r$ is the radial distance and $V_r$ the radial velocity (panels in the first and third columns). The mean radial velocity $\langle V_r \rangle$ measured in spherical shells of width $\Delta R$ as a function of radius $r$ is shown in the panels of the second and fourth columns (orange line). The central radial velocity, measured as the velocity about which the radial velocity distribution is symmetric, is also shown alongside (blue line). The 16 and 84 percentile spreads about the estimated mean and central velocity are denoted by the shaded regions. The spread was estimated using the technique of bootstrapping. The mean and central radial velocity for stars in the shell $15<r/{\rm kpc}<45$ are shown in the bottom right of each panel. In \autoref{fig:r_vr}, for the idealized halos the stars are weighted by the star forming mass of each N-body particle and the bound satellites are removed. Unlike the idealized halos, the cosmological halos also contain disc stars. To get rid of disc stars, we restrict the analysis to stars with $(R > 20\; {\rm kpc})\; {\rm or}\; (|z| > 10\; {\rm kpc})$.
This is the reason for the vertical streaks at $r=10$ kpc and $r=20$ kpc in the \texttt{fire} halos. In \autoref{fig:r_vr}, significant substructure in the $(r,V_r)$ space can be seen. The mean radial velocity is also found to show significant fluctuations. Next, instead of the N-body particles we repeat the analysis with stellar particles of equal stellar mass spawned by the code {\sl Galaxia}. Results are shown in \autoref{fig:r_vr_galaxia}. Bound satellites were not removed and can be seen as dense knots. Substructures are not as clear as before, and this is due to two reasons. First, the bound satellites, being very dense, increase the range of density mapped by the color scale, and this lowers the contrast of the less dense substructures. Second, the star-spawning process of {\sl Galaxia} also leads to some added scatter of stars in phase space. In spite of these minor differences, the mean radial velocity profiles are very similar to those in \autoref{fig:r_vr}. Next we use the ENLINK clustering algorithm to remove the substructures and retain only the dominant smooth component of the halo. These results are shown in \autoref{fig:r_vr_galaxia_cluster}. The distribution in $(r,V_r)$ space is much smoother and the mean radial velocity profiles have markedly smaller fluctuations. \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{fig4.pdf}\caption{Absolute mean radial velocity of stars in a spherical shell for different simulated $\Lambda$CDM stellar halos. Results of the full sample are compared with the sample where substructures were filtered out. \label{fig:stats1}} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{fig5.pdf}\caption{Absolute central radial velocity (see \autoref{sec:central_velocity}) of stars in a spherical shell for different simulated $\Lambda$CDM stellar halos. Results of the full sample are compared with the sample where substructures were filtered out. \label{fig:stats2}} \end{figure} \subsection{Mean radial velocity in shell $15 < r/{\rm kpc} < 45$}\label{sec:mean_radial_motion} We now focus on the mean radial velocity in the spherical shell $15 < r/{\rm kpc} < 45$ centered around $r=30$ kpc. We choose this radius for the following reasons. From \autoref{equ:vr1} it is clear that the mean radial velocity due to decay is proportional to radius. However, the number density of stars in the Galaxy decreases sharply with radius, making it difficult to find a large number of stars to observe. The density of stars in the stellar halo is well approximated by a Hernquist profile \citep{2005ApJ...635..931B}: it is high in the center and decreases with radius (varying as $r^{-1}$ at small $r$ and $r^{-4}$ at large $r$). The further the stars are, the more exposure time is required to observe them. Additionally, the stellar halo is also less phase mixed at large $r$, because the relaxation time of stars there is large. This can be seen in \autoref{fig:r_vr_galaxia} and \autoref{fig:r_vr_galaxia_cluster}. Finally, for $r<15$ kpc, the stellar population is dominated by disc stars, which can have significant bulk motion due to non-axisymmetric structures like the spiral arms and the bar. Velocity fluctuations in the disc of the order of 5 to 10 km s$^{-1}$ were shown by \citet{2018A&A...616A..11G, 2019MNRAS.482.4215K, 2019MNRAS.489.4962K}.
Also, orbiting satellites like the Sagittarius dwarf galaxy can disturb the disc; an example of this is the $(z,V_z)$ phase space spiral \citep{2018Natur.561..360A,2018MNRAS.481..286L, 2019MNRAS.486.1167B, 2019MNRAS.485.3134L}. In \autoref{fig:stats1}, we show the shell radial speed using a bar plot for different stellar halos simulated with $\Lambda$CDM accretion histories. Results for both the full sample and the sample where substructures were filtered out are shown together. Note that we analyse speed instead of velocity. This is to improve the statistics, as we only have 13 halos. We assume that the mean radial velocity of stars in a shell is equally likely to be either positive or negative. This is very close to true for our sample, where the mean velocity of the shells was found to be close to zero. It can be seen that for the full sample the shell speed is typically small (the median over 13 halos being 1.2 km s$^{-1}$), but for four halos it is larger than 4 km s$^{-1}$. After filtering out substructures, a significant reduction in the speed can be seen, the median shell speed being 0.6 km s$^{-1}$, with only one halo having a speed above 4 km s$^{-1}$. This implies that 75\% of halos have $\langle V_r \rangle < 0.6$ km s$^{-1}$. In \autoref{fig:stats2} we compare the mean shell radial speed with the central velocity based on symmetry. Although the mean and central radial velocity values differ slightly from halo to halo, overall the two values are very similar for most halos. \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{fig6.pdf}\caption{Mass loss rate due to decay of dark matter for a dark matter halo as a function of $V_{\rm kick}$ and decay lifetime $\tau$. Mass loss rate is shown for a sphere of radius 30 kpc and 13 Gyr after the formation of the halo. The solid lines are contours for mass loss rates per Gyr of 2\%, 20\% and 200\%. The result is for an NFW halo with a virial mass of $0.8 \times 10^{12}\ M_{\odot}$ and concentration parameter $c=20$, but approximated by a Plummer model following \citet{2008MNRAS.388.1869A}.\label{fig:DDM_constrain}} \end{figure} \section{Implications for detecting dark matter decay} Invisible decay of dark matter is currently unconstrained by dark matter detection experiments, both direct and indirect. Here we consider two decay mechanisms and explore whether we can detect them using the kinematics of stars in the stellar halo. The first mechanism is the full decay of a dark matter particle into some form of radiation ($BR = 1$). The second mechanism is decay into some radiation and a daughter particle lighter than the dark matter. An example of the former scenario is the 2-body decay of the supersymmetric scalar partner of the axion into two axions, where the axions can be dark radiation \citep{Kawasaki:2007mk}. Examples of the latter scenario include models where the dark matter is coupled to a dark photon/dark Z' \citep{Boehm:2003hm} or, in the case of supersymmetry, a sneutrino decaying into a neutralino-neutrino pair or a gravitino-neutrino pair \citep{Kim:2022gpl}. These different channels may eventually lead to visible signatures, including in ICECUBE if the dark matter produces high energy neutrinos, but they may also stay invisible for some parts of the parameter space \footnote{We disregard $\tilde{G} \rightarrow \chi + \gamma$ as this could be in principle constrained by traditional means.}. In both of the above scenarios, the mass enclosed by a shell of any given radius will decrease with time.
In the first scenario there is a direct decrease of the mass enclosed by a shell. In the second scenario, the decay imparts a kick to the daughter particle, which induces an expansion of the dark matter halo. In principle, the change of mass can be detected as a non-zero outward radial motion of stars. In \autoref{sec:mean_radial_motion}, we saw that the median expected radial shell speed of the stellar halo at a radius of 30 kpc is 0.6 km s$^{-1}$. Using \autoref{equ:vr1}, this translates to a mass loss rate $-\dot{M}/M$ of 0.02 per Gyr. Hence, if the mass loss rate due to decay is higher than 2\% per Gyr, then it should be detectable using the mean motion of stellar halo stars. The dark matter decay is characterized by the decay lifetime $\tau$ and, in the case where there is a daughter particle, the kick velocity $V_{\rm kick}$ imparted to the daughter particle. We now explore in detail the region of the parameter space over which dark matter decay should be detectable using the mean motion of stellar halo stars. In general, for dark matter decaying with lifetime $\tau$, the number of unstable dark matter particles $N$ at a time $t$ since the formation of the halo is given by \begin{equation} N=N_0 \exp(-t/\tau), \end{equation} and the rate of change by ${\rm d}N/{\rm d}t=-N/\tau$, where $N_0$ is the initial number of unstable dark matter particles at $t=0$. For dark matter decaying purely into radiation we have $-\dot{M}/M=1/\tau$. A detectable rate of $-\dot{M}/M>0.02\ {\rm Gyr}^{-1}$ thus implies $\tau<50$ Gyr. For the case where dark matter decays into a daughter particle, following previous studies \citep{2008MNRAS.388.1869A,2022arXiv220111740M}, we assume a dark matter particle $\chi$ of mass $m$ decays with lifetime $\tau$ into a massive daughter particle $\chi'$ of mass $m'$ and a lighter, possibly massless, dark radiation species $\gamma'$, \begin{equation} \chi \rightarrow \chi' + \gamma'. \end{equation} Due to conservation of momentum, the decay imparts a velocity kick of \begin{equation} V_{\rm kick}=\epsilon c, \end{equation} where $\epsilon=(m-m')/m$ is the mass splitting factor. The kick increases the velocity dispersion of the dark matter particles, which in turn will force the halo to expand. Given that the dynamical time is in general smaller than the decay lifetime, the halo should quickly virialize, such that the expansion can be considered to be adiabatic. Approximating the dark matter halo with a Plummer model, \citet{2008MNRAS.388.1869A} derived the increase of its scale radius $r_p$ with time as \begin{eqnarray} \frac{{\rm d} r_p}{{\rm d}t} &=&\frac{64 r_p^2 c^2}{3\pi GM^2}\frac{\exp[-(t+t_f)/\tau]}{\tau}\times \nonumber \\ &&\left[\frac{\chi}{1+\chi}-\left(1+\frac{3\pi GM}{64c^2 r_p}\right)\frac{\chi(2+\chi)}{2(1+\chi)}\right]. \end{eqnarray} For the Plummer model, the mass enclosed within radius $r$ is given by \begin{equation} M(r)=M_{\rm vir} \frac{r^3}{(r_p^2+r^2)^{3/2}}. \end{equation} Due to the expansion of the dark matter halo, the mass enclosed in a given radial shell decreases. We estimate this by taking the derivative of $M(r)$ with time, which gives \begin{equation} \frac{\dot{M}}{M}=-\frac{{\rm d} r_p}{{\rm d}t}\frac{3r_p}{r_p^2+r^2}. \end{equation} In \autoref{fig:DDM_constrain}, we explore the mass loss rate at a radius of 30 kpc for a Milky Way mass halo as a function of the parameters $\tau$ and $V_{\rm kick}$.
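The pieces above can be assembled into a short numerical sketch (our own illustration, not the code used for \autoref{fig:DDM_constrain}; the values of $r_p$ and ${\rm d}r_p/{\rm d}t$ below are placeholders rather than solutions of the evolution equation): \begin{verbatim} # Sketch of the two mass-loss channels (illustrative only). # Units: kpc, Gyr; rates per Gyr. def tau_limit(detectable_rate=0.02): """Pure decay into radiation: -Mdot/M = 1/tau, so a detectable rate of 2% per Gyr corresponds to tau < 50 Gyr.""" return 1.0 / detectable_rate def mdot_over_m_shell(r, r_p, drp_dt): """Fractional mass-change rate inside radius r of a Plummer sphere whose scale radius r_p grows at rate drp_dt.""" return -3.0 * r_p * drp_dt / (r_p**2 + r**2) print(tau_limit()) # 50.0 (Gyr) print(mdot_over_m_shell(30.0, 25.0, 0.5)) # placeholder r_p, drp_dt \end{verbatim}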
Following \citet{2008MNRAS.388.1869A}, we approximate an NFW halo with a Plummer model: for an NFW halo with scale radius $r_s$ and concentration parameter $c$, the equivalent Plummer scale radius $r_p$ is given by \begin{equation} r_p=r_s\frac{3\pi}{16} \left\{\frac{[\ln(1+c)-c/(1+c)]^2}{1-1/(1+c)^2-2\ln(1+c)/(1+c)}\right\}. \end{equation} \autoref{fig:DDM_constrain} shows that for $V_{\rm kick}<100$ km s$^{-1}$ the mass loss rate is less than 2\% per Gyr. For $V_{\rm kick}>100$ km s$^{-1}$, however, the rate increases steadily with $V_{\rm kick}$ at any given $\tau$, and for a given $V_{\rm kick}$ the mass loss rate peaks for $\tau$ close to 10 Gyr. Contour lines for mass loss rates of 2\%, 20\% and 200\% per Gyr are shown in the figure. The region of parameter space over which dark matter decay should be detectable, i.e., where the mass loss rate exceeds 2\% per Gyr, lies to the right of the contour labelled 2 in \autoref{fig:DDM_constrain}. Hence, for $\tau=10$ Gyr we can rule out $V_{\rm kick} > 100$ km\ s$^{-1}$. In contrast, $V_{\rm kick} \approx 10^4\ {\rm km\ s}^{-1}$ is required to resolve the $H_0$ tension \citep{2019PhRvD..99l1302V}, while $V_{\rm kick} \approx 10^5\ {\rm km\ s}^{-1}$ is required to resolve the $S_8$ tension \citep{2021PhRvD.104l3533A}. We are therefore sensitive to values of $V_{\rm kick}$ much lower than those required to resolve the tensions in the Hubble parameter $H_0$ and the amplitude parameter $S_8$. Using the observed population of Milky Way satellites, \citet{2022arXiv220111740M} excluded lifetimes $\tau < 18$ Gyr (29 Gyr) for $V_{\rm kick}=20$ km\ s$^{-1}$ (40 km\ s$^{-1}$). This is stricter than the limits that we can set based on the Milky Way's stellar halo kinematics, because the effect of a given kick is stronger for smaller subhalos owing to their shallower potential wells. However, connecting the subhalos in simulations to luminous satellite galaxies requires significant assumptions about poorly understood baryonic processes. In this sense, our results, which are based on independent physics, are useful and play a complementary role. \begin{figure}[!htb] \centering\includegraphics[width=0.49\textwidth]{fig7.pdf}\caption{Cumulative number of stars per square degree lying in the shell $15<R/{\rm kpc}<45$ with $|z|/{\rm kpc}>10$, as a function of $G$-band apparent magnitude, based on a simulation of the Milky Way by the code {\it GALAXIA}. \label{fig:stellar_halo_cumulative}} \end{figure} \section{Observational feasibility} We now look into the feasibility of conducting a study to measure the mean radial motion of Milky Way halo stars in the $15<r/{\rm kpc}<45$ spherical shell. Two independent arguments suggest that of the order of a million stars would be required to detect a mean radial motion of 0.6 km s$^{-1}$ or greater. First, a large sample of halo stars is required to perform clustering and filter out substructures, without which the radial motion would be too noisy; \autoref{fig:r_vr_galaxia_cluster} shows that of the order of 1 million stars suffices to suppress the noise due to substructures. Second, to measure a mean motion of 0.6 km s$^{-1}$, an uncertainty of less than 0.1 km s$^{-1}$ is desirable. Given that the radial velocity dispersion of halo stars is 140 km s$^{-1}$ \citep{2003A&A...409..523R}, and that the standard error of the mean scales as $\sigma/\sqrt{N}$, of the order of 1 million stars are required (see the sketch below).
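To make the second argument concrete, here is a minimal sketch of the sample-size estimate (Python; the numbers are those quoted above):
\begin{verbatim}
sigma_v = 140.0    # radial velocity dispersion of halo stars [km/s]
target_err = 0.1   # desired uncertainty on the mean [km/s]

# Standard error of the mean is sigma/sqrt(N), so N = (sigma/err)^2
n_stars = (sigma_v / target_err) ** 2
print(f"{n_stars:.1e} stars needed")   # ~2.0e+06, of order a million
\end{verbatim}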
\autoref{fig:stellar_halo_cumulative} shows the cumulative number density of stars lying in the shell $15<R/{\rm kpc}<45$ as a function of $G$-band magnitude, based on a simulation of the Milky Way with {\it GALAXIA}. There are close to 90 stars per square degree for $G<20$. A multi-object spectroscopic survey in either the northern or the southern hemisphere targeting 10,000 to 15,000 square degrees can therefore easily observe close to a million such stars. We now discuss the exposure time per pointing and the total duration required to complete a million-star survey of halo stars. Given that the intrinsic radial velocity dispersion of halo stars is close to 140 km s$^{-1}$, the requirements on the precision of the radial velocity measurements of individual stars are less stringent; even a precision of 10--20 km s$^{-1}$ should be sufficient. Several wide-field surveys have measured stellar radial velocities for millions of stars; these include APOGEE \citep[0.6M,][]{2017AJ....154...94M}, GALAH+ \citep[0.6M,][]{2021MNRAS.506..150B}, LAMOST \citep[7M,][]{2012RAA....12..723Z} and {\it Gaia} \citep[33M,][]{2021A&A...649A...1G}. All of these surveys have been carried out on moderately small telescopes (1--4m diameter) or at mediocre sites, or both, and thus their magnitude limits ($V\lesssim 15$) are too bright for the proposed experiment; their typical measurement accuracies are $1{-}10$ km s$^{-1}$, depending on the survey. With the advent of wide-field fibre positioners on 8m-class telescopes (e.g. PFS on the 8m Subaru, the proposed 12m WST in Chile), or on 4m-class telescopes at exceptional sites, e.g. 4MOST on VISTA \citep{2019Msngr.175....3D}, we are entering a new era in which accurate stellar radial velocities will be routinely accessible down to fainter magnitude limits. We focus on 4MOST for a more detailed study to demonstrate the feasibility of our experiment. This is the next major ESO wide-field spectroscopic facility, to be delivered by 2025, involving a dedicated optical 4m telescope and multi-object spectrographs. 4MOST can simultaneously observe 1462 stars at low spectral resolution ($R = \lambda/\delta \lambda \approx 4000 - 7500$) and 812 stars in high-resolution ($R \approx 18000 - 21000$) mode. The expected 4MOST limit\footnote{\url{https://www.4most.eu/cms/facility/overview}} for a 2-hour exposure in low resolution is a radial velocity precision of 1 km s$^{-1}$ (1$\sigma$) at $V\sim 18$, increasing to 3 km s$^{-1}$ at $V\sim 20$; these precisions are achievable with proper consideration of which spectral features are not affected by stellar winds \citep{2018MNRAS.481..645Z}. In fact, the 4MOST low-resolution halo survey \citep{2019Msngr.175...23H} plans to observe almost all halo giants with $G<20$ mag over 10,000 square degrees, about 1.5 million halo stars. Based on GALAXIA, about one third of these stars (0.5 million) will lie in our desired radial shell. We now estimate the time required if 4MOST were to focus exclusively on halo stars that lie in our desired shell. 4MOST has a field of view of 2.5 square degrees, so there are about 225 targets per 4MOST pointing that lie in our desired shell, well within its multiplexing capability. With the 2 hours of exposure required for a 3 km s$^{-1}$ radial velocity precision, 4MOST can acquire 4 fields per night, or about 14,600 square degrees (1.3 million $G<20$ stars in our required shell) in 5 years, assuming 80\% of the time is available for observations; this arithmetic is laid out in the sketch below.
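The survey-duration arithmetic above can be made explicit. A minimal sketch (Python), using only the numbers quoted in the text for this hypothetical dedicated shell survey:
\begin{verbatim}
fov_sqdeg = 2.5        # 4MOST field of view [sq deg]
fields_per_night = 4   # with 2-hour exposures per field
usable_fraction = 0.8  # fraction of time available for observations
years = 5
stars_per_sqdeg = 90   # G < 20 shell stars per sq deg (from GALAXIA)

nights = years * 365 * usable_fraction             # ~1460 nights
area = nights * fields_per_night * fov_sqdeg       # ~14,600 sq deg
stars = area * stars_per_sqdeg                     # ~1.3 million stars
print(f"{area:.0f} sq deg, {stars / 1e6:.1f} million shell stars")
\end{verbatim}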
Given the high multiplexing of 4MOST, one could in principle go fainter, to say $G\approx 22$, and try to fill all 1462 low-resolution fibres with stars in our desired shell. However, this is not worthwhile: the factor of $\sim$2.5 increase in exposure time per magnitude cancels the gain from the increased target density. Looking ahead to facilities proposed for the next decade, e.g. the Wide-field Survey Telescope \citep[WST,][]{2017arXiv170101976E}, we note that 4MOST is mounted on the 4m VISTA telescope. For the same field of view and fibre density, a 12m-class telescope, as proposed for the ESO WST, could carry out the above survey about 9 times faster, since the survey speed scales with the collecting area and $(12/4)^2 = 9$. This remarkable prospect would allow similar experiments on external galaxies and many other sophisticated tests of dark matter properties. With large data sets of halo stars one can also learn about the aspherical nature of the halo, as has been shown in simulations, e.g. the twisting and stretching of halos \citep{2021ApJ...913...36E} or the LMC-induced sloshing of the halo \citep{2021MNRAS.506.2677E}. \section{Conclusions} Under the assumption that the stars in the stellar halo are in equilibrium with the potential of the Milky Way, there should be no net radial motion of stars. Any change in the mass of the Galaxy is predicted to generate a bulk radial motion of stars in the Galaxy. Hence, a measurement of non-zero bulk radial motion puts constraints on the rate of change of mass of the Galaxy. With this in mind, we have studied the expected bulk radial motion of stars in stellar halos formed in accordance with the currently favoured $\Lambda$CDM model of structure formation. Our main result is that the mean radial velocity measured in a radial shell $15<R/{\rm kpc}<45$ is less than 0.6 km s$^{-1}$ for 75\% of the $\Lambda$CDM halos. This implies that, using stellar halo stars, we can measure the rate of change of mass of the Galaxy provided it is greater than 2\% per Gyr. If such a rate of change of mass is due to the decay of dark matter purely into radiation, then our results suggest that we can detect decays with lifetimes shorter than 50 Gyr. If the change in mass is due to the decay of dark matter into radiation and daughter particles, then our results suggest that we can detect a decay with a kick velocity of the order of 100 km s$^{-1}$ and a lifetime of 10 Gyr; if the kick velocity is larger than 100 km s$^{-1}$, the decay is detectable over a wide range of lifetimes. In order to conduct such an experiment and measure a signal in radial motion of 0.6 km s$^{-1}$, of the order of 1 million halo stars would be required. This is feasible with the current generation of astronomical facilities, such as the 4m-class 4MOST facility operating over a period of 5 years. Future facilities with larger telescope apertures could carry this out even faster. \section{Data availability}\label{sec:data_avail} The code GALAXIA used for generating mock observational surveys is available at \url{http://galaxia.sourceforge.net/}. Links to the stellar halos simulated by BJ05 and BJ07 are also provided there. The galaxies simulated by the FIRE team are available at \url{https://fire.northwestern.edu/ananke/}. The code ENLINK used for clustering will be available at \url{https://github.com/sanjibs/enlink}. \section*{Acknowledgements} SS is funded by a Senior Fellowship (University of Sydney), an ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO-3D) Research Fellowship and JBH's Laureate Fellowship from the Australian Research Council (ARC).
JBH is supported by an ARC Australian Laureate Fellowship (FL140100278) and ASTRO-3D. \bibliographystyle{aasjournal}